Applications/repast-hpc


Application Details

  • Description: Repast for HPC (Repast-HPC) is a next-generation agent-based modeling system intended for large-scale distributed computing platforms.
  • Version: 2.2.0 (compiled with gcc)
  • Modules: repast-hpc/gcc/2.2.0
  • Licence: Open source (source code available on GitHub)

Usage

Repast-HPC implements the core Repast Simphony concepts (e.g. contexts and projections), modifying them to work in a parallel distributed environment.

Repast HPC is written in cross-platform C++.


Example

Interactive Session

Below is an example of using Repast-HPC interactively on Viper:


[username@login01 ~]$ interactive
salloc: Granted job allocation 520259
Job ID 520259 connecting to c019, please wait...
[username@c019 ~]$ module add repast-hpc/gcc/2.2.0
[username@c019 ~]$ module add boost/gcc/1.53.0
[username@c019 ~]$ module add openmpi/gcc/1.10.5
[username@c019 ~]$ cd repast_hpc/bin
[username@c019 bin]$ ./rumor_model
usage: X -config string -properties string
  config string: string is the path to the repast exascale
        configuration properties file
  properties string: string is the path to the model properties file
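The usage message above asks for two files: a Repast configuration file and a model properties file. A typical MPI launch therefore passes both, in the same positional style as the batch example further down. The file names below are illustrative; the actual names depend on where the example models and their properties files are installed.

```shell
# Run the rumor model on 4 MPI processes (file names illustrative):
#   config.props - Repast runtime/logging configuration
#   model.props  - model parameters
mpirun -n 4 ./rumor_model config.props model.props
```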


Non-interactive job

The batch script below runs the job through the SLURM scheduler (submitted with sbatch):


#!/bin/bash
#SBATCH -J MPI-repast-hpc
#SBATCH -N 4
#SBATCH --ntasks-per-node 1
#SBATCH -D /home/user/CODE_SAMPLES/Repast-HPC
#SBATCH -o %N.%j.%a.out
#SBATCH -e %N.%j.%a.err
#SBATCH -p compute
#SBATCH --exclusive

echo $SLURM_JOB_NODELIST

module purge
module add repast-hpc/gcc/2.2.0
module add boost/gcc/1.53.0
module add openmpi/gcc/1.10.5

# The I_MPI_* settings below apply to Intel MPI only; they have no
# effect here because the job runs under OpenMPI, loaded above.
export I_MPI_DEBUG=5
export I_MPI_FABRICS=shm:tmi
export I_MPI_FALLBACK=no

mpirun -n 4 ./zombie_model1 zombie1_config.props zombie1_model.props


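The two .props arguments on the mpirun line are plain-text key=value files. As an illustrative sketch only (the actual keys are defined by each model's source code, and the values here are invented), a model properties file might look like:

```
stop.at = 100
count.of.agents = 500
random.seed = 1
```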
