Applications/repast-hpc

Application Details

  • Description: Repast for HPC (Repast-HPC) is a next-generation agent-based modeling system intended for large-scale distributed computing platforms.
  • Version: 2.2.0 (compiled with gcc)
  • Modules: repast-hpc/gcc/2.2.0
  • Licence: open source (hosted on GitHub)

Usage

Repast HPC implements the core Repast Simphony concepts (e.g. contexts and projections), modifying them to work in a parallel distributed environment. It is written in cross-platform C++.
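
Models are compiled as ordinary MPI C++ programs against the libraries provided by the modules. The sketch below shows one possible build line; the source filename, include/library paths and the exact library names are assumptions, so check the real locations with module show repast-hpc/gcc/2.2.0 and adjust accordingly.

# Sketch only: the paths and library names here are placeholders, not the
# definitive build line for this install.
module add repast-hpc/gcc/2.2.0 boost/gcc/1.53.0 openmpi/gcc/1.10.5
mpicxx my_model.cpp -o my_model \
    -I/path/to/repast-hpc/include -L/path/to/repast-hpc/lib \
    -lrepast_hpc-2.2.0 -lboost_mpi -lboost_serialization -lboost_system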


Example

Interactive Session

Below is an example of using repast-hpc interactively on Viper:


[username@login01 ~]$ interactive
salloc: Granted job allocation 520259
Job ID 520259 connecting to c019, please wait...
[username@c019 ~]$ module add repast-hpc/gcc/2.2.0
[username@c019 ~]$ module add boost/gcc/1.53.0
[username@c019 ~]$ module add openmpi/gcc/1.10.5
[username@c019 ~]$ cd repast_hpc/bin
[username@c019 bin]$ ./rumor_model
usage: X -config string -properties string
  config string: string is the path to the repast exascale
        configuration properties file
  properties string: string is the path to the model properties file
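
Following the usage message above, the model is then started by passing a configuration properties file and a model properties file via the -config and -properties flags. The filenames below are placeholders for whichever .props files accompany your model; for a parallel run, launch the executable through mpirun as in the batch example below.

[username@c019 bin]$ ./rumor_model -config rumor_config.props -properties rumor_model.props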


Non-interactive job

Below is an example batch script for submitting a Repast HPC job through the SLURM scheduler:


#!/bin/bash
#SBATCH -J MPI-repast-hpc                        # job name
#SBATCH -N 4                                     # number of nodes
#SBATCH --ntasks-per-node 1                      # one MPI task per node
#SBATCH -D /home/user/CODE_SAMPLES/Repast-HPC    # working directory for the job
#SBATCH -o %N.%j.%a.out                          # standard output file
#SBATCH -e %N.%j.%a.err                          # standard error file
#SBATCH -p compute                               # partition to run on
#SBATCH --exclusive                              # do not share the allocated nodes

# Report which nodes were allocated
echo $SLURM_JOB_NODELIST

module purge
module add repast-hpc/gcc/2.2.0
module add boost/gcc/1.53.0
module add openmpi/gcc/1.10.5

# Intel MPI debug/fabric settings; these have no effect under Open MPI
# (the MPI module loaded above) and can safely be removed
export I_MPI_DEBUG=5
export I_MPI_FABRICS=shm:tmi
export I_MPI_FALLBACK=no

# Launch 4 MPI processes, one per node (matching -N 4 and --ntasks-per-node 1)
mpirun -n 4 ./zombie_model1 zombie1_config.props zombie1_model.props
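
Assuming the script above is saved as, for example, repast-hpc.job (the filename is arbitrary), it can be submitted and monitored with the standard SLURM commands:

[username@login01 ~]$ sbatch repast-hpc.job
[username@login01 ~]$ squeue -u $USER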


