Difference between revisions of "Applications/Star-ccm+"


Revision as of 14:36, 15 November 2018

Application Details

  • Description : Produced by CD-adapco (Computational Dynamics-Analysis & Design Application Company Ltd), star-ccm+ is used for computer-aided engineering, in particular computational fluid dynamics (CFD).
  • Versions : 11.02.010, 12.04.011, 12.06.011, 13.04.011 and 13.06.011
  • Module names : starccm+/11.02.010, starccm+/12.04.011, starccm+/12.06.011, starccm+/13.04.011 and starccm+/13.06.011
  • License : University of Hull Engineering department; restricted by a POD license


Important information

  • Important: where possible, use version 13.06.011; version 11.02.010 will be retired.
  • A .sim file should be processed with the same version of star-ccm+ that created it; the application is strongly version-sensitive.
  • Multi-node runs require an entry in $HOME/.ssh/config setting StrictHostKeyChecking no.
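A minimal sketch of the $HOME/.ssh/config entry described above; the Host pattern is an assumption, adjust it to match your cluster's compute-node names:

```
# $HOME/.ssh/config -- suppress host-key prompts between compute nodes,
# which would otherwise block multi-node star-ccm+ startup.
# "c*" is an example pattern; match it to your site's node naming.
Host c*
    StrictHostKeyChecking no
```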

Usage Examples

Batch Submission


#!/bin/bash
#SBATCH -J STAR_TEST
#SBATCH --nodes=5
#SBATCH --ntasks-per-node=28

#SBATCH -D /home/<username>/TESTCASE/
#SBATCH -o DB.%N.%j.%a.out
#SBATCH -e DB.%N.%j.%a.err
#SBATCH -p compute
#SBATCH --exclusive

echo $SLURM_NTASKS
echo $SLURM_JOB_NODELIST
echo $SLURM_NNODES

module purge
module load starccm+/13.06.011
module load openmpi/3.0.0/gcc-7.3.0

export I_MPI_DEBUG=5
export I_MPI_FABRICS=shm:tmi
export I_MPI_FALLBACK=no

srun hostname -s | sort | uniq -c | awk '{ print $2":"$1 }' > hosts

starccm+ -podkey <license_key> -licpath <license_path> -mpi platform -power -np $SLURM_NTASKS -machinefile hosts Fluid_Film_On_An_Incline_we_005_SF05_TESTING.sim -batch mesh,run
  • Replace <license_key> with your POD license key.
  • Replace <license_path> with the license server address (1999@flex.cd-adapco.com).
  • Replace <username> with your login ID.
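As a sanity check, the -np $SLURM_NTASKS passed to starccm+ should equal the product of the two resource requests in the script header; a quick sketch with the values from the #SBATCH lines above:

```shell
# 5 nodes x 28 tasks per node, as requested via --nodes and --ntasks-per-node
nodes=5
ntasks_per_node=28
echo $((nodes * ntasks_per_node))   # prints 140, the value Slurm places in SLURM_NTASKS
```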


[username@login01 ~]$ sbatch starccm.job
Submitted batch job 289522

Interactive

Although batch submission is the recommended method, it is also possible to run star-ccm+ in an interactive session. The interactive command allocates a compute node (c143 in this example) and connects to it:

[username@login01]$ interactive
salloc: Granted job allocation 1114537
Job ID 614537 connecting to c143, please wait...
Last login: Wed Nov 19 09:40:23 2018 from 10.254.5.246

[username@c143 ~]$ module add starccm+/13.04.011
[username@c143 ~]$ module add openmpi/3.0.0/gcc-7.3.0
[username@c143 ~]$ srun hostname -s | sort | uniq -c | awk '{ print $2":"$1 }' > hostfile
[username@c143 ~]$ starccm+ -fabricverbose -power -podkey <pod-licence code here> -np 12 -machinefile hostfile -licpath 1999[at]flex.cd-adapco.com testmesh.sim  -batch runsim.java -batch-report
  • Note : replace [at] with @ in the license path.
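The srun pipeline used above converts one hostname per task into the host:count machinefile format that star-ccm+ expects; an offline sketch of the same pipeline, using made-up node names in place of the srun output:

```shell
# simulate `srun hostname -s` output for 2 nodes with 3 tasks each
# (c001/c002 are invented names for illustration)
printf 'c001\nc002\nc001\nc002\nc001\nc002\n' \
  | sort | uniq -c | awk '{ print $2":"$1 }'
# prints:
# c001:3
# c002:3
```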

Known Issues

  • On versions 11 and 12, this application does not appear to scale across multiple nodes; version 13 does, provided the openMPI module is also loaded.

Further information