Applications/Star-ccm+
Application Details
- Description: Produced by Siemens CD-adapco (Computational Dynamics-Analysis & Design Application Company Ltd), Star-CCM+ is a computer-aided engineering package used in particular for computational fluid dynamics (CFD).
- Versions: 15.02.009 (and 15.02.009.R8) and 18.04.009.R8
- Module names: starccm+/15.02.009 (and starccm+/15.02.009.R8) and starccm+/18.04.009.R8
- License: University of Hull Engineering department; access is restricted by a POD license.
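To check which versions are currently installed, the modules can be listed from a login node (a usage sketch; the output shown is illustrative):

[username@login01 ~]$ module avail starccm+
starccm+/15.02.009    starccm+/15.02.009.R8    starccm+/18.04.009.R8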
Important information
- Important: where possible, use versions 18.04.009 or 15.02.009; versions 13 and 14 will be retired soon.
- A .sim file should be processed with the same version of Star-CCM+ that produced it; the program is strongly sensitive to version mismatches.
- Multi-node processing requires an entry in $HOME/.ssh/config setting StrictHostKeyChecking no, as sketched below.
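A minimal sketch of adding this from a login node (the broad "Host *" pattern is an assumption; narrow it to your compute-node naming scheme if your site prefers):

# append a host-key exception to the SSH client config and
# make sure the file has permissions ssh will accept
cat >> $HOME/.ssh/config <<'EOF'
Host *
    StrictHostKeyChecking no
EOF
chmod 600 $HOME/.ssh/config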
Usage Examples
Batch Submission
Up to Version 15
#!/bin/bash
#SBATCH -J STAR_15
#SBATCH --nodes=5
#SBATCH --ntasks-per-node=28
#SBATCH -D /home/<username>/TESTCASE/
#SBATCH -o DB.%N.%j.%a.out
#SBATCH -e DB.%N.%j.%a.err
#SBATCH -p compute
#SBATCH --exclusive
#SBATCH --mail-user=<your email address>

echo $SLURM_NTASKS
echo $SLURM_JOB_NODELIST
echo $SLURM_HOSTS

module purge
module load starccm+/15.02.009.R8
module load openmpi/4.0.5/gcc-7.3.0

export I_MPI_DEBUG=5
export I_MPI_FABRICS=shm:tmi
export I_MPI_FALLBACK=no

srun hostname -s | sort | uniq -c | awk '{ print $2":"$1 }' > hosts

starccm+ -podkey <license_key> -licpath <license_path> -mpi platform -power -np $SLURM_NTASKS -machinefile hosts Fluid_Film_On_An_Incline_we_005_SF05_TESTING.sim -batch mesh,run
- Replace <license_key> with your POD license key.
- Replace <license_path> with the license server address (1999@flex.cd-adapco.com).
- Replace <username> with your login ID.
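The srun ... awk pipeline in the script builds the machinefile that Star-CCM+ uses to place its processes: one hostname:ntasks entry per node. For illustration, on a two-node allocation with 28 tasks per node the generated hosts file would look like this (node names are hypothetical):

c001:28
c002:28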
Version 18 and above
#!/bin/bash
###################################################################################################
# queue system requests
#SBATCH -J Star18                # jobname displayed by squeue
#SBATCH -N 2                     # number of nodes
#SBATCH --ntasks-per-node 28     # range: 1..28
#SBATCH -D /home/<username>      # your working directory
# #SBATCH -o DB.%N.%j.%a.out     # output directory
# #SBATCH -e DB.%N.%j.%a.err     # output directory for error messages
#SBATCH --time 24:00:00          # [HH:MM:SS] time budget
#SBATCH -p compute               # queue in which this is going
###################################################################################################
# custom
LOGFILE="$HOME/simul$SLURM_JOBID.log"
PERSONAL_PODKEY="PQ8FVQ6wcg"     # enter your own; this one is not valid
MACRO=""
# MACRO="------------------"
###################################################################################################
# standard output for debugging
module purge
module load starccm+/18.04.009.R8

export I_MPI_DEBUG=5
export I_MPI_FABRICS=shm:ofi

srun hostname -s | sort | uniq -c | awk '{ print $2":"$1 }' > hosts

starccm+ -mpi intel -podkey $PERSONAL_PODKEY -licpath 1999@flex.cd-adapco.com -power -np $SLURM_NTASKS -machinefile hosts Fluid_Film_On_An_Incline_we_005_SF05_TESTING.sim -batch $MACRO -collab >> $LOGFILE
[username@login01 ~]$ sbatch starccm.job
Submitted batch job 289522
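Progress is appended to the log file named by LOGFILE in the script, so for the submission above it can be followed live with (the job ID comes from the sbatch message):

[username@login01 ~]$ tail -f $HOME/simul289522.log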
Interactive
Although batch submission is the recommended method, it is also possible to run Star-CCM+ in an interactive session. Once logged into a compute node (here c143):
[username@login01]$ interactive
salloc: Granted job allocation 1114537
Job ID 614537 connecting to c143, please wait...
Last login: Wed Nov 19 09:40:23 2018 from 10.254.5.246
[username@c143 ~]$ module load starccm+/18.04.009.R8
[username@c143 ~]$ module load openmpi/4.0.5/gcc-7.3.0
[username@c143 ~]$ hostname -s | sort | uniq -c | awk '{ print $2":"$1 }' > hostfile
[username@c143 ~]$ starccm+ -fabricverbose -power -podkey <pod-licence code here> -np 12 -machinefile hostfile -licpath 1999[at]flex.cd-adapco.com testmesh.sim -batch runsim.java
- Note: replace [at] with @ in the -licpath address.
Star-CCM+ server
Star-CCM+ can be used as a server within an interactive session on Viper. This is very useful for viewing graphics from large simulations. A sketch of the workflow is given below.
- Check that SSH keys have been generated correctly for the version of SSH in use.
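A hypothetical sketch of the server workflow, assuming the standard -server and -host options (check starccm+ -help for your version; the host name and port below are illustrative, and the server prints the actual host:port when it starts):

# inside an interactive session on a compute node, start the server:
[username@c143 ~]$ module load starccm+/18.04.009.R8
[username@c143 ~]$ starccm+ -server -podkey <pod-licence code here> -licpath 1999@flex.cd-adapco.com testmesh.sim

# then, from a Star-CCM+ client with graphics, attach to the running server:
starccm+ -host c143:47827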
Known Issues
- No MPI issues have been noted in versions after 13.