Applications/Star-ccm+
Application Details
- Description: Produced by Siemens (formerly CD-adapco, Computational Dynamics-Analysis & Design Application Company Ltd), Star-CCM+ is used for computer-aided engineering, in particular computational fluid dynamics (CFD).
- Versions: 15.02.009 (and 15.02.009.R8) and 18.04.009.R8
- Module names: starccm+/15.02.009, starccm+/15.02.009.R8 and starccm+/18.04.009.R8
- License: University of Hull Engineering department; restricted by a POD license.
Important information
- Important: where possible, use versions 18.04.009 or 15.02.009; version 14 will be retired soon.
- SIM files should be processed with the same version of Star-CCM+ that created them; the program is strongly sensitive to version mismatches.
- Multiple-node processing requires an entry in $HOME/.ssh/config setting StrictHostKeyChecking no (a sketch follows this list).
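A minimal sketch of that file, assuming you are happy to disable host-key checking for every host reached from within the cluster (narrow the Host pattern if not):

Host *
    StrictHostKeyChecking no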
Usage Examples
Batch Submission
Up to Version 15
#!/bin/bash

###################################################################################################
# queue system requests
#SBATCH -J STAR_15
#SBATCH --nodes=5
#SBATCH --ntasks-per-node=28
#SBATCH -D /home/<username>/TESTCASE/
#SBATCH -o DB.%N.%j.%a.out
#SBATCH -e DB.%N.%j.%a.err
#SBATCH -p compute
#SBATCH --exclusive
#SBATCH --mail-user=<your email address>

echo $SLURM_NTASKS
echo $SLURM_JOB_NODELIST
echo $SLURM_HOSTS

###################################################################################################
module purge
module load starccm+/15.02.009.R8
module load openmpi/4.0.5/gcc-7.3.0

export I_MPI_DEBUG=5
export I_MPI_FABRICS=shm:tmi
export I_MPI_FALLBACK=no

###################################################################################################
# build a <hostname>:<slots> machinefile from the Slurm allocation
srun hostname -s | sort | uniq -c | awk '{ print $2":"$1 }' > hosts

starccm+ -podkey <license_key> -licpath <license-address> -mpi platform -power -np $SLURM_NTASKS -machinefile hosts Fluid_Film_On_An_Incline_we_005_SF05_TESTING.sim -batch mesh,run
- Insert your license key at <license_key>.
- Insert the license path (1999@flex.cd-adapco.com) at <license-address>.
- Replace <username> with your login ID.
- Multiple-node processing requires an entry in $HOME/.ssh/config setting StrictHostKeyChecking no (see Important information above).
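The srun ... | awk pipeline in the script writes one <hostname>:<slots> line per allocated node. Purely as an illustration (the node names here are hypothetical), a five-node allocation at 28 tasks per node would produce a hosts file like:

c001:28
c002:28
c003:28
c004:28
c005:28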
Version 18 and above
From version 18, Star-CCM+ changed how it uses network fabrics: previously you could supply a 'native' (system) MPI stack, but the network fabrics stack is now bundled inside the program's own directory structure.
#!/bin/bash

###################################################################################################
# queue system requests
#SBATCH -J DLB-test                # job name displayed by squeue
#SBATCH -N 3                       # number of nodes
#SBATCH --ntasks-per-node 26       # range: 1..28
#SBATCH -D /home/yourID            # <---- put your working directory here
#SBATCH -o DB.%N.%j.%a.out         # output file
#SBATCH -e DB.%N.%j.%a.err         # output file for error messages
#SBATCH --time 1-12:00:00          # [DAY-HH:MM:SS] time budget
#SBATCH -p compute                 # queue in which this is going

###################################################################################################
# custom
LOGFILE="$HOME/testlog$SLURM_JOBID.log"
PERSONAL_PODKEY="gCXN4vJxgF1Jw"    # <--------- not valid, provide your own here!
MACRO=""

###################################################################################################
# standard output for debugging; although we start with TCP, the ib0 fabric takes over
module purge
module load starccm+/18.04.009.R8

export I_MPI_DEBUG=5
export I_MPI_OFI_PROVIDER=tcp
export I_MPI_FABRICS=shm:ofi

# build a <hostname>:<slots> machinefile from the Slurm allocation
srun hostname -s | sort | uniq -c | awk '{ print $2":"$1 }' > hosts

starccm+ -mpi intel -podkey $PERSONAL_PODKEY -licpath 1999@flex.cd-adapco.com -power -np $SLURM_NTASKS -mpiflags "-bootstrap slurm" -fabric psm2 -machinefile hosts Xu2017_G3_q01_Un.sim -batch $MACRO -collab >> $LOGFILE
- Multiple-node processing requires an entry in $HOME/.ssh/config setting StrictHostKeyChecking no (see Important information above).
- Using 26 cores per node rather than the full 28 can give some performance improvement on the network fabric.
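Because I_MPI_DEBUG=5 is set, Intel MPI prints its startup configuration into the job log ($LOGFILE above), so once the job is running you can check which fabric/provider was actually selected. A hedged example (the exact debug wording varies between Intel MPI versions, and <jobid> is a placeholder):

grep -iE 'provider|fabric' "$HOME/testlog<jobid>.log"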
[username@login01 ~]$ sbatch starccm.job
Submitted batch job 289522
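Job progress can then be monitored with squeue; the name set with #SBATCH -J appears in the NAME column (username is a placeholder for your login ID):

[username@login01 ~]$ squeue -u username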
Interactive
Although the recommended method is batch submission, it is also possible to run Star-CCM+ in an interactive session. Starting from a login node, the interactive command allocates a compute node (here c143):
[username@login01]$ interactive
salloc: Granted job allocation 1114537
Job ID 614537 connecting to c143, please wait...
Last login: Wed Nov 19 09:40:23 2018 from 10.254.5.246
[username@c143 ~]$ module load starccm+/18.04.009.R8
[username@c143 ~]$ module load openmpi/4.0.5/gcc-7.3.0
[username@c143 ~]$ hostname -s | sort | uniq -c | awk '{ print $2":"$1 }' > hostfile
[username@c143 ~]$ starccm+ -fabricverbose -power -podkey <pod-licence code here> -np 12 -machinefile hostfile -licpath 1999[at]flex.cd-adapco.com testmesh.sim -batch runsim.java
- Note: replace [at] with @ in the license path.
Star-ccm server
Star-CCM+ can also be run as a server within an interactive session on Viper, which is very useful for viewing graphics from large simulations.
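A minimal sketch of this mode, assuming the standard starccm+ -server option and its default server port 47827; the node name c143, the pod key placeholder and the file name are illustrative only:

# on the allocated compute node: start a server on the simulation file
[username@c143 ~]$ starccm+ -server -podkey <pod-licence code here> -licpath 1999[at]flex.cd-adapco.com testmesh.sim
# from a client session: connect to that server (port assumed)
starccm+ -host c143:47827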
- Check that SSH keys have been generated correctly for the version of SSH in use (a sketch follows).
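One common way to set this up, assuming home directories are shared across nodes (typical on HPC clusters) and the installed SSH accepts ed25519 keys (fall back to -t rsa otherwise):

[username@login01 ~]$ ssh-keygen -t ed25519
[username@login01 ~]$ cat ~/.ssh/id_ed25519.pub >> ~/.ssh/authorized_keys
[username@login01 ~]$ chmod 600 ~/.ssh/authorized_keys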
Known Issues
- After version 14, no MPI issues have been noted.
- From version 18, the internal (bundled) Intel MPI is used.

Further information

- https://www.plm.automation.siemens.com/global/en/support/