Applications/Star-ccm+

Application Details

  • Description: Produced by Siemens (originally developed by CD-adapco, the Computational Dynamics-Analysis & Design Application Company Ltd), star-ccm+ is used for computer-aided engineering, in particular computational fluid dynamics (CFD).
  • Versions: 15.02.009 (and 15.02.009.R8) and 18.04.009.R8
  • Module names: starccm+/15.02.009, starccm+/15.02.009.R8 and starccm+/18.04.009.R8
  • License: University of Hull Engineering department, restricted by POD license.
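
A quick way to check which star-ccm+ modules are installed and to load one of them (standard environment-modules commands; the version shown is one of those listed above):

[username@login01 ~]$ module avail starccm+
[username@login01 ~]$ module load starccm+/18.04.009.R8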

Important information

  • Important: where possible, use versions 18.04.009 and 15.02.009; version 14 will be retired soon.
  • SIM files should be processed with the same version of star-ccm+ that produced them; the program is strongly version-sensitive.
  • Multi-node processing needs $HOME/.ssh/config to contain the line StrictHostKeyChecking no, as sketched below.
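
A minimal sketch of setting this up, run once from a login node (the Host * pattern is an assumption; narrow it to the compute nodes if preferred):

mkdir -p $HOME/.ssh && chmod 700 $HOME/.ssh
cat >> $HOME/.ssh/config <<'EOF'
# assumption: apply the setting to all hosts; restrict the pattern if required
Host *
    StrictHostKeyChecking no
EOF
chmod 600 $HOME/.ssh/config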

Usage Examples

Batch Submission

Up to Version 15


#!/bin/bash

###################################################################################################
# queue system requests

#SBATCH -J STAR_15
#SBATCH --nodes=5
#SBATCH --ntasks-per-node=28
#SBATCH -D /home/<username>/TESTCASE/
#SBATCH -o DB.%N.%j.%a.out
#SBATCH -e DB.%N.%j.%a.err
#SBATCH -p compute
#SBATCH --exclusive
#SBATCH --mail-user=<your email address here>

echo $SLURM_NTASKS
echo $SLURM_JOB_NODELIST
echo $SLURM_HOSTS

###################################################################################################

module purge
module load starccm+/15.02.009.R8
module load openmpi/4.0.5/gcc-7.3.0

export I_MPI_DEBUG=5
export I_MPI_FABRICS=shm:tmi
export I_MPI_FALLBACK=no

###################################################################################################
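# build a machinefile listing each allocated node and the number of tasks assigned to it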

srun hostname -s | sort | uniq -c | awk '{ print $2":"$1 }' > hosts

starccm+ -podkey <license_key> -licpath <license_path> -mpi platform -power -np $SLURM_NTASKS -machinefile hosts Fluid_Film_On_An_Incline_we_005_SF05_TESTING.sim -batch mesh,run

  • Insert your license key at <license_key>.
  • Insert the license path (1999@flex.cd-adapco.com) at <license_path>.
  • Replace <username> with your login ID.
  • Multi-node processing needs $HOME/.ssh/config to contain the line StrictHostKeyChecking no (see Important information above).
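
For reference, the srun pipeline above writes one node:task-count pair per line, so with 5 nodes at 28 tasks per node the hosts file would look something like the following (node names are illustrative):

c001:28
c002:28
c003:28
c004:28
c005:28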

Version 18 and above

From version 18, star-ccm+ has changed how it uses the network fabrics. Previously a 'native' MPI from the cluster could be supplied; the network fabrics stack is now bundled within the program's own directory structure.
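
In practical terms the difference shows up in the MPI settings used by the two job scripts on this page; a brief side-by-side summary of the relevant lines:

# up to version 15: cluster OpenMPI module loaded, platform MPI selected (-mpi platform)
export I_MPI_FABRICS=shm:tmi
export I_MPI_FALLBACK=no

# version 18 and above: bundled Intel MPI selected (-mpi intel), OFI fabrics
export I_MPI_OFI_PROVIDER=tcp
export I_MPI_FABRICS=shm:ofi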

###################################################################################################
# queue system requests

#SBATCH -J DLB-test  # jobname displayed by squeue
#SBATCH -N 3                    # number of nodes
#SBATCH --ntasks-per-node 26    # range: 1..28
#SBATCH -D /home/yourID         # <----put your working directory here
#SBATCH -o DB.%N.%j.%a.out  # output directory
#SBATCH -e DB.%N.%j.%a.err  # output directory for error messages
#SBATCH --time 1-12:00:00         # [DAY-HH:MM:SS] time budget
#SBATCH -p compute   # queue in which this is going

###################################################################################################

# custom
LOGFILE="$HOME/testlog$SLURM_JOBID.log"
PERSONAL_PODKEY="gCXN4vJxgF1Jw"   # <--------- not valid, provide your own here!
MACRO=""

###################################################################################################
# debugging output; the job starts over TCP, but the ib0 fabric then takes over

module purge
module load starccm+/18.04.009.R8

export I_MPI_DEBUG=5
export I_MPI_OFI_PROVIDER=tcp
export I_MPI_FABRICS=shm:ofi

srun hostname -s | sort | uniq -c | awk '{ print $2":"$1 }' > hosts

starccm+ -mpi intel -podkey $PERSONAL_PODKEY -licpath 1999@flex.cd-adapco.com -power -np $SLURM_NTASKS -mpiflags "-bootstrap slurm" -fabric psm2 -machinefile hosts Xu2017_G3_q01_Un.sim -batch $MACRO -collab >> $LOGFILE

  • Multi-node processing needs $HOME/.ssh/config to contain the line StrictHostKeyChecking no.
  • Using 26 cores per node rather than the full 28 can give some performance improvement on the network fabric.


[username@login01 ~]$ sbatch starccm.job
Submitted batch job 289522
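
Once submitted, the job can be monitored with the standard SLURM tools, for example:

[username@login01 ~]$ squeue -u <username>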

Interactive

Although the recommended method is a batch session, it is also possible to run star-ccm+ interactively. While logged into a compute node (c143):

[username@login01]$ interactive
salloc: Granted job allocation 1114537
Job ID 614537 connecting to c143, please wait...
Last login: Wed Nov 19 09:40:23 2018 from 10.254.5.246

[username@c143 ~]$ module load starccm+/18.04.009.R8
[username@c143 ~]$ module load openmpi/4.0.5/gcc-7.3.0
[username@c143 ~]$ hostname -s | sort | uniq -c | awk '{ print $2":"$1 }' > hostfile
[username@c143 ~]$ starccm+ -fabricverbose -power -podkey <pod-licence code here> -np 12 -machinefile hostfile -licpath 1999[at]flex.cd-adapco.com testmesh.sim  -batch runsim.java 
  • Note: [at] = @

Star-ccm server

Star-ccm+ can be used as a server within an interactive session on Viper. This is particularly useful for viewing the graphics from large simulations.

  • Check that SSH keys have been generated correctly for this version of SSH.
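
A minimal sketch of this workflow, assuming the starccm+ -server option and the interactive allocation shown above (the sim file, core count and connection details are illustrative, not taken from this page):

[username@c143 ~]$ module load starccm+/18.04.009.R8
[username@c143 ~]$ starccm+ -server -power -podkey <pod-licence code here> -licpath 1999[at]flex.cd-adapco.com -np 12 testmesh.sim
# the server prints the host and port it listens on; connect from a local
# star-ccm+ client of the same version (File > Connect to Server) using those details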

Known Issues

  • After version 14, no MPI issues have been noted.
  • From version 18, the internally bundled Intel MPI is used.

Further information

  • https://www.plm.automation.siemens.com/global/en/support/