Applications/Star-ccm+
Revision as of 10:01, 29 November 2017

Application Details

  • Description : Produced by CD-adapco (Computational Dynamics-Analysis & Design Application Company Ltd), STAR-CCM+ is used for computer-aided engineering, in particular computational fluid dynamics (CFD).
  • Versions : 11.02.010/12.04.011
  • Module names : starccm+/11.02.010 and starccm+/12.04.011
  • License: University of Hull Engineering department, restricted by POD license
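Before writing a job script you can check which STAR-CCM+ versions are installed and load one. A minimal sketch using the module names listed above (assumes the cluster's standard Environment Modules / Lmod `module` command):

```shell
# List the installed STAR-CCM+ modules (names from the list above)
module avail starccm+

# Load a specific version; "module load" and "module add" are equivalent
module load starccm+/12.04.011

# Confirm the starccm+ executable is now on the PATH
which starccm+
```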


Usage Examples

Batch Submission


#!/bin/bash

#SBATCH -J TM-40
#SBATCH -N 2
#SBATCH --ntasks-per-node=20
#SBATCH -n 40

#SBATCH -o test.out
#SBATCH -e test.err
#SBATCH -p compute
#SBATCH --exclusive
#SBATCH --time=2-00:00:00

echo $SLURM_NTASKS
echo $SLURM_JOB_NODELIST
echo $SLURM_HOSTS

module purge
module add starccm+/11.02.010
module add intel/compiler/64/2016
module add intel/mpi/64/5.1.3.181

export I_MPI_ROOT=/trinity/clustervision/CentOS/7/apps/intel/impi/5.1.3.181
export I_MPI_DEBUG=5
export I_MPI_FABRICS=shm:tmi
export I_MPI_FALLBACK=no
export LD_LIBRARY_PATH=/usr/lib64/psm2-compat:$LD_LIBRARY_PATH
export MPI_IB_PKEY=0x7fff

sim_file="testmesh.sim"
STARTMACRO="runsim.java"

srun hostname -s | sort | uniq -c | awk '{ print $2":"$1 }' > hostfile

starccm+ -fabricverbose -power -podkey <pod-licence code here> -np $SLURM_NTASKS -machinefile hostfile -licpath 1999[at]flex.cd-adapco.com $sim_file -batch $STARTMACRO -batch-report
  • Note : [at] = @ — replace [at] with @ in the license server address.
Warning: This example currently runs across Ethernet only; an upgrade is pending that will allow it to run over the full Omni-Path fabric in the near future.
[username@login01 ~]$ sbatch starccm.job
Submitted batch job 289522
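Once the job is submitted you can track its progress with the standard Slurm commands. A sketch, assuming the job ID 289522 returned above and the output file named by `#SBATCH -o`:

```shell
# Check the state of this specific job in the queue
squeue -j 289522

# Or list all of your own pending and running jobs
squeue -u $USER

# Follow the job's standard output (the file given to #SBATCH -o) as it runs
tail -f test.out
```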

Interactive

Although the recommended method is a batch submission, it is also possible to run STAR-CCM+ as an interactive session. While logged into a compute node (c143):

[username@login01]$ interactive
[username@c143 ~]$ module load starccm+/11.02.010
[username@c143 ~]$ srun hostname -s | sort | uniq -c | awk '{ print $2":"$1 }' > hostfile
[username@c143 ~]$ starccm+ -fabricverbose -power -podkey <pod-licence code here> -np 12 -machinefile hostfile -licpath 1999[at]flex.cd-adapco.com testmesh.sim  -batch runsim.java -batch-report
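The `srun hostname` pipeline above builds the machinefile that STAR-CCM+ expects: one `hostname:ntasks` entry per node. Its effect can be sketched with fixed input standing in for `srun` (the node names and task counts below are made up for illustration):

```shell
# Stand-in for `srun hostname -s`: srun prints one hostname per task
printf 'c143\nc143\nc143\nc144\nc144\n' |
    sort |                      # group identical hostnames together
    uniq -c |                   # count tasks per node, e.g. "3 c143"
    awk '{ print $2":"$1 }'     # reformat as "c143:3" for the machinefile
# Produces:
#   c143:3
#   c144:2
```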


Further information