Applications/Ansys


Application Details

  • Versions: V17.0, V17.2
  • Module names: ansys/v170, ansys/v172
  • License:
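
To use one of these versions, load the corresponding module by name (for example V17.2, as used throughout this page; substitute ansys/v170 for V17.0):

[username@login01 ~]$ module add ansys/v172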


Job Submission Script

#!/bin/bash

#SBATCH -J ANSYS          # Job name
#SBATCH -N 1              # Number of nodes to use
#SBATCH -n 28             # Number of CPUs
#SBATCH -o %N.%j.%a.out   # Output file name
#SBATCH -e %N.%j.%a.err   # Error file name
#SBATCH -p compute        # Partition to run on
#SBATCH --exclusive       # Instructs SLURM not to run any other job on the node(s) selected

# Load Ansys Version 17.2
module add ansys/v172

# This is the run command: -b instructs Ansys to run in batch mode and -i specifies the input file
ansys172 -b -i /home/test/Input.lgw
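
The script can then be submitted from a login node with sbatch (the file name ansys.job below is only an example):

[username@login01 ~]$ sbatch ansys.job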

Ansys Fluent

Usage Examples

Interactive
[username@login01 ~]$ interactive
salloc: Granted job allocation 296769
Job ID 296769 connecting to c170, please wait...
Last login: Wed Jan 25 09:10:51 2017 from 10.254.5.246
[username@c170 ~]$ module add ansys/v172
[username@c170 ~]$ fluent

Journal File

A journal file needs to be created to control the Fluent job when it is run in batch mode.

rcd "hw_RANS"
/solve/dual-time-iterate 2000 10
wcd "hw_LESHot"

Line 1 - Read case and data from hw_RANS
Line 2 - Number of iterations to run
Line 3 - Write case and data to hw_LESHot

Job Submission Script

#!/bin/bash
#SBATCH -J ANSYS_FLUENT   # Job name
#SBATCH -N 1              # Number of nodes to use
#SBATCH -n 28             # Number of CPUs
#SBATCH -o %N.%j.%a.out   # Output file name
#SBATCH -e %N.%j.%a.err   # Error file name
#SBATCH -p compute        # Partition to run on
#SBATCH --exclusive       # Instructs SLURM not to run any other job on the node(s) selected
  
# Remove all currently running modules and load Intel MPI and Ansys V17.2
module purge
module load intel/mpi/64/5.1.3.181
module load ansys/v172
 
export FLUENT_GUI=off   # Turns the Fluent GUI off
export I_MPI_ROOT=/trinity/clustervision/CentOS/7/apps/intel/impi/5.1.3.181   # Tells Fluent where Intel MPI is located
export I_MPI_DEBUG=5   # Intel MPI level of error messages
export I_MPI_FABRICS=shm:tmi   # Sets the Omnipath interconnect message protocol
export I_MPI_FALLBACK=no   # No fallback to Ethernet
 
# Check the number of tasks and set the number of processes
if [ -z "$SLURM_NPROCS" ]; then
    # Derive the total from SLURM_TASKS_PER_NODE, e.g. "28(x2)" becomes 28 * 2
    N=$(( $(echo $SLURM_TASKS_PER_NODE | sed -r 's/([0-9]+)\(x([0-9]+)\)/\1 * \2/') ))
else
    N=$SLURM_NPROCS
fi

echo $SLURM_JOB_NODELIST   # Prints the allocated node list to the output file
# Prints the number of processes to the output file
echo $SLURM_NPROCS
echo -e "N: $N\n";
 
# run fluent in batch on the allocated node(s)
srun hostname -s > hostfile
# Set the Fluent architecture (lnamd64 = 64-bit Linux)
export FLUENT_ARCH=lnamd64
# Appends to the library path psm2 library file
export LD_LIBRARY_PATH=/usr/lib64/psm2-compat:$LD_LIBRARY_PATH
# This is the run command. Note -i specifies the name of the input journal file
fluent -ssh 3ddp -g -t$N -mpi=intel -pib.infinipath -cnf=hostfile -i my_fluent_file

Job Submission

[username@login01 ~]$ sbatch fluent.job
Submitted batch job 289535

Tuning Fluent tasks

Number of Nodes and Cores

The number of nodes and CPU cores that Fluent runs across can be increased, which may improve performance. To do this, alter the number of nodes (#SBATCH -N XX) in the submission script; to make use of the additional CPU cores allocated, also alter #SBATCH -n xx, where the number of CPUs should be the number of nodes * 28. An example is shown below.
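
For example, to run across two nodes the relevant directives would be (a sketch of these two lines only; 2 nodes * 28 cores = 56 CPUs):

#SBATCH -N 2    # Number of nodes to use
#SBATCH -n 56   # Number of CPUs (2 nodes * 28 cores per node)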

MPI and Interconnect

Please use Fluent with Intel MPI and the Omnipath interconnect for best performance.
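
The submission script above already does this; the lines that select Intel MPI and the Omnipath interconnect are:

module load intel/mpi/64/5.1.3.181
export I_MPI_FABRICS=shm:tmi   # Omnipath interconnect message protocol
fluent -ssh 3ddp -g -t$N -mpi=intel -pib.infinipath -cnf=hostfile -i my_fluent_file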

Further Information

Forum Support: Viper Ansys Forum
Ansys Website: Ansys