Applications/Ansys
Application Details
- Versions: V17.0, V17.2
- Module names: ansys/v170, ansys/v172
- License:
- Forum Support: Viper Ansys Forum
- Further information: Ansys
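To check which Ansys modules are available and to load one on a login node, something along these lines should work (the module names are those listed above; the exact output of module avail will vary):

[username@login01 ~]$ module avail ansys
[username@login01 ~]$ module add ansys/v172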
Job Submission Script
#!/bin/bash
#SBATCH -J ANSYS
#SBATCH -N 1                          # number of nodes to use
#SBATCH --ntasks-per-node 28
#SBATCH -o %N.%j.%a.out
#SBATCH -e %N.%j.%a.err
#SBATCH -p compute                    # use "highmem" or "compute" node
#SBATCH --exclusive                   # instructs SLURM not to run any other job on the selected node(s)
module load ansys/v172
ansys172 -b -i /home/test/Input.lgw
Details:
Line 1 – The standard shebang line that must be at the top of the file.
Line 2 – The -J flag sets the name of the job, in this case to ANSYS. The name does not affect how the job runs and does not have to be unique, but it helps to distinguish jobs when looking at squeue (LINK).
Line 3 – Requests an allocation of 1 compute node.
Line 4 – Requests 28 tasks on the node (the full compute node).
Lines 5 and 6 – Set the output and error files. The output file will contain the console output from the run; the error file will contain information that may be useful if things do not work as expected (at a cluster level).
Line 7 – Requests that the job runs on one of the compute nodes in the compute queue.
Line 8 – Requests that the job runs exclusively on the node, i.e. no other jobs will share it.
Line 9 – Loads the Ansys 17.2 module.
Line 10 – The run command: -b instructs Ansys to run in batch mode and -i specifies the input file.
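Assuming the script above has been saved as ansys.job (an illustrative filename), it can be submitted from a login node with sbatch, in the same way as the Fluent example further down this page:

[username@login01 ~]$ sbatch ansys.job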
Ansys Fluent
Usage Examples
Interactive
[username@login01 ~]$ interactive
salloc: Granted job allocation 296769
Job ID 296769 connecting to c170, please wait...
Last login: Wed Jan 25 09:10:51 2017 from 10.254.5.246
[username@c170 ~]$ module add ansys/v172
[username@c170 ~]$ fluent
Journal File
A journal file needs to be created to control the Fluent job while it is running, for example:
rcd "hw_RANS" /solve/dual-time-iterate 2000 10 wcd "hw_LESHot"
Line 1 – Read case and data from hw_RANS
Line 2 – Number of iterations to run
Line 3 – Write case and data to hw_LESHot
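A slightly fuller journal along the same lines might also exit Fluent once the run has finished; the case and data names here are illustrative and the exact exit confirmation can vary between Fluent versions:

rcd "my_case"
/solve/dual-time-iterate 2000 10
wcd "my_case_out"
/exit
yes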
Job Submission Script
#!/bin/bash
#SBATCH -J ANSYS_FLUENT       # sensible name for the job
#SBATCH -N 1
#SBATCH -n 28
#SBATCH -o %N.%j.%a.out
#SBATCH -e %N.%j.%a.err
#SBATCH -p compute
#SBATCH --exclusive
# load the relevant module files
module purge
module load intel/mpi/64/5.1.3.181
module load ansys/v172
export FLUENT_GUI=off
export I_MPI_ROOT=/trinity/clustervision/CentOS/7/apps/intel/impi/5.1.3.181
export I_MPI_DEBUG=5
export I_MPI_FABRICS=shm:tmi
export I_MPI_FALLBACK=no
if [ -z "$SLURM_NPROCS" ]; then
    N=$(( $(echo $SLURM_TASKS_PER_NODE | sed -r 's/([0-9]+)\(x([0-9]+)\)/\1 * \2/') ))
else N=$SLURM_NPROCS; fi
echo $SLURM_JOB_NODELIST
echo $SLURM_NPROCS
echo -e "N: $N\n";
# run fluent in batch on the allocated node(s)
srun hostname -s > hostfile
FLUENT_ARCH=lnamd64; export FLUENT_ARCH
export LD_LIBRARY_PATH=/usr/lib64/psm2-compat:$LD_LIBRARY_PATH
fluent -ssh 3ddp -g -t$N -mpi=intel -pib.infinipath -cnf=hostfile -i my_fluent_file
Details:
Line 1 – The standard shebang line that must be at the top of the file.
Line 2 – The -J flag sets the name of the job, in this case to ANSYS_FLUENT. The name does not affect how the job runs and does not have to be unique, but it helps to distinguish jobs when looking at squeue (LINK).
Line 3 – Requests an allocation of 1 compute node.
Line 4 – Requests 28 tasks (the full compute node).
Lines 5 and 6 – Set the output and error files. The output file will contain the Fluent console output; the error file will contain information that may be useful if things do not work as expected (at a cluster level).
Line 7 – Requests that the job runs on one of the compute nodes in the compute queue.
Line 8 – Requests that the job runs exclusively on the node, i.e. no other jobs will share it.
Line 9 – Comment.
Line 10 – Removes all currently loaded modules.
Line 11 – Loads the Intel MPI module.
Line 12 – Loads the Ansys 17.2 module.
Line 13 – Turns the Fluent GUI off.
Line 14 – Tells Fluent where Intel MPI is located.
Line 15 – Sets the Intel MPI debug message level.
Line 16 – Sets the message protocol for the Omni-Path interconnect.
Line 17 – Tells Intel MPI not to fall back to Ethernet.
Lines 18–20 – Check the number of tasks and set the number of processes, N (see the worked example after this list).
Line 21 – Prints the node list to the output file.
Lines 22–23 – Print the number of processes to the output file.
Line 24 – Comment.
Line 25 – Produces a hostfile listing the nodes to run on.
Line 26 – Tells Fluent the architecture of the machine (in this case lnamd64, 64-bit Linux).
Line 27 – Adds the psm2 compatibility library directory to the library path.
Line 28 – The run command. Note that -i specifies the name of the input journal file.
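As an illustration of how lines 18–20 behave, suppose a job has been allocated two full nodes, so SLURM sets SLURM_TASKS_PER_NODE to 28(x2). The sed expression turns this into an arithmetic expression, which bash then evaluates (the values here are only illustrative):

[username@login01 ~]$ echo "28(x2)" | sed -r 's/([0-9]+)\(x([0-9]+)\)/\1 * \2/'
28 * 2
[username@login01 ~]$ echo $(( 28 * 2 ))
56

On a single node SLURM_TASKS_PER_NODE is simply 28, the pattern does not match, and N is set to 28.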
Job Submission
[username@login01 ~]$ sbatch fluent.job
Submitted batch job 289535
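The state of the job can then be checked with squeue, e.g.:

[username@login01 ~]$ squeue -u username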
Tuning Fluent tasks
Number of Nodes and Cores
The number of nodes and CPU cores Fluent runs across can be increased, which may improve performance. This is done by altering the number of nodes (#SBATCH -N XX) in the submission script. To make use of the additional CPU cores allocated, also alter #SBATCH -n xx; the number of tasks (-n) should be the number of nodes multiplied by 28.
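For example, to run across two full compute nodes (2 × 28 = 56 tasks), the resource request lines in the submission script above would become:

#SBATCH -N 2
#SBATCH -n 56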
MPI and Interconnect
Please use Fluent with Intel MPI and the Omni-Path interconnect for best performance.
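The submission script above already selects this combination; the relevant pieces are the Intel MPI module, the Omni-Path fabric setting and the MPI/interconnect options passed to Fluent:

module load intel/mpi/64/5.1.3.181
export I_MPI_FABRICS=shm:tmi
fluent -ssh 3ddp -g -t$N -mpi=intel -pib.infinipath -cnf=hostfile -i my_fluent_file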