Applications/Ansys
Latest revision as of 14:17, 9 August 2024
Application Details
- Versions: V18.2, V19.3, V19.4 (2019 R2), V19.5 (2019 R3), 2020 R2, 2021 R2 and 2023 R1
- Module names: ansys/v182, ansys/v193, ansys/v194 (ANSYS2019R2), ansys/v195 (ANSYS2019R3), ansys/v202 (ANSYS2020R2), ansys/v212 (ANSYS2021R2) and ansys/v231 (ANSYS2023R1)
- License: Restricted
- Note: Ansys version 17 will be retired and will no longer be available
Ansys
Ansys creates computer models of structures, electronics, or machine components to simulate strength, toughness, elasticity, temperature distribution, electromagnetism, fluid flow, and other attributes. Ansys is used to determine how a product will function with different specifications, without building test products or conducting crash tests. For example, Ansys software may simulate how a bridge will hold up after years of traffic, how to best process salmon in a cannery to reduce waste, or how to design a slide that uses less material without sacrificing safety.
Most Ansys simulations are performed using the Ansys Workbench software, which is one of the company's main products. Typically Ansys users break down larger structures into small components that are each modelled and tested individually. A user may start by defining the dimensions of an object, and then adding weight, pressure, temperature and other physical properties. Finally, the Ansys software simulates and analyzes movement, fatigue, fractures, fluid flow, temperature distribution, electromagnetic efficiency and other effects over time.
Use on HPC
Job Submission Script
```shell
#!/bin/bash
#SBATCH -J TestProg           # Job name
#SBATCH -N 1                  # Number of nodes to use
#SBATCH -n 28                 # Number of CPUs
#SBATCH -o %N.%j.%a.out       # Output file name
#SBATCH -e %N.%j.%a.err       # Error file name
#SBATCH -p compute            # Partition to run on
#SBATCH --exclusive           # Instructs SLURM not to run any other job on the selected node(s)
#SBATCH --mail-user=<your email address>

# Remove all currently loaded modules and load Intel MPI and Ansys
module purge
module add intel/mpi/64/5.1.3.181
module add ansys/v195

export FLUENT_GUI=off         # Turns the Fluent GUI off
export I_MPI_ROOT=/trinity/clustervision/CentOS/7/apps/intel/impi/5.1.3.181  # Tells Fluent where Intel MPI is located
export I_MPI_DEBUG=5          # Intel MPI level of error messages
export I_MPI_FABRICS=shm:tmi  # Selects the Omni-Path interconnect message protocol
export I_MPI_FALLBACK=no      # No fallback to Ethernet

# Check the number of tasks and set the number of processes
if [ -z "$SLURM_NPROCS" ]; then
  N=$(( $(echo $SLURM_TASKS_PER_NODE | sed -r 's/([0-9]+)\(x([0-9]+)\)/\1 * \2/') ))
else
  N=$SLURM_NPROCS
fi

echo $SLURM_JOB_NODELIST      # Print the node range to the output file
echo $SLURM_NPROCS            # Print the number of processes to the output file
echo -e "N: $N\n";

# Generate the hostfile listing the allocated node(s)
srun hostname -s > hostfile

# Set the CPU architecture (in this case amd64)
export FLUENT_ARCH=lnamd64

# Append the psm2 compatibility library to the library path
export LD_LIBRARY_PATH=/usr/lib64/psm2-compat:$LD_LIBRARY_PATH

# Run Fluent in batch on the allocated node(s).
# Note that -i specifies the name of the input journal file
fluent -ssh 3ddp -g -t$N -mpi=intel -pib.infinipath -cnf=hostfile -i test.in
```
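The `$SLURM_TASKS_PER_NODE` arithmetic in the script can be checked in isolation. SLURM reports a layout such as `28(x2)` for two nodes with 28 tasks each, and the `sed` expression rewrites it into a multiplication the shell can evaluate (the values here are illustrative):

```shell
#!/bin/bash
# Simulate SLURM's task layout string for two nodes with 28 tasks each.
SLURM_TASKS_PER_NODE="28(x2)"

# Rewrite "28(x2)" as "28 * 2" and let the shell evaluate it.
N=$(( $(echo $SLURM_TASKS_PER_NODE | sed -r 's/([0-9]+)\(x([0-9]+)\)/\1 * \2/') ))
echo $N   # 56

# A plain value without the (xN) suffix passes through unchanged.
SLURM_TASKS_PER_NODE="28"
N=$(( $(echo $SLURM_TASKS_PER_NODE | sed -r 's/([0-9]+)\(x([0-9]+)\)/\1 * \2/') ))
echo $N   # 28
```

Note this simple pattern assumes a homogeneous layout; a mixed string such as `28(x2),14` would need further handling.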
- Note: it would appear that for V231 and later you should unset I_MPI_ROOT so that Fluent uses its internal MPI libraries.
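For V231 and later, that adjustment would mean unsetting the variable after the module commands in the job script, along these lines (a sketch; the module name follows the cluster's convention above):

```shell
# Ansys 2023 R1 (V231) and later ship their own MPI libraries;
# clear any Intel MPI path inherited from a module or login shell.
module purge
module add ansys/v231
unset I_MPI_ROOT
```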
Ansys Fluent
Usage Examples
Interactive
```
[username@login01 ~]$ interactive
salloc: Granted job allocation 296769
Job ID 296769 connecting to c170, please wait...
Last login: Wed Jan 25 09:10:51 2017 from 10.254.5.246
[username@c170 ~]$ module add ansys/v195
[username@c170 ~]$ fluent
```
Journal File
A journal file needs to be created to control the job when it runs in batch mode:
```
rcd "hw_RANS"
/solve/dual-time-iterate 2000 10
wcd "hw_LESHot"
exit
```
Line 1 - Read the case and data from hw_RANS
Line 2 - Run the transient solver (2000 time steps with up to 10 iterations per time step)
Line 3 - Write the case and data to hw_LESHot
Line 4 - End the Ansys run and exit
Job Submission Script
```shell
#!/bin/bash
#SBATCH -J ANSYS_FLUENT       # Job name
#SBATCH -N 1                  # Number of nodes to use
#SBATCH -n 28                 # Number of CPUs
#SBATCH -o %N.%j.%a.out       # Output file name
#SBATCH -e %N.%j.%a.err       # Error file name
#SBATCH -p compute            # Partition to run on
#SBATCH --exclusive           # Instructs SLURM not to run any other job on the selected node(s)
#SBATCH --mail-user=<your email address>

# Remove all currently loaded modules and load Intel MPI and Ansys V19.5
module purge
module add intel/mpi/64/5.1.3.181
module add ansys/v195

export FLUENT_GUI=off         # Turns the Fluent GUI off
export I_MPI_ROOT=/trinity/clustervision/CentOS/7/apps/intel/impi/5.1.3.181  # Tells Fluent where Intel MPI is located
export I_MPI_DEBUG=5          # Intel MPI level of error messages
export I_MPI_FABRICS=shm:tmi  # Selects the Omni-Path interconnect message protocol
export I_MPI_FALLBACK=no      # No fallback to Ethernet

# Check the number of tasks and set the number of processes
if [ -z "$SLURM_NPROCS" ]; then
  N=$(( $(echo $SLURM_TASKS_PER_NODE | sed -r 's/([0-9]+)\(x([0-9]+)\)/\1 * \2/') ))
else
  N=$SLURM_NPROCS
fi

echo $SLURM_JOB_NODELIST      # Print the node range to the output file
echo $SLURM_NPROCS            # Print the number of processes to the output file
echo -e "N: $N\n";

# Generate the hostfile listing the allocated node(s)
srun hostname -s > hostfile

# Set the CPU architecture (in this case amd64)
export FLUENT_ARCH=lnamd64

# Append the psm2 compatibility library to the library path
export LD_LIBRARY_PATH=/usr/lib64/psm2-compat:$LD_LIBRARY_PATH

# Run Fluent in batch on the allocated node(s).
# Note that -i specifies the name of the input journal file
fluent -ssh 3ddp -g -t$N -mpi=intel -pib.infinipath -cnf=hostfile -i my_fluent_file
```
Job Submission
```
[username@login01 ~]$ sbatch fluent.job
Submitted batch job 289535
```
Tuning Fluent tasks
Number of Nodes and Cores
The number of nodes and CPU cores Fluent runs across can be increased, which may improve performance. To do this, alter the number of nodes (#SBATCH -N XX) in the submission script. To make use of the additional cores, also alter the CPU count (#SBATCH -n xx); the number of CPUs should be the number of nodes * 28.
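For example, a two-node run would use a header like the following (job name and partition as in the scripts above; the node count of 2 is illustrative):

```shell
#!/bin/bash
#SBATCH -J ANSYS_FLUENT
#SBATCH -N 2            # Two nodes...
#SBATCH -n 56           # ...so 2 * 28 = 56 CPUs
#SBATCH -p compute
#SBATCH --exclusive
```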
Please note: the license pool is shared by all users; using a large number of cores may stop other Ansys jobs from starting due to licensing restrictions.
MPI and Interconnect
Please use Fluent with Intel MPI and the Omnipath interconnect for the best performance.
Visualisation
Ansys Fluent can be run in a client/server model, configured through its internal settings.
License Settings
The license is presently set to the University's pooled Ansys license server.
If you wish to use a different license server (e.g. one with more licenses), insert the following lines into your SLURM script, or run them on the command line for an interactive session (the server names below are examples only):
```shell
module load ansys/v194

# Tell Fluent where the licence server is.
export ANSYSLI_SERVERS=2325@myflexlmserver.example.com
export ANSYSLMD_LICENSE_FILE=1055@myflexlmserver.example.com
```
Issues
One problem that can occur when Fluent is launched from an sbatch file is the following error:
```
ssh_askpass: exec(/usr/libexec/openssh/ssh-askpass): No such file or directory
Host key verification failed.
ssh_askpass: exec(/usr/libexec/openssh/ssh-askpass): No such file or directory
Host key verification failed.
```
- Ensure the file ~userid/.ssh/config contains:
```
StrictHostKeyChecking no
# X11Forwarding yes
UserKnownHostsFile=/dev/null
```
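One way to put those settings in place is sketched below. Note that disabling host-key checking relaxes SSH security and is usually only acceptable inside a trusted cluster network:

```shell
#!/bin/bash
# Create ~/.ssh if it does not exist yet; SSH requires it to be private.
mkdir -p ~/.ssh
chmod 700 ~/.ssh

# Append the settings rather than overwriting any existing config.
cat >> ~/.ssh/config <<'EOF'
StrictHostKeyChecking no
UserKnownHostsFile=/dev/null
EOF

# OpenSSH refuses config files that are writable by other users.
chmod 600 ~/.ssh/config
```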
Further Information
- Ansys website: http://www.ansys.com