Applications/Ansys

Application Details

  • Versions: V18.2, V19.3, V19.4, V19.5, V20.2, V21.2 and V23.1
  • Module names: ansys/v182, ansys/v193, ansys/v194 (ANSYS2019R2), ansys/v195 (ANSYS2019R3), ansys/v202 (ANSYS2020R2), ansys/v212 (ANSYS2021R2) and ansys/v231 (ANSYS2023R1)
  • License: Restricted
  • Note: Ansys version 17 has been retired and is no longer available
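
The modules above are loaded with the standard environment modules commands; a minimal sketch (ansys/v195 is only an example, substitute whichever module name from the list above you need):

# List the Ansys modules installed on the cluster
module avail ansys

# Load one of them for the current session
module add ansys/v195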

Ansys

The software creates simulated computer models of structures, electronics, or machine components to simulate strength, toughness, elasticity, temperature distribution, electromagnetism, fluid flow, and other attributes. Ansys is used to determine how a product will function with different specifications, without building test products or conducting crash tests. For example, Ansys software may simulate how a bridge will hold up after years of traffic, how to best process salmon in a cannery to reduce waste, or how to design a slide that uses less material without sacrificing safety.

Most Ansys simulations are performed using the Ansys Workbench software, which is one of the company's main products. Typically Ansys users break down larger structures into small components that are each modelled and tested individually. A user may start by defining the dimensions of an object, and then adding weight, pressure, temperature and other physical properties. Finally, the Ansys software simulates and analyzes movement, fatigue, fractures, fluid flow, temperature distribution, electromagnetic efficiency and other effects over time.

Use on HPC

Job Submission Script

#!/bin/bash

#SBATCH -J TestProg 
#SBATCH -N 1     # Number of  Nodes to use
#SBATCH -n 28     # Number of CPUs
#SBATCH -o %N.%j.%a.out     # Output file name
#SBATCH -e %N.%j.%a.err     # Error file name
#SBATCH -p compute     # Partition to run on
#SBATCH --exclusive     # Instructs SLURM to not run any other job in the node(s) selected
#SBATCH --mail-user=<your email address>     # Address for job email notifications

# Remove all currently loaded modules and load Intel MPI and Ansys

module purge
module add intel/mpi/64/5.1.3.181
module add ansys/v195

export FLUENT_GUI=off  #Turns the Fluent GUI off
export I_MPI_ROOT=/trinity/clustervision/CentOS/7/apps/intel/impi/5.1.3.181 #Tells fluent where Intel MPI is located
export I_MPI_DEBUG=5 # Intel MPI level of error messages
export I_MPI_FABRICS=shm:tmi # Sets the Omnipath interconnect message protocol
export I_MPI_FALLBACK=no # No fallback to ethernet

#Checks the number of tasks and sets the number of processes
if [ -z "$SLURM_NPROCS" ]; then
N=$(( $(echo $SLURM_TASKS_PER_NODE | sed -r 's/([0-9]+)\(x([0-9]+)\)/\1 * \2/') ))
else
                N=$SLURM_NPROCS
fi

echo $SLURM_JOB_NODELIST # Prints Node range to the output file
# Prints number of processes to output file
echo $SLURM_NPROCS
echo -e "N: $N\n";

# run fluent in batch on the allocated node(s)
srun hostname -s > hostfile
# Set the Fluent architecture string (64-bit Linux on x86-64)
export FLUENT_ARCH=lnamd64
# Prepend the psm2 compatibility library directory to the library path
export LD_LIBRARY_PATH=/usr/lib64/psm2-compat:$LD_LIBRARY_PATH
# This is the run command. Note -i specifies the name of the input journal file

fluent -ssh  3ddp -g -t$N -mpi=intel -pib.infinipath  -cnf=hostfile -i test.in

  • Note: for v231 and later it appears that I_MPI_ROOT needs to be unset so that Ansys uses its own bundled MPI libraries.
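
A minimal sketch of that adjustment, assuming the rest of the submission script above stays the same (ansys/v231 shown as an example):

module purge
module add ansys/v231

# Per the note above: clear I_MPI_ROOT so the bundled MPI libraries are used
unset I_MPI_ROOT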

Ansys Fluent

Usage Examples

Interactive
[username@login01 ~]$ interactive
salloc: Granted job allocation 296769
Job ID 296769 connecting to c170, please wait...
Last login: Wed Jan 25 09:10:51 2017 from 10.254.5.246
[username@c170 ~]$ module add ansys/v195
[username@c170 ~]$ fluent
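
The solver type and core count can also be passed directly on the command line, as in the batch scripts below; for example, assuming a 3D double-precision case using four of the cores allocated to the interactive session:

[username@c170 ~]$ fluent 3ddp -t4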

Journal File

A journal file needs to be created to control the Fluent job while it is running

rcd "hw_RANS"
/solve/dual-time-iterate 2000 10
wcd "hw_LESHot"
exit

Line 1 - Read case and data from hw_RANS
Line 2 - Run the dual-time (transient) solver: 2000 time steps with up to 10 iterations per time step
Line 3 - Write case and data to hw_LESHot
Line 4 - End the Fluent run and exit
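
For a steady-state case the journal is much the same; a minimal sketch, assuming a case/data file named hw_steady (the file names here are examples) and the standard /solve/iterate command:

rcd "hw_steady"
/solve/iterate 500
wcd "hw_steady_converged"
exit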

Job Submission Script

#!/bin/bash
#SBATCH -J ANSYS_FLUENT # Job Name
#SBATCH -N 1 # Number of  Nodes to use
#SBATCH -n 28 # Number of CPUs
#SBATCH -o %N.%j.%a.out # Output file name
#SBATCH -e %N.%j.%a.err # Error file name
#SBATCH -p compute # Partition to run on
#SBATCH --exclusive # Instructs SLURM to not run any other job in the node(s) selected
#SBATCH --mail-user=<your email address> # Address for job email notifications
  
# Remove all currently loaded modules and load Intel MPI and Ansys V19.5
module purge
module add intel/mpi/64/5.1.3.181
module add ansys/v195
 
export FLUENT_GUI=off  #Turns the Fluent GUI off
export I_MPI_ROOT=/trinity/clustervision/CentOS/7/apps/intel/impi/5.1.3.181 #Tells fluent where Intel MPI is located
export I_MPI_DEBUG=5 # Intel MPI level of error messages
export I_MPI_FABRICS=shm:tmi # Sets the Omnipath interconnect message protocol
export I_MPI_FALLBACK=no # No fallback to ethernet
 
#Checks number of tasks and sets number of processes
if [ -z "$SLURM_NPROCS" ]; then
N=$(( $(echo $SLURM_TASKS_PER_NODE | sed -r 's/([0-9]+)\(x([0-9]+)\)/\1 * \2/') ))
else
                N=$SLURM_NPROCS
fi

echo $SLURM_JOB_NODELIST # Prints Node range to output file
# Prints number of processes to output file
echo $SLURM_NPROCS
echo -e "N: $N\n";
 
# run fluent in batch on the allocated node(s)
srun hostname -s > hostfile
# Set the Fluent architecture string (64-bit Linux on x86-64)
export FLUENT_ARCH=lnamd64
# Prepend the psm2 compatibility library directory to the library path
export LD_LIBRARY_PATH=/usr/lib64/psm2-compat:$LD_LIBRARY_PATH
# This is the run command. Note -i specifies the name of the input journal file
fluent -ssh  3ddp -g -t$N -mpi=intel -pib.infinipath  -cnf=hostfile -i my_fluent_file

Job Submission

[username@login01 ~]$ sbatch fluent.job
Submitted batch job 289535
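
Once the job has been submitted it can be monitored from the login node, and the Fluent console output followed in the file set by the -o line; a short sketch (the output file name follows the %N.%j.%a pattern used in the script):

# Check the state of your jobs in the queue
squeue -u $USER

# Follow the Fluent console output as it is written
tail -f <nodename>.<jobid>.<arrayindex>.out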

Tuning Fluent Tasks

Number of Nodes and Cores

The number of nodes and CPU cores Fluent runs across can be increased, which may improve performance. To do this, alter the number of nodes (#SBATCH -N XX) in the submission script. To make use of the additional CPU cores allocated, also alter #SBATCH -n xx; the number of CPUs should be the number of nodes * 28, as in the sketch below.
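
For example, the changed header lines for a two-node run (2 * 28 = 56 CPUs); the rest of the script is unchanged:

#SBATCH -N 2 # Number of Nodes to use
#SBATCH -n 56 # Number of CPUs (number of nodes * 28)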

Please note: the license pool is shared by all users; using a large number of cores may stop other Ansys jobs from starting due to licensing restrictions.

MPI and Interconnect

Please use Fluent with Intel MPI and the Omnipath interconnect for the best performance.

Visualisation

Ansys Fluent can be used in a client/server model, configured through its internal settings.

License Settings

The license is presently set to the University's pooled Ansys license server.

If you wish to use a different license server (e.g. one with access to more licenses), insert the following lines into your SLURM script, or run them on the command line in an interactive session (the server addresses below are examples only):

module load ansys/v194
 
# Tell Fluent where the licence server is.

export ANSYSLI_SERVERS=2325@myflexlmserver.example.com
export ANSYSLMD_LICENSE_FILE=1055@myflexlmserver.example.com

Issues

One problem which can occur when Fluent is launched from an sbatch file is the following:

ssh_askpass: exec(/usr/libexec/openssh/ssh-askpass): No such file or directory
Host key verification failed.
ssh_askpass: exec(/usr/libexec/openssh/ssh-askpass): No such file or directory
Host key verification failed.
  • Ensure the file ~userid/.ssh/config contains:
StrictHostKeyChecking no
# X11Forwarding yes
UserKnownHostsFile=/dev/null
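
One way to put those lines in place from the login node, as a short sketch (this appends to any existing config and tightens the file permissions):

mkdir -p ~/.ssh
cat >> ~/.ssh/config <<'EOF'
StrictHostKeyChecking no
UserKnownHostsFile=/dev/null
EOF
chmod 600 ~/.ssh/config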


Further Information

  • Ansys website: http://www.ansys.com
