==Application Details==

* Description: LS-DYNA is a general-purpose finite element solver for highly non-linear, transient dynamic problems, provided on Viper as part of the Ansys installation.
* Versions: V17.0, V17.2, V18.2, V19.3
* Module names: ansys/v170, ansys/v172, ansys/182 and ansys/v193 (see the note below this list for checking what is currently installed)
* License: Closed
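
The module names above may change as versions are added or retired. You can check what is currently installed with the standard module command (generic environment-modules usage, not specific to LS-DYNA; output will vary):

<pre style="background-color: #E5E4E2; color: black; font-family: monospace, sans-serif;">
module avail ansys
</pre>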

==Usage Examples==

===Job Submission Script===

<pre style="background-color: #E5E4E2; color: black; font-family: monospace, sans-serif;">
#!/bin/bash
#SBATCH -c 1                  # Number of CPUs per node; must be 1 for this serial example
#SBATCH --mem-per-cpu=4G      # Sets the memory per CPU to 4GB
#SBATCH -J LS-DYNA            # Name of the job
#SBATCH -p compute            # Use the compute partition
#SBATCH -o %N.%j.%a.out       # Output file name
#SBATCH -e %N.%j.%a.err       # Error file name
#SBATCH --mail-user=your email address here
  
# Remove all loaded modules
module purge

# Load Ansys V17.2 and Intel MPI
module add ansys/v172
module add intel/mpi/64/5.1.3.181

# LS-DYNA takes its memory allowance in words rather than bytes (8 bytes per
# word on a 64-bit system), so divide the Slurm per-CPU allocation (in MB) by 8
memory=$(($SLURM_MEM_PER_CPU/8))

# Run LS-DYNA: i= specifies the input file, memory= the memory available,
# and ncpu= the number of CPUs to use
lsdyna i=all.k memory=${memory}M ncpu=$SLURM_CPUS_PER_TASK
</pre>
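
As a worked example of the memory calculation (the values follow from the settings above): --mem-per-cpu=4G makes Slurm export SLURM_MEM_PER_CPU=4096 (megabytes), so the script computes memory=512 and the run line expands to:

<pre style="background-color: #E5E4E2; color: black; font-family: monospace, sans-serif;">
# 4096 MB / 8 bytes per word = 512 million words
lsdyna i=all.k memory=512M ncpu=1
</pre>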
 
 
<pre style="background-color: #000000; color: white; border: 2px solid black; font-family: monospace, sans-serif;">
[username@login01 lsdyna5]$ sbatch ls-dyna.job
Submitted batch job 306050
</pre>
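
Once submitted, the job can be followed with the usual Slurm commands (generic Slurm usage, not specific to LS-DYNA; the job ID is the one reported by sbatch):

<pre style="background-color: #000000; color: white; border: 2px solid black; font-family: monospace, sans-serif;">
[username@login01 lsdyna5]$ squeue -j 306050     # state while pending/running
[username@login01 lsdyna5]$ sacct -j 306050      # accounting view, also after completion
</pre>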

===Parallel Processing Options===
There are two approaches to using multiple processing cores within a node with LS-DYNA. The first is Shared Memory Parallel (SMP) processing, which uses ncpu=X to define the number of cores to use. The alternative is Massively Parallel Processing (MPP), which uses the flag -np X; to use MPP you also need the -dis flag to enable distributed processing. The difference between these settings for one sample job can be seen below, followed by a sketch of a complete MPP submission script:

<pre>
Original run: lsdyna memory=${memory}M i=all.k ncpu=5
 estimated total cpu time          =    290065 sec (      80 hrs 34 mins)
 estimated cpu time to complete    =    290061 sec (      80 hrs 34 mins)
 estimated total clock time        =     59061 sec (      16 hrs 24 mins)
 estimated clock time to complete  =     59059 sec (      16 hrs 24 mins)

Run using -dis flag: lsdyna memory=${memory}M -dis i=all.k ncpu=5
 estimated total cpu time          =     38261 sec (      10 hrs 37 mins)
 estimated cpu time to complete    =     38260 sec (      10 hrs 37 mins)
 estimated total clock time        =     45861 sec (      12 hrs 44 mins)
 estimated clock time to complete  =     45859 sec (      12 hrs 44 mins)

Run using "lsdyna -dis memory=${memory}M pr=aa_r_dy i=all.k -np 5"
 estimated total cpu time          =     20336 sec (       5 hrs 38 mins)
 estimated cpu time to complete    =     20336 sec (       5 hrs 38 mins)
 estimated total clock time        =     22135 sec (       6 hrs  8 mins)
 estimated clock time to complete  =     22134 sec (       6 hrs  8 mins)
</pre>
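
The MPP runs above only change the lsdyna command line, so a complete MPP job script is a small variation on the serial script. The following is a minimal sketch, not a tested recipe: the core count and job name are illustrative, the modules are assumed to be the same as for the serial run, and depending on your licence you may also need the pr= option shown in the last sample run.

<pre style="background-color: #E5E4E2; color: black; font-family: monospace, sans-serif;">
#!/bin/bash
#SBATCH -c 5                  # Cores for the MPP run (illustrative)
#SBATCH --mem-per-cpu=4G      # Sets the memory per CPU to 4GB
#SBATCH -J LS-DYNA-MPP        # Name of the job (illustrative)
#SBATCH -p compute            # Use the compute partition
#SBATCH -o %N.%j.%a.out       # Output file name
#SBATCH -e %N.%j.%a.err       # Error file name

module purge
module add ansys/v172
module add intel/mpi/64/5.1.3.181

# Same memory conversion as in the serial script
memory=$(($SLURM_MEM_PER_CPU/8))

# -dis enables distributed (MPP) processing; -np sets the number of cores
lsdyna -dis memory=${memory}M i=all.k -np $SLURM_CPUS_PER_TASK
</pre>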

==Further Information==

* Forum Support: [http://hpc.phpbb.hull.ac.uk/viewforum.php?f=10&sid=4bdc913307b103754005e4b827bc8a18 Viper Ansys Forum]
* Ansys Website: [http://www.ansys.com Ansys]

{{Licensepagenav}}
