LS-DYNA

Application Details

  • Description: LS-DYNA is a general-purpose finite element program for simulating complex real-world problems, provided here as part of the Ansys installation.
  • Versions: V17.0, V17.2
  • Module names: ansys/v170, ansys/v172
  • License: LS-DYNA is not licensed for more than 1 CPU, so jobs must request a single CPU (see the submission script below).
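
To check which versions are installed and make one available in a session or job script, the usual module commands can be used (a minimal sketch based on the module names listed above):

module avail ansys         # list the installed Ansys versions
module add ansys/v172      # load V17.2 (use ansys/v170 for V17.0)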

Job Submission Script

#!/bin/bash
#SBATCH -c 1                      # Number of CPUs per task; must be set to 1 as LS-DYNA is not licensed for more than 1 CPU
#SBATCH --mem-per-cpu=4G          # Sets memory per CPU to 4GB
#SBATCH -J LS-DYNA                # Name of job
#SBATCH -p compute                # Partition to use (e.g. compute, highmem)
#SBATCH -o %N.%j.%a.out           # Output file name (%N node, %j job ID, %a array index)
#SBATCH -e %N.%j.%a.err           # Error file name

# Remove all loaded modules
module purge

# Load Ansys V17.2 and Intel MPI
module add ansys/v172
module add intel/mpi/64/5.1.3.181

# Convert the per-CPU memory allocation from megabytes into megawords; LS-DYNA measures memory in 8-byte words
memory=$(($SLURM_MEM_PER_CPU/8))
# Run command: i specifies the input file, memory the memory available, and ncpu the number of CPUs used
lsdyna i=all.k memory=${memory}M ncpu=$SLURM_CPUS_PER_TASK
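
As an example of the memory arithmetic above: with --mem-per-cpu=4G, Slurm reports SLURM_MEM_PER_CPU as 4096 (megabytes), so the script computes memory=512 and LS-DYNA receives memory=512M, i.e. 512 megawords of 8 bytes each, which is the full 4 GB allocation.

Submit the script to the queue with sbatch: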
[username@login01 lsdyna5]$ sbatch ls-dyna.job
Submitted batch job 306050
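
Slurm prints the ID of the new job; its progress can then be followed with the standard Slurm tools, for example (using the job ID from the output above):

squeue -j 306050          # show the job's state in the queue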

Further Information

Forum Support: Viper Ansys Forum (http://hpc.phpbb.hull.ac.uk/viewforum.php?f=10)
Ansys Website: Ansys (http://www.ansys.com)