LS-DYNA
Revision as of 13:07, 24 May 2019
Application Details
- Description:
- Versions: V17.0, V17.2
- Module names: ansys/v170, ansys/v172, ansys/v182 and ansys/v193
- License:
Usage Examples
Job Submission Script
#!/bin/bash
#SBATCH -c 1                     # Number of CPUs per task; must be set to 1
#SBATCH --mem-per-cpu=4G         # Sets memory per CPU to 4GB
#SBATCH -J LS-DYNA               # Name of job
#SBATCH -p compute               # Use compute partition
#SBATCH -o %N.%j.%a.out          # Output file name
#SBATCH -e %N.%j.%a.err          # Error file name

# Remove all loaded modules
module purge

# Load Ansys V17.2 and Intel MPI
module add ansys/v172
module add intel/mpi/64/5.1.3.181

# Convert the per-CPU memory allocation for LS-DYNA
memory=$(($SLURM_MEM_PER_CPU/8))

# Run command: i specifies the input file, memory the memory available,
# and ncpu the number of CPUs used
lsdyna i=all.k memory=${memory}M ncpu=$SLURM_CPUS_PER_TASK
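A note on the memory arithmetic in the script: LS-DYNA's memory= option counts memory in words (8 bytes each on a 64-bit build), which is why the script divides Slurm's per-CPU megabyte figure by 8 before appending the M suffix. A minimal sketch of the conversion, assuming SLURM_MEM_PER_CPU is reported in MB as in the script above (here it is set by hand for illustration, since Slurm only exports it inside a job):

```shell
#!/bin/sh
# Simulated value; inside a real job Slurm sets this automatically (in MB)
SLURM_MEM_PER_CPU=4096

# 1 LS-DYNA word = 8 bytes, so MB / 8 gives the figure in megawords
memory=$(($SLURM_MEM_PER_CPU/8))

# 4096 MB of RAM becomes memory=512M on the lsdyna command line
echo "memory=${memory}M"
```

So a 4 GB per-CPU allocation is passed to LS-DYNA as memory=512M.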
[username@login01 lsdyna5]$ sbatch ls-dyna.job
Submitted batch job 306050
Parallel Processing Options
There are two approaches to using multiple processing cores within a node with LS-DYNA. The first is Shared Memory Parallel (SMP) processing, which uses ncpu=X to set the number of cores. The alternative is Massively Parallel Processing (MPP), which uses the flag -np X and also requires the -dis flag to enable distributed processing. The difference between these settings for one sample job can be seen below:
Original run: lsdyna memory=${memory}M i=all.k ncpu=5

  estimated total cpu time         =   290065 sec ( 80 hrs 34 mins)
  estimated cpu time to complete   =   290061 sec ( 80 hrs 34 mins)
  estimated total clock time       =    59061 sec ( 16 hrs 24 mins)
  estimated clock time to complete =    59059 sec ( 16 hrs 24 mins)

Run using -dis flag: lsdyna memory=${memory}M -dis i=all.k ncpu=5

  estimated total cpu time         =    38261 sec ( 10 hrs 37 mins)
  estimated cpu time to complete   =    38260 sec ( 10 hrs 37 mins)
  estimated total clock time       =    45861 sec ( 12 hrs 44 mins)
  estimated clock time to complete =    45859 sec ( 12 hrs 44 mins)

Run using "lsdyna -dis memory=${memory}M pr=aa_r_dy i=all.k -np 5":

  estimated total cpu time         =    20336 sec (  5 hrs 38 mins)
  estimated cpu time to complete   =    20336 sec (  5 hrs 38 mins)
  estimated total clock time       =    22135 sec (  6 hrs  8 mins)
  estimated clock time to complete =    22134 sec (  6 hrs  8 mins)
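To run the MPP variant as a batch job, the submission script from the Usage Examples section can be adapted along the following lines. This is a sketch, not a tested site script: the module names, partition, and input file (all.k) follow the earlier example, and the core count of 5 is purely illustrative.

```shell
#!/bin/bash
#SBATCH -c 5                     # Illustrative: five cores on one node for MPP
#SBATCH --mem-per-cpu=4G         # Sets memory per CPU to 4GB
#SBATCH -J LS-DYNA-MPP           # Name of job
#SBATCH -p compute               # Use compute partition
#SBATCH -o %N.%j.%a.out          # Output file name
#SBATCH -e %N.%j.%a.err          # Error file name

module purge
module add ansys/v172
module add intel/mpi/64/5.1.3.181

# Per-CPU memory (MB) converted to LS-DYNA memory words
memory=$(($SLURM_MEM_PER_CPU/8))

# -dis enables distributed (MPP) processing; -np sets the process count
lsdyna -dis memory=${memory}M i=all.k -np $SLURM_CPUS_PER_TASK
```

Reusing $SLURM_CPUS_PER_TASK for -np keeps the process count in step with the -c request, so only one line needs changing to scale the job.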
Further Information
- Forum Support: Viper Ansys Forum (http://hpc.phpbb.hull.ac.uk/viewforum.php?f=10)
- Ansys Website: http://www.ansys.com