LS-DYNA
From HPC
Application Details
- Description: LS-DYNA is an explicit finite element solver for highly nonlinear, transient dynamic analysis, provided on Viper as part of the Ansys installation.
- Versions: V17.0, V17.2
- Module names: ansys/v170, ansys/v172 (see the example below for loading these)
- License:
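Before submitting a job, the required version can be checked and loaded interactively on the login node. A minimal sketch, assuming the standard environment-modules commands used elsewhere on Viper:

# List the Ansys modules installed on the cluster
module avail ansys

# Load the V17.2 module and the matching Intel MPI module used in the job script below
module add ansys/v172
module add intel/mpi/64/5.1.3.181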
Job Submission Script
#!/bin/bash
#SBATCH -c 1                 # Number of CPUs per task, must be set to 1
#SBATCH --mem-per-cpu=4G     # Sets memory per CPU to 4GB
#SBATCH -J LS-DYNA           # Name of job
#SBATCH -p compute           # Use the compute partition
#SBATCH -o %N.%j.%a.out      # Output file name (node name, job ID, array index)
#SBATCH -e %N.%j.%a.err      # Error file name

# Remove all loaded modules
module purge

# Load Ansys V17.2 and Intel MPI
module add ansys/v172
module add intel/mpi/64/5.1.3.181

# Convert the per-CPU memory allocation (reported by Slurm in MB) into the units used on the lsdyna command line
memory=$(($SLURM_MEM_PER_CPU/8))

# Run LS-DYNA: i specifies the input file, memory the memory available and ncpu the number of CPUs to use
lsdyna i=all.k memory=${memory}M ncpu=$SLURM_CPUS_PER_TASK
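To make the memory line concrete: Slurm exposes the per-CPU allocation in megabytes via SLURM_MEM_PER_CPU, and the script divides it by 8, which corresponds to treating LS-DYNA's M suffix as millions of 8-byte words (an assumption worth checking against the LS-DYNA manual for your version). With the 4G request above the calculation works out as follows:

# For --mem-per-cpu=4G, Slurm sets SLURM_MEM_PER_CPU=4096 (megabytes)
memory=$((4096/8))            # 4096 / 8 = 512
echo "lsdyna i=all.k memory=${memory}M ncpu=1"
# prints: lsdyna i=all.k memory=512M ncpu=1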
[username@login01 lsdyna5]$ sbatch ls-dyna.job
Submitted batch job 306050
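After submission, the job can be followed with the usual Slurm commands; the output and error files are written to the submission directory using the %N.%j.%a pattern from the script (node name, job ID, array index). The job ID below is taken from the sbatch output above:

# Show the state of your queued and running jobs
squeue -u $USER

# Cancel the job if it needs to be stopped
scancel 306050

# Follow the job output once the file appears (actual name depends on the node and array index)
tail -f *.306050.*.out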
Further Information
Forum Support: Viper Ansys Forum (http://hpc.phpbb.hull.ac.uk/viewforum.php?f=10)
Ansys Website: Ansys (http://www.ansys.com)