Applications/Cuda
Application Details
- Description: CUDA is NVIDIA’s parallel computing architecture that enables dramatic increases in computing performance by harnessing the power of the GPU (Graphics Processing Unit).
- Version: 9.0.176, 10.1.168, and 11.5.0 (preferred)
- Modules: cuda/8.0.61, cuda/9.0.176, cuda/10.1.168, and cuda/11.5.0
- Licence: Free to download, but owned by NVIDIA
Note: versions 6.5.14 and 7.5.18 are now retired; versions 8.0.61 and 11.0.3 (superseded) are designated for retirement
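Before loading a version, the installed cuda modules can be listed with the standard Environment Modules commands (the exact output varies by node; nvcc is the CUDA compiler shipped with each module):

  [username@login01 ~]$ module avail cuda
  [username@login01 ~]$ module add cuda/11.5.0
  [username@login01 ~]$ nvcc --version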
Usage Examples
Note: this example is run interactively on a node with a GPU accelerator; access would normally be obtained through the scheduler (see the batch script sketch after the session below)
  [username@login01 ~]$ interactive -pgpu
  salloc: Granted job allocation 1014031
  Job ID 1014031 connecting to gpu03, please wait...
  Last login: Fri Mar 16 10:05:54 2018 from gpu03
  [username@gpu03 ~]$ module add cuda/11.5.0
  [username@gpu03 ~]$ ./gpuTEST
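The source of the gpuTEST binary is not shown on this page. Purely as an illustration, a minimal CUDA program in the same spirit (hypothetical file gpu_test.cu) could check that a device is visible and run a trivial kernel; once the module is loaded it compiles with "nvcc gpu_test.cu -o gpuTEST":

  // gpu_test.cu -- hypothetical stand-in for the gpuTEST binary above
  // (its real source is not shown on this page). Checks device
  // visibility, runs a trivial kernel, and verifies the result.
  #include <cstdio>
  #include <cuda_runtime.h>

  // Each thread writes its global index into out[i].
  __global__ void fillIndices(int *out, int n) {
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i < n) out[i] = i;
  }

  int main() {
      int count = 0;
      if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
          fprintf(stderr, "No CUDA-capable device visible\n");
          return 1;
      }
      cudaDeviceProp prop;
      cudaGetDeviceProperties(&prop, 0);
      printf("%d device(s) found; device 0: %s\n", count, prop.name);

      const int n = 256;
      int h_out[n];
      int *d_out = nullptr;
      cudaMalloc(&d_out, n * sizeof(int));             // device buffer
      fillIndices<<<(n + 127) / 128, 128>>>(d_out, n); // 2 blocks of 128 threads
      cudaMemcpy(h_out, d_out, n * sizeof(int), cudaMemcpyDeviceToHost);
      cudaFree(d_out);

      printf("h_out[%d] = %d (expected %d)\n", n - 1, h_out[n - 1], n - 1);
      return (h_out[n - 1] == n - 1) ? 0 : 1;
  }

For non-interactive runs via the scheduler, a sketch of a Slurm batch script is shown below; the partition name "gpu" is inferred from the "interactive -pgpu" command above, and the GPU request line is a site-specific assumption to check against local documentation:

  #!/bin/bash
  # gpujob.sh -- hypothetical submission script; the partition name is
  # taken from "interactive -pgpu" above, and the --gres request syntax
  # is a site-specific assumption to verify locally.
  #SBATCH --job-name=gpuTEST
  #SBATCH --partition=gpu
  #SBATCH --gres=gpu:1
  #SBATCH --time=00:10:00

  module add cuda/11.5.0
  ./gpuTEST

Submit with "sbatch gpujob.sh"; output is written to the usual slurm-<jobid>.out file in the submission directory.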
Further Information