Applications/Cuda


Application Details

  • Description: CUDA is NVIDIA’s parallel computing architecture that enables dramatic increases in computing performance by harnessing the power of the GPU (graphics processing unit); a minimal example program is sketched below this list.
  • Version: 6.5.14, 7.5.18, 8.0.61, 9.0.176 and 10.1.168 (preferred)
  • Modules: cuda/6.5.14, cuda/7.5.18, cuda/8.0.61, cuda/9.0.176 and cuda/10.1.168
  • Licence: Free, but owned by NVIDIA
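
The description above refers to offloading work to the GPU through CUDA kernels. A minimal sketch of such a program and its build command is shown below; the file, kernel and executable names are illustrative, and the example assumes one of the CUDA modules listed above has been loaded so that the nvcc compiler is available.

// vecadd.cu - illustrative sketch only, not part of the CUDA module
#include <cstdio>

__global__ void add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one GPU thread per element
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                           // 1M elements
    float *a, *b, *c;
    cudaMallocManaged(&a, n * sizeof(float));        // unified memory, visible to host and GPU
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }
    add<<<(n + 255) / 256, 256>>>(a, b, c, n);       // enough 256-thread blocks to cover n
    cudaDeviceSynchronize();                         // wait for the kernel to finish
    printf("c[0] = %f\n", c[0]);                     // expect 3.000000
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}

[username@gpu03 ~]$ nvcc vecadd.cu -o vecadd

nvcc compiles the host code with the system C++ compiler and builds the __global__ kernel for the GPU; the resulting executable is then run on a GPU node as shown in the Usage Examples below.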

Usage Examples

All NVIDIA CUDA® modules include the Deep Neural Network library (cuDNN), a GPU-accelerated library of primitives for deep neural networks. cuDNN provides highly tuned implementations of standard routines such as forward and backward convolution, pooling, normalization, and activation layers, and is part of the NVIDIA Deep Learning SDK.
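
A short sketch that links against the bundled cuDNN and reports its version is shown below. The file and executable names are illustrative, and the plain -lcudnn flag assumes the module places the cuDNN headers and library on the default search paths; if it does not, the appropriate -I and -L options need to be added.

// cudnn_check.cu - illustrative sketch; checks that cuDNN can be initialised
#include <cstdio>
#include <cudnn.h>

int main() {
    cudnnHandle_t handle;
    cudnnStatus_t status = cudnnCreate(&handle);        // create a cuDNN context on the current GPU
    if (status != CUDNN_STATUS_SUCCESS) {
        printf("cuDNN initialisation failed: %s\n", cudnnGetErrorString(status));
        return 1;
    }
    printf("cuDNN version %zu\n", cudnnGetVersion());   // version of the library found at run time
    cudnnDestroy(handle);
    return 0;
}

[username@gpu03 ~]$ nvcc cudnn_check.cu -lcudnn -o cudnn_check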

Note: the example below is run directly on a node with a GPU accelerator; in normal use the node would be allocated through the scheduler.


[username@login01 ~]$ interactive -pgpu
salloc: Granted job allocation 1014031
Job ID 1014031 connecting to gpu03, please wait...
Last login: Fri Mar 16 10:05:54 2018 from gpu03

[username@gpu03 ~]$ module add cuda/10.1.168
[username@gpu03 ~]$ ./gpuTEST
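
As the note above says, GPU nodes are normally requested through the scheduler rather than used interactively. A minimal sketch of an equivalent Slurm batch script is shown below; the job name, partition name, GPU request, time limit and script file name are assumptions based on the interactive example and will depend on the local configuration.

#!/bin/bash
#SBATCH --job-name=gpu-test
#SBATCH --partition=gpu        # assumed partition name, matching "interactive -pgpu"
#SBATCH --gres=gpu:1           # request one GPU
#SBATCH --time=00:10:00

module add cuda/10.1.168
./gpuTEST

The script is submitted from a login node with sbatch, for example:

[username@login01 ~]$ sbatch gputest.job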

Further Information

