What is Viper

Revision as of 16:51, 22 February 2017

Introduction

Viper is the University of Hull's supercomputer. It is located on site, with its own dedicated team to administer it and to develop applications for it.

A supercomputer is a computer with a high level of computing performance compared to a general-purpose computer. It achieves this performance by allowing the user to split a large job into smaller computational tasks that run in parallel, or by running the same task over many different scenarios or data sets across parallel processing units.
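The idea of splitting one large job into independent tasks that run in parallel can be sketched in a few lines. This is a minimal, generic illustration (not Viper-specific code); the `simulate` function and the scenario list are hypothetical stand-ins for a real workload.

```python
# Minimal sketch: split one large job into independent tasks and run them
# in parallel across worker processes, using Python's multiprocessing module.
from multiprocessing import Pool

def simulate(scenario):
    # Stand-in for one independent computation on a single scenario.
    return scenario * scenario

if __name__ == "__main__":
    scenarios = range(8)             # the large job, split into 8 tasks
    with Pool(processes=4) as pool:  # 4 workers process tasks in parallel
        results = pool.map(simulate, scenarios)
    print(results)                   # [0, 1, 4, 9, 16, 25, 36, 49]
```

On a real cluster the same pattern is scaled up: each task becomes a job step or an MPI rank running on a separate core or node, rather than a local worker process.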

Supercomputers generally aim for the maximum in capability computing rather than capacity computing. Capability computing means using the maximum computing power to solve a single large problem in the shortest amount of time; capacity computing, by contrast, means running many smaller or moderately sized jobs cost-effectively. Often a capability system is able to solve a problem of a size or complexity that no other computer can, e.g., a very complex weather simulation.

Like almost all modern supercomputers, Viper runs on Linux, a UNIX-like operating system.


Viper Specifications

Physical Hardware

Viper is based around the Linux operating system and comprises approximately 5,500 processing cores, organised into the following specialised areas:

  • 180 compute nodes, each with 2x 14-core Broadwell E5-2680v4 processors (2.4–3.3 GHz), 128 GB DDR4 RAM
  • 4 high-memory nodes, each with 4x 10-core Haswell E5-4620v3 processors (2.0 GHz), 1 TB DDR4 RAM
  • 4 GPU nodes, each identical to compute nodes with the addition of 4x Nvidia Tesla K40m GPUs per node
  • 2 visualisation nodes, each with 2x Nvidia GTX 980 Ti GPUs
  • Intel Omni-Path interconnect (100 Gb/s node-to-switch and switch-to-switch)
  • 500 TB parallel file system (BeeGFS)
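The quoted figure of approximately 5,500 cores can be checked against the node counts above. The tally below counts CPU cores only; the visualisation nodes' CPUs are an assumption (taken to match the standard compute nodes), since the list specifies only their GPUs.

```python
# Tally CPU cores from the hardware list above.
compute  = 180 * 2 * 14  # 180 nodes x 2 sockets x 14 cores = 5040
high_mem =   4 * 4 * 10  # 4 nodes x 4 sockets x 10 cores   = 160
gpu      =   4 * 2 * 14  # GPU nodes use the same CPUs as compute nodes
viz      =   2 * 2 * 14  # assumption: viz nodes also use compute-node CPUs
total = compute + high_mem + gpu + viz
print(total)             # 5368, consistent with "approximately 5,500"
```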


Infrastructure

  • 4 racks with dedicated cooling and hot-aisle containment
  • Additional rack for storage and management components
  • Dedicated, high-efficiency chiller on AS3 roof for cooling
  • UPS and generator power failover

[Image: Rack-diagram.jpg – diagram of the Viper rack layout]