NVIDIA 16-GPU 32GB DGX-2 Deep Learning System with V100

Product Overview

Contact us for pricing

The world's most powerful deep learning system for the most complex AI challenges.

TCSDGX2-PB



In response to the rapidly growing demands of modern AI workloads, from ever-larger deep neural networks to algorithms that automatically detect features in complex data, deep learning has changed the face of computational technology. Paving the way for modern AI, the NVIDIA® DGX-2™ is recognised as 'the world's most powerful deep learning system', offering unprecedented levels of compute that build on the processing power of its predecessor, the DGX-1. It is the first server to adopt the SXM3 form factor, delivering new levels of AI speed and scale, and the first petaFLOPS system to combine 16 fully interconnected GPUs for 10X the deep learning performance, with ground-breaking GPU scale that lets you train models 4X bigger on a single node.

Perfect for leading-edge research, NVSwitch enables model parallelism with new levels of inter-GPU bandwidth: the DGX-2's networking fabric delivers 2.4TB/s of bisection bandwidth, a 24X increase over prior generations. Two of the fastest CPUs available, from the Intel Xeon Platinum (Skylake) generation, together with triple the memory of the DGX-1, provide enough CPU power to stream data to the GPUs and avoid bottlenecks in deep learning. The DGX-2 is powered by the DGX software stack, enabling simplified deployment at scale without the usual scaling costs and complexities. With an accelerated deployment model purpose-built for ease of scale, businesses can spend more time driving insights and less time building complex infrastructure.

Designed to train what was previously thought impossible, the DGX-2 delivers new levels of AI speed and scale: spend less time optimising and focus your resources on discovery. 'With every NVIDIA system, get started fast, train faster and remain faster with an integrated solution that includes software tools and NVIDIA expertise.'
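The headline figures quoted above can be sanity-checked with simple arithmetic from the per-component specs on this page (16 GPUs, 32GB of HBM2 each, 2.4TB/s bisection bandwidth). The sketch below is an illustration of how the aggregate numbers are derived, not vendor data:

```python
# Derive the aggregate figures from the per-component specs on this page.
NUM_GPUS = 16          # Tesla V100 GPUs in the DGX-2
MEM_PER_GPU_GB = 32    # HBM2 per GPU, per the product title

# Total GPU memory matches the 512GB in the spec table below.
total_gpu_memory_gb = NUM_GPUS * MEM_PER_GPU_GB
print(f"Total GPU memory: {total_gpu_memory_gb} GB")  # 512 GB

# Bisection bandwidth is the traffic crossing between the two halves of
# the fabric (8 GPUs on each side), so per-GPU share across the bisection:
BISECTION_BW_TBS = 2.4
per_gpu_bw_gbs = BISECTION_BW_TBS * 1000 / (NUM_GPUS // 2)
print(f"Per-GPU bisection share: {per_gpu_bw_gbs} GB/s")  # 300.0 GB/s
```

The 300GB/s per-GPU figure is consistent with the full NVLink bandwidth of a V100 being available across the NVSwitch fabric.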

 

Key Features

16 x NVIDIA Tesla V100 GPUs

 


  • CPU Cores

    24

  • CPU Family

    Intel Xeon

  • CPU Manufacturer

    Intel

  • CPU Quantity (Maximum)

    2

  • CPU Series

    Intel Xeon

  • CPU Speed

    2.7 GHz

  • GPU Family

    NVIDIA Tesla Series

  • GPU Manufacturer

    NVIDIA

  • GPU Memory (Total)

    512GB (16 x 32GB)

  • GPU Model

    Tesla V100

  • GPU Quantity

    16

  • Manufacturer

    NVIDIA

  • Memory (Maximum)

    1.5TB

  • Network Adapter

8 x 100Gb/s InfiniBand/100GigE; dual 10/25Gb/s Ethernet

  • Operating Temperature

5°C to +35°C

  • Power Consumption

    10 kW

  • Software (Installed)

    Ubuntu Linux OS

  • SSDs (Installed)

    2 x 960GB NVMe SSDs
    30TB (8 x 3.84TB) NVMe SSDs

Get in touch to discuss our range of solutions

+44 (0) 1727 876 100

Find your solution

Test out any of our solutions at Boston Labs

To help our clients make informed decisions about new technologies, we have opened up our research & development facilities and actively encourage customers to try the latest platforms using their own tools and, if necessary, together with their existing hardware. Remote access is also available.

Contact us
