NVIDIA DGX A100 System 80GB Datasheet
The A100 80GB GPU increases GPU memory bandwidth 30 percent over the A100 40GB GPU, making it the world's first with 2 terabytes per second (TB/s). It also has significantly more on-chip memory, including a 40 megabyte (MB) level 2 cache that's nearly 7X larger, maximizing compute performance. DGX A100 also debuts the third generation of NVIDIA® NVLink®, which doubles the GPU-to-GPU direct bandwidth to 600 gigabytes per second (GB/s), almost 10X higher than PCIe Gen 4, and an NVIDIA NVSwitch™ that's 2X faster than the last generation. This unprecedented power delivers the fastest time to solution, allowing users to tackle challenges that weren't possible or practical before.

[Chart: Up to 1.25X Higher Throughput for AI Inference. RNN-T Inference: Single Stream, sequences per second (relative performance). DGX A100 640GB: 1.25X; DGX A100 320GB: 1X. MLPerf 0.7 RNN-T measured with (1/7) MIG slices. Framework: TensorRT 7.2, dataset = LibriSpeech, precision = FP16.]

[Chart: Up to 83X Higher Throughput than CPU, 2X Higher Throughput than DGX A100 320GB on Big Data Analytics Benchmark (relative performance). DGX A100 640GB: 83X; DGX-1: 11X; CPU Only: 1X.]
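The charts above are measured throughput results; the per-GPU numbers quoted in this section (80GB of HBM2e per GPU, roughly 2TB/s of memory bandwidth, the 40MB level 2 cache, and direct GPU-to-GPU access over NVLink/NVSwitch) can also be read back on the system itself. The sketch below is a minimal illustration using the CUDA runtime API, not part of the datasheet; the device indices, the derived-bandwidth formula, and the availability of nvcc on the host are assumptions.

    // query_a100.cu -- illustrative sketch: read back per-GPU properties with the CUDA runtime API.
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        cudaGetDeviceCount(&count);
        printf("GPUs visible: %d\n", count);

        for (int dev = 0; dev < count; ++dev) {
            cudaDeviceProp prop{};
            cudaGetDeviceProperties(&prop, dev);

            // HBM2e capacity (expect about 80 GB per GPU on a DGX A100 640GB system).
            double mem_gb = prop.totalGlobalMem / 1e9;

            // Rough peak memory bandwidth: clock (kHz) * bus width in bytes * 2 (double data rate).
            // Expect roughly 2 TB/s for the A100 80GB.
            double bw_gbs = 2.0 * prop.memoryClockRate * 1e3 *
                            (prop.memoryBusWidth / 8.0) / 1e9;

            printf("GPU %d: %s, %.0f GB HBM, ~%.0f GB/s peak memory bandwidth, %d MB L2\n",
                   dev, prop.name, mem_gb, bw_gbs, prop.l2CacheSize / (1024 * 1024));
        }

        // NVLink/NVSwitch: check whether GPU 0 can address GPU 1 directly.
        if (count > 1) {
            int peer = 0;
            cudaDeviceCanAccessPeer(&peer, 0, 1);
            printf("GPU0 -> GPU1 peer access: %s\n", peer ? "yes" : "no");
        }
        return 0;
    }

Build with nvcc -o query_a100 query_a100.cu and run it on the DGX A100; each GPU should report values in line with the figures discussed above.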
The World's Most Secure AI System for Enterprise
With the fastest input/output (IO) architecture of any DGX system, NVIDIA DGX A100 is the foundational building block for large-scale AI supercomputers like NVIDIA DGX Cloud and NVIDIA DGX SuperPOD™. DGX A100 features up to eight single-port NVIDIA ConnectX®-6 or ConnectX-7 adapters for clustering and up to two dual-port ConnectX-6 or ConnectX-7 adapters for storage and networking, all capable of 200Gb/s. With ConnectX-7 connectivity to the NVIDIA Quantum-2 InfiniBand switches, DGX SuperPOD can be built with fewer switches and cables, saving capex and opex on the data center network.

Proven Infrastructure Solutions Built With Trusted Data Center Leaders

In combination with leading storage and networking technology providers, a portfolio of infrastructure solutions is available that incorporates the best of the NVIDIA DGX BasePOD™ reference architecture. Delivered as fully integrated, ready-to-deploy offerings through our NVIDIA Partner Network (NPN), these solutions simplify and accelerate data center AI deployments.
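As a companion to the networking description above, the following sketch lists the ConnectX adapters the Linux kernel exposes and each port's link rate, which is one way to confirm the 200Gb/s fabric on a deployed system. It is an illustrative example rather than NVIDIA tooling; it assumes a Linux host with the standard /sys/class/infiniband sysfs tree (vendor utilities such as ibstat report the same information).

    // list_ib_rates.cpp -- illustrative sketch: list InfiniBand/ConnectX devices and port rates via sysfs.
    #include <filesystem>
    #include <fstream>
    #include <iostream>
    #include <string>

    namespace fs = std::filesystem;

    static std::string read_line(const fs::path& p) {
        std::ifstream f(p);
        std::string s;
        std::getline(f, s);
        return s;
    }

    int main() {
        const fs::path root{"/sys/class/infiniband"};
        if (!fs::exists(root)) {
            std::cerr << "No InfiniBand devices exposed by the kernel.\n";
            return 1;
        }
        for (const auto& dev : fs::directory_iterator(root)) {
            for (const auto& port : fs::directory_iterator(dev.path() / "ports")) {
                std::cout << dev.path().filename().string() << " port "
                          << port.path().filename().string() << ": "
                          << read_line(port.path() / "rate")      // e.g. "200 Gb/sec (4X HDR)"
                          << " [" << read_line(port.path() / "state") << "]\n";
            }
        }
        return 0;
    }

Compile with g++ -std=c++17; on a DGX A100, each connected ConnectX port should report a 200 Gb/sec link rate.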
© 2023 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, ConnectX, CUDA-X, DGX, DGX BasePOD, DGX SuperPOD, NGC, NVLink, and NVSwitch are trademarks and/or registered trademarks of NVIDIA Corporation. All company and product names are trademarks or registered trademarks of the respective owners with which they are associated. Features, pricing, availability, and specifications are all subject to change without notice. 2660752 Mar23