NVIDIA H100 NVL 94GB
1,00€
Delivery will be made from 21 days. Free Shipping for orders $1000+
10 in stock
NVIDIA H100 NVL 94GB. NVL variant for dual-GPU inference stacks, 94GB per GPU. Direct import, 1–3y warranty, compliant paperwork, quick delivery, any payment method.
Product description
H100 94GB NVL Original graphics card: graphics, speed, and capabilities without compromise
NVIDIA H100 94GB NVL Original is a specialised accelerator based on the Hopper architecture, designed for training and inference of the largest language models (LLMs) and generative systems. It combines an increased memory capacity of 94 GB HBM3 with a bandwidth of nearly 4 TB/s, allowing it to process models with hundreds of billions of parameters without losing efficiency.
The main feature of the NVL version is support for scaling via NVLink, which allows multiple cards to be combined into a single system with high data transfer speeds — up to 600 GB/s between GPUs. This makes the H100 NVL the optimal choice for data centres and infrastructures working with advanced AI tasks.
Specifications
- GPU memory: 94GB HBM3
- FP64 performance: 30 teraFLOPS
- FP64 tensor core performance: 60 teraFLOPS
- FP32 performance: 60 teraFLOPS
- TF32 tensor core performance: 835 teraFLOPS
- BFLOAT 16 tensor core performance: 1,671 teraFLOPS
- FP16 tensor core performance: 1,671 teraFLOPS
- FP8 tensor core performance: 3,341 teraFLOPS
- INT8 tensor core performance: 3,341 TOPS
- Video memory bandwidth: 3.9 TB/s
- Decoders: 7 NVDEC, 7 JPEG
- Maximum thermal design power (TDP): 350-400 W (configurable)
- Multi-instance GPU: up to 7 MIGS with 12 GB each
- Form factor: PCIe with dual-slot air cooling
- Interconnect: NVIDIA NVLink: 600 GB/s, PCIe Gen5: 128 GB/s
- Server options: Partner and NVIDIA-certified systems with 1-8 GPUs
- NVIDIA Enterprise: Included
Key benefits and use cases
The H100 94GB NVL is in demand in areas where maximum computing power and scalability are required:
- LLM inference and training — running and optimising the largest models, including generative AI.
- Cloud and data centres — building clusters of dozens of GPUs with low interconnect latency.
- Multimodal models — processing text, images, and video within a single system.
- GPU virtualisation — using MIG and vGPU for flexible resource allocation.
- Enterprise AI infrastructure — integration with NVIDIA AI Enterprise.
How the H100 NVL differs from the classic H100
While the standard H100 80GB PCIe is designed for general-purpose AI and HPC tasks, the H100 NVL 94GB was created specifically for scalable AI clusters and working with extremely large LLMs. Its increased memory capacity and NVLink make it the preferred solution for those building infrastructures with dozens or hundreds of GPUs.
Essentially, the H100 NVL is the choice for companies working at the forefront of generative AI, where every millisecond and scalability matters.
Why you should buy the NVIDIA H100 94GB NVL Original at OsoDoso-Store
- Direct delivery from the US and Europe;
- 3-year warranty;
- Any form of payment. Card, bank transfer (w/ex VAT), USDT cryptocurrency;
- consultations with specialists on integration into server configurations and clusters.
Buying H100 94GB NVL Original at OsoDoso-Store means investing in a proven and scalable tool for working with generative AI and models of the highest level.
NVIDIA H100 94GB NVL Original is an accelerator focused on the future of artificial intelligence. With increased memory, NVLink support, and Hopper architecture, it provides the necessary performance headroom for the largest models and data centres, forming the basis for new generations of AI systems.
Additional information
Weight | 1,8 kg |
---|---|
Dimensions | 268 × 112 cm |
Country of manufacture | Taiwan |
Model | NVIDIA H100 |
Technological process | 4 nm |
Architecture | Hopper |
Number of CUDA cores | 14592 |
Number of Tensor cores | 432 |
GPU Frequency (MHz) | 1665 |
GPU Boost Frequency (MHz) | 1837 |
Video memory size (GB) | 94 |
Memory type | HBM3 |
Memory bus width (bits) | 5120 |
Memory Bandwidth (GB/s) | 3900 |
Cache L2 (MB) | 50 |
Connection interface (PCIe) | PCIe 5.0 x16 |
FP16 performance (TFLOPS) | 1513 |
FP32 performance (TFLOPS) | 30 |
FP64 performance (TFLOPS) | 30 |
Cooling type | Passive (server module) |
Number of occupied slots (pcs) | 2 |
Temperature range (°C) | 0–85 |
NVLink Throughput (GB/s) | 600 |
Multi-GPU support | Yes, via NVLink |
Virtualization/MIG support | MIG (up to 7 instances) |
Width (mm) | 112 |
Length (mm) | 268 |
Weight (kg) | 1.8 |
Product reviews
Only logged in customers who have purchased this product may leave a review.
Reviews
There are no reviews yet.