How to Calculate GPU VRAM
What is GPU VRAM?
A GPU VRAM calculator estimates the video RAM required to run a large language model locally. VRAM requirements depend on model size (parameters) and numerical precision (quantization).
Formula
VRAM (GB) ≈ P × (bits / 8), where P is the parameter count in billions; at float16 (16 bits) this simplifies to P × 2. Adjust upward for higher precision, activations, and batch size.
- P — model parameters in billions (e.g., 7B, 13B, 70B)
- bits — precision in bits: 8 (int8), 16 (float16/bfloat16), 32 (float32)
- VRAM — video memory required (GB)
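The formula above can be sketched as a small Python helper (the function name is illustrative):

```python
def vram_gb(p_billion: float, bits: int = 16) -> float:
    """Estimate VRAM in GB: P billion parameters × (bits / 8) bytes per parameter."""
    return p_billion * 1e9 * (bits / 8) / 1e9

# 7B parameters at float16 → 14.0 GB
print(vram_gb(7, 16))
```

Note this is the weights-only minimum; it ignores activations and the KV cache.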
Step-by-Step Guide
1. VRAM (bytes) = parameters × bytes per parameter.
2. Bytes per parameter: FP32 = 4, FP16/BF16 = 2, INT8 = 1, INT4 = 0.5.
3. Add ~20% overhead for activations and the KV cache.
4. Example: a 7B model at FP16 = 7 × 2 = 14 GB minimum.
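The steps above can be combined into one estimate; the 20% overhead factor is the rule of thumb from step 3, not an exact figure:

```python
# Step 2: bytes per parameter at each precision
BYTES_PER_PARAM = {"fp32": 4.0, "fp16": 2.0, "bf16": 2.0, "int8": 1.0, "int4": 0.5}

def estimate_vram_gb(params_billion: float, precision: str = "fp16",
                     overhead: float = 0.20) -> float:
    """Steps 1-2: weights = params × bytes/param; step 3: add ~20% overhead."""
    weights_gb = params_billion * BYTES_PER_PARAM[precision]  # billions × bytes = GB
    return weights_gb * (1 + overhead)

# Step 4: 7B at FP16 → 14 GB weights, ~16.8 GB with overhead
print(round(estimate_vram_gb(7, "fp16"), 1))
```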
Worked Examples
- Input: 7B params at FP16 → Result: ~14 GB VRAM minimum (RTX 3090 or better)
- Input: 70B params at INT4 → Result: ~35 GB VRAM (2× A100 40GB)
- Input: 13B params at INT8 → Result: ~13 GB VRAM (RTX 4090)
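All three worked examples reduce to the same arithmetic (weights-only minimums, before overhead):

```python
# (parameters in billions, bytes per parameter, precision label)
examples = [
    (7, 2.0, "FP16"),   # 7B × 2 bytes
    (70, 0.5, "INT4"),  # 70B × 0.5 bytes
    (13, 1.0, "INT8"),  # 13B × 1 byte
]
for params_b, bytes_per_param, name in examples:
    gb = params_b * bytes_per_param  # billions × bytes/param = GB
    print(f"{params_b}B at {name}: ~{gb:g} GB minimum")
```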
Frequently Asked Questions
How much VRAM does a 7B model need?
Roughly 14 GB in float16 (2 bytes/param); ~7 GB with int8 quantization; ~28 GB in float32. Add ~20% overhead for activations and the KV cache.
What is quantization?
Quantization reduces numerical precision (e.g., float32 → int8), which cuts VRAM and speeds up inference with only minor quality loss. It is popular for deployment.
Does batch size affect VRAM?
Yes. Larger batches need more VRAM for activations and the KV cache; smaller batches use less memory but deliver lower throughput.
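A rough KV-cache estimate shows why batch size matters. The layer and hidden-dimension defaults below are assumptions modeled on a Llama-2-7B-like architecture, not values from this article:

```python
def kv_cache_gb(batch: int, seq_len: int, n_layers: int = 32,
                hidden_dim: int = 4096, bytes_per_elem: float = 2.0) -> float:
    """KV cache ≈ batch × seq_len × layers × 2 (K and V) × hidden_dim × bytes."""
    return batch * seq_len * n_layers * 2 * hidden_dim * bytes_per_elem / 1e9

# KV cache grows linearly with batch size:
print(kv_cache_gb(1, 4096))  # ~2.1 GB at batch 1
print(kv_cache_gb(8, 4096))  # ~17.2 GB at batch 8
```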
Ready to calculate? Try the free GPU VRAM Calculator
Try it yourself →