
(quantization-index)=

# Quantization

Quantization trades off model precision for a smaller memory footprint, allowing large models to run on a wider range of devices.
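To illustrate the trade-off, here is a minimal sketch of symmetric per-tensor int8 quantization using NumPy. This is an illustrative example of the general technique, not vLLM's implementation; the methods listed below (AWQ, GGUF, FP8, etc.) use more sophisticated schemes:

```python
import numpy as np

def quantize_int8(w: np.ndarray) -> tuple[np.ndarray, float]:
    # Map float weights onto the int8 range [-127, 127]
    # using a single scale factor for the whole tensor.
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    # Recover approximate float weights from the int8 values.
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.0, 0.25, 0.9], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# int8 storage needs 1 byte per weight instead of 4 (fp32)
# or 2 (fp16), at the cost of a small reconstruction error.
```

The reconstruction error is bounded by roughly half the scale factor, which is why quantization works best when a model's weight distributions are well behaved.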

```{toctree}
:caption: Contents
:maxdepth: 1

supported_hardware
auto_awq
bnb
gguf
int8
fp8
fp8_e5m2_kvcache
fp8_e4m3_kvcache
```