(quantization-index)=

# Quantization

Quantization trades off model precision for a smaller memory footprint, allowing large models to run on a wider range of devices.
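
As a minimal sketch, a quantized checkpoint can be loaded through vLLM's `LLM` entry point by passing the `quantization` argument (the model name below is illustrative; see the pages listed in the contents for the methods and hardware vLLM supports):

```python
from vllm import LLM

# Load an AWQ-quantized checkpoint; the 4-bit weights shrink the memory
# footprint relative to the full-precision model. (Illustrative model name.)
llm = LLM(model="TheBloke/Llama-2-7B-Chat-AWQ", quantization="awq")

outputs = llm.generate("What is quantization?")
print(outputs[0].outputs[0].text)
```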

:::{toctree}
:caption: Contents
:maxdepth: 1

supported_hardware
auto_awq
bnb
gguf
int8
fp8
quantized_kvcache
:::