(quantization-index)=
# Quantization
Quantization trades off model precision for a smaller memory footprint, allowing large models to run on a wider range of devices.
:::{toctree}
:caption: Contents
:maxdepth: 1
supported_hardware
auto_awq
bnb
gguf
gptqmodel
int4
int8
fp8
quark
quantized_kvcache
:::