---
title: Quantization
---

[](){ #quantization-index }

Quantization trades off model precision for smaller memory footprint, allowing large models to be run on a wider range of devices.
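As a rough illustration of the footprint side of this tradeoff, the sketch below estimates the weight memory of a hypothetical 7B-parameter model at a few common precisions. The parameter count and the helper function are illustrative assumptions, not part of vLLM's API, and real deployments also need memory for the KV cache and activations.

```python
# Approximate weight-only memory at different precisions (illustrative sketch).
# Ignores KV cache, activations, and per-group quantization metadata overhead.

def weight_memory_gib(num_params: float, bits_per_param: int) -> float:
    """Return approximate weight memory in GiB."""
    return num_params * bits_per_param / 8 / 1024**3

params = 7e9  # hypothetical 7B-parameter model
for name, bits in [("FP16", 16), ("INT8", 8), ("INT4", 4)]:
    print(f"{name}: {weight_memory_gib(params, bits):.1f} GiB")
```

Halving the bit width roughly halves the weight footprint, which is why 4-bit weights can fit a model on a single consumer GPU that would otherwise require two in FP16.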

Contents: