
Quantization

Quantization trades off model precision for a smaller memory footprint, allowing large models to be run on a wider range of devices.
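
As a quick illustration, a pre-quantized checkpoint can be served by passing the `quantization` argument when constructing an `LLM`. This is a minimal sketch; the model name below is a placeholder for any AWQ-quantized checkpoint you have access to.

```python
from vllm import LLM, SamplingParams

# Load an AWQ-quantized checkpoint (placeholder model name) and select the
# quantization method explicitly via the `quantization` argument.
llm = LLM(model="TheBloke/Llama-2-7B-AWQ", quantization="awq")

# Run a short generation to confirm the quantized model is working.
outputs = llm.generate(
    ["Hello, my name is"],
    SamplingParams(temperature=0.8, max_tokens=32),
)
print(outputs[0].outputs[0].text)
```

The pages listed below cover the individual quantization methods and hardware support in more detail.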

Contents: