# Quantization

Quantization trades off model precision for smaller memory footprint, allowing large models to be run on a wider range of devices.
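Below is a minimal sketch of running a pre-quantized checkpoint with vLLM's offline `LLM` API. The model name is a placeholder, not an endorsement of a specific checkpoint; substitute any model whose weights ship in a quantization format vLLM supports (e.g. AWQ, GPTQ, FP8), as covered by the pages listed under Contents.

```python
from vllm import LLM, SamplingParams

# Placeholder AWQ checkpoint; replace with a quantized model of your choice.
llm = LLM(
    model="TheBloke/Llama-2-7B-AWQ",
    quantization="awq",  # select the quantized weight format / kernel path
)

outputs = llm.generate(
    ["Quantization reduces memory footprint by"],
    SamplingParams(max_tokens=32),
)
print(outputs[0].outputs[0].text)
```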

Contents: