Update docs with correction about default CUDA version

Correct 12.1 --> 12.4
Michael Goin
2025-01-07 17:29:07 -05:00
committed by GitHub
parent 973f5dc581
commit c1d1875ba3


@@ -2,7 +2,7 @@
 # Installation for CUDA
-vLLM is a Python library that also contains pre-compiled C++ and CUDA (12.1) binaries.
+vLLM is a Python library that also contains pre-compiled C++ and CUDA (12.4) binaries.
 ## Requirements
@@ -43,12 +43,12 @@ Therefore, it is recommended to install vLLM with a **fresh new** environment. I
 You can install vLLM using either `pip` or `uv pip`:
 ```console
-$ # Install vLLM with CUDA 12.1.
+$ # Install vLLM with CUDA 12.4.
 $ pip install vllm # If you are using pip.
 $ uv pip install vllm # If you are using uv.
 ```
-As of now, vLLM's binaries are compiled with CUDA 12.1 and public PyTorch release versions by default. We also provide vLLM binaries compiled with CUDA 11.8 and public PyTorch release versions:
+As of now, vLLM's binaries are compiled with CUDA 12.4 and public PyTorch release versions by default. We also provide vLLM binaries compiled with CUDA 11.8 and public PyTorch release versions:
 ```console
 $ # Install vLLM with CUDA 11.8.