Compare commits

1 Commit

Commit c1d1875ba3: Updates docs with correction about default cuda version
Correct 12.1 --> 12.4
Date: 2025-01-07 17:29:07 -05:00

@@ -2,7 +2,7 @@
 # Installation for CUDA
-vLLM is a Python library that also contains pre-compiled C++ and CUDA (12.1) binaries.
+vLLM is a Python library that also contains pre-compiled C++ and CUDA (12.4) binaries.
 ## Requirements
@@ -43,12 +43,12 @@ Therefore, it is recommended to install vLLM with a **fresh new** environment. I
 You can install vLLM using either `pip` or `uv pip`:
 ```console
-$ # Install vLLM with CUDA 12.1.
+$ # Install vLLM with CUDA 12.4.
 $ pip install vllm # If you are using pip.
 $ uv pip install vllm # If you are using uv.
 ```
-As of now, vLLM's binaries are compiled with CUDA 12.1 and public PyTorch release versions by default. We also provide vLLM binaries compiled with CUDA 11.8 and public PyTorch release versions:
+As of now, vLLM's binaries are compiled with CUDA 12.4 and public PyTorch release versions by default. We also provide vLLM binaries compiled with CUDA 11.8 and public PyTorch release versions:
 ```console
 $ # Install vLLM with CUDA 11.8.
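
Whichever wheel you install, you can confirm the CUDA version the binaries were built against by querying the PyTorch build that vLLM pulls in. A minimal sketch, assuming a standard `pip` or `uv pip` install with `torch` importable in the active environment:

```console
$ # Print the CUDA version the installed PyTorch binaries target (e.g. 12.4 for the default wheels).
$ python -c "import torch; print(torch.version.cuda)"
$ # Print the installed vLLM version.
$ python -c "import vllm; print(vllm.__version__)"
```

If the first command reports 11.8 when you expected 12.4 (or vice versa), the other wheel variant is installed in the active environment.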