Mirror of https://github.com/vllm-project/vllm.git (synced 2025-10-20 23:03:52 +08:00)

Compare commits: ci/build/2...correct-do (1 commit)

| Author | SHA1 | Date |
|---|---|---|
| | c1d1875ba3 | |
````diff
@@ -2,7 +2,7 @@
 
 # Installation for CUDA
 
-vLLM is a Python library that also contains pre-compiled C++ and CUDA (12.1) binaries.
+vLLM is a Python library that also contains pre-compiled C++ and CUDA (12.4) binaries.
 
 ## Requirements
 
@@ -43,12 +43,12 @@ Therefore, it is recommended to install vLLM with a **fresh new** environment. I
 You can install vLLM using either `pip` or `uv pip`:
 
 ```console
-$ # Install vLLM with CUDA 12.1.
+$ # Install vLLM with CUDA 12.4.
 $ pip install vllm # If you are using pip.
 $ uv pip install vllm # If you are using uv.
 ```
 
-As of now, vLLM's binaries are compiled with CUDA 12.1 and public PyTorch release versions by default. We also provide vLLM binaries compiled with CUDA 11.8 and public PyTorch release versions:
+As of now, vLLM's binaries are compiled with CUDA 12.4 and public PyTorch release versions by default. We also provide vLLM binaries compiled with CUDA 11.8 and public PyTorch release versions:
 
 ```console
 $ # Install vLLM with CUDA 11.8.
````
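The hunk above keeps the doc's advice to install vLLM into a fresh environment using either `pip` or `uv pip`. As a complement, here is a minimal sketch of that workflow with `uv`; the environment name and Python version are illustrative assumptions, not part of this patch.

```console
$ # Create and activate a fresh virtual environment (name and Python version are examples).
$ uv venv vllm-env --python 3.12
$ source vllm-env/bin/activate
$ # Install vLLM; per this change, the default wheels now target CUDA 12.4.
$ uv pip install vllm
```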
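The second hunk is truncated right after the comment line for the CUDA 11.8 path, so the actual command is not visible here. For orientation only, installing a CUDA 11.8 build typically means pointing `pip` at a `cu118` wheel from the vLLM releases page together with the matching PyTorch index; the version number and wheel name below are assumptions, not content from this commit.

```console
$ # Hypothetical CUDA 11.8 install; check the vLLM releases page for the real version and wheel name.
$ export VLLM_VERSION=0.6.1   # assumed version, for illustration only
$ export PYTHON_VERSION=310   # assumed CPython 3.10
$ pip install https://github.com/vllm-project/vllm/releases/download/v${VLLM_VERSION}/vllm-${VLLM_VERSION}+cu118-cp${PYTHON_VERSION}-cp${PYTHON_VERSION}-manylinux1_x86_64.whl --extra-index-url https://download.pytorch.org/whl/cu118
```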