Mirror of https://github.com/vllm-project/vllm.git (synced 2025-10-28 20:34:35 +08:00)
Compare commits (1 commit): v0.8.5.pos...correct-do
Commit: c1d1875ba3
````diff
@@ -2,7 +2,7 @@
 
 # Installation for CUDA
 
-vLLM is a Python library that also contains pre-compiled C++ and CUDA (12.1) binaries.
+vLLM is a Python library that also contains pre-compiled C++ and CUDA (12.4) binaries.
 
 ## Requirements
 
@@ -43,12 +43,12 @@ Therefore, it is recommended to install vLLM with a **fresh new** environment. I
 You can install vLLM using either `pip` or `uv pip`:
 
 ```console
-$ # Install vLLM with CUDA 12.1.
+$ # Install vLLM with CUDA 12.4.
 $ pip install vllm # If you are using pip.
 $ uv pip install vllm # If you are using uv.
 ```
 
-As of now, vLLM's binaries are compiled with CUDA 12.1 and public PyTorch release versions by default. We also provide vLLM binaries compiled with CUDA 11.8 and public PyTorch release versions:
+As of now, vLLM's binaries are compiled with CUDA 12.4 and public PyTorch release versions by default. We also provide vLLM binaries compiled with CUDA 11.8 and public PyTorch release versions:
 
 ```console
 $ # Install vLLM with CUDA 11.8.
````
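The substance of this change is that the default pre-built wheel now targets CUDA 12.4 instead of 12.1, so the installed NVIDIA driver must support at least the wheel's CUDA version (drivers are forward compatible with older CUDA builds). As a rough illustration of that compatibility check, here is a minimal sketch; `parse_cuda` and `is_compatible` are hypothetical helpers for this example, not part of vLLM or the CUDA toolkit:

```python
def parse_cuda(version: str) -> tuple[int, int]:
    """Turn a CUDA version string like '12.4' into (12, 4) for numeric comparison."""
    major, minor = version.split(".")
    return int(major), int(minor)

def is_compatible(driver_cuda: str, wheel_cuda: str) -> bool:
    """A driver supporting CUDA >= the wheel's target version can run that wheel."""
    return parse_cuda(driver_cuda) >= parse_cuda(wheel_cuda)

# Driver newer than the wheel target: fine.
print(is_compatible("12.6", "12.4"))  # True
# Driver stuck on 11.8: too old for the default 12.4 wheel,
# which is why the CUDA 11.8 wheels mentioned above still exist.
print(is_compatible("11.8", "12.4"))  # False
```

Users on older drivers would reach for the separately provided CUDA 11.8 binaries rather than the default wheel.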