frozenleaves/vllm
mirror of https://github.com/vllm-project/vllm.git (synced 2025-10-20 23:03:52 +08:00)
vllm/docker at commit a5464dcf92bba8dfd052fc79bfc40e08aee515d9
Latest commit: 44c8555621 [CI/Build] Fix AMD import failures in CI (#26841), Zhewen Li, 2025-10-16 07:28:20 +00:00
File | Last commit | Date
Dockerfile | [CI] Raise VLLM_MAX_SIZE_MB to 500 due to failing Build wheel - CUDA 12.9 (#26722) | 2025-10-14 10:52:05 -07:00
Dockerfile.cpu | Remove Python 3.9 support ahead of PyTorch 2.9 in v0.11.1 (#26416) | 2025-10-08 10:40:42 -07:00
Dockerfile.nightly_torch | Bump Flashinfer to v0.4.0 (#26326) | 2025-10-08 23:58:44 -07:00
Dockerfile.ppc64le | [CI/Build] Fix ppc64le CPU build and tests (#22443) | 2025-10-11 13:04:42 +08:00
Dockerfile.rocm | [CI/Build] Fix AMD import failures in CI (#26841) | 2025-10-16 07:28:20 +00:00
Dockerfile.rocm_base | [ROCm][Build] Add support for AMD Ryzen AI MAX / AI 300 Series (#25908) | 2025-10-01 21:39:49 +00:00
Dockerfile.s390x | [CI/Build] Replace vllm.entrypoints.openai.api_server entrypoint with vllm serve command (#25967) | 2025-10-02 10:04:57 -07:00
Dockerfile.tpu | Always use cache mounts when installing vllm to avoid populating pip cache in the image. Also remove apt cache. (#23270) (see the sketch after this table) | 2025-08-21 18:01:03 -04:00
Dockerfile.xpu | [XPU] Upgrade NIXL to remove CUDA dependency (#26570) | 2025-10-11 05:15:23 +00:00
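
The Dockerfile.tpu entry above references the BuildKit cache-mount pattern. Below is a minimal sketch of that pattern; the base image and the packages installed are illustrative assumptions, not the actual contents of Dockerfile.tpu.

```dockerfile
# syntax=docker/dockerfile:1
# Minimal sketch of the cache-mount pattern described in #23270.
# Base image and packages are assumptions for illustration only.
FROM python:3.12-slim

# Install build-time OS packages and delete the apt lists in the same
# layer, so the apt cache never ends up baked into an image layer.
RUN apt-get update \
    && apt-get install -y --no-install-recommends git \
    && rm -rf /var/lib/apt/lists/*

# A BuildKit cache mount keeps pip's download cache outside the image:
# downloaded wheels are reused across rebuilds, but the resulting layer
# contains only the installed packages, not the cache.
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install vllm
```

With BuildKit enabled (the default in current Docker), a plain `docker build .` reuses the mounted pip cache across builds while keeping it out of the final image, which is the size saving the commit message describes.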