frozenleaves/vllm
Mirror of https://github.com/vllm-project/vllm.git (last synced 2025-10-20 14:53:52 +08:00)
vllm/docker at commit e6e898f95ded60a6282c0f7a4b78278c2de49ed7
Latest commit: elvischenv 5e49c3e777 Bump Flashinfer to v0.4.0 (#26326)
Signed-off-by: elvischenv <219235043+elvischenv@users.noreply.github.com>
2025-10-08 23:58:44 -07:00
File                      Last commit, date
Dockerfile                Bump Flashinfer to v0.4.0 (#26326), 2025-10-08 23:58:44 -07:00
Dockerfile.cpu            Remove Python 3.9 support ahead of PyTorch 2.9 in v0.11.1 (#26416), 2025-10-08 10:40:42 -07:00
Dockerfile.nightly_torch  Bump Flashinfer to v0.4.0 (#26326), 2025-10-08 23:58:44 -07:00
Dockerfile.ppc64le        [CI/Build] Replace vllm.entrypoints.openai.api_server entrypoint with vllm serve command (#25967), 2025-10-02 10:04:57 -07:00 (see the entrypoint sketch below)
Dockerfile.rocm           [ROCm][CI/Build] Use ROCm7.0 as the base (#25178), 2025-09-18 09:36:55 -07:00
Dockerfile.rocm_base      [ROCm][Build] Add support for AMD Ryzen AI MAX / AI 300 Series (#25908), 2025-10-01 21:39:49 +00:00
Dockerfile.s390x          [CI/Build] Replace vllm.entrypoints.openai.api_server entrypoint with vllm serve command (#25967), 2025-10-02 10:04:57 -07:00
Dockerfile.tpu            Always use cache mounts when installing vllm to avoid populating pip cache in the image. Also remove apt cache. (#23270), 2025-08-21 18:01:03 -04:00 (see the cache-mount sketch below)
Dockerfile.xpu            [CI/Build] Replace vllm.entrypoints.openai.api_server entrypoint with vllm serve command (#25967), 2025-10-02 10:04:57 -07:00
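
The #25967 entries above replace the module-style OpenAI server entrypoint with the vllm CLI. A minimal sketch of what such a Dockerfile change can look like; the exact entrypoint lines and flags in vLLM's Dockerfiles may differ:

    # Before: launch the OpenAI-compatible API server as a Python module.
    # ENTRYPOINT ["python3", "-m", "vllm.entrypoints.openai.api_server"]

    # After: use the vllm serve CLI command instead.
    ENTRYPOINT ["vllm", "serve"]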
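
The Dockerfile.tpu entry (#23270) describes two build habits: mounting pip's cache as a BuildKit build-time cache so it never lands in an image layer, and removing the apt lists after package installs. A minimal sketch of the pattern, assuming a placeholder base image and package set rather than vLLM's actual TPU Dockerfile:

    # syntax=docker/dockerfile:1
    # Placeholder base image, for illustration only.
    FROM ubuntu:22.04

    # Install OS packages, then remove the apt lists so they do not persist in the layer.
    RUN apt-get update && \
        apt-get install -y --no-install-recommends python3 python3-pip && \
        rm -rf /var/lib/apt/lists/*

    # BuildKit cache mount: pip reuses its download/wheel cache across builds
    # without the cache ever being written into the image.
    RUN --mount=type=cache,target=/root/.cache/pip \
        pip3 install vllm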