frozenleaves/vllm
Mirror of https://github.com/vllm-project/vllm.git (synced 2025-10-20 14:53:52 +08:00)
Files in vllm/docker at commit ab4be40fc5bde2dbe76e3fb081dc3721219b5a2b
Latest commit: 41d3071918 by jiahanc, 2025-10-16 16:20:25 -07:00
[NVIDIA] [Perf] Update to leverage flashinfer trtllm FP4 MOE throughput kernel (#26714)
Signed-off-by: jiahanc <173873397+jiahanc@users.noreply.github.com>
Co-authored-by: Michael Goin <mgoin64@gmail.com>
File                     | Last commit                                                                                                              | Date
Dockerfile               | [NVIDIA] [Perf] Update to leverage flashinfer trtllm FP4 MOE throughput kernel (#26714)                                  | 2025-10-16 16:20:25 -07:00
Dockerfile.cpu           | Remove Python 3.9 support ahead of PyTorch 2.9 in v0.11.1 (#26416)                                                       | 2025-10-08 10:40:42 -07:00
Dockerfile.nightly_torch | [NVIDIA] [Perf] Update to leverage flashinfer trtllm FP4 MOE throughput kernel (#26714)                                  | 2025-10-16 16:20:25 -07:00
Dockerfile.ppc64le       | [CI/Build] Fix ppc64le CPU build and tests (#22443)                                                                      | 2025-10-11 13:04:42 +08:00
Dockerfile.rocm          | [CI/Build] Fix AMD import failures in CI (#26841)                                                                        | 2025-10-16 07:28:20 +00:00
Dockerfile.rocm_base     | [ROCm][Build] Add support for AMD Ryzen AI MAX / AI 300 Series (#25908)                                                  | 2025-10-01 21:39:49 +00:00
Dockerfile.s390x         | [CI/Build] Replace vllm.entrypoints.openai.api_server entrypoint with vllm serve command (#25967)                        | 2025-10-02 10:04:57 -07:00
Dockerfile.tpu           | Always use cache mounts when installing vllm to avoid populating pip cache in the image. Also remove apt cache. (#23270) | 2025-08-21 18:01:03 -04:00
Dockerfile.xpu           | [XPU] Upgrade NIXL to remove CUDA dependency (#26570)                                                                    | 2025-10-11 05:15:23 +00:00
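
The Dockerfile.tpu entry above (#23270) describes a BuildKit pattern worth spelling out: pip downloads go into a cache mount that lives only on the build host, and the apt lists are deleted in the same layer that creates them, so neither cache is committed into the image. A minimal sketch of that pattern follows; the base image and the installed packages are illustrative assumptions, not lines taken from the actual Dockerfile.tpu.

    # syntax=docker/dockerfile:1
    FROM python:3.12-slim

    # Illustrative only: stage pip downloads in a BuildKit cache mount.
    # The mount exists only during the build, so ~/.cache/pip never
    # lands in a committed image layer.
    RUN --mount=type=cache,target=/root/.cache/pip \
        pip install vllm

    # Remove the apt package lists in the same RUN step that creates
    # them, so the cache is gone before the layer is committed.
    RUN apt-get update \
        && apt-get install -y --no-install-recommends git \
        && rm -rf /var/lib/apt/lists/*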
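
Similarly, the Dockerfile.s390x entry (#25967) swaps the module-path entrypoint for the vllm serve CLI, which wraps the same OpenAI-compatible server. A hedged before/after sketch; the exact ENTRYPOINT arguments in the repo's Dockerfiles may differ.

    # Before: start the OpenAI-compatible server by module path.
    # ENTRYPOINT ["python3", "-m", "vllm.entrypoints.openai.api_server"]

    # After: the vllm serve command.
    ENTRYPOINT ["vllm", "serve"]

Arguments given to docker run are appended to the entrypoint, so "docker run IMAGE MODEL --port 8000" effectively runs "vllm serve MODEL --port 8000" inside the container.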