frozenleaves/vllm-dev
Branch: main
Path: vllm/attention
Latest commit: fce10dbed5 by Kunshang Ji
[XPU] Add xpu torch.compile support (#22609)
Signed-off-by: Kunshang Ji <kunshang.ji@intel.com>
2025-08-27 05:33:27 +00:00
Name         Last commit message                                         Last commit date
backends     [Docs] Fix warnings in mkdocs build (#23649)                2025-08-26 18:19:23 +00:00
layers       [Misc] Modify CacheConfig import (#23459)                   2025-08-23 06:05:27 +00:00
ops          [Kernel] Add FP8 support with FlashMLA backend (#22668)     2025-08-22 02:26:32 +00:00
utils        [MISC] Add init files for python package (#20908)           2025-07-15 12:16:33 +00:00
__init__.py  Remove duplicate entry in vllm.attention.__all__ (#23296)   2025-08-20 17:14:59 -07:00
layer.py     [XPU] Add xpu torch.compile support (#22609)                2025-08-27 05:33:27 +00:00
selector.py  [gpt-oss] Enable gpt-oss on ampere (#22714)                 2025-08-12 03:21:44 -07:00