A high-throughput and memory-efficient inference and serving engine for LLMs
Topics: amd, blackwell, cuda, deepseek, deepseek-v3, gpt, gpt-oss, inference, kimi, llama, llm, llm-serving, model-serving, moe, openai, pytorch, qwen, qwen3, tpu, transformer
Updated 2025-10-20 14:31:03 +08:00
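As a quick orientation, here is a minimal offline-inference sketch using vLLM's documented Python API; the model name, prompt, and sampling settings are illustrative, not taken from this listing.

```python
# Minimal vLLM offline-inference example (model and prompt are illustrative).
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# generate() takes a list of prompts and returns one RequestOutput per prompt.
outputs = llm.generate(["The capital of France is"], params)
for out in outputs:
    print(out.outputs[0].text)
```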
Community-maintained hardware plugin for vLLM on Ascend
Updated 2025-10-20 09:50:44 +08:00
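For context, vLLM loads out-of-tree hardware backends such as this one through its plugin system. The sketch below, assuming the entry-point group name "vllm.platform_plugins" from vLLM's plugin-system docs, lists whichever platform plugins are installed; the example output in the comment is hypothetical.

```python
# Hedged sketch: enumerate installed vLLM platform plugins via Python
# entry points. Group name assumed from vLLM's plugin-system docs;
# requires Python 3.10+ for the keyword form of entry_points().
from importlib.metadata import entry_points

for ep in entry_points(group="vllm.platform_plugins"):
    # hypothetical output: "ascend -> vllm_ascend:register"
    print(ep.name, "->", ep.value)
```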
A high-throughput and memory-efficient inference and serving engine for LLMs
Topics: amd, cuda, deepseek, gpt, hpu, inference, inferentia, llama, llm, llm-serving, llmops, mlops, model-serving, pytorch, qwen, rocm, tpu, trainium, transformer, xpu
Updated 2025-10-11 16:48:30 +08:00
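Since this entry is tagged llm-serving, a brief serving sketch may help: vLLM exposes an OpenAI-compatible HTTP server (started with `vllm serve <model>`), which any OpenAI client library can query. The base URL, API key, and model name below are illustrative.

```python
# Query a locally running vLLM OpenAI-compatible server
# (e.g. started with: vllm serve Qwen/Qwen2.5-7B-Instruct).
# Base URL, api_key, and model name are illustrative.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="Qwen/Qwen2.5-7B-Instruct",
    messages=[{"role": "user", "content": "In one sentence, what is vLLM?"}],
)
print(resp.choices[0].message.content)
```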