Fixes for deepseek mtp, enable_shared_expert_dp and use_cached_kv_cache_bytes (#3074)
### What this PR does / why we need it?
This PR contains several small fixes:
1) fix initialization and forward bugs of DeepseekMTPLayer with
`shared_expert_dp` enabled.
2) fix a tensor shape mismatch after o_proj caused by a workaround
change in NPUModelRunner.
3) avoid an unnecessary reduction of kv_cache memory (default: 64MB) when
`use_cached_kv_cache_bytes` is disabled.
4) fall back `fused_moe_state` from `MC2` to `All2All`, since the padding
logic of `mc2_mask` is incompatible with the input hidden_states when
`shared_expert_dp` is enabled (see the sketch after this list).
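
As a rough illustration of the fallback in item 4, the sketch below shows the intended behavior; `FusedMoEState` and `select_fused_moe_state` are hypothetical names and do not reflect the actual vllm-ascend code.

```python
from enum import Enum


class FusedMoEState(Enum):
    # Hypothetical stand-in for the fused-MoE dispatch modes mentioned above.
    MC2 = "mc2"
    ALL2ALL = "all2all"


def select_fused_moe_state(shared_expert_dp: bool,
                           preferred: FusedMoEState = FusedMoEState.MC2) -> FusedMoEState:
    """Fall back from MC2 to All2All when shared_expert_dp is enabled.

    With shared_expert_dp, the padding applied to mc2_mask no longer matches
    the padded input hidden_states, so MC2 cannot be used safely.
    """
    if shared_expert_dp and preferred is FusedMoEState.MC2:
        return FusedMoEState.ALL2ALL
    return preferred
```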
Once this PR is merged, users can launch disaggregated_prefill
deployments (large EP) with `deepseek_mtp` and `shared_expert_dp`, as on
the `v0.9.1-dev` branch. The remaining issue of fewer available kv_cache
tokens compared to `v0.9.1-dev` will be resolved by
https://github.com/vllm-project/vllm-ascend/pull/3073.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
E2E vLLM serving of deepseek_mtp with torchair graph mode and of
`enable_shared_expert_dp` with eager mode. Large EP deployments were also
tested with this PR.
- vLLM version: v0.10.2
- vLLM main: 5aeb925452
---------
Signed-off-by: linfeng-yuan <1102311262@qq.com>
vLLM Ascend Plugin
Latest News 🔥
- [2025/09] We released the new official version v0.9.1! Please follow the official guide to start deploying large-scale Expert Parallelism (EP) on Ascend.
- [2025/08] We hosted the vLLM Beijing Meetup with vLLM and Tencent! Please find the meetup slides here.
- [2025/06] User stories page is now live! It kicks off with LLaMA-Factory/verl/TRL/GPUStack to demonstrate how vLLM Ascend assists Ascend users in enhancing their experience across fine-tuning, evaluation, reinforcement learning (RL), and deployment scenarios.
- [2025/06] Contributors page is now live! All contributions deserve to be recorded, thanks for all contributors.
- [2025/05] We've released the first official version v0.7.3! We collaborated with the vLLM community to publish a blog post sharing our practice: Introducing vLLM Hardware Plugin, Best Practice from Ascend NPU.
- [2025/03] We hosted the vLLM Beijing Meetup with vLLM team! Please find the meetup slides here.
- [2025/02] vLLM community officially created vllm-project/vllm-ascend repo for running vLLM seamlessly on the Ascend NPU.
- [2024/12] We are working with the vLLM community to support [RFC]: Hardware pluggable.
Overview
vLLM Ascend (vllm-ascend) is a community-maintained hardware plugin for running vLLM seamlessly on the Ascend NPU.
It is the recommended approach for supporting the Ascend backend within the vLLM community. It adheres to the principles outlined in the [RFC]: Hardware pluggable, providing a hardware-pluggable interface that decouples the integration of the Ascend NPU with vLLM.
By using the vLLM Ascend plugin, popular open-source models, including Transformer-like, Mixture-of-Experts, Embedding, and Multi-modal LLMs, can run seamlessly on the Ascend NPU.
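
As an aside, out-of-tree platforms like vllm-ascend are discovered by vLLM through Python entry points; the short sketch below lists whatever platform plugins are installed in the current environment. The group name `vllm.platform_plugins` is an assumption here, and the selectable entry-points API requires Python 3.10+.

```python
from importlib.metadata import entry_points

# Assumed entry-point group for platform plugins; vllm-ascend would show up
# here once installed (the `group=` keyword requires Python 3.10+).
for ep in entry_points(group="vllm.platform_plugins"):
    print(f"{ep.name} -> {ep.value}")
```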
Prerequisites
- Hardware: Atlas 800I A2 Inference series, Atlas A2 Training series, Atlas 800I A3 Inference series, Atlas A3 Training series, Atlas 300I Duo (Experimental)
- OS: Linux
- Software:
- Python >= 3.9, < 3.12
- CANN >= 8.2.rc1 (refer to here for the matching Ascend HDK version)
- PyTorch >= 2.7.1, torch-npu >= 2.7.1.dev20250724
- vLLM (the same version as vllm-ascend)
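
A minimal sanity check of these version requirements (a sketch only; it assumes torch, torch-npu, and the third-party packaging library are already installed):

```python
import sys

import torch
import torch_npu  # provided by the torch-npu package
from packaging.version import Version  # third-party "packaging" library

assert (3, 9) <= sys.version_info[:2] < (3, 12), "Python must be >= 3.9 and < 3.12"
assert Version(torch.__version__.split("+")[0]) >= Version("2.7.1"), "PyTorch >= 2.7.1 is required"
print("torch:", torch.__version__)
print("torch_npu loaded from:", torch_npu.__file__)
```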
Getting Started
Please use the following recommended versions to get started quickly:
Version | Release type | Doc |
---|---|---|
v0.10.2rc1 | Latest release candidate | QuickStart and Installation for more details |
v0.9.1 | Latest stable version | QuickStart and Installation for more details |
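
After installing a recommended version pair, the standard vLLM offline-inference API can serve as a quick smoke test (the model name below is only an example):

```python
from vllm import LLM, SamplingParams

# With vllm-ascend installed next to a matching vLLM version, the Ascend
# backend is picked up automatically; no device-specific code is needed here.
llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct")
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

for output in llm.generate(["Hello, vLLM Ascend!"], params):
    print(output.outputs[0].text)
```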
Contributing
See CONTRIBUTING for more details; it is a step-by-step guide that helps you set up the development environment, build, and test.
We welcome and value any contributions and collaborations:
- Please let us know if you encounter a bug by filing an issue
- Please use the Users Forum for usage questions and help.
Branch
vllm-ascend has a main branch and dev branches.
- main: the main branch, which corresponds to the vLLM main branch and is continuously monitored for quality through Ascend CI.
- vX.Y.Z-dev: development branches, created alongside selected vLLM releases. For example, v0.7.3-dev is the dev branch for vLLM v0.7.3.
The maintained branches are listed below:
Branch | Status | Note |
---|---|---|
main | Maintained | CI commitment for vLLM main branch and vLLM 0.10.x branch |
v0.7.1-dev | Unmaintained | Only doc fixes are allowed |
v0.7.3-dev | Maintained | CI commitment for vLLM 0.7.3 version; only bug fixes are allowed and no new release tags will be created. |
v0.9.1-dev | Maintained | CI commitment for vLLM 0.9.1 version |
rfc/feature-name | Maintained | Feature branches for collaboration |
Please refer to Versioning policy for more details.
Weekly Meeting
- vLLM Ascend Weekly Meeting: https://tinyurl.com/vllm-ascend-meeting
- Wednesday, 15:00 - 16:00 (UTC+8, Convert to your timezone)
License
Apache License 2.0, as found in the LICENSE file.