### What this PR does / why we need it?
vLLM Ascend plugin (vllm-ascend) is a backend plugin for running vLLM on
the Ascend NPU.
This plugin is the recommended approach for supporting the Ascend
backend within the vLLM community. It adheres to the principles outlined
in the [RFC]: Hardware pluggable, providing a hardware-pluggable
interface that decouples the integration of the Ascend NPU with vLLM.
This patch also includes changes to make CI work and to use caching to speed up the e2e tests:
1. Change the push (post-merge CI) and pull_request (PR CI) trigger branch to main.
2. Make mypy work by ignoring base_communicator and clearing unused deps.
3. Several improvements for vllm_ascend_test:
   - Use caches (pip, ms, hf) to speed up the e2e tests (25 mins --> 5 mins)
   - Switch the `git clone` command to `actions/checkout` to speed up checkout
   - Enable `-sv` for pytest for a better info dump
   - Remove the network host setting to resolve `docker: conflicting options: cannot attach both user-defined and non-user-defined network modes`, which is a problem on Docker 1.45 but not on 1.39
4. Adapt MLA decode optimizations:
cabaf4eff3
### Does this PR introduce _any_ user-facing change?
Yes, this is the initial PR.
### How was this patch tested?
- This is the first PR to make the Ascend NPU work with vLLM. All code is
tested on Ascend hardware with the vLLM V0 Engine.
- CI passed
---------
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
Co-authored-by: wangxiyuan <wangxiyuan1007@gmail.com>
Co-authored-by: MengqingCao <cmq0113@163.com>
Co-authored-by: wangshuai09 <391746016@qq.com>
Co-authored-by: Shanshan Shen <467638484@qq.com>
Co-authored-by: wangli <wangli858794774@gmail.com>
# vLLM Ascend Plugin

| About Ascend | Developer Slack (#sig-ascend) |

## Latest News 🔥

- [2024/12] We are working with the vLLM community to support [RFC]: Hardware pluggable.

## Overview
vLLM Ascend plugin (vllm-ascend) is a backend plugin for running vLLM on the Ascend NPU.
This plugin is the recommended approach for supporting the Ascend backend within the vLLM community. It adheres to the principles outlined in the [RFC]: Hardware pluggable, providing a hardware-pluggable interface that decouples the integration of the Ascend NPU with vLLM.
By using the vLLM Ascend plugin, popular open-source models, including Transformer-like, Mixture-of-Experts, Embedding, and Multi-modal LLMs, can run seamlessly on the Ascend NPU.
## Prerequisites

### Supported Devices

- Atlas A2 Training series (Atlas 800T A2, Atlas 900 A2 PoD, Atlas 200T A2 Box16, Atlas 300T A2)
- Atlas 800I A2 Inference series (Atlas 800I A2)
### Dependencies

| Requirement | Supported version | Recommended version | Note |
|-------------|-------------------|---------------------|------|
| vLLM        | main              | main                | Required for vllm-ascend |
| Python      | >= 3.9            | 3.10                | Required for vLLM |
| CANN        | >= 8.0.RC2        | 8.0.RC3             | Required for vllm-ascend and torch-npu |
| torch-npu   | >= 2.4.0          | 2.5.1rc1            | Required for vllm-ascend |
| torch       | >= 2.4.0          | 2.5.1               | Required for torch-npu and vLLM |
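As a quick sanity check, the Python requirement from the table above can be verified programmatically. A minimal sketch (the version bounds are taken directly from the table; everything else is standard library):

```python
import sys

# vLLM requires Python >= 3.9; 3.10 is the recommended version (see table above).
required = (3, 9)
recommended = (3, 10)

assert sys.version_info[:2] >= required, (
    f"Python {required[0]}.{required[1]}+ is required, "
    f"found {sys.version_info.major}.{sys.version_info.minor}"
)
if sys.version_info[:2] != recommended:
    print(f"Note: Python {recommended[0]}.{recommended[1]} is the recommended version.")
```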
Find more about how to set up your environment here.
## Getting Started

> **Note:** Currently, we are actively collaborating with the vLLM community to support the Ascend backend plugin. Once it is supported, you will be able to complete the installation with a one-line command: `pip install vllm vllm-ascend`.
Installation from source code:

```shell
# Install vllm main branch according to:
# https://docs.vllm.ai/en/latest/getting_started/installation/cpu/index.html#build-wheel-from-source
git clone --depth 1 https://github.com/vllm-project/vllm.git
cd vllm
pip install -r requirements-build.txt
VLLM_TARGET_DEVICE=empty pip install .

# Install vllm-ascend main branch
git clone https://github.com/vllm-project/vllm-ascend.git
cd vllm-ascend
pip install -e .
```
Run the following command to start the vLLM server with the Qwen/Qwen2.5-0.5B-Instruct model:

```shell
# export VLLM_USE_MODELSCOPE=true to speed up the download
vllm serve Qwen/Qwen2.5-0.5B-Instruct
curl http://localhost:8000/v1/models
```
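The server exposes an OpenAI-compatible HTTP API, so it can also be queried from Python. A minimal sketch using only the standard library (it assumes the server above is running on the default port 8000; the prompt and sampling parameters are illustrative):

```python
import json
import urllib.request

# Request body for the OpenAI-compatible completions endpoint;
# the model name matches the one passed to `vllm serve` above.
payload = {
    "model": "Qwen/Qwen2.5-0.5B-Instruct",
    "prompt": "Hello, my name is",
    "max_tokens": 32,
    "temperature": 0.7,
}
body = json.dumps(payload).encode("utf-8")

# Uncomment to send the request against a running server:
# req = urllib.request.Request(
#     "http://localhost:8000/v1/completions",
#     data=body,
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["text"])
print(body.decode("utf-8"))
```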
Please refer to vLLM Quickstart for more details.
## Building

### Build Python package from source

```shell
git clone https://github.com/vllm-project/vllm-ascend.git
cd vllm-ascend
pip install -e .
```

### Build container image from source

```shell
git clone https://github.com/vllm-project/vllm-ascend.git
cd vllm-ascend
docker build -t vllm-ascend-dev-image -f ./Dockerfile .
```
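An image built this way can then be started interactively. A hedged sketch only: the NPU device nodes and driver mount paths below are assumptions based on a typical Ascend host setup, and may differ on your machine:

```shell
# Map the first NPU device and the host driver tools into the container
# (paths are typical for an Ascend host; adjust to your installation).
docker run --rm -it \
    --device /dev/davinci0 \
    --device /dev/davinci_manager \
    -v /usr/local/Ascend/driver:/usr/local/Ascend/driver \
    -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
    vllm-ascend-dev-image bash
```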
See Building and Testing for more details; it is a step-by-step guide that helps you set up the development environment, build, and test.
## Contributing

We welcome and value any contributions and collaborations:
- Please let us know if you encounter a bug by filing an issue.
- Please see the guidance on how to contribute in CONTRIBUTING.md.
## License

Apache License 2.0, as found in the LICENSE file.