### What this PR does / why we need it?
This patch enables the doc build for vllm-ascend:
- Add a Sphinx build for vllm-ascend
- Enable readthedocs for vllm-ascend
- Fix CI (see the sketch after this list):
  - Exclude vllm-empty/tests/mistral_tool_use to skip the `You need to agree to share your contact information to access this model` failure introduced in 314cfade02
  - Install the test requirements to fix https://github.com/vllm-project/vllm-ascend/actions/runs/13304112758/job/37151690770:
```
vllm-empty/tests/mistral_tool_use/conftest.py:4: in <module>
import pytest_asyncio
E ModuleNotFoundError: No module named 'pytest_asyncio'
```
  - Exclude doc-only PRs from triggering the test CI
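A minimal sketch of what the CI fix could look like, assuming the vLLM checkout lives under `vllm-empty` and its test dependencies are listed in `requirements-test.txt` (the actual workflow steps, paths, and flags may differ):
```bash
# Hypothetical commands illustrating the CI fix above; the paths and the
# requirements file name are assumptions, not the exact workflow steps.
pip install -r vllm-empty/requirements-test.txt    # provides pytest_asyncio
pytest vllm-empty/tests \
  --ignore=vllm-empty/tests/mistral_tool_use       # skip the gated-model tests
```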
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
1. Tested locally:
```bash
# Install dependencies.
pip install -r requirements-docs.txt
# Build the docs and preview
make clean; make html; python -m http.server -d build/html/
```
Launch a browser and open http://localhost:8000/.
2. CI passed with preview:
https://vllm-ascend--55.org.readthedocs.build/en/55/
Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
# vLLM Ascend Plugin

| About Ascend | Developer Slack (#sig-ascend) |

English | 中文

## Latest News 🔥
- [2024/12] We are working with the vLLM community to support [RFC]: Hardware pluggable.
## Overview
vLLM Ascend plugin (vllm-ascend) is a backend plugin for running vLLM on the Ascend NPU.
This plugin is the recommended approach for supporting the Ascend backend within the vLLM community. It adheres to the principles outlined in the [RFC]: Hardware pluggable, providing a hardware-pluggable interface that decouples the integration of the Ascend NPU with vLLM.
By using the vLLM Ascend plugin, popular open-source models, including Transformer-like, Mixture-of-Experts, Embedding, and Multi-modal LLMs, can run seamlessly on the Ascend NPU.
## Prerequisites
- Hardware: Atlas 800I A2 Inference series, Atlas A2 Training series
- Software:
- Python >= 3.9
- CANN >= 8.0.RC2
- PyTorch >= 2.4.0, torch-npu >= 2.4.0
- vLLM (the same version as vllm-ascend)
Find more details about how to set up your environment step by step here. A quick sanity check is sketched below.
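As a rough way to verify the prerequisites above, the commands below may help; `npu-smi` and the `torch_npu` import are the usual names in an Ascend environment, but treat this as a sketch rather than an official check:
```bash
# Rough environment check for the prerequisites listed above.
python3 --version                                            # expect >= 3.9
python3 -c "import torch; print(torch.__version__)"          # expect >= 2.4.0
python3 -c "import torch_npu; print(torch_npu.__version__)"  # expect >= 2.4.0
npu-smi info                                                 # the CANN driver should list your NPUs
```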
## Getting Started
Note: Currently, we are actively collaborating with the vLLM community to support the Ascend backend plugin. Once it is supported, you will be able to complete the installation with a single command: `pip install vllm vllm-ascend`.
Installation from source code:
```bash
# Install the vLLM main branch according to:
# https://docs.vllm.ai/en/latest/getting_started/installation/cpu/index.html#build-wheel-from-source
git clone --depth 1 https://github.com/vllm-project/vllm.git
cd vllm
pip install -r requirements-build.txt
VLLM_TARGET_DEVICE=empty pip install .

# Install the vllm-ascend main branch
git clone https://github.com/vllm-project/vllm-ascend.git
cd vllm-ascend
pip install -e .
```
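Optionally, a quick check that both packages import after installation (the Python module name `vllm_ascend` is assumed here):
```bash
# Post-install sanity check; the vllm_ascend module name is an assumption.
python3 -c "import vllm; print('vllm', vllm.__version__)"
python3 -c "import vllm_ascend; print('vllm_ascend imported OK')"
```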
Run the following command to start the vLLM server with the Qwen/Qwen2.5-0.5B-Instruct model:
```bash
# export VLLM_USE_MODELSCOPE=true to speed up download
vllm serve Qwen/Qwen2.5-0.5B-Instruct
curl http://localhost:8000/v1/models
```
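Once the server is up, you can also send a request to the OpenAI-compatible completions endpoint; the snippet below is an illustrative example (the model name must match the one passed to `vllm serve`):
```bash
# Example request against the OpenAI-compatible server started above.
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Qwen/Qwen2.5-0.5B-Instruct",
    "prompt": "Hello, my name is",
    "max_tokens": 32,
    "temperature": 0
  }'
```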
Please refer to official docs for more details.
## Contributing
See CONTRIBUTING for more details. It is a step-by-step guide to help you set up the development environment, build, and test.
We welcome and value any contributions and collaborations:
- Please feel free to comment here about your usage of the vLLM Ascend plugin.
- Please let us know if you encounter a bug by filing an issue.
## License
Apache License 2.0, as found in the LICENSE file.