Yikun Jiang 46977f9f06 [Doc] Add sphinx build for vllm-ascend (#55)
### What this PR does / why we need it?

This patch enables the doc build for vllm-ascend

- Add sphinx build for vllm-ascend
- Enable readthedocs for vllm-ascend
- Fix CI:
  - Exclude vllm-empty/tests/mistral_tool_use to skip `You need to agree to share your contact information to access this model`, which was introduced in 314cfade02 (see the sketch after this list)
  - Install test requirements to fix https://github.com/vllm-project/vllm-ascend/actions/runs/13304112758/job/37151690770:
      ```
      vllm-empty/tests/mistral_tool_use/conftest.py:4: in <module>
          import pytest_asyncio
      E   ModuleNotFoundError: No module named 'pytest_asyncio'
      ```
  - Exclude docs PRs
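
For reference, one way to skip a directory at pytest collection time is a conftest.py ignore list (a sketch only; the actual CI change may instead pass an `--ignore` path on the pytest command line):

```python
# vllm-empty/tests/conftest.py (sketch): stop pytest from collecting the
# gated-model tests; collect_ignore_glob is a standard conftest variable.
collect_ignore_glob = ["mistral_tool_use/*"]
```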

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
1. Test locally:
    ```bash
    # Install dependencies.
    pip install -r requirements-docs.txt
    
    # Build the docs and preview
    make clean; make html; python -m http.server -d build/html/
    ```
    
    Launch a browser and open http://localhost:8000/.

2. CI passed with preview:
    https://vllm-ascend--55.org.readthedocs.build/en/55/

Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
2025-02-13 18:44:17 +08:00


vLLM Ascend Plugin

| About Ascend | Developer Slack (#sig-ascend) |

English | 中文



Overview

vLLM Ascend plugin (vllm-ascend) is a backend plugin for running vLLM on the Ascend NPU.

This plugin is the recommended approach for supporting the Ascend backend within the vLLM community. It adheres to the principles outlined in the [RFC]: Hardware pluggable, providing a hardware-pluggable interface that decouples the integration of the Ascend NPU from vLLM core.
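
As a rough illustration of that hardware-pluggable design, a platform plugin typically exposes a registration hook that vLLM discovers through a Python entry point and that tells vLLM which platform implementation to load (a minimal sketch; the entry-point group name and the NPUPlatform path are assumptions, not necessarily the exact code in this repository):

```python
# vllm_ascend/__init__.py (sketch): vLLM discovers this function through an
# entry point (e.g. the "vllm.platform_plugins" group declared in setup.py)
# and calls it to learn which platform class to load for Ascend NPUs.
def register():
    """Return the dotted path of the Ascend platform implementation."""
    return "vllm_ascend.platform.NPUPlatform"
```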

By using the vLLM Ascend plugin, popular open-source models, including transformer-like, mixture-of-experts, embedding, and multi-modal LLMs, can run seamlessly on the Ascend NPU.

Prerequisites

  • Hardware: Atlas 800I A2 Inference series, Atlas A2 Training series
  • Software:
    • Python >= 3.9
    • CANN >= 8.0.RC2
    • PyTorch >= 2.4.0, torch-npu >= 2.4.0
    • vLLM (the same version as vllm-ascend)

Find out more about how to set up your environment step by step here.
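
Before installing, you can sanity-check the software prerequisites with a short script (a minimal sketch; the exact version requirements depend on the vllm-ascend release you install):

```python
# check_env.py: verify that Python, PyTorch and torch-npu meet the prerequisites.
import sys

assert sys.version_info >= (3, 9), f"Python >= 3.9 required, found {sys.version}"

import torch
print("torch:", torch.__version__)            # expect >= 2.4.0

import torch_npu                               # shipped with the Ascend torch-npu package
print("torch_npu:", torch_npu.__version__)     # expect >= 2.4.0
print("NPU available:", torch.npu.is_available())
```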

Getting Started

Note

Currently, we are actively collaborating with the vLLM community to support the Ascend backend plugin. Once supported, you will be able to complete the installation with the one-line command pip install vllm vllm-ascend.

Installation from source code:

# Install the vLLM main branch according to:
# https://docs.vllm.ai/en/latest/getting_started/installation/cpu/index.html#build-wheel-from-source
git clone --depth 1 https://github.com/vllm-project/vllm.git
cd vllm
pip install -r requirements-build.txt
VLLM_TARGET_DEVICE=empty pip install .

# Install vllm-ascend main branch
git clone https://github.com/vllm-project/vllm-ascend.git
cd vllm-ascend
pip install -e .

Run the following command to start the vLLM server with the Qwen/Qwen2.5-0.5B-Instruct model:

# export VLLM_USE_MODELSCOPE=true to speed up download
vllm serve Qwen/Qwen2.5-0.5B-Instruct
curl http://localhost:8000/v1/models

Please refer to the official docs for more details.
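
Once the server is up, any OpenAI-compatible client can talk to it; for example, using the openai Python package (assuming the default port 8000 and the model served above):

```python
# query_server.py: send a chat completion request to the vLLM OpenAI-compatible server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="Qwen/Qwen2.5-0.5B-Instruct",
    messages=[{"role": "user", "content": "Hello, what can you do?"}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```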

Contributing

See CONTRIBUTING for more details; it is a step-by-step guide to help you set up the development environment, build, and test.

We welcome and value any contributions and collaborations:

  • Please feel free to comment here about your usage of the vLLM Ascend plugin.
  • Please let us know if you encounter a bug by filing an issue.

License

Apache License 2.0, as found in the LICENSE file.
