vllm-ascend/docs/source/user_guide/additional_config.md
sdmyzlp 7bdc606677 Support multistream of shared experts in FusedMoE (#997)
Contains #1111 for completeness.

### What this PR does / why we need it?
Implement multi-stream parallelism for MoE layers with shared experts,
where the computation of the shared experts is overlapped with the
dispatch and combine of routed-expert tokens. In addition, when
multi-stream is enabled, the weights of the shared experts are forced to
be replicated across all cards, regardless of any tensor parallelism
configuration, to avoid AllReduce operations.

The expected overlapping is:
```
| shared gate_up | shared act |              | shared down |
|    dispatch    | routed gate_up, act, down |   combine   |
```
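Conceptually, the shared-expert branch is launched on a secondary stream while token dispatch, routed-expert computation, and combine stay on the default stream. Below is a minimal sketch of that pattern, not the actual implementation: it assumes the `torch.npu` stream APIs mirror `torch.cuda` (`Stream`, `stream`, `wait_stream`), and `dispatch`, `combine`, `shared_experts`, and `routed_experts` are hypothetical placeholders for the real FusedMoE internals.

```python
import torch
import torch_npu  # assumed to expose torch.npu stream APIs mirroring torch.cuda


def fused_moe_forward(hidden_states, shared_experts, routed_experts, dispatch, combine):
    """Sketch: overlap shared experts with dispatch/combine on a secondary stream."""
    secondary = torch.npu.Stream()
    # Ensure the secondary stream sees hidden_states produced on the default stream.
    secondary.wait_stream(torch.npu.current_stream())

    with torch.npu.stream(secondary):
        # Shared gate_up + act, overlapping with dispatch on the default stream.
        shared_hidden = shared_experts.gate_up_act(hidden_states)

    dispatched = dispatch(hidden_states)      # default stream
    routed_out = routed_experts(dispatched)   # routed gate_up, act, down

    with torch.npu.stream(secondary):
        # Shared down projection, overlapping with combine on the default stream.
        shared_out = shared_experts.down(shared_hidden)

    combined = combine(routed_out)

    # Join both streams before summing shared and routed outputs.
    torch.npu.current_stream().wait_stream(secondary)
    return combined + shared_out
```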


### Does this PR introduce _any_ user-facing change?
No.


### How was this patch tested?
Tested on a 1x16 910 node, with a tailored 2-layer DSKv2 model.

---------

Signed-off-by: sdmyzlp <lrwei2@petalmail.com>

# Additional Configuration

Additional configuration is a mechanism provided by vLLM to allow plugins to control their inner behavior on their own. vLLM Ascend uses this mechanism to make the project more flexible.

## How to use

Additional configuration can be used in either online or offline mode. Take Qwen3 as an example:

Online mode:

```bash
vllm serve Qwen/Qwen3-8B --additional-config='{"config_key":"config_value"}'
```

Offline mode:

```python
from vllm import LLM

LLM(model="Qwen/Qwen3-8B", additional_config={"config_key": "config_value"})
```

## Configuration options

The following table lists the additional configuration options available in vLLM Ascend:

| Name | Type | Default | Description |
|------|------|---------|-------------|
| `torchair_graph_config` | dict | `{}` | The config options for torchair graph mode |
| `ascend_scheduler_config` | dict | `{}` | The config options for ascend scheduler |
| `expert_tensor_parallel_size` | str | `0` | Expert tensor parallel size for the model to use |
| `refresh` | bool | `false` | Whether to refresh the global ascend config content. This is usually used in the RLHF case |
| `expert_map_path` | str | `None` | When using expert load balancing for the MoE model, an expert map path needs to be passed in |

The details of each config option are as follows:

### torchair_graph_config

| Name | Type | Default | Description |
|------|------|---------|-------------|
| `enabled` | bool | `False` | Whether to enable torchair graph mode |
| `enable_multistream_moe` | bool | `False` | Whether to enable multistream shared experts |
| `enable_view_optimize` | bool | `True` | Whether to enable torchair view optimization |
| `use_cached_graph` | bool | `False` | Whether to use cached graph |
| `graph_batch_sizes` | list[int] | `[]` | The batch sizes for the torchair graph cache |
| `graph_batch_sizes_init` | bool | `False` | Initialize the graph batch sizes dynamically if `graph_batch_sizes` is empty |
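For example, a sketch of enabling torchair graph mode together with the multistream shared-expert path via `additional_config` (the model name is illustrative; substitute an MoE model with shared experts):

```python
from vllm import LLM

# Illustrative model name; any MoE model with shared experts applies.
llm = LLM(
    model="deepseek-ai/DeepSeek-V2-Lite",
    additional_config={
        "torchair_graph_config": {
            "enabled": True,                 # turn on torchair graph mode
            "enable_multistream_moe": True,  # overlap shared experts with dispatch/combine
        },
    },
)
```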

### ascend_scheduler_config

| Name | Type | Default | Description |
|------|------|---------|-------------|
| `enabled` | bool | `False` | Whether to enable ascend scheduler for V1 engine |

`ascend_scheduler_config` also supports the options from the vLLM scheduler config. For example, you can add `chunked_prefill_enabled: true` to `ascend_scheduler_config` as well, as shown in the sketch below.
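A sketch of enabling the ascend scheduler together with a scheduler option inherited from vLLM:

```python
from vllm import LLM

llm = LLM(
    model="Qwen/Qwen3-8B",
    additional_config={
        "ascend_scheduler_config": {
            "enabled": True,                  # enable ascend scheduler for the V1 engine
            "chunked_prefill_enabled": True,  # option passed through from the vLLM scheduler config
        },
    },
)
```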

## Example

A full example of additional configuration is as follows:

```json
{
    "torchair_graph_config": {
        "enabled": true,
        "use_cached_graph": true,
        "graph_batch_sizes": [1, 2, 4, 8],
        "graph_batch_sizes_init": false,
        "enable_multistream_moe": false
    },
    "ascend_scheduler_config": {
        "enabled": true,
        "chunked_prefill_enabled": true
    },
    "expert_tensor_parallel_size": 1,
    "refresh": false
}
```