Mirror of https://github.com/vllm-project/vllm-ascend.git (synced 2025-10-20 13:43:53 +08:00)
1. Disable the test_eagle_correctness test; we'll re-enable it once the OOM error is fixed.
2. Drop the transformers version limit for main, since vLLM relies on transformers>=4.55.0, see: 65552b476b
3. Fix the kv_connector_output bug, see: 796bae07c5

- vLLM version: v0.10.0
- vLLM main: d1af8b7be9
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
28 lines | 410 B | Plaintext
# Should be mirrored in pyproject.toml
cmake>=3.26
decorator
einops
numpy<2.0.0
packaging
pip
pybind11
pyyaml
scipy
setuptools>=64
setuptools-scm>=8
torch>=2.7.1
torchvision
wheel

# requirements for disaggregated prefill
msgpack
quart

# Required for N-gram speculative decoding
numba

# Install torch_npu
--pre
--extra-index-url https://mirrors.huaweicloud.com/ascend/repos/pypi
torch-npu==2.7.1.dev20250724
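Usage note (a sketch of standard pip behavior, not something stated in this file): pip applies in-file global options such as --pre and --extra-index-url to the whole install, so the torch-npu pre-release build above is resolved from the Huawei Cloud Ascend mirror when the file is installed in the usual way:

    pip install -r requirements.txt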