[docs] refactor: use verl consistently in the codebase (#1390)

### Checklist Before Starting

- [x] Search for similar PR(s).

### What does this PR do?

Use verl (instead of veRL) consistently throughout the codebase, and add
a CI check to enforce this.

### Specific Changes

Mostly documentation changes, plus a new sanity CI step that fails if `veRL` appears in the codebase.


### Test

Added a naming-convention assertion to the sanity CI workflow.

### Additional Info.

- **Issue Number**: Fixes issue # or discussion # if any.
- **Training**: none (documentation and CI changes only)
- **Inference**: none (documentation and CI changes only)

### Checklist Before Submitting

- [x] Read the [Contribute
Guide](https://github.com/volcengine/verl?tab=readme-ov-file#contribution-guide).
- [x] Apply [pre-commit
checks](https://github.com/volcengine/verl?tab=readme-ov-file#code-linting-and-formatting).
- [x] Add `[BREAKING]` to the PR title if it breaks any API.
- [x] Update the documentation about your changes in the
[docs](https://github.com/volcengine/verl/tree/main/docs).
- [x] Add CI test(s) if necessary.

cc @ShaohonChen
Author: H
Date: 2025-05-09 17:54:57 -07:00
Committed by: GitHub
Parent: c06b9624b3
Commit: 2d81677ac8

4 changed files with 12 additions and 4 deletions


@@ -46,3 +46,9 @@ jobs:
       - name: Run license test
         run: |
           python3 tests/sanity/check_license.py --directory .
+      - name: Assert naming convention
+        run: |
+          if grep -rIn --exclude-dir=.git --exclude-dir=.github --exclude-dir=venv --exclude-dir=__pycache__ 'veRL' .; then
+            echo "Please use verl instead of veRL in the codebase"
+            exit 1
+          fi
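
For local use, the same check can also be expressed as a standalone sanity script. The sketch below is hypothetical: this PR adds only the grep step above, and the `tests/sanity/check_naming.py` filename and `--directory` flag merely mirror the existing license check.

```python
# Hypothetical tests/sanity/check_naming.py -- a standalone sketch of the
# CI step above; the filename and --directory flag follow check_license.py
# but are assumptions, not part of this PR.
import argparse
import pathlib
import sys

SKIP_DIRS = {".git", ".github", "venv", "__pycache__"}


def main() -> int:
    parser = argparse.ArgumentParser()
    parser.add_argument("--directory", default=".")
    args = parser.parse_args()

    bad = []
    for path in pathlib.Path(args.directory).rglob("*"):
        if not path.is_file() or SKIP_DIRS.intersection(path.parts):
            continue
        try:
            text = path.read_text(encoding="utf-8")
        except (UnicodeDecodeError, OSError):
            continue  # skip binary/unreadable files, like grep -I
        for lineno, line in enumerate(text.splitlines(), start=1):
            if "veRL" in line:
                bad.append(f"{path}:{lineno}: {line.strip()}")
    if bad:
        print("\n".join(bad))
        print("Please use verl instead of veRL in the codebase")
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```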


@@ -23,7 +23,7 @@ RUN pip3 install --no-cache-dir \
 RUN pip3 install --no-cache-dir flash-attn==2.7.0.post2 --no-build-isolation
-# vllm depends on ray, and verl does not support ray > 2.37
+# vllm depends on ray
 RUN pip3 install --no-cache-dir vllm==0.6.3 ray==2.10
 # install apex


@@ -1,8 +1,10 @@
 # Upgrading to vllm >= 0.7
+Note: verl+vllm 0.8.3 is now stable. Please see ``docs/README_vllm0.8.md`` for the upgrade guide.
 ## Installation
-Note: This version of verl+vllm 0.7+ supports **FSDP** for training and **vLLM** for rollout.
+Note: At the time of writing, verl+vllm 0.7.x supports **FSDP** for training and **vLLM** for rollout.
 ```
 # Create the conda environment
@@ -68,4 +70,4 @@ VLLM_USE_PRECOMPILED=1 pip install --editable .
 ```
 Then you can enable the V1 engine by setting `export VLLM_USE_V1=1`. In some benchmark tests, the V1 engine demonstrates a 1.5x speed improvement over the vLLM V0 engine.
-The stable support of the vLLM V1 engine will come soon.
+The stable support of the vLLM V1 engine is available on verl main.
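
Since the V1 toggle mentioned above is an environment variable, it can also be set from Python before vLLM is initialized. A minimal sketch (the model name is illustrative only, not part of this PR):

```python
import os

# Opt in to the vLLM V1 engine before vLLM reads its configuration;
# equivalent to `export VLLM_USE_V1=1` in the shell.
os.environ["VLLM_USE_V1"] = "1"

from vllm import LLM  # noqa: E402  (import after the env var is set)

llm = LLM(model="facebook/opt-125m")  # illustrative model only
print(llm.generate(["Hello"])[0].outputs[0].text)
```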


@@ -263,7 +263,7 @@ class ActorRolloutRefWorker(MegatronWorker):
         elif self.config.rollout.name == 'sglang':
             from verl.workers.rollout.sglang_rollout import SGLangRollout
             # NOTE(linjunrong): Due to recent fp8 support in SGLang, importing any symbol related to SGLang's model_runner now checks CUDA device capability.
-            # However, due to veRL's setting, the main process of ray can not find any CUDA device, which would potentially lead to:
+            # However, due to verl's setting, the main process of ray can not find any CUDA device, which would potentially lead to:
             # "RuntimeError: No CUDA GPUs are available".
             # For this reason, sharding_manager.__init__ should not import FSDPSGLangShardingManager; we import it here using the absolute path.
             # check: https://github.com/sgl-project/sglang/blob/00f42707eaddfc2c0528e5b1e0094025c640b7a0/python/sglang/srt/layers/quantization/fp8_utils.py#L76
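
The comment in this hunk describes a deferred-import workaround. As a generic illustration of that pattern (the helper name below is hypothetical, not verl's actual API; only the `SGLangRollout` import path comes from the diff):

```python
def get_rollout_cls(name: str):
    """Resolve the rollout class lazily (hypothetical helper, for illustration)."""
    if name == "sglang":
        # Deferred import: symbols that probe CUDA device capability are only
        # pulled in inside the worker code path, so a GPU-less ray driver
        # process never hits "RuntimeError: No CUDA GPUs are available".
        from verl.workers.rollout.sglang_rollout import SGLangRollout
        return SGLangRollout
    raise NotImplementedError(f"unknown rollout backend: {name}")
```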