Shawn/Yuxuan Tong b00f77d855 [dev] feat: migrate from yapf & pylint to ruff based on pre-commit (#1010)
> [!WARNING]
> We are [migrating to `ruff` as the linter and formatter and `pre-commit` as the managing tool](https://github.com/volcengine/verl/pull/1010).
>
> If your branch is based on a previous commit using `yapf` and `pylint`, simply merging might trigger overwhelming linting errors, while **you are only expected to resolve the ones in the files related to your PR**.
>
> To resolve this issue, please try the following workaround so that the PR only includes the files you **really changed** (a consolidated command sketch follows the list):
>
> 1. In your branch, fix linting and formatting with `ruff`: `ruff check --fix && ruff format`
> 2. Squash into a single commit in a new branch: `git reset --soft $(git merge-base main HEAD) && git add -A && git commit -m "feat: ..."`
> 3. Merge with the latest main: `git merge origin/main`
> 4. Force push to your branch: `git push --force`
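>
> For convenience, the same workaround as one copy-paste sequence (a sketch, assuming your PR branch is checked out and `origin/main` tracks the upstream main branch):
>
> ```bash
> # 1. fix lint errors and reformat with ruff
> ruff check --fix && ruff format
> # 2. squash everything since the merge base into a single commit
> git reset --soft "$(git merge-base main HEAD)"
> git add -A && git commit -m "feat: ..."
> # 3. bring in the latest main
> git merge origin/main
> # 4. update the PR branch
> git push --force
> ```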

We added the reminder above to the documentation to tell contributors how to avoid overwhelming linting errors.

### Motivation

Following the discussion in #896, this PR migrates from yapf & pylint to ruff, managed by pre-commit, which allows unified version control of the tooling and automatic hooks on commit.

### Summary

The `pre-commit` hook and CI

- check the staged / committed files in commits / PRs
- check all files once a month (this check is expected to fail until all files conform to the ruff standard)
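Concretely, the two modes roughly correspond to the following manual `pre-commit` invocations (a sketch; the exact flags used in CI may differ):

```bash
# check only the files changed between main and the current branch (what PR checks approximate)
pre-commit run --from-ref origin/main --to-ref HEAD

# check every file in the repository (what the monthly run does)
pre-commit run --all-files
```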

### Explanation for the Failing CI Workflow `pre-commit`

For now, we only apply `ruff format` and `ruff check --fix` **without resolving all the errors**, since there are too many to resolve at once; this is why the CI workflow `pre-commit` currently fails.

We leave the remaining errors to future commits. Specifically, the `pre-commit` hook and CI will require every commit to fix its related files with `ruff`, so all files will be fixed incrementally.
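Until then, one possible way to fix only the Python files a branch touches (an illustrative snippet, not part of the hook itself):

```bash
# list Python files changed relative to main (excluding deletions) and run ruff on them;
# xargs -r (GNU) skips the command when the list is empty
git diff --name-only --diff-filter=d origin/main...HEAD -- '*.py' | xargs -r ruff check --fix
git diff --name-only --diff-filter=d origin/main...HEAD -- '*.py' | xargs -r ruff format
```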

### Reviewing Suggestion

The commit
3d93f51ba8
is huge since we apply `ruff` to all the files. To review the main
changes, please check the commits before and after it.
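For example, you could inspect the history and step around the bulk-reformat commit (illustrative commands; the placeholder hashes are whatever `git log` shows next to it):

```bash
git log --oneline            # locate 3d93f51ba8 and its neighbors
git show --stat 3d93f51ba8   # the bulk ruff commit: file list only, to gauge its size
git show <commit-before-it>  # review the substantive commits individually
git show <commit-after-it>
```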

verl: Volcano Engine Reinforcement Learning for LLMs


verl is a flexible, efficient and production-ready RL training library for large language models (LLMs).

verl is the open-source version of the HybridFlow: A Flexible and Efficient RLHF Framework paper.

verl is flexible and easy to use with:

  • Easy extension of diverse RL algorithms: The hybrid-controller programming model enables flexible representation and efficient execution of complex Post-Training dataflows. Build RL dataflows such as GRPO, PPO in a few lines of code.

  • Seamless integration of existing LLM infra with modular APIs: Decouples computation and data dependencies, enabling seamless integration with existing LLM frameworks, such as FSDP, Megatron-LM, vLLM, SGLang, etc.

  • Flexible device mapping: Supports various placement of models onto different sets of GPUs for efficient resource utilization and scalability across different cluster sizes.

  • Ready integration with popular HuggingFace models

verl is fast with:

  • State-of-the-art throughput: SOTA LLM training and inference engine integrations and SOTA RL throughput.

  • Efficient actor model resharding with 3D-HybridEngine: Eliminates memory redundancy and significantly reduces communication overhead during transitions between training and generation phases.

News

  • [2025/05] verl will be presented at GOSIM x PyTorch Day 2025. See you in Paris!
  • [2025/04] We will give a tutorial on the latest post-training techniques and a programming guide for verl at ICLR 2025. See you in Singapore!
  • [2025/03] verl v0.3.0.post1 is released! See release note for details.
  • [2025/03] DAPO is the open-sourced SOTA RL algorithm that achieves 50 points on AIME 2024 based on the Qwen2.5-32B pre-trained model, surpassing the previous SOTA achieved by DeepSeek's GRPO (DeepSeek-R1-Zero-Qwen-32B). DAPO's training is fully powered by verl and the reproduction code is publicly available now.
  • [2025/03] We introduced the programming model of verl at the vLLM Beijing Meetup and verl intro and updates at the SGLang-LMSYS Org Meetup in Sunnyvale mid-March.
more...

Key Features

  • FSDP and Megatron-LM for training.
  • vLLM, SGLang and HF Transformers for rollout generation.
  • Compatible with Hugging Face Transformers and Modelscope Hub: Qwen-2.5, Llama3.1, Gemma2, DeepSeek-LLM, etc.
  • Supervised fine-tuning.
  • Reinforcement learning with PPO, GRPO, ReMax, REINFORCE++, RLOO, PRIME, etc.
    • Support model-based reward and function-based reward (verifiable reward)
    • Support vision-language models (VLMs) and multi-modal RL
  • Flash attention 2, sequence packing, sequence parallelism support via DeepSpeed Ulysses, LoRA, Liger-kernel.
  • Scales up to 70B models and hundreds of GPUs.
  • Experiment tracking with wandb, swanlab, mlflow and tensorboard.

Upcoming Features

Getting Started

Documentation

Quickstart:

Running a PPO example step-by-step:

Reproducible algorithm baselines:

For code explanation and advanced usage (extension):

Blogs from the community

Performance Tuning Guide

Performance is essential for on-policy RL algorithms. We have written a detailed performance tuning guide to help you optimize performance.

Upgrade to vLLM v0.8.2

verl now supports vLLM>=0.8.2 when using FSDP as the training backend. Please refer to this document for the installation guide and more information. Please avoid vLLM 0.7.x, which contains bugs that may lead to OOMs and unexpected errors.
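For example, to pull in a compatible engine version (a minimal sketch; the exact wheel and CUDA build depend on your environment, see the linked document):

pip install --upgrade "vllm>=0.8.2"   # avoids the buggy 0.7.x series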

Use Latest SGLang

SGLang is fully supported with verl, and SGLang RL Group is working extensively on building unique features, including multi-turn agentic RL, VLM RLHF, server-based RL, and partial rollout. Please refer to this document for the installation guide and more information.

Citation and acknowledgement

If you find the project helpful, please cite:

@article{sheng2024hybridflow,
  title   = {HybridFlow: A Flexible and Efficient RLHF Framework},
  author  = {Guangming Sheng and Chi Zhang and Zilingfeng Ye and Xibin Wu and Wang Zhang and Ru Zhang and Yanghua Peng and Haibin Lin and Chuan Wu},
  year    = {2024},
  journal = {arXiv preprint arXiv: 2409.19256}
}

verl is inspired by the design of Nemo-Aligner, Deepspeed-chat and OpenRLHF. The project is adopted and contributed to by Bytedance, Anyscale, LMSys.org, Alibaba Qwen team, Shanghai AI Lab, Tsinghua University, UC Berkeley, UCLA, UIUC, University of Hong Kong, ke.com, All Hands AI, ModelBest, OpenPipe, JD AI Lab, Microsoft Research, StepFun, Amazon, LinkedIn, Meituan, Camel-AI, OpenManus, Prime Intellect, NVIDIA research, Baichuan, and many more.

Awesome work using verl

  • TinyZero: a reproduction of DeepSeek R1 Zero recipe for reasoning tasks
  • DAPO: the fully open source SOTA RL algorithm that beats DeepSeek-R1-zero-32B
  • SkyThought: RL training for Sky-T1-7B by NovaSky AI team.
  • simpleRL-reason: SimpleRL-Zoo: Investigating and Taming Zero Reinforcement Learning for Open Base Models in the Wild
  • Easy-R1: Multi-modal RL training framework
  • OpenManus-RL: LLM Agents RL tuning framework for multiple agent environments.
  • deepscaler: iterative context scaling with GRPO
  • rllm: async RL training with verl-pipeline
  • PRIME: Process reinforcement through implicit rewards
  • RAGEN: a general-purpose reasoning agent training framework
  • Logic-RL: a reproduction of DeepSeek R1 Zero on 2K Tiny Logic Puzzle Dataset.
  • Search-R1: RL with reasoning and searching (tool-call) interleaved LLMs
  • ReSearch: Learning to Reason with Search for LLMs via Reinforcement Learning
  • DeepRetrieval: Hacking Real Search Engines and retrievers with LLMs via RL for information retrieval
  • Code-R1: Reproducing R1 for Code with Reliable Rewards
  • Skywork-OR1: Skywork open reasoner series
  • ToRL: Scaling tool-integrated RL
  • cognitive-behaviors: Cognitive Behaviors that Enable Self-Improving Reasoners, or, Four Habits of Highly Effective STaRs
  • PURE: Credit assignment is the key to successful reinforcement fine-tuning using a process reward model
  • MetaSpatial: Reinforcing 3D Spatial Reasoning in VLMs for the Metaverse
  • DeepEnlighten: Reproduce R1 with social reasoning tasks and analyze key findings
  • DeepResearcher: Scaling deep research via reinforcement learning in real-world environments
  • self-rewarding-reasoning-LLM: self-rewarding and correction with generative reward models
  • critic-rl: LLM critics for code generation
  • VAGEN: Training VLM agents with multi-turn reinforcement learning
  • AdaRFT: Efficient Reinforcement Finetuning via Adaptive Curriculum Learning
  • Trust Region Preference Approximation: A simple and stable reinforcement learning algorithm for LLM reasoning.
  • DQO: Enhancing multi-step reasoning abilities of language models through direct Q-function optimization
  • FIRE: Flaming-hot initiation with regular execution sampling for large language models
  • Rec-R1: Bridging Generative Large Language Models and Recommendation Systems via Reinforcement Learning
  • all-hands/openhands-lm-32b-v0.1: A strong, open coding agent model, trained with multi-turn fine-tuning

Contribution Guide

Contributions from the community are welcome! Please check out our project roadmap and good first issues to see where you can contribute.

Code Linting and Formatting

Warning

We are migrating to ruff as the linter and formatter and pre-commit as the managing tool.

If your branch is based on a previous commit using yapf and pylint, simply merging might trigger overwhelming linting errors, while you are only expected to resolve the ones in the files related to your PR.

To resolve this issue, please try the following workaround so that the PR only includes the files you really changed:

  1. In your branch, fix linting and formatting with ruff: ruff check --fix && ruff format
  2. Squash into a new single commit: git reset --soft $(git merge-base main HEAD) && git add -A && git commit -m "feat: ..."
  3. Merge with the latest main: git merge origin/main
  4. Force push to your branch: git push --force
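After the workaround, you can sanity-check that the PR now only touches your own files, for example:

git diff --stat origin/main...HEAD   # should list only the files you really changed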

We use pre-commit to help improve code quality. To initialize pre-commit, run:

pip install pre-commit
pre-commit install

You can also run pre-commit manually with:

pre-commit run
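By default this checks only the staged files. You can also check specific files, or run a single hook (the hook ids are assumptions here; check .pre-commit-config.yaml for the actual ones):

pre-commit run --files path/to/your_file.py
pre-commit run ruff --all-files   # run only the hook with id "ruff" on every file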

Adding CI tests

If possible, please add CI test(s) for your new feature:

  1. Find the most relevant workflow yml file, which usually corresponds to a hydra default config (e.g. ppo_trainer, ppo_megatron_trainer, sft_trainer, etc).
  2. Add related path patterns to the paths section if not already included.
  3. Minimize the workload of the test script(s) (see existing scripts for examples).
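For step 1, one quick way to locate the relevant workflow is to grep for the config name under the workflows directory (assuming the standard GitHub Actions layout):

grep -rl ppo_trainer .github/workflows/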

We are HIRING! Send us an email if you are interested in internship/FTE opportunities in MLSys/LLM reasoning/multimodal alignment.
