From 1ad3aca6828ec3985a1de1dc3f206522fc27a518 Mon Sep 17 00:00:00 2001
From: Sergio Paniego Blanco
Date: Tue, 30 Sep 2025 12:10:55 +0200
Subject: [PATCH] Updated TRL integration docs (#25684)

Signed-off-by: sergiopaniego
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Signed-off-by: Sergio Paniego Blanco
Co-authored-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
---
 docs/training/trl.md | 80 ++++++++++++++++++++++++++++++++++++++++++++++---
 mkdocs.yaml          |  1 -
 2 files changed, 75 insertions(+), 6 deletions(-)

diff --git a/docs/training/trl.md b/docs/training/trl.md
index c7c1a5a3bb..acf48cc4ec 100644
--- a/docs/training/trl.md
+++ b/docs/training/trl.md
@@ -1,12 +1,82 @@
 # Transformers Reinforcement Learning
 
-Transformers Reinforcement Learning (TRL) is a full stack library that provides a set of tools to train transformer language models with methods like Supervised Fine-Tuning (SFT), Group Relative Policy Optimization (GRPO), Direct Preference Optimization (DPO), Reward Modeling, and more. The library is integrated with 🤗 transformers.
+[Transformers Reinforcement Learning](https://huggingface.co/docs/trl) (TRL) is a full-stack library that provides a set of tools to train transformer language models with methods like Supervised Fine-Tuning (SFT), Group Relative Policy Optimization (GRPO), Direct Preference Optimization (DPO), Reward Modeling, and more. The library is integrated with 🤗 transformers.
 
 Online methods such as GRPO or Online DPO require the model to generate completions. vLLM can be used to generate these completions!
 
-See the guide [vLLM for fast generation in online methods](https://huggingface.co/docs/trl/main/en/speeding_up_training#vllm-for-fast-generation-in-online-methods) in the TRL documentation for more information.
+See the [vLLM integration guide](https://huggingface.co/docs/trl/main/en/vllm_integration) in the TRL documentation for more information.
+
+TRL currently supports the following online trainers with vLLM:
+
+- [GRPO](https://huggingface.co/docs/trl/main/en/grpo_trainer)
+- [Online DPO](https://huggingface.co/docs/trl/main/en/online_dpo_trainer)
+- [RLOO](https://huggingface.co/docs/trl/main/en/rloo_trainer)
+- [Nash-MD](https://huggingface.co/docs/trl/main/en/nash_md_trainer)
+- [XPO](https://huggingface.co/docs/trl/main/en/xpo_trainer)
+
+To enable vLLM in TRL, set the `use_vllm` flag in the trainer configuration to `True`.
+
+## Modes of Using vLLM During Training
+
+TRL supports **two modes** for integrating vLLM during training: **server mode** and **colocate mode**. You can select between them with the `vllm_mode` parameter.
+
+### Server mode
+
+In **server mode**, vLLM runs as an independent process on dedicated GPUs and communicates with the trainer through HTTP requests. This configuration is ideal when you have separate GPUs for inference, as it isolates generation workloads from training, ensuring stable performance and easier scaling.
+
+```python
+from trl import GRPOConfig
+
+training_args = GRPOConfig(
+    ...,
+    use_vllm=True,
+    vllm_mode="server",  # default value, can be omitted
+)
+```
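+
+The server itself runs in a separate process, typically launched beforehand with the `trl vllm-serve` CLI on the GPUs reserved for inference, and it must serve the same model being trained. As a minimal end-to-end sketch (the model, dataset, toy reward function, and output directory below are illustrative placeholders, not requirements), a server-mode GRPO run might look like this:
+
+```python
+from datasets import load_dataset
+from trl import GRPOConfig, GRPOTrainer
+
+# Placeholder prompt dataset; substitute your own.
+dataset = load_dataset("trl-lib/tldr", split="train")
+
+# Toy reward function: prefer completions close to 20 characters.
+def reward_len(completions, **kwargs):
+    return [-abs(20 - len(completion)) for completion in completions]
+
+training_args = GRPOConfig(
+    output_dir="my-grpo-run",  # placeholder output directory
+    use_vllm=True,
+    vllm_mode="server",
+)
+
+trainer = GRPOTrainer(
+    model="Qwen/Qwen2.5-0.5B-Instruct",  # placeholder; must match the model served by `trl vllm-serve`
+    reward_funcs=reward_len,
+    args=training_args,
+    train_dataset=dataset,
+)
+trainer.train()
+```
+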
+### Colocate mode
+
+In **colocate mode**, vLLM runs inside the trainer process and shares GPU memory with the training model. This avoids launching a separate server and can improve GPU utilization, but it may lead to memory contention on the training GPUs.
+
+```python
+from trl import GRPOConfig
+
+training_args = GRPOConfig(
+    ...,
+    use_vllm=True,
+    vllm_mode="colocate",  # run vLLM inside the trainer process
+)
+```
+
+Some trainers also support **vLLM sleep mode**, which offloads parameters and caches to CPU RAM during training, helping reduce memory usage. Learn more in the [memory optimization docs](https://huggingface.co/docs/trl/main/en/reducing_memory_usage#vllm-sleep-mode).
 
 !!! info
-    For more information on the `use_vllm` flag you can provide to the configs of these online methods, see:
-    - [`trl.GRPOConfig.use_vllm`](https://huggingface.co/docs/trl/main/en/grpo_trainer#trl.GRPOConfig.use_vllm)
-    - [`trl.OnlineDPOConfig.use_vllm`](https://huggingface.co/docs/trl/main/en/online_dpo_trainer#trl.OnlineDPOConfig.use_vllm)
+    For detailed configuration options and flags, refer to the documentation of the specific trainer you are using.
diff --git a/mkdocs.yaml b/mkdocs.yaml
index 1535fcc622..6f2be65a18 100644
--- a/mkdocs.yaml
+++ b/mkdocs.yaml
@@ -102,7 +102,6 @@ plugins:
             - https://numpy.org/doc/stable/objects.inv
             - https://pytorch.org/docs/stable/objects.inv
             - https://psutil.readthedocs.io/en/stable/objects.inv
-            - https://huggingface.co/docs/transformers/main/en/objects.inv
 
 markdown_extensions:
   - attr_list