Doc: add an environment variable to fix unbalanced memory capacity (#1105)

If we use SGLang as the rollout engine, we should export
SGL_DISABLE_TP_MEMORY_INBALANCE_CHECK to avoid errors caused by unbalanced
memory capacity across devices; please refer to [#5426 in
sglang](https://github.com/sgl-project/sglang/pull/5426).
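
A minimal sketch of setting the variable from Python before the rollout engine is created (only the variable name comes from the PR above; the rest is illustrative, and exporting it in the launch shell works just as well):

```python
import os

# Set this before verl/SGLang initialize the rollout engine so the
# memory-imbalance check can see it.
os.environ["SGL_DISABLE_TP_MEMORY_INBALANCE_CHECK"] = "True"
```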

# Why should we export SGL_DISABLE_TP_MEMORY_INBALANCE_CHECK when using SGLang as the rollout engine in verl?
1. verl initializes an `SGLangRollout` module during rollout, which is used
to evaluate/generate samples.

2. `SGLangRollout` initializes `VerlEngine`, which in turn initializes a
`torch.distributed.DeviceMesh` used to support tensor parallelism (TP).

3. `DeviceMesh.init()` internally checks the free GPU memory of all
participating devices; if the difference is too large (more than about
10%), it reports an error directly, preventing initialization failures or
communication deadlocks. A sketch of such a check follows this list.
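
A rough sketch of what such a per-rank free-memory check can look like (this is not SGLang's actual implementation; the helper name and the 10% threshold here are illustrative):

```python
import os
import torch
import torch.distributed as dist

def check_tp_memory_balance(group=None, tolerance: float = 0.10) -> None:
    """Illustrative stand-in for a TP memory-imbalance check."""
    if os.environ.get("SGL_DISABLE_TP_MEMORY_INBALANCE_CHECK", "").lower() in ("1", "true"):
        return  # the check is explicitly disabled

    free_bytes, _total = torch.cuda.mem_get_info()
    local = torch.tensor([float(free_bytes)], device="cuda")

    # Smallest amount of free memory reported by any rank in the TP group.
    min_free = local.clone()
    dist.all_reduce(min_free, op=dist.ReduceOp.MIN, group=group)

    # If this rank has noticeably more free memory than the poorest rank,
    # the group is considered imbalanced and initialization is aborted.
    if (local.item() - min_free.item()) / local.item() > tolerance:
        raise RuntimeError(
            "TP ranks report imbalanced free GPU memory; export "
            "SGL_DISABLE_TP_MEMORY_INBALANCE_CHECK=True to skip this check."
        )
```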

# Why might GPU memory be inconsistent?
## Ray distributed actors load the model at different times
verl uses Ray-based multi-process, multi-GPU concurrent training, and each
`WorkerDict` may execute
`self.rollout = SGLangRollout(...)`
at a different time. Different workers therefore initialize the model at
different times → different memory usage, as the toy sketch below shows.
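
A toy Ray sketch of this timing effect (the `ToyWorker` actor and tensor sizes are hypothetical stand-ins, not verl's `WorkerDict`): one actor loads its model immediately, and probing both GPUs before the second actor loads shows two very different free-memory readings.

```python
import ray
import torch

@ray.remote(num_gpus=1)
class ToyWorker:
    """Hypothetical stand-in for a worker that owns a rollout model."""

    def load_model(self) -> None:
        # ~2 GB of fp16 "weights" standing in for a real model.
        self._weights = torch.empty(10**9, dtype=torch.float16, device="cuda")

    def free_gpu_bytes(self) -> int:
        return torch.cuda.mem_get_info()[0]

ray.init()
early, late = ToyWorker.remote(), ToyWorker.remote()
ray.get(early.load_model.remote())  # one worker loads its model right away

# Probe both GPUs before the second worker has loaded anything: the gap
# between the two numbers is roughly the model size, exactly the kind of
# imbalance the TP memory check complains about.
print(ray.get([early.free_gpu_bytes.remote(), late.free_gpu_bytes.remote()]))

ray.get(late.load_model.remote())   # the "late" worker only loads now
```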

## Delayed initialization causes memory bias
Some workers enter model loading/inference, such as `generate_sequences()`
or `compute_log_prob()`, earlier than others.
GPU memory on the early workers has already been consumed by the model,
while memory on the late workers is still free → the GPU memory gap becomes
large.

## verl + SGLang's TP initialization uses an all-device broadcast with no uniform release timing
`SGLangRollout` only needs to involve the GPUs used for rollout, but its
`VerlEngine` initialization calls `torch.distributed.init_process_group()`
and broadcasts a set of weights. As a result:

- Non-rollout GPUs also participate in the communication;
- when `DeviceMesh` is then initialized, the "inconsistent memory" error is
reported.

A minimal sketch of this broadcast pattern follows.
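
The sketch below is illustrative only, not `VerlEngine`'s actual code: the collective runs on the default global process group, so every rank allocates the receive buffer and joins the communicator whether or not it takes part in rollout.

```python
# Run with e.g.: torchrun --nproc_per_node=4 broadcast_sketch.py
import os
import torch
import torch.distributed as dist

dist.init_process_group(backend="nccl")
torch.cuda.set_device(int(os.environ.get("LOCAL_RANK", "0")))
rank = dist.get_rank()

# A ~2 GB fp16 "weight" tensor; only rank 0 holds real values.
weights = torch.empty(10**9, dtype=torch.float16, device="cuda")
if rank == 0:
    weights.normal_()

# Broadcasting on the default (global) group pulls every rank into the NCCL
# communicator and allocates the buffer on every GPU, shifting each rank's
# free memory before any DeviceMesh is created.
dist.broadcast(weights, src=0)
print(f"rank {rank}: free GPU bytes = {torch.cuda.mem_get_info()[0]}")

dist.destroy_process_group()
```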

## Different FSDP/TP model-loading modes also cause deviations
If the following parameters are set:
```
actor.fsdp_config.param_offload=True
ref.fsdp_config.param_offload=True
```

some workers keep their parameters on the CPU, while others have already
sharded theirs to the GPU. This also creates an asymmetric distribution of
GPU memory, as the sketch below illustrates.
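
A small sketch of how offloading changes where parameters live (a toy `torch.nn.Linear` standing in for the actor/ref models; not verl's wrapping code):

```python
# Run with e.g.: torchrun --nproc_per_node=2 fsdp_offload_sketch.py
import os
import torch
import torch.distributed as dist
from torch.distributed.fsdp import CPUOffload, FullyShardedDataParallel as FSDP

dist.init_process_group(backend="nccl")
torch.cuda.set_device(int(os.environ.get("LOCAL_RANK", "0")))

def wrap(offload: bool) -> FSDP:
    model = torch.nn.Linear(8192, 8192)  # toy stand-in for an actor/ref model
    return FSDP(
        model,
        cpu_offload=CPUOffload(offload_params=offload),
        device_id=torch.cuda.current_device(),
    )

offloaded = wrap(offload=True)   # param_offload=True: shards stay on the CPU
resident = wrap(offload=False)   # no offload: shards live on the GPU right away

# Offloaded shards sit on the CPU while resident shards occupy GPU memory,
# so free-memory readings across ranks drift apart.
print("offloaded shard device:", next(offloaded.parameters()).device)
print("resident shard device: ", next(resident.parameters()).device)
print("free GPU bytes:", torch.cuda.mem_get_info()[0])

dist.destroy_process_group()
```

Ranks whose models are offloaded report much more free GPU memory than ranks holding resident shards, which is what the imbalance check ends up comparing.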

---------

Co-authored-by: ocss884 <ocss.lin@gmail.com>

@@ -37,6 +37,7 @@ We use Qwen/Qwen2-7B-Instruct on the gsm8k dataset for a simple test.
.. code-block:: bash
export SGL_DISABLE_TP_MEMORY_INBALANCE_CHECK=True
PYTHONUNBUFFERED=1 python3 -m verl.trainer.main_ppo \
data.train_files=$HOME/data/gsm8k/train.parquet \
data.val_files=$HOME/data/gsm8k/test.parquet \
@@ -70,6 +71,51 @@ We use Qwen/Qwen2-7B-Instruct on the gsm8k dataset for a simple test.
trainer.test_freq=10 \
trainer.total_epochs=15 2>&1 | tee verl_demo.log
Why export SGL_DISABLE_TP_MEMORY_INBALANCE_CHECK?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1. ``verl`` initializes a ``SGLangRollout`` module during rollout, which is used to evaluate/generate samples.
2. ``SGLangRollout`` will initialize ``VerlEngine``, and further initialize a ``torch.distributed.DeviceMesh``, used to support Tensor Parallel (TP).
3. ``DeviceMesh.init()`` internally checks the free GPU memory of all participating devices. If the difference is too large (more than ~10%), it directly reports an error to avoid initialization failures or deadlocks.
Why might there be inconsistent GPU memory?
"""""""""""""""""""""""""""""""""""""""""""
**1. Ray Distributed Actor loads the model at different times**
``verl`` uses Ray-based multi-process, multi-GPU concurrent training. Each ``WorkerDict`` may be called at different times:
.. code-block:: python
self.rollout = SGLangRollout(...)
Different workers initialize the model at different times → different memory usage.
**2. Delayed initialization causes memory bias**
Some workers start model loading/inference (e.g., ``generate_sequences()``, ``compute_log_prob()``) earlier than others.
Early workers have already used up GPU memory → late workers still have free memory → a memory difference appears.
**3. SGLang's TP init uses "all-device broadcast", but there's no uniform release timing**
Although ``SGLangRollout`` may only involve a subset of GPUs, its ``VerlEngine`` initialization calls ``torch.distributed.init_process_group()`` and broadcasts weights, so:
- Non-rollout GPUs also join the communication.
- Later on, ``DeviceMesh`` init will fail due to "inconsistent memory".
**4. Different FSDP/TP loading behaviors also lead to mismatch**
If using:
.. code-block:: bash
actor.fsdp_config.param_offload=True
ref.fsdp_config.param_offload=True
Then some workers keep params on the CPU while others have already sharded theirs to the GPU → this leads to an asymmetric memory layout.
Using SGLang as the Inference Backend for PPO Training Across Multiple Machines
------------------------------------------------------------------------------
SGLang also supports running verl's Ray-based cross-machine inference in IPv4 and IPv6 scenarios. In the script below, we use TP=16 for cross-machine inference. Suppose we have two interconnected machines: node0 with IP 10.94.16.4 and node1 with IP 10.94.16.5.