Update max_length explanation for VLMs

sergiopaniego
2025-10-07 14:35:02 +02:00
parent 6be53e19bc
commit 75a0582b72
2 changed files with 16 additions and 4 deletions


@@ -567,8 +567,14 @@ accelerate launch \
 ### Configuration Tips
-> [!WARNING]
-> VLM training may fail if image tokens are truncated. We highly recommend disabling truncation by setting `max_prompt_length` to `None`.
+> [!TIP]
+> For VLMs, truncation may remove image tokens, leading to errors during training. To avoid this, set `max_length=None` in the [`GRPOConfig`]. This allows the model to process the full sequence length without truncating image tokens.
+>
+> ```python
+> GRPOConfig(max_length=None, ...)
+> ```
+>
+> Only use `max_length` once you've verified that truncation won't remove image tokens for any example in the dataset.
 - Use LoRA on vision-language projection layers
 - Enable 4-bit quantization to reduce memory usage
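
For reference, a minimal sketch of how the updated recommendation could look in a training script; the output directory, model checkpoint, dataset, and reward function below are illustrative placeholders, not part of this commit:

```python
from trl import GRPOConfig, GRPOTrainer

# max_length=None disables truncation, so image tokens are never dropped
# from the sequence (the behavior the updated tip recommends).
training_args = GRPOConfig(
    output_dir="grpo-vlm",  # placeholder output directory
    max_length=None,
)

# Hypothetical usage -- `dataset` and `reward_fn` must be defined by the user:
# trainer = GRPOTrainer(
#     model="Qwen/Qwen2.5-VL-3B-Instruct",  # placeholder VLM checkpoint
#     reward_funcs=reward_fn,
#     args=training_args,
#     train_dataset=dataset,
# )
# trainer.train()
```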


@@ -549,8 +549,14 @@ accelerate launch \
 ### Configuration Tips
-> [!WARNING]
-> VLM training may fail if image tokens are truncated. We highly recommend disabling truncation by setting `max_prompt_length` to `None`.
+> [!TIP]
+> For VLMs, truncation may remove image tokens, leading to errors during training. To avoid this, set `max_length=None` in the [`RLOOConfig`]. This allows the model to process the full sequence length without truncating image tokens.
+>
+> ```python
+> RLOOConfig(max_length=None, ...)
+> ```
+>
+> Only use `max_length` once you've verified that truncation won't remove image tokens for any example in the dataset.
 - Use LoRA on vision-language projection layers
 - Enable 4-bit quantization to reduce memory usage
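
The closing sentence of both tips asks you to verify that truncation is safe before setting a finite `max_length`. A rough sketch of one way to check; the checkpoint name, dataset column names, and the assumption that each prompt already contains the model's image placeholder tokens are all hypothetical, not part of this commit:

```python
from transformers import AutoProcessor

# Assumed VLM checkpoint; use the same processor as your trained model.
processor = AutoProcessor.from_pretrained("your-vlm-checkpoint")
candidate_max_length = 2048  # the finite value you are considering for max_length

def fits(example):
    # Tokenize the prompt together with its image so that image placeholder
    # tokens are included in the count. "prompt" and "image" column names,
    # and the exact preprocessing, depend on your dataset and model.
    inputs = processor(text=example["prompt"], images=example["image"], return_tensors="pt")
    return inputs["input_ids"].shape[-1] <= candidate_max_length

# Only pass max_length=candidate_max_length to GRPOConfig/RLOOConfig if every example fits:
# assert all(fits(example) for example in dataset)
```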