diff --git a/docs/source/grpo_trainer.md b/docs/source/grpo_trainer.md
index fa322672050..a3d99953706 100644
--- a/docs/source/grpo_trainer.md
+++ b/docs/source/grpo_trainer.md
@@ -563,8 +563,14 @@ accelerate launch \
 
 ### Configuration Tips
 
-> [!WARNING]
-> VLM training may fail if image tokens are truncated. We highly recommend disabling truncation by setting `max_prompt_length` to `None`.
+> [!TIP]
+> For VLMs, truncating the prompt may remove image tokens, leading to errors during training. To avoid this, set `max_prompt_length=None` in the [`GRPOConfig`]. This allows the model to process the full prompt without truncating image tokens.
+>
+> ```python
+> GRPOConfig(max_prompt_length=None, ...)
+> ```
+>
+> Only set `max_prompt_length` when you've verified that truncation won't remove image tokens for any sample in the dataset.
 
 - Use LoRA on vision-language projection layers
 - Enable 4-bit quantization to reduce memory usage
diff --git a/docs/source/rloo_trainer.md b/docs/source/rloo_trainer.md
index deac7ea7a92..36d315e678d 100644
--- a/docs/source/rloo_trainer.md
+++ b/docs/source/rloo_trainer.md
@@ -545,8 +545,14 @@ accelerate launch \
 
 ### Configuration Tips
 
-> [!WARNING]
-> VLM training may fail if image tokens are truncated. We highly recommend disabling truncation by setting `max_prompt_length` to `None`.
+> [!TIP]
+> For VLMs, truncating the prompt may remove image tokens, leading to errors during training. To avoid this, set `max_prompt_length=None` in the [`RLOOConfig`]. This allows the model to process the full prompt without truncating image tokens.
+>
+> ```python
+> RLOOConfig(max_prompt_length=None, ...)
+> ```
+>
+> Only set `max_prompt_length` when you've verified that truncation won't remove image tokens for any sample in the dataset.
 
 - Use LoRA on vision-language projection layers
 - Enable 4-bit quantization to reduce memory usage