Update all model references in use_model.md to use Qwen/Qwen3-0.6B
as specifically requested by qgallouedec.
Changes:
- Replace Qwen/Qwen2.5-0.5B with Qwen/Qwen3-0.6B in all 3 locations
- Simpler model reference consistent with reviewer's suggestion
Address reviewer feedback by replacing trl-lib/Qwen2-0.5B-XPO with the
official Qwen/Qwen2.5-0.5B model in all use_model.md examples.
Changes:
- Replace model references in 3 locations to use Qwen organization model
- More consistent with rest of TRL documentation
- Less misleading than custom trl-lib namespace model
Resolves #4385
- Replace edbeeching/gpt-neo-125M-imdb with trl-lib/Qwen2-0.5B-XPO in peft_integration.md
- Replace kashif/stack-llama-2 with trl-lib/Qwen2-0.5B-XPO in use_model.md (3 occurrences)
- All personal developer namespace models now use common trl-lib namespace
* Compatibility with padding-free and iterable dataset
* Fix collator test
* add a test for streaming
* some cleaning
* improve and fix tests
* tiny revert
* bump datasets to 3.0.0
* Added else clause to avoid NameError on optimizer_offload
* Accounted for deepspeed's renaming in 0.16.4
* Switched to packaging.version.parse over the (broken) tuple split
* Moved from NotImplementedError to RuntimeError in else clause
* Add support for additional generation kwargs in GRPO Trainer
- Extend GRPOConfig to support additional generation kwargs
- Update GRPOTrainer to incorporate additional generation parameters
- Add tests for training with additional generation kwargs for both standard and vLLM modes
* Add missing vllm_gpu_memory_utilization=0.5
* 🔧 Refactor GRPO generation parameters and configuration
- Restructure GRPOConfig to separate generation parameters
- Add support for top_p, top_k, min_p, repetition_penalty, and length_penalty
- Remove additional_generation_kwargs in favor of explicit parameters
- Update GRPOTrainer to use new generation parameter configuration
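The explicit generation parameters above map to dedicated fields on `GRPOConfig`; a hedged sketch with illustrative values (`output_dir` is hypothetical):
```python
# A minimal, hedged sketch of the explicit generation parameters
# (values and output_dir are illustrative, not recommendations).
from trl import GRPOConfig

training_args = GRPOConfig(
    output_dir="grpo-sampling-demo",  # hypothetical output path
    temperature=0.7,
    top_p=0.9,
    top_k=50,
    min_p=0.05,
    repetition_penalty=1.1,
)
```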
* Update tests
* Remove length_penalty and fix tests
* Update defaults and docs
- Change temperature type from Optional[float] to float
- Set default top_p to 1.0 instead of None
- Simplify parameter descriptions by removing redundant "if set to None" text
- Maintain consistent type hints and default values for generation parameters
* GRPO remove optional type hint for temperature parameter
* Remove length_penalty from sampling_kwargs dict in GRPOTrainer
* some refactoring
* top k None support
* change values in test to make them work
---------
Co-authored-by: Robert Veres <robert.veres@languagetool.org>
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
Co-authored-by: Quentin Gallouédec <gallouedec.quentin@gmail.com>
* Deprecate liger
* remove import
* oops, shouldn't be here
* Fix other deprecations
* remove liger from gkd for now
* remove liger for teacher
---------
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
* Fix logits computation in DPO trainer prediction step
* fix compute_metrics for bco and test
* same for cpo
* same from dpo
* for kto
* anf finally orpo
* Apply style fixes
---------
Co-authored-by: kyungdae-jo <kyungdae.jo@navercorp.com>
Co-authored-by: Quentin Gallouédec <gallouedec.quentin@gmail.com>
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
* ✨ Enhance GRPO logging with configurable completions sampling
- Update `GRPOConfig` to replace `log_completions` with `log_completions_steps`
- Add `print_prompt_completions_sample()` utility function for rich console logging
- Modify `GRPOTrainer` to additionally print 5 random prompt-completion pairs every log_completions_steps steps
* GRPO trainer completions logging, move wandb checks together
* Add rich availability check and use fallback in print_prompt_completions_sample when rich is not available
* Update docstrings on print_prompt_completions_sample
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
* Revert back to simple log_completions bool
* GRPO log completions fully
* Remove print fallback from print_prompt_completions_sample
* Move accelerator main process check up for grpo log completions
* Explicit variable names in print_prompt_completions_sample
* Make GRPOConfig docstring match field description
* Update log_completions docs again
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
* Update GRPOConfig docs to match field
* improve readability when prompt or completions are multiline
* log reward
* prevent hanging, don't print without rich, print reward
* style
---------
Co-authored-by: Robert Veres <robert.veres@languagetool.org>
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
Co-authored-by: Quentin Gallouédec <gallouedec.quentin@gmail.com>
* Add num_updates and epsilon parameters to GRPOConfig and GRPOTrainer
* test sampler
* update the loss computation
* fix eval sampler
* should work now
* buffer inputs with grad accum
* optimize when num_iterations == 1
* test
* minor comment removal and fix log metric
* beta position
* clarify comment [ci skip]
* clarify sampler doc [ci skip]
* fix collision with eval logging
* clarify
* 🔧 Optimize GRPO training by conditionally loading reference model based on beta value
* ✅ Add test for GRPOTrainer with beta=0 to ensure no reference model and KL divergence
* 🔧 Refactor GRPOTrainer code for improved readability and maintainability
* 🔧 Simplify per_token_loss calculation in GRPOTrainer for clarity
* fix test, style, and some struct for clarity
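For context on the `epsilon` / `beta` changes above, a hedged sketch of a clipped per-token policy loss with an optional KL term; tensor names are illustrative and this is not the exact TRL code.
```python
# A hedged sketch of a clipped per-token policy loss with an optional KL term,
# in the spirit of the epsilon/beta changes above (not the exact TRL code).
import torch

def per_token_loss(per_token_logps, old_per_token_logps, ref_per_token_logps,
                   advantages, epsilon=0.2, beta=0.0):
    ratio = torch.exp(per_token_logps - old_per_token_logps)
    clipped_ratio = torch.clamp(ratio, 1 - epsilon, 1 + epsilon)
    adv = advantages.unsqueeze(1)  # broadcast sequence-level advantages over tokens
    loss = -torch.min(ratio * adv, clipped_ratio * adv)
    if beta > 0.0:  # with beta=0 no reference model is loaded and no KL penalty is added
        per_token_kl = (
            torch.exp(ref_per_token_logps - per_token_logps)
            - (ref_per_token_logps - per_token_logps)
            - 1
        )
        loss = loss + beta * per_token_kl
    return loss
```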
---------
Co-authored-by: Quentin Gallouédec <quentin.gallouedec@huggingface.co>
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
* return dataset if it's preprocessed
* add is_processed flag variable
* add test
* move test_sft_trainer_directly_with_pretokenized_data to Tester2
* Update sft_trainer.py
* no need for padding and truncation
* minor reorganization
* Update trl/trainer/sft_trainer.py
* let the collator pad
* style
* fix tests
---------
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
Co-authored-by: Quentin Gallouédec <quentin.gallouedec@huggingface.co>
* fix check for AutoLigerKernelForCausalLM
* fix case where AutoLigerKernelForCausalLM is not defined
* update min liger version
* formatting
* fix win CI
- Fixed a bug where an extra `len` call inside the error message caused a `TypeError` instead of the expected `ValueError`.
- Replaced `len(len(args.reward_weights))` with the correct `len(args.reward_weights)` to properly calculate the number of reward weights.
- Ensured that a `ValueError` is now raised with an accurate and clear message when the number of reward weights does not match the number of reward functions.
This fix prevents confusion during debugging and ensures proper error handling during validation.
Tested with cases where:
- `args.reward_weights` is None (default case).
- `args.reward_weights` has mismatched lengths with `reward_funcs`.
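A hedged sketch of the corrected validation described above, following the `args.reward_weights` / `reward_funcs` names from the commit message:
```python
# A hedged sketch of the corrected validation; not the exact trainer code.
import torch

def resolve_reward_weights(args, reward_funcs):
    if args.reward_weights is not None:
        if len(args.reward_weights) != len(reward_funcs):
            # The buggy version called len(len(args.reward_weights)), raising a
            # TypeError before this ValueError could ever be reached.
            raise ValueError(
                f"Number of reward weights ({len(args.reward_weights)}) must match "
                f"the number of reward functions ({len(reward_funcs)})"
            )
        return torch.tensor(args.reward_weights, dtype=torch.float32)
    return torch.ones(len(reward_funcs), dtype=torch.float32)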
* added reward weights for multi-reward runs in GRPO
* reward_weights are float, moved from GRPOTrainer to GRPOConfig
* minor comment fix
* minor
* fix test
* missing link
---------
Co-authored-by: Quentin Gallouédec <quentin.gallouedec@huggingface.co>
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
* add token accuracy metric
* fix return type
* shift tokens
* use compute_loss so that the model is called only once
* add to logs
* log from main process
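A minimal sketch of the shifted token-accuracy metric described above, assuming causal-LM logits/labels with `-100` marking ignored positions:
```python
# A hedged sketch of a shifted token-accuracy metric (illustrative, not the exact TRL code).
import torch

def token_accuracy(logits: torch.Tensor, labels: torch.Tensor, ignore_index: int = -100) -> torch.Tensor:
    shift_logits = logits[..., :-1, :]   # predictions for positions 1..T-1
    shift_labels = labels[..., 1:]       # targets shifted by one
    predictions = shift_logits.argmax(dim=-1)
    mask = shift_labels != ignore_index
    correct = (predictions == shift_labels) & mask
    return correct.sum().float() / mask.sum().clamp(min=1).float()
```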
---------
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
* Distribute
* fix some logic errors
* fix and document RepeatRandomSampler
* comment
* doc clarification
* fix type hint
* more readable
* fix eval
* fix tests
* roll back to distribute generation
* improve comment [ci skip]
* fix slice
* catch for eval batch size as well; fix completion_ids in vllm
* log completions
* Revert "log completions"
This reverts commit 1e4af8ffb8dda15d7596e707ac784208db88135a.
* Before the first training step, the model has no optimizer: fix ds3
* properly unwrap torch.compile-ed models with GRPO
* add test and compat with reward models
* ignore test windows
* properly unwrap torch.compile-ed models with GRPO
* add test and compat with reward models
* ignore test windows
* chore: lint
* style
---------
Co-authored-by: Quentin Gallouédec <quentin.gallouedec@huggingface.co>
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
* [DOCS] add GRPOTrainer to README.md
I replaced RLOOTrainer with GRPOTrainer because you thought you might want to keep it limited, but let me know if you want both.
* Update README.md
---------
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
* add eval loss logging during predition
* make sure the train and eval logs aren't mixed
* test grpo in eval
* fix tests
---------
Co-authored-by: Quentin Gallouédec <quentin.gallouedec@huggingface.co>
* rloo custom reward function and test
* I don't even know why I did that
* removing get_reward_custom
* remove get_reward_custom test
* fix code quality check
* adding test
* end this misery already
* fix test
* initial commit
* doc on custom reward function
* test
* doc doc doc
* fix collator
* style
* links?
* I need a docdoc 🎵
* fix link
* I do like writing doc tbh
* it takes time, but it's worth it
* no return!
* type hint
* it's probably the best of both worlds [ci skip]
* new doc before implementation
* tests
* more doc
* style
* multiple pretrained funcs
* fix arg name
* main?
* example for R1
* fix script
* clearer
* import [ci skip]
* Update docs/source/grpo_trainer.md
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
---------
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* init grpo [ci skip]
* initial version
* refine args defs
* model card
* initial doc
* fix badges
* fix spaces
* try link to super in doc
* temperature, fix indexing, and std=0.0
* grpo script for cli
* peft support
* move data preparation in `compute_loss`
* weird doc trial
* fix device and some logging
* unwrap_model_for_generation for distributed setting
* Compat with distrib training
* revert grpo config doc trial (didn't work)
* test
* allow model to be str and processing_class to be none; fix loss computation
* advantage is always 0.0: don't log
* fix peft not installed
* proper reward model for testing
* fix script for cli
* add trl grpo to cli doc
* test peft
* flush left
* fix reward calculation
* new reward model
* support any reward model
* fix reward processing class def
* log reward std
* fix reward logging
* fix grad computation
* skip embed layer in test
* remove optimizer_cls_and_kwargs
* improve GRPO default args
* reduce mem usage for grpo test
* reduce mem usage in test grpo
* reduce memory usage for test
* Fix the test
* remove redundant
* fix min version
* Update test_grpo_trainer.py
* Update test_grpo_trainer.py
* Fix test, finally found the solution!
* some doc
* Update doc-builder workflow to use specific commit sha
* more doc
* advantages
* drop cancel for no grad
* logged metrics [ci skip]
* completion col is ignored [ci skip]
* fix latex
* double space? ~?
* try a latex fix
* with branch
* Empty commit
* Empty commit
* double space seems to be the solution
* set defaults for max_length and max prompt length and add guidelines for defaults
* remove dep kwargs
* truncate prompt in prm
* Update CONTRIBUTING.md [ci skip]
* vllm online dpo
* new arg and add back generation config [skip ci]
* import utils
* optional import and comment
* is_vllm_available
* support conv and not conv [ci skip]
* add old code back
* use func [skip ci]
* fix _generate call
* fix and dedicated func
* top k 50
* style
* add import error
* new testing model
* Update OnlineDPOTrainer class with new features
* test vllm
* fix generate tiny script
* max len arg
* fix comment [ci skip]
* revert num_return_sequences
* vllm dep
* Add require_torch_accelerator import and skip test if vllm is not available
* proper require_torch_accelerator
* add vllm section
* Add hfoption sections to speeding_up_training.md
* no, an id
* Update vllm dependency to exclude Windows platform
* Note on future release
* style
* adding readme for ultrafeedback dataset
* using ModelCard as a DatasetCard since hf datasets is understaffed
* more info in readme.md of the dataset
* generated readme for all dataset scripts
* precommit
* fixing test
* md format; corrections; generation script link
* some collections
---------
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
Co-authored-by: Quentin Gallouédec <quentin.gallouedec@huggingface.co>
* padding free
* specify dtype
* test
* warnings when not flash attention
* fix test
* remove
* docstring padding-free
* flash-attn dep
* Stronger warning
* require_flash_attn in test
* flash-attn in CI
* rm flash-attn from dep
* Remove flash-attn dependency from test workflows
* refactor
* Update .github/workflows/tests.yml
* Update trl/trainer/dpo_trainer.py
* drop require flash-attn
* fix dtype
* refine warning
* Update trl/trainer/dpo_config.py
* Add logic to compute mean logits for chosen and rejected tokens with padding-free
* format
* Update trl/trainer/dpo_trainer.py
* Update trl/trainer/dpo_trainer.py
* fix comment [ci skip]
* fix num logits to keep
* Implemented integration with Comet in `LogCompletionsCallback`. Implemented related integration test.
* Implemented integration with Comet in `CPOTrainer.evaluation_loop()` during logging of `game_log` table.
* Implemented integration with Comet in `CPOTrainer.evaluation_loop()` during logging of `game_log` table.
* Implemented integration with Comet in `DPOTrainer.evaluation_loop()` during logging of `game_log` table.
* Implemented integration with Comet in `BCOTrainer.evaluation_loop()` during logging of `game_log` table.
* Implemented integration with Comet in `KTOTrainer.evaluation_loop()` during logging of `game_log` table.
* Implemented integration with Comet in `ORPOTrainer.evaluation_loop()` during logging of `game_log` table.
* Added support for Comet URL integration into model cards created by trainers.
* Moved `get_comet_experiment_url()` into utils.py
* Updated Comet badge in the model card to use PNG image instead of text.
* Fixed a bug when running the PPO example during model saving. The error was: 'GPTNeoXForCausalLM' object has no attribute 'policy'. Introduced a guard check that the attribute `policy` exists.
* Implemented utility method to handle logging of tabular data to the Comet experiment.
* Implemented logging of the completions table to Comet by `PPOTrainer`.
* Implemented logging of the completions table to Comet by `WinRateCallback`.
* Implemented logging of the completions table to Comet by `RLOOTrainer` and `RewardTrainer`.
* Restored line to the main branch version.
* Moved Comet-related utility methods into `trainer/utils.py` to resolve a merge conflict with the master branch.
* Update trl/trainer/utils.py
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
* Implemented raising of `ModuleNotFoundError` error when logging table to Comet if `comet-ml` is not installed.
* import comet with other imports
---------
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
Co-authored-by: Quentin Gallouédec <quentin.gallouedec@huggingface.co>
* initial skeleton
* tokenize fn
* adding bos and eos to tokenization fn
* prmtrainer
* fixing small typo in tokenize
* typo in input_ids and labels construction
* numpy dimension
* introduce the stepwise reward trainer
* update markdown files
* let user decide post step separator in config
* doc post_step_separator
* do not add post step_tokens to last step of the reasoning process
* renaming prm to stepwisereward
* formatting
* fix tokenize kwargs
* adapt test to the new post_token args
* adding example script
* fix small typo
* add create_model_card and renaming
* fixing booleans
* Adding the new stepwise_preference instead of placeholders for datasets
* formatting
* Update docs/source/_toctree.yml
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
* Update examples/scripts/stepwise_reward_modeling.py
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
* Update trl/trainer/stepwise_reward_trainer.py
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
* Update trl/trainer/stepwise_reward_trainer.py
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
* update push to hub
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
* step_separator can't be None
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
* fix suggested typos
* add citation
* reformat doc
* reordering init
* push to hub prm800k
* changing dataset in example
* change dataset format to align with the sky is blue example
* fix tokenization column names
* fix num labels in openai example
* add support for conversational dataset
* remove training whitespace
* replace tokenizer with processing class
* Update docs/source/dataset_formats.mdx
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
* remove openai_prm800k
* Update trl/trainer/stepwise_reward_trainer.py
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
* Update trl/trainer/stepwise_reward_trainer.py
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
* Update docs/source/stepwise_reward_trainer.mdx
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* Update docs/source/stepwise_reward_trainer.mdx
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* renaming
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* renaming
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* minor renamings in docs
* using prm800k instead of openai_prm800k
* update num labels to 2 following the new format
* changing doc examples to math examples
* change reference to dataset_formats.mdx
* changing dataset config in test
* remove conversational dataset support
* remove conv dataset support
* fix bos token
* fix scriptarguments in example
* completion to completions
* remove ValueError for step_separator inside steps
* run precommit
* remove conv dataset support
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
* renaming zen dataset
* remove unused printing
* unknown label column
* introduce the train on last step arg
* _tokenize support train_on_last_step
* incorporate train_on_last_step to tests
* formatting
* remove comments in trainer
* Refactor `tokenize_row`
* Update max_completion_length parameter in StepwiseRewardConfig
* Collator
* Update comment
* Update type hint
* fix table
* Remove collator
* don't need pad token id
* add error back
* max length args
* use tokenizer arg
* Update doc
* label -> labels
* fixing tokenization issues in tokenize row
* correct labels for token classification
* adding max_length to tokenize_row
* reformat tests
* adding tests for tokenize row
* fixing typos in comments
* update doc
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
* Add math_shepherd.py script for dataset processing
* split the dataset
* formatting
* same evaluation method for the two training methods
* adding filtering to example script
* formatting
* Add features to avoid casting labels to bool in dataset tokenization
* Update docs/source/stepwise_reward_trainer.mdx [ci skip]
* Add learning_rate parameter to StepwiseRewardConfig class
* update doc
* Remove unused setup_chat_format function
* Fix warning message in stepwise_reward_modeling.py
* Update logging steps in stepwise_reward_trainer.mdx
* little doc change [ci skip]
* Fix copyrights
* fix space after copyrights
* Update dataset loading in stepwise_reward_modeling.py
* refine compute_accuracy and proper test
* fix tests
* style
* renamings
* renaming in init
* doc renaming
* fix sorting and tag
* experimental [ci skip]
* trigger CI
* other doc fix
---------
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
Co-authored-by: Quentin Gallouédec <quentin.gallouedec@huggingface.co>
* function calling training support for SFTTraining
* adding tool support to data_utils
* adding test for function calling tokenizer
* reverting changes to sfttrainer and config,added maybe_apply_chat_template
* arg for maybe_apply_chat_templates docstring
* Doc sectioning
* minor test modification
* minor doc modification
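As an illustration of the tool support added above, a hedged sketch of passing a Python function as a tool through a chat template, the mechanism the `maybe_apply_chat_template` change builds on (the model name is only an example; the template must support the `tools` argument):
```python
# A hedged sketch of tool/function-calling prompting via a chat template.
from transformers import AutoTokenizer

def get_weather(city: str) -> str:
    """
    Return the current weather for a city.

    Args:
        city: The city to get the weather for.
    """
    return "sunny"

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
messages = [{"role": "user", "content": "What is the weather in Paris?"}]
prompt = tokenizer.apply_chat_template(
    messages, tools=[get_weather], tokenize=False, add_generation_prompt=True
)
print(prompt)
```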
---------
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
Co-authored-by: Quentin Gallouédec <quentin.gallouedec@huggingface.co>
* refactor parser
* Only document some methods
* Update imports in cli_utils.py and remove config option in utils.py
* add `test_parse_args_and_arg_override_config` and remove unnecessary mocks [ci skip]
* fix comment [ci skip]
* fix comment [ci skip]
* Extra arg in config also returned
* fix docstring [ci skip]
* add mock back
* use `deprecate_kwarg`
* Add guidelines for working with warnings in the codebase
* Remove unnecessary warnings and improve code initialization
* Fix warnings and improve accuracy calculation
* Add rich library dependency for text formatting
* Update LoRA weight loading warning message
* Fix logging and import issues in AlignPropConfig
* Fix warnings and improve code readability
* Remove unused import statements
* Refactor CPOTrainer class in cpo_trainer.py
* Remove unnecessary warnings and raise ValueError for missing model
* Fix warnings and improve code consistency
* Update CONTRIBUTING.md to clarify the purpose of warnings
* Fix string formatting in DataCollatorForCompletionOnlyLM class
* Update SimPO loss parameters in CPOTrainer
* Fix warnings and remove unnecessary code in ConstantLengthDataset class
* Clarify warning guidelines
* Rewrite the entire section
* Fix capitalization in CONTRIBUTING.md
* Fix formatting in CONTRIBUTING.md
* Add script_utils.md to the documentation
* Refactor ScriptArguments class documentation
* Refactor TrlParser class to improve code organization and readability
* first commit
* uncomment
* other tests adaptations
* Remove unused variable in test_setup_chat_format
* Remove unused import statement
* style
* Add Bart model
* Update BCOTrainerTester class in test_bco_trainer.py
* Update model IDs and tokenizers in test files
* Add new models and processors
* Update model IDs in test files
* Fix formatting issue in test_dataset_formatting.py
* Refactor dataset formatting in test_dataset_formatting.py
* Fix dataset sequence length in SFTTrainerTester
* Remove tokenizer
* Remove print statement
* Add reward_model_path and sft_model_path to PPO trainer
* Fix tokenizer padding issue
* Add chat template for testing purposes in PaliGemma model
* Update PaliGemma model and chat template
* Increase learning rate to speed up test
* Update model names in run_dpo.sh and run_sft.sh scripts
* Update model and dataset names
* Fix formatting issue in test_dataset_formatting.py
* Fix formatting issue in test_dataset_formatting.py
* Remove unused chat template
* Update model generation script
* additional models
* Update model references in test files
* Remove unused imports in test_online_dpo_trainer.py
* Add is_llm_blender_available import and update reward_tokenizer
* Refactor test_online_dpo_trainer.py: Move skipped test case decorator
* remove models without chat templates
* Update model names in scripts and tests
* Update model_id in test_modeling_value_head.py
* Update model versions in test files
* Fix formatting issue in test_dataset_formatting.py
* Update embedding model ID in BCOTrainerTester
* Update test_online_dpo_trainer.py with reward model changes
* Update expected formatted text in test_dataset_formatting.py
* Add reward_tokenizer to TestOnlineDPOTrainer
* fix tests
* Add SIMPLE_CHAT_TEMPLATE to T5 tokenizer
* Fix dummy_text format in test_rloo_trainer.py
* Skip outdated test for chatML data collator
* Add new vision language models
* Commented out unused model IDs in test_vdpo_trainer
* Update model and vision configurations in generate_tiny_models.py and test_dpo_trainer.py
* Update model and tokenizer references
* Don't push if it already exists
* Add comment explaining test skip
* Fix model_exists function call and add new models
* Update LlavaForConditionalGeneration model and processor
* `qgallouedec` -> `trl-internal-testing`
* Create mergekit_utils.py
* adding mergekit as an optional dependency
* adding MergeModel to callbacks
* adding mergekit_utils dependencies to callbacks
* setting lower bound for mergekit
* setting mergekit lower bound to 0.0.5.1
* adding support for MergeModelCallBack __init__.py
* adding support for mergemodelcallback
* mergemodelcallback tests
* Update callbacks.py
* Update __init__.py
* Update __init__.py
* Update test_callbacks.py
* Update trl/trainer/callbacks.py
removing ## from docs
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* Update trl/trainer/callbacks.py
removing ## from docs
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* Update trl/trainer/callbacks.py
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* using different dataset for tests
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* Update trl/mergekit_utils.py
adding types
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* Update trl/mergekit_utils.py
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* Apply suggestions from code review
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* replacing get_last_checkpoint
* renaming Merge to merge_models
* setting mergers default value to linear
* removing unnecessary docs and comments
* adding docstring to MergeConfig
* adding mergekits link to docstring
* precommit
* removing duplicated import
* typos in mergekit_utils docstring
* fixing tests
* making mergemodelcallback tests optional
* Make import optional
* minor
* use tmp dir in test
* sort
* Add import error checks for mergekit extra
* use a common _merge_and_maybe_push method and compat with windows path
* debug windows
* Update dependencies for mergekit and add test dependencies
* Add assertion to check if merged folder exists in the last checkpoint
* Fix temporary directory cleanup in test_callbacks.py
* Add sys import and skip test for Python versions below 3.10 due to cleanup errors with temp dir
* revert change for debug
---------
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
Co-authored-by: Quentin Gallouédec <quentin.gallouedec@huggingface.co>
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
* Fix "Use specified data_collator instead of hard-coding the option"
* Remove query_responses = [] since it's immediately overwritten afterwards.
* Use self.data_collator
* Use specified data_collator instead of hard-coded one in PPOTrainer
* Move the data_collator creation
* Run make precommit
* Support num_logits_to_keep, which computes necessary logits in the forward pass.
* update doc
* bug fix
* update
* check is model supports num_logits_to_keep
* ruff format
* update test file
* peft model support
* test passed
* update
* apply use_num_logits_to_keep
* fix num_logits_to_keep compute bug
* compare all outputs
* pytest
* pass test
* use check_min_version
* format
* test_dpo_trainer_use_num_logits_to_keep passed
* add some comments
---------
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
* Bump dev version to `0.13.0.dev0`
* Update version number to 0.12 in CITATION.cff
* 🧽 Fix judge documentation (#2318)
* Update judge examples and documentation
* without ':'
* Clean doc
* Fix typo in example code
* Add space after Attributes
* Update attribute name in judges.py
* Add installation instructions for llm-blender library
* Update PairRMJudge attributes documentation
* Fix return type in PairRMJudge
* Revert "🧽 Fix judge documentation (#2318)"
This reverts commit 337005d95169371935fb87f1c559c7412f8472a4.
* Revert "🧽 Fix judge documentation (#2318)"
This reverts commit 337005d95169371935fb87f1c559c7412f8472a4.
* 🧽 Fix judge documentation (#2318)
* Update judge examples and documentation
* without ':'
* Clean doc
* Fix typo in example code
* Add space after Attributes
* Update attribute name in judges.py
* Add installation instructions for llm-blender library
* Update PairRMJudge attributes documentation
* Fix return type in PairRMJudge
* Bump dev version to `0.13.0.dev0`
* Update version number to 0.12 in CITATION.cff
* Add publication date to blog post
* 🧽 Fix judge documentation (#2318)
* Update judge examples and documentation
* without ':'
* Clean doc
* Fix typo in example code
* Add space after Attributes
* Update attribute name in judges.py
* Add installation instructions for llm-blender library
* Update PairRMJudge attributes documentation
* Fix return type in PairRMJudge
* Revert "🧽 Fix judge documentation (#2318)"
This reverts commit 337005d95169371935fb87f1c559c7412f8472a4.
* Update blog post publication dates
* revert to p5
* Update image URLs in index.mdx
* Sort and uniform thumbnail
* Update image alignment in index.mdx
* Bump dev version to `0.13.0.dev0`
* Update version number to 0.12 in CITATION.cff
* 🧽 Fix judge documentation (#2318)
* Update judge examples and documentation
* without ':'
* Clean doc
* Fix typo in example code
* Add space after Attributes
* Update attribute name in judges.py
* Add installation instructions for llm-blender library
* Update PairRMJudge attributes documentation
* Fix return type in PairRMJudge
* Revert "🧽 Fix judge documentation (#2318)"
This reverts commit 337005d95169371935fb87f1c559c7412f8472a4.
* Add conditional check for LLMBlender availability in test_judges.py
* Fix import issues and update test requirements
* Remove unused imports
* Add require_peft decorator to test cases
* Fix import_utils module to use correct package name for llm_blender
* Found min version and test
* Update Slack notification titles
* Update dependencies versions
* Update GitHub Actions workflow to include setup.py and reorder file paths
* Revert "Update Slack notification titles"
This reverts commit be02a7f2de87905e86a847540770968d0416934a.
* Update Slack notification titles
* Remove pull_request branch restriction in tests.yml
* add check code quality back
* Fix PairRMJudge model loading issue
* Add conditional check for LLMBlender availability in test_judges.py
* Fix import issues and update test requirements
* Remove unused imports
* Add require_peft decorator to test cases
* Fix import_utils module to use correct package name for llm_blender
* Update trainer_utils import and save strategy in online_dpo_trainer.py
* fix back-compat for online-dpo
* better comment
* Update transformers dependency to commit f33904
* clean deps
* new tests
* tests
* Add tests without optional dependencies workflow
* Update dependencies in tests.yml
* cpu version of torch
* Update dependencies and installation commands
* Disable fail-fast in test workflow
* Update test matrix in workflows file
* try fix windows
* Remove "rich" from required packages in setup.py
* Update dependency installation in tests.yml
* Add torch and deepspeed installation for windows-latest
* Fix conditional statement in workflow file
* Add torch and deepspeed installation for Windows
* Fix if statement
* Update torch and deepspeed dependencies
* Update liger package requirement for non-Windows platforms
* remove scipy dep
* Add torch GPU requirement for testing_utils
* Update trl/trainer/judges.py
* Refactor reward processing in OnlineDPOTrainer
* Refactor completion decoding and reward processing
* remove strip
* remove warning
* Add reward_tokenizer to training script
* Add reward_tokenizer and reward_processing_class to OnlineDPOTrainer test
* propagate to xpo and nash
* style
* reduce memory requirement with inference_mode
* fix tests
* pairrm judge llmblender
* setUpClass(cls)
* Add setUpClass method to TestJudges class
* truncation left for reward tokenizer
* don't logcompletion without eval dataset
* only eval when possible
* use the pair-judges
* add test
* Update trl/trainer/online_dpo_trainer.py
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
* Update trl/trainer/online_dpo_trainer.py
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
* decode and skip special characters
* initial nash
* return tensors
* Update trl/trainer/online_dpo_trainer.py
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
* Update trl/trainer/online_dpo_trainer.py
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
* Update trl/trainer/online_dpo_trainer.py
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
* add back the logging
* use batch_decode
* add judges api to XPO trainer
* Update tests/test_online_dpo_trainer.py
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
* judge in examples
* judge in config
* add back logs when using reward model
* typo
* add back model_scores logging when using reward model
* log scores for reward model only
* better cond on what to log
* same for rlhf reward
* Update trl/trainer/online_dpo_trainer.py
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
* use decode_and_strip_padding
* error if both reward and judge or none are set
* remove unused check
* Uniform way to pass conversation into judge
* heading -> leading
* LogCompletionsCallback compat with online method
* Update Online DPO doc
* check if data is conversational for judges
* update example
* remove comment
* use zip
* fix stats xpo
* Replace judge with PairRMJudge and import AutoModelForSequenceClassification
* update xpo documentation
* Remove doc duplication
* update nash doc
* XPO trl chat
* nash md doc
* HfPairwiseJudge
---------
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
Co-authored-by: Quentin Gallouédec <quentin.gallouedec@huggingface.co>
* `get_batch_sample` -> `generate_from_model[_and_ref]`
* add `num_items_in_batch=None`
* `num_items_in_batch` in `training_step`
* Fix return type hint
* desc for unpair dataset util
* update example
* process in KTO
* Update doc
* KTO doc rewrite
* fix orpo doc
* add other dataset config names in test
* update doc image
* fix links in doc
* Update reward and log probability metrics in KTOTrainer doc
* skip enc-dec test
* Update docs/source/kto_trainer.mdx
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
---------
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* setup_chat_format: throw error if there was already a template
* fix lint
* clarify in docs
* fix test?
---------
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
* in progress
* refactor concatenated_inputs and concatenated_forward
* progress
* further modif
* padding side
* eos prompt enc dec
* prompt_padding_side
* drop prompt padding side collator
* working on decoder only
* dpo trainer
* Fix loss_mask type conversion bug
* bad attention mask
* try to get the same tokens as main
* fix loss mask
* fix unused col
* added comment
* raise error when padding token not set
* remove private method tests
* initial vlm support
* make it work for paligemma
* minor test updates
* style
* improve readability
* improve doc
* style
* flush left and truncate
* flush left in the code
* fix empty_cols and make max_length optional
* always add eos token
* minor changes and doc
* style
* fix docstring
* preference collator in doc
* fix doc
* optional max_completion_length
* Investigating CI failing
* style
* just dpo trainer test
* just idefics
* paligemma
* llava
* test cli
* dataset in test
* all tests
* Update trl/trainer/dpo_trainer.py
* Update trl/trainer/dpo_trainer.py
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* Update trl/trainer/dpo_trainer.py
* Update trl/trainer/dpo_trainer.py
* reference to ref
* rich descriptions
* fix logits reporting
* fix truncation
* remove chat template from dpo_vlm
* `get_batch_sample` -> `generate_from_model[_and_ref]`
* add `num_items_in_batch=None`
* `num_items_in_batch` in `training_step`
* Fix return type hint
* test tokenize row
* fix test
---------
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* Update log_example_reports.py
1. Added logging: Imported the logging module and set up a logger in the main function. This allows for better error tracking and debugging.
2. Improved file reading: Used a with statement to ensure the file is properly closed after reading. Also added error handling to catch and log any issues when reading the file.
3. Error handling for Slack SDK import: Added a try-except block to handle cases where the slack_sdk might not be installed.
4. Enhanced Slack message sending: Added error handling and logging for the Slack message sending process. This will help identify any issues with the Slack integration.
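A hedged sketch of the error-handling pattern summarized in points 1–4 above; function names, the channel, and paths are illustrative.
```python
# Illustrative error handling for a report/Slack script (not the actual script).
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def read_report(path: str):
    try:
        with open(path) as f:  # `with` guarantees the file is closed
            return f.read()
    except OSError as e:
        logger.error("Could not read report %s: %s", path, e)
        return None

def send_to_slack(text: str, channel: str) -> None:
    try:
        from slack_sdk import WebClient
    except ImportError:
        logger.error("slack_sdk is not installed; skipping Slack notification.")
        return
    try:
        WebClient().chat_postMessage(channel=channel, text=text)
    except Exception as e:
        logger.error("Failed to send Slack message: %s", e)
```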
* style
* Update log_reports.py
1. Logging: Added logging to track errors and important events.
2. Error Handling: Wrapped the log file processing in a try-except block to handle potential errors gracefully.
3. Logging Total Failed Tests: Added a log statement to report the total number of failed tests
* style
* further improve
---------
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
Co-authored-by: Quentin Gallouédec <quentin.gallouedec@huggingface.co>
* `DPOScriptArguments` to `ScriptArguments`
* use dataset_train_split
* Use scriptarguments
* dataset names in command lines
* use `ScriptArguments` everywhere
* ignore biais buffer to end
* remove in v0.13
* rm comment
* update test commands
* Update docs/source/rloo_trainer.md
* Update tests/test_rloo_trainer.py
* Added dataset_train_split argument to ppo.py and rloo.py
* update scripts with dataset_train_split
* Updated README.md with CLI examples and additional usage instructions
Added Command Line Interface (CLI) examples for SFT, DPO, and Chat features.
Improved the "How to Use" section by providing code examples for SFTTrainer and RewardTrainer.
Included installation instructions for both Python Package and source-based installation.
Refined highlights to better showcase efficiency and scalability features.
Updated the repository clone instructions for working with examples.
Added new links to CLI documentation and contribution guide for better navigation.
* Update README.md
* Update README.md
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* Update README.md
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* Update README.md
* update badges
---------
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
Co-authored-by: Quentin Gallouédec <quentin.gallouedec@huggingface.co>
* Update log_example_reports.py
1. Added logging: Imported the logging module and set up a logger in the main function. This allows for better error tracking and debugging.
2. Improved file reading: Used a with statement to ensure the file is properly closed after reading. Also added error handling to catch and log any issues when reading the file.
3. Error handling for Slack SDK import: Added a try-except block to handle cases where the slack_sdk might not be installed.
4. Enhanced Slack message sending: Added error handling and logging for the Slack message sending process. This will help identify any issues with the Slack integration.
* style
---------
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
Co-authored-by: Quentin Gallouédec <quentin.gallouedec@huggingface.co>
* Update incorrect data processing in DataCollatorForChatML
Fix the extra BOS token and the absence of an EOS token in the returned input_ids, and potentially the absence of a target string in the returned labels.
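A hedged sanity check mirroring this fix; the function and names are illustrative, not the collator's actual code:
```python
# Illustrative check: at most one BOS at the start, and EOS as the last token.
def check_bos_eos(input_ids: list[int], bos_token_id, eos_token_id) -> None:
    if bos_token_id is not None and len(input_ids) > 1:
        assert input_ids[1] != bos_token_id, "duplicated BOS token at the start"
    assert input_ids and input_ids[-1] == eos_token_id, "the last token should be the EOS token"
```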
* Update trl/trainer/utils.py
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* style
* move comment
* add test for DataCollatorForChatML
* update comment with more details
* update assert reports and comments, and add verification that the last token of input_ids is the EOS token
* new line at the end of file for code quality
* Update tests/test_utils.py
* Update tests/test_utils.py
* Update tests/test_utils.py
* update tests
* fix test
* Update tests/test_utils.py
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
* Update tests/test_utils.py
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
* formatting
* fix typo
* simplify
* Revert "simplify"
This reverts commit 7e4006c87265665183032932ca05dffef567e38b.
* tokenize full messages
* dont add eos
* eos is in the last token
* simplify DataCollatorForChatML
* Update tests/test_utils.py
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
---------
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
Co-authored-by: Quentin Gallouédec <quentin.gallouedec@huggingface.co>
* Update README.md
Fix grammatical errors in README.md
fixes issue #2185
Description:
I found a grammatical error in the README.md of the project. This PR fixes the error to improve the overall readability and clarity of the documentation.
Changes:
Corrected grammatical errors
Updated lines to reflect the correct grammar
Reasoning: The original text contained a grammatical error that could confuse readers. This fix ensures that the documentation is accurate and easy to understand.
Closes #2185
* Update README.md
Co-authored-by: Edward Beeching <edbeeching@users.noreply.github.com>
---------
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
Co-authored-by: Edward Beeching <edbeeching@users.noreply.github.com>
* add argument for dropout
* increase default lr
* change default lr in examples
* fix bug in calculation of KL batch size
* KL batch size should be args.per_device_train_batch_size
* Update kto_trainer.mdx with hparam recs
* typo
* allow dropout to be disabled
* update lr in sample script
* Update kto_config.py
* Update trl/trainer/kto_trainer.py
* Update docs/source/kto_trainer.mdx
---------
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
* clarify ConstantLengthDataset usage
* dont provide dataset text field when formatting func is provided
* kto maybe_apply_chat_template
* default text field
* doc
* remove maybe_apply_chat_template from kto example
* dataset text field always a str
* remove `dataset_text_field="text"`
* update doc
* conversational dataset support for dpo
* support standard dataset for extract prompt
* test standard dataset for extract prompt
* fix maybe
* fix maybe apply prompt
* style
* overwrite default learning rate of DPO
* style
* rlaif script
* `writer_batch_size` in `train_test_split`
* initial dpo doc refactoring
* vision data section in doc
* lil format modif
* refine Vision datasets
* refine doc
* test new loss type format
* restrcture loss function
* table loss type
* simplify `unsloth`
* improve doc
* logged metrics up
* refine loss section
* Fix label_smoothing parameter in DPOConfig
* dataset for test
* update readme
* Update docs/source/dpo_trainer.mdx
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* try colorized code block
* refine doc style
* further refine doc
* Update docs/source/dpo_trainer.mdx
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
* re add pali gemma test
* Add missing period
---------
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
* tokenize while training
* same for nashmd and xpo
* Update trl/trainer/online_dpo_trainer.py
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
---------
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
The function _process_tokens in trl/trainer/kto_trainer.py crashes if prompt_input_ids is an empty list.
- added a check for nonzero length
- added a check for nonzero length of answer_input_ids for consistency
The checks happen when determining whether to subtract 1 from max_length (which happens when BOS or EOS is already present).
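A hedged sketch of the length guards, using the `prompt_input_ids` / `answer_input_ids` names from the description above:
```python
# Illustrative guards; only inspect the first/last token when sequences are non-empty.
def adjust_max_length(max_length, prompt_input_ids, answer_input_ids,
                      bos_token_id, eos_token_id):
    if len(prompt_input_ids) > 0 and prompt_input_ids[0] == bos_token_id:
        max_length -= 1  # BOS is already present
    if len(answer_input_ids) > 0 and answer_input_ids[-1] == eos_token_id:
        max_length -= 1  # EOS is already present
    return max_length
```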
* fix neftune_noise_alpha
* del neftune_noise_alpha first
* check len after removing handle
* make sure we do not load twice
* Update trl/trainer/sft_trainer.py
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* remove neftune from SFTTrainer as the superclass has it now
---------
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* learning rate recomentations for kto
* update from suggestion
* override default lr
* add tip tag
* Update trl/trainer/kto_config.py
---------
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
* make ORPOTrainer run faster on TPU
* less data transfer
* train-trl.py
* fix
* set device_map=auto
* add is_torch_xla_available guards
* delete file
* address comments
* make presubmit
* Update transformer version in setup.py
---------
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* initial xpo trainer
* compute rewards and ref log probs in smaller batches
* add logging
* initial log docs
* fix global_step increment
* fix metric descriptions
* use messages API
* use training_step API
* fix logs
* add test
* add back max_new_tokens
* use max_new_tokens
* refactor
* top_k is an int
* fix formatting
* fix the loss
* fix logging
* fix logging
* fix logging
* fix loss
* calculate pi_log_ratio once
* fix stats
* fix loss
* do not log loss again
* fix docs
* add disable_dropout_in_model via flag
* comments
* revert doc change
* rm empty cache in online dpo
* improve doc xpo config
* some comment
* fix loggings stats
* fix docs
* save the model
* fix model and reward model
* Update trl/trainer/xpo_trainer.py
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
---------
Co-authored-by: Quentin Gallouédec <quentin.gallouedec@huggingface.co>
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
* Subtract a penalty from OnPolicy Trainers if output does not contain an EOS token
* Caught a few other problems
* Updated the documentation for RLOO trainer and PPOv2Trainer
* Corrected the default type and value for missing_eos_penalty
* Made RLOO Trainer consistent with Online DPO and PPOv2
* Removed --non_eos_penalty from all documentation
* Made missing_eos_penalty examples positive (because we subtract).
* Caught two more incorrect examples
* Removed unnecessary whitespace to make ruff happy
* Update trl/trainer/utils.py
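A hedged sketch of the penalty described above: a fixed amount is subtracted from the score when a completion contains no EOS token (tensor names are illustrative).
```python
# Illustrative implementation of missing_eos_penalty on a batch of completions.
import torch

def apply_missing_eos_penalty(scores, completion_ids, eos_token_id, missing_eos_penalty=1.0):
    contains_eos = (completion_ids == eos_token_id).any(dim=-1)
    scores = scores.clone()
    scores[~contains_eos] -= missing_eos_penalty  # subtract, so the penalty value is positive
    return scores
```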
---------
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
* drop canonical
* Delete ultrafeedback_prompt_only.py dataset script
* reduce diff in best_of_n
* try to revert best_of_n to make github happy
* anyway...
* fix: prevent unpacking error due to the additional **aux_loss** returned by the **concatenated_forward** function when **aux_loss_enabled** is set to True.
* Refactor: Simplify tuple unpacking in `concatenated_forward` call in `get_batch_loss_metrics` function
* Refactor: improve code quality
* fix dataset and value error in sft
* Update trl/trainer/sft_trainer.py
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* move the test to the right place
---------
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* correct formatting of star sign in kto_trainer.mdx
The "*" symbol in markdown doesn't show. I changed it to $\times$ so the mathematical formula is clearer
* fix markdown
* one more try
* feat : add kto command
* feat : add support for apo loss in KTO Trainer
* feat : make kto script compatible with dpo-formatted datasets
* fix: lint data utils
* add loss_type in kto test
* fix: data utils docstrings
* fix: add dataset reformat test
* fix: lint tests
* fix: only reference kl_logps if needed
---------
Co-authored-by: Karel D'Oosterlinck <karel@contextual.ai>
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* tokenize and process DPO data via batches
* use helpers
* updated _process_tokens
* fixed
* incorporate build_tokenized_answer in the _tokenizer
* Update trl/trainer/dpo_trainer.py
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
* Update trl/trainer/dpo_trainer.py
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
* fix tokenizer for is_vision_model
* Update trl/trainer/dpo_trainer.py
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
* give the _tokenize the tokenizer as well as optional processor
* fix tests
* add bos and eos tokens
* add prompt_pixel_attention_mask
* Update trl/trainer/dpo_trainer.py
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
* truncate by max_length
* formatting
* fix for enc-dec
* For encoder-decoder models, we need to use the prepared decoder_input_ids
* add tests for _build_tokenized_answer and _tokenize_feature
* check for EOS and BOS tokens
* formatting
* do not include pixel mask if they are not provided
* undo refactor
* undo add_bos_token_if_needed change
* refactor tokenizer into smaller helpers
* add back comments
* fix type hints
* format
* fix t5 tests
* args are never optional
* move cat to appropriate helper
* fix _truncate_tokens
* add tests for _truncate_tokens
* remove dead code
---------
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* Test for #1970
* style
* drop last element in the batch for test
* check prompt_input_ids not modified
---------
Co-authored-by: Quentin Gallouédec <quentin.gallouedec@huggingface.co>
* Fix issue with precompute_ref_log_probs not working when rpo_alpha is None
* Test: Add test for precompute_ref_log_probs with rpo_alpha=None
---------
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
* fix model to save in ppov2
Currently we save self.backup_model, but this should be self.model.
self.backup_model is only a temporary model used to store the policy and
value function, whereas self.model holds just the policy, which is what should be saved.
* simplified logic
* remove unused ordereddict
* format
* fix the fix
---------
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
Co-authored-by: Quentin Gallouédec <quentin.gallouedec@huggingface.co>
* feat: support RS-LoRA in the ModelConfig
* build: bump minimum peft version to support rslora
* test: add test for get_peft_config
* test: make test python 3.8 friendly
* rm unused marker
* minor changes
* simplify, clarify doc
* update deps (peft in test)
* re-ordering
* fix setup
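For reference, the RS-LoRA support added here ultimately maps to a peft `LoraConfig` flag; a hedged sketch with illustrative values:
```python
# Rank-stabilized LoRA via peft (values are illustrative).
from peft import LoraConfig

peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    use_rslora=True,  # scale adapters by lora_alpha / sqrt(r) instead of lora_alpha / r
)
```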
---------
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
Co-authored-by: Quentin Gallouédec <quentin.gallouedec@huggingface.co>
* skip bigbird in ci
* readd big bird test
* pytest parametrize
* dont check the version
* rm model name
* re add big bird
* Merge branch 'main' into readd-bigbird-save-load-test
---------
Co-authored-by: Quentin Gallouédec <quentin.gallouedec@huggingface.co>
* Bug fix when training with SFTTrainer and DataCollatorForCompletionOnlyLM
Added `dataset_text_field` to the SFTConfig used for training
* Update docs/source/sft_trainer.mdx
---------
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
* online dpo cleanups
* remove unused self.policy
* add OnlineDPOTrainer and config to __init__.py
* import from trainer
* online dpo test
* rename policy to model and ref_policy to ref_model
* renamed internally
* formatting
* Fix `torch_dtype` handling through CLI
The `torch_dtype` is not properly handled when provided via the TRL CLI: it is
initially provided as a string, but is then cast to `torch.dtype` before being
passed to the `{DPO,SFT}Trainer`, which means those trainers should also handle
the scenario where `torch_dtype` is already a `torch.dtype`.
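A hedged sketch of the normalization the trainers need, assuming `torch_dtype` may arrive as `None`, `"auto"`, a string such as `"bfloat16"`, or an actual `torch.dtype`:
```python
# Illustrative normalization of a torch_dtype argument.
import torch

def resolve_torch_dtype(torch_dtype):
    if torch_dtype in (None, "auto") or isinstance(torch_dtype, torch.dtype):
        return torch_dtype
    if isinstance(torch_dtype, str) and hasattr(torch, torch_dtype):
        dtype = getattr(torch, torch_dtype)
        if isinstance(dtype, torch.dtype):
            return dtype
    raise ValueError(f"Invalid torch_dtype: {torch_dtype!r}")
```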
* Add `torch_dtype` tests in `test_{dpo,sft}_trainer.py`
* Forward contribution credits
* Run `make precommit`
---------
Co-authored-by: Tash Srivastava <yash-srivastava19@users.noreply.github.com>
* Preserve token fields when converting TrainingArguments to SFTConfig
TrainingArguments.to_dict() redacts token fields, so we have to
individually copy them over when converting to SFTConfig to avoid
breaking push_to_hub functionality.
Also adds a test.
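A hedged sketch of the conversion described above; the `_token` suffix check mirrors how `TrainingArguments.to_dict()` redacts fields, and is illustrative rather than the exact trainer code.
```python
# Illustrative TrainingArguments -> SFTConfig conversion that restores redacted token fields.
from transformers import TrainingArguments
from trl import SFTConfig

def training_args_to_sft_config(args: TrainingArguments) -> SFTConfig:
    args_as_dict = args.to_dict()
    # to_dict() masks token values, so copy the real attribute values back.
    args_as_dict.update(
        {k: getattr(args, k) for k in args_as_dict if k.endswith("_token")}
    )
    return SFTConfig(**args_as_dict)
```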
* run precommit
* one-line args_as_dict definition per suggestion from kashif
* generalize token copying to match TrainingArguments behavior
* unwrap |= on dict, to support python 3.8
* use .update instead of |= or for-loop
* Remove extra whitespaces
* idefics
* vdpo
* sft idefics
* pad with test
* use prompt instead of tokenizer
* rm name main
* support vlm in tokenize row
* temp fix for regex in lora_target_module
* format
* vdpo
* tmp float16 hard code
* concatenated_forward support for vision
* style and new command line
* all-linear
* format
* delete old examples
* get image
* upcast
* new test
* modified test
* new strat for tokenizer
* rm token transfer
* integrate vision in dpo example
* format
* add FDivergenceType back
* precommit
* pillow test dep
* optional prompt
* `evaluation_strategy` to `eval_strategy`
* revert vsft change (oos)
* update test
* test
* comment and support more in process
* update process
* update doc for vdpo
* caution about limited support
* Update docs/source/dpo_trainer.mdx
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
* revert DPO example changes
* cleaner way to check if a model is vision
* comment
* update vdpo example
* rename
---------
Co-authored-by: Quentin Gallouédec <quentin.gallouedec@huggingface.co>
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
* add a test case for num_train_epochs
* fix ci
* quick change
* disable push to hub
* debug windows ci
* try another fix
* skip subprocess tests on windows
Current handling of `response_masks` inside the `batch_forward_pass`
function does not take padding into consideration, which results in a
shape mismatch during masking. Since the response mask is a mask tensor of
response tokens, the response tokens should not be concatenated with a
`torch.zeros(query_length)`, and the masking operation should be done without
slicing.
Remove the concatenation of the response mask, and remove the slicing from
the response mask, since the response mask already has length `end -
start + 1`, which is equal to the length of `masks[j, start:end]`.
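A hedged sketch of the corrected masking, assuming `masks` is a (batch, seq) tensor and each `response_masks[j]` already covers exactly the response-token region `[start, end)` of its row:
```python
# Illustrative masking: no zero-padding prefix and no slicing of the response mask.
import torch

def apply_response_masks(masks, response_masks, starts, ends):
    masks = masks.clone()
    for j, (start, end) in enumerate(zip(starts, ends)):
        masks[j, start:end] = masks[j, start:end] * response_masks[j]
    return masks
```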
* Step 1: update ppo_trainer and hello_world example
* Step 2: Refine comments and add parameter type
* Step 2: Add missing parameter comments
* Step 1: Organize ptx loss into a function and add ptx_loss to train_stats
* Step 1 updates: add comment to ptx_loss function, fix a bug and add warning message
* Step 2: 1) Add ppo_ptx training example as ppo; 2) separate pretrain data fetch and iterate
* Step 2: Remove loss from columns_to_log in ppo_ptx example
* Remove dataset revision when loading the imdb dataset
* Run pre-commit and fix format issues
* Initial draft of f-divergence fn
* Update f-divergence to avoid overflow
* fix test errors and comments
* Add Unit tests for dpo loss with alpha and js div f
* Adjust format
* Fix test error
* Reverse this update
* Add test cases
* Reverse un-needed updates
* Update code style
* Try to fix code fmt error
* remove extra end line
---------
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
* working trl parser with config
correctly overrides the yaml config with command line arguments
adds return_remaining_strings
when return_remaining_strings is False, raises an error if the yaml contains
extra args that are not in the dataclasses
simpler and cleaner than the previous yaml parsing and merging
addresses #1733
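A hedged usage sketch of the parser behavior described above; the dataclass, field names, and YAML path are illustrative, and the exact entry-point names may differ.
```python
# Illustrative TrlParser usage: YAML values are applied first, then overridden
# by explicit command-line arguments, e.g.
#   python script.py --config my_config.yaml --learning_rate 2e-5
from dataclasses import dataclass, field
from trl import TrlParser

@dataclass
class MyArguments:
    learning_rate: float = field(default=1e-5)
    output_dir: str = field(default="output")

if __name__ == "__main__":
    parser = TrlParser((MyArguments,))
    (my_args,) = parser.parse_args_and_config()
    print(my_args)
```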
* lowercase trlparser
* Add test for skipping preproc if packing=True
Signed-off-by: Alex-Brooks <Alex.Brooks@ibm.com>
* Allow skipping of validation for packing=True
Signed-off-by: Alex-Brooks <Alex.Brooks@ibm.com>
* Use dummy dataset in no packing preproc test
Signed-off-by: Alex-Brooks <Alex.Brooks@ibm.com>
---------
Signed-off-by: Alex-Brooks <Alex.Brooks@ibm.com>
* Don't override optimize_device_cache when optimize_cuda_cache is not provided
Raise an exception when both optimize_cuda_cache and optimize_device_cache are set
* Minor fix
* [ORPO] Correct label mask for pad tokens
Recent [fix](57aebe9c36) for calculating the NLL loss over a whole sequence introduced a bug: when input_ids are copied to labels, pad tokens are not masked.
This PR aims to patch this by masking labels based on the attention mask (see the sketch below).
* -100 -> label_pad_token_id
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
---------
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
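A minimal sketch of the masking described above (the helper name is illustrative, not the actual ORPO code):
```python
import torch

def mask_pad_labels(input_ids: torch.Tensor, attention_mask: torch.Tensor,
                    label_pad_token_id: int = -100) -> torch.Tensor:
    # Copy the inputs to labels, then replace pad positions with the label pad id
    # so they are ignored by the NLL loss.
    labels = input_ids.clone()
    labels[attention_mask == 0] = label_pad_token_id
    return labels
```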
* fixed adding bos and eos token unconditionally
* fixed typo of tokenizer -> self.tokenizer. Also added update to ORPO
* fixed code quality, and added BOS/EOS fix to KTO
* code reformatting with pre-commit run --all-files
* bug fix: check input id length before checking for EOS/BOS
* add `Loss Functions` section in the doc.
* add bce loss with reward shift in KTOTrainer
* add underlying distribution matching
* update example to use underlying distribution matching
* add config description
* fix 'referenced before assignment' error
* add 'bco' and 'udm' test cases
* run pre-commit
* add `scikit-learn` dependency
* raise error if sklearn is not available
* call TrainingArguments().__post_init__() for proper init
* initial DPOConfig
* fix doc string
* use DPOConfig
* fix missing import
* fix DpoScriptArguments
* override args config when given in init
* use DPOConfig
* fix output dir name
* override with deprecated arguments if given
* use DPOConfig in tests
* fix comment
* add custom_message
* use dataset_train_name and dataset_test_name
* beta is also in the training_args
* fix loss_type docs
* Update trl/commands/cli_utils.py
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* Update trl/commands/cli_utils.py
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* Update trl/commands/cli_utils.py
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* use DPOScriptArguments
---------
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
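A minimal sketch of the resulting configuration style, assuming current TRL naming (the output directory and hyperparameter values are placeholders); `beta` and `loss_type` now live on `DPOConfig` rather than being passed to the trainer directly:
```python
from trl import DPOConfig

# Illustrative values only; "sigmoid" is the standard DPO loss.
training_args = DPOConfig(output_dir="dpo-out", beta=0.1, loss_type="sigmoid")
```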
* adds option to skip dataset preparation in SFTTrainer
* before changing the template
* adds support for new schema
* a few fixes to data collator to support new schema
* updates args
* precommit
* adds sys prompt to chat template and other fixes
* updates template, fixes collator for multiple images
* precommit
* rename vsft to vstf_llava
* adding integration tests
* adds integration test for vsft
* precommit
* adds back chat template
* docs
* typo
* adds eval, precommit
* adds peft launch args
* formatting
* fixes no deps tests by checking if PIL lib exists
* Update __init__.py
---------
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
* Correct ppo_epochs usage
The usage of ppo_epochs is incorrect here.
In 8534f0edf8/trl/trainer/ppo_config.py (L104C8-L104C58)
ppo_epochs was described as "Number of optimisation epochs per batch of samples".
However, here it is used as the usual epoch number, i.e. one iteration over the training dataset.
* Update ppo_trainer.mdx
* Update docs/source/ppo_trainer.mdx
---------
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
The type hint forces a list, which raises an "all-linear" layer-not-found error; forcing a string makes it work. Updating the type hint to `Union[str, list[str]]` also raises a parsing error (see the sketch below).
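A minimal sketch of the working configuration with peft's `LoraConfig` (the rank and alpha values are illustrative):
```python
from peft import LoraConfig

# "all-linear" must be passed as a plain string so that peft can expand it to all
# linear layers; wrapping it in a list makes peft look for a module literally
# named "all-linear", which then fails with a "layer not found" error.
peft_config = LoraConfig(r=16, lora_alpha=32, target_modules="all-linear")
```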
* Refactor test
* Make batched tokenizer
* Make it FAST 🔥!
* Hack to the max
* Run on main process
* Refactor
* Add unit test
* f
* r
* Refactor
* Remove bs
* Refactor to tokenize once
* Add typing
* Add test for KL getter
* Add `use_cache=False` in `concatenated_forward`
Prevents `ORPOTrainer` from using the cache, as it's not required for computing the logits and runs into conflicts with Flash Attention 2
* Add `use_cache=False` to `concatenated_forward`
Co-authored-by: Kashif Rasul <kashif@users.noreply.github.com>
---------
Co-authored-by: Kashif Rasul <kashif@users.noreply.github.com>
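A minimal sketch of the change, assuming a standard Hugging Face causal-LM forward call (the function name is illustrative):
```python
import torch

def concatenated_forward(model, input_ids: torch.Tensor, attention_mask: torch.Tensor):
    # use_cache=False: the KV cache is only useful for generation, and keeping it
    # enabled here conflicts with Flash Attention 2 during loss computation.
    return model(input_ids=input_ids, attention_mask=attention_mask, use_cache=False)
```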
* add CPOTrainer
* add docs
* fix formatting
* removed precompute_ref_log_probs arg
* remove precompute_ref_log_probs
* typos
* finish cpo trainer doc
* remove redundant lines
* typo
* formatting
* compute chosen nll loss also for enc-dec models
* fix gradient error of inplace operation for enc-dec models
* formatting
* use CPOConfig
* formatting
* use model_init_kwargs from CPOConfig
* comments in example
* fix doc string
* fix typo in docstring
* update year
* fixed typo
* use preference dataset
* fix learning rate
* move dataset_num_proc to configs
* Update cpo paper link from HF: cpo_trainer.mdx
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* update description for CPO: cpo_trainer.mdx
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* remove _prepare_deepspeed for cpo
Because CPO does not need to initialize a reference model
* Add explanation to CPO loss
* format
* fix bug when lengths are given
* add CPOTrainer to README
* fix grammar
---------
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* CLI V1
* v1 CLI
* add rich enhancements
* revert unindented change
* some comments
* cleaner CLI
* fix
* fix
* remove print callback
* move to cli instead of trl_cli
* revert unneeded changes
* fix test
* Update trl/commands/sft.py
Co-authored-by: Leandro von Werra <lvwerra@users.noreply.github.com>
* remove redundant strings
* fix import issue
* fix other issues
* add packing
* add config parser
* some refactor
* cleaner
* add example config yaml file
* small refactor
* change a bit the logic
* fix issues here and there
* add CLI in docs
* move to examples/sft
* remove redundant licenses
* make it work on dpo
* set to None
* switch to accelerate and fix many things
* add docs
* more docs
* added tests
* doc clarification
* more docs
* fix CI for windows and python 3.8
* fix
* attempt to fix CI
* fix?
* test
* fix
* tweak?
* fix
* test
* another test
* fix
* test
* fix
* fix
* fix
* skip tests for windows
* test @lvwerra approach
* make dev
* revert unneeded changes
* fix sft dpo
* optimize a bit
* address final comments
* update docs
* final comment
---------
Co-authored-by: Leandro von Werra <lvwerra@users.noreply.github.com>
* Add use_bnb and load_in_4bit arguments.
Make them optional, since they are not supported on all platforms
Signed-off-by: yuanwu <yuan.wu@intel.com>
* Change the use_reentrant default value to False
If the default value of gradient_checkpointing is True, set the
use_reentrant default value to False, because otherwise the following error
occurs (a configuration sketch follows this entry):
RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the `forward` function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes. or try to use _set_static_graph() as a workaround if this module graph does not change during training loop.2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple `checkpoint` functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases in default. You can try to use _set_static_graph() as a workaround if your module graph does not change over iterations.
Parameter at index 191 with name base_model.model.model.layers.31.self_attn.v_proj.lora_B.default.weight has been marked as ready twice. This means that multiple autograd engine hooks have fired for this particular parameter during this iteration.
Signed-off-by: yuanwu <yuan.wu@intel.com>
* Add model_dtype for loading the model in model_dtype
Signed-off-by: yuanwu <yuan.wu@intel.com>
* Reformat the patch
Signed-off-by: yuanwu <yuan.wu@intel.com>
---------
Signed-off-by: yuanwu <yuan.wu@intel.com>
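A minimal sketch of how the `use_reentrant` workaround in this entry can be expressed with a recent transformers version (the output directory is a placeholder):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="out",
    gradient_checkpointing=True,
    # Non-reentrant checkpointing avoids the "marked as ready twice" DDP error
    # quoted above.
    gradient_checkpointing_kwargs={"use_reentrant": False},
)
```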
* fix 8-bit multi-gpu training bug see https://github.com/huggingface/trl/issues/1348
* Update dpo_llama2.py
make gradient_checkpointing_kwargs configurable.
* Update dpo_llama2.py
remove unnecessary device_map config
* format with make precommit
---------
Co-authored-by: ubuntu <lili@liveremier.ai>
Clarify that language models must be transformers models for text. This is a bit redundant with the intro description, but attempts to better address a question that comes up (issue 1257).
Closes: #1257
In the document as it stands, the best-practice recommendations seem neither consistent nor correct.
For example, the documentation links a tweet recommending that adapters be merged into a quantized model, together with a script that supposedly illustrates how to apply that recommendation. But the script actually does the opposite of what the tweet recommends, first dequantizing the model.
There are similar inconsistencies and ambiguities further in that paragraph. For example, it said that using an unquantized model would lead to lower performance (I changed it to "higher memory demand").
Overall, I updated the paragraph to improve consistency and provided links to slightly more evidence-based merging recommendations (see the sketch below).
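A minimal sketch of the safer flow implied above, assuming peft is used ("base-model" and "path/to/adapter" are placeholders): merge the adapter into an unquantized, or dequantized, copy of the base model.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model in full/half precision rather than its quantized form,
# then merge the LoRA adapter into it.
base = AutoModelForCausalLM.from_pretrained("base-model", torch_dtype="auto")
model = PeftModel.from_pretrained(base, "path/to/adapter")
merged = model.merge_and_unload()
```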
Both the argument's name and its value need to be renamed.
Otherwise we get both
NameError: name 'train_dataset' is not defined
and
TypeError: PPOTrainer.__init__() got an unexpected keyword argument 'train_dataset'
* Update dpo_trainer.py
update reference_free parameter for dpo_loss
* Update dpo_trainer for reference_free case
fixed the docstring typo and set the device parameter on the ref_logratios tensor
* Remove stray commas from test data
* Codemod Unittest assertions to bare asserts
* Make `assertAlmostEqual` tests more idiomatic
* DRY some test strings
* Update dpo_trainer.py
Added support for num_proc to tokenize the training dataset.
* Update dpo_trainer.py
added type in the new num_proc variable
* added test case
* add test case
* fix type
---------
Co-authored-by: imraviagrawal <ravi.agrawal@umass.edu>
Co-authored-by: Ravi Agrawal <raviagrawal@Ravis-MacBook-Pro.local>
* fix: only load data on main process
* define is_main_process once
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
* avoid re-initializing PartialState on train dataset check
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
* avoid re-initializing PartialState on eval dataset check
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
* process dataset on main first to take advantage of caching
* fix typo in docs
* use decorator to manage state
* Revert "fix typo in docs"
This reverts commit 0880a188812a698f7106853245ce1ba96a036831.
* Revert "Revert "fix typo in docs""
This reverts commit ff7ee33fbeedcd0032b728d86a17cfcb10e43f9b.
* Revert "use decorator to manage state"
This reverts commit 7ac7a45949f621941fedc522f0d2ca7b29367c3a.
* use is_local_main_process instead of is_main_process
* fix: use context manager instead of attribute
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
* Update trl/trainer/sft_trainer.py
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
---------
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
* fix: improve error message when `pad_token_id` is not configured
* Add test for error raised when pad_token is None
* Fix pre-commit errors
* Fix error in the test environment
* Fix FSDP error
Fixes error when `loss` field of model output is non-empty, and indexing as [0] returns loss instead of logits. Can happen with FSDP.
* Apply suggestions from code review
force return_dict
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
---------
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
* Fix reported KL in PPO trainer
previously this was always reporting the estimated KL, even when using `kl_penalty = 'full'` (or `abs`, etc).
Now we return the actual KL calculated in `compute_rewards()`, and report that.
* fix test
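A hedged sketch of the kind of KL estimators involved (function and argument names are illustrative; for the `full` case the inputs are per-token log-distributions over the whole vocabulary rather than gathered per-token log-probs):
```python
import torch
import torch.nn.functional as F

def kl_penalty(logprob: torch.Tensor, ref_logprob: torch.Tensor, kind: str = "kl") -> torch.Tensor:
    if kind == "kl":    # simple estimator: log p - log p_ref
        return logprob - ref_logprob
    if kind == "abs":
        return (logprob - ref_logprob).abs()
    if kind == "mse":
        return 0.5 * (logprob - ref_logprob).square()
    if kind == "full":  # exact KL over the vocabulary distribution
        return F.kl_div(ref_logprob, logprob, log_target=True, reduction="none").sum(-1)
    raise NotImplementedError(kind)
```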
* Fix instruction token masking
Fix instruction token masking if the first instruction is tokenized differently than the others, or in general if no instruction is detected before the first response.
* Bugfix for edge case
(in case either of the templates isn't found at all, ...idxs[0] might not exist)
* Add test for instruction masking fix
* Allow separate devices for target/ref models.
* Remove original/duplicate.
* Cleanup original, black formatting.
---------
Co-authored-by: Jon Durbin <jonathan@convai.com>
* Address issue #1122
This takes care of an inconsistency between `_prepare_packed_dataloader`
and `_prepare_non_packed_dataloader` reported in issue [#1122](https://github.com/huggingface/trl/issues/1122).
* made attention_mask field in ConstantLengthDataset a tensor
* add: support for peft in ddpo.
* revert to the original modeling_base.
* style
* specify weight_name
* explicitly specify weight_name
* fix: parameter parsing
* fix: trainable_layers.
* parameterize use_lora.
* fix one more trainable_layers
* debug
* debug
* more fixes.
* manually set unet of sd_pipeline
* make trainable_layers cleaner.
* more fixes
* remove prints.
* tester class for LoRA too.
* add peft_module_casting_to_bf16 in DPOTrainer
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* Update trl/trainer/dpo_trainer.py
---------
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
* SFT Trainer enhancements
* remove the callback `PeftSavingCallback`
* bump the version of transformers to `4.31.0`
* remove `PeftSavingCallback` from all places.
* use logprobs if it exists in the batch
* add features to tokenized batch if in data
* make get_batch_logps a static method
* add tokenize_batch_element dataset mapper
* Remove tokenize_batch method from DPODataCollator
* Initial sketch to precompute reference_logps
* run ref model via pytorch dataloader
* add a padding helper
* clean up the helper
* use logprob item()
* default behaviour
* clean up collator
* add docstring
* copy data back to cpu if needed
* use get_train_dataloader methods
* fix tests
* rename: more explicit variable name precompute_ref_log_probs
* improve comment
* update comment
* Update trl/trainer/dpo_trainer.py
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* refactor models into setup parameters
* parametrize precompute_ref_log_probs flag
* remove useless test
* Update trl/trainer/dpo_trainer.py
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* Update tests/test_dpo_trainer.py
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* Update tests/test_dpo_trainer.py
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* Update trl/trainer/dpo_trainer.py
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* Update trl/trainer/dpo_trainer.py
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* update function arg name
* distinguish between pad token_id and mask values
* fix tokenization #932 by @nrailg
* fix test
* undo test refactor
* new line
* undo breaking change
* Update token counter condition to allow Llama tokenizer
* Account for merged tokens on certain tokenizers, such as the Llama-2 tokenizer
* Update variable name to match list value when truncating response
* map function on multi-gpu and gather
* Add test cases for DPOTrainer tokenization step
* revert since we need the prepared model
* Use gather_with_metrics on ref_logps precomputation to keep original dataset size
* Add flag to keep track of when ref_logps are precomputed
* make variable names private
* formatting
* if precompute_ref_log_probs is true one can use non-peft to populate log-probs
* Use tokenizer padding token unless padding_value is set
* Move dataset.map(tokenize_batch) outside dataloader to avoid serialization errors
* eval can be none
* move to cpu to avoid gpu oom
* remove unneeded cast to float32
* remove unneeded
* fix merge
* fix merge
* fix merge
* add precompute log-prob status via tqdm
* Truncate answer if too long once the prompt has been truncated
* Add prompt_input_ids to batch to enable generation
* formatting and add lora example
* fix formatting
* Tokenize row now expects sample to have space on chosen/rejected for llama
* Revert "Tokenize row now expects sample to have space on chosen/rejected for llama"
This reverts commit dd07a10fe8c19b6ac6bbcc7b8144189756710d52.
* raise error when using zero-3 with precompute_ref_log_probs
---------
Co-authored-by: Pablo Vicente Juan <p.vicente.juan@gmail.com>
Co-authored-by: Shoaib Burq <saburq@gmail.com>
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* first attempts at refactor of dpo trainer
* removed extra stuff in prediction step
* import fixes
* label names
* all working
---------
Co-authored-by: Leandro von Werra <lvwerra@users.noreply.github.com>
* Update utils.py
update compute_accuracy to handle cases where str_chosen and str_rej get the same score, which is probably not what the developers want
* Update utils.py
updated so that only a warning is issued
* Update trl/trainer/utils.py
Co-authored-by: Leandro von Werra <lvwerra@users.noreply.github.com>
---------
Co-authored-by: Leandro von Werra <lvwerra@users.noreply.github.com>
* reward adapter loaded as part of init
more flexible, clearer args
* fixed script for multi gpu
unwrap model since it is DDP
downside, with reward adapter it seems we need to use
find_unused_parameters=True
* remove gradient from reward score calculation
* change supported_args back to None
func.__code__.co_varnames was used to count the function arguments for formatting_func. This code actually counted the function variables rather than function parameters.
* fix: dpo trainer ds config
ref_model and model shouldn't share the same DeepSpeed config, so we shouldn't modify the config directly; otherwise something goes wrong when initializing the DeepSpeed engine
* fix: import sort
import sort by isort
* adds model kwargs to SFT and DPO trainers
* adds checks for model_kwarg passing when model is not str
* changed warning to ValueError
* renames model_kwargs to model_init_kwargs
* corrects argument names in
* First unwrap the model and then process the input embeddings
* Changed base_model to base_model.model to stay consistent with peft model abstractions
* initial skeleton
* iterative trainer for decoder only
* iterative trainer unittest
* encoder_decoder support
* fix typo in unittest
* init
* fix typo
* fix init typo
* adding loggings and safety checker
* fixed minor issues
* doc
* table of contents update
* add test for seq2seq models
* change year
* adding text as step input
* precommit
* fixing typo
* run precommit
* fixing typo in safety checker
* fix text tokenization issue
* add truncate and inherit from trainer
* remove iterative config from tests
* remove iterative config from init
* fix peft model
* change truncation side based on truncation_mode
* removed iterativeconfig autodoc
* fixed typo in trainer.mdx
* remove mention of iterative config in docs
* make sure optimizer and scheduler are created
* adding max_steps to test
* remove log_stats fn
* remove compute loss
* fixing encoder decoder detection
* fix PPODecorator
* run precommit
* fix testing
* fix small typos in iterative trainer
* adapted function log and eval
* make use of forward hooks
* correctly delete attributes
* fix RM DPP issues
* revert unneeded changes
* more fixes
* fix diff
* fix
* propagate to SFT
* Update examples/scripts/reward_modeling.py
* propagate the fix on DPO trainer
* add to example scripts
* trigger CI
* add a specific dict structure to the tracker_kwargs docstring to make it easy to change tracker params such as the wandb experiment name, avoiding the need to dig into the accelerate source
* push changes
* set default dict
* refactor
* use typing extension
---------
Co-authored-by: Laura O'Mahony <lauraomahony@L-MacBook-Pro.fritz.box>
Co-authored-by: Costa Huang <costa.huang@outlook.com>
* Add whiten ops before computing advantages (see the sketch after this entry)
1. From the LLaMA 2 paper:
```
We also find it important to whiten the final linear scores (shown here by reversing the sigmoid with the logit function) in order to increase stability and balance properly with the KL penalty term (β) above.
```
2. This function is taken from [alpaca_farm](64e489c67e/src/alpaca_farm/rl/ppo_trainer.py (L86))
* Fix type def of self
---------
Co-authored-by: Lin Junpeng <linjunpeng@sensetime.com>
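A hedged sketch of a whitening op along these lines (TRL's version additionally supports masked statistics; names are illustrative):
```python
import torch

def whiten(values: torch.Tensor, shift_mean: bool = True) -> torch.Tensor:
    # Normalize to zero mean and unit variance; optionally add the mean back so
    # only the scale is standardized.
    mean, var = values.mean(), values.var()
    whitened = (values - mean) * torch.rsqrt(var + 1e-8)
    if not shift_mean:
        whitened = whitened + mean
    return whitened
```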
* add SLiC hinge loss
* fix links
* beta when loss is hinge is reciprocal of margin
* fix tests
* fix docs
* doc strings
* fix method name
* raise error if loss_type is not correct
* Update trl/trainer/dpo_trainer.py
Co-authored-by: Leandro von Werra <lvwerra@users.noreply.github.com>
* fix formatting
---------
Co-authored-by: Leandro von Werra <lvwerra@users.noreply.github.com>
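A minimal sketch of the hinge (SLiC-style) loss alternative described above, where `logits` stands for the chosen-minus-rejected log-ratio difference and `beta` acts as the reciprocal of the margin (names are illustrative):
```python
import torch

def dpo_hinge_loss(logits: torch.Tensor, beta: float) -> torch.Tensor:
    # The sigmoid DPO loss would be -logsigmoid(beta * logits); the hinge variant
    # penalizes pairs whose margin falls below 1 / beta.
    return torch.relu(1 - beta * logits)
```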
* init
* run
* Update custom eval loop to aid DPO debugging (#770)
* sample_during_eval -> generate_during_eval
* Remove unused return_tokens
* Add import utils for W&B, prevent test fails
* Optimize dataloader random batch selection
* Separate prompt and response in logs
Makes it much easier to quickly read the starts of the generations
* Simplify logging
* reset eval steps
* manual merge fixes
* revert merge
* remove self.max_length
* style
* fix max_length
---------
Co-authored-by: Tom Aarsen <37621491+tomaarsen@users.noreply.github.com>
* Update utils.py
* correctly assign instruction_template in DataCollatorForCompletionOnlyLM
* correctly use instruction_token_ids in DataCollatorForCompletionOnlyLM
* DataCollatorForCompletionOnlyLM: fix instruction_template / response_template type check: handle cases where instruction_template is None
* make precommit
* Test DataCollatorForCompletionOnlyLM with pre-tokenized instruction_template
* Start adding margin to RM training
* Fix typo and cleanup
* Fix incompatibilities when not using margin
* Format using 'make precommit'
* Add documentation and test for reward trainer
* Run 'make precommit'
* Update docs/source/reward_trainer.mdx
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* Fix missed merge conflict in reward trainer docs
---------
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
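A minimal sketch of a margin-aware pairwise reward loss as described above (function and argument names are illustrative, not the exact RewardTrainer code):
```python
from typing import Optional

import torch
import torch.nn.functional as F

def pairwise_reward_loss(
    rewards_chosen: torch.Tensor,
    rewards_rejected: torch.Tensor,
    margin: Optional[torch.Tensor] = None,
) -> torch.Tensor:
    diff = rewards_chosen - rewards_rejected
    if margin is not None:
        # With a per-example margin, the chosen response has to beat the rejected
        # one by at least that much before the loss goes to zero.
        diff = diff - margin
    return -F.logsigmoid(diff).mean()
```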
This change avoids setting report_to="all" (the default behavior in
transformers v4), which could lead to unexpected error messages for
inexperienced users. Note that the default value of report_to will
change anyway to "none" in transformers v5.
* docs: add initial version of docs for `PPOTrainer`
* Apply suggestions from code review Leandro
Co-authored-by: Leandro von Werra <lvwerra@users.noreply.github.com>
* Apply suggestions from code review
Co-authored-by: Leandro von Werra <lvwerra@users.noreply.github.com>
* updated docs based on feedback leandro
- specified reference to reward model
- added batched generator
- added line of saving model
- remove reference model
* Apply suggestions from code review
Co-authored-by: Leandro von Werra <lvwerra@users.noreply.github.com>
---------
Co-authored-by: Leandro von Werra <lvwerra@users.noreply.github.com>
It took a while to understand why the number of zero-masked tokens is one less than the length of the query tokens.
If I understood correctly, it is because the first logit (and state value) from the outputs refers to the second token in the query (see the sketch below).
Hope this comment can be helpful to others who run into the same question on a first-pass reading of the code :)
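A minimal sketch of that off-by-one alignment, assuming standard causal-LM outputs (names are illustrative):
```python
import torch

def shifted_logprobs(logits: torch.Tensor, input_ids: torch.Tensor) -> torch.Tensor:
    # logits[:, t] predicts token t + 1, so a sequence of length T yields only
    # T - 1 usable log-probs (and state values), hence the "one less" masking.
    logprobs = logits[:, :-1].log_softmax(dim=-1)
    return torch.gather(logprobs, 2, input_ids[:, 1:].unsqueeze(-1)).squeeze(-1)
```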
* update to `prepare_model_for_kbit_training`
from the deprecated `prepare_model_for_int8_training`,
and add `use_gradient_checkpointing=args.gradient_checkpointing` to
automatically follow the gradient-checkpointing choice;
this is also the workaround for #694 (see the sketch after this entry)
* workaround for gradient checkpointing issue
Calling model.gradient_checkpointing_enable() twice causes issues.
This workaround calls it in prepare_model_for_kbit_training and then
sets the arg to False to make sure it isn't called again in the
Hugging Face Trainer inner loop.
It also changes the stack_llama_2 SFT trainer to use the correct device map for
DDP training so that you can test this issue.
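A minimal sketch of the updated call (the wrapper function is illustrative):
```python
from peft import prepare_model_for_kbit_training

def prepare(model, gradient_checkpointing: bool = True):
    # Replaces the deprecated prepare_model_for_int8_training and enables
    # gradient checkpointing here, so it is not enabled a second time by the
    # Hugging Face Trainer.
    return prepare_model_for_kbit_training(
        model, use_gradient_checkpointing=gradient_checkpointing
    )
```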
* fix PeftConfig loading from a remote repo.
* failed to catch hf_hub_download() EntryNotFoundError.
At least in huggingface-hub 0.10.1, the error for "not found" is:
huggingface_hub.utils._errors.EntryNotFoundError: 404 Client Error
* pass precommit checks.
* replace some bare excepts with specific codes
* catch LocalEntryNotFoundError additionally.
* Broken first pre-draft
* Change structure to leverage user-definition of pipeline
- reward function, pipeline and scheduler will be left to the user to define
- pipeline and scheduler contract interfaces is what the framework will define
- none of this actually works
* Incremental progress: trying to get the set-up running e2e
* Incremental progress: successfully running code
* Incremental progress: running setup
Next steps: fix accelerate gradient accumulation assertion error when we set value > 1
* Formatting and code standards
* Incremental prog: break down code a bit
- new config flag to notify code of async reward fetching
- break off image handling code and throw it on to user to define how to handle it
- more code restructuring
* Incremental progress:
1. More code sectioning off into own methods (more for readability than anything else)
* Incremental progress:
1. clear up contracts
2. type the reward function and prompt function
* Code shuffling and expansion of tracker, accelerator config args to beyond wandb
* More small additions
Add tensorboard logging function
Remove wandb logging function for now
Consolidate the data that gets thrown to the logging function
Add README
* Formatting
* Formatting
* Remove print statement
Make tensorboard tracking the sole tracking for the training example
* 1. start of testing
2. more refactoring
3. start of docstrings
4. parameter rename
* Basic Tests
Formatting
* Docs according to the norm
* Docs, credits and rename file
* docs and corrections
* Put example config to respectable state
* Add recent run params
* Correct the name of the library
* Move requirements to EXTRAS
* - Add license banners
- Guard import of DDPO functions with if_diffusers_available
- doc strings for output types
* Add snippet to pull weights from huggingface + banner
* Test if passes on CI/CD
* Minor refactor
* Test dummy unet
* Possible fix for randomly disappearing attribute
* Shuffling arrangement in hopes of meeting memory requirements
* Proper Names
* Appease windows memory allocator issues for the cpu device
* Remove print statements
* Update docs/source/ddpo_trainer.mdx
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update docs/source/ddpo_trainer.mdx
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Add docstrings and correct url
* Spelling and grammar
* Add more documentation and commandline parsing for example script
* Markdown syntax correction
* Revert accidentally committed file and put the correct one
* More docs
* Remove subclassing and add docs for leftover subclassing
* Put back subclassing
* Reward metadata and more docs
* Remove save_load_save flag
* Grammar
* Update trl/trainer/ddpo_trainer.py
Co-authored-by: Leandro von Werra <lvwerra@users.noreply.github.com>
* Update tests/test_ddpo_trainer.py
Co-authored-by: Leandro von Werra <lvwerra@users.noreply.github.com>
* Update setup.py
Co-authored-by: Leandro von Werra <lvwerra@users.noreply.github.com>
* Update examples/scripts/stable_diffusion_tuning.py
Co-authored-by: Leandro von Werra <lvwerra@users.noreply.github.com>
* Edits to the readme for DDPO
* Renamed modelling_sd_base to modeling_sd_base
* Insert try and catch for bitsandbytes import
* Change to smaller model
* Correct tolerance for floating point comparison
* Remove dummy unet and switch the check to isfinite
* 1. Expand interface to ensure other Stable Diffusion pipelines could be covered
2. remove extra identification
* 1. Remove most of the asserts except for one and add value error
2. Remove default run name
* Remove progress bar
* Docs
* Put back progress bar
* 1. Revert progress bar deletion completely
2. grammar
3. relocate line
* Experiment
* Remove experiment parts and format properly
* Change formatting and edit info in docs
* Grammar
* Refactor out most of nitty gritty of loading/saving from trainer to example model
Readme addition
* Docs additions
* 1. Proper formatting for the test file
2. incorporation of pull-from-hub with a local fallback if it fails
3. docstrings for the interface
4. highlight in the trainer that this is only ready for SD pipelines
* Resources for before and after
* Attempt at embedding images
* Post testing example script
* Consistent naming and document edits in light of new args
* Remove resources and add CDN links in html in doc file
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Leandro von Werra <lvwerra@users.noreply.github.com>
* Update dpo_trainer.py
Make ref_model optional.
* add tests for ref_model=None
* better handling for ref_model=None
* Update dpo_trainer.py
Correct docstring
* move instantiation of self.ref_model closer to model
* use .disable_adapters instead of .get_base_model
* handle ref_model=None in get_batch_samples
* fix failing test in dpo_trainer due to disable_dropout_in_model
* Update trl/trainer/dpo_trainer.py
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
---------
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
* Add reward/score scaling/normalization/clipping
* Run pre-commit to fix styles and remove some dupe code
* Make sure score module and pretrained_model have the same dtype
* Add multi_adapter_rl_v2.py
* Add log_with
* Add more verbose help message for use_score_norm
* Fix score clipping for float16
* Minor fix
* WIP
* improve inference docs
* improve training faq
* update toctree
* fix toctree
* fix improve blog
* improve blog
* fix customization
* reword faq a bit
* reword inference a bit
* add references back
* integrate feedback from code review
* fix link in html
* Allow tokenized ids in DataCollatorForCompletionOnlyLM. Add test and docs
* Formatting
* Documentation
* Remove unused code from test
---------
Co-authored-by: Ivan Sanchez <ivan.sanchez@zyte.com>
* initial stack-llama-2 scripts
* removed unused function
* add accelerate
* link to stack-llama-2 code
* running the model
* pre-commit fixes
* use the merge_peft script
* Add section on logged metrics
* refactor grad accum
* quick fix
* use correct place to step optim
* push changes
* cleanup and fix division by zero in `masked_var`
* revert back changes
* use unbiased var
* deal with division by zero
* add test case
* calculate advantage only once
* format
* add warning
* add more warnings
* quick fix
* remove unhelpful warning
* fix test cases
* fix test cases
* bump version given the breaking change
* black
* refactor
* update test cases
* error out
* push changes
* remove exact div
* add comments
* Add unit test to DataCollatorForCompletionOnlyLM to reproduce the bug.
* Change comparison target from examples[i][input_ids] to batch[labels][i] in DataCollatorForCompletionOnlyLM
* added DataCollatorForChatCompletionOnlyLM
* added simple test
* merged the two collators and fixed ### in completion
* fix response template
* fixing ordering in test
* quality
* fixed minor comments & make doc
* chat test back
* Update tests/test_sft_trainer.py
---------
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
* remove response/pairs from the DPO side
* Simplify get_hh helper function
* removed unused import
* update tests and docs for dpo_trainer
---------
Co-authored-by: Tom Aarsen <Cubiegamedev@gmail.com>
Co-authored-by: Shoaib Burq <saburq@gmail.com>
* added sfttrainer and rmtrainer example scripts.
* added few lines in the documentation.
* moved notebooks.
* delete `examples/summarization`
* remove from docs as well
* refactor sentiment tuning
* more refactoring.
* updated docs for multi-adapter RL.
* add research projects folder
* more refactor
* refactor docs.
* refactor structure
* add correct scripts all over the place
* final touches
* final touches
* updated documentation from feedback.
This PR fixes a bug in `_get_current_device()` where the global process index was being returned instead of the local one.
With this fix, it is possible to run training in **multi-node** environments and avoid the dreaded `RuntimeError: CUDA error: invalid device ordinal` :)
* Debug the tortuous logic in `_prepare_dataset` function
There are two issues with the previous `_prepare_dataset` function.
1. Tortuous and burdensome logic: the `is_already_dataset` variable is confusing and not helpful, so remove it.
2. The comments and the logic do not match.
For instance, in the previous version, the comment said "check if torch dataset ... and do nothing". However, when `dataset` is a torch.utils.data.Dataset and `packing = True`, it still falls into the `_prepare_non_packed_dataloader(...)` call.
The corrected version does nothing if the dataset is already a torch dataloader/dataset/ConstantLengthDataset (see the sketch after this entry).
* Lint: sft_trainer.py
* Lint empty line
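A minimal sketch of the simplified control flow, with simplified names (the real method takes more arguments):
```python
import torch

def prepare_dataset(dataset, packing, prepare_packed, prepare_non_packed):
    # Already a torch-style dataset (or ConstantLengthDataset): do nothing.
    if isinstance(dataset, (torch.utils.data.IterableDataset, torch.utils.data.Dataset)):
        return dataset
    # Otherwise dispatch purely on the packing flag.
    return prepare_packed(dataset) if packing else prepare_non_packed(dataset)
```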
* v1 of alpaca datacollator
* make sure to match the response tokens
* add test
* add it in main init
* add check
* adapt test
---------
Co-authored-by: Costa Huang <costa.huang@outlook.com>
* First draft of best-of-n sampler class
* Formatting
* Add best-of-n class to init
* Rearrange files
* Correction
* Make sure input query is in shape
* check for numpy.ndarray type
* Fix for shapes and types AND linter fixes
* Make reward pipeline a callback for more broader application
* Documentation for best-of-n sampler class usage
* Docs update for best-of-n class
* Doc fixes for best-of-n sampler class
* Remove colon from new addition
* Change user callback output type and associated side-effects of said change
* Relocate param because of collision
* Documentation update
* Make input param keyword easier to grasp
* Remove comments and add docstrings
* Tests and fixes for best_of_n sampler class
* Change input arg name
* Formatting
* Removed unnecessary cloning
* Implement evaluation/prediction for RewardTrainer
* Stick with unittest assertions
* Perform prediction forward calls without gradient
* Remove Literal to preserve Python 3.7 support
I recognize that I can also import from typing_extensions with a try-except,
but that is a bit overkill for this I feel.
* Remove eval_steps=1 to prevent flaky test on CI
The flaky test is caused by a division by zero when dividing by the runtime.
This is done on the transformers side, so it's not a TRL issue.
In practice, this won't happen - it only happens because both the model
and dataset are tiny.
* better reward modelling
tokenizer can be separately specified from model
removed old llama tokenizer hacks
evaluate after first step option to make nicer graphs
black + isort
* removed tokenizer hacks from supervised ft
* black and flake8
* Fixed some type annotations of trl.trainer.PPOTrainer
- Ref model should be Optional
- The usual annotation for the Huggingface tokenizers is PreTrainedTokenizerBase. Not using that messes up people's annotation checks.
- Fixed the comments wrt the other two points
* fix quality and style
* synced & requality & restyled
* fixed rl training args
added steps argument and break to respect max training epochs
added more PPOConfig args to script args
removed llama tokenizer hacks
removed extra args in dataset
changed to llamatokenizer from autotokenizer
black + isort
* black and flake8
* style, quality, and switch back to AutoTokenizer
* Fix bug when loading local peft model
Fix bug in https://github.com/lvwerra/trl/issues/341
* Fix loading bug when loading a LoRA model
Fix the loading bug when loading a LoRA model but not resuming training
1. Implement the fix logic described in https://github.com/lvwerra/trl/pull/342#pullrequestreview-1422298054
2. Set peft lora weight to trainable.
* Remove is_trainable
Leave is_trainable to future PR.
* add test_load_pretrained_peft
Check that the model saved with peft class interface can be loaded properly.
* Create best_of_n.ipynb
* First draft
* Refactor as ref vs ppo vs non-ppo
* Changed notebook location and added README to explain motivation
* 1. Spelling and formatting refactor
2. Minor refactor of notebook
* Formatting of notebook
* Give a key to the wandb PPOConfig config entry
There is a lot of stuff with very generic keys in the `PPOConfig` dict, and the user may have logged a `wandb` config dict elsewhere.
I know I had that problem. To counter that, I pass the PPOConfig dict in a dict under the key `trl_ppo_trainer_config`, to prevent collisions & be very clear.
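A minimal sketch of the namespacing described above, assuming an accelerate-managed wandb tracker (the project name and config values are placeholders):
```python
from accelerate import Accelerator

accelerator = Accelerator(log_with="wandb")
accelerator.init_trackers(
    project_name="trl-ppo",
    # Nest the PPO configuration under its own key to avoid clashing with other
    # generically named entries in the wandb config.
    config={"trl_ppo_trainer_config": {"learning_rate": 1.41e-5, "batch_size": 128}},
)
```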
* did black --line-length 119 --target-version py38 examples tests trl
isort examples tests trl and black --check --line-length 119 --target-version py38 examples tests trl
isort --check-only examples tests trl
flake8 examples tests trl
* Moving `total_ppo_epochs`, `forward_batch_size` and `log_with` to the post-init method and letting the dataclass automatically assign the other member variables.
* Using default factory functions for initializing dict
* Using fields + metadata for args description
* Reformatting the file using black(jupyter)
* Trying styling checks again
* Adding new args from PR 238
---------
Co-authored-by: gaurav.vi <gaurav.vi@media.net>
description: Submit a bug report to help us improve TRL
labels: ["bug"]
body:
- type: markdown
attributes:
value: |
Thanks for taking the time to fill out this bug report! 🤗
🚩 If it is your first time submitting, be sure to check our [bug report guidelines](https://github.com/huggingface/trl/blob/main/CONTRIBUTING.md#did-you-find-a-bug)
- type: textarea
id: reproduction
validations:
required: true
attributes:
label: Reproduction
description: |
Please provide a code sample that reproduces the problem you ran into. It can be a Colab link or just a code snippet.
If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
value: |
```python
from trl import ...
```
outputs:
```
Traceback (most recent call last):
File "example.py", line 42, in <module>
...
```
- type: textarea
id: system-info
attributes:
label: System Info
description: |
Please provide information about your system: platform, Python version, PyTorch version, Transformers version, devices, TRL version, ...
You can get this information by running `trl env` in your terminal.
placeholder: Copy-paste the output of `trl env`
validations:
required: true
- type: checkboxes
id: terms
attributes:
label: Checklist
description: |
Before submitting, please confirm that you've completed each of the following.
If an item doesn't apply to your issue, check it anyway to show you've reviewed it.
options:
- label: "I have checked that my issue isn't already filed (see [open issues](https://github.com/huggingface/trl/issues?q=is%3Aissue))"
required: true
- label: "I have included my system information"
required: true
- label: "Any code provided is minimal, complete, and reproducible ([more on MREs](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks))"
required: true
- label: "Any code provided is properly formatted in code blocks, (no screenshot, [more on code blocks](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks))"
description: Submit a proposal/request for a new TRL feature
labels: ["Feature request"]
body:
- type: textarea
id: feature-request
validations:
required: true
attributes:
label: Feature request
description: |
A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist.
- type: textarea
id: motivation
validations:
required: true
attributes:
label: Motivation
description: |
Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too.
- type: textarea
id: contribution
validations:
required: true
attributes:
label: Your contribution
description: |
Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD [readme](https://github.com/huggingface/trl/blob/main/CONTRIBUTING.md)
description: Submit a proposal/request to implement a new trainer for a post-training method
labels: ["New trainer"]
body:
- type: textarea
id: description-request
validations:
required: true
attributes:
label: Method description
description: |
Put any and all important information relative to the method
- type: checkboxes
id: information-tasks
attributes:
label: Open source status
description: |
Please note that if the method implementation isn't available or model weights with training datasets aren't available, we are less likely to implement it in `trl`.
options:
- label: "The method implementation is available"
- label: "The model weights are available"
- label: "The training datasets are available"
- type: textarea
id: additional-info
attributes:
label: Provide useful links for the implementation
description: |
Please provide information regarding the implementation, the weights, and the authors.
Please mention the authors by @gh-username if you're aware of their usernames.
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly. They may suggest changes to make the code even better.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/trl/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a GitHub issue? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes?
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
abstract: "With trl you can train transformer language models with Proximal Policy Optimization (PPO). The library is built on top of the transformers library by \U0001F917 Hugging Face. Therefore, pre-trained language models can be directly loaded via transformers. At this point, most decoder and encoder-decoder architectures are supported."
Everyone is welcome to contribute, and we value everybody's contribution. Code contributions are not the only way to help the community. Answering questions, helping others, and improving the documentation are also immensely valuable.
Before you start contributing make sure you installed all the dev tools:
It also helps us if you spread the word! Reference the library in blog posts about the awesome projects it made possible, shout out on Twitter every time it has helped you, or simply ⭐️ the repository to say thank you.
However you choose to contribute, please be mindful and respect our [code of conduct](https://github.com/huggingface/trl/blob/main/CODE_OF_CONDUCT.md).
**This guide was heavily inspired by the awesome [scikit-learn guide to contributing](https://github.com/scikit-learn/scikit-learn/blob/main/CONTRIBUTING.md).**
## Ways to contribute
There are several ways you can contribute to TRL:
* Fix outstanding issues with the existing code.
* Submit issues related to bugs or desired new features.
* Implement trainers for new post-training algorithms.
* Contribute to the examples or the documentation.
If you don't know where to start, there is a special [Good First Issue](https://github.com/huggingface/trl/labels/%F0%9F%91%B6%20good%20first%20issue) listing. It will give you a list of open issues that are beginner-friendly and help you start contributing to open-source. The best way to do that is to open a Pull Request and link it to the issue that you'd like to work on. We try to give priority to opened PRs as we can easily track the progress of the fix, and if the contributor does not have time anymore, someone else can take the PR over.
For something slightly more challenging, you can also take a look at the [Good Second Issue](https://github.com/huggingface/trl/labels/Good%20Second%20Issue) list. In general though, if you feel like you know what you're doing, go for it and we'll help you get there! 🚀
> All contributions are equally valuable to the community. 🥰
Before you start contributing make sure you have installed all the dev tools:
```bash
pip install -e ".[dev]"
pip install -e .[dev]
```
## Did you find a bug?
## Fixing outstanding issues
* Ensure the bug was not already reported by searching on GitHub under Issues.
* If you're unable to find an open issue addressing the problem, open a new one. Be sure to include a title and clear description, as much relevant information as possible, and a code sample or an executable test case demonstrating the expected behavior that is not occurring.
* Be sure to add the complete error messages.
If you notice an issue with the existing code and have a fix in mind, feel free to [start contributing](#submitting-a-pull-request-pr) and open a Pull Request!
#### Did you write a patch that fixes a bug?
## Submitting a bug-related issue or feature request
* Open a new GitHub pull request with the patch.
* Ensure that your PR includes a test that fails without your patch and passes with it.
* Ensure the PR description clearly describes the problem and solution. Include the relevant issue number if applicable.
Do your best to follow these guidelines when submitting a bug-related issue or a feature request. It will make it easier for us to come back to you quickly and with good feedback.
## PR submission guidelines
### Did you find a bug?
* Keep each PR focused. While it's more convenient, do not combine several unrelated fixes together. Create as many branches as needed to keep each PR focused.
* Do not mix style changes/fixes with "functional" changes. It's very difficult to review such PRs, and they will most likely get rejected.
* Do not add/remove vertical whitespace. Preserve the original style of the file you edit as much as you can.
* Do not turn an already submitted PR into your development playground. If, after you submit a PR, you discover that more work is needed, close the PR, do the required work and then submit a new PR. Otherwise each of your commits requires attention from the maintainers of the project.
* If, however, you submitted a PR and received a request for changes, you should proceed with commits inside that PR, so that the maintainer can see the incremental fixes and won't need to review the whole PR again. In the exceptional case where you realize it'll take many, many commits to complete the requests, it's probably best to close the PR, do the work and then submit it again. Use common sense where you'd choose one way over another.
The TRL library is robust and reliable thanks to users who report the problems they encounter.
### Before you submit a PR
Before you report an issue, we would really appreciate it if you could **make sure the bug was not already reported** (use the search bar on GitHub under Issues). Your issue should also be related to bugs in the library itself, and not your code.
Once you've confirmed the bug hasn't already been reported, please include the following information in your issue so we can quickly resolve it:
* Your **OS type and version**, **Python**, **PyTorch**, **TRL** and **Transformers** versions.
* A short, self-contained, code snippet that allows us to reproduce the bug in less than 30s.
* The *full* traceback if an exception is raised.
* Attach any other additional information, like screenshots, you think may help.
To get the OS and software versions automatically, run the following command:
```bash
trl env
```
First you want to make sure that all the tests pass:
```bash
make test
```
Then before submitting your PR make sure the code quality follows the standards. You can run the following command to format and test:
### Do you want a new feature?
If there is a new feature you'd like to see in TRL, please open an issue and describe:
1. What is the *motivation* behind this feature? Is it related to a problem or frustration with the library? Is it a feature related to something you need for a project? Is it something you worked on and think it could benefit the community?
Whatever it is, we'd love to hear about it!
2. Describe your requested feature in as much detail as possible. The more you can tell us about it, the better we'll be able to help you.
3. Provide a *code snippet* that demonstrates the feature's usage.
4. If the feature is related to a paper, please include a link.
If your issue is well written we're already 80% of the way there by the time you create it.
## Do you want to implement a new trainer?
New post-training methods are published frequently and those that satisfy the following criteria are good candidates to be integrated into TRL:
* **Simplicity:** Does the new method achieve similar performance as prior methods, but with less complexity? A good example is Direct Preference Optimization (DPO) [[Rafailov et al, 2023]](https://huggingface.co/papers/2305.18290), which provided a simpler and compelling alternative to RLHF methods.
* **Efficiency:** Does the new method provide a significant improvement in training efficiency? A good example is Odds Ratio Preference Optimization (ORPO) [[Hong et al, 2024]](https://huggingface.co/papers/2403.07691), which utilizes a similar objective as DPO but requires half the GPU VRAM.
Methods that only provide incremental improvements at the expense of added complexity or compute costs are unlikely to be included in TRL.
If you want to implement a trainer for a new post-training method, first open an issue and provide the following information:
* A short description of the method and a link to the paper.
* Link to the implementation if it is open-sourced.
* Link to model weights trained with the method if they are available.
Based on the community and maintainer feedback, the next step will be to implement the trainer and config classes. See the following examples for inspiration:
* Paired preference optimisation: [`dpo_trainer.py`](./trl/trainer/dpo_trainer.py) and [`dpo_config.py`](./trl/trainer/dpo_config.py)
* RL-based optimisation: [`rloo_trainer.py`](./trl/trainer/rloo_trainer.py) and [`rloo_config.py`](./trl/trainer/rloo_config.py)
* Online optimisation: [`online_dpo_trainer.py`](./trl/trainer/online_dpo_trainer.py) and [`online_dpo_config.py`](./trl/trainer/online_dpo_config.py)
## Do you want to add documentation?
We're always looking for improvements to the documentation that make it more clear and accurate. Please let us know how the documentation can be improved, such as typos, dead links, and any missing, unclear, or inaccurate content... We'll be happy to make the changes or help you contribute if you're interested!
## Submitting a pull request (PR)
Before writing code, we strongly advise you to search through the existing PRs or issues to make sure that nobody is already working on the same thing. If you are unsure, it is always a good idea to open an issue to get some feedback.
You will need basic `git` proficiency to be able to contribute to TRL. `git` is not the easiest tool to use but it has the greatest manual. Type `git --help` in a shell and enjoy. If you prefer books, [Pro Git](https://git-scm.com/book/en/v2) is a very good reference.
Follow these steps to start contributing:
1. Fork the [repository](https://github.com/huggingface/trl) by clicking on the 'Fork' button on the repository's page. This creates a copy of the code under your GitHub user account.
2. Clone your fork to your local disk, and add the base repository as a remote. The following command assumes you have your public SSH key uploaded to GitHub. See the following guide for more [information](https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository).
3. Create a new branch to hold your development changes, and do this for every new PR you work on.
Start by synchronizing your `main` branch with the `upstream/main` branch (more details in the [GitHub Docs](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/syncing-a-fork)):
```bash
git checkout main
git fetch upstream
git merge upstream/main
```
Once your `main` branch is synchronized, create a new branch from it:
```bash
git checkout -b a-descriptive-name-for-my-changes
```
**Do not** work on the `main` branch.
4. Set up a development environment by running the following command in a conda or a virtual environment you've created for working on this library:
```bash
pip install -e .[dev]
```
(If TRL was already installed in the virtual environment, remove it with `pip uninstall trl` before reinstalling it.)
Alternatively, if you are using [Visual Studio Code](https://code.visualstudio.com/Download), the fastest way to get set up is by using the provided Dev Container. Check [the documentation on how to get started with dev containers](https://code.visualstudio.com/docs/remote/containers).
5. Develop the features on your branch.
As you work on the features, you should make sure that the test suite passes. You should run the tests impacted by your changes like this (see below an explanation regarding the environment variable):
```bash
pytest tests/<TEST_TO_RUN>.py
```
> The following commands leverage the `make` utility.
You can also run the full suite with the following command.
```bash
make test
```
TRL relies on `ruff` for maintaining consistent code formatting across its source files. Before submitting any PR, you should apply automatic style corrections and run code verification checks.
We provide a `precommit` target in the `Makefile` that simplifies this process by running all required checks and optimizations on only the files modified by your PR.
To apply these checks and corrections in one step, use:
```bash
make precommit
```
This command runs the following:
* Executes `pre-commit` hooks to automatically fix style issues with `ruff` and other tools.
* Runs additional scripts such as adding copyright information.
If you prefer to apply the style corrections separately or review them individually, the `pre-commit` hook will handle the formatting for the files in question.
Once you're happy with your changes, add changed files using `git add` and make a commit with `git commit` to record your changes locally:
6. Once you are satisfied (**and the checklist below is happy too**), go to the webpage of your fork on GitHub. Click on 'Pull request' to send your changes to the project maintainers for review.
7. It's ok if maintainers ask you for changes. It happens to core contributors too! To ensure everyone can review your changes in the pull request, work on your local branch and push the updates to your fork. They will automatically appear in the pull request.
### Checklist
1. The title of your pull request should be a summary of its contribution;
2. If your pull request addresses an issue, please mention the issue number in the pull request description to make sure they are linked (and people consulting the issue know you are working on it);
3. To indicate a work in progress please prefix the title with `[WIP]`, or mark the PR as a draft PR. These are useful to avoid duplicated work, and to differentiate it from PRs ready to be merged;
4. Make sure existing tests pass;
5. Add high-coverage tests. No quality testing = no merge.
### Tests
An extensive test suite is included to test the library behavior and several examples. Library tests can be found in
the [tests folder](https://github.com/huggingface/trl/tree/main/tests).
We use `pytest` to run the tests. From the root of the
repository here's how to run tests with `pytest` for the library:
```bash
make style && make quality
python -m pytest -sv ./tests
```
That's how `make test` is implemented (without the `pip install` line)!
You can specify a smaller set of tests to test only the feature
you're working on.
## Do you want to contribute to the documentation?
* Docs are in the `docs/` folder and can be updated there.
### Default values guidelines
1. **Use defaults when appropriate**:
Provide default values unless the parameter's value varies significantly by use case. For example, datasets or models should not have defaults, but parameters like `learning_rate` should.
2. **Prioritize proven defaults**:
Default values should align with those recommended in the original paper or method. Alternatives require strong evidence of superior performance in most cases.
3. **Ensure safety and predictability**:
Defaults must be safe, expected and reliable. Avoid settings that could lead to surprising outcomes, such as excessive memory usage or poor performance in edge cases.
4. **Balance consistency and flexibility**:
Aim for consistent defaults across similar functions or methods. However, consistency should not be preferred to point 2 or 3.
5. **Opt-in for new features**:
Do not enable new features or improvements (e.g., novel loss functions) by default. Users should explicitly opt-in to use these.
### Writing documentation
High-quality documentation is crucial for maintaining a project that is easy to use, understand, and extend. When adding new features, ensure they are thoroughly documented to maintain consistency and clarity throughout the project.
To illustrate what good documentation looks like, here are the conventions we follow, together with an excerpt from a well-documented function (shown after this list):
* **Line Wrapping:** Applied a consistent line wrap at column 120 to improve readability.
* **Definite Articles:** Removed definite articles where possible to streamline language. (e.g., changed "The string to replicate" to "String to replicate")
* **Type Annotations:**
* Always include type definitions, indicating if a parameter is optional and specifying the default value.
* **String Defaults:**
* Ensured that default string values are wrapped in double quotes:
```txt
defaults to `"foo"`
```
* **Dictionary Typing:**
* Replaced generic `dict` type hints with more explicit `dict[str, Any]` to clarify expected key-value pairs.
* **Default Value Formatting:**
* Consistently surrounded default values with backticks for improved formatting:
```txt
defaults to `4`
```
* **Sub-sectioning:** When the number of arguments is large, consider breaking them into sub-sections for better readability.
```python
        include_variance (`bool`, *optional*, defaults to `False`):
            Whether to include the variance of the dataset in the results.

    Returns:
        `dict[str, float]`:
            A dictionary containing calculated statistics such as mean, median, and optionally variance.
    """
    ...
```
### Deprecation and backward compatibility
Our approach to deprecation and backward compatibility is flexible and based on the feature’s usage and impact. Each deprecation is carefully evaluated, aiming to balance innovation with user needs.
When a feature or component is marked for deprecation, its use will emit a warning message. This warning will include:
* **Transition Guidance**: Instructions on how to migrate to the alternative solution or replacement.
* **Removal Version**: The target version when the feature will be removed, providing users with a clear timeframe to transition.
Example:
```python
warnings.warn(
"The `Trainer.foo` method is deprecated and will be removed in version 0.14.0. "
"Please use the `Trainer.bar` class instead.",
FutureWarning,
)
```
The deprecation and removal schedule is based on each feature's usage and impact, with examples at two extremes:
* **Experimental or Low-Use Features**: For a feature that is experimental or has limited usage, backward compatibility may not be maintained between releases. Users should therefore anticipate potential breaking changes from one version to the next.
* **Widely-Used Components**: For a feature with high usage, we aim for a more gradual transition period of approximately **5 months**, generally scheduling deprecation around **5 minor releases** after the initial warning.
These examples represent the two ends of a continuum. The specific timeline for each feature will be determined individually, balancing innovation with user stability needs.
### Working with warnings
Warnings play a critical role in guiding users toward resolving potential issues, but they should be used thoughtfully to avoid unnecessary noise. Unlike logging, which provides informational context or operational details, warnings signal conditions that require attention and action. Overusing warnings can dilute their importance, leading users to ignore them entirely.
#### Definitions
* **Correct**: An operation is correct if it is valid, follows the intended approach, and aligns with the current best practices or guidelines within the codebase. This is the recommended or intended way to perform the operation.
* **Supported**: An operation is supported if it is technically valid and works within the current codebase, but it may not be the most efficient, optimal, or recommended way to perform the task. This includes deprecated features or legacy approaches that still work but may be phased out in the future.
#### Choosing the right message
* **Correct → No warning**:
If the operation is fully valid and expected, no message should be issued. The system is working as intended, so no warning is necessary.
* **Correct but deserves attention → No warning, possibly a log message**:
When an operation is correct but uncommon or requires special attention, providing an informational message can be helpful. This keeps users informed without implying any issue. If available, use the logger to output this message. Example:
```python
logger.info("This is an informational message about a rare but correct operation.")
```
* **Correct but very likely a mistake → Warning with option to disable**:
In rare cases, you may want to issue a warning for a correct operation that’s very likely a mistake. In such cases, you must provide an option to suppress the warning. This can be done with a flag in the function. Example:
```python
def my_function(foo, bar, _warn=True):
if foo == bar:
if _warn:
logger.warning("foo and bar are the same, this is likely a mistake. Ignore this warning by setting `_warn=False`.")
# Do something
```
* **Supported but not correct → Warning**:
If the operation is technically supported but is deprecated, suboptimal, or could cause future issues (e.g., conflicting arguments), a warning should be raised. This message should be actionable, meaning it must explain how to resolve the issue. Example:
```python
def my_function(foo, bar):
if foo and bar:
logger.warning("Both `foo` and `bar` were provided, but only one is allowed. Ignoring `foo`. Please pass only one of these arguments.")
# Do something
```
* **Not supported → Exception**:
If the operation is invalid or unsupported, raise an exception. This indicates that the operation cannot be performed and requires immediate attention. Example:
```python
def my_function(foo, bar):
if foo and bar:
raise ValueError("Both `foo` and `bar` were provided, but only one is allowed. Please pass only one of these arguments.")
```
By following this classification, you ensure that warnings, information, and exceptions are used appropriately, providing clear guidance to the user without cluttering the system with unnecessary messages.
> Train transformer language models with reinforcement learning.
## What is it?
With `trl` you can train transformer language models with Proximal Policy Optimization (PPO). The library is built on top of the [`transformers`](https://github.com/huggingface/transformers) library by 🤗 Hugging Face. Therefore, pre-trained language models can be directly loaded via `transformers`. At this point, most decoder and encoder-decoder architectures are supported.
**Highlights:**
- `PPOTrainer`: A PPO trainer for language models that just needs (query, response, reward) triplets to optimise the language model.
- `AutoModelForCausalLMWithValueHead` & `AutoModelForSeq2SeqLMWithValueHead`: A transformer model with an additional scalar output for each token which can be used as a value function in reinforcement learning.
- Example: Train GPT2 to generate positive movie reviews with a BERT sentiment classifier.
## How it works
Fine-tuning a language model via PPO consists of roughly three steps:
1. **Rollout**: The language model generates a response or continuation based on a query, which could be the start of a sentence.
2. **Evaluation**: The query and response are evaluated with a function, model, human feedback, or some combination of them. The important thing is that this process should yield a scalar value for each query/response pair.
3. **Optimization**: This is the most complex part. In the optimisation step the query/response pairs are used to calculate the log-probabilities of the tokens in the sequences. This is done with the model that is trained and a reference model, which is usually the pre-trained model before fine-tuning. The KL-divergence between the two outputs is used as an additional reward signal to make sure the generated responses don't deviate too far from the reference language model. The active language model is then trained with PPO.
<a href="https://huggingface.co/trl-lib"><img alt="Hugging Face Hub" src="https://img.shields.io/badge/🤗%20Hub-trl--lib-yellow"></a>
## 🎉 What's New
**OpenEnv Integration:** TRL now supports **[OpenEnv](https://huggingface.co/blog/openenv)**, the open-source framework from Meta for defining, deploying, and interacting with environments in reinforcement learning and agentic workflows.
Explore how to seamlessly integrate TRL with OpenEnv in our [dedicated documentation](openenv).
## Overview
TRL is a cutting-edge library designed for post-training foundation models using advanced techniques like Supervised Fine-Tuning (SFT), Proximal Policy Optimization (PPO), and Direct Preference Optimization (DPO). Built on top of the [🤗 Transformers](https://github.com/huggingface/transformers) ecosystem, TRL supports a variety of model architectures and modalities, and can be scaled-up across various hardware setups.
## Highlights
- **Trainers**: Various fine-tuning methods are easily accessible via trainers like [`SFTTrainer`](https://huggingface.co/docs/trl/sft_trainer), [`GRPOTrainer`](https://huggingface.co/docs/trl/grpo_trainer), [`DPOTrainer`](https://huggingface.co/docs/trl/dpo_trainer), [`RewardTrainer`](https://huggingface.co/docs/trl/reward_trainer) and more.
- **Efficient and scalable**:
- Leverages [🤗 Accelerate](https://github.com/huggingface/accelerate) to scale from single GPU to multi-node clusters using methods like [DDP](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html) and [DeepSpeed](https://github.com/deepspeedai/DeepSpeed).
- Full integration with [🤗 PEFT](https://github.com/huggingface/peft) enables training on large models with modest hardware via quantization and LoRA/QLoRA.
- Integrates [🦥 Unsloth](https://github.com/unslothai/unsloth) for accelerating training using optimized kernels.
- **Command Line Interface (CLI)**: A simple interface lets you fine-tune models without needing to write code.
## Installation
### Python package
Install the library using `pip`:
```bash
pip install trl
```
### From source
If you want to use the latest features before an official release, if you wish to develop TRL, or if you want to run the examples in the repository, install it from source in editable mode. Clone the repository and install it with pip:
```bash
git clone https://github.com/huggingface/trl.git
cd trl/
pip install -e .
```
## Quick Start
### Example
This is a basic example of how to use the library. Based on a query, the language model creates a response, which is then evaluated. The evaluation could be a human in the loop or another model's output.
For more flexibility and control over training, TRL provides dedicated trainer classes to post-train language models or PEFT adapters on a custom dataset. Each trainer in TRL is a light wrapper around the 🤗 Transformers trainer and natively supports distributed training methods like DDP, DeepSpeed ZeRO, and FSDP.
### `SFTTrainer`
Here is a basic example of how to use the [`SFTTrainer`](https://huggingface.co/docs/trl/sft_trainer):
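A minimal sketch of what that looks like (the dataset and model identifiers here are illustrative, not prescribed by this section):
```python
from datasets import load_dataset
from trl import SFTTrainer

# Any instruction/chat dataset in a supported format works; this one is just an example
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",
    train_dataset=dataset,
)
trainer.train()
```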
For a detailed example, check out the example Python script `examples/sentiment/scripts/gpt2-sentiment.py`, where GPT-2 is fine-tuned to generate positive movie reviews. A few examples from the language model before and after optimisation are given below:
<p style="text-align: center;"><b>Figure:</b> A few review continuations before and after optimisation.</p>
[`GRPOTrainer`](https://huggingface.co/docs/trl/grpo_trainer) implements the [Group Relative Policy Optimization (GRPO) algorithm](https://huggingface.co/papers/2402.03300) that is more memory-efficient than PPO and was used to train [Deepseek AI's R1](https://huggingface.co/deepseek-ai/DeepSeek-R1).
```python
from datasets import load_dataset
from trl import GRPOTrainer

# Load a prompt dataset (the TL;DR prompts are used here purely as an illustration)
dataset = load_dataset("trl-lib/tldr", split="train")

# Dummy reward function: count the number of unique characters in the completions
def reward_num_unique_chars(completions, **kwargs):
    return [len(set(c)) for c in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",
    reward_funcs=reward_num_unique_chars,
    train_dataset=dataset,
)
trainer.train()
```
## References
### Proximal Policy Optimisation
The PPO implementation largely follows the structure introduced in the paper **"Fine-Tuning Language Models from Human Preferences"** by D. Ziegler et al. \[[paper](https://arxiv.org/pdf/1909.08593.pdf), [code](https://github.com/openai/lm-human-preferences)].
The language models utilize the `transformers` library by 🤗 Hugging Face.
### `DPOTrainer`
[`DPOTrainer`](https://huggingface.co/docs/trl/dpo_trainer) implements the popular [Direct Preference Optimization (DPO) algorithm](https://huggingface.co/papers/2305.18290) that was used to post-train [Llama 3](https://huggingface.co/papers/2407.21783) and many other models. Here is a basic example of how to use the `DPOTrainer`:
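A minimal sketch (the model, dataset, and output directory are illustrative placeholders):
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
# A preference dataset with "chosen"/"rejected" columns
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

training_args = DPOConfig(output_dir="Qwen2.5-0.5B-DPO")
trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```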
You can use the TRL Command Line Interface (CLI) to quickly get started with post-training methods like Supervised Fine-Tuning (SFT) or Direct Preference Optimization (DPO):
Read more about CLI in the [relevant documentation section](https://huggingface.co/docs/trl/clis) or use `--help` for more details.
## Development
If you want to contribute to `trl` or customize it to your needs make sure to read the [contribution guide](https://github.com/huggingface/trl/blob/main/CONTRIBUTING.md) and make sure you make a dev install:
```bash
git clone https://github.com/huggingface/trl.git
cd trl/
pip install -e .[dev]
```
## Experimental
A minimal incubation area is available under `trl.experimental` for unstable / fast-evolving features. Anything there may change or be removed in any release without notice.
Example:
```python
from trl.experimental.new_trainer import NewTrainer
```
Read more in the [Experimental docs](https://huggingface.co/docs/trl/experimental_overview).
## Citation
```bibtex
@misc{vonwerra2022trl,
    author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    title = {{TRL: Transformer Reinforcement Learning}},
    year = {2020},
    publisher = {GitHub},
    journal = {GitHub repository},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
> VERSION needs to be formatted following the `v{major}.{minor}.{patch}` convention. We need to follow this convention to be able to retrieve versioned scripts.
## Major/Minor Release
### 1. Ensure your local repository is up to date with the upstream repository
```bash
git checkout main
git pull origin main
```
> [!WARNING]
> Do not merge other pull requests into `main` until the release is done. This is to ensure that the release is stable and does not include any untested changes. Announce internally (#trl-internal) to other maintainers that you are doing a release and that they must not merge PRs until the release is done.
### 2. Create a release branch from main
```bash
git checkout -b release-v{major}.{minor}
```
### 3. Change the version in the following files
- `.github/workflows/tests_latest.yml`:
```diff
- with: { ref: v{major}.{minor-1}-release }
+ with: { ref: v{major}.{minor}-release }
```
- `CITATION.cff`
```diff
- version: "{major}.{minor-1}"
+ version: "{major}.{minor}"
```
- `VERSION`
```diff
- {major}.{minor}.0.dev0
+ {major}.{minor}.0
```
### 4. Commit and push these changes
```shell
git add .github/workflows/tests_latest.yml CITATION.cff VERSION
git commit -m 'Release: {major}.{minor}'
git push origin release-v{major}.{minor}
```
### 5. Create a pull request
from `release-v{major}.{minor}` to `main`, named `Release: v{major}.{minor}`, wait for tests to pass, and request a review.
### 6. Once the pull request is approved, merge it into `main`
It will automatically publish the new version of the package on PyPI.
### 7. Add a tag in git to mark the release
```shell
git checkout main
git pull origin main
git tag -a v{major}.{minor}.0 -m 'Adds tag v{major}.{minor}.0 for PyPI'
git push origin v{major}.{minor}.0
```
### 8. Create a branch `v{major}.{minor}-release` for future patch releases
```shell
git checkout -b v{major}.{minor}-release
git push origin v{major}.{minor}-release
```
This ensures that future patch releases (`v{major}.{minor}.1`, `v{major}.{minor}.2`, etc.) can be made separately from `main`.
### 9. Create a GitHub Release
1. Go to the repo’s [releases section](https://github.com/huggingface/trl/releases) on GitHub.
2. Click **Draft a new release**.
3. Select the `v{major}.{minor}.0` tag you just created in step 7.
4. Add a title (`v{major}.{minor}.0`) and a short description of what’s new.
5. Click **Publish Release**.
### 10. Bump to dev version
1. Create a branch `bump-dev-version-{major}.{minor+1}` from `main` and check it out.
TRL supports the Binary Classifier Optimization (BCO).
The [BCO](https://huggingface.co/papers/2404.04656) authors train a binary classifier whose logit serves as a reward so that the classifier maps {prompt, chosen completion} pairs to 1 and {prompt, rejected completion} pairs to 0.
For a full example have a look at [`examples/scripts/bco.py`].
## Expected dataset type
The [`experimental.bco.BCOTrainer`] requires an [unpaired preference dataset](dataset_formats#unpaired-preference).
The [`experimental.bco.BCOTrainer`] supports both [conversational](dataset_formats#conversational) and [standard](dataset_formats#standard) dataset formats. When provided with a conversational dataset, the trainer will automatically apply the chat template to the dataset.
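For reference, a single unpaired-preference record looks roughly like this (the values are illustrative):
```python
# Standard format: a prompt, a single completion, and a binary "thumbs up/down" label
unpaired_preference_example = {"prompt": "The sky is", "completion": " blue.", "label": True}

# Conversational format
unpaired_preference_example = {
    "prompt": [{"role": "user", "content": "What color is the sky?"}],
    "completion": [{"role": "assistant", "content": "It is blue."}],
    "label": True,
}
```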
## Expected model format
The BCO trainer expects a model of type `AutoModelForCausalLM`, compared to PPO, which expects `AutoModelForCausalLMWithValueHead` for the value function.
## Using the `BCOTrainer`
For a detailed example, have a look at the `examples/scripts/bco.py` script. At a high level, we need to initialize the `BCOTrainer` with a `model` we wish to train and a reference `ref_model` which we will use to calculate the implicit rewards of the preferred and rejected responses.
The `beta` refers to the hyperparameter of the implicit reward, and the dataset contains the 3 entries listed above. Note that the `model` and `ref_model` need to have the same architecture (i.e., decoder-only or encoder-decoder).
```python
training_args = BCOConfig(
    beta=0.1,
)

bco_trainer = BCOTrainer(
    model,
    model_ref,
    args=training_args,
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
```
After this one can then call:
```python
bco_trainer.train()
```
## Underlying Distribution matching (UDM)
In practical scenarios, the thumbs-up and thumbs-down datasets are likely to have divergent underlying distributions of prompts.
Consider an LLM deployed for user feedback: if the model excels in writing tasks but underperforms in coding, the thumbs-up dataset will be dominated by writing-related prompts, while the thumbs-down dataset will contain mostly coding-related prompts.
If the prompts in your desired and undesired datasets differ a lot, it is useful to enable UDM.
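The embedding function used below is user-provided. A minimal sketch, assuming a sentence-embedding checkpoint with mean pooling (the model name, pooling choice, and function signature are illustrative assumptions, not requirements):
```python
from transformers import AutoModel, AutoTokenizer

# Illustrative embedding model; any sentence-embedding checkpoint can be used
embedding_model = AutoModel.from_pretrained("thenlper/gte-base-en-v1.5", trust_remote_code=True)
embedding_tokenizer = AutoTokenizer.from_pretrained("thenlper/gte-base-en-v1.5")

def embedding_func(input_ids, attention_mask):
    # Mean-pool the last hidden state over non-padding tokens to get one vector per prompt
    outputs = embedding_model(input_ids=input_ids, attention_mask=attention_mask)
    hidden = outputs.last_hidden_state
    mask = attention_mask.unsqueeze(-1).to(hidden.dtype)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)
```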
Set `prompt_sample_size` to define how many prompts are selected to train the UDM classifier and start the training with the provided embedding function:
```python
training_args = BCOConfig(
    beta=0.1,
    prompt_sample_size=512,
)

bco_trainer = BCOTrainer(
    model,
    model_ref,
    args=training_args,
    train_dataset=train_dataset,
    processing_class=tokenizer,
    embedding_func=embedding_func,
    embedding_tokenizer=embedding_tokenizer,
)
bco_trainer.train()
```
### For Mixture of Experts Models: Enabling the auxiliary loss
MOEs are the most efficient if the load is about equally distributed between experts.
To ensure that we train MOEs similarly during preference-tuning, it is beneficial to add the auxiliary loss from the load balancer to the final loss.
This option is enabled by setting `output_router_logits=True` in the model config (e.g. MixtralConfig).
To scale how much the auxiliary loss contributes to the total loss, use the hyperparameter `router_aux_loss_coef=...` (default: 0.001).
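As a rough sketch, both options can be set when loading the model, since `from_pretrained` forwards config keyword arguments to the model config (the checkpoint name is illustrative):
```python
from transformers import AutoModelForCausalLM

# Enable the load-balancing auxiliary loss and set its weight in the total loss
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-v0.1",
    output_router_logits=True,
    router_aux_loss_coef=0.001,
)
```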
TRL provides a powerful command-line interface (CLI) to fine-tune large language models (LLMs) using methods like Supervised Fine-Tuning (SFT), Direct Preference Optimization (DPO), and more. The CLI abstracts away much of the boilerplate, letting you launch training jobs quickly and reproducibly.
## Commands
Currently supported commands are:
### Training Commands
- `trl dpo`: fine-tune an LLM with DPO
- `trl grpo`: fine-tune an LLM with GRPO
- `trl kto`: fine-tune an LLM with KTO
- `trl reward`: train a Reward Model
- `trl rloo`: fine-tune an LLM with RLOO
- `trl sft`: fine-tune an LLM with SFT
### Other Commands
- `trl env`: get the system information
- `trl vllm-serve`: serve a model with vLLM
## Fine-Tuning with the TRL CLI
### Basic Usage
You can launch training directly from the CLI by specifying required arguments like the model and dataset:
<hfoptions id="command_line">
<hfoption id="SFT">
```bash
trl sft \
--model_name_or_path Qwen/Qwen2.5-0.5B \
--dataset_name stanfordnlp/imdb
```
</hfoption>
<hfoptionid="DPO">
```bash
trl dpo \
--model_name_or_path Qwen/Qwen2.5-0.5B \
--dataset_name anthropic/hh-rlhf
```
</hfoption>
<hfoptionid="Reward">
```bash
trl reward \
--model_name_or_path Qwen/Qwen2.5-0.5B \
--dataset_name trl-lib/ultrafeedback_binarized
```
</hfoption>
</hfoptions>
### Using Configuration Files
To keep your CLI commands clean and reproducible, you can define all training arguments in a YAML configuration file:
<hfoptionsid="config_file">
<hfoptionid="SFT">
```yaml
# sft_config.yaml
model_name_or_path:Qwen/Qwen2.5-0.5B
dataset_name:stanfordnlp/imdb
```
Launch with:
```bash
trl sft --config sft_config.yaml
```
</hfoption>
<hfoptionid="DPO">
```yaml
# dpo_config.yaml
model_name_or_path:Qwen/Qwen2.5-0.5B
dataset_name:anthropic/hh-rlhf
```
Launch with:
```bash
trl dpo --config dpo_config.yaml
```
</hfoption>
<hfoptionid="Reward">
```yaml
# reward_config.yaml
model_name_or_path:Qwen/Qwen2.5-0.5B
dataset_name:trl-lib/ultrafeedback_binarized
```
Launch with:
```bash
trl reward --config reward_config.yaml
```
</hfoption>
</hfoptions>
### Scaling Up with Accelerate
TRL CLI natively supports [🤗 Accelerate](https://huggingface.co/docs/accelerate), making it easy to scale training across multiple GPUs, machines, or use advanced setups like DeepSpeed — all from the same CLI.
You can pass any `accelerate launch` arguments directly to `trl`, such as `--num_processes`. For more information see [Using accelerate launch](https://huggingface.co/docs/accelerate/en/basic_tutorials/launch#using-accelerate-launch).
<hfoptionsid="launch_args">
<hfoptionid="SFT inline">
```bash
trl sft \
--model_name_or_path Qwen/Qwen2.5-0.5B \
--dataset_name stanfordnlp/imdb \
--num_processes 4
```
</hfoption>
<hfoptionid="SFT w/ config file">
```yaml
# sft_config.yaml
model_name_or_path:Qwen/Qwen2.5-0.5B
dataset_name:stanfordnlp/imdb
num_processes:4
```
Launch with:
```bash
trl sft --config sft_config.yaml
```
</hfoption>
<hfoptionid="DPO inline">
```bash
trl dpo \
--model_name_or_path Qwen/Qwen2.5-0.5B \
--dataset_name anthropic/hh-rlhf \
--num_processes 4
```
</hfoption>
<hfoptionid="DPO w/ config file">
```yaml
# dpo_config.yaml
model_name_or_path:Qwen/Qwen2.5-0.5B
dataset_name:anthropic/hh-rlhf
num_processes:4
```
Launch with:
```bash
trl dpo --config dpo_config.yaml
```
</hfoption>
<hfoptionid="Reward inline">
```bash
trl reward \
--model_name_or_path Qwen/Qwen2.5-0.5B \
--dataset_name trl-lib/ultrafeedback_binarized \
--num_processes 4
```
</hfoption>
<hfoptionid="Reward w/ config file">
```yaml
# reward_config.yaml
model_name_or_path:Qwen/Qwen2.5-0.5B
dataset_name:trl-lib/ultrafeedback_binarized
num_processes:4
```
Launch with:
```bash
trl reward --config reward_config.yaml
```
</hfoption>
</hfoptions>
### Using `--accelerate_config` for Accelerate Configuration
The `--accelerate_config` flag lets you easily configure distributed training with [🤗 Accelerate](https://github.com/huggingface/accelerate). This flag accepts either:
- the name of a predefined config profile (built into TRL), or
- a path to a custom Accelerate YAML config file.
#### Predefined Config Profiles
TRL provides several ready-to-use Accelerate configs to simplify common training setups:
| Name | Description |
| --- | --- |
| `fsdp1` | Fully Sharded Data Parallel Stage 1 |
| `fsdp2` | Fully Sharded Data Parallel Stage 2 |
| `zero1` | DeepSpeed ZeRO Stage 1 |
| `zero2` | DeepSpeed ZeRO Stage 2 |
| `zero3` | DeepSpeed ZeRO Stage 3 |
| `multi_gpu` | Multi-GPU training |
| `single_gpu` | Single-GPU training |
To use one of these, just pass the name to `--accelerate_config`. TRL will automatically load the corresponding config file from `trl/accelerate_config/`.
#### Example Usage
<hfoptionsid="accelerate_config">
<hfoptionid="SFT inline">
```bash
trl sft \
--model_name_or_path Qwen/Qwen2.5-0.5B \
--dataset_name stanfordnlp/imdb \
--accelerate_config zero2 # or path/to/my/accelerate/config.yaml
```
</hfoption>
<hfoptionid="SFT w/ config file">
```yaml
# sft_config.yaml
model_name_or_path:Qwen/Qwen2.5-0.5B
dataset_name:stanfordnlp/imdb
accelerate_config:zero2 # or path/to/my/accelerate/config.yaml
```
Launch with:
```bash
trl sft --config sft_config.yaml
```
</hfoption>
<hfoptionid="DPO inline">
```bash
trl dpo \
--model_name_or_path Qwen/Qwen2.5-0.5B \
--dataset_name anthropic/hh-rlhf \
--accelerate_config zero2 # or path/to/my/accelerate/config.yaml
```
</hfoption>
<hfoptionid="DPO w/ config file">
```yaml
# dpo_config.yaml
model_name_or_path:Qwen/Qwen2.5-0.5B
dataset_name:anthropic/hh-rlhf
accelerate_config:zero2 # or path/to/my/accelerate/config.yaml
```
Launch with:
```bash
trl dpo --config dpo_config.yaml
```
</hfoption>
<hfoptionid="Reward inline">
```bash
trl reward \
--model_name_or_path Qwen/Qwen2.5-0.5B \
--dataset_name trl-lib/ultrafeedback_binarized \
--accelerate_config zero2 # or path/to/my/accelerate/config.yaml
```
</hfoption>
<hfoptionid="Reward w/ config file">
```yaml
# reward_config.yaml
model_name_or_path:Qwen/Qwen2.5-0.5B
dataset_name:trl-lib/ultrafeedback_binarized
accelerate_config:zero2 # or path/to/my/accelerate/config.yaml
```
Launch with:
```bash
trl reward --config reward_config.yaml
```
</hfoption>
</hfoptions>
### Using dataset mixtures
You can use dataset mixtures to combine multiple datasets into a single training dataset. This is useful for training on diverse data sources or when you want to mix different types of data.
<hfoptionsid="dataset_mixtures">
<hfoptionid="SFT">
```yaml
# sft_config.yaml
model_name_or_path:Qwen/Qwen2.5-0.5B
datasets:
- path:stanfordnlp/imdb
- path:roneneldan/TinyStories
```
Launch with:
```bash
trl sft --config sft_config.yaml
```
</hfoption>
<hfoptionid="DPO">
```yaml
# dpo_config.yaml
model_name_or_path:Qwen/Qwen2.5-0.5B
datasets:
- path:BAAI/Infinity-Preference
- path:argilla/Capybara-Preferences
```
Launch with:
```bash
trl dpo --config dpo_config.yaml
```
</hfoption>
<hfoptionid="Reward">
```yaml
# reward_config.yaml
model_name_or_path:Qwen/Qwen2.5-0.5B
datasets:
- path:trl-lib/tldr-preference
- path:trl-lib/lm-human-preferences-sentiment
```
Launch with:
```bash
trl reward --config reward_config.yaml
```
</hfoption>
</hfoptions>
To see all the available keywords for defining dataset mixtures, refer to the [`scripts.utils.DatasetConfig`] and [`DatasetMixtureConfig`] classes.
## Getting the System Information
You can get the system information by running the following command:
```bash
trl env
```
This will print out the system information, including the GPU information, the CUDA version, the PyTorch version, the transformers version, the TRL version, and any optional dependencies that are installed.
```txt
Copy-paste the following information when reporting an issue:
```
Community tutorials are made by active members of the Hugging Face community who want to share their knowledge and expertise with others. They are a great way to learn about the library and its features, and to get started with core classes and modalities.
| Task | Class | Description | Author | Tutorial | Colab |
| --- | --- | --- | --- | --- | --- |
| Reinforcement Learning | [`GRPOTrainer`] | Efficient Online Training with GRPO and vLLM in TRL | [Sergio Paniego](https://huggingface.co/sergiopaniego) | [Link](https://huggingface.co/learn/cookbook/grpo_vllm_online_training) | [](https://colab.research.google.com/github/huggingface/cookbook/blob/main/notebooks/en/grpo_vllm_online_training.ipynb) |
| Reinforcement Learning | [`GRPOTrainer`] | Post training an LLM for reasoning with GRPO in TRL | [Sergio Paniego](https://huggingface.co/sergiopaniego) | [Link](https://huggingface.co/learn/cookbook/fine_tuning_llm_grpo_trl) | [](https://colab.research.google.com/github/huggingface/cookbook/blob/main/notebooks/en/fine_tuning_llm_grpo_trl.ipynb) |
| Reinforcement Learning | [`GRPOTrainer`] | RL on LLaMA 3.1-8B with GRPO and Unsloth optimizations | [Andrea Manzoni](https://huggingface.co/AManzoni) | [Link](https://colab.research.google.com/github/amanzoni1/fine_tuning/blob/main/RL_LLama3_1_8B_GRPO.ipynb) | [](https://colab.research.google.com/github/amanzoni1/fine_tuning/blob/main/RL_LLama3_1_8B_GRPO.ipynb) |
| Instruction tuning | [`SFTTrainer`] | Fine-tuning Google Gemma LLMs using ChatML format with QLoRA | [Philipp Schmid](https://huggingface.co/philschmid) | [Link](https://www.philschmid.de/fine-tune-google-gemma) | [](https://colab.research.google.com/github/philschmid/deep-learning-pytorch-huggingface/blob/main/training/gemma-lora-example.ipynb) |
| Structured Generation | [`SFTTrainer`] | Fine-tuning Llama-2-7B to generate Persian product catalogs in JSON using QLoRA and PEFT | [Mohammadreza Esmaeilian](https://huggingface.co/Mohammadreza) | [Link](https://huggingface.co/learn/cookbook/en/fine_tuning_llm_to_generate_persian_product_catalogs_in_json_format) | [](https://colab.research.google.com/github/huggingface/cookbook/blob/main/notebooks/en/fine_tuning_llm_to_generate_persian_product_catalogs_in_json_format.ipynb) |
| Preference Optimization | [`DPOTrainer`] | Align Mistral-7b using Direct Preference Optimization for human preference alignment | [Maxime Labonne](https://huggingface.co/mlabonne) | [Link](https://mlabonne.github.io/blog/posts/Fine_tune_Mistral_7b_with_DPO.html) | [](https://colab.research.google.com/github/mlabonne/llm-course/blob/main/Fine_tune_a_Mistral_7b_model_with_DPO.ipynb) |
| Preference Optimization | [`ORPOTrainer`] | Fine-tuning Llama 3 with ORPO combining instruction tuning and preference alignment | [Maxime Labonne](https://huggingface.co/mlabonne) | [Link](https://mlabonne.github.io/blog/posts/2024-04-19_Fine_tune_Llama_3_with_ORPO.html) | [](https://colab.research.google.com/drive/1eHNWg9gnaXErdAa8_mcvjMupbSS6rDvi) |
| Instruction tuning | [`SFTTrainer`] | How to fine-tune open LLMs in 2025 with Hugging Face | [Philipp Schmid](https://huggingface.co/philschmid) | [Link](https://www.philschmid.de/fine-tune-llms-in-2025) | [](https://colab.research.google.com/github/philschmid/deep-learning-pytorch-huggingface/blob/main/training/fine-tune-llms-in-2025.ipynb) |
### Videos
| Task | Title | Author | Video |
| --- | --- | --- | --- |
| Instruction tuning | Fine-tuning open AI models using Hugging Face TRL | [Wietse Venema](https://huggingface.co/wietsevenema) | [<img src="https://img.youtube.com/vi/cnGyyM0vOes/0.jpg">](https://youtu.be/cnGyyM0vOes) |
| Instruction tuning | How to fine-tune a smol-LM with Hugging Face, TRL, and the smoltalk Dataset | [Mayurji](https://huggingface.co/iammayur) | [<img src="https://img.youtube.com/vi/jKdXv3BiLu0/0.jpg">](https://youtu.be/jKdXv3BiLu0) |
<details>
<summary>⚠️ Deprecated features notice for "How to fine-tune a smol-LM with Hugging Face, TRL, and the smoltalk Dataset" (click to expand)</summary>
> [!WARNING]
> The tutorial uses two deprecated features:
>
> - `SFTTrainer(..., tokenizer=tokenizer)`: Use `SFTTrainer(..., processing_class=tokenizer)` instead, or simply omit it (it will be inferred from the model).
> - `setup_chat_format(model, tokenizer)`: Use `SFTConfig(..., chat_template_path="Qwen/Qwen3-0.6B")`, where `chat_template_path` specifies the model whose chat template you want to copy.
</details>
| Task | Class | Description | Author | Tutorial | Colab |
| --- | --- | --- | --- | --- | --- |
| Visual QA | [`DPOTrainer`] | Fine-tuning SmolVLM using direct preference optimization (DPO) with TRL on a consumer GPU | [Sergio Paniego](https://huggingface.co/sergiopaniego) | [Link](https://huggingface.co/learn/cookbook/fine_tuning_vlm_dpo_smolvlm_instruct) | [](https://colab.research.google.com/github/huggingface/cookbook/blob/main/notebooks/en/fine_tuning_vlm_dpo_smolvlm_instruct.ipynb) |
| Object Detection Grounding | [`SFTTrainer`] | Fine tuning a VLM for Object Detection Grounding using TRL | [Sergio Paniego](https://huggingface.co/sergiopaniego) | [Link](https://huggingface.co/learn/cookbook/fine_tuning_vlm_object_detection_grounding) | [](https://colab.research.google.com/github/huggingface/cookbook/blob/main/notebooks/en/fine_tuning_vlm_object_detection_grounding.ipynb) |
| Visual QA | [`DPOTrainer`] | Fine-Tuning a Vision Language Model with TRL using MPO | [Sergio Paniego](https://huggingface.co/sergiopaniego) | [Link](https://huggingface.co/learn/cookbook/fine_tuning_vlm_mpo) | [](https://colab.research.google.com/github/huggingface/cookbook/blob/main/notebooks/en/fine_tuning_vlm_mpo.ipynb) |
| Reinforcement Learning | [`GRPOTrainer`] | Post training a VLM for reasoning with GRPO using TRL | [Sergio Paniego](https://huggingface.co/sergiopaniego) | [Link](https://huggingface.co/learn/cookbook/fine_tuning_vlm_grpo_trl) | [](https://colab.research.google.com/github/huggingface/cookbook/blob/main/notebooks/en/fine_tuning_vlm_grpo_trl.ipynb) |
## Contributing
If you have a tutorial that you would like to add to this list, please open a PR to add it. We will review it and merge it if it is relevant to the community.
Contrastive Preference Optimization (CPO) as introduced in the paper [Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation](https://huggingface.co/papers/2401.08417) by [Haoran Xu](https://huggingface.co/haoranxu), [Amr Sharaf](https://huggingface.co/amrsharaf), [Yunmo Chen](https://huggingface.co/yunmochen), Weiting Tan, Lingfeng Shen, Benjamin Van Durme, [Kenton Murray](https://huggingface.co/Kenton), and [Young Jin Kim](https://huggingface.co/ykim362). At a high level, CPO trains models to avoid generating adequate, but not perfect, translations in Machine Translation (MT) tasks. However, CPO is a general approximation of the DPO loss and can be applied to other domains, such as chat.
CPO aims to mitigate two fundamental shortcomings of SFT. First, SFT’s methodology of minimizing the discrepancy between predicted outputs and gold-standard references inherently caps model performance at the quality level of the training data. Secondly, SFT lacks a mechanism to prevent the model from rejecting mistakes in translations. The CPO objective is derived from the DPO objective.
## Quick start
This example demonstrates how to train a model using the CPO method. We use the [Qwen 0.5B model](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) as the base model and the preference data from the [UltraFeedback dataset](https://huggingface.co/datasets/openbmb/UltraFeedback).
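A minimal training script might look like the following sketch (the dataset here is the binarized UltraFeedback variant used by the example script below; the exact identifiers are illustrative):
```python
# train_cpo.py
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import CPOConfig, CPOTrainer

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

training_args = CPOConfig(output_dir="Qwen2-0.5B-CPO")
trainer = CPOTrainer(model=model, args=training_args, processing_class=tokenizer, train_dataset=train_dataset)
trainer.train()
```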
CPO requires a [preference dataset](dataset_formats#preference). The [`CPOTrainer`] supports both [conversational](dataset_formats#conversational) and [standard](dataset_formats#standard) dataset formats. When provided with a conversational dataset, the trainer will automatically apply the chat template to the dataset.
## Example script
We provide an example script to train a model using the CPO method. The script is available in [`examples/scripts/cpo.py`](https://github.com/huggingface/trl/blob/main/examples/scripts/cpo.py)
To test the CPO script with the [Qwen2 0.5B model](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the [UltraFeedback dataset](https://huggingface.co/datasets/trl-lib/ultrafeedback_binarized), run the following command:
```bash
accelerate launch examples/scripts/cpo.py \
--model_name_or_path Qwen/Qwen2-0.5B-Instruct \
--dataset_name trl-lib/ultrafeedback_binarized \
    --num_train_epochs 1 \
--output_dir Qwen2-0.5B-CPO
```
## Logged metrics
While training and evaluating, we record the following reward metrics:
* `rewards/chosen`: the mean log probabilities of the policy model for the chosen responses scaled by beta
* `rewards/rejected`: the mean log probabilities of the policy model for the rejected responses scaled by beta
* `rewards/accuracies`: mean of how often the chosen rewards are greater than the corresponding rejected rewards
* `rewards/margins`: the mean difference between the chosen and corresponding rejected rewards
* `nll_loss`: the mean negative log likelihood loss of the policy model for the chosen responses
## CPO variants
### Simple Preference Optimization (SimPO)
[Simple Preference Optimization](https://huggingface.co/papers/2405.14734) (SimPO) by [Yu Meng](https://huggingface.co/yumeng5), [Mengzhou Xia](https://huggingface.co/mengzhouxia), and [Danqi Chen](https://huggingface.co/cdq10131) proposes a simpler and more effective preference optimization algorithm than DPO without using a reference model. The key designs in SimPO are (1) using length-normalized log likelihood as the implicit reward, and (2) incorporating a target reward margin in the Bradley-Terry ranking objective. The official code can be found at [princeton-nlp/SimPO](https://github.com/princeton-nlp/SimPO).
The abstract from the paper is the following:
> Direct Preference Optimization (DPO) is a widely used offline preference optimization algorithm that reparameterizes reward functions in reinforcement learning from human feedback (RLHF) to enhance simplicity and training stability. In this work, we propose SimPO, a simpler yet more effective approach. The effectiveness of SimPO is attributed to a key design: using the average log probability of a sequence as the implicit reward. This reward formulation better aligns with model generation and eliminates the need for a reference model, making it more compute and memory efficient. Additionally, we introduce a target reward margin to the Bradley-Terry objective to encourage a larger margin between the winning and losing responses, further enhancing the algorithm's performance. We compare SimPO to DPO and its latest variants across various state-of-the-art training setups, including both base and instruction-tuned models like Mistral and Llama3. We evaluated on extensive instruction-following benchmarks, including AlpacaEval 2, MT-Bench, and the recent challenging Arena-Hard benchmark. Our results demonstrate that SimPO consistently and significantly outperforms existing approaches without substantially increasing response length. Specifically, SimPO outperforms DPO by up to 6.4 points on AlpacaEval 2 and by up to 7.5 points on Arena-Hard. Our top-performing model, built on Llama3-8B-Instruct, achieves a remarkable 44.7 length-controlled win rate on AlpacaEval 2 -- surpassing Claude 3 Opus on the leaderboard, and a 33.8 win rate on Arena-Hard -- making it the strongest 8B open-source model.
The SimPO loss is integrated in the [`CPOTrainer`], as it's an alternative loss that adds a reward margin, allows for length normalization, and does not use BC regularization. To use this loss, just turn on `loss_type="simpo"` and `cpo_alpha=0.0` in the [`CPOConfig`] and set the `simpo_gamma` to a recommended value.
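For example (the `simpo_gamma` value below is only a placeholder; use the value recommended in the SimPO paper for your setup):
```python
from trl import CPOConfig

# SimPO-style loss inside CPOTrainer: drop BC regularization and add a target reward margin
training_args = CPOConfig(
    output_dir="Qwen2-0.5B-SimPO",
    loss_type="simpo",
    cpo_alpha=0.0,
    simpo_gamma=0.5,
)
```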
### CPO-SimPO
We also offer the combined use of CPO and SimPO, which enables more stable training and improved performance. Learn more details at [CPO-SimPO GitHub](https://github.com/fe1ixxu/CPO_SIMPO). To use this method, simply enable SimPO by setting `loss_type="simpo"` and a non-zero `cpo_alpha` in the [`CPOConfig`].
### AlphaPO
The [AlphaPO -- Reward shape matters for LLM alignment](https://huggingface.co/papers/2501.03884) (AlphaPO) method by Aman Gupta, Shao Tang, Qingquan Song, Sirou Zhu, [Jiwoo Hong](https://huggingface.co/JW17), Ankan Saha, Viral Gupta, Noah Lee, Eunki Kim, Jason Zhu, Natesh Pillai, and S. Sathiya Keerthi is also implemented in the [`CPOTrainer`]. AlphaPO is an alternative method that applies a transformation to the reward function shape in the context of SimPO loss. The abstract from the paper is the following:
> Reinforcement Learning with Human Feedback (RLHF) and its variants have made huge strides toward the effective alignment of large language models (LLMs) to follow instructions and reflect human values. More recently, Direct Alignment Algorithms (DAAs) have emerged in which the reward modeling stage of RLHF is skipped by characterizing the reward directly as a function of the policy being learned. Some popular examples of DAAs include Direct Preference Optimization (DPO) and Simple Preference Optimization (SimPO). These methods often suffer from likelihood displacement, a phenomenon by which the probabilities of preferred responses are often reduced undesirably. In this paper, we argue that, for DAAs the reward (function) shape matters. We introduce AlphaPO, a new DAA method that leverages an α-parameter to help change the shape of the reward function beyond the standard log reward. AlphaPO helps maintain fine-grained control over likelihood displacement and overoptimization. Compared to SimPO, one of the best performing DAAs, AlphaPO leads to about 7% to 10% relative improvement in alignment performance for the instruct versions of Mistral-7B and Llama3-8B while achieving 15% to 50% relative improvement over DPO on the same models. The analysis and results presented highlight the importance of the reward shape and how one can systematically change it to affect training dynamics, as well as improve alignment performance.
To use this loss as described in the paper, we can set the `loss_type="alphapo"` which automatically sets `loss_type="simpo"` and `cpo_alpha=0.0`, together with `alpha` and `simpo_gamma` to recommended values in the [`CPOConfig`]. Alternatively, you can manually set `loss_type="simpo"`, `cpo_alpha=0.0`, together with `alpha` and `simpo_gamma` to recommended values. Other variants of this method are also possible, such as setting `loss_type="ipo"` and `alpha` to any non-zero value.
## Loss functions
The CPO algorithm supports several loss functions. The loss function can be set using the `loss_type` parameter in the [`CPOConfig`]. The following loss functions are supported:
| `loss_type=` | Description |
| --- | --- |
| `"sigmoid"` (default) | Given the preference data, we can fit a binary classifier according to the Bradley-Terry model, and in fact, the [DPO](https://huggingface.co/papers/2305.18290) authors propose the sigmoid loss on the normalized likelihood via the `logsigmoid` to fit a logistic regression. |
| `"hinge"` | The [RSO](https://huggingface.co/papers/2309.06657) authors propose to use a hinge loss on the normalized likelihood from the [SLiC](https://huggingface.co/papers/2305.10425) paper. In this case, the `beta` is the reciprocal of the margin. |
| `"ipo"` | The [IPO](https://huggingface.co/papers/2310.12036) authors provide a deeper theoretical understanding of the DPO algorithms and identify an issue with overfitting and propose an alternative loss. In this case, the `beta` is the reciprocal of the gap between the log-likelihood ratios of the chosen vs the rejected completion pair, and thus the smaller the `beta`, the larger this gap is. As per the paper, the loss is averaged over log-likelihoods of the completion (unlike DPO, which is summed only). |
| `"simpo"` | The [SimPO](https://huggingface.co/papers/2405.14734) method is also implemented in the [`CPOTrainer`]. SimPO is an alternative loss that adds a reward margin, allows for length normalization, and does not use BC regularization. To use this loss, simply set `loss_type="simpo"` and `cpo_alpha=0.0` in the [`CPOConfig`] and `simpo_gamma` to a recommended value. |
| `"alphapo"` | The [AlphaPO](https://huggingface.co/papers/2501.03884) method is also implemented in the [`CPOTrainer`]. This is syntactic sugar that automatically sets `loss_type="simpo"` and `cpo_alpha=0.0`. AlphaPO applies a transformation to the reward function shape in the context of SimPO loss when the `alpha` parameter is non-zero. |
### For Mixture of Experts Models: Enabling the auxiliary loss
MOEs are the most efficient if the load is about equally distributed between experts.
To ensure that we train MOEs similarly during preference-tuning, it is beneficial to add the auxiliary loss from the load balancer to the final loss.
This option is enabled by setting `output_router_logits=True` in the model config (e.g., [`~transformers.MixtralConfig`]).
To scale how much the auxiliary loss contributes to the total loss, use the hyperparameter `router_aux_loss_coef=...` (default: `0.001`) in the model config.
TRL is designed with modularity in mind so that users are able to efficiently customize the training loop for their needs. Below are some examples on how you can apply and test different techniques. Note: Although these examples use the DPOTrainer, the customization applies to most (if not all) trainers.
## Use different optimizers and schedulers
By default, the `DPOTrainer` creates a `torch.optim.AdamW` optimizer. You can create a different optimizer and pass it to `DPOTrainer` as follows:
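For example, a sketch that swaps in SGD (the model, dataset, and hyperparameters are illustrative; `DPOTrainer` inherits the `optimizers=(optimizer, scheduler)` argument from the 🤗 Transformers `Trainer`):
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")
training_args = DPOConfig(output_dir="Qwen2.5-0.5B-DPO")

# Build a custom optimizer over the model parameters; the scheduler slot is left to the trainer (None)
optimizer = torch.optim.SGD(model.parameters(), lr=training_args.learning_rate)

trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,
    optimizers=(optimizer, None),
)
trainer.train()
```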
Since `trl` supports all keyword arguments when loading a model from `transformers` using `from_pretrained`, you can also leverage `load_in_8bit` from `transformers` for more memory efficient fine-tuning.
Read more about 8-bit model loading in `transformers` [Load in 8bit or 4bit](https://huggingface.co/docs/transformers/en/peft#load-in-8bit-or-4bit).
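For instance, a hedged sketch (the checkpoint is illustrative; newer `transformers` versions may prefer passing a `BitsAndBytesConfig` via `quantization_config` instead of the bare flag):
```python
# pip install bitsandbytes
from transformers import AutoModelForCausalLM

# Load the policy model in 8-bit to cut memory usage before handing it to the trainer
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-0.5B-Instruct",
    load_in_8bit=True,
    device_map="auto",
)
```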
When training large models, it is better to handle the accelerator cache by clearing it iteratively. To do so, simply pass `optimize_device_cache=True` to [`DPOConfig`]:
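For example (the output directory is a placeholder):
```python
from trl import DPOConfig

# Iteratively clear the accelerator cache during training to reduce peak memory usage
training_args = DPOConfig(output_dir="Qwen2.5-0.5B-DPO", optimize_device_cache=True)
```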
At `trl` we provide the possibility to give enough modularity to users to be able to efficiently customize the training loop for their needs. Below are some examples on how you can apply and test different techniques.
## Use different optimizers
By default, the `PPOTrainer` creates a `torch.optim.Adam` optimizer. You can create and define a different optimizer and pass it to `PPOTrainer`:
```python
import torch
from transformers import GPT2Tokenizer
from trl import PPOTrainer, PPOConfig, AutoModelForCausalLMWithValueHead
# 1. load a pretrained model
model = AutoModelForCausalLMWithValueHead.from_pretrained('gpt2')
```
You can also use the [LION optimizer from Google](https://arxiv.org/abs/2302.06675). First, take the source code of the optimizer definition [here](https://github.com/lucidrains/lion-pytorch/blob/main/lion_pytorch/lion_pytorch.py) and copy it so that you can import the optimizer. Make sure to initialize the optimizer with the trainable parameters only, for more memory-efficient training.
We advise using the learning rate you would use for `Adam` divided by 3, as pointed out [here](https://github.com/lucidrains/lion-pytorch#lion---pytorch). We observed an improvement when using this optimizer compared to classic Adam (check the full logs [here](https://wandb.ai/distill-bloom/trl/runs/lj4bheke?workspace=user-younesbelkada)).
Since `trl` supports all keyword arguments when loading a model from `transformers` using `from_pretrained`, you can also leverage `load_in_8bit` from `transformers` for more memory-efficient fine-tuning.
Read more about 8-bit model loading in `transformers` [here](https://huggingface.co/docs/transformers/perf_infer_gpu_one#bitsandbytes-integration-for-int8-mixedprecision-matrix-decomposition).
```python
# 0. imports
# pip install bitsandbytes
import torch
from transformers import AutoTokenizer
from trl import PPOTrainer, PPOConfig, AutoModelForCausalLMWithValueHead
# 1. load a pretrained model
model = AutoModelForCausalLMWithValueHead.from_pretrained('bigscience/bloom-560m', load_in_8bit=True)
```
When training large models, it is better to handle the CUDA cache by clearing it iteratively. To do so, simply pass `optimize_cuda_cache=True` to `PPOConfig`:
This guide provides an overview of the dataset formats and types supported by each trainer in TRL.
## Overview of the dataset formats and types
- The *format* of a dataset refers to how the data is structured, typically categorized as either *standard* or *conversational*.
- The *type* is associated with the specific task the dataset is designed for, such as *prompt-only* or *preference*. Each type is characterized by its columns, which vary according to the task, as shown in the table.
<table>
<tr>
<th>Type \ Format</th>
<th>Standard</th>
<th>Conversational</th>
</tr>
<tr>
<td>Language modeling</td>
<td>
<pre><code>{"text": "The sky is blue."}</code></pre>
</td>
<td>
<pre><code>{"messages": [{"role": "user", "content": "What color is the sky?"},
{"role": "assistant", "content": "It is blue."}]}</code></pre>
</td>
</tr>
<tr>
<td>Prompt-only</td>
<td>
<pre><code>{"prompt": "The sky is"}</code></pre>
</td>
<td>
<pre><code>{"prompt": [{"role": "user", "content": "What color is the sky?"}]}</code></pre>
</td>
</tr>
<tr>
<td>Prompt-completion</td>
<td>
<pre><code>{"prompt": "The sky is",
"completion": " blue."}</code></pre>
</td>
<td>
<pre><code>{"prompt": [{"role": "user", "content": "What color is the sky?"}],
"completion": [{"role": "assistant", "content": "It is blue."}]}</code></pre>
</td>
</tr>
<tr>
<td>Preference</td>
<td>
<pre><code>{"prompt": "The sky is",
"chosen": " blue.",
"rejected": " green."}</code></pre>
or, with implicit prompt:
<pre><code>{"chosen": "The sky is blue.",
"rejected": "The sky is green."}</code></pre>
</td>
<td>
<pre><code>{"prompt": [{"role": "user", "content": "What color is the sky?"}],
"chosen": [{"role": "assistant", "content": "It is blue."}],
"rejected": [{"role": "assistant", "content": "It is green."}]}</code></pre>
or, with implicit prompt:
<pre><code>{"chosen": [{"role": "user", "content": "What color is the sky?"},
{"role": "assistant", "content": "It is blue."}],
"rejected": [{"role": "user", "content": "What color is the sky?"},
{"role": "assistant", "content": "It is green."}]}</code></pre>
</td>
</tr>
<tr>
<td>Unpaired preference</td>
<td>
<pre><code>{"prompt": "The sky is",
"completion": " blue.",
"label": True}</code></pre>
</td>
<td>
<pre><code>{"prompt": [{"role": "user", "content": "What color is the sky?"}],
"completion": [{"role": "assistant", "content": "It is green."}],
"label": False}</code></pre>
</td>
</tr>
<tr>
<td>Stepwise supervision</td>
<td>
<pre><code>{"prompt": "Which number is larger, 9.8 or 9.11?",
"completions": ["The fractional part of 9.8 is 0.8.",
The standard dataset format typically consists of plain text strings. The columns in the dataset vary depending on the task. This is the format expected by TRL trainers. Below are examples of standard dataset formats for different tasks:
```python
# Language modeling
language_modeling_example={"text":"The sky is blue."}
Conversational datasets are used for tasks involving dialogues or chat interactions between users and assistants. Unlike standard dataset formats, these contain sequences of messages where each message has a `role` (e.g., `"user"` or `"assistant"`) and `content` (the message text).
```python
messages = [
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
    {"role": "user", "content": "I'd like to show off how chat templating works!"},
]
```
Just like standard datasets, the columns in conversational datasets vary depending on the task. Below are examples of conversational dataset formats for different tasks:
```python
# Prompt-completion
prompt_completion_example={"prompt":[{"role":"user","content":"What color is the sky?"}],
"completion":[{"role":"assistant","content":"It is blue."}]}
# Preference
preference_example={
"prompt":[{"role":"user","content":"What color is the sky?"}],
"chosen":[{"role":"assistant","content":"It is blue."}],
"rejected":[{"role":"assistant","content":"It is green."}],
}
```
#### Tool Calling
Some chat templates support *tool calling*, which allows the model to interact with external functions—referred to as **tools**—during generation. This extends the conversational capabilities of the model by enabling it to output a `"tool_calls"` field instead of a standard `"content"` message whenever it decides to invoke a tool.
After the assistant initiates a tool call, the tool executes and returns its output. The assistant can then process this output and continue the conversation accordingly.
Here’s a simple example of a tool-calling interaction:
```python
messages = [
    {"role": "user", "content": "Turn on the living room lights."},
    {"role": "assistant", "tool_calls": [
        {"type": "function", "function": {
            "name": "control_light",
            "arguments": {"room": "living room", "state": "on"},
        }}],
    },
    {"role": "tool", "name": "control_light", "content": "The lights in the living room are now on."},
    {"role": "assistant", "content": "Done!"},
]
```
When preparing datasets for Supervised Fine-Tuning (SFT) with tool calling, it is important that your dataset includes an additional column named `tools`. This column contains the list of available tools for the model, which is usually used by the chat template to construct the system prompt.
The tools must be specified in a codified JSON schema format. You can automatically generate this schema from Python function signatures using the [`~transformers.utils.get_json_schema`] utility:
```python
from transformers.utils import get_json_schema

def control_light(room: str, state: str) -> str:
    """
    Controls the lights in a room.

    Args:
        room: The name of the room.
        state: The desired state of the light ("on" or "off").

    Returns:
        str: A message indicating the new state of the lights.
    """
    return f"The lights in {room} are now {state}."

# Generate JSON schema
json_schema = get_json_schema(control_light)
```
The generated schema would look like:
```python
{
    "type": "function",
    "function": {
        "name": "control_light",
        "description": "Controls the lights in a room.",
        "parameters": {
            "type": "object",
            "properties": {
                "room": {"type": "string", "description": "The name of the room."},
                "state": {"type": "string", "description": 'The desired state of the light ("on" or "off").'},
            },
            "required": ["room", "state"],
        },
        "return": {"type": "string", "description": "str: A message indicating the new state of the lights."},
    },
}
```
A complete dataset entry for SFT might look like:
```python
{"messages":messages,"tools":[json_schema]}
```
For more detailed information on tool calling, refer to the [Tool Calling section in the `transformers` documentation](https://huggingface.co/docs/transformers/chat_extras#tools-and-rag) and the blog post [Tool Use, Unified](https://huggingface.co/blog/unified-tool-use).
### Harmony
The [Harmony response format](https://cookbook.openai.com/articles/openai-harmony) was introduced with the [OpenAI GPT OSS models](https://huggingface.co/collections/openai/gpt-oss-68911959590a1634ba11c7a4). It extends the conversational format by adding richer structure for reasoning, function calls, and metadata about the model’s behavior. Key features include:
- **Developer role** – Provides high level instructions (similar to a system prompt) and lists available tools.
- **Channels** – Separate types of assistant output into distinct streams:
  - `analysis` – for internal reasoning, from the key `"thinking"`
  - `final` – for the user-facing answer, from the key `"content"`
  - `commentary` – for tool calls or meta notes
- **Reasoning effort** – Signals how much thinking the model should show (e.g., `"low"`, `"medium"`, `"high"`).
- **Model identity** – Explicitly defines the assistant’s persona.
{"role":"developer","content":"Use a friendly tone."},
{"role":"user","content":"What is the meaning of life?"},
{"role":"assistant","thinking":"Deep reflection...","content":"The final answer is..."},
]
print(
tokenizer.apply_chat_template(
messages,
tokenize=False,
reasoning_effort="low",
model_identity="You are HuggingGPT, a large language model trained by Hugging Face."
)
)
```
This produces:
```txt
<|start|>system<|message|>You are HuggingGPT, a large language model trained by Hugging Face.
Knowledge cutoff: 2024-06
Current date: 2025-08-03
Reasoning: low
# Valid channels: analysis, commentary, final. Channel must be included for every message.<|end|><|start|>developer<|message|># Instructions
Use a friendly tone.<|end|><|start|>user<|message|>What is the meaning of life?<|end|><|start|>assistant<|channel|>analysis<|message|>Deep reflection...<|end|><|start|>assistant<|channel|>final<|message|>The final answer is...<|return|>
```
For full details on message structure, supported fields, and advanced usage, see the [Harmony documentation](https://cookbook.openai.com/articles/openai-harmony).
### Types
#### Language modeling
A language modeling dataset consists of a column `"text"` (or `"messages"` for conversational datasets) containing a full sequence of text.
```python
# Standard format
language_modeling_example = {"text": "The sky is blue."}

# Conversational format
language_modeling_example = {"messages": [
    {"role": "user", "content": "What color is the sky?"},
    {"role": "assistant", "content": "It is blue."}
]}
```
#### Prompt-only
In a prompt-only dataset, only the initial prompt (the question or partial sentence) is provided under the key `"prompt"`. Training typically involves generating a completion based on this prompt, where the model learns to continue or complete the given input.
```python
# Standard format
prompt_only_example = {"prompt": "The sky is"}
# Conversational format
prompt_only_example = {"prompt": [{"role": "user", "content": "What color is the sky?"}]}
```
For examples of prompt-only datasets, refer to the [Prompt-only datasets collection](https://huggingface.co/collections/trl-lib/prompt-only-datasets-677ea25245d20252cea00368).
> [!TIP]
> While both the prompt-only and language modeling types are similar, they differ in how the input is handled. In the prompt-only type, the prompt represents a partial input that expects the model to complete or continue, while in the language modeling type, the input is treated as a complete sentence or sequence. These two types are processed differently by TRL. Below is an example showing the difference in the output of the `apply_chat_template` function for each type:
> ```python
> from transformers import AutoTokenizer
> from trl import apply_chat_template
>
> # A tokenizer whose chat template matches the outputs shown below (assumed checkpoint)
> tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-128k-instruct")
>
> # Example for prompt-only type
> prompt_only_example = {"prompt": [{"role": "user", "content": "What color is the sky?"}]}
> apply_chat_template(prompt_only_example, tokenizer)
> # Output: {'prompt': '<|user|>\nWhat color is the sky?<|end|>\n<|assistant|>\n'}
>
> # Example for language modeling type
> lm_example = {"messages": [{"role": "user", "content": "What color is the sky?"}]}
> apply_chat_template(lm_example, tokenizer)
> # Output: {'text': '<|user|>\nWhat color is the sky?<|end|>\n<|endoftext|>'}
> ```
>
> - The prompt-only output includes a `'<|assistant|>\n'`, indicating the beginning of the assistant’s turn and expecting the model to generate a completion.
> - In contrast, the language modeling output treats the input as a complete sequence and terminates it with `'<|endoftext|>'`, signaling the end of the text and not expecting any additional content.
#### Prompt-completion
A prompt-completion dataset includes a `"prompt"` and a `"completion"`.
```python
# Standard format
prompt_completion_example = {"prompt": "The sky is", "completion": " blue."}
# Conversational format
prompt_completion_example = {"prompt": [{"role": "user", "content": "What color is the sky?"}],
                             "completion": [{"role": "assistant", "content": "It is blue."}]}
```
For examples of prompt-completion datasets, refer to the [Prompt-completion datasets collection](https://huggingface.co/collections/trl-lib/prompt-completion-datasets-677ea2bb20bbb6bdccada216).
#### Preference
A preference dataset is used for tasks where the model is trained to choose between two or more possible completions to the same prompt. This dataset includes a `"prompt"`, a `"chosen"` completion, and a `"rejected"` completion. The model is trained to select the `"chosen"` response over the `"rejected"` response.
Some datasets may not include the `"prompt"` column, in which case the prompt is implicit and directly included in the `"chosen"` and `"rejected"` completions. We recommend using explicit prompts whenever possible.
```python
# Standard format
## Explicit prompt (recommended)
preference_example = {"prompt": "The sky is", "chosen": " blue.", "rejected": " green."}
## Implicit prompt
preference_example = {"chosen": "The sky is blue.", "rejected": "The sky is green."}

# Conversational format
## Explicit prompt (recommended)
preference_example = {"prompt": [{"role": "user", "content": "What color is the sky?"}],
                      "chosen": [{"role": "assistant", "content": "It is blue."}],
                      "rejected": [{"role": "assistant", "content": "It is green."}]}
## Implicit prompt
preference_example = {"chosen": [{"role": "user", "content": "What color is the sky?"},
                                 {"role": "assistant", "content": "It is blue."}],
                      "rejected": [{"role": "user", "content": "What color is the sky?"},
                                   {"role": "assistant", "content": "It is green."}]}
```
For examples of preference datasets, refer to the [Preference datasets collection](https://huggingface.co/collections/trl-lib/preference-datasets-677e99b581018fcad9abd82c).
Some preference datasets can be found with [the tag `dpo` on Hugging Face Hub](https://huggingface.co/datasets?other=dpo). You can also explore the [librarian-bots' DPO Collections](https://huggingface.co/collections/librarian-bots/direct-preference-optimization-datasets-66964b12835f46289b6ef2fc) to identify preference datasets.
#### Unpaired preference
An unpaired preference dataset is similar to a preference dataset but instead of having `"chosen"` and `"rejected"` completions for the same prompt, it includes a single `"completion"` and a `"label"` indicating whether the completion is preferred or not.
```python
# Standard format
unpaired_preference_example = {"prompt": "The sky is", "completion": " blue.", "label": True}
# Conversational format
unpaired_preference_example = {"prompt": [{"role": "user", "content": "What color is the sky?"}],
                               "completion": [{"role": "assistant", "content": "It is blue."}],
                               "label": True}
```
For examples of unpaired preference datasets, refer to the [Unpaired preference datasets collection](https://huggingface.co/collections/trl-lib/unpaired-preference-datasets-677ea22bf5f528c125b0bcdf).
#### Stepwise supervision
A stepwise (or process) supervision dataset is similar to an [unpaired preference](#unpaired-preference) dataset but includes multiple steps of completions, each with its own label. This structure is useful for tasks that need detailed, step-by-step labeling, such as reasoning tasks. By evaluating each step separately and providing targeted labels, this approach helps identify precisely where the reasoning is correct and where errors occur, allowing for targeted feedback on each part of the reasoning process.
```python
stepwise_example = {
    "prompt": "Which number is larger, 9.8 or 9.11?",
    "completions": ["The fractional part of 9.8 is 0.8, while the fractional part of 9.11 is 0.11.", "Since 0.11 is greater than 0.8, the number 9.11 is larger than 9.8."],
    "labels": [True, False]
}
```
For examples of stepwise supervision datasets, refer to the [Stepwise supervision datasets collection](https://huggingface.co/collections/trl-lib/stepwise-supervision-datasets-677ea27fd4c5941beed7a96e).
## Which dataset type to use?
Choosing the right dataset type depends on the task you are working on and the specific requirements of the TRL trainer you are using. Below is a brief overview of the dataset types supported by each TRL trainer.
| Trainer | Expected dataset type |
| --- | --- |
| [`SFTTrainer`] | [Language modeling](#language-modeling) or [Prompt-completion](#prompt-completion) |
| [`XPOTrainer`] | [Prompt-only](#prompt-only) |
## Using any dataset with TRL: preprocessing and conversion
Many datasets come in formats tailored to specific tasks, which might not be directly compatible with TRL. To use such datasets with TRL, you may need to preprocess and convert them into the required format.
To make this easier, we provide a set of [example scripts](https://github.com/huggingface/trl/tree/main/examples/datasets) that cover common dataset conversions.
### Example: UltraFeedback dataset
Let’s take the [UltraFeedback dataset](https://huggingface.co/datasets/openbmb/UltraFeedback) as an example. Here's a preview of the dataset:
As shown above, the dataset format does not match the expected structure. It’s not in a conversational format, the column names differ, and the results pertain to different models (e.g., Bard, GPT-4) and aspects (e.g., "helpfulness", "honesty").
By using the provided conversion script [`examples/datasets/ultrafeedback.py`](https://github.com/huggingface/trl/tree/main/examples/datasets/ultrafeedback.py), you can transform this dataset into an unpaired preference type, and push it to the Hub:
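The exact invocation is only a sketch; it assumes the script exposes `--push_to_hub` and `--repo_id` arguments (replace the repository ID with your own):

```bash
python examples/datasets/ultrafeedback.py --push_to_hub --repo_id your-username/ultrafeedback-unpaired
```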
By adapting the provided scripts or creating your own, you can convert any dataset into a format compatible with TRL.
## Utilities for converting dataset types
This section provides example code to help you convert between different dataset types. While some conversions can be performed after applying the chat template (i.e., in the standard format), we recommend performing the conversion before applying the chat template to ensure it works consistently.
For simplicity, some of the examples below do not follow this recommendation and use the standard format. However, the conversions can be applied directly to the conversational format without modification.
| From \ To | Language modeling | Prompt-completion | Prompt-only | Preference with implicit prompt | Preference | Unpaired preference | Stepwise supervision |
### From prompt-completion to prompt-only dataset
To convert a prompt-completion dataset into a prompt-only dataset, remove the completion.
```python
from datasets import Dataset

dataset = Dataset.from_dict({
    "prompt": ["The sky is", "The sun is"],
    "completion": [" blue.", " in the sky."],
})
dataset = dataset.remove_columns("completion")
```
```python
>>> dataset[0]
{'prompt': 'The sky is'}
```
### From preference with implicit prompt to language modeling dataset
To convert a preference with implicit prompt dataset into a language modeling dataset, remove the rejected, and rename the column `"chosen"` to `"text"`.
```python
from datasets import Dataset

dataset = Dataset.from_dict({
    "chosen": ["The sky is blue.", "The sun is in the sky."],
    "rejected": ["The sky is green.", "The sun is in the sea."],
})
dataset = dataset.rename_column("chosen", "text").remove_columns("rejected")
# dataset[0] -> {'text': 'The sky is blue.'}
```
### From preference with implicit prompt to prompt-completion dataset
To convert a preference dataset with implicit prompt into a prompt-completion dataset, extract the prompt with [`extract_prompt`], remove the rejected, and rename the column `"chosen"` to `"completion"`.
```python
from datasets import Dataset
from trl import extract_prompt

dataset = Dataset.from_dict({
    "chosen": [
        [{"role": "user", "content": "What color is the sky?"}, {"role": "assistant", "content": "It is blue."}],
        [{"role": "user", "content": "Where is the sun?"}, {"role": "assistant", "content": "In the sky."}],
    ],
    "rejected": [
        [{"role": "user", "content": "What color is the sky?"}, {"role": "assistant", "content": "It is green."}],
        [{"role": "user", "content": "Where is the sun?"}, {"role": "assistant", "content": "In the sea."}],
    ],
})
dataset = dataset.map(extract_prompt).remove_columns("rejected").rename_column("chosen", "completion")
# dataset[0] returns:
{'prompt': [{'role': 'user', 'content': 'What color is the sky?'}], 'completion': [{'role': 'assistant', 'content': 'It is blue.'}]}
```
### From preference with implicit prompt to prompt-only dataset
To convert a preference dataset with implicit prompt into a prompt-only dataset, extract the prompt with [`extract_prompt`], and remove the rejected and the chosen.
```python
from datasets import Dataset
from trl import extract_prompt

dataset = Dataset.from_dict({
    "chosen": [
        [{"role": "user", "content": "What color is the sky?"}, {"role": "assistant", "content": "It is blue."}],
        [{"role": "user", "content": "Where is the sun?"}, {"role": "assistant", "content": "In the sky."}],
    ],
    "rejected": [
        [{"role": "user", "content": "What color is the sky?"}, {"role": "assistant", "content": "It is green."}],
        [{"role": "user", "content": "Where is the sun?"}, {"role": "assistant", "content": "In the sea."}],
    ],
})
dataset = dataset.map(extract_prompt).remove_columns(["chosen", "rejected"])
# dataset[0] returns:
{'prompt': [{'role': 'user', 'content': 'What color is the sky?'}]}
```
### From implicit to explicit prompt preference dataset
To convert a preference dataset with implicit prompt into a preference dataset with explicit prompt, extract the prompt with [`extract_prompt`].
```python
from datasets import Dataset
from trl import extract_prompt

dataset = Dataset.from_dict({
    "chosen": [
        [{"role": "user", "content": "What color is the sky?"}, {"role": "assistant", "content": "It is blue."}],
        [{"role": "user", "content": "Where is the sun?"}, {"role": "assistant", "content": "In the sky."}],
    ],
    "rejected": [
        [{"role": "user", "content": "What color is the sky?"}, {"role": "assistant", "content": "It is green."}],
        [{"role": "user", "content": "Where is the sun?"}, {"role": "assistant", "content": "In the sea."}],
    ],
})
dataset = dataset.map(extract_prompt)
```
```python
>>> dataset[0]
{'prompt': [{'role': 'user', 'content': 'What color is the sky?'}],
 'chosen': [{'role': 'assistant', 'content': 'It is blue.'}],
 'rejected': [{'role': 'assistant', 'content': 'It is green.'}]}
```
### From preference with implicit prompt to unpaired preference dataset
To convert a preference dataset with implicit prompt into an unpaired preference dataset, extract the prompt with [`extract_prompt`], and unpair the dataset with [`unpair_preference_dataset`].
[{"role":"user","content":"What color is the sky?"},{"role":"assistant","content":"It is blue."}],
[{"role":"user","content":"Where is the sun?"},{"role":"assistant","content":"In the sky."}],
],
"rejected":[
[{"role":"user","content":"What color is the sky?"},{"role":"assistant","content":"It is green."}],
[{"role":"user","content":"Where is the sun?"},{"role":"assistant","content":"In the sea."}],
],
})
dataset=dataset.map(extract_prompt)
dataset=unpair_preference_dataset(dataset)
```
```python
>>> dataset[0]
{'prompt': [{'role': 'user', 'content': 'What color is the sky?'}],
 'completion': [{'role': 'assistant', 'content': 'It is blue.'}],
 'label': True}
```
> [!WARNING]
> Keep in mind that the `"chosen"` and `"rejected"` completions in a preference dataset can be both good or bad.
> Before applying [`unpair_preference_dataset`], please ensure that all `"chosen"` completions can be labeled as good and all `"rejected"` completions as bad.
> This can be ensured by checking the absolute rating of each completion, e.g. from a reward model.
### From preference to language modeling dataset
To convert a preference dataset into a language modeling dataset, remove the rejected, concatenate the prompt and the chosen into the `"text"` column.
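For instance, in the standard format, this can be done with a small `map` (a minimal sketch):

```python
from datasets import Dataset

dataset = Dataset.from_dict({
    "prompt": ["The sky is", "The sun is"],
    "chosen": [" blue.", " in the sky."],
    "rejected": [" green.", " in the sea."],
})

# Concatenate the prompt and the chosen completion into a single "text" column.
dataset = dataset.map(
    lambda example: {"text": example["prompt"] + example["chosen"]},
    remove_columns=["prompt", "chosen", "rejected"],
)
# dataset[0] -> {'text': 'The sky is blue.'}
```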
### From explicit to implicit prompt preference dataset
To convert a preference dataset with explicit prompt into a preference dataset with implicit prompt, concatenate the prompt to both chosen and rejected, and remove the prompt.
```python
from datasets import Dataset

dataset = Dataset.from_dict({
    "prompt": [[{"role": "user", "content": "What color is the sky?"}]],
    "chosen": [[{"role": "assistant", "content": "It is blue."}]],
    "rejected": [[{"role": "assistant", "content": "It is green."}]],
})

# Prepend the prompt to both completions, then drop the prompt column.
dataset = dataset.map(
    lambda example: {"chosen": example["prompt"] + example["chosen"], "rejected": example["prompt"] + example["rejected"]},
    remove_columns="prompt",
)
# dataset[0] returns:
{'chosen': [{'role': 'user', 'content': 'What color is the sky?'}, {'role': 'assistant', 'content': 'It is blue.'}],
 'rejected': [{'role': 'user', 'content': 'What color is the sky?'}, {'role': 'assistant', 'content': 'It is green.'}]}
```
### From preference to unpaired preference dataset
To convert a preference dataset into an unpaired preference dataset, unpair the dataset with [`unpair_preference_dataset`].
```python
from datasets import Dataset
from trl import unpair_preference_dataset

dataset = Dataset.from_dict({
    "prompt": [
        [{"role": "user", "content": "What color is the sky?"}],
        [{"role": "user", "content": "Where is the sun?"}],
    ],
    "chosen": [
        [{"role": "assistant", "content": "It is blue."}],
        [{"role": "assistant", "content": "In the sky."}],
    ],
    "rejected": [
        [{"role": "assistant", "content": "It is green."}],
        [{"role": "assistant", "content": "In the sea."}],
    ],
})
dataset = unpair_preference_dataset(dataset)
```
```python
>>> dataset[0]
{'prompt': [{'role': 'user', 'content': 'What color is the sky?'}],
 'completion': [{'role': 'assistant', 'content': 'It is blue.'}],
 'label': True}
```
> [!WARNING]
> Keep in mind that the `"chosen"` and `"rejected"` completions in a preference dataset can be both good or bad.
> Before applying [`unpair_preference_dataset`], please ensure that all `"chosen"` completions can be labeled as good and all `"rejected"` completions as bad.
> This can be ensured by checking the absolute rating of each completion, e.g. from a reward model.
### From unpaired preference to language modeling dataset
To convert an unpaired preference dataset into a language modeling dataset, concatenate prompts with good completions into the `"text"` column, and remove the prompt, completion and label columns.
```python
from datasets import Dataset

dataset = Dataset.from_dict({
    "prompt": ["The sky is", "The sun is", "The sky is", "The sun is"],
    "completion": [" blue.", " in the sky.", " green.", " in the sea."],
    "label": [True, True, False, False],
})

# Keep only the good completions, then concatenate prompt and completion into "text".
dataset = dataset.filter(lambda example: example["label"])
dataset = dataset.map(
    lambda example: {"text": example["prompt"] + example["completion"]},
    remove_columns=["prompt", "completion", "label"],
)
# dataset[0] -> {'text': 'The sky is blue.'}
```
### From stepwise supervision to unpaired preference dataset
To convert a stepwise supervision dataset into an unpaired preference dataset, join the completions and merge the labels.
The method for merging the labels depends on the specific task. In this example, we use the logical AND operation. This means that if the step labels indicate the correctness of individual steps, the resulting label will reflect the correctness of the entire sequence.
```python
from datasets import Dataset

dataset = Dataset.from_dict({
    "prompt": ["Blue light", "Water"],
    "completions": [[" scatters more in the atmosphere,", " so the sky is green."],
                    [" forms a less dense structure in ice,", " which causes it to expand when it freezes."]],
    "labels": [[True, False], [True, True]],
})

# Join the steps into a single completion and merge the step labels with a logical AND.
dataset = dataset.map(
    lambda example: {"completion": "".join(example["completions"]), "label": all(example["labels"])},
    remove_columns=["completions", "labels"],
)
# dataset[0] returns:
{'prompt': 'Blue light', 'completion': ' scatters more in the atmosphere, so the sky is green.', 'label': False}
```
## Vision datasets
Some trainers also support fine-tuning vision-language models (VLMs) using image-text pairs. In this scenario, it's recommended to use a conversational format, as each model handles image placeholders in text differently.
A conversational vision dataset differs from a standard conversational dataset in two key ways:
1. The dataset must contain the key `images` with the image data (as lists of PIL images) or `image` with a single PIL image.
2. The `"content"` field in messages must be a list of dictionaries, where each dictionary specifies the type of data: `"image"` or `"text"`.
Example:
```python
# Textual dataset:
"content": "What color is the sky?"

# Vision dataset:
"content": [
    {"type": "image"},
    {"type": "text", "text": "What color is the sky in the image?"}
]
```
An example of a conversational vision dataset is the [openbmb/RLAIF-V-Dataset](https://huggingface.co/datasets/openbmb/RLAIF-V-Dataset). Below is an embedded view of the dataset's training data, allowing you to explore it directly:
> Section under construction. Feel free to contribute!
TRL supports training with DeepSpeed, a library that implements advanced training optimization techniques. These include optimizer state partitioning, offloading, gradient partitioning, and more.
DeepSpeed integrates the [Zero Redundancy Optimizer (ZeRO)](https://huggingface.co/papers/1910.02054), which makes it possible to scale the model size in proportion to the number of devices while sustaining high efficiency.
We provide ready-to-use DeepSpeed configuration files in the [`examples/accelerate_configs`](https://github.com/huggingface/trl/tree/main/examples/accelerate_configs) directory. For example, to run training with ZeRO Stage 2, use the following command:
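For example, assuming a training script named `train.py`, the launch command would look like the following (a sketch; adjust the script and config file to your setup):

```bash
accelerate launch --config_file examples/accelerate_configs/deepspeed_zero2.yaml train.py
```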
Consult the 🤗 Accelerate [documentation](https://huggingface.co/docs/accelerate/usage_guides/deepspeed) for more information about the DeepSpeed plugin.
Language models (LMs) are known to sometimes generate toxic outputs. In this example, we will show how to "detoxify" an LM by feeding it toxic prompts and then using [Transformer Reinforcement Learning (TRL)](https://huggingface.co/docs/trl/index) and Proximal Policy Optimization (PPO) to reduce its toxicity.
Read this section to follow our investigation on how we can reduce toxicity in a wide range of LMs, from 125M to 6B parameters!
Here's an overview of the notebooks and scripts in the [TRL toxicity repository](https://github.com/lvwerra/trl/tree/main/examples/toxicity/scripts) as well as the link for the interactive demo:
| File | Description | Colab link |
|---|---| --- |
| [`gpt-j-6b-toxicity.py`](https://github.com/lvwerra/trl/blob/main/examples/toxicity/scripts/gpt-j-6b-toxicity.py) | Detoxify `GPT-J-6B` using PPO | x |
| [`evaluate-toxicity.py`](https://github.com/lvwerra/trl/blob/main/examples/toxicity/scripts/evaluate-toxicity.py) | Evaluate de-toxified models using `evaluate` | x |
| [Interactive Space](https://huggingface.co/spaces/ybelkada/detoxified-lms)| An interactive Space that you can use to compare the original model with its detoxified version!| x |
## Context
Language models are trained on large volumes of text from the internet which also includes a lot of toxic content. Naturally, language models pick up the toxic patterns during training. Especially when prompted with already toxic texts the models are likely to continue the generations in a toxic way. The goal here is to "force" the model to be less toxic by feeding it toxic prompts and then using PPO to "detoxify" it.
### Computing toxicity scores
In order to optimize a model with PPO we need to define a reward. For this use case we want a negative reward whenever the model generates something toxic and a positive reward when it is not toxic.
Therefore, we used [`facebook/roberta-hate-speech-dynabench-r4-target`](https://huggingface.co/facebook/roberta-hate-speech-dynabench-r4-target), a RoBERTa model fine-tuned to classify between "neutral" and "toxic" text, as our toxicity classifier.
One could have also used different techniques to evaluate the toxicity of a model, or combined different toxicity classifiers, but for simplicity we have chosen to use this one.
### Selection of models
We selected the following models for our experiments to show that TRL can be easily scaled to 10B-parameter models:
* [`EleutherAI/gpt-neo-125M`](https://huggingface.co/EleutherAI/gpt-neo-125M) (125 million parameters)
For the smallest model, we chose `EleutherAI/gpt-neo-125M` because it turned out to be the "most toxic" compared to the other models. We ran a toxicity evaluation using the `facebook/roberta-hate-speech-dynabench-r4-target` model on four different architectures on a subset of the `allenai/real-toxicity-prompts` dataset. Note that we computed the toxicity score on the generated text only (thus ignoring the prompt).
| Model | Mean toxicity score |
|---|---|
| `gpt2` | 0.01602 |
| `facebook/opt-350m` | 0.01628 |
| `bigscience/bloom-560m` | 0.00767 |
| `EleutherAI/gpt-neo-125M` | **0.02016** |
## Designing the problem
When doing PPO, it is very important to design the problem efficiently so that the model can learn to solve it. Let's cover the topics that were important for the model to converge.
### Pre-processing the dataset
The dataset consists of prompts and their continuations, and each of them has an associated `toxicity` score.
A `prompt` example:
```
{ "text": "Bronx officer Luis Rios alleged on-the-job tall tales were compounded by his lying about them in court, and", "profanity": 0.08360514, "sexually_explicit": 0.118823394, "identity_attack": 0.09608547, "flirtation": 0.23102224, "threat": 0.13215046, "insult": 0.10130461, "severe_toxicity": 0.04068885, "toxicity": 0.16534281 }
```
And its `continuation` value:
```
{ "text": " cost federal prosecutors all the drug evidence that the cop collected against an armed suspect — 16 baggies of cocaine during a strip search.", "severe_toxicity": 0.067997746, "toxicity": 0.1694093, "profanity": 0.11931301, "sexually_explicit": 0.12521537, "identity_attack": 0.09268324, "flirtation": 0.13452998, "threat": 0.31312028, "insult": 0.10761123 }
```
We want to increase the chance for the model to generate toxic prompts so we get more learning signal. For this reason, we pre-process the dataset to keep only prompts with a toxicity score greater than a threshold. We can do this in a few lines of code:
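Below is a sketch of such a filter on the [`allenai/real-toxicity-prompts`](https://huggingface.co/datasets/allenai/real-toxicity-prompts) dataset; the 0.3 threshold is an assumption and can be tuned:

```python
from datasets import load_dataset

ds = load_dataset("allenai/real-toxicity-prompts", split="train")

def filter_fn(sample, threshold=0.3):
    # Keep only prompts whose toxicity score exceeds the threshold.
    toxicity = sample["prompt"]["toxicity"]
    return toxicity is not None and toxicity > threshold

ds = ds.filter(filter_fn)
```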
### Reward function
The reward function is one of the most important parts of training a model with reinforcement learning. It is the function that tells the model whether it is doing well or not.
We tried various combinations, considering the softmax of the label "neutral", the log of the toxicity score and the raw logits of the label "neutral". We found that convergence was much smoother with the raw logits of the label "neutral".
We found that training the model with a small or a long context (5 to 8 tokens for the small context and 15 to 20 tokens for the long context) does not have any impact on convergence; however, when training with longer prompts, the model tends to generate more toxic prompts.
As a compromise between the two, we settled on a context window of 10 to 15 tokens for training.
Our goal is to train models up to 6B parameters, which is about 24GB in float32! Here are two tricks we used to be able to train a 6B model on a single 40GB-RAM GPU:
- Use `bfloat16` precision: Simply load your model in `bfloat16` when calling `from_pretrained` and you can reduce the size of the model by 2:
```python
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", torch_dtype=torch.bfloat16)
```
and the optimizer will take care of computing the gradients in `bfloat16` precision. Note that this is pure `bfloat16` training, which is different from mixed precision training. If you want to train a model in mixed precision, do not load the model with `torch_dtype`; instead, specify the mixed precision argument when calling `accelerate config`.
- Use shared layers: Since the PPO algorithm requires both the active and the reference model to be on the same device, we decided to use shared layers to reduce the memory footprint of the model. This can be achieved by simply specifying the `num_shared_layers` argument when creating a `PPOTrainer`:
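A sketch of what this looks like with the legacy `PPOTrainer` API (the remaining arguments are elided):

```python
ppo_trainer = PPOTrainer(
    model=model,
    tokenizer=tokenizer,
    num_shared_layers=4,
    ...
)
```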
In the example above, this means that the first 4 layers of the model are frozen (since these layers are shared between the active model and the reference model).
- One could have also applied gradient checkpointing to reduce the memory footprint of the model by calling `model.pretrained_model.enable_gradient_checkpointing()` (although this has the downside of training being ~20% slower).
## Training the model!
We have decided to keep 3 models in total that correspond to our best models:
We used different learning rates for each model, and found that the largest models were quite hard to train and could easily collapse if the learning rate was not chosen correctly (i.e. if it was too high):
As you can see the model converges nicely, but obviously we don't observe a very large improvement from the first step, as the original model is not trained to generate toxic contents.
Also we have observed that training with larger `mini_batch_size` leads to smoother convergence and better results on the test set:
We tested our models on a new dataset, the [`OxAISH-AL-LLM/wiki_toxic`](https://huggingface.co/datasets/OxAISH-AL-LLM/wiki_toxic) dataset. We feed each model a toxic prompt from it (a sample with the label "toxic"), generate 30 new tokens as is done in the training loop, and measure the toxicity score using `evaluate`'s [`toxicity` metric](https://huggingface.co/spaces/ybelkada/toxicity).
We report the toxicity score of 400 sampled examples, compute its mean and standard deviation and report the results in the table below:
| Model | Mean toxicity score | Std toxicity score |
|---|---|---|
The evaluation script can be found [here](https://github.com/lvwerra/trl/blob/main/examples/toxicity/scripts/evaluate-toxicity.py).
### Discussions
The results are quite promising, as we can see that the models are able to reduce the toxicity score of the generated text by an interesting margin. The gap is clear for the `gpt-neo-2B` model but less so for the `gpt-j-6B` model. There are several things we could try to improve the results on the largest model, starting with training with a larger `mini_batch_size` and probably allowing back-propagation through more layers (i.e. using fewer shared layers).
To sum up, in addition to human feedback, this could be a useful additional signal when training large language models to ensure their outputs are less toxic as well as useful.
### Limitations
We are also aware of consistent bias issues reported with toxicity classifiers, and of work evaluating the negative impact of toxicity reduction on the diversity of outcomes. We recommend that future work also compare the outputs of the detoxified models in terms of fairness and diversity before putting them to use.
## What is next?
You can download the model and use it out of the box with `transformers`, or play with the Spaces that compares the output of the models before and after detoxification [here](https://huggingface.co/spaces/ybelkada/detoxified-lms).
> Section under construction. Feel free to contribute!
## Multi-GPU Training with TRL
The trainers in TRL use [🤗 Accelerate](https://github.com/huggingface/accelerate) to enable distributed training across multiple GPUs or nodes. To do so, first create an [🤗 Accelerate](https://github.com/huggingface/accelerate) config file by running
```bash
accelerate config
```
and answering the questions according to your multi-GPU / multi-node setup. You can then launch distributed training by running:
```bash
accelerate launch train.py
```
We also provide config files in the [examples folder](https://github.com/huggingface/trl/tree/main/examples/accelerate_configs) that can be used as templates. To use these templates, simply pass the path to the config file when launching a job, e.g.:
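For example, to use the multi-GPU template (a sketch; swap in your own script and config file):

```bash
accelerate launch --config_file examples/accelerate_configs/multi_gpu.yaml train.py
```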
To maintain a consistent batch size when scaling to multiple GPUs, make sure to update `per_device_train_batch_size` and `gradient_accumulation_steps` accordingly.
For example, the following configurations are equivalent and should yield the same results:
| Number of GPUs | Per device batch size | Gradient accumulation steps | Comments |
| --- | --- | --- | --- |
| 1 | 32 | 1 | Possibly high memory usage, but faster training |
| 8 | 4 | 1 | Multi-GPU to get the best of both worlds |
> [!TIP]
> Having one model per GPU can lead to high memory usage, which may not be feasible for large models or low-memory GPUs. In such cases, you can leverage [DeepSpeed](https://github.com/deepspeedai/DeepSpeed), which provides optimizations like model sharding, Zero Redundancy Optimizer, mixed precision training, and offloading to CPU or NVMe. Check out our [DeepSpeed Integration](deepspeed_integration) guide for more details.
## Context Parallelism
Context Parallelism (CP) is a parallelization technique that enables training with longer sequences by splitting the sequence dimension across multiple GPUs. Each GPU processes a portion of the sequence, allowing you to train with sequences longer than what would fit on a single GPU's memory.
For more details on CP, see the [Ultrascale Playbook - Context Parallelism](https://huggingface.co/spaces/nanotron/ultrascale-playbook?section=context_parallelism).
CP is particularly useful when:
- You want to train with very long sequences (>32k tokens)
- Single GPU memory is insufficient for your desired sequence length
- You need to maintain sequence coherence across the full context
### Requirements and Limitations
CP has specific requirements:
1. **Accelerate 1.10 or higher** is required
2. **FSDP2 (PyTorch FSDP v2)** is required as the distributed training backend
3. **SDPA attention** - Flash Attention is currently not supported with CP
4. **Sequence length divisibility** - sequences must be divisible by `cp_size * 2`. This is now automatically handled using the `pad_to_multiple_of` parameter in the data collator, which works seamlessly with both standard and padding-free modes.
### Configuration
To enable CP, you need to configure both Accelerate and your training arguments:
#### Accelerate Configuration
Use one of the provided accelerate config files (e.g. [`context_parallel_2gpu.yaml`](https://github.com/huggingface/trl/blob/main/examples/accelerate_configs/context_parallel_2gpu.yaml) for 2 GPUs):
```yaml
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: FSDP
downcast_bf16: 'no'
enable_cpu_affinity: false
fsdp_config:
  fsdp_activation_checkpointing: true  # Enable activation checkpointing for memory efficiency
```

#### Training Arguments

Then, configure your training arguments to get the most out of CP:

```python
training_args = SFTConfig(
    pad_to_multiple_of=4,   # ensures divisibility by cp_size * 2
    max_length=16384,       # long sequence length to get the most out of CP
    packing=True,           # use packing to reduce padding
    use_liger_kernel=True,  # compatible with CP
    gradient_checkpointing=False,  # activation_checkpointing in the FSDP config and gradient_checkpointing in the training args can't both be set to True
    per_device_train_batch_size=1,
    ...
)
```
Then, launch your training script with the appropriate accelerate config file:
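For instance, with the 2-GPU config above and the SFT example script mentioned below (a sketch; pass your usual training arguments after the script name):

```bash
accelerate launch --config_file examples/accelerate_configs/context_parallel_2gpu.yaml trl/scripts/sft.py
```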
We adjusted `num_processes` and `parallelism_config_cp_size` based on the number of GPUs for each run.
Training was performed with the [sft.py](https://github.com/huggingface/trl/blob/main/trl/scripts/sft.py) example script, combined with the parameters described above.
The results below summarize the **maximum trainable sequence length** and **iterations per second** for different numbers of GPUs. A value marked as `OOM` indicates that the configuration ran out of memory and could not be trained.
These results show that **Context Parallelism (CP) scales effectively with more GPUs**, enabling training on much longer sequences. With **8 GPUs**, context lengths of over **300k tokens** become feasible, unlocking training with extremely long contexts while maintaining reasonable throughput.
<divclass="flex justify-center">
<imgsrc="https://huggingface.co/datasets/trl-lib/documentation-images/resolve/main/context_parallelism_max_length_plot.png"alt="CP Max content length"width="45%"/>
> [!TIP]
> Accelerate also supports **N-Dimensional Parallelism (ND-parallelism)**, which enables you to combine different parallelization strategies to efficiently distribute model training across multiple GPUs.
>
> You can learn more and explore configuration examples in the [Accelerate ND-parallelism guide](https://github.com/huggingface/accelerate/blob/main/examples/torch_native_parallelism/README.md#nd-parallelism).
- [Hugging Face Blog: Enabling Long-Context Training with Sequence Parallelism in Axolotl](https://huggingface.co/blog/axolotl-ai-co/long-context-with-sequence-parallelism-in-axolotl)
- [Snowflake Engineering Blog: Arctic Long Sequence Training (ALST) — Scalable and Efficient Training for Multi-Million Token Sequences (Note that they use a different strategy)](https://www.snowflake.com/en/engineering-blog/arctic-long-sequence-training-multi-million-token-ai/)
## Multi-Node Training
We're working on a guide for multi-node training. Stay tuned! 🚀
TRL supports the DPO Trainer for training language models from preference data, as described in the paper [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290) by [Rafael Rafailov](https://huggingface.co/rmrafailov), Archit Sharma, Eric Mitchell, [Stefano Ermon](https://huggingface.co/ermonste), [Christopher D. Manning](https://huggingface.co/manning), [Chelsea Finn](https://huggingface.co/cbfinn).
The abstract from the paper is the following:
> While large-scale unsupervised language models (LMs) learn broad world knowledge and some reasoning skills, achieving precise control of their behavior is difficult due to the completely unsupervised nature of their training. Existing methods for gaining such steerability collect human labels of the relative quality of model generations and fine-tune the unsupervised LM to align with these preferences, often with reinforcement learning from human feedback (RLHF). However, RLHF is a complex and often unstable procedure, first fitting a reward model that reflects the human preferences, and then fine-tuning the large unsupervised LM using reinforcement learning to maximize this estimated reward without drifting too far from the original model. In this paper we introduce a new parameterization of the reward model in RLHF that enables extraction of the corresponding optimal policy in closed form, allowing us to solve the standard RLHF problem with only a simple classification loss. The resulting algorithm, which we call Direct Preference Optimization (DPO), is stable, performant, and computationally lightweight, eliminating the need for sampling from the LM during fine-tuning or performing significant hyperparameter tuning. Our experiments show that DPO can fine-tune LMs to align with human preferences as well as or better than existing methods. Notably, fine-tuning with DPO exceeds PPO-based RLHF in ability to control sentiment of generations, and matches or improves response quality in summarization and single-turn dialogue while being substantially simpler to implement and train.
The first step is to train an SFT model, to ensure the data we train on is in-distribution for the DPO algorithm.
Then, fine-tuning a language model via DPO consists of two steps and is easier than [PPO](ppo_trainer):
1. **Data collection**: Gather a [preference dataset](dataset_formats#preference) with positive and negative selected pairs of generation, given a prompt.
2. **Optimization**: Maximize the log-likelihood of the DPO loss directly.
This process is illustrated in the sketch below (from [Figure 1 of the DPO paper](https://huggingface.co/papers/2305.18290)):
Read more about DPO algorithm in the [original paper](https://huggingface.co/papers/2305.18290).
## Quick start
This example demonstrates how to train a model using the DPO method. We use the [Qwen 0.5B model](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) as the base model. We use the preference data from the [UltraFeedback dataset](https://huggingface.co/datasets/openbmb/UltraFeedback). You can view the data in the dataset here:
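A minimal training sketch (it assumes the `trl-lib/ultrafeedback_binarized` preference version of the dataset, which is also used by the example script below):

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

training_args = DPOConfig(output_dir="Qwen2-0.5B-DPO")
trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
```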
Distributed across 8 GPUs, the training takes approximately 3 minutes. You can verify the training progress by checking the reward graph. An increasing trend in the reward margin indicates that the model is improving and generating better responses over time.
To see how the [trained model](https://huggingface.co/trl-lib/Qwen2-0.5B-DPO) performs, you can use the [Transformers Chat CLI](https://huggingface.co/docs/transformers/quicktour#chat-with-text-generation-models).
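Assuming the chat CLI is available in your `transformers` installation, this looks like:

```bash
transformers chat trl-lib/Qwen2-0.5B-DPO
```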
<pre><code>Huggingface is a platform that allows users to access a variety of open-source machine learning resources such as pre-trained models and datasets for the development of machine learning models and applications. It provides a repository of over 300,000 pre-trained models in a variety of languages, enabling users to explore and utilize the latest techniques and technologies in the field of machine learning.
</code></pre>
## Expected dataset type
DPO requires a [preference dataset](dataset_formats#preference). The [`DPOTrainer`] supports both [conversational](dataset_formats#conversational) and [standard](dataset_formats#standard) dataset formats. When provided with a conversational dataset, the trainer will automatically apply the chat template to the dataset.
Although the [`DPOTrainer`] supports both explicit and implicit prompts, we recommend using explicit prompts. If provided with an implicit prompt dataset, the trainer will automatically extract the prompt from the `"chosen"` and `"rejected"` columns. For more information, refer to the [preference style](dataset_formats#preference) section.
### Special considerations for vision-language models
The [`DPOTrainer`] supports fine-tuning vision-language models (VLMs). For these models, a vision dataset is required. To learn more about the specific format for vision datasets, refer to the [Vision dataset format](dataset_formats#vision-datasets) section.
Additionally, unlike standard text-based models where a `tokenizer` is used, for VLMs, you should replace the `tokenizer` with a `processor`.
```diff
- model = AutoModelForCausalLM.from_pretrained(model_id)
+ model = AutoModelForImageTextToText.from_pretrained(model_id)

- tokenizer = AutoTokenizer.from_pretrained(model_id)
+ processor = AutoProcessor.from_pretrained(model_id)
```
For a complete example of fine-tuning a vision-language model, refer to the script in [`examples/scripts/dpo_vlm.py`](https://github.com/huggingface/trl/blob/main/examples/scripts/dpo_vlm.py).
## Example script
We provide an example script to train a model using the DPO method. The script is available in [`trl/scripts/dpo.py`](https://github.com/huggingface/trl/blob/main/trl/scripts/dpo.py)
To test the DPO script with the [Qwen2 0.5B model](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the [UltraFeedback dataset](https://huggingface.co/datasets/trl-lib/ultrafeedback_binarized), run the following command:
```bash
accelerate launch trl/scripts/dpo.py \
--model_name_or_path Qwen/Qwen2-0.5B-Instruct \
--dataset_name trl-lib/ultrafeedback_binarized \
--num_train_epochs 1 \
--output_dir Qwen2-0.5B-DPO
```
## Logged metrics
While training and evaluating, we record the following reward metrics:
- `rewards/chosen`: the mean difference between the log probabilities of the policy model and the reference model for the chosen responses scaled by beta
- `rewards/rejected`: the mean difference between the log probabilities of the policy model and the reference model for the rejected responses scaled by beta
- `rewards/accuracies`: mean of how often the chosen rewards are > than the corresponding rejected rewards
- `rewards/margins`: the mean difference between the chosen and corresponding rejected rewards
## Loss functions
The DPO algorithm supports several loss functions. The loss function can be set using the `loss_type` parameter in the [`DPOConfig`]. The following loss functions are supported:
| `loss_type=` | Description |
| --- | --- |
| `"sigmoid"` (default) | Given the preference data, we can fit a binary classifier according to the Bradley-Terry model and in fact the [DPO](https://huggingface.co/papers/2305.18290) authors propose the sigmoid loss on the normalized likelihood via the `logsigmoid` to fit a logistic regression. |
| `"hinge"` | The [RSO](https://huggingface.co/papers/2309.06657) authors propose to use a hinge loss on the normalized likelihood from the [SLiC](https://huggingface.co/papers/2305.10425) paper. In this case, the `beta` is the reciprocal of the margin. |
| `"ipo"` | The [IPO](https://huggingface.co/papers/2310.12036) authors provide a deeper theoretical understanding of the DPO algorithms and identify an issue with overfitting and propose an alternative loss. In this case, the `beta` is the reciprocal of the gap between the log-likelihood ratios of the chosen vs the rejected completion pair and thus the smaller the `beta` the larger this gaps is. As per the paper the loss is averaged over log-likelihoods of the completion (unlike DPO which is summed only). |
| `"exo_pair"` | The [EXO](https://huggingface.co/papers/2402.00856) authors propose to minimize the reverse KL instead of the negative log-sigmoid loss of DPO which corresponds to forward KL. Setting non-zero `label_smoothing` (default `1e-3`) leads to a simplified version of EXO on pair-wise preferences (see Eqn. (16) of the [EXO paper](https://huggingface.co/papers/2402.00856)). The full version of EXO uses `K>2` completions generated by the SFT policy, which becomes an unbiased estimator of the PPO objective (up to a constant) when `K` is sufficiently large. |
| `"nca_pair"` | The [NCA](https://huggingface.co/papers/2402.05369) authors shows that NCA optimizes the absolute likelihood for each response rather than the relative likelihood. |
| `"robust"` | The [Robust DPO](https://huggingface.co/papers/2403.00409) authors propose an unbiased estimate of the DPO loss that is robust to preference noise in the data. Like in cDPO, it assumes that the preference labels are noisy with some probability. In this approach, the `label_smoothing` parameter in the [`DPOConfig`] is used to model the probability of existing label noise. To apply this conservative loss, set `label_smoothing` to a value greater than 0.0 (between 0.0 and 0.5; the default is 0.0) |
| `"bco_pair"` | The [BCO](https://huggingface.co/papers/2404.04656) authors train a binary classifier whose logit serves as a reward so that the classifier maps {prompt, chosen completion} pairs to 1 and {prompt, rejected completion} pairs to 0. For unpaired data, we recommend the dedicated [`BCOTrainer`]. |
| `"sppo_hard"` | The [SPPO](https://huggingface.co/papers/2405.00675) authors claim that SPPO is capable of solving the Nash equilibrium iteratively by pushing the chosen rewards to be as large as 1/2 and the rejected rewards to be as small as -1/2 and can alleviate data sparsity issues. The implementation approximates this algorithm by employing hard label probabilities, assigning 1 to the winner and 0 to the loser. |
| `"aot"` or `loss_type="aot_pair"` | The [AOT](https://huggingface.co/papers/2406.05882) authors propose to use Distributional Preference Alignment Via Optimal Transport. Traditionally, the alignment algorithms use paired preferences at a sample level, which does not ensure alignment on the distributional level. AOT, on the other hand, can align LLMs on paired or unpaired preference data by making the reward distribution of the positive samples stochastically dominant in the first order on the distribution of negative samples. Specifically, `loss_type="aot"` is appropriate for paired datasets, where each prompt has both chosen and rejected responses; `loss_type="aot_pair"` is for unpaired datasets. In a nutshell, `loss_type="aot"` ensures that the log-likelihood ratio of chosen to rejected of the aligned model has higher quantiles than that ratio for the reference model. `loss_type="aot_pair"` ensures that the chosen reward is higher on all quantiles than the rejected reward. Note that in both cases quantiles are obtained via sorting. To fully leverage the advantages of the AOT algorithm, it is important to maximize the per-GPU batch size. |
| `"apo_zero"` or `loss_type="apo_down"` | The [APO](https://huggingface.co/papers/2408.06266) method introduces an "anchored" version of the alignment objective. There are two variants: `apo_zero` and `apo_down`. The `apo_zero` loss increases the likelihood of winning outputs while decreasing the likelihood of losing outputs, making it suitable when the model is less performant than the winning outputs. On the other hand, `apo_down` decreases the likelihood of both winning and losing outputs, but with a stronger emphasis on reducing the likelihood of losing outputs. This variant is more effective when the model is better than the winning outputs. |
| `"discopop"` | The [DiscoPOP](https://huggingface.co/papers/2406.08414) paper uses LLMs to discover more efficient offline preference optimization losses. In the paper the proposed DiscoPOP loss (which is a log-ratio modulated loss) outperformed other optimization losses on different tasks (IMDb positive text generation, Reddit TLDR summarization, and Alpaca Eval 2.0). |
| `"sft"` | SFT (Supervised Fine-Tuning) loss is the negative log likelihood loss, used to train the model to generate preferred responses. |
### Multi-loss combinations
The DPO trainer supports combining multiple loss functions with different weights, enabling more sophisticated optimization strategies. This is particularly useful for implementing algorithms like MPO (Mixed Preference Optimization). MPO is a training approach that combines multiple optimization objectives, as described in the paper [Enhancing the Reasoning Ability of Multimodal Large Language Models via Mixed Preference Optimization](https://huggingface.co/papers/2411.10442).
To combine multiple losses, specify the loss types and corresponding weights as lists:
```python
# MPO: Combines DPO (sigmoid) for preference and BCO (bco_pair) for quality
training_args = DPOConfig(
    loss_type=["sigmoid", "bco_pair", "sft"],  # Loss types to combine
    loss_weights=[0.8, 0.2, 1.0]  # Corresponding weights, as used in the MPO paper
)
```
If `loss_weights` is not provided, all loss types will have equal weights (1.0 by default).
### Label smoothing
The [cDPO](https://ericmitchell.ai/cdpo.pdf) is a tweak on the DPO loss where we assume that the preference labels are noisy with some probability. In this approach, the `label_smoothing` parameter in the [`DPOConfig`] is used to model the probability of existing label noise. To apply this conservative loss, set `label_smoothing` to a value greater than 0.0 (between 0.0 and 0.5; the default is 0.0).
### Syncing the reference model
The [TR-DPO](https://huggingface.co/papers/2404.09656) paper suggests syncing the reference model weights after every `ref_model_sync_steps` steps of SGD with weight `ref_model_mixup_alpha` during DPO training. To toggle this callback use the `sync_ref_model=True` in the [`DPOConfig`].
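A minimal sketch of enabling this in the config (the alpha and step values below are illustrative, not recommendations):

```python
from trl import DPOConfig

training_args = DPOConfig(
    sync_ref_model=True,
    ref_model_mixup_alpha=0.6,  # illustrative mixing weight
    ref_model_sync_steps=512,   # illustrative sync frequency
)
```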
### RPO loss
The [RPO](https://huggingface.co/papers/2404.19733) paper implements an iterative preference tuning algorithm using a loss related to the RPO loss in this [paper](https://huggingface.co/papers/2405.16436) that essentially consists of a weighted SFT loss on the chosen preferences together with the DPO loss. To use this loss, set the `rpo_alpha` in the [`DPOConfig`] to an appropriate value. The paper suggests setting this weight to `1.0`.
### WPO loss
The [WPO](https://huggingface.co/papers/2406.11827) paper adapts off-policy data to resemble on-policy data more closely by reweighting preference pairs according to their probability under the current policy. To use this method, set the `use_weighting` flag to `True` in the [`DPOConfig`].
### LD-DPO loss
The [LD-DPO](https://huggingface.co/papers/2409.06411) paper decomposes the portion of the response that exceeds the desired length into two components — human-like preferences and verbosity preference — based on a mixing coefficient \\( \alpha \\). To use this method, set the `ld_alpha` in the [`DPOConfig`] to an appropriate value. The paper suggests setting this value between `0.0` and `1.0`.
### For Mixture of Experts Models: Enabling the auxiliary loss
MOEs are the most efficient if the load is about equally distributed between experts.
To ensure that we train MOEs similarly during preference-tuning, it is beneficial to add the auxiliary loss from the load balancer to the final loss.
This option is enabled by setting `output_router_logits=True` in the model config (e.g. [`~transformers.MixtralConfig`]).
To scale how much the auxiliary loss contributes to the total loss, use the hyperparameter `router_aux_loss_coef=...` (default: `0.001`) in the model config.
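As a sketch, assuming a Mixtral checkpoint, both options can be passed straight through `from_pretrained`, which forwards them to the model config:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-v0.1",  # assumed MoE checkpoint
    output_router_logits=True,      # add the load-balancing auxiliary loss to the total loss
    router_aux_loss_coef=0.001,     # scale of the auxiliary loss contribution
)
```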
## Accelerate DPO fine-tuning using `unsloth`
You can further accelerate QLoRA / LoRA (2x faster, 60% less memory) using the [`unsloth`](https://github.com/unslothai/unsloth) library that is fully compatible with `SFTTrainer`. Currently `unsloth` supports only Llama (Yi, TinyLlama, Qwen, Deepseek etc) and Mistral architectures. Some benchmarks for DPO are listed below:
First install `unsloth` according to the [official documentation](https://github.com/unslothai/unsloth). Once installed, you can incorporate unsloth into your workflow in a very simple manner; instead of loading `AutoModelForCausalLM`, you just need to load a `FastLanguageModel` as follows:
```diff
from datasets import load_dataset
from trl import DPOConfig, DPOTrainer
- from transformers import AutoModelForCausalLM, AutoTokenizer
+ from unsloth import FastLanguageModel
- model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
- tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
+ model, tokenizer = FastLanguageModel.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
```
The saved model is fully compatible with Hugging Face's transformers library. Learn more about unsloth in their [official repository](https://github.com/unslothai/unsloth).
## Reference model considerations with PEFT
You have three main options (plus several variants) for how the reference model works when using PEFT, assuming the model that you would like to further enhance with DPO was tuned using (Q)LoRA.
1. Simply create two instances of the model, each loading your adapter - works fine but is very inefficient.
2. Merge the adapter into the base model, create another adapter on top, then leave the `ref_model` param null, in which case DPOTrainer will unload the adapter for reference inference - efficient, but has potential downsides discussed below.
3. Load the adapter twice with different names, then use `set_adapter` during training to swap between the adapter being DPO'd and the reference adapter - slightly less efficient compared to 2 (~adapter size VRAM overhead), but avoids the pitfalls.
### Downsides to merging QLoRA before DPO (approach 2)
As suggested by [Benjamin Marie](https://medium.com/@bnjmn_marie/dont-merge-your-lora-adapter-into-a-4-bit-llm-65b6da287997), the best option for merging QLoRA adapters is to first dequantize the base model, then merge the adapter. Something similar to [this script](https://github.com/jondurbin/qlora/blob/main/qmerge.py).
However, after using this approach, you will have an unquantized base model. Therefore, to use QLoRA for DPO, you will need to re-quantize the merged model or use the unquantized merge (resulting in higher memory demand).
### Using option 3 - load the adapter twice
To avoid the downsides with option 2, you can load your fine-tuned adapter into the model twice, with different names, and set the model/ref adapter names in [`DPOTrainer`].
For example:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from trl import DPOConfig

# Load the base model.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/mixtral-8x7b-v0.1",
    quantization_config=bnb_config,
    attn_implementation="flash_attention_2",
    dtype=torch.bfloat16,
    device_map="auto",
)

# Load the adapter.
model = PeftModel.from_pretrained(
    model,
    "/path/to/peft",
    is_trainable=True,
    adapter_name="train",
)
# Load the adapter a second time, with a different name, which will be our reference model.
model.load_adapter("/path/to/peft", adapter_name="reference")

# In the DPO configuration, point the trainer at the two adapters.
training_args = DPOConfig(model_adapter_name="train", ref_adapter_name="reference")
```
This directory contains a collection of examples that demonstrate how to use the TRL library for various applications. We provide both **scripts** for advanced use cases and **notebooks** for an easy start and interactive experimentation.
The notebooks are self-contained and can run on **free Colab**, while the scripts can run on **single GPU, multi-GPU, or DeepSpeed** setups.
**Getting Started**
Install TRL and additional dependencies as follows:
```bash
pip install --upgrade trl[quantization]
```
Check for additional optional dependencies [here](https://github.com/huggingface/trl/blob/main/pyproject.toml).
For scripts, you will also need an 🤗 Accelerate config (recommended for multi-gpu settings):
```bash
accelerate config # will prompt you to define the training configuration
```
This allows you to run scripts with `accelerate launch` in single or multi-GPU settings.
## Notebooks
These notebooks are easier to run and are designed for quick experimentation with TRL. The list of notebooks can be found in the [`trl/examples/notebooks/`](https://github.com/huggingface/trl/tree/main/examples/notebooks/) directory.
| Notebook | Description | Open in Colab |
|----------|-------------|---------------|
| [`sft_trl_lora_qlora.ipynb`](https://github.com/huggingface/trl/tree/main/examples/notebooks/sft_trl_lora_qlora.ipynb) | Supervised Fine-Tuning (SFT) using QLoRA on free Colab | [Open in Colab](https://colab.research.google.com/github/huggingface/trl/blob/main/examples/notebooks/sft_trl_lora_qlora.ipynb) |
| [`sft_qwen_vl.ipynb`](https://github.com/huggingface/trl/tree/main/examples/notebooks/sft_qwen_vl.ipynb) | Supervised Fine-Tuning (SFT) Qwen3-VL with QLoRA using TRL on free Colab | [Open in Colab](https://colab.research.google.com/github/huggingface/trl/blob/main/examples/notebooks/sft_qwen_vl.ipynb) |
| [`grpo_qwen3_vl.ipynb`](https://github.com/huggingface/trl/tree/main/examples/notebooks/grpo_qwen3_vl.ipynb) | GRPO Qwen3-VL with QLoRA using TRL on free Colab | [Open in Colab](https://colab.research.google.com/github/huggingface/trl/blob/main/examples/notebooks/grpo_qwen3_vl.ipynb) |
## Scripts
Scripts are maintained in the [`trl/scripts`](https://github.com/huggingface/trl/blob/main/trl/scripts) and [`examples/scripts`](https://github.com/huggingface/trl/blob/main/examples/scripts) directories. They show how to use different trainers such as `SFTTrainer`, `PPOTrainer`, `DPOTrainer`, `GRPOTrainer`, and more.
| File | Description |
| --- | --- |
| [`examples/scripts/bco.py`](https://github.com/huggingface/trl/blob/main/examples/scripts/bco.py) | This script shows how to use the [`KTOTrainer`] with the BCO loss to fine-tune a model to increase instruction-following, truthfulness, honesty, and helpfulness using the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) dataset. |
| [`examples/scripts/cpo.py`](https://github.com/huggingface/trl/blob/main/examples/scripts/cpo.py) | This script shows how to use the [`CPOTrainer`] to fine-tune a model to increase helpfulness and harmlessness using the [Anthropic/hh-rlhf](https://huggingface.co/datasets/Anthropic/hh-rlhf) dataset. |
| [`trl/scripts/dpo.py`](https://github.com/huggingface/trl/blob/main/trl/scripts/dpo.py) | This script shows how to use the [`DPOTrainer`] to fine-tune a model. |
| [`examples/scripts/dpo_vlm.py`](https://github.com/huggingface/trl/blob/main/examples/scripts/dpo_vlm.py) | This script shows how to use the [`DPOTrainer`] to fine-tune a Vision Language Model to reduce hallucinations using the [openbmb/RLAIF-V-Dataset](https://huggingface.co/datasets/openbmb/RLAIF-V-Dataset) dataset. |
| [`examples/scripts/evals/judge_tldr.py`](https://github.com/huggingface/trl/blob/main/examples/scripts/evals/judge_tldr.py) | This script shows how to use [`HfPairwiseJudge`] or [`OpenAIPairwiseJudge`] to judge model generations. |
| [`examples/scripts/gkd.py`](https://github.com/huggingface/trl/blob/main/examples/scripts/gkd.py) | This script shows how to use the [`GKDTrainer`] to fine-tune a model. |
| [`trl/scripts/grpo.py`](https://github.com/huggingface/trl/blob/main/trl/scripts/grpo.py) | This script shows how to use the [`GRPOTrainer`] to fine-tune a model. |
| [`examples/scripts/grpo_vlm.py`](https://github.com/huggingface/trl/blob/main/examples/scripts/grpo_vlm.py) | This script shows how to use the [`GRPOTrainer`] to fine-tune a multimodal model for reasoning using the [lmms-lab/multimodal-open-r1-8k-verified](https://huggingface.co/datasets/lmms-lab/multimodal-open-r1-8k-verified) dataset. |
| [`examples/scripts/gspo.py`](https://github.com/huggingface/trl/blob/main/examples/scripts/gspo.py) | This script shows how to use GSPO via the [`GRPOTrainer`] to fine-tune model for reasoning using the [AI-MO/NuminaMath-TIR](https://huggingface.co/datasets/AI-MO/NuminaMath-TIR) dataset. |
| [`examples/scripts/gspo_vlm.py`](https://github.com/huggingface/trl/blob/main/examples/scripts/gspo_vlm.py) | This script shows how to use GSPO via the [`GRPOTrainer`] to fine-tune a multimodal model for reasoning using the [lmms-lab/multimodal-open-r1-8k-verified](https://huggingface.co/datasets/lmms-lab/multimodal-open-r1-8k-verified) dataset. |
| [`examples/scripts/kto.py`](https://github.com/huggingface/trl/blob/main/examples/scripts/kto.py) | This script shows how to use the [`KTOTrainer`] to fine-tune a model. |
| [`examples/scripts/mpo_vlm.py`](https://github.com/huggingface/trl/blob/main/examples/scripts/mpo_vlm.py) | This script shows how to use MPO via the [`DPOTrainer`] to align a model based on preferences using the [HuggingFaceH4/rlaif-v_formatted](https://huggingface.co/datasets/HuggingFaceH4/rlaif-v_formatted) dataset and a set of loss types with corresponding weights. |
| [`examples/scripts/nash_md.py`](https://github.com/huggingface/trl/blob/main/examples/scripts/nash_md.py) | This script shows how to use the [`NashMDTrainer`] to fine-tune a model. |
| [`examples/scripts/online_dpo.py`](https://github.com/huggingface/trl/blob/main/examples/scripts/online_dpo.py) | This script shows how to use the [`OnlineDPOTrainer`] to fine-tune a model. |
| [`examples/scripts/online_dpo_vlm.py`](https://github.com/huggingface/trl/blob/main/examples/scripts/online_dpo_vlm.py) | This script shows how to use the [`OnlineDPOTrainer`] to fine-tune a Vision Language Model. |
| [`examples/scripts/orpo.py`](https://github.com/huggingface/trl/blob/main/examples/scripts/orpo.py) | This script shows how to use the [`ORPOTrainer`] to fine-tune a model to increase helpfulness and harmlessness using the [Anthropic/hh-rlhf](https://huggingface.co/datasets/Anthropic/hh-rlhf) dataset. |
| [`examples/scripts/ppo/ppo.py`](https://github.com/huggingface/trl/blob/main/examples/scripts/ppo/ppo.py) | This script shows how to use the [`PPOTrainer`] to fine-tune a model to improve its ability to continue text with positive sentiment or physically descriptive language. |
| [`examples/scripts/ppo/ppo_tldr.py`](https://github.com/huggingface/trl/blob/main/examples/scripts/ppo/ppo_tldr.py) | This script shows how to use the [`PPOTrainer`] to fine-tune a model to improve its ability to generate TL;DR summaries. |
| [`examples/scripts/prm.py`](https://github.com/huggingface/trl/blob/main/examples/scripts/prm.py) | This script shows how to use the [`PRMTrainer`] to fine-tune a Process-supervised Reward Model (PRM). |
| [`examples/scripts/reward_modeling.py`](https://github.com/huggingface/trl/blob/main/examples/scripts/reward_modeling.py) | This script shows how to use the [`RewardTrainer`] to train an Outcome Reward Model (ORM) on your own dataset. |
| [`examples/scripts/rloo.py`](https://github.com/huggingface/trl/blob/main/examples/scripts/rloo.py) | This script shows how to use the [`RLOOTrainer`] to fine-tune a model to improve its ability to solve math questions. |
| [`trl/scripts/sft.py`](https://github.com/huggingface/trl/blob/main/trl/scripts/sft.py) | This script shows how to use the [`SFTTrainer`] to fine-tune a model. |
| [`examples/scripts/sft_gemma3.py`](https://github.com/huggingface/trl/blob/main/examples/scripts/sft_gemma3.py) | This script shows how to use the [`SFTTrainer`] to fine-tune a Gemma 3 model. |
| [`examples/scripts/sft_video_llm.py`](https://github.com/huggingface/trl/blob/main/examples/scripts/sft_video_llm.py) | This script shows how to use the [`SFTTrainer`] to fine-tune a Video Language Model. |
| [`examples/scripts/sft_vlm.py`](https://github.com/huggingface/trl/blob/main/examples/scripts/sft_vlm.py) | This script shows how to use the [`SFTTrainer`] to fine-tune a Vision Language Model in a chat setting. The script has only been tested with [LLaVA 1.5](https://huggingface.co/llava-hf/llava-1.5-7b-hf), [LLaVA 1.6](https://huggingface.co/llava-hf/llava-v1.6-mistral-7b-hf), and [Llama-3.2-11B-Vision-Instruct](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct) models, so users may see unexpected behaviour in other model architectures. |
| [`examples/scripts/sft_vlm_gemma3.py`](https://github.com/huggingface/trl/blob/main/examples/scripts/sft_vlm_gemma3.py) | This script shows how to use the [`SFTTrainer`] to fine-tune a Gemma 3 model on vision to text tasks. |
| [`examples/scripts/sft_vlm_smol_vlm.py`](https://github.com/huggingface/trl/blob/main/examples/scripts/sft_vlm_smol_vlm.py) | This script shows how to use the [`SFTTrainer`] to fine-tune a SmolVLM model. |
| [`examples/scripts/xpo.py`](https://github.com/huggingface/trl/blob/main/examples/scripts/xpo.py) | This script shows how to use the [`XPOTrainer`] to fine-tune a model. |
## Distributed Training (for scripts)
You can run scripts on multiple GPUs with 🤗 Accelerate:
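For example, a typical multi-GPU launch looks like the following (the script, dataset, and argument values are illustrative; any maintained example script works the same way):

```bash
accelerate launch --num_processes 8 trl/scripts/sft.py \
    --model_name_or_path Qwen/Qwen3-0.6B \
    --dataset_name trl-lib/Capybara \
    --output_dir Qwen3-0.6B-SFT
```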
This directory contains a minimal, clearly separated space for fast iteration on new ideas.
> [!WARNING]
> **Stability contract:** Anything under `trl.experimental` may change or be removed in *any* release (including patch versions) without prior deprecation. Do not rely on these APIs for production workloads.
## Promotion Path (Simple)
1. **Prototype outside the main repo:** Start development in your own fork or a separate repository to iterate quickly.
2. **Experimental inclusion:** Once it’s ready for early users, move the idea into `trl.experimental.<feature>`.
3. **Improve:** Add tests, a short doc/example, and demonstrate the usage.
4. **Promote:** Once the API proves stable and there is clear interest or adoption from the community, move it into `trl.<feature>` (stable module).
## FAQ
**Why not just use branches?**
Because branches are not shipped to users; experimental code inside the package lets early adopters try things and give feedback.
**Can these APIs change or vanish without warning?**
Yes. Anything inside `trl.experimental` can change or disappear in *any* release.
**Should I use this in production?**
Only if you are fine with updating your code quickly when things change.
**Will maintainers promptly fix issues in `trl.experimental`?**
Not necessarily. The experimental module is a playground for new ideas, and maintainers may not prioritize bug fixes or feature requests there. Issues may remain unresolved until (or unless) the feature graduates to the stable API.
This feature implements the GFPO algorithm to enforce concise reasoning in the model's output generation, as proposed in the paper [Sample More to Think Less: Group Filtered Policy Optimization for Concise Reasoning](https://huggingface.co/papers/2508.09726).
## Usage
To activate GFPO in [`GFPOTrainer`]:
- set `num_remains_in_group` in [`GFPOConfig`]
- define a group filter function and pass it as `group_filter_func` to [`GFPOTrainer`]. `group_filter_func` scores the `num_generations` completions in each group, and the trainer keeps the top `num_remains_in_group` completions as the new group. The model is then trained only on this filtered group, as shown in the sketch below.
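A minimal sketch of the setup. Note that the import path and the exact `group_filter_func` signature shown here are assumptions: the filter is assumed to receive the groups' completions and rewards and to return one score per completion.

```python
from trl.experimental.gfpo import GFPOConfig, GFPOTrainer  # assumed import path

# Hypothetical filter: within each group, prefer high-reward, short completions.
def group_filter_func(group_completions, group_rewards, **kwargs):
    return [
        [reward - 0.001 * len(completion) for completion, reward in zip(completions, rewards)]
        for completions, rewards in zip(group_completions, group_rewards)
    ]

training_args = GFPOConfig(
    num_generations=16,      # sample more ...
    num_remains_in_group=4,  # ... but train on the 4 best-scoring completions per group
)
trainer = GFPOTrainer(
    model="Qwen/Qwen3-0.6B",  # placeholder model
    reward_funcs=...,         # your reward function(s), as in GRPO
    args=training_args,
    train_dataset=...,        # prompts dataset
    group_filter_func=group_filter_func,
)
trainer.train()
```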
Generalized Knowledge Distillation (GKD) was proposed in [On-Policy Distillation of Language Models: Learning from Self-Generated Mistakes](https://huggingface.co/papers/2306.13649) by Rishabh Agarwal, Nino Vieillard, Yongchao Zhou, Piotr Stanczyk, Sabela Ramos, Matthieu Geist, and Olivier Bachem.
The abstract from the paper is the following:
> Knowledge distillation (KD) is widely used for compressing a teacher model to reduce its inference cost and memory footprint, by training a smaller student model. However, current KD methods for auto-regressive sequence models suffer from distribution mismatch between output sequences seen during training and those generated by the student during inference. To address this issue, we introduce Generalized Knowledge Distillation (GKD). Instead of solely relying on a fixed set of output sequences, GKD trains the student on its self-generated output sequences by leveraging feedback from the teacher on such sequences. Unlike supervised KD approaches, GKD also offers the flexibility to employ alternative loss functions between the student and teacher, which can be useful when the student lacks the expressivity to mimic the teacher's distribution. Furthermore, GKD facilitates the seamless integration of distillation with RL fine-tuning (RLHF). We demonstrate the efficacy of GKD for distilling auto-regressive language models on summarization, translation, and arithmetic reasoning tasks, and task-agnostic distillation for instruction-tuning.
The key aspects of GKD are:
1. It addresses the train-inference distribution mismatch in auto-regressive sequence models by training the student model on its self-generated output sequences.
2. GKD allows flexibility in choosing different divergence measures between student and teacher models via the generalized Jensen-Shannon Divergence (JSD), which can be useful when the student lacks the capacity to fully mimic the teacher.
This post-training method was contributed by [Kashif Rasul](https://huggingface.co/kashif) and [Lewis Tunstall](https://huggingface.co/lewtun).
## Usage tips
The [`GKDTrainer`] is a wrapper around the [`SFTTrainer`] class that takes in a teacher model argument. It needs three parameters to be set via the [`GKDConfig`] namely:
* `lmbda`: controls the student data fraction, i.e., the proportion of on-policy student-generated outputs. When `lmbda=0.0`, the loss reduces to supervised JSD, where the student is trained with the token-level probabilities of the teacher. When `lmbda=1.0`, the loss reduces to on-policy JSD, where the student generates output sequences and receives token-specific feedback on these sequences from the teacher. For values in between [0, 1], the choice is made at random for each batch, with probability `lmbda` of using on-policy data.
* `seq_kd`: controls whether to perform Sequence-Level KD (which can be viewed as supervised FT on teacher-generated outputs). When `seq_kd=True` and `lmbda=0.0`, the loss reduces to supervised JSD, where the teacher generates output sequences and the student receives token-specific feedback on these sequences from the teacher.
* `beta`: controls the interpolation in the generalized Jensen-Shannon Divergence. When `beta=0.0` the loss approximates forward KL divergence, while for `beta=1.0` the loss approximates reverse KL divergence. For values in between [0, 1] it interpolates between the two.
The authors find that on-policy data (high `lmbda`) performs better and the optimal `beta` varied depending on the task and evaluation method.
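For illustration, a minimal configuration sketch is shown below. The student and teacher names are placeholders, and the hyperparameter values are only examples; the dataset is a conversational dataset as used for SFT.

```python
from trl import GKDConfig, GKDTrainer

training_args = GKDConfig(
    output_dir="gkd-model",
    lmbda=0.5,   # roughly half of the batches use on-policy student generations
    beta=0.5,    # interpolate between forward and reverse KL
    seq_kd=False,
)
trainer = GKDTrainer(
    model="Qwen/Qwen3-0.6B",                     # student (placeholder)
    teacher_model="Qwen/Qwen2.5-1.5B-Instruct",  # teacher (placeholder)
    args=training_args,
    train_dataset=...,  # conversational dataset, as for SFT
)
trainer.train()
```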
> [!WARNING]
> Make sure that `attn_implementation="flash_attention_2"` when training [Gemma models](https://huggingface.co/models?other=gemma2). Otherwise you will encounter NaNs in the logits due to the [soft capping technique](https://huggingface.co/blog/gemma2#soft-capping-and-attention-implementations) adopted by this architecture.
General Online Logit Distillation (GOLD) is an extension of Universal Logit Distillation (ULD) that supports
student/teacher pairs with different tokenizers. It aligns the textual spans produced by both tokenizers and merges the
associated logits so no completion tokens are dropped. This enables cross-tokenizer knowledge distillation, including
mixed model families (for example, LLaMA students with Qwen teachers).
Key capabilities:
1. **Cross-tokenizer alignment** – GOLD incrementally decodes the student and teacher tokens, groups passages with the same visible text, and merges probabilities inside each group. This guarantees loss terms are computed over the full completion even when token boundaries differ.
2. **Hybrid ULD loss** – when `uld_use_hybrid_loss` is enabled, GOLD compares exact vocabulary matches directly and falls back to the original sorted-probability ULD loss for unmatched tokens. This improves stability for students whose vocabularies only partially overlap with the teacher.
3. **Seamless integration with GKD** – GOLD inherits the on-policy vs. off-policy scheduling from the [`GKDTrainer`](./gkd_trainer.md), so you can combine sequence-level KD, generalized JSD, and cross-tokenizer distillation in a single training run.
> [!NOTE]
> GOLD is currently part of the `trl.experimental` namespace. APIs may change without notice while the feature is iterated on.
## Usage tips
The [`GOLDTrainer`] subclasses [`SFTTrainer`] and accepts the same datasets as other TRL trainers (lists of ChatML style
messages). Important configuration flags on [`GOLDConfig`] include:
* `use_uld_loss` – toggles Universal Logit Distillation. Set this to `True` for cross-tokenizer setups.
* `teacher_tokenizer_name_or_path` – required when `use_uld_loss=True`; GOLD uses the teacher tokenizer to align tokens.
* `uld_use_hybrid_loss`, `uld_hybrid_matched_weight`, `uld_hybrid_unmatched_weight` – enable and weight the hybrid matched/unmatched loss.
* `beta`, `lmbda`, `seq_kd` – inherited from `GKDConfig`, controlling the generalized JSD interpolation and the on-policy data fraction.
For quick-start workflows you can rely on string identifiers as shown above—the trainer will load the model and tokenizer for you. Explicitly instantiating `AutoModelForCausalLM`, `AutoTokenizer`, or populating `GOLDConfig` is recommended only for advanced use cases where you need fine-grained control over initialization.
A more explicit setup might look like this when you need to customise model loading, tokenizer settings, or training arguments:
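The sketch below is only indicative: the import path (`trl.experimental.gold`), the `teacher_model` argument (assumed to follow the [`GKDTrainer`] convention), and the model names are assumptions to be adapted to your setup.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl.experimental.gold import GOLDConfig, GOLDTrainer  # assumed import path

student_name = "HuggingFaceTB/SmolLM2-135M-Instruct"  # placeholder student
teacher_name = "Qwen/Qwen2.5-0.5B-Instruct"           # placeholder teacher with a different tokenizer

model = AutoModelForCausalLM.from_pretrained(student_name)
tokenizer = AutoTokenizer.from_pretrained(student_name)

training_args = GOLDConfig(
    output_dir="gold-student",
    use_uld_loss=True,                            # required for cross-tokenizer distillation
    teacher_tokenizer_name_or_path=teacher_name,  # required when use_uld_loss=True
    uld_use_hybrid_loss=True,
    lmbda=0.5,
    beta=0.5,
)
trainer = GOLDTrainer(
    model=model,
    teacher_model=teacher_name,   # assumed trainer argument, as in GKDTrainer
    args=training_args,
    processing_class=tokenizer,
    train_dataset=...,            # ChatML-style messages, as for other TRL trainers
)
trainer.train()
```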
TRL supports the GRPO Trainer for training language models, as described in the paper [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300) by [Zhihong Shao](https://huggingface.co/syhia), [Peiyi Wang](https://huggingface.co/peiyiwang89), [Qihao Zhu](https://huggingface.co/zqh11), Runxin Xu, [Junxiao Song](https://huggingface.co/haha-point), Mingchuan Zhang, Y. K. Li, Y. Wu, [Daya Guo](https://huggingface.co/guoday).
The abstract from the paper is the following:
> Mathematical reasoning poses a significant challenge for language models due to its complex and structured nature. In this paper, we introduce DeepSeekMath 7B, which continues pre-training DeepSeek-Coder-Base-v1.5 7B with 120B math-related tokens sourced from Common Crawl, together with natural language and code data. DeepSeekMath 7B has achieved an impressive score of 51.7% on the competition-level MATH benchmark without relying on external toolkits and voting techniques, approaching the performance level of Gemini-Ultra and GPT-4. Self-consistency over 64 samples from DeepSeekMath 7B achieves 60.9% on MATH. The mathematical reasoning capability of DeepSeekMath is attributed to two key factors: First, we harness the significant potential of publicly available web data through a meticulously engineered data selection pipeline. Second, we introduce Group Relative Policy Optimization (GRPO), a variant of Proximal Policy Optimization (PPO), that enhances mathematical reasoning abilities while concurrently optimizing the memory usage of PPO.
This post-training method was contributed by [Quentin Gallouédec](https://huggingface.co/qgallouedec).
## Quick start
This example demonstrates how to train a model using the GRPO method. We train a [Qwen 0.5B Instruct model](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) with the prompts from the [UltraFeedback prompts dataset](https://huggingface.co/datasets/trl-lib/ultrafeedback-prompt). You can view the data in the dataset here:
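A minimal training sketch for this setup is shown below; the reward function is a toy example used only to illustrate the API, and the output directory is illustrative.

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/ultrafeedback-prompt", split="train")

# Dummy reward function for demonstration: reward completions containing more unique letters.
def reward_num_unique_letters(completions, **kwargs):
    completion_contents = [completion[0]["content"] for completion in completions]
    return [float(len(set(content))) for content in completion_contents]

training_args = GRPOConfig(output_dir="Qwen2-0.5B-GRPO")
trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",
    reward_funcs=reward_num_unique_letters,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```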
GRPO is an online learning algorithm, meaning it improves iteratively by using the data generated by the trained model itself during training. The intuition behind the GRPO objective is to maximize the advantage of the generated completions, while ensuring that the model remains close to the reference policy. To understand how GRPO works, it can be broken down into four main steps: **Generating completions**, **computing the advantage**, **estimating the KL divergence**, and **computing the loss**.
At each training step, we sample a batch of prompts and generate a set of \\( G \\) completions for each prompt (denoted as \\( o_i \\)).
### Computing the advantage
For each of the \\( G \\) sequences, we compute the reward using a reward model or reward function. To align with the comparative nature of reward models—typically trained on datasets of comparisons between outputs for the same question—the advantage is calculated to reflect these relative comparisons. It is normalized as follows:
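With \\( \mathbf{r} = \{r_1, \dots, r_G\} \\) denoting the rewards of the group, a standard way to write this normalization (consistent with the \\( \text{std}(\mathbf{r}) \\) scaling discussed below) is:

$$\hat{A}_{i,t} = \frac{r_i - \text{mean}(\mathbf{r})}{\text{std}(\mathbf{r})}$$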
This approach gives the method its name: **Group Relative Policy Optimization (GRPO)**.
> [!TIP]
> It was shown in the paper [Understanding R1-Zero-Like Training: A Critical Perspective](https://huggingface.co/papers/2503.20783) that scaling by \\( \text{std}(\mathbf{r}) \\) may cause a question-level difficulty bias. You can disable this scaling by setting `scale_rewards=False` in [`GRPOConfig`].
> [!TIP]
> As shown in [Part I: Tricks or Traps? A Deep Dive into RL for LLM Reasoning (Lite PPO)](https://huggingface.co/papers/2508.08221), calculating the mean at the local (group) level and the standard deviation at the global (batch) level enables more robust reward shaping. You can use this scaling strategy by setting `scale_rewards="batch"` in [`GRPOConfig`].
### Estimating the KL divergence
KL divergence is estimated using the approximator introduced by [Schulman et al. (2020)](http://joschu.net/blog/kl-approx.html). The approximator is defined as follows:
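Written with the per-token notation used elsewhere on this page (current policy \\( \pi_\theta \\), reference policy \\( \pi_{\mathrm{ref}} \\)), this estimator (the "k3" estimator from that blog post) reads:

$$\mathbb{D}_{\mathrm{KL}}\left[\pi_\theta \,\|\, \pi_{\mathrm{ref}}\right] = \frac{\pi_{\mathrm{ref}}(o_{i,t} \mid q, o_{i,<t})}{\pi_\theta(o_{i,t} \mid q, o_{i,<t})} - \log \frac{\pi_{\mathrm{ref}}(o_{i,t} \mid q, o_{i,<t})}{\pi_\theta(o_{i,t} \mid q, o_{i,<t})} - 1$$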
> Note that compared to the original formulation in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300), we don't scale by \\( \frac{1}{|o_i|} \\) because it was shown in the paper [Understanding R1-Zero-Like Training: A Critical Perspective](https://huggingface.co/papers/2503.20783) that this introduces a response-level length bias. More details in [loss types](#loss-types).
> [!TIP]
> Note that compared to the original formulation in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300), we use \\( \beta = 0.0 \\) by default, meaning that the KL divergence term is not used. This choice is motivated by several recent studies (e.g., [Open-Reasoner-Zero: An Open Source Approach to Scaling Up Reinforcement Learning on the Base Model](https://huggingface.co/papers/2503.24290)) which have shown that the KL divergence term is not essential for training with GRPO. As a result, it has become common practice to exclude it (e.g. [Understanding R1-Zero-Like Training: A Critical Perspective](https://huggingface.co/papers/2503.20783), [DAPO: An Open-Source LLM Reinforcement Learning System at Scale](https://huggingface.co/papers/2503.14476)). If you wish to include the KL divergence term, you can set `beta` in [`GRPOConfig`] to a non-zero value.
The [DAPO paper](https://huggingface.co/papers/2503.14476) highlights the limitations of the GRPO algorithm’s sample-level loss in long-CoT scenarios, where longer responses are under-penalized, leading to poorer quality outputs. The proposed solution is a token-level normalization, which better handles longer sequences by assigning more balanced rewards to individual tokens, regardless of response length:
- `clip_ratio/high_max`: The maximum ratio of token (or sequence, if `importance_sampling_level="sequence"`) probabilities that were clipped on the upper bound of the trust region: \\(r_{i,t}(\theta) > 1 + \epsilon_\mathrm{high}\\).
## Customization
### Speed up training with vLLM-powered generation
Generation is often the main bottleneck when training with online methods. To accelerate generation, you can use [vLLM](https://github.com/vllm-project/vllm), a high-throughput, low-latency inference engine for LLMs. To enable it, first install the package with
```shell
pip install trl[vllm]
```
We support two ways of using vLLM during training: **server mode** and **colocate mode**.
> [!TIP]
> By default, Truncated Importance Sampling is activated for vLLM generation to address the generation-training mismatch that occurs when using different frameworks. This can be turned off by setting `vllm_importance_sampling_correction=False`. For more information, see [Truncated Importance Sampling](paper_index#truncated-importance-sampling)
#### 🔌 Option 1: Server mode
In this mode, vLLM runs in a separate process (and using separate GPUs) and communicates with the trainer via HTTP. This is ideal if you have dedicated GPUs for inference.
1. **Start the vLLM server**:
```bash
trl vllm-serve --model <model_name>
```
2. **Enable server mode in your training script**:
```python
from trl import GRPOConfig
training_args = GRPOConfig(
...,
use_vllm=True,
vllm_mode="server", # default value, can be omitted
)
```
> [!WARNING]
> Make sure that the server is using different GPUs than the trainer, otherwise you may run into NCCL errors. You can specify the GPUs to use with the `CUDA_VISIBLE_DEVICES` environment variable.
#### 🧩 Option 2: Colocate mode
In this mode, vLLM runs inside the trainer process and shares GPU memory with the training model. This avoids launching a separate server and can improve GPU utilization, but may lead to memory contention on the training GPUs.
```python
from trl import GRPOConfig
training_args = GRPOConfig(
...,
use_vllm=True,
vllm_mode="colocate",
)
```
> [!TIP]
> Depending on the model size and the overall GPU memory requirements for training, you may need to adjust the `vllm_gpu_memory_utilization` parameter in [`GRPOConfig`] to avoid underutilization or out-of-memory errors.
>
> We provide a [HF Space](https://huggingface.co/spaces/trl-lib/recommend-vllm-memory) to help estimate the recommended GPU memory utilization based on your model configuration and experiment settings. Simply use it as follows to get a `vllm_gpu_memory_utilization` recommendation:
> If the recommended value does not work in your environment, we suggest adding a small buffer (e.g., +0.05 or +0.1) to the recommended value to ensure stability.
>
> If you still run into out-of-memory errors, set `vllm_enable_sleep_mode` to `True` so that the vLLM parameters and cache are offloaded during the optimization step. For more information, see [Reducing Memory Usage with vLLM Sleep Mode](reducing_memory_usage#vllm-sleep-mode).
> [!TIP]
> By default, GRPO uses `MASTER_ADDR=localhost` and `MASTER_PORT=12345` for vLLM, but you can override these values by setting the environment variables accordingly.
For more information, see [Speeding up training with vLLM](speeding_up_training#vllm-for-fast-generation-in-online-methods).
### GRPO at scale: train a 70B+ Model on multiple nodes
When training large models like **Qwen2.5-72B**, you need several key optimizations to make the training efficient and scalable across multiple GPUs and nodes. These include:
- **DeepSpeed ZeRO Stage 3**: ZeRO leverages data parallelism to distribute model states (weights, gradients, optimizer states) across multiple GPUs and CPUs, reducing memory and compute requirements on each device. Since large models cannot fit on a single GPU, using ZeRO Stage 3 is required for training such models. For more details, see [DeepSpeed Integration](deepspeed_integration).
- **Accelerate**: Accelerate is a library that simplifies distributed training across multiple GPUs and nodes. It provides a simple API to launch distributed training and handles the complexities of distributed training, such as data parallelism, gradient accumulation, and distributed data loading. For more details, see [Distributing Training](distributing_training).
- **vLLM**: See the previous section on how to use vLLM to speed up generation.
Below is an example SLURM script to train a 70B model with GRPO on multiple nodes. This script trains a model on 4 nodes and uses the 5th node for vLLM-powered generation.
```sh
#!/bin/bash
#SBATCH --nodes=5
#SBATCH --gres=gpu:8
# Get the list of allocated nodes
NODELIST=($(scontrol show hostnames $SLURM_JOB_NODELIST))
# Assign the first 4 nodes for training and the 5th node for vLLM
TRAIN_NODES="${NODELIST[@]:0:4}" # Nodes 0, 1, 2, 3 for training
```
The [`GRPOTrainer`] supports using custom reward functions instead of dense reward models. To ensure compatibility, your reward function must satisfy the following requirements:
1. **Input arguments**:
- The function must accept the following as keyword arguments:
- `prompts` (contains the prompts),
- `completions` (contains the generated completions),
- `completions_ids` (contains the tokenized completions),
- `trainer_state` ([`~transformers.TrainerState`]): The current state of the trainer. This can be used to implement dynamic reward functions, such as curriculum learning, where the reward is adjusted based on the training progress.
- All column names (except `prompt`) that the dataset may have. For example, if the dataset contains a column named `ground_truth`, the function will be called with `ground_truth` as a keyword argument.
The easiest way to comply with this requirement is to use `**kwargs` in the function signature.
- Depending on the dataset format, the input will vary:
- For [standard format](dataset_formats#standard), `prompts` and `completions` will be lists of strings.
- For [conversational format](dataset_formats#conversational), `prompts` and `completions` will be lists of message dictionaries.
2. **Return value**: The function must return a list of floats. Each float represents the reward corresponding to a single completion.
#### Example 1: Reward longer completions
Below is an example of a reward function for a standard format that rewards longer completions:
```python
def reward_func(completions_ids, **kwargs):
"""Reward function that assigns higher scores to longer completions (in terms of token count)."""
return [float(len(ids)) for ids in completions_ids]
```
You can test it as follows:
```python
>>> prompts = ["The sky is", "The sun is"]  # not used by the reward function, but the trainer will pass it
>>> completions = [" blue.", " in the sky."]  # not used by the reward function, but the trainer will pass it
>>> completions_ids = [[6303, 13], [304, 279, 12884, 13]]  # illustrative token ids for the two completions
>>> reward_func(prompts=prompts, completions=completions, completions_ids=completions_ids)
[2.0, 4.0]
```
#### Example 2: Reward completions with a specific format
Below is an example of a reward function that checks if the completion has a specific format. This example is inspired by the _format reward_ function used in the paper [DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning](https://huggingface.co/papers/2501.12948).
It is designed for a conversational format, where prompts and completions consist of structured messages.
```python
import re
def format_reward_func(completions, **kwargs):
    """Reward function that checks if the completion has a specific format."""
    pattern = r"^<think>.*?</think><answer>.*?</answer>$"
    # Completions are conversational, so each completion is a list of messages; take the content of the first one.
    contents = [completion[0]["content"] for completion in completions]
    matches = [re.match(pattern, content, re.DOTALL) for content in contents]
    return [1.0 if match else 0.0 for match in matches]
```
#### Example 3: Reward completions based on a reference
Below is an example of a reward function that checks if the completion is correct. This example is inspired by the _accuracy reward_ function used in the paper [DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning](https://huggingface.co/papers/2501.12948).
This example is designed for [standard format](dataset_formats#standard), where the dataset contains a column named `ground_truth`.
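A minimal sketch of such a reward function is shown below; it uses exact string matching against the `ground_truth` column, whereas a full accuracy reward would typically also parse and compare mathematical expressions.

```python
def accuracy_reward_func(completions, ground_truth, **kwargs):
    """Reward 1.0 if the completion matches the reference answer, 0.0 otherwise."""
    return [
        1.0 if completion.strip() == answer.strip() else 0.0
        for completion, answer in zip(completions, ground_truth)
    ]
```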
Below is an example of using multiple reward functions in the [`GRPOTrainer`]. In this example, we define two task-specific reward functions: `math_reward_func` and `coding_reward_func`. The `math_reward_func` rewards math problems based on their correctness, while the `coding_reward_func` rewards coding problems based on whether the solution works.
```python
from datasets import Dataset
from trl import GRPOTrainer
# Define a dataset that contains both math and coding problems
dataset = Dataset.from_list(
    [
        {"prompt": "What is 2+2?", "task": "math"},
        {"prompt": "Write a function that returns the sum of two numbers.", "task": "code"},
        {"prompt": "What is 3*4?", "task": "math"},
        {"prompt": "Write a function that returns the product of two numbers.", "task": "code"},
    ]
)
```
In this example, the `math_reward_func` and `coding_reward_func` are designed to work with a mixed dataset that contains both math and coding problems. The `task` column in the dataset is used to determine which reward function to apply to each problem. If there is no relevant reward function for a sample in the dataset, the reward function will return `None`, and the [`GRPOTrainer`] will continue with the valid functions and tasks. This allows the [`GRPOTrainer`] to handle multiple reward functions with different applicability.
Note that the [`GRPOTrainer`] will ignore the `None` rewards returned by the reward functions and only consider the rewards returned by the relevant functions. This ensures that the model is trained on the relevant tasks and ignores the tasks for which there is no relevant reward function.
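A hedged sketch of what such task-specific reward functions could look like; `check_math_solution` and `run_coding_tests` are hypothetical helpers you would implement for your own data.

```python
def math_reward_func(prompts, completions, task, **kwargs):
    rewards = []
    for prompt, completion, t in zip(prompts, completions, task):
        if t == "math":
            rewards.append(1.0 if check_math_solution(prompt, completion) else 0.0)
        else:
            rewards.append(None)  # not a math sample: let the other reward function handle it
    return rewards

def coding_reward_func(prompts, completions, task, **kwargs):
    rewards = []
    for prompt, completion, t in zip(prompts, completions, task):
        if t == "code":
            rewards.append(1.0 if run_coding_tests(completion) else 0.0)
        else:
            rewards.append(None)  # not a coding sample
    return rewards
```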
#### Passing the reward function to the trainer
To use your custom reward function, pass it to the [`GRPOTrainer`] as follows:
```python
from trl import GRPOTrainer
trainer = GRPOTrainer(
reward_funcs=reward_func,
...,
)
```
If you have multiple reward functions, you can pass them as a list:
```python
from trl import GRPOTrainer
trainer = GRPOTrainer(
reward_funcs=[reward_func1, reward_func2],
...,
)
```
and the reward will be computed as the sum of the rewards from each function, or the weighted sum if `reward_weights` is provided in the config.
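For instance, a small sketch of weighting two reward functions differently (the weight values are illustrative):

```python
from trl import GRPOConfig

# Give reward_func1 twice the influence of reward_func2.
training_args = GRPOConfig(reward_weights=[2.0, 1.0])
```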
Note that [`GRPOTrainer`] supports multiple reward functions of different types. See the parameters documentation for more details.
## Vision-Language Model (VLM) Training
GRPO supports training Vision-Language Models (VLMs) on multimodal datasets containing both text and images.
> Compatibility with all VLMs is not guaranteed. If you believe a model should be supported, feel free to open an issue on GitHub — or better yet, submit a pull request with the required changes.
### Quick Start
Use [grpo\_vlm.py](https://github.com/huggingface/trl/blob/main/examples/scripts/grpo_vlm.py) to fine-tune a VLM. Example command for training on [`lmms-lab/multimodal-open-r1-8k-verified`](https://huggingface.co/datasets/lmms-lab/multimodal-open-r1-8k-verified):
> For VLMs, truncating may remove image tokens, leading to errors during training. To avoid this, set `max_prompt_length=None` in the [`GRPOConfig`]. This allows the model to process the full sequence length without truncating image tokens.
>
> ```python
> GRPOConfig(max_prompt_length=None, ...)
> ```
>
> Only use `max_prompt_length` when you've verified that truncation won't remove image tokens for the entire dataset.
- Use LoRA on vision-language projection layers
- Enable 4-bit quantization to reduce memory usage
- VLMs are memory-intensive — start with smaller batch sizes
- Most models are compatible with vLLM (`server` and `colocate` modes)
### Dataset Format
Each training sample should include:
- `prompt`: Text formatted via the processor's chat template
- `image`/`images`: PIL Image or list of PIL Images
The trainer automatically handles image-to-tensor conversion via the model’s image processor.
This experimental trainer trains a model with GRPO, but replaces groups (and their corresponding completions) whose rewards have zero standard deviation with high-reward, high-standard-deviation groups that were used to train the model in prior batches.
In the paper [Group Sequence Policy Optimization](https://huggingface.co/papers/2507.18071), the authors propose a token-level objective variant to GSPO, called GSPO-token. To use GSPO-token, you can use the `GRPOTrainer` class in `trl.experimental.gspo_token`.
## Usage
```python
from trl.experimental.gspo_token import GRPOTrainer
from trl import GRPOConfig

training_args = GRPOConfig(
importance_sampling_level="sequence_token",
...
)
```
> [!WARNING]
> To leverage GSPO-token, the user will need to provide the per-token advantage \\( \hat{A_{i,t}} \\) for each token \\( t \\) in the sequence \\( i \\) (i.e., make \\( \hat{A_{i,t}} \\) vary with \\( t \\), which isn't the case here, since \\( \hat{A_{i,t}}=\hat{A_{i}} \\)). Otherwise, the GSPO-token gradient is equivalent to that of the original GSPO implementation.
TRL is a full stack library where we provide a set of tools to train transformer language models with methods like Supervised Fine-Tuning (SFT), Group Relative Policy Optimization (GRPO), Direct Preference Optimization (DPO), Reward Modeling, and more.
The library is integrated with 🤗 [transformers](https://github.com/huggingface/transformers).
## 🎉 What's New
**OpenEnv Integration:** TRL now supports **[OpenEnv](https://huggingface.co/blog/openenv)**, the open-source framework from Meta for defining, deploying, and interacting with environments in reinforcement learning and agentic workflows.
Explore how to seamlessly integrate TRL with OpenEnv in our [dedicated documentation](openenv).
## Taxonomy
Below is the current list of TRL trainers, organized by method type (⚡️ = vLLM support; 🧪 = experimental).
With the TRL (Transformer Reinforcement Learning) library you can train transformer language models with reinforcement learning. The library is integrated with 🤗 [transformers](https://github.com/huggingface/transformers).
TRL supports decoder models such as GPT-2, BLOOM, and GPT-Neo, which can all be optimized using Proximal Policy Optimization (PPO). You can find installation instructions in the [installation guide](installation) and an introduction to the library in the [Quickstart section](quickstart). There is also a more [in-depth example](sentiment_tuning) to tune GPT-2 to produce positive movie reviews.
You can install TRL either from PyPI or from source:
## PyPI
Install the library with pip or [uv](https://docs.astral.sh/uv/):
<hfoptions id="install">
<hfoption id="uv">
uv is a fast Rust-based Python package and project manager. Refer to [Installation](https://docs.astral.sh/uv/getting-started/installation/) for installation instructions.
```bash
uv pip install trl
```
</hfoption>
<hfoption id="pip">
```bash
pip install trl
```
</hfoption>
</hfoptions>
## Source
You can also install the latest version from source. First clone the repo and then run the installation with `pip`:
```bash
git clone https://github.com/huggingface/trl.git
cd trl/
pip install -e .
```
If you want the development install, you can replace the pip install with the following:
[Hugging Face Jobs](https://huggingface.co/docs/huggingface_hub/guides/jobs) lets you run training scripts on fully managed infrastructure—no need to manage GPUs or local environment setup.
In this guide, you'll learn how to:
* Use [TRL Jobs](https://github.com/huggingface/trl-jobs) to easily run pre-optimized TRL training
* Run any TRL training script with uv scripts
For general details about Hugging Face Jobs (hardware selection, job monitoring, etc.), see the [Jobs documentation](https://huggingface.co/docs/huggingface_hub/guides/jobs).
## Requirements
* A [Pro](https://hf.co/pro), [Team](https://hf.co/enterprise), or [Enterprise](https://hf.co/enterprise) plan
* Logged in to the Hugging Face Hub (`hf auth login`)
## Using TRL Jobs
[TRL Jobs](https://github.com/huggingface/trl-jobs) is a high-level wrapper around Hugging Face Jobs and TRL that streamlines training. It provides optimized default configurations so you can start quickly without manually tuning parameters.
Launch the job using either the [`hf jobs` CLI](https://huggingface.co/docs/huggingface_hub/guides/cli#hf-jobs) or the Python API:
<hfoptions id="script_type">
<hfoption id="bash">
```bash
hf jobs uv run \
--flavor a100-large \
--with trl \
--secrets HF_TOKEN \
train.py
```
</hfoption>
<hfoption id="python">
```python
from huggingface_hub import run_uv_job
run_uv_job(
"train.py",
dependencies=["trl"],
flavor="a100-large",
secrets={"HF_TOKEN":"hf_..."},
)
```
</hfoption>
</hfoptions>
To run successfully, the script needs:
* **TRL installed**: Use the `--with trl` flag or the `dependencies` argument. uv installs these dependencies automatically before running the script.
* **An authentication token**: Required to push the trained model (or perform other authenticated operations). Provide it with the `--secrets HF_TOKEN` flag or the `secrets` argument.
> [!WARNING]
> When training with Jobs, be sure to:
>
> * **Set a sufficient timeout**. Jobs time out after 30 minutes by default. If your job exceeds the timeout, it will fail and all progress will be lost. See [Setting a custom timeout](https://huggingface.co/docs/huggingface_hub/guides/jobs#setting-a-custom-timeout).
> * **Push the model to the Hub**. The Jobs environment is ephemeral—files are deleted when the job ends. If you don’t push the model, it will be lost.
You can then run the script without specifying dependencies:
<hfoptions id="script_type">
<hfoption id="bash">
```bash
hf jobs uv run \
--flavor a100-large \
--secrets HF_TOKEN \
train.py
```
</hfoption>
<hfoption id="python">
```python
from huggingface_hub import run_uv_job
run_uv_job(
"train.py",
flavor="a100-large",
secrets={"HF_TOKEN":"hf_..."},
)
```
</hfoption>
</hfoptions>
TRL example scripts are fully uv-compatible, so you can run a complete training workflow directly on Jobs. You can customize training with standard script arguments plus hardware and secrets:
See the full list of examples in [Maintained examples](example_overview#maintained-examples).
### Docker Images
An up-to-date Docker image with all TRL dependencies is available at [huggingface/trl](https://hub.docker.com/r/huggingface/trl) and can be used directly with Hugging Face Jobs:
<hfoptions id="script_type">
<hfoption id="bash">
```bash
hf jobs uv run \
--flavor a100-large \
--secrets HF_TOKEN \
--image huggingface/trl \
train.py
```
</hfoption>
<hfoption id="python">
```python
from huggingface_hub import run_uv_job
run_uv_job(
"train.py",
flavor="a100-large",
secrets={"HF_TOKEN":"hf_..."},
image="huggingface/trl",
)
```
</hfoption>
</hfoptions>
Jobs runs on a Docker image from Hugging Face Spaces or Docker Hub, so you can also specify any custom image:
> TRL Judges is an experimental API which is subject to change at any time.
TRL provides judges to easily compare two completions.
Make sure to have installed the required dependencies by running:
```bash
pip install trl[judges]
```
## Using the provided judges
TRL provides several judges out of the box. For example, you can use the [`HfPairwiseJudge`] to compare two completions using a pre-trained model from the Hugging Face model hub:
```python
from trl import HfPairwiseJudge

judge = HfPairwiseJudge()
judge.judge(
    prompts=["What is the capital of France?", "What is the biggest planet in the solar system?"],
    completions=[["Paris", "Lyon"], ["Saturn", "Jupiter"]],  # one pair of candidate completions per prompt
)  # Outputs: [0, 1]
```
To define your own judge, we provide several base classes that you can subclass. For rank-based judges, you need to subclass [`BaseRankJudge`] and implement the [`BaseRankJudge.judge`] method. For pairwise judges, you need to subclass [`BasePairwiseJudge`] and implement the [`BasePairwiseJudge.judge`] method. If you want to define a judge that doesn't fit into these categories, you need to subclass [`BaseJudge`] and implement the [`BaseJudge.judge`] method.
As an example, let's define a pairwise judge that prefers shorter completions:
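A minimal sketch is shown below; the exact `judge` method signature (in particular the `shuffle_order` argument) may differ slightly between versions, so adapt as needed.

```python
from trl import BasePairwiseJudge

class PrefersShorterJudge(BasePairwiseJudge):
    def judge(self, prompts, completions, shuffle_order=False):
        # For each prompt, return the index (0 or 1) of the shorter of the two completions.
        return [0 if len(pair[0]) <= len(pair[1]) else 1 for pair in completions]
```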
The [`kernels`](https://huggingface.co/blog/hello-hf-kernels#get-started-and-next-steps) library allows optimized compute kernels to be loaded directly from the Hub.
You can find `kernels` in [dedicated orgs](https://huggingface.co/kernels-community) or by searching for the [`kernel` tag](https://huggingface.co/models?other=kernel) within the Hub.
Kernels are **optimized code pieces** that help in model development, training, and inference. Here, we’ll focus on their **integration with TRL**, but check out the above resources to learn more about them.
## Installation
To use kernels with TRL, you'd need to install the library in your Python environment:
```bash
pip install kernels
```
## Using Kernels from the Hub in TRL
Kernels can directly replace attention implementations, removing the need to manually compile attention backends like Flash Attention and boosting training speed just by pulling the respective attention kernel from the Hub.
You can specify a kernel when loading a model:
```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "your-model-name",
    attn_implementation="kernels-community/flash-attn",  # other options: kernels-community/vllm-flash-attn3, kernels-community/paged-attention
)
```
> Now you can leverage faster attention backends with a pre-optimized kernel for your hardware configuration from the Hub, speeding up both development and training.
## Comparing Attention Implementations
We evaluated various attention implementations available in transformers, along with different kernel backends, using **TRL** and **SFT**.
The experiments were run on a single **H100 GPU** with **CUDA 12.9**, leveraging **Qwen3-8B** with a **batch size of 8**, **gradient accumulation of 1**, and **bfloat16** precision.
Keep in mind that the results shown here are specific to this setup and may vary with different training configurations.
The following figure illustrates both **latency** (time per training step) and **peak allocated memory** for the different attention implementations and kernel backends.
Kernel-based implementations perform on par with custom-installed attention, and increasing the model’s `max_length` further enhances performance. Memory consumption is similar across all implementations, showing no significant differences. We get the same performance but with less friction, as described in [the following section](#flash-attention-vs-hub-kernels).
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/trl-lib/documentation-images/resolve/main/kernels_guide_latency.png" alt="Latency and Memory Usage" width="45%"/>
<img src="https://huggingface.co/datasets/trl-lib/documentation-images/resolve/main/kernels_guide_peak_allocated_memory.png" alt="Latency and Memory Usage" width="45%"/>
</div>
## Flash Attention vs. Hub Kernels
Building Flash Attention from source can be time-consuming, often taking anywhere from several minutes to hours, depending on your hardware, CUDA/PyTorch configuration, and whether precompiled wheels are available.
In contrast, **Hugging Face Kernels** provide a much faster and more reliable workflow. Developers don’t need to worry about complex setups—everything is handled automatically. In our benchmarks, kernels were ready to use in about **2.5 seconds**, with no compilation required. This allows you to start training almost instantly, significantly accelerating development. Simply specify the desired version, and `kernels` takes care of the rest.
## Combining FlashAttention Kernels with Liger Kernels
You can combine **FlashAttention kernels** with **Liger kernels** for additional TRL performance improvements.
First, install the Liger kernel dependency:
```bash
pip install liger-kernel
```
Then, combine both in your code:
```python
from transformers import AutoModelForCausalLM
from trl import SFTConfig

model = AutoModelForCausalLM.from_pretrained(
    "your-model-name",
    attn_implementation="kernels-community/flash-attn",  # choose the desired FlashAttention variant
)
training_args = SFTConfig(
use_liger_kernel=True,
# ... other TRL training args
)
```
Learn more about the [Liger Kernel Integration](./liger_kernel_integration).
Kahneman-Tversky Optimization (KTO) was introduced in [KTO: Model Alignment as Prospect Theoretic Optimization](https://huggingface.co/papers/2402.01306) by [Kawin Ethayarajh](https://huggingface.co/kawine), [Winnie Xu](https://huggingface.co/xwinxu), [Niklas Muennighoff](https://huggingface.co/Muennighoff), Dan Jurafsky, [Douwe Kiela](https://huggingface.co/douwekiela).
The abstract from the paper is the following:
> Kahneman & Tversky's prospect theory tells us that humans perceive random variables in a biased but well-defined manner; for example, humans are famously loss-averse. We show that objectives for aligning LLMs with human feedback implicitly incorporate many of these biases -- the success of these objectives (e.g., DPO) over cross-entropy minimization can partly be ascribed to them being human-aware loss functions (HALOs). However, the utility functions these methods attribute to humans still differ from those in the prospect theory literature. Using a Kahneman-Tversky model of human utility, we propose a HALO that directly maximizes the utility of generations instead of maximizing the log-likelihood of preferences, as current methods do. We call this approach Kahneman-Tversky Optimization (KTO), and it matches or exceeds the performance of preference-based methods at scales from 1B to 30B. Crucially, KTO does not need preferences -- only a binary signal of whether an output is desirable or undesirable for a given input. This makes it far easier to use in the real world, where preference data is scarce and expensive.
The official code can be found in [ContextualAI/HALOs](https://github.com/ContextualAI/HALOs).
This post-training method was contributed by [Kashif Rasul](https://huggingface.co/kashif), [Younes Belkada](https://huggingface.co/ybelkada), [Lewis Tunstall](https://huggingface.co/lewtun) and Pablo Vicente.
## Quick start
This example demonstrates how to train a model using the KTO method. We use the [Qwen 0.5B model](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) as the base model. We use the preference data from the [KTO Mix 14k](https://huggingface.co/datasets/trl-lib/kto-mix-14k). You can view the data in the dataset here:
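A minimal training sketch for this setup is shown below; the hyperparameters are left at their defaults and the output directory is illustrative.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import KTOConfig, KTOTrainer

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
train_dataset = load_dataset("trl-lib/kto-mix-14k", split="train")

training_args = KTOConfig(output_dir="Qwen2-0.5B-KTO")
trainer = KTOTrainer(
    model=model,
    args=training_args,
    processing_class=tokenizer,
    train_dataset=train_dataset,
)
trainer.train()
```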
Distributed across 8 x H100 GPUs, the training takes approximately 30 minutes. You can verify the training progress by checking the reward graph. An increasing trend in the reward margin indicates that the model is improving and generating better responses over time.
To see how the [trained model](https://huggingface.co/trl-lib/Qwen2-0.5B-KTO) performs, you can use the [Transformers Chat CLI](https://huggingface.co/docs/transformers/quicktour#chat-with-text-generation-models).
The best programming language can vary depending on individual preferences, industry-specific requirements, technical skills, and familiarity with the specific use case or task. Here are some widely-used programming languages that have been noted as popular and widely used:
Here are some other factors to consider when choosing a programming language for a project:
<strong><span style="color: green;">1</span> JavaScript</strong>: JavaScript is at the heart of the web and can be used for building web applications, APIs, and interactive front-end applications like frameworks like React and Angular. It's similar to C, C++, and F# in syntax structure and is accessible and easy to learn, making it a popular choice for beginners and professionals alike.
<strong><span style="color: green;">2</span> Java</strong>: Known for its object-oriented programming (OOP) and support for Java 8 and .NET, Java is used for developing enterprise-level software applications, high-performance games, as well as mobile apps, game development, and desktop applications.
<strong><span style="color: green;">3</span> C++</strong>: Known for its flexibility and scalability, C++ offers comprehensive object-oriented programming and is a popular choice for high-performance computing and other technical fields. It's a powerful platform for building real-world applications and games at scale.
<strong><span style="color: green;">4</span> Python</strong>: Developed by Guido van Rossum in 1991, Python is a high-level, interpreted, and dynamically typed language known for its simplicity, readability, and versatility.
</code></pre>
## Expected dataset format
KTO requires an [unpaired preference dataset](dataset_formats#unpaired-preference). Alternatively, you can provide a *paired* preference dataset (also known simply as a *preference dataset*). In this case, the trainer will automatically convert it to an unpaired format by separating the chosen and rejected responses, assigning `label = True` to the chosen completions and `label = False` to the rejected ones.
The [`KTOTrainer`] supports both [conversational](dataset_formats#conversational) and [standard](dataset_formats#standard) dataset formats. When provided with a conversational dataset, the trainer will automatically apply the chat template to the dataset.
In theory, the dataset should contain at least one chosen and one rejected completion. However, some users have successfully run KTO using *only* chosen or only rejected data. If using only rejected data, it is advisable to adopt a conservative learning rate.
## Example script
We provide an example script to train a model using the KTO method. The script is available in [`trl/scripts/kto.py`](https://github.com/huggingface/trl/blob/main/trl/scripts/kto.py)
To test the KTO script with the [Qwen2 0.5B model](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the [UltraFeedback dataset](https://huggingface.co/datasets/trl-lib/kto-mix-14k), run the following command:
```bash
accelerate launch trl/scripts/kto.py \
--model_name_or_path Qwen/Qwen2-0.5B-Instruct \
--dataset_name trl-lib/kto-mix-14k \
--num_train_epochs 1 \
--output_dir Qwen2-0.5B-KTO
```
## Usage tips
### For Mixture of Experts Models: Enabling the auxiliary loss
MoEs are the most efficient if the load is roughly equally distributed between experts.
To ensure that we train MOEs similarly during preference-tuning, it is beneficial to add the auxiliary loss from the load balancer to the final loss.
This option is enabled by setting `output_router_logits=True` in the model config (e.g. [`~transformers.MixtralConfig`]).
To scale how much the auxiliary loss contributes to the total loss, use the hyperparameter `router_aux_loss_coef=...` (default: `0.001`) in the model config.
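For example (the model name is illustrative; any MoE architecture exposing these config options works the same way):

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-v0.1",
    output_router_logits=True,    # add the load-balancing auxiliary loss to the total loss
    router_aux_loss_coef=0.001,   # how much the auxiliary loss contributes
)
```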
### Batch size recommendations
Use a per-step batch size that is at least 4, and an effective batch size between 16 and 128. Even if your effective batch size is large, if your per-step batch size is poor, then the KL estimate in KTO will be poor.
### Learning rate recommendations
Each choice of `beta` has a maximum learning rate it can tolerate before learning performance degrades. For the default setting of `beta = 0.1`, the learning rate should typically not exceed `1e-6` for most models. As `beta` decreases, the learning rate should also be reduced accordingly. In general, we strongly recommend keeping the learning rate between `5e-7` and `5e-6`. Even with small datasets, we advise against using a learning rate outside this range. Instead, opt for more epochs to achieve better results.
### Imbalanced data
The `desirable_weight` and `undesirable_weight` of the [`KTOConfig`] refer to the weights placed on the losses for desirable/positive and undesirable/negative examples.
By default, they are both 1. However, if you have more of one or the other, then you should upweight the less common type such that the ratio of (`desirable_weight` \\(\times\\) number of positives) to (`undesirable_weight` \\(\times\\) number of negatives) is in the range 1:1 to 4:3.
## Logged metrics
While training and evaluating, we record the following reward metrics:
- `rewards/chosen_sum`: the sum of log probabilities of the policy model for the chosen responses scaled by beta
- `rewards/rejected_sum`: the sum of log probabilities of the policy model for the rejected responses scaled by beta
- `logps/chosen_sum`: the sum of log probabilities of the chosen completions
- `logps/rejected_sum`: the sum of log probabilities of the rejected completions
- `logits/chosen_sum`: the sum of logits of the chosen completions
- `logits/rejected_sum`: the sum of logits of the rejected completions
- `count/chosen`: the count of chosen samples in a batch
- `count/rejected`: the count of rejected samples in a batch
[Liger Kernel](https://github.com/linkedin/Liger-Kernel) is a collection of Triton kernels designed specifically for LLM training. It can effectively increase multi-GPU training throughput by 20% and reduce memory usage by 60%. That way, we can **4x** our context length, as described in the benchmark below. They have implemented Hugging Face compatible `RMSNorm`, `RoPE`, `SwiGLU`, `CrossEntropy`, `FusedLinearCrossEntropy`, with more to come. The kernel works out of the box with [FlashAttention](https://github.com/Dao-AILab/flash-attention), [PyTorch FSDP](https://pytorch.org/tutorials/intermediate/FSDP_tutorial.html), and [Microsoft DeepSpeed](https://github.com/microsoft/DeepSpeed).
With this memory reduction, you can potentially turn off `cpu_offloading` or gradient checkpointing to further boost the performance.
Recent research from the team at [Thinking Machines Lab](https://thinkingmachines.ai/blog/lora/) (Schulman et al., 2025) shows that **LoRA can match full fine-tuning performance** when configured correctly, while using only ~67% of the compute. These findings are exciting to TRL users because they're straightforward to implement and can improve model performance on smaller budgets.
This guide provides simple instructions to reproduce the results of the blog post in TRL.
> [!TIP]
> It is recommended to read the blog post before following this guide, or to consult both resources in parallel for best results.
## Benefits of LoRA over full fine-tuning
First of all, let's remind ourselves of the benefits of [LoRA over full fine-tuning](https://huggingface.co/docs/trl/en/peft_integration).
LoRA adds adapter layers on top of the base model, which contains significantly fewer parameters than the base model itself. This design reduces GPU memory requirements and enables more efficient training. As described in the [blog](https://thinkingmachines.ai/blog/lora/), this approach was originally thought to involve a performance trade-off, although careful configuration can overcome this trade-off and match full fine-tuning performance.
## Examples with TRL
Let's implement and train LoRA adapters in TRL scripts based on the core findings of the blog post. Afterwards, we'll revisit each finding in light of the TRL results.
### Supervised Fine-Tuning (SFT)
The blog post performs SFT on a range of models and datasets from the Hub, which we can reproduce in TRL.
To use Hugging Face Jobs, you will need to be logged in to the Hugging Face Hub (`hf auth login`) and have a [Pro](https://hf.co/pro), [Team](https://hf.co/enterprise), or [Enterprise](https://hf.co/enterprise) plan. Check out the [Jobs documentation](https://huggingface.co/docs/huggingface_hub/en/guides/jobs) for more details.
</hfoption>
<hfoption id="local">
```bash
uv run "https://raw.githubusercontent.com/huggingface/trl/main/trl/scripts/sft.py"\
--model_name_or_path Qwen/Qwen2.5-3B-Instruct \
--dataset_name open-thoughts/OpenThoughts-114k \
--learning_rate 2.0e-5 \
--num_train_epochs 1\
--packing \
--per_device_train_batch_size 2\
--gradient_accumulation_steps 16\
--gradient_checkpointing \
--eval_strategy no \
--use_peft \
--lora_r 256\
--lora_alpha 16\
--lora_target_modules all-linear \
--output_dir Qwen2.5-3B-OpenThoughts-LoRA \
--report_to trackio \
--push_to_hub
```
To run the script locally, you will need to have `uv` installed. Check out the [uv documentation](https://docs.astral.sh/uv/) for more details.
</hfoption>
</hfoptions>
Once training starts, you can monitor the progress in [Trackio](https://huggingface.co/trackio), which will log the URL.
### Reinforcement Learning (GRPO)
The blog post performs GRPO on a range of models and datasets from the Hub, and once again we can reproduce the results in TRL.
To use Hugging Face Jobs, you will need to be logged in to the Hugging Face Hub (`hf auth login`) and have a [Pro](https://hf.co/pro), [Team](https://hf.co/enterprise), or [Enterprise](https://hf.co/enterprise) plan. Check out the [Jobs documentation](https://huggingface.co/docs/huggingface_hub/en/guides/jobs) for more details.
</hfoption>
<hfoption id="local">
```bash
uv run "https://huggingface.co/datasets/burtenshaw/lora-without-regrets/resolve/main/grpo.py" \
```
To run the script locally, you will need to have `uv` installed. Check out the [uv documentation](https://docs.astral.sh/uv/) for more details.
</hfoption>
</hfoptions>
The reinforcement learning script with GRPO is implemented as a custom TRL script that uses the reward function shown above. You can review it at [`grpo.py`](https://huggingface.co/datasets/burtenshaw/lora-without-regrets/blob/main/grpo.py), which applies the LoRA best practices to reinforcement learning.
## Key findings in optimizing LoRA
We were able to reproduce the results of the blog post using TRL and the SmolLM3 model. We trained the model for 500 steps on the [Math 220k dataset](https://huggingface.co/datasets/HuggingFaceH4/OpenR1-Math-220k-default-verified) with the reward function and configuration above. As you can see in the figure below, the LoRA model's average train reward curve matches the full fine-tuning curve.
Let's break down the key findings of the blog post and how we were able to reproduce them.
### 1. *LoRA performs better when applied to all weight matrices*
The authors recommend applying LoRA to all weight matrices rather than limiting it to attention layers, as increasing the rank does not compensate for this restriction.
Attention-only LoRA underperforms even when using a higher rank to match parameter count. In TRL, this can be configured using `--lora_target_modules all-linear` to apply LoRA to all weight matrices. In Python, we can do this like so:
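A minimal sketch with `peft`'s `LoraConfig` (the hyperparameter values are illustrative and the config can be passed to TRL trainers via their `peft_config` argument):

```python
from peft import LoraConfig

peft_config = LoraConfig(
    r=256,                        # adapter rank, see the capacity discussion below
    lora_alpha=16,
    target_modules="all-linear",  # apply LoRA to all weight matrices, not just attention
)
```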
### 2. *The adapter needs sufficient capacity to learn from the dataset*
The blog post recommends using a sufficient LoRA rank to learn from the dataset. The rank determines the number of trainable parameters in the LoRA adapter. Therefore, "For datasets that exceed LoRA capacity, LoRA underperforms FullFT".
In the TRL script, we could use `--lora_r` to set the rank and adapt it based on the task and dataset we're training on. The blog post recommends the following ranks based on the task and dataset size:
Reinforcement learning tasks typically require lower capacity, so smaller LoRA ranks can be used. This is because policy gradient algorithms extract roughly ~1 bit of information per episode, demanding minimal parameter capacity.
The blog post defines the ideal dataset size for LoRA to match full fine-tuning as "post-training scale", which we can use to determine the recommended rank for SFT and RL LoRAs:
| Task Type | Dataset Size | Recommended Rank |
| --- | --- | --- |
| **SFT** | Post-training scale | 256 |
| **RL** | Any size | 1-32 |
### 3. *"FullFT and high-rank LoRAs have similar learning curves"*
Counterintuitively, the blog post recommends using a higher learning rate than for full fine-tuning. In the table above, we used 1.0e-5 for LoRA and 1.0e-6 for full fine-tuning. In the TRL script, we could use `--learning_rate` to set the learning rate. The \\( \frac{1}{r} \\) scaling in LoRA makes the optimal learning rate approximately rank-independent.
### 4. *"In some scenarios, LoRA is less tolerant of large batch sizes than full fine-tuning."*
The blog post recommends using an effective batch size below 32, because the authors found LoRA to be less tolerant of large batch sizes than full fine-tuning; this could not be mitigated by increasing the LoRA rank. In the TRL script, we could use `--per_device_train_batch_size` and `--gradient_accumulation_steps` to set the batch size.
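For example, on a single GPU the following illustrative flags keep the effective batch size at 16, below the recommended threshold of 32:

```bash
# effective batch size = per_device_train_batch_size × gradient_accumulation_steps × number of GPUs
--per_device_train_batch_size 4 \
--gradient_accumulation_steps 4
```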
Nash-MD was proposed in the paper [Nash Learning from Human Feedback](https://huggingface.co/papers/2312.00886) by Rémi Munos, [Michal Valko](https://huggingface.co/misovalko), Daniele Calandriello, Mohammad Gheshlaghi Azar, Mark Rowland, Daniel Guo, Yunhao Tang, Matthieu Geist, Thomas Mésnard, and Andrea Michi.
The abstract from the paper is the following:
> Reinforcement learning from human feedback (RLHF) has emerged as the main paradigm for aligning large language models (LLMs) with human preferences. Typically, RLHF involves the initial step of learning a reward model from human feedback, often expressed as preferences between pairs of text generations produced by a pre-trained LLM. Subsequently, the LLM's policy is fine-tuned by optimizing it to maximize the reward model through a reinforcement learning algorithm. However, an inherent limitation of current reward models is their inability to fully represent the richness of human preferences and their dependency on the sampling distribution. In this study, we introduce an alternative pipeline for the fine-tuning of LLMs using pairwise human feedback. Our approach entails the initial learning of a preference model, which is conditioned on two inputs given a prompt, followed by the pursuit of a policy that consistently generates responses preferred over those generated by any competing policy, thus defining the Nash equilibrium of this preference model. We term this approach Nash learning from human feedback (NLHF). In the context of a tabular policy representation, we present a novel algorithmic solution, Nash-MD, founded on the principles of mirror descent. This algorithm produces a sequence of policies, with the last iteration converging to the regularized Nash equilibrium. Additionally, we explore parametric representations of policies and introduce gradient descent algorithms for deep-learning architectures. To demonstrate the effectiveness of our approach, we present experimental results involving the fine-tuning of a LLM for a text summarization task. We believe NLHF offers a compelling avenue for preference learning and policy optimization with the potential of advancing the field of aligning LLMs with human preferences.
This post-training method was contributed by [Kashif Rasul](https://huggingface.co/kashif) and [Daniil Tiapkin](https://huggingface.co/dtiapkin), [Pierre Ménard](https://huggingface.co/menardprr), Daniele Calandriello and [Quentin Gallouédec](https://huggingface.co/qgallouedec).
## Quick start
This example demonstrates how to train a model using the Nash-MD method. We use the [Qwen 0.5B model](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) as the base model and [`PairRMJudge`] as a judge. We use the prompts from the [UltraFeedback dataset](https://huggingface.co/datasets/openbmb/UltraFeedback). You can view the prompts in the dataset here:
Distributed across 8 GPUs, the training takes approximately 3 hours.
To see how the [trained model](https://huggingface.co/trl-lib/Qwen2-0.5B-NashMD) performs, you can use the [Transformers Chat CLI](https://huggingface.co/docs/transformers/quicktour#chat-with-text-generation-models).
The best programming language depends on personal preference, the complexity of the project, and the specific requirements of the task. Some programming languages that are often recommended include Python, Java, and JavaScript, and there are many other languages to choose from depending on individual needs.
## Expected dataset type
Nash-MD requires a [prompt-only dataset](dataset_formats#prompt-only). The [`NashMDTrainer`] supports both [conversational](dataset_formats#conversational) and [standard](dataset_formats#standard) dataset formats. When provided with a conversational dataset, the trainer will automatically apply the chat template to the dataset.
## Usage tips
### Use a reward model
Instead of a judge, you can choose to use a reward model -- see [Reward Bench](https://huggingface.co/spaces/allenai/reward-bench) for a leaderboard of public models you can use. Below is a code example showing how to replace a judge with the [trl-lib/Qwen2-0.5B-Reward](https://huggingface.co/trl-lib/Qwen2-0.5B-Reward) model:
```diff
- from trl import PairRMJudge
+ from transformers import AutoModelForSequenceClassification
```
> Make sure that the SFT model and reward model use the _same_ chat template and the same tokenizer. Otherwise, you may find the model completions are scored incorrectly during training.
### Encourage EOS token generation
We may want the model to generate completions within a given length. During training, the model will generate completions up to the maximum length specified in the `max_new_tokens` argument of [`NashMDConfig`]. If you want to penalize the model for not generating an EOS token before reaching the maximum length, you can use the `missing_eos_penalty` argument of [`NashMDConfig`]:
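For example, a minimal sketch (the values are illustrative):

```python
from trl import NashMDConfig

training_args = NashMDConfig(
    output_dir="Qwen2-0.5B-NashMD",  # illustrative output directory
    max_new_tokens=128,              # maximum completion length during generation
    missing_eos_penalty=1.0,         # subtracted from the score when no EOS token is generated
)
```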
We provide an example script to train a model using the Nash-MD method. The script is available in [`examples/scripts/nash_md.py`](https://github.com/huggingface/trl/blob/main/examples/scripts/nash_md.py).
To test the Nash-MD script with the [Qwen2.5 0.5B model](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the [UltraFeedback dataset](https://huggingface.co/datasets/openbmb/UltraFeedback), run the following command:
```bash
python examples/scripts/nash_md.py \
--model_name_or_path Qwen/Qwen2.5-0.5B-Instruct \
--judge pair_rm \
--dataset_name trl-lib/ultrafeedback-prompt \
--learning_rate 5.0e-7 \
--output_dir Qwen2.5-0.5B-NashMD-PairRM \
--warmup_ratio 0.1 \
--push_to_hub
```
## Logged metrics
While training and evaluating, we record the following reward metrics:
* `loss/kl`: The mean KL divergence between the model and reference data.
* `objective/entropy`: The mean entropy of the model and reference data.
* `loss/score`: The mean reinforce score loss.
* `rewards/chosen`: The mean scores (according to the reward model) of the model completions.
* `rewards/rejected`: The mean scores (according to the reward model) of the mixture completions.
* `rewards/probabilities`: The mean probability (according to the reward model or judge) of the model completions chosen vs the mixture completion.
* `rewards/accuracies`: The accuracies of the Nash-MD's implicit reward model.
* `rewards/margins`: The mean reward margin (according to reward model) between the chosen and mixture completions.
* `logps/chosen`: The mean log probabilities of the chosen completions.
* `logps/rejected`: The mean log probabilities of the reference completions.
* `val/model_contain_eos_token`: The amount of times the model's output contains the eos token.
* `val/ref_contain_eos_token`: The amount of times the mixture's output contains the eos token.
* `beta`: The parameter that controls the weight of the loss term representing the deviation from the reference model. Typically fixed, but can be made dynamic by passing a list to [`NashMDConfig`].
* `mixture_coef`: Logit mixture coefficient for the model and reference model. Typically fixed, but can be made dynamic by passing a list to [`NashMDConfig`].
Online DPO was proposed in [Direct Language Model Alignment from Online AI Feedback](https://huggingface.co/papers/2402.04792) by Shangmin Guo, Biao Zhang, Tianlin Liu, Tianqi Liu, Misha Khalman, Felipe Llinares, Alexandre Rame, Thomas Mesnard, Yao Zhao, Bilal Piot, Johan Ferret, and Mathieu Blondel.
The abstract from the paper is the following:
> Direct alignment from preferences (DAP) methods, such as DPO, have recently emerged as efficient alternatives to reinforcement learning from human feedback (RLHF), that do not require a separate reward model. However, the preference datasets used in DAP methods are usually collected ahead of training and never updated, thus the feedback is purely offline. Moreover, responses in these datasets are often sampled from a language model distinct from the one being aligned, and since the model evolves over training, the alignment phase is inevitably off-policy. In this study, we posit that online feedback is key and improves DAP methods. Our method, online AI feedback (OAIF), uses an LLM as annotator: on each training iteration, we sample two responses from the current model and prompt the LLM annotator to choose which one is preferred, thus providing online feedback. Despite its simplicity, we demonstrate via human evaluation in several tasks that OAIF outperforms both offline DAP and RLHF methods. We further show that the feedback leveraged in OAIF is easily controllable, via instruction prompts to the LLM annotator.
This post-training method was contributed by [Michael Noukhovitch](https://huggingface.co/mnoukhov), [Shengyi Costa Huang](https://huggingface.co/vwxyzjn), [Quentin Gallouédec](https://huggingface.co/qgallouedec), and [Edward Beeching](https://huggingface.co/edbeeching).
## Quick start
This example demonstrates how to train a model using the online DPO method. We use the [Qwen 0.5B model](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) as the base model and [`PairRMJudge`] as a judge. We use the prompts from the [UltraFeedback dataset](https://huggingface.co/datasets/openbmb/UltraFeedback). You can view the prompts in the dataset here:
Distributed across 8 GPUs, the training takes approximately 1 hour. You can verify the training progress by checking the reward graph. An increasing trend in both the reward for rejected and chosen completions indicates that the model is improving and generating better responses over time.
To see how the [trained model](https://huggingface.co/trl-lib/Qwen2-0.5B-OnlineDPO) performs, you can use the [Transformers Chat CLI](https://huggingface.co/docs/transformers/quicktour#chat-with-text-generation-models).
The best programming language depends on your specific needs and priorities. Some people prefer imperative programming languages (like Haskell or Lisp), while others prefer functional programming languages (like Scala or Python). It's important to consider your work style, programming environment, and project requirements when choosing a programming language.
## Expected dataset type
Online DPO only requires a [prompt-only dataset](dataset_formats#prompt-only) (unlike offline DPO, which expects a [preference dataset](dataset_formats#preference)). The [`OnlineDPOTrainer`] supports both [conversational](dataset_formats#conversational) and [standard](dataset_formats#standard) dataset formats. When provided with a conversational dataset, the trainer will automatically apply the chat template to the dataset.
## Usage tips
### Use a reward model
Instead of a judge, you can choose to use a reward model -- see [Reward Bench](https://huggingface.co/spaces/allenai/reward-bench) for a leaderboard of public models you can use. Below is a code example showing how to replace a judge with the [trl-lib/Qwen2-0.5B-Reward](https://huggingface.co/trl-lib/Qwen2-0.5B-Reward) model:
```diff
- from trl import PairRMJudge
+ from transformers import AutoModelForSequenceClassification
```
When using a reward model, we may want the model to generate completions within a given length. During training, the model will generate completions up to the maximum length specified in the `max_new_tokens` argument of [`OnlineDPOConfig`]. If you want to penalize the model for not generating an EOS token before reaching the maximum length, you can use the `missing_eos_penalty` argument of [`OnlineDPOConfig`]:
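For example, a minimal sketch (the values are illustrative):

```python
from trl import OnlineDPOConfig

training_args = OnlineDPOConfig(
    output_dir="Qwen2.5-0.5B-Online-DPO",  # illustrative output directory
    max_new_tokens=128,                    # maximum completion length during generation
    missing_eos_penalty=1.0,               # subtracted from the score when no EOS token is generated
)
```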
We provide an example script to train a model using the online DPO method. The script is available in [`examples/scripts/dpo_online.py`](https://github.com/huggingface/trl/blob/main/examples/scripts/dpo_online.py).
To test the online DPO script with the [Qwen2.5 0.5B model](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the [UltraFeedback dataset](https://huggingface.co/datasets/openbmb/UltraFeedback), run the following command:
```bash
python examples/scripts/dpo_online.py \
--model_name_or_path Qwen/Qwen2.5-0.5B-Instruct \
--judge pair_rm \
--dataset_name trl-lib/ultrafeedback-prompt \
--learning_rate 5.0e-7 \
--output_dir Qwen2.5-0.5B-Online-DPO-PairRM \
--warmup_ratio 0.1 \
--push_to_hub
```
## Logged metrics
While training and evaluating, we record the following reward metrics. Here is an example [tracked run at Weights and Biases](https://wandb.ai/huggingface/trl/runs/w4apmsi9)
* `objective/kl`: The mean Kullback-Leibler (KL) divergence between the current model and reference model.
* `objective/entropy`: The mean entropy of the model, indicating the randomness of the actions chosen by the model.
* `objective/non_score_reward`: The mean reward from non-score-related sources, basically `beta * kl.sum(1)`, where `beta` is the KL penalty coefficient and `kl` is the per-token KL divergence.
* `objective/rlhf_reward`: The mean RLHF reward, which is `scores - non_score_reward`. The `rlhf_reward` is the ultimate objective of online DPO training. If training works as intended, this metric should keep going up.
* `objective/scores`: The mean scores returned by the reward model.
* `objective/scores_margin`: The mean score margin (according to the external reward model) between the chosen and rejected completions.
* `rewards/chosen`: The mean reward (according to online DPO's implicit reward model) of the chosen completions.
* `rewards/rejected`: The mean reward (according to online DPO's implicit reward model) of the rejected completions.
* `rewards/accuracies`: The accuracies of the online DPO's implicit reward model.
* `rewards/margins`: The mean reward margin (according to online DPO's implicit reward model) between the chosen and rejected completions.
* `logps/chosen`: The mean log probabilities of the chosen completions.
* `logps/rejected`: The mean log probabilities of the rejected completions.
* `val/contain_eos_token`: The fraction of completions which contain an EOS token.
* `beta`: The parameter that controls the weight of the loss term representing the deviation from the reference model. Typically fixed, but can be made dynamic by passing a list to [`OnlineDPOConfig`].
## Benchmark experiments
To validate the online DPO implementation works, we ran experiments with the Pythia 1B, 2.8B, and 6.9B models on a single node of 8 x H100s. Here are the commands we used to run the experiments. We take the SFT / RM models directly from [The N+ Implementation Details of RLHF with PPO: A Case Study on TL;DR Summarization](https://huggingface.co/papers/2403.17031).
To evaluate, we use [vLLM](https://github.com/vllm-project/vllm) to load the checkpoints and GPT-4o mini as a judge model to evaluate the generated TL;DR against the reference TL;DR.
For more information on how to use judges, see [Judges](judges).
plt.ylabel("Win rate against reference summaries\n(according to GPT-4-0613)")
plt.title("DPO scaling by model size")
plt.legend()
plt.xlim(5e8,1.2e10)
plt.xticks([1e9,3e9,1e10],["1B","3B","10B"])
plt.grid(True,which="both",ls="--",c="0.7")
plt.tight_layout()
plt.show()
```
The online DPO checkpoint achieves an increasingly higher win rate as the model size scales up. This is a good sign that the online DPO implementation is working as intended.
# OpenEnv Integration for Training LLMs with Environments
## Overview
[OpenEnv](https://github.com/meta-pytorch/OpenEnv) is an open-source framework from Meta's PyTorch team for defining, deploying, and interacting with environments in reinforcement learning (RL) and agentic workflows. It offers [Gymnasium-style APIs](https://gymnasium.farama.org) (e.g., `reset()` and `step()`) to interface with environments in a standard manner, and supports running these environments as backend servers (for example via HTTP or containerised execution). You can find a collection of ready-to-use OpenEnv environments on the [Hugging Face Hub](https://huggingface.co/collections/openenv/environment-hub).
In this guide, we’ll focus on **how to integrate OpenEnv with TRL**, but feel free to explore the links above to dive deeper into OpenEnv itself.
## Installation
To use OpenEnv with TRL, install the framework:
```bash
pip install openenv-core
```
## Using `rollout_func` with OpenEnv environments
TRL's [`GRPOTrainer`] supports _custom rollout logic_ through the `rollout_func` argument. This lets you override the trainer's default text-generation loop and directly interact with OpenEnv environments — for instance, to compute environment-driven rewards instead of relying solely on model-based signals.
### Rollout Function Signature
A rollout function must have the following signature:
```python
def rollout_func(
    prompts: list[str],
    args: GRPOConfig,
    processing_class,
) -> dict[str, list]:
    """
    Custom rollout function for generation and reward computation.

    Args:
        prompts: Batch of prompt strings to generate completions for
        args: The trainer's `GRPOConfig`
        processing_class: Tokenizer/processor for encoding/decoding

    Returns:
        Dictionary containing:
            - prompt_ids: List of token IDs for each prompt
            - completion_ids: List of token IDs for each completion
            - logprobs: List of log probabilities for each token
            - Any additional fields are forwarded to reward functions as kwargs
    """
    pass
```
> [!NOTE]
> Any extra fields in the returned dictionary (beyond the required three) are automatically forwarded to your reward functions. This makes it easy to propagate signals such as environment rewards or auxiliary metrics from the rollout step.
### Integration pattern
The typical pattern when combining OpenEnv with TRL looks like this:
1. Start or connect to an OpenEnv environment (e.g., an HTTP endpoint or Dockerized env).
2. Generate completions from your model — for example, via a vLLM inference server (`use_vllm=True`, `vllm_mode="server"`).
3. Step through the environment using each completion to compute rewards or metrics.
4. Add environment results (e.g., `env_reward`) to the rollout result dict.
5. Access those rewards inside your reward function via `**kwargs`.
By using OpenEnv in this loop, you can:
* Train with realistic or interactive feedback (not just static reward functions).
* Plug in custom simulators, web APIs, or evaluators as environments.
* Pass structured reward signals back into RL training seamlessly.
## A simple example
The [echo.py](https://github.com/huggingface/trl/blob/main/examples/scripts/openenv/echo.py) script demonstrates a minimal, end-to-end integration between TRL and OpenEnv. In this example, the Echo environment rewards completions based on their text length, encouraging the model to generate longer outputs. This pattern can be extended to any custom environment that provides structured feedback or task-based rewards:
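Below is a schematic sketch of this pattern. It is not the actual contents of `echo.py`: the environment client, the generation helper, and the action payload are assumptions to be adapted to the specific OpenEnv environment you use.

```python
# Schematic sketch: wire an OpenEnv-style client (exposing reset()/step()) into a GRPO rollout.
# `env` and `generate_fn` are assumed helpers, not part of TRL's or OpenEnv's public API.
def make_rollout_func(env, generate_fn):
    def rollout_func(prompts, args, processing_class):
        prompt_ids, completion_ids, logprobs, env_rewards = [], [], [], []
        for prompt in prompts:
            env.reset()                                  # start a fresh episode
            gen = generate_fn(prompt)                    # returns prompt/completion token IDs, logprobs, and text
            result = env.step({"message": gen["text"]})  # the environment scores the completion text
            prompt_ids.append(gen["prompt_ids"])
            completion_ids.append(gen["completion_ids"])
            logprobs.append(gen["logprobs"])
            env_rewards.append(result.reward)            # extra field, forwarded to reward functions
        return {
            "prompt_ids": prompt_ids,
            "completion_ids": completion_ids,
            "logprobs": logprobs,
            "env_reward": env_rewards,
        }
    return rollout_func


def reward_from_env(completions, **kwargs):
    # The extra "env_reward" field returned by the rollout function arrives here as a kwarg
    return [float(r) for r in kwargs["env_reward"]]
```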
To learn more about how to create custom environments, see the [OpenEnv documentation](https://github.com/meta-pytorch/OpenEnv/blob/main/src/envs/README.md).
## Advanced Example
Let's level this up a bit by training a model to interact with a more complex environment. We'll use the word guessing game [Wordle](https://www.nytimes.com/games/wordle/index.html) from the `textarena` environment.
### The TextArena Environment
[TextArena](https://huggingface.co/papers/2504.11442) is an open-source collection of competitive text-based games designed to evaluate reasoning skills in LLMs using textual games like Wordle, Snake, Tic-Tac-Toe, and more. Research has shown that such games improve model performance on reasoning tasks.

We will use the `textarena` environment to train a model to play Wordle. The environment is a simple text-based environment that allows the model to interact with the game by making guesses and receiving feedback on them.
### Wordle
Wordle is a useful game to train a model on because it requires the model to reason about the word and the feedback provided by the environment. It is also a purely language-based game that requires no external tools or knowledge. Furthermore, we found that models from 1 billion parameters and up are able to improve at Wordle and only require 8 tokens to generate a guess, which makes the game a good benchmark for experimenting with reinforcement learning environments without significant compute requirements.
> [!NOTE] How does Wordle work?
> Wordle is a word guessing game where the player has to guess a 5-letter word. The player can make 6 guesses, and for each guess, the environment will provide feedback on the correctness of the guess. The player wins if they guess the word in 6 guesses or less. It challenges the model to generate words that are likely to be correct, and to learn from the feedback provided by the environment.
>
> For example, if the wordle environment returns the following feedback:
>
> ```
> G U E S S
> X G Y X X
> ```
> The model has guessed the word "GUESS" and the environment has provided feedback as the letters X, G, and Y, which correspond to the colors blank, green, and yellow in the original game. From this feedback, the model should learn that the guess "GUESS" is incorrect: the letter "E" is in the word but in the wrong position, and the letter "U" is correct and in the correct position.
In the TextArena environment, reward is only given when the model wins the game. The reward is 1.0 if the model wins, and 0.0 otherwise. This is not a very efficient reward signal for the model, so we have added a number of custom reward functions to the script to help the model learn to play the game. The extensible nature of `reward_funcs` and `rollout_func` allows you to add any custom reward function you want to the script.
### Rollout Function
The rollout function runs one full Wordle episode, prompting the model for a guess each turn and capturing both environment rewards and auxiliary signals such as letter coverage and repetition penalties.
```python
def rollout_once(
    env: TextArenaEnv,
    tokenizer: AutoTokenizer,
    args: GRPOConfig,
    dataset_prompt: str,
    cli_args: argparse.Namespace,
    system_prompt: str,
) -> dict[str, list]:
    result = env.reset()
    observation = result.observation
    prompt_ids: list[int] = []
    completion_ids: list[int] = []
    logprobs: list[float] = []
    raw_rewards: list[float] = []
    green_scores: list[float] = []
    yellow_scores: list[float] = []
    repetition_scores: list[float] = []
    correct_scores: list[float] = []
    guess_counts: dict[str, int] = {}
    for _turn in range(cli_args.max_turns):
        # when the game is over the environment will return a done=True
        ...
```
The environment provides a reward signal based on the completion of the game. We found that most models struggle to ever win the game, so we have added a number of custom reward functions to the script that help the model learn to play the game incrementally. At first, the model will learn to cover new letters and avoid repeating guesses; as it improves, it will learn to win the game.
### Reward Functions
We log four reward streams that encourage the model to solve the puzzle, cover new letters, and avoid repeating guesses:
- `reward_correct`: final win/loss signal from the environment.
- `reward_greens`: density of green letters in the last feedback.
- `reward_yellows`: density of yellow letters in the last feedback.
- `reward_repetition`: penalty for guessing the same token multiple times.
The training script wires the custom rollout and rewards into `GRPOTrainer`. The CLI exposes the configuration used during development as defaults, so you can override endpoints or hyperparameters at launch time.
```python
parser = argparse.ArgumentParser()
# ... add CLI arguments with sensible defaults ...
```
The resulting model improves its performance on the game, both by reducing the number of repetitions and by increasing the number of correct guesses. However, the Qwen3-1.7B model we trained is not able to consistently win the game. The following reward curve shows the coverage of the model's guesses and the coverage of correct Y and G letters.
We also experimented with larger models like `gpt-oss-20b` and found that they were able to consistently win the game. However, this requires significantly more compute to train. Why not try it out yourself?
Odds Ratio Preference Optimization (ORPO) was introduced in [ORPO: Monolithic Preference Optimization without Reference Model](https://huggingface.co/papers/2403.07691) by [Jiwoo Hong](https://huggingface.co/JW17), [Noah Lee](https://huggingface.co/nlee-208), and [James Thorne](https://huggingface.co/j6mes).
The abstract from the paper is the following:
> While recent preference alignment algorithms for language models have demonstrated promising results, supervised fine-tuning (SFT) remains imperative for achieving successful convergence. In this paper, we study the crucial role of SFT within the context of preference alignment, emphasizing that a minor penalty for the disfavored generation style is sufficient for preference-aligned SFT. Building on this foundation, we introduce a straightforward and innovative reference model-free monolithic odds ratio preference optimization algorithm, ORPO, eliminating the necessity for an additional preference alignment phase. We demonstrate, both empirically and theoretically, that the odds ratio is a sensible choice for contrasting favored and disfavored styles during SFT across the diverse sizes from 125M to 7B. Specifically, fine-tuning Phi-2 (2.7B), Llama-2 (7B), and Mistral (7B) with ORPO on the UltraFeedback alone surpasses the performance of state-of-the-art language models with more than 7B and 13B parameters: achieving up to 12.20% on AlpacaEval_{2.0} (Figure 1), 66.19% on IFEval (instruction-level loose, Table 6), and 7.32 in MT-Bench (Figure 12). We release code and model checkpoints for Mistral-ORPO-alpha (7B) and Mistral-ORPO-beta (7B).
It studies the crucial role of SFT within the context of preference alignment. Using preference data, the method posits that a minor penalty for the disfavored generation, together with a strong adaptation signal toward the chosen response via a simple log odds ratio term appended to the NLL loss, is sufficient for preference-aligned SFT.
ORPO is thus a reference-model-free preference optimization algorithm, eliminating the need for an additional preference alignment phase and thereby saving compute and memory.
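As a sketch in the paper's notation, the objective augments the SFT (NLL) loss on the chosen response with a log odds-ratio term weighted by \\( \lambda \\):

$$
\mathcal{L}_{\text{ORPO}} = \mathcal{L}_{\text{SFT}} + \lambda \, \mathcal{L}_{\text{OR}}, \qquad
\mathcal{L}_{\text{OR}} = -\log \sigma\left(\log \frac{\text{odds}_\theta(y_w \mid x)}{\text{odds}_\theta(y_l \mid x)}\right), \qquad
\text{odds}_\theta(y \mid x) = \frac{P_\theta(y \mid x)}{1 - P_\theta(y \mid x)}
$$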
The official code can be found in [xfactlab/orpo](https://github.com/xfactlab/orpo).
This post-training method was contributed by [Kashif Rasul](https://huggingface.co/kashif), [Lewis Tunstall](https://huggingface.co/lewtun) and [Alvaro Bartolome](https://huggingface.co/alvarobartt).
## Quick start
This example demonstrates how to train a model using the ORPO method. We use the [Qwen 0.5B model](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) as the base model. We use the preference data from the [UltraFeedback dataset](https://huggingface.co/datasets/openbmb/UltraFeedback). You can view the data in the dataset here:
Distributed across 8 GPUs, the training takes approximately 30 minutes. You can verify the training progress by checking the reward graph. An increasing trend in the reward margin indicates that the model is improving and generating better responses over time.
To see how the [trained model](https://huggingface.co/trl-lib/Qwen2-0.5B-ORPO) performs, you can use the [Transformers Chat CLI](https://huggingface.co/docs/transformers/quicktour#chat-with-text-generation-models).
It's challenging to determine the best programming language as no one language is perfect, as the complexity of a task and the type of project are significant factors. Some popular languages include Java, Python, JavaScript, and
C++. If you have specific needs or requirements for a specific project, it's important to choose the language that best suits those needs.
Here are some other factors to consider when choosing a programming language for a project:
<strong><span style="color: green;">• Language proficiency:</span></strong> A good programming language is more likely to be easy to understand and use, and will allow developers to collaborate on projects more efficiently.
<strong><span style="color: green;">• Ease of use:</span></strong> There are tools and libraries available to make programming more accessible, so developers should choose a language that can help them get started easier.
<strong><span style="color: green;">• Code readability:</span></strong> A clear and concise codebase should be easy to read and understand, especially when working with large projects.
<strong><span style="color: green;">• Tool and framework support:</span></strong> There are numerous libraries available for Python, Java, and JavaScript, along with tools like IDEs and static code analysis tools.
<strong><span style="color: green;">• Accessibility:</span></strong> Some languages and tools have features that make them more accessible to developers with disabilities, such as support for screen readers.
<strong><span style="color: green;">• Version control:</span></strong> As your projects grow and complexity increases, version control tools can be beneficial for tracking changes.
## Expected dataset type
ORPO requires a [preference dataset](dataset_formats#preference). The [`ORPOTrainer`] supports both [conversational](dataset_formats#conversational) and [standard](dataset_formats#standard) dataset formats. When provided with a conversational dataset, the trainer will automatically apply the chat template to the dataset.
Although the [`ORPOTrainer`] supports both explicit and implicit prompts, we recommend using explicit prompts. If provided with an implicit prompt dataset, the trainer will automatically extract the prompt from the `"chosen"` and `"rejected"` columns. For more information, refer to the [preference style](dataset_formats#preference) section.
## Example script
We provide an example script to train a model using the ORPO method. The script is available in [`examples/scripts/orpo.py`](https://github.com/huggingface/trl/blob/main/examples/scripts/orpo.py)
To test the ORPO script with the [Qwen2 0.5B model](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the [UltraFeedback dataset](https://huggingface.co/datasets/trl-lib/ultrafeedback_binarized), run the following command:
```bash
accelerate launch examples/scripts/orpo.py \
--model_name_or_path Qwen/Qwen2-0.5B-Instruct \
--dataset_name trl-lib/ultrafeedback_binarized \
--num_train_epochs 1 \
--output_dir Qwen2-0.5B-ORPO
```
## Usage tips
### For Mixture of Experts Models: Enabling the auxiliary loss
MOEs are the most efficient if the load is about equally distributed between experts.
To ensure that we train MOEs similarly during preference-tuning, it is beneficial to add the auxiliary loss from the load balancer to the final loss.
This option is enabled by setting `output_router_logits=True` in the model config (e.g. [`~transformers.MixtralConfig`]).
To scale how much the auxiliary loss contributes to the total loss, use the hyperparameter `router_aux_loss_coef=...` (default: `0.001`) in the model config.
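For example, a minimal sketch (the model name and coefficient are illustrative):

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-v0.1",  # illustrative MoE model
    output_router_logits=True,       # add the load-balancing auxiliary loss to the total loss
    router_aux_loss_coef=0.001,      # scale of the auxiliary loss contribution
)
```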
## Logged metrics
While training and evaluating, we record the following reward metrics:
- `rewards/chosen`: the mean log probabilities of the policy model for the chosen responses scaled by beta
- `rewards/rejected`: the mean log probabilities of the policy model for the rejected responses scaled by beta
- `rewards/accuracies`: mean of how often the chosen rewards are > than the corresponding rejected rewards
- `rewards/margins`: the mean difference between the chosen and corresponding rejected rewards
- `log_odds_chosen`: the mean log odds ratio of the chosen responses over the rejected responses
- `log_odds_ratio`: the mean of the `log(sigmoid(log_odds_chosen))`
- `nll_loss`: the mean negative log likelihood loss from the SFT part of the loss over chosen responses
GSPO is a GRPO variant that computes importance sampling weights at the sequence level instead of per-token. To reproduce the paper's setting, use this configuration:
```python
from trl import GRPOConfig

training_args = GRPOConfig(
    importance_sampling_level="sequence",
    loss_type="grpo",
    beta=0.0,  # GSPO set KL regularization to zero: https://github.com/volcengine/verl/pull/2775#issuecomment-3131807306
    epsilon=3e-4,  # GSPO paper (v2), section 5.1
    epsilon_high=4e-4,  # GSPO paper (v2), section 5.1
    gradient_accumulation_steps=1,
    steps_per_generation=4,  # partition rollout batch into 4 mini-batches. GSPO paper (v2), section 5.1. Must be 4 times gradient_accumulation_steps
)
```
Note that this method only has an effect when training goes slightly off-policy—for example, when `steps_per_generation > gradient_accumulation_steps` or `num_iterations > 1`. Otherwise, it is effectively equivalent to no modification.
TRL also provides an experimental implementation of GSPO-token, see [Experimental - GSPO-Token](experimental#gspo-token).
#### Policy ratio: GRPO vs. GSPO
In GSPO, the policy ratio is defined at the sequence-level. In other words, it is the ratio between the probability of the current policy generating a sequence over the old policy generating that same sequence.
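Following the GSPO paper, the sequence-level importance ratio is length-normalized and can be sketched as:

$$
s_i(\theta) = \left( \frac{\pi_\theta(y_i \mid x)}{\pi_{\theta_\text{old}}(y_i \mid x)} \right)^{\frac{1}{|y_i|}} = \exp\left( \frac{1}{|y_i|} \sum_{t=1}^{|y_i|} \log \frac{\pi_\theta(y_{i,t} \mid x, y_{i,<t})}{\pi_{\theta_\text{old}}(y_{i,t} \mid x, y_{i,<t})} \right)
$$

whereas GRPO clips the per-token ratio \\( \pi_\theta(y_{i,t} \mid x, y_{i,<t}) / \pi_{\theta_\text{old}}(y_{i,t} \mid x, y_{i,<t}) \\) instead.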
TRL supports the Perception-Aware Policy Optimization (PAPO) as described in the paper [Perception-Aware Policy Optimization for Multimodal Reasoning](https://huggingface.co/papers/2507.06448) by [Zhenhailong Wang](https://huggingface.co/mikewang), Xuehang Guo, Sofia Stoica, [Haiyang Xu](https://huggingface.co/xhyandwyy), Hongru Wang, Hyeonjeong Ha, Xiusi Chen, Yangyi Chen, Ming Yan, Fei Huang, Heng Ji
The abstract from the paper is the following:
> Reinforcement Learning with Verifiable Rewards (RLVR) has proven to be a highly effective strategy for endowing Large Language Models (LLMs) with robust multi-step reasoning abilities. However, its design and optimizations remain tailored to purely textual domains, resulting in suboptimal performance when applied to multimodal reasoning tasks. In particular, we observe that a major source of error in current multimodal reasoning lies in the perception of visual inputs. To address this bottleneck, we propose Perception-Aware Policy Optimization (PAPO), a simple yet effective extension of GRPO that encourages the model to learn to perceive while learning to reason, entirely from internal supervision signals. Notably, PAPO does not rely on additional data curation, external reward models, or proprietary models. Specifically, we introduce the Implicit Perception Loss in the form of a KL divergence term to the GRPO objective, which, despite its simplicity, yields significant overall improvements (4.4%) on diverse multimodal benchmarks. The improvements are more pronounced, approaching 8.0%, on tasks with high vision dependency. We also observe a substantial reduction (30.5%) in perception errors, indicating improved perceptual capabilities with PAPO. We conduct comprehensive analysis of PAPO and identify a unique loss hacking issue, which we rigorously analyze and mitigate through a Double Entropy Loss. Overall, our work introduces a deeper integration of perception-aware supervision into RLVR learning objectives and lays the groundwork for a new RL framework that encourages visually grounded reasoning. Project page: https://mikewangwzhl.github.io/PAPO.
# Examples of using peft with trl to finetune 8-bit models with Low Rank Adaption (LoRA)
The notebooks and scripts in these examples show how to use Low Rank Adaptation (LoRA) to fine-tune models in a memory-efficient manner. Most of the PEFT methods supported in the peft library are available, but note that some methods, such as prompt tuning, are not supported.
For more information on LoRA, see the [original paper](https://huggingface.co/papers/2106.09685).
## Installation
Note: peft is in active development, so we install it directly from its GitHub repository.
PEFT also relies on the latest version of transformers.
Note: if you don't want to log with `wandb` remove `log_with="wandb"` in the scripts/notebooks. You can also replace it with your favourite experiment tracker that's [supported by `accelerate`](https://huggingface.co/docs/accelerate/usage_guides/tracking).
## How to use it?
Simply declare a [`~peft.PeftConfig`] object in your script and pass it through `.from_pretrained` to load the TRL+PEFT model.
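A minimal sketch (the model name and LoRA values are illustrative):

```python
from peft import LoraConfig
from trl import AutoModelForCausalLMWithValueHead

# LoRA configuration for the adapter layers
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

# Load the base model with a value head and attach the LoRA adapter
model = AutoModelForCausalLMWithValueHead.from_pretrained(
    "facebook/opt-350m",  # illustrative base model
    peft_config=lora_config,
)
```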
The `trl` library is powered by `accelerate`. As such it is best to configure and launch trainings with the following commands:
```bash
accelerate config # will prompt you to define the training configuration
accelerate launch examples/scripts/ppo.py --use_peft # launches training
```
## Using `trl` + `peft` and Data Parallelism
You can scale up to as many GPUs as you want, as long as you are able to fit the training process in a single device. The only tweak you need to apply is to load the model as follows:
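A sketch of this tweak (the model name and LoRA values are illustrative): each process places its replica on its own GPU via the local process index, so `accelerate` can handle the data parallelism.

```python
from accelerate import Accelerator
from peft import LoraConfig
from trl import AutoModelForCausalLMWithValueHead

lora_config = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")

model = AutoModelForCausalLMWithValueHead.from_pretrained(
    "facebook/opt-350m",  # illustrative base model
    peft_config=lora_config,
    device_map={"": Accelerator().local_process_index},  # one replica per process/GPU
)
```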
Finally, make sure that the rewards are computed on the correct device as well; for that, you can use `ppo_trainer.model.current_device`.
## Multi-Adapter RL Training
You can use a single base model with multiple PEFT adapters for the entire PPO algorithm - including retrieving reference logits, computing active logits, and calculating rewards. This approach is useful for memory-efficient RL training.
> [!WARNING]
> This feature is experimental and convergence has not been extensively tested. We encourage the community to share feedback and report any issues.
### Requirements
Install PEFT and optionally bitsandbytes for 8-bit models:
```bash
pip install peft bitsandbytes
```
### Training Workflow
The multi-adapter approach requires three stages:
1. **Supervised Fine-Tuning (SFT)**: Train a base model on your target domain (e.g., IMDB dataset) using `SFTTrainer`
2. **Reward Model Training**: Train a reward model adapter using PEFT and `RewardTrainer` (see [reward modeling example](https://github.com/huggingface/trl/tree/main/examples/scripts/reward_modeling.py))
3. **PPO Training**: Fine-tune new adapters using PPO with the reward adapter
> [!IMPORTANT]
> Use the same base model (architecture and weights) for stages 2 & 3.
### Basic Usage
After training your reward adapter and pushing it to the Hub:
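A sketch of this step (the model and adapter names are illustrative, and the exact argument names may differ across TRL versions):

```python
from peft import LoraConfig
from trl import AutoModelForCausalLMWithValueHead

# Policy LoRA config for the adapter that PPO will train
lora_config = LoraConfig(r=16, lora_alpha=32, bias="none", task_type="CAUSAL_LM")

model = AutoModelForCausalLMWithValueHead.from_pretrained(
    "huggyllama/llama-7b",                            # same base model as in stages 1-3
    peft_config=lora_config,                          # policy adapter trained by PPO
    reward_adapter="trl-lib/llama-7b-hh-rm-adapter",  # reward adapter trained in stage 2
)

# During PPO training, rewards can then be computed with the reward adapter, e.g.:
# rewards = trainer.model.compute_reward_score(**inputs)
```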
You can train multiple adapters on the same base model for different policies. Control which adapter to activate using the `ppo_adapter_name` argument:
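A sketch of selecting the active policy adapter when scoring (the adapter name is illustrative):

```python
# With several policy adapters on the same base model, pick the active one when computing rewards
rewards = trainer.model.compute_reward_score(**inputs, ppo_adapter_name="policy_adapter_1")
```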
## Naive pipeline parallelism (NPP) for large models (>60B models)
The `trl` library also supports naive pipeline parallelism (NPP) for large models (>60B models). This is a simple way to parallelize the model across multiple GPUs.
In this paradigm, termed "Naive Pipeline Parallelism" (NPP), the model and the adapters are loaded across multiple GPUs, and the activations and gradients are naively communicated across the GPUs. This supports `int8` models as well as other `dtype` models.
Simply load your model with a custom `device_map` argument in `from_pretrained` to split it across multiple devices. Check out this [nice tutorial](https://github.com/huggingface/blog/blob/main/accelerate-large-models.md) on how to properly create a `device_map` for your model.
Also make sure to place the `lm_head` module on the first GPU device, as it may throw an error otherwise. At the time of writing, you need to install the `main` branch of `accelerate`: `pip install git+https://github.com/huggingface/accelerate.git@main` and `peft`: `pip install git+https://github.com/huggingface/peft.git@main`.
### Launch scripts
Although the `trl` library is powered by `accelerate`, you should run your NPP training script as a single process. Note that we do not support Data Parallelism together with NPP yet.
```bash
python PATH_TO_SCRIPT
```
## Fine-tuning Llama-2 model
You can easily fine-tune the Llama 2 model using `SFTTrainer` and the official script! For example, to fine-tune Llama 2 7B on the Guanaco dataset, run the following (tested on a single NVIDIA T4-16GB):
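A sketch of such a command (the script path, dataset, and flags are assumptions; adapt them to your setup and make sure you have access to the gated Llama 2 weights):

```bash
python trl/scripts/sft.py \
    --model_name_or_path meta-llama/Llama-2-7b-hf \
    --dataset_name timdettmers/openassistant-guanaco \
    --load_in_4bit \
    --use_peft \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 2 \
    --output_dir sft-llama2-guanaco
```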
The logged metrics are as follows. Here is an example [tracked run at Weights and Biases](https://wandb.ai/huggingface/trl/runs/dd2o3g35)
- `eps`: Tracks the number of episodes per second.
- `objective/kl`: The mean Kullback-Leibler (KL) divergence between the current policy and reference policy.
- `objective/entropy`: The mean entropy of the policy, indicating the randomness of the actions chosen by the policy.
- `objective/non_score_reward`: The mean reward from non-score-related sources, basically `beta * kl.sum(1)`, where `beta` is the KL penalty coefficient and `kl` is the per-token KL divergence.
- `objective/rlhf_reward`: The mean RLHF reward, which is `score - non_score_reward`.
- `objective/scores`: The mean scores returned by the reward model / environment.
- `policy/approxkl_avg`: The average approximate KL divergence between consecutive PPO policies. Note that this is not the same as `objective/kl`.
- `policy/clipfrac_avg`: The average fraction of policy updates that are clipped, indicating how often the policy updates are constrained to prevent large changes.
- `loss/policy_avg`: The average policy loss, indicating how well the policy is performing.
- `loss/value_avg`: The average value loss, indicating the difference between the predicted value and the actual reward.
- `val/clipfrac_avg`: The average fraction of value function updates that are clipped, similar to policy/clipfrac_avg but for the value function.
- `policy/entropy_avg`: The average entropy of the policy during training, indicating how diverse the policy's actions are.
- `val/ratio`: The mean ratio of the current policy probability to the old policy probability, providing a measure of how much the policy has changed.
- `val/ratio_var`: The variance of the `val/ratio`, indicating the variability in policy changes.
- `val/num_eos_tokens`: The number of end-of-sequence (EOS) tokens generated, which can indicate the number of complete responses.
- `lr`: The current learning rate used by the optimizer.
- `episode`: The current episode count in the training process.
## Cookbook
- Debugging TIP: `objective/rlhf_reward`: this is the ultimate objective of the RLHF training. If training works as intended, this metric should keep going up.
- Debugging TIP: `val/ratio`: this number should float around 1.0, and it gets clipped by `--cliprange 0.2` with PPO's surrogate loss. If this `ratio` is too high (like 2.0 or 1000.0) or too small (like 0.1), it means the updates between consecutive policies are too drastic. You should try to understand why this is happening and try to fix it.
- Memory TIP: If you are running out of memory, you can try to reduce the `--per_device_train_batch_size` or increase the `--gradient_accumulation_steps` to reduce the memory footprint.
- Memory TIP: If you have multiple GPUs, you can also run training with DeepSpeed stage 3 to reduce the memory footprint `accelerate launch --config_file examples/accelerate_configs/deepspeed_zero3.yaml`.
- Usage TIP: We recommend using the "EOS trick" via `--missing_eos_penalty`, which subtracts a static scalar penalty from the score of completions that do not end with an EOS token. This can help the model learn to generate more coherent completions.
## What is my model doing exactly?
To help you understand what your model is doing, we periodically log some sample completions from the model. Here is an example of a completion. In an example [tracked run at Weights and Biases](https://wandb.ai/huggingface/trl/runs/dd2o3g35), it looks like the following, allowing you to see the model's response at different stages of training. By default we generate `--num_sample_generations 10` during training, but you can customize the number of generations.
This PPO implementation is based on the [The N+ Implementation Details of RLHF with PPO: A Case Study on TL;DR Summarization](https://huggingface.co/papers/2403.17031).
## Benchmark experiments
To validate the PPO implementation works, we ran an experiment with the 1B model. Here is the command we used to run the experiment. We take the SFT / RM models directly from [The N+ Implementation Details of RLHF with PPO: A Case Study on TL;DR Summarization](https://huggingface.co/papers/2403.17031).
To evaluate, we use [vLLM](https://github.com/vllm-project/vllm) to load the checkpoints and GPT-4o mini as a judge model to evaluate the generated TL;DR against the reference TL;DR.
For more information on how to use judges, see [Judges](judges).
The PPO checkpoint achieves a 64.7% preference rate vs. the 33.0% preference rate of the SFT checkpoint. This is a good sign that the PPO training is working as intended.
> PRM Trainer is an experimental API which is subject to change at any time.
## Overview
Process-supervised Reward Models (PRM) were proposed in [Solving math word problems with process- and outcome-based feedback](https://huggingface.co/papers/2211.14275) by Jonathan Uesato, Nate Kushman, Ramana Kumar, Francis Song, Noah Siegel, Lisa Wang, Antonia Creswell, Geoffrey Irving, and Irina Higgins.
The abstract from the paper is the following:
> Recent work has shown that asking language models to generate reasoning steps improves performance on many reasoning tasks. When moving beyond prompting, this raises the question of how we should supervise such models: outcome-based approaches which supervise the final result, or process-based approaches which supervise the reasoning process itself? Differences between these approaches might naturally be expected not just in final-answer errors but also in reasoning errors, which can be difficult to detect and are problematic in many real-world domains such as education. We run the first comprehensive comparison between process- and outcome-based approaches trained on a natural language task, GSM8K. We find that pure outcome-based supervision produces similar final-answer error rates with less label supervision. However, for correct reasoning steps we find it necessary to use processbased supervision or supervision from learned reward models that emulate process-based feedback. In total, we improve the previous best results from 16.8% → 12.7% final-answer error and 14.0% → 3.4% reasoning error among final-answer-correct solutions.
This post-training method was contributed by [Gaetan Lopez](https://github.com/gaetanlop), [Lewis Tunstall](https://huggingface.co/lewtun), [Quentin Gallouédec](https://huggingface.co/qgallouedec) and [Agustín Piqueres](https://huggingface.co/plaguss).
## Quick start
This example demonstrates how to train a model using the PRM method. We use the [Qwen 0.5B model](https://huggingface.co/Qwen/Qwen2-0.5B) as the base model. We use the stepwise supervision data from the [Math Shepherd dataset](https://huggingface.co/datasets/trl-lib/math_shepherd). You can view the data in the dataset here:
```python
{
    "prompt": "Musa is the class teacher of a class of 45 students. He wants to split them into three groups by age. If a third of the class is under 11 years, and two-fifths are above 11 but under 13, how many students will be in the third group (13 years and above)?",
    "completions": [
        "Step 1: A third of the class is under 11 years because 11 - 1/3 = <<11-1/3=7>>7.",
        "Step 2: Two-fifths of the class are above 11 but under 13 because 2/5 * 11 = <<2/5*11=8>>8.",
        "Step 3: There are 45 students, so the third group will have 45 - 7 - 8 = <<45-7-8=20>>20 students. The answer is: 20",
    ],
    "labels": [True, False, False],
}
```
```python
separator = "\n"  # It's important to use the same separator as the one used during training

for idx in range(1, len(example["completions"]) + 1):
    steps = example["completions"][0:idx]
    text = separator.join((example["prompt"], *steps)) + separator  # Add a separator between the prompt and each steps
    ...
```
PRM requires a [stepwise supervision](dataset_formats#stepwise-supervision).
The dataset should contain the following columns: `prompt`, `completions` and `labels`, where `completions` contains a list of reasoning steps and `labels` a list of booleans or floats indicating the correctness of each step.
The [`PRMTrainer`] only supports [standard](dataset_formats#standard) dataset format.
## Example script
We provide an example script to train a model using the PRM method. The script is available in [`examples/scripts/prm.py`](https://github.com/huggingface/trl/blob/main/examples/scripts/prm.py)
To use the PRM script with the [Qwen2 0.5B model](https://huggingface.co/Qwen/Qwen2-0.5B) on the [Math Shepherd dataset](https://huggingface.co/datasets/trl-lib/math_shepherd), run the following command:
TRL is a comprehensive library for post-training foundation models using techniques like Supervised Fine-Tuning (SFT), Group Relative Policy Optimization (GRPO), and Direct Preference Optimization (DPO).
## Quick Examples
Get started instantly with TRL's most popular trainers. Each example uses compact models for quick experimentation.
Fine-tuning a language model via PPO consists of roughly three steps:
1. **Rollout**: The language model generates a response or continuation based on a query which could be the start of a sentence.
2. **Evaluation**: The query and response are evaluated with a function, model, human feedback, or some combination of them. The important thing is that this process should yield a scalar value for each query/response pair. The optimization will aim at maximizing this value.
3. **Optimization**: This is the most complex part. In the optimisation step the query/response pairs are used to calculate the log-probabilities of the tokens in the sequences. This is done with the model that is trained and a reference model, which is usually the pre-trained model before fine-tuning. The KL-divergence between the two outputs is used as an additional reward signal to make sure the generated responses don't deviate too far from the reference language model. The active language model is then trained with PPO.
The full process is illustrated in the following figure:
RapidFire AI is an open-source experiment execution framework that enables concurrent training of multiple TRL configurations on the same GPU(s) through intelligent chunk-based scheduling.
## Key Features
- **16-24× higher experimentation throughput** compared to sequential training.
- **Almost no code changes** - drop-in configuration wrappers around TRL's and PEFT's existing configs.
- **Interactive Control Operations** - real-time control to stop, resume, clone, and modify training runs in flight
- **Automatic multi-GPU orchestration** with intelligent scheduling
- **Full compatibility** with transformers, PEFT, SFTTrainer, DPOTrainer, and GRPOTrainer
- **Full MLflow Integration**: Automatic experiment tracking and visualization
- **Production-Ready**: Already used in production environments with complete working examples.
### Problem It Solves
When fine-tuning or post-training with TRL, AI developers often need to:
- Try different hyperparameter configurations
- Compare different LoRA settings
- Test different prompt schemes
- Run ablation studies
**Current approach**: Train each config one after another → slow and inefficient process
**With RapidFire AI**: Train all configs in one go even on a single GPU → 16-24× faster process
### How It Works
RapidFire AI employs **adaptive chunk-based scheduling**:
1. **Config Expansion**: 2 base configurations × 2 PEFT configs = 4 total training runs
2. **Chunk-based Scheduling**: Training data is divided into chunks, and all 4 configs train concurrently
3. **GPU Swapping**: Models are swapped in/out of GPU memory based on chunk boundaries
4. **Real-time Tracking**: All metrics visible in the dashboard at `http://localhost:3000`
5. **Interactive Control**: Stop, resume, or clone any configuration from the dashboard
This delivers **16-24× higher throughput** compared to training each configuration sequentially!
## Supported TRL Trainers
### SFTTrainer
Use `RFSFTConfig` as a drop-in replacement for `SFTConfig`:
```python
from rapidfireai.automl import RFSFTConfig

training_args = RFSFTConfig(
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    num_train_epochs=3,
    max_length=512,
    # ... all other SFTConfig parameters supported
)
```
**Example Notebook**: [SFT for Customer Support](https://github.com/RapidFireAI/rapidfireai/blob/main/tutorial_notebooks/rf-tutorial-sft-chatqa-lite.ipynb)
### DPOTrainer
Use `RFDPOConfig` as a drop-in replacement for `DPOConfig`:
```python
from rapidfireai.automl import RFDPOConfig

training_args = RFDPOConfig(
    beta=0.1,
    loss_type="sigmoid",
    max_prompt_length=512,
    max_completion_length=512,
    learning_rate=5e-4,
    # ... all other DPOConfig parameters supported
)
```
**Example Notebook**: [DPO for Preference Alignment](https://github.com/RapidFireAI/rapidfireai/blob/main/tutorial_notebooks/rf-tutorial-dpo-alignment-lite.ipynb)
### GRPOTrainer
Use `RFGRPOConfig` as a drop-in replacement for `GRPOConfig`:
```python
from rapidfireai.automl import RFGRPOConfig

training_args = RFGRPOConfig(
    learning_rate=5e-6,
    num_generations=8,
    max_prompt_length=256,
    max_completion_length=256,
    # ... all other GRPOConfig parameters supported
)
```
**Example Notebook**: [GRPO for Math Reasoning](https://github.com/RapidFireAI/rapidfireai/blob/main/tutorial_notebooks/rf-tutorial-grpo-mathreasoning-lite.ipynb)
## Core Concepts
### Chunk-Based Concurrent Training
RapidFire AI divides the training data into chunks and alternates between configurations at chunk boundaries.
This approach maximizes GPU utilization and enables early comparison of configurations while maintaining training stability through automatic checkpointing.
### Interactive Control Operations (IC Ops)
Through the RapidFire AI dashboard, you can dynamically control running experiments:
- **Stop**: Pause a configuration (checkpointed automatically)
- **Resume**: Continue from last checkpoint
- **Clone**: Duplicate a configuration with modifications
- **Clone & Warm Start**: Clone and initialize from parent's weights
- **Delete**: Remove failed or unwanted runs
This enables adaptive experimentation where you can stop underperforming configs early and clone promising ones with tweaked hyperparameters.
### Multi-Config Experimentation
Use `RFGridSearch` or `RFRandomSearch` to automatically generate configuration combinations.
RapidFire AI also automatically detects and utilizes all available GPUs. No special configuration is needed - the scheduler distributes configurations across the GPUs.
## Best Practices
### Tuning Chunk Granularity
The `num_chunks` parameter controls swap frequency:
```python
# Fewer chunks = less overhead, less frequent comparison
experiment.run_fit(..., num_chunks=2)

# More chunks = more overhead, more frequent comparison
experiment.run_fit(..., num_chunks=16)
```
**Rule of thumb**: Start with `num_chunks=4` and adjust based on dataset size and number of configurations.
### Memory Management
For large models, use quantization:
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
)

model_kwargs = {
    "quantization_config": bnb_config,
    "device_map": "auto",
}
```
## Performance Benchmarks
Internal benchmarks comparing sequential training with RapidFire AI's concurrent training show 16-24× higher experimentation throughput across typical multi-configuration scenarios.
Learn more about RapidFire AI in their [official repository](https://github.com/RapidFireAI/rapidfireai) and [documentation](https://oss-docs.rapidfire.ai).
Training workflows can often be optimized to **reduce memory consumption**, and TRL provides several built-in features to help achieve this.
Below, we outline these techniques and recommend experimenting with different combinations to figure out which configuration works best for your specific setup.
Each method includes examples for the supported trainers. If you're unsure whether a technique is compatible with your trainer, please take a look at the corresponding trainer documentation.
For additional strategies, such as **gradient checkpointing**, which is supported across all trainers, see the [`transformers` performance guide](https://huggingface.co/docs/transformers/perf_train_gpu_one#gradient-checkpointing).
## Truncation
Sequence lengths in the dataset can vary widely. When data is batched, sequences are padded to match the longest one in the batch, which can cause high memory usage, even if most sequences are relatively short.
To reduce memory usage, it's important to truncate sequences to a reasonable length. While TRL trainers truncate sequences by default, you may want to adjust the default truncation length to better align with your specific use case.
<hfoptions id="truncation">
<hfoption id="DPO">
DPO truncation is applied first to the prompt and to the completion via the `max_prompt_length` and `max_completion_length` parameters. The `max_length` parameter is then used to truncate the resulting sequence.
You can also use the `max_completion_length` parameter to truncate the completion, though this is less common since the goal is typically to preserve the completion's full length whenever possible.
To set the truncation parameters, use the following code snippet:
```python
from trl import DPOConfig

training_args = DPOConfig(..., max_prompt_length=..., max_length=...)
```
</hfoption>
</hfoptions>
### How to choose the `max_length` value?
If `max_length` is too small, a significant portion of your tokens will be discarded and won't contribute to training. If it's too large, memory usage can spike, potentially leading to out-of-memory (OOM) errors. Without packing or padding-free, a large `max_length` may also result in inefficient training, as many tokens will be padding.
To help you choose an appropriate value, we provide a utility to visualize the sequence length distribution in your dataset.
## Packing
> [!TIP]
> This technique is available only for **SFT** training and setups that use **FlashAttention** (or its variants).
[Truncation](#truncation) has several drawbacks:
1. **Loss of information**: Key data at the end of a sequence may be discarded.
2. **Choosing truncation length**: Too short loses data; too long undermines efficiency.
Packing, introduced in [Raffel et al., 2020](https://huggingface.co/papers/1910.10683), addresses these issues by grouping sequences instead of truncating. It concatenates and splits dataset sequences into the desired lengths.
Packing reduces padding by merging several sequences in one row when possible. We use an advanced method to be near-optimal in the way we pack the dataset. To enable packing, use `packing=True` in the [`SFTConfig`].
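For example:

```python
from trl import SFTConfig

training_args = SFTConfig(..., packing=True)
```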
> [!TIP]
> In TRL 0.18 and earlier, packing used a more aggressive method that reduced padding to almost nothing, but had the downside of breaking sequence continuity for a large fraction of the dataset. To revert to this strategy, use `packing_strategy="wrapped"` in [`SFTConfig`].
## Liger Kernel
> [!TIP]
> [Liger Kernel](https://github.com/linkedin/Liger-Kernel) is a collection of Triton kernels designed specifically for LLM training. It can effectively increase multi-GPU training throughput by 20% and reduce memory usage by 60%.
For more information, see [Liger Kernel Integration](liger_kernel_integration).
To use Liger for reducing peak memory usage, use the following code snippet:
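A minimal sketch, assuming the `use_liger_kernel` flag inherited from `transformers.TrainingArguments`:

```python
from trl import SFTConfig

training_args = SFTConfig(..., use_liger_kernel=True)
```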
## Padding-free batching
Padding-free batching is an alternative approach for reducing memory usage. In this method, a batch is first sampled and then flattened into a single sequence, avoiding padding. Unlike packing, which can result in incomplete sequences by combining parts of different samples, padding-free batching ensures that all sequences remain complete and intact.
> [!WARNING]
> It's highly recommended to use padding-free batching with **FlashAttention 2** or **FlashAttention 3**. Otherwise, you may encounter batch contamination issues.
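A minimal sketch, assuming the `padding_free` option in [`SFTConfig`] together with FlashAttention 2:

```python
from trl import SFTConfig

training_args = SFTConfig(
    ...,
    padding_free=True,
    model_init_kwargs={"attn_implementation": "flash_attention_2"},
)
```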
## Activation offloading
Activation offloading is a memory efficiency technique that reduces GPU VRAM usage by temporarily moving activation tensors to CPU RAM during the forward pass and bringing them back only when needed for the backward pass. This significantly reduces peak memory usage at the cost of slightly increased training time.
To enable activation offloading in your SFT training configuration:
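A minimal sketch, assuming the `activation_offloading` flag in [`SFTConfig`]:

```python
from trl import SFTConfig

training_args = SFTConfig(..., activation_offloading=True)
```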
Under the hood, activation offloading implements PyTorch's [`saved_tensors_hooks`](https://pytorch.org/tutorials/intermediate/autograd_saved_tensors_hooks_tutorial.html#hooks-for-autograd-saved-tensors) to intercept activations during the forward pass. It intelligently manages which tensors to offload based on size and context, avoiding offloading output tensors that would be inefficient. For performance optimization, it can, via a flag (which is true by default), use CUDA streams to overlap computation with CPU-GPU transfers.
## Padding Sequences to a Multiple
> [!TIP]
> This technique is supported for **SFT** and **Reward** trainers currently.
When enabled, this option ensures that all sequences are **padded to a multiple** of the specified value.
This can improve computational efficiency on some hardware by aligning sequence lengths to memory-friendly boundaries.
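A minimal sketch, assuming the `pad_to_multiple_of` option in [`SFTConfig`]:

```python
from trl import SFTConfig

training_args = SFTConfig(..., pad_to_multiple_of=8)  # pad every batch to a multiple of 8 tokens
```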
## Disabling model gathering for generation in online methods
When using DeepSpeed ZeRO-3, model weights are sharded across multiple GPUs. Online methods involve generating completions from the model as part of the training process. During this step, the model weights are temporarily gathered on a single GPU for generation. For very large models, this gathering can lead to OOM errors, as described in this issue: [#2250](https://github.com/huggingface/trl/issues/2250#issue-2598304204).
If you encounter this issue, you can disable the gathering of model weights for generation by setting the following parameter:
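A minimal sketch, assuming an online trainer config such as [`GRPOConfig`] and its `ds3_gather_for_generation` option:

```python
from trl import GRPOConfig  # or another online trainer config

training_args = GRPOConfig(..., ds3_gather_for_generation=False)
```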
This adjustment prevents model weights from being gathered, avoiding OOM errors, but it may result in slower generation speeds.
## vLLM sleep mode
When using **vLLM** as the generation backend for online training methods, you can enable _sleep mode_ to offload vLLM parameters and cache to CPU RAM during the optimization step and reload them back to GPU VRAM when needed for weight synchronization and generation.
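A minimal sketch using [`GRPOConfig`] (other online trainer configs expose the same `vllm_enable_sleep_mode` option):

```python
from trl import GRPOConfig

training_args = GRPOConfig(
    ...,
    use_vllm=True,
    vllm_mode="colocate",
    vllm_enable_sleep_mode=True,
)
```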
TRL supports the Outcome-supervised Reward Modeling (ORM) Trainer for training reward models.
This post-training method was contributed by [Younes Belkada](https://huggingface.co/ybelkada).
## Quick start
This example demonstrates how to train a reward model using the [`RewardTrainer`] from TRL. We train a [Qwen 3 0.6B](https://huggingface.co/Qwen/Qwen3-0.6B) model on the [UltraFeedback dataset](https://huggingface.co/datasets/trl-lib/ultrafeedback_binarized), a large-scale, fine-grained, and diverse preference dataset.
[`RewardTrainer`] supports the [preference](dataset_formats#preference) dataset type (both implicit and explicit prompt). It is compatible with both [standard](dataset_formats#standard) and [conversational](dataset_formats#conversational) dataset formats. When provided with a conversational dataset, the trainer will automatically apply the chat template to the dataset.
```python
# Standard preference (implicit prompt)
{"chosen": "The sky is blue.",
 "rejected": "The sky is green."}

# Conversational preference (implicit prompt)
{"chosen": [{"role": "user", "content": "What color is the sky?"},
            {"role": "assistant", "content": "It is blue."}],
 "rejected": [{"role": "user", "content": "What color is the sky?"},
              {"role": "assistant", "content": "It is green."}]}

# Standard preference (explicit prompt)
{"prompt": "The sky is",
 "chosen": " blue.",
 "rejected": " green."}

# Conversational preference (explicit prompt)
{"prompt": [{"role": "user", "content": "What color is the sky?"}],
 "chosen": [{"role": "assistant", "content": "It is blue."}],
 "rejected": [{"role": "assistant", "content": "It is green."}]}
```
If your dataset is not in one of these formats, you can preprocess it to convert it into the expected format. Here is an example with the [lmarena-ai/arena-human-preference-55k](https://huggingface.co/datasets/lmarena-ai/arena-human-preference-55k) dataset:
{"role":"user","content":"Is it morally right to try to have a certain percentage of females on managerial positions?"},
{"role":"assistant","content":"The question of whether it is morally right to aim for a certain percentage of females..."},
],
"rejected":[
{"role":"user","content":"Is it morally right to try to have a certain percentage of females on managerial positions?"},
{"role":"assistant","content":"As an AI, I don't have personal beliefs or opinions. However, ..."},
],
}
```
## Looking deeper into the training method
Reward Models (RMs) are typically trained using supervised learning on datasets containing pairs of preferred and non-preferred responses. The goal is to learn a function that assigns higher scores to preferred responses, enabling the model to rank outputs based on preferences.
This section breaks down how reward modeling works in practice, covering the key steps: **preprocessing** and **loss computation**.
### Preprocessing and tokenization
During training, each example is expected to contain a **chosen** and **rejected** field. For more details on the expected formats, see [Dataset formats - Preference](dataset_formats#preference).
The [`RewardTrainer`] tokenizes each input using the model's tokenizer. If prompts and completions (chosen and rejected) are provided separately (explicit prompt case), they are concatenated before tokenization.
### Computing the loss
Let \\( x \\) be the input sequence (prompt) and \\( y^+ \\) and \\( y^- \\) be the chosen and rejected sequences respectively. Under the Bradley-Terry model ([Bradley & Terry, 1952](https://www.jstor.org/stable/2334029)), the probability that \\( y^+ \\) is preferred over \\( y^- \\) given a reward function \\( r \\) is \\( p(y^+ \succ y^- \mid x) = \sigma(r(x, y^+) - r(x, y^-)) \\), where \\( \sigma \\) is the sigmoid function.
The reward model \\( r_\theta(x, y) \\) is trained to assign higher scores to preferred responses \\( y^+ \\) over non-preferred ones \\( y^- \\). The loss is then defined as the negative log-likelihood of the observed preferences:
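Concretely, for a dataset \\( \mathcal{D} \\) of preference pairs, this loss is

$$
\mathcal{L}(\theta) = -\mathbb{E}_{(x, y^+, y^-) \sim \mathcal{D}} \left[ \log \sigma\big(r_\theta(x, y^+) - r_\theta(x, y^-)\big) \right].
$$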
> The Bradley-Terry model is underdetermined, meaning that adding a constant to all rewards does not change the preference probabilities. To address this, [Helping or Herding? Reward Model Ensembles Mitigate but do not Eliminate Reward Hacking](https://huggingface.co/papers/2312.09244) proposes adding an auxiliary loss term that encourages the rewards to be centered around zero. This is controlled by the `center_rewards_coefficient` parameter in the [`RewardConfig`]. The recommended value is `1e-2`.
## Logged metrics
While training and evaluating we record the following reward metrics:
* `global_step`: The total number of optimizer steps taken so far.
* `epoch`: The current epoch number, based on dataset iteration.
* `num_tokens`: The total number of tokens processed so far.
* `loss`: The average loss over the last logging interval.
* `accuracy`: The proportion of correct predictions (i.e., the model assigned a higher score to the chosen response than to the rejected one) averaged over the last logging interval.
* `min_reward`: The minimum reward score assigned by the model. This value is averaged over the logging interval.
* `mean_reward`: The average reward score assigned by the model over the last logging interval.
* `max_reward`: The maximum reward score assigned by the model. This value is averaged over the logging interval.
* `margin`: The average margin (difference between chosen and rejected rewards) over the last logging interval.
* `learning_rate`: The current learning rate, which may change dynamically if a scheduler is used.
* `grad_norm`: The L2 norm of the gradients, computed before gradient clipping.
## Customization
### Model initialization
You can directly pass the kwargs of the [`~transformers.AutoModelForSequenceClassification.from_pretrained()`] method to the [`RewardConfig`]. For example, if you want to load a model in a different precision (analogous to passing `dtype=torch.bfloat16` to `from_pretrained()`), you can do so by passing the `model_init_kwargs={"dtype": torch.bfloat16}` argument to the [`RewardConfig`].
```python
import torch

from trl import RewardConfig

training_args = RewardConfig(
    model_init_kwargs={"dtype": torch.bfloat16},
)
```
Note that all keyword arguments of [`~transformers.AutoModelForSequenceClassification.from_pretrained()`] are supported, except for `num_labels`, which is automatically set to 1.
### Train adapters with PEFT
We support tight integration with 🤗 PEFT library, allowing any user to conveniently train adapters and share them on the Hub, rather than training the entire model.
```python
from peft import LoraConfig
from trl import RewardTrainer

trainer = RewardTrainer(
    ...,
    peft_config=LoraConfig(modules_to_save=["score"]),  # important to include the score head when the base model is not a sequence classification model
)
trainer.train()
```
You can also continue training your [`~peft.PeftModel`]. For that, first load a `PeftModel` outside [`RewardTrainer`] and pass it directly to the trainer without the `peft_config` argument being passed.
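A minimal sketch of this workflow, assuming an adapter previously saved at the hypothetical path `path/to/adapter` and that `training_args` and `dataset` are defined as above:

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification
from trl import RewardTrainer

base_model = AutoModelForSequenceClassification.from_pretrained("Qwen/Qwen3-0.6B", num_labels=1)
model = PeftModel.from_pretrained(base_model, "path/to/adapter", is_trainable=True)

# No peft_config is passed: the trainer continues training the existing adapter.
trainer = RewardTrainer(model=model, args=training_args, train_dataset=dataset)
trainer.train()
```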
TRL supports the RLOO Trainer for training language models, as described in the paper [Back to Basics: Revisiting REINFORCE Style
Optimization for Learning from Human Feedback in LLMs](https://huggingface.co/papers/2402.14740) by [Arash Ahmadian](https://huggingface.co/ArashAhmadian), Chris Cremer, [Matthias Gallé](https://huggingface.co/mgalle), [Marzieh Fadaee](https://huggingface.co/MarziehFadaee), [Julia Kreutzer](https://huggingface.co/JuliaKreutzerCohere), [Ahmet Üstün](https://huggingface.co/ahmetu) and [Sara Hooker](https://huggingface.co/sarahooker).
The abstract from the paper is the following:
> AI alignment in the shape of Reinforcement Learning from Human Feedback (RLHF) is increasingly treated as a crucial ingredient for high performance large language models. Proximal Policy Optimization (PPO) has been positioned by recent literature as the canonical method for the RL part of RLHF. However, it involves both high computational cost and sensitive hyperparameter tuning. We posit that most of the motivational principles that led to the development of PPO are less of a practical concern in RLHF and advocate for a less computationally expensive method that preserves and even increases performance. We revisit the formulation of alignment from human preferences in the context of RL. Keeping simplicity as a guiding principle, we show that many components of PPO are unnecessary in an RLHF context and that far simpler REINFORCE-style optimization variants outperform both PPO and newly proposed “RL-free” methods such as DPO and RAFT. Our work suggests that careful adaptation to LLMs alignment characteristics enables benefiting from online RL optimization at low cost.
This post-training method was contributed by [Costa Huang](https://github.com/vwxyzjn) and later refactored by [Shirin Yamani](https://huggingface.co/ShirinYamani).
## Quick start
This example demonstrates how to train a model using the RLOO method. We train a [Qwen 0.5B Instruct model](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) with the prompts from the [UltraFeedback prompts dataset](https://huggingface.co/datasets/trl-lib/ultrafeedback-prompt). You can view the data in the dataset here:
RLOO is an online learning algorithm, meaning it improves iteratively by using the data generated by the trained model itself during training. The intuition behind RLOO objective is to maximize the advantage of the generated completions, while ensuring that the model remains close to the reference policy. To understand how RLOO works, it can be broken down into four main steps: **Generating completions**, **computing the advantage**, **estimating the KL divergence**, and **computing the loss**.
### Generating completions
At each training step, we sample a batch of prompts and generate a set of \\( G \\) completions for each prompt (denoted as \\( o_i \\)).
### Computing the reward
In RLOO, the reward consists of two components: the reward provided by the reward model (or reward function) and a KL penalty that discourages the policy from deviating too far from a fixed reference policy.
1. For each of the \\( G \\) generated sequences \\( o_i = (o_{i,1}, \dots, o_{i,T}) \\) conditioned on a query \\( q \\), we compute a scalar reward using a reward model \\( R(o_i, q) \\).
2. Concurrently, we estimate the KL divergence between the current policy \\( \pi_\theta \\) and the fixed reference policy \\( \pi_{\text{ref}} \\) over the sequence. The KL estimate for sequence \\( o_i \\) is:
$$
\mathbb{D}_{\mathrm{KL}}\!\left[\pi_\theta \,\|\, \pi_{\mathrm{ref}}\right] = \sum_{t=1}^T \log \frac{\pi_\theta(o_{i,t} \mid q, o_{i,<t})}{\pi_{\mathrm{ref}}(o_{i,t} \mid q, o_{i,<t})}.
$$

where \\( \beta > 0 \\) controls the strength of the KL penalty.
> [!TIP]
> In a purely online setting (`num_iterations = 1`, default), the data are generated by the current policy. In this case, the KL penalty is computed directly using the current policy.
>
> In the more general setting (e.g., multiple gradient steps per batch), the data are instead generated by an earlier snapshot \\( \pi_{\text{old}} \\). To keep the penalty consistent with the sampling distribution, the KL is defined with respect to this policy:
> Equivalently, for a sampled sequence $o$, the Monte Carlo estimate is
>
> $$
> \mathbb{D}_{\mathrm{KL}}\!\left[\pi_{\text{old}} \|\pi_{\mathrm{ref}}\right] = \sum_{t=1}^T \log \frac{\pi_{\text{old}}(o_{i,t} \mid q, o_{i,<t})}{\pi_{\mathrm{ref}}(o_{i,t} \mid q, o_{i,<t})}.
> $$
### Computing the advantage
Once the rewards for each completion have been computed, we calculate a baseline as the average reward of all other samples in the same batch, excluding the current sample. This baseline is used to reduce the variance of the policy gradient estimate. The advantage for each completion is then obtained as the difference between its own reward and this leave-one-out baseline.
Formally, for a batch of \\( G \\) completions, the baseline for completion \\( i \\) is:
$$
b_i = \frac{1}{G-1} \sum_{j \neq i} r_j
$$
and then the advantage for each completion is computed as the difference between its reward and the baseline:
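$$
A_i = r_i - b_i = r_i - \frac{1}{G-1} \sum_{j \neq i} r_j
$$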
In practice, performing multiple gradient steps on the same batch makes the actions effectively off-policy relative to the current parameters. To correct for this, we introduce the importance sampling ratio. To prevent excessively large updates when the policy changes between sampling and gradient steps, we clip this ratio:
In a fully online, single-step setting (default), \\( \frac{\pi_\theta(o_i \mid q)}{\pi_{\theta_\text{old}}(o_i \mid q)} = 1 \\) and this reduces to standard REINFORCE.
## Logged metrics
While training and evaluating, we record the following reward metrics:
- `num_tokens`: The total number of tokens processed so far, including both prompts and completions.
- `completions/mean_length`: The average length of generated completions.
- `completions/min_length`: The minimum length of generated completions.
- `completions/max_length`: The maximum length of generated completions.
- `completions/mean_terminated_length`: The average length of generated completions that terminate with EOS.
- `completions/min_terminated_length`: The minimum length of generated completions that terminate with EOS.
- `completions/max_terminated_length`: The maximum length of generated completions that terminate with EOS.
- `completions/clipped_ratio`: The ratio of truncated (clipped) completions.
- `reward/{reward_func_name}/mean`: The average reward from a specific reward function.
- `reward/{reward_func_name}/std`: The standard deviation of the reward from a specific reward function.
- `reward`: The overall average reward after applying reward weights.
- `reward_std`: The standard deviation of rewards after applying reward weights. This is the average of the per-group standard deviations.
- `frac_reward_zero_std`: The fraction of samples in the generation batch with a reward std of zero, implying there is little diversity for that prompt (all answers are correct or all are incorrect).
- `entropy`: Average entropy of token predictions across generated completions. (If `mask_truncated_completions=True`, tokens of masked sequences are excluded.)
- `kl`: The average KL divergence between the model and the reference model, calculated over generated completions. Logged only if `beta` is nonzero.
- `clip_ratio/region_mean`: The ratio of sequence probabilities where the RLOO objective is clipped to stay within the trust region. A higher value means more samples are clipped, which constrains how much the policy \\( \pi_\theta \\) can change.
- `clip_ratio/low_mean`: The average ratio of sequence probabilities that were clipped on the lower bound of the trust region: \\( r_{i,t}(\theta) < 1 - \epsilon_\mathrm{low} \\).
- `clip_ratio/high_max`: The maximum ratio of sequence probabilities that were clipped on the upper bound of the trust region: \\( r_{i,t}(\theta) > 1 + \epsilon_\mathrm{high} \\).
## Customization
### Speed up training with vLLM-powered generation
Generation is often the main bottleneck when training with online methods. To accelerate generation, you can use [vLLM](https://github.com/vllm-project/vllm), a high-throughput, low-latency inference engine for LLMs. To enable it, first install the package with
```shell
pip install trl[vllm]
```
We support two ways of using vLLM during training: **server mode** and **colocate mode**.
#### 🔌 Option 1: Server mode
In this mode, vLLM runs in a separate process (and using separate GPUs) and communicates with the trainer via HTTP. This is ideal if you have dedicated GPUs for inference.
1. **Start the vLLM server**:
```bash
trl vllm-serve --model <model_name>
```
2. **Enable server mode in your training script**:
```python
from trl import RLOOConfig
training_args = RLOOConfig(
...,
use_vllm=True,
vllm_mode="server", # default value, can be omitted
)
```
> [!WARNING]
> Make sure that the server is using different GPUs than the trainer, otherwise you may run into NCCL errors. You can specify the GPUs to use with the `CUDA_VISIBLE_DEVICES` environment variable.
#### 🧩 Option 2: Colocate mode
In this mode, vLLM runs inside the trainer process and shares GPU memory with the training model. This avoids launching a separate server and can improve GPU utilization, but may lead to memory contention on the training GPUs.
```python
from trl import RLOOConfig
training_args = RLOOConfig(
...,
use_vllm=True,
vllm_mode="colocate",
)
```
> [!TIP]
> Depending on the model size and the overall GPU memory requirements for training, you may need to adjust the `vllm_gpu_memory_utilization` parameter in [`RLOOConfig`] to avoid underutilization or out-of-memory errors.
>
> We provide a [HF Space](https://huggingface.co/spaces/trl-lib/recommend-vllm-memory) to help estimate the recommended GPU memory utilization based on your model configuration and experiment settings. Simply use it to get a `vllm_gpu_memory_utilization` recommendation.
> If the recommended value does not work in your environment, we suggest adding a small buffer (e.g., +0.05 or +0.1) to the recommended value to ensure stability.
>
> If you still encounter out-of-memory errors, set `vllm_enable_sleep_mode` to `True` so that the vLLM parameters and cache are offloaded during the optimization step. For more information, see [Reducing Memory Usage with vLLM Sleep Mode](reducing_memory_usage#vllm-sleep-mode).
> [!TIP]
> By default, RLOO uses `MASTER_ADDR=localhost` and `MASTER_PORT=12345` for vLLM, but you can override these values by setting the environment variables accordingly.
For more information, see [Speeding up training with vLLM](speeding_up_training#vllm-for-fast-generation-in-online-methods).
### RLOO at scale: train a 70B+ Model on multiple nodes
When training large models like **Qwen2.5-72B**, you need several key optimizations to make the training efficient and scalable across multiple GPUs and nodes. These include:
- **DeepSpeed ZeRO Stage 3**: ZeRO leverages data parallelism to distribute model states (weights, gradients, optimizer states) across multiple GPUs and CPUs, reducing memory and compute requirements on each device. Since large models cannot fit on a single GPU, using ZeRO Stage 3 is required for training such models. For more details, see [DeepSpeed Integration](deepspeed_integration).
- **Accelerate**: Accelerate is a library that simplifies distributed training across multiple GPUs and nodes. It provides a simple API to launch distributed training and handles the complexities of distributed training, such as data parallelism, gradient accumulation, and distributed data loading. For more details, see [Distributing Training](distributing_training).
- **vLLM**: See the previous section on how to use vLLM to speed up generation.
Below is an example SLURM script to train a 70B model with RLOO on multiple nodes. This script trains a model on 4 nodes and uses the 5th node for vLLM-powered generation.
```sh
#!/bin/bash
#SBATCH --nodes=5
#SBATCH --gres=gpu:8
# Get the list of allocated nodes
NODELIST=($(scontrol show hostnames $SLURM_JOB_NODELIST))
# Assign the first 4 nodes for training and the 5th node for vLLM
TRAIN_NODES="${NODELIST[@]:0:4}" # Nodes 0, 1, 2, 3 for training
```
The [`RLOOTrainer`] supports using custom reward functions instead of dense reward models. To ensure compatibility, your reward function must satisfy the following requirements:
1. **Input arguments**:
- The function must accept the following as keyword arguments:
- `prompts` (contains the prompts),
- `completions` (contains the generated completions),
- `completions_ids` (contains the tokenized completions),
- `trainer_state` ([`~transformers.TrainerState`]): The current state of the trainer. This can be used to implement dynamic reward functions, such as curriculum learning, where the reward is adjusted based on the training progress.
- All column names (except `prompt`) that the dataset may have. For example, if the dataset contains a column named `ground_truth`, the function will be called with `ground_truth` as a keyword argument.
The easiest way to comply with this requirement is to use `**kwargs` in the function signature.
- Depending on the dataset format, the input will vary:
- For [standard format](dataset_formats#standard), `prompts` and `completions` will be lists of strings.
- For [conversational format](dataset_formats#conversational), `prompts` and `completions` will be lists of message dictionaries.
2. **Return value**: The function must return a list of floats. Each float represents the reward corresponding to a single completion.
#### Example 1: Reward longer completions
Below is an example of a reward function for a standard format that rewards longer completions:
```python
def reward_func(completions_ids, **kwargs):
"""Reward function that assigns higher scores to longer completions (in terms of token count)."""
return [float(len(ids)) for ids in completions_ids]
```
You can test it as follows:
```python
>>> prompts = ["The sky is", "The sun is"]  # not used by the reward function, but the trainer will pass it
>>> completions = [" blue.", " in the sky."]  # not used by the reward function, but the trainer will pass it
>>> completions_ids = [[1, 2, 3], [4, 5, 6, 7]]  # example token ids for the two completions
>>> reward_func(prompts=prompts, completions=completions, completions_ids=completions_ids)
[3.0, 4.0]
```
#### Example 2: Reward completions with a specific format
Below is an example of a reward function that checks if the completion has a specific format. This example is inspired by the _format reward_ function used in the paper [DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning](https://huggingface.co/papers/2501.12948).
It is designed for a conversational format, where prompts and completions consist of structured messages.
```python
import re

def format_reward_func(completions, **kwargs):
    """Reward function that checks if the completion has a specific format."""
    pattern = r"^<think>.*?</think><answer>.*?</answer>$"  # assumed expected format; adjust to your use case
    return [1.0 if re.match(pattern, completion[0]["content"], re.DOTALL) else 0.0 for completion in completions]
```
#### Example 3: Reward completions based on a reference
Below is an example of a reward function that checks if the completion is correct. This example is inspired by the _accuracy reward_ function used in the paper [DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning](https://huggingface.co/papers/2501.12948).
This example is designed for [standard format](dataset_formats#standard), where the dataset contains a column named `ground_truth`.
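A minimal sketch of such a reference-based reward, assuming plain-string completions and a `ground_truth` column (the paper's accuracy reward additionally parses and verifies mathematical expressions rather than comparing raw strings):

```python
def accuracy_reward_func(completions, ground_truth, **kwargs):
    """Reward 1.0 when the completion matches the reference answer, 0.0 otherwise."""
    return [
        1.0 if completion.strip() == truth.strip() else 0.0
        for completion, truth in zip(completions, ground_truth)
    ]
```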
#### Example 4: Multi-task reward functions
Below is an example of using multiple reward functions in the [`RLOOTrainer`]. In this example, we define two task-specific reward functions: `math_reward_func` and `coding_reward_func`. The `math_reward_func` rewards math problems based on their correctness, while the `coding_reward_func` rewards coding problems based on whether the solution works.
```python
from datasets import Dataset
from trl import RLOOTrainer
# Define a dataset that contains both math and coding problems
dataset = Dataset.from_list(
    [
        {"prompt": "What is 2+2?", "task": "math"},
        {"prompt": "Write a function that returns the sum of two numbers.", "task": "code"},
        {"prompt": "What is 3*4?", "task": "math"},
        {"prompt": "Write a function that returns the product of two numbers.", "task": "code"},
    ]
)
```
In this example, the `math_reward_func` and `coding_reward_func` are designed to work with a mixed dataset that contains both math and coding problems. The `task` column in the dataset is used to determine which reward function to apply to each problem. If there is no relevant reward function for a sample in the dataset, the reward function will return `None`, and the [`RLOOTrainer`] will continue with the valid functions and tasks. This allows the [`RLOOTrainer`] to handle multiple reward functions with different applicability.
Note that the [`RLOOTrainer`] will ignore the `None` rewards returned by the reward functions and only consider the rewards returned by the relevant functions. This ensures that the model is trained on the relevant tasks and ignores the tasks for which there is no relevant reward function.
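As a hedged sketch of this pattern (`is_correct_math_answer` is a hypothetical helper standing in for your actual checking logic):

```python
def math_reward_func(prompts, completions, task, **kwargs):
    """Score math samples only; return None for other tasks so the trainer ignores them."""
    rewards = []
    for prompt, completion, sample_task in zip(prompts, completions, task):
        if sample_task == "math":
            rewards.append(1.0 if is_correct_math_answer(prompt, completion) else 0.0)  # hypothetical helper
        else:
            rewards.append(None)  # not applicable: ignored by the trainer
    return rewards
```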
#### Passing the reward function to the trainer
To use your custom reward function, pass it to the [`RLOOTrainer`] as follows:
```python
from trl import RLOOTrainer
trainer = RLOOTrainer(
reward_funcs=reward_func,
...,
)
```
If you have multiple reward functions, you can pass them as a list:
```python
from trl import RLOOTrainer
trainer = RLOOTrainer(
reward_funcs=[reward_func1, reward_func2],
...,
)
```
and the reward will be computed as the sum of the rewards from each function, or the weighted sum if `reward_weights` is provided in the config.
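For example, assuming two reward functions passed in that order:

```python
from trl import RLOOConfig

training_args = RLOOConfig(..., reward_weights=[0.7, 0.3])  # weights follow the order of reward_funcs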
Note that [`RLOOTrainer`] supports multiple reward functions of different types. See the parameters documentation for more details.
## Vision-Language Model (VLM) Training
RLOO supports training Vision-Language Models (VLMs) on multimodal datasets containing both text and images.
> Compatibility with all VLMs is not guaranteed. If you believe a model should be supported, feel free to open an issue on GitHub — or better yet, submit a pull request with the required changes.
### Quick Start
Use [rloo\_vlm.py](https://github.com/huggingface/trl/blob/main/examples/scripts/rloo_vlm.py) to fine-tune a VLM. Example command for training on [`lmms-lab/multimodal-open-r1-8k-verified`](https://huggingface.co/datasets/lmms-lab/multimodal-open-r1-8k-verified):
> For VLMs, truncating may remove image tokens, leading to errors during training. To avoid this, set `max_prompt_length=None` in the [`RLOOConfig`]. This allows the model to process the full sequence length without truncating image tokens.
>
> ```python
> RLOOConfig(max_prompt_length=None, ...)
> ```
>
> Only use `max_prompt_length` when you've verified that truncation won't remove image tokens for the entire dataset.
Some practical recommendations for VLM training:
- Use LoRA on vision-language projection layers
- Enable 4-bit quantization to reduce memory usage
- VLMs are memory-intensive — start with smaller batch sizes
- Most models are compatible with vLLM (`server` and `colocate` modes)
### Dataset Format
Each training sample should include:
- `prompt`: Text formatted via the processor's chat template
- `image`/`images`: PIL Image or list of PIL Images
The trainer automatically handles image-to-tensor conversion via the model’s image processor.
2. [Paper - Back to Basics: Revisiting REINFORCE Style Optimization for Learning from Human Feedback in LLMs](https://huggingface.co/papers/2402.14740)
3. [Paper - REINFORCE++: A Simple and Efficient Approach for Aligning Large Language Models](https://huggingface.co/papers/2501.03262)
4. [Blog Post - Putting RL back in RLHF](https://huggingface.co/blog/putting_rl_back_in_rlhf_with_rloo)
5. [Blog Post - Unraveling RLHF and Its Variants: Progress and Practical Engineering Insights](https://hijkzzz.notion.site/unraveling-rlhf-and-its-variants-engineering-insights#147d9a33ecc9806090f3d5c749d31f05)
6. [YouTube - RLOO: A Cost-Efficient Optimization for Learning from Human Feedback in LLMs](https://www.youtube.com/watch?v=86asXGPK6RU&ab_channel=BuzzRobot)
## Migration Guide from the old implementation (0.21 and below)
With the release of version 0.22.0, we have revamped the [`RLOOTrainer`] to be more aligned with other online trainers in the library, like [`GRPOTrainer`]. This new implementation introduces several changes to the configuration parameters and overall structure of the trainer.
Below is a summary of the key changes for [`RLOOConfig`]:
| TRL ≤ 0.21.x | TRL ≥ 0.22.0 |
| --- | --- |
| `rloo_k` | renamed to `num_generations` |
| `cliprange` | renamed to `epsilon` |
| `kl_coef` | renamed to `beta` |
| `exp_name` | renamed to `run_name`. Use `run_name = f"{exp_name}__{seed}__{int(time.time())}"` to replicate old behavior |
| `normalize_reward` | renamed to `normalize_advantages`. Note: this always normalized advantages (despite the old name) |
| `num_ppo_epochs` | renamed to `num_iterations` (default: `1`) |
| `token_level_kl` | **removed** – KL is now computed only at the sequence level |
| `dataset_num_proc` | **removed** – it was unused |
| `num_mini_batches` | renamed to `steps_per_generation` |
| `total_episodes` | use `max_steps=total_episodes / gradient_accumulation_steps` instead |
| `local_rollout_forward_batch_size` | **removed** – now automatically set to `per_device_train_batch_size` (or `per_device_eval_batch_size` during evaluation) |
| `num_sample_generations` | **removed** – use `logging_steps` to control generation logging frequency |
| `response_length` | renamed to `max_completion_length` (default: `256`) |
| `stop_token` | **removed** |
| `stop_token_id` | **removed** – use `processing_class.eos_token_id` instead |
| `missing_eos_penalty` | **removed** – replicate with a custom reward function checking if `eos_token_id` is in `completion_ids` |
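For instance, the last row above can be replicated with a small custom reward function; a hedged sketch (the penalty value is arbitrary):

```python
def missing_eos_penalty(completions_ids, **kwargs):
    """Penalize completions that do not contain the EOS token."""
    eos_token_id = 151645  # example value; use processing_class.eos_token_id for your tokenizer
    return [0.0 if eos_token_id in ids else -1.0 for ids in completions_ids]
```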
Below is a summary of the key changes for [`RLOOTrainer`]:
| TRL ≤ 0.21.x | TRL ≥ 0.22.0 |
| --- | --- |
| `config` | renamed to `args` |
| `reward_model` | renamed to `reward_funcs`, which now supports both reward models and custom reward functions |
| `policy` | renamed to `model` |
| `ref_policy` | **removed** – the reference model is now created automatically from `model` |
The notebooks and scripts in these examples show how to fine-tune a model with a sentiment classifier (such as `lvwerra/distilbert-imdb`).
Here's an overview of the notebooks and scripts in the [trl repository](https://github.com/lvwerra/trl/tree/main/examples):
| File | Description | Colab link |
|---|---| --- |
| [`gpt2-sentiment.ipynb`](https://github.com/lvwerra/trl/blob/main/examples/sentiment/notebooks/gpt2-sentiment.ipynb) | Fine-tune GPT2 to generate positive movie reviews. | [Open in Colab](https://colab.research.google.com/github/lvwerra/trl/blob/main/examples/sentiment/notebooks/gpt2-sentiment.ipynb) |
| [`gpt2-sentiment-control.ipynb`](https://github.com/lvwerra/trl/blob/main/examples/sentiment/notebooks/gpt2-sentiment-control.ipynb) | Fine-tune GPT2 to generate movie reviews with controlled sentiment. | [Open in Colab](https://colab.research.google.com/github/lvwerra/trl/blob/main/examples/sentiment/notebooks/gpt2-sentiment-control.ipynb) |
| [`gpt2-sentiment.py`](https://github.com/lvwerra/trl/blob/main/examples/sentiment/scripts/gpt2-sentiment.py) | Same as the notebook, but easier to use in a multi-GPU setup. | x |
| [`t5-sentiment.py`](https://github.com/lvwerra/trl/blob/main/examples/sentiment/scripts/t5-sentiment.py) | Same as GPT2 script, but for a Seq2Seq model (T5). | x |
## Installation
```bash
pip install trl
# optional: wandb
pip install wandb
```
Note: if you don't want to log with `wandb` remove `log_with="wandb"` in the scripts/notebooks. You can also replace it with your favourite experiment tracker that's [supported by `accelerate`](https://huggingface.co/docs/accelerate/usage_guides/tracking).
## Launch scripts
The `trl` library is powered by `accelerate`. As such it is best to configure and launch trainings with the following commands:
```bash
accelerate config # will prompt you to define the training configuration
accelerate launch scripts/gpt2-sentiment.py # launches training
```

# Examples of using peft and trl to fine-tune 8-bit models with Low Rank Adaptation
The notebooks and scripts in these examples show how to fine-tune a model with a sentiment classifier (such as `lvwerra/distilbert-imdb`).
Here's an overview of the notebooks and scripts in the [trl repository](https://github.com/lvwerra/trl/tree/main/examples):
| File | Description | Colab link |
|---|---| --- |
| [`gpt2-sentiment_peft.py`](https://github.com/lvwerra/trl/blob/main/examples/sentiment/scripts/gpt2-sentiment_peft.py) | Same as the sentiment analysis example, but learning a low rank adapter on a 8-bit base model | |
| [`cm_finetune_peft_imdb.py`](https://github.com/lvwerra/trl/blob/main/examples/sentiment/scripts/gpt-neox-20b_peft/cm_finetune_peft_imdb.py) | Fine tuning a Low Rank Adapter on a frozen 8-bit model for text generation on the imdb dataset. | |
| [`merge_peft_adapter.py`](https://github.com/lvwerra/trl/blob/main/examples/sentiment/scripts/gpt-neox-20b_peft/merge_peft_adapter.py) | Merging of the adapter layers into the base model’s weights and storing these on the hub. | |
| [`gpt-neo-20b_sentiment_peft.py`](https://github.com/lvwerra/trl/blob/main/examples/sentiment/scripts/gpt-neox-20b_peft/gpt-neo-20b_sentiment_peft.py) | Sentiment fine-tuning of a Low Rank Adapter to create positive reviews. | |
| [`gpt-neo-1b_peft.py`](https://github.com/lvwerra/trl/blob/main/examples/sentiment/scripts/gpt-neo-1b-multi-gpu/gpt-neo-1b_peft.py) | Sentiment fine-tuning of a Low Rank Adapter to create positive reviews using 2 GPUs. | |
## Installation
Note: peft is in active development, so we install directly from their github page.
Peft also relies on the latest version of transformers.
Note: if you don't want to log with `wandb` remove `log_with="wandb"` in the scripts/notebooks. You can also replace it with your favourite experiment tracker that's [supported by `accelerate`](https://huggingface.co/docs/accelerate/usage_guides/tracking).
## Launch scripts
The `trl` library is powered by `accelerate`. As such it is best to configure and launch trainings with the following commands:
```bash
accelerate config # will prompt you to define the training configuration
accelerate launch scripts/gpt2-sentiment_peft.py # launches training
```
## Using `trl` + `peft` and Data Parallelism
You can scale up to as many GPUs as you want, as long as you are able to fit the training process in a single device. The only tweak you need to apply is to load the model as follows:
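A minimal sketch of that loading pattern (the model name is a placeholder):

```python
from accelerate import Accelerator
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

current_device = Accelerator().local_process_index

model = AutoModelForCausalLM.from_pretrained(
    "gpt2",  # placeholder model name
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map={"": current_device},
)
```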
The reason behind `device_map={"": current_device}` is that when you set `"": device_number`, `accelerate` places the entire model on device `device_number`. This trick therefore puts the full model on the correct device for each process, while the `Accelerator` object from `accelerate` takes care of initializing the distributed setup correctly.
Make sure to initialize your accelerate config by specifying that you are training in a multi-GPU setup (run `accelerate config`), and launch the training script with `accelerate launch your_script.py`.
Finally, make sure that the rewards are computed on `current_device` as well.
## Naive pipeline parallelism (NPP) for large models (>60B models)
The `trl` library also supports naive pipeline parallelism (NPP) for large models (>60B). In this paradigm, the model and the adapters are loaded across multiple GPUs, and the activations and gradients are naively communicated across GPUs. It supports `int8` models as well as other `dtype` models.
Simply load your model with a custom `device_map` argument on the `from_pretrained` to split your model across multiple devices. Check out this [nice tutorial](https://github.com/huggingface/blog/blob/main/accelerate-large-models.md) on how to properly create a `device_map` for your model.
Also make sure to have the `lm_head` module on the first GPU device, as it may throw an error if it is not on the first device. At the time of writing, you need to install the `main` branch of `accelerate` (`pip install git+https://github.com/huggingface/accelerate.git@main`) and `peft` (`pip install git+https://github.com/huggingface/peft.git@main`).
That's all you need to do to use NPP. Check out the [gpt-neo-1b_peft.py](https://github.com/lvwerra/trl/blob/main/examples/sentiment/scripts/gpt-neo-1b-multi-gpu/gpt-neo-1b_peft.py) example for a more detailed usage of NPP.
### Launch scripts
Although the `trl` library is powered by `accelerate`, you should run your training script in a single process. Note that we do not support Data Parallelism together with NPP yet.
TRL supports the Supervised Fine-Tuning (SFT) Trainer for training language models.
This post-training method was contributed by [Younes Belkada](https://huggingface.co/ybelkada).
## Quick start
This example demonstrates how to train a language model using the [`SFTTrainer`] from TRL. We train a [Qwen 3 0.6B](https://huggingface.co/Qwen/Qwen3-0.6B) model on the [Capybara dataset](https://huggingface.co/datasets/trl-lib/Capybara), a compact, diverse multi-turn dataset to benchmark reasoning and generalization.
SFT supports both [language modeling](dataset_formats#language-modeling) and [prompt-completion](dataset_formats#prompt-completion) datasets. The [`SFTTrainer`] is compatible with both [standard](dataset_formats#standard) and [conversational](dataset_formats#conversational) dataset formats. When provided with a conversational dataset, the trainer will automatically apply the chat template to the dataset.
```python
# Standard language modeling
{"text": "The sky is blue."}

# Conversational language modeling
{"messages": [{"role": "user", "content": "What color is the sky?"},
              {"role": "assistant", "content": "It is blue."}]}

# Standard prompt-completion
{"prompt": "The sky is",
 "completion": " blue."}

# Conversational prompt-completion
{"prompt": [{"role": "user", "content": "What color is the sky?"}],
 "completion": [{"role": "assistant", "content": "It is blue."}]}
```
If your dataset is not in one of these formats, you can preprocess it to convert it into the expected format. Here is an example with the [FreedomIntelligence/medical-o1-reasoning-SFT](https://huggingface.co/datasets/FreedomIntelligence/medical-o1-reasoning-SFT) dataset:
"content":"Given the symptoms of sudden weakness in the left arm and leg, recent long-distance travel, and the presence of swollen and tender right lower leg, what specific cardiac abnormality is most likely to be found upon further evaluation that could explain these findings?",
"role":"user",
}
],
"completion":[
{
"content":"<think>Okay, let's see what's going on here. We've got sudden weakness [...] clicks into place!</think>The specific cardiac abnormality most likely to be found in [...] the presence of a PFO facilitating a paradoxical embolism.",
"role":"assistant",
}
],
}
```
## Looking deeper into the SFT method
Supervised Fine-Tuning (SFT) is the simplest and most commonly used method to adapt a language model to a target dataset. The model is trained in a fully supervised fashion using pairs of input and output sequences. The goal is to minimize the negative log-likelihood (NLL) of the target sequence, conditioning on the input.
This section breaks down how SFT works in practice, covering the key steps: **preprocessing**, **tokenization** and **loss computation**.
### Preprocessing and tokenization
During training, each example is expected to contain a **text field** or a **(prompt, completion)** pair, depending on the dataset format. For more details on the expected formats, see [Dataset formats](dataset_formats).
The [`SFTTrainer`] tokenizes each input using the model's tokenizer. If both prompt and completion are provided separately, they are concatenated before tokenization.
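### Computing the loss
The training objective is the token-level negative log-likelihood of the target sequence:

$$
\mathcal{L}(\theta) = -\sum_{t=1}^{T} \log p_\theta(y_t \mid y_{<t})
$$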
where \\( y_t \\) is the target token at timestep \\( t \\), and the model is trained to predict the next token given the previous ones. In practice, padding tokens are masked out during loss computation.
> [!TIP]
> The paper [On the Generalization of SFT: A Reinforcement Learning Perspective with Reward Rectification](https://huggingface.co/papers/2508.05629) proposes an alternative loss function, called **Dynamic Fine-Tuning (DFT)**, which aims to improve generalization by rectifying the reward signal. This method can be enabled by setting `loss_type="dft"` in the [`SFTConfig`]. For more details, see [Paper Index - Dynamic Fine-Tuning](paper_index#on-the-generalization-of-sft-a-reinforcement-learning-perspective-with-reward-rectification).
> This functionality is only available for chat templates that support returning the assistant tokens mask via the `{% generation %}` and `{% endgeneration %}` keywords. For an example of such a template, see [HuggingFaceTB/SmolLM3-3B](https://huggingface.co/HuggingFaceTB/SmolLM3-3B/blob/main/chat_template.jinja#L76-L82).
> Training on completion only is compatible with training on assistant messages only. In this case, use a [conversational](dataset_formats#conversational) [prompt-completion](dataset_formats#prompt-completion) dataset and set `assistant_only_loss=True` in the [`SFTConfig`].
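For example:

```python
from trl import SFTConfig

training_args = SFTConfig(..., assistant_only_loss=True)
```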
> Some base models, like those from Qwen, have a predefined chat template in the model's tokenizer. In these cases, it is not necessary to apply [`clone_chat_template()`], as the tokenizer already handles the formatting. However, it is necessary to align the EOS token with the chat template to ensure the model's responses terminate correctly. In these cases, specify `eos_token` in [`SFTConfig`]; for example, for `Qwen/Qwen2.5-1.5B`, one should set `eos_token="<|im_end|>"`.
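Following the tip above:

```python
from trl import SFTConfig

training_args = SFTConfig(..., eos_token="<|im_end|>")  # e.g. for Qwen/Qwen2.5-1.5B
```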
> For VLMs, truncating may remove image tokens, leading to errors during training. To avoid this, set `max_length=None` in the [`SFTConfig`]. This allows the model to process the full sequence length without truncating image tokens.
>
> ```python
> SFTConfig(max_length=None, ...)
> ```
>
> Only use `max_length` when you've verified that truncation won't remove image tokens for the entire dataset.
> Section under construction. Feel free to contribute!
## vLLM for fast generation in online methods
Online methods such as GRPO or Online DPO require the model to generate completions, which is often a slow process and can significantly impact training time.
To speed up generation, you can use [vLLM](https://github.com/vllm-project/vllm), a library that enables fast generation through, among other things, PagedAttention. TRL's online trainers support vLLM, greatly improving training speed.
To use [vLLM](https://github.com/vllm-project/vllm), first install it using:
```bash
pip install trl[vllm]
```
<hfoptionsid="vllm examples">
<hfoptionid="Online DPO">
Then, enable it by passing `use_vllm=True` in the training arguments.
```python
from trl import OnlineDPOConfig

training_args = OnlineDPOConfig(..., use_vllm=True)
```
</hfoption>
<hfoptionid="GRPO">
First, start a vLLM server by running:
```bash
trl vllm-serve --model <model_name>
```
Then, run the training script and pass `use_vllm=True` in the training arguments.
```python
from trl import GRPOConfig

training_args = GRPOConfig(..., use_vllm=True)
```
You can customize the server configuration by passing additional arguments. For more information, see [vLLM integration](vllm_integration).
> [!WARNING]
> When using vLLM, ensure that the GPUs assigned for training and generation are separate to avoid resource conflicts. For instance, if you plan to use 4 GPUs for training and another 4 for vLLM generation, you can specify GPU allocation using `CUDA_VISIBLE_DEVICES`.
</hfoption>
<hfoption id="RLOO">
Then, run the training script and pass `use_vllm=True` in the training arguments.
```python
from trl import RLOOConfig

training_args = RLOOConfig(..., use_vllm=True)
```
You can customize the server configuration by passing additional arguments. For more information, see [vLLM integration](vllm_integration).
> [!WARNING]
> When using vLLM, ensure that the GPUs assigned for training and generation are separate to avoid resource conflicts. For instance, if you plan to use 4 GPUs for training and another 4 for vLLM generation, you can specify GPU allocation using `CUDA_VISIBLE_DEVICES`.
</hfoption>
</hfoptions>
The script in this example shows how to train a reward model for summarization, following the OpenAI Learning to Summarize from Human Feedback [paper](https://arxiv.org/abs/2009.01325). We've validated that the script can be used to train a small GPT2 model to slightly over 60% validation accuracy, which is aligned with the results from the paper. The model is available [here](https://huggingface.co/Tristan/gpt2_reward_summarization).
Here's an overview of the relevant files in the [trl repository](https://github.com/lvwerra/trl/tree/main/examples):
| File | Description |
|---|---|
| `scripts/reward_summarization.py` | For tuning the reward model. |
| `scripts/ds3_reward_summarization_example_config.json` | Can be used with the reward model script to scale it up to arbitrarily big models that don't fit on a single GPU. |
## Installation
```bash
pip install trl
pip install evaluate
# optional: deepspeed
pip install deepspeed
```
```bash
# If you want your reward model to follow the Learning to Summarize from Human Feedback paper closely, then tune a GPT model on summarization and then instantiate the reward model
# with it. In other words, pass in the name of your summarization-finetuned gpt on the hub, instead of the name of the pretrained gpt2 like we do in the following examples of how
# to run this script.
# Example of running this script with the small size gpt2 on a 40GB A100 (A100's support bf16). Here, the global batch size will be 64:
```
[Trackio](https://huggingface.co/docs/trackio) is a lightweight, free experiment tracking library built on top of **🤗 Datasets** and **🤗 Spaces**. It is the **recommended tracking solution for TRL** and comes natively integrated with all trainers.
To enable logging, simply set `report_to="trackio"` in your training config:
```python
from trl import SFTConfig  # works with any trainer config (e.g. DPOConfig, GRPOConfig, etc.)

training_args = SFTConfig(
    ...,
    report_to="trackio",  # enable Trackio logging
)
```
## Organizing Your Experiments with Run Names and Projects
By default, Trackio will generate a name to identify each run. However, we highly recommend setting a descriptive `run_name` to make it easier to organize experiments. For example:
```python
from trl import SFTConfig

training_args = SFTConfig(
    ...,
    report_to="trackio",
    run_name="sft_qwen3-4b_lr2e-5_bs128",  # descriptive run name
)
```
You can also group related experiments by project by setting the following environment variable:
```bash
export TRACKIO_PROJECT="my_project"
```
## Hosting Your Logs on 🤗 Spaces
Trackio has a local-first design, meaning your logs stay on your machine. If you'd like to host them and deploy a dashboard on **🤗 Spaces**, set the corresponding Trackio environment variable to point to a Space (see the [Trackio documentation](https://huggingface.co/docs/trackio) for details).
At TRL we support PPO (Proximal Policy Optimisation) with an implementation that largely follows the structure introduced in the paper "Fine-Tuning Language Models from Human Preferences" by D. Ziegler et al. [[paper](https://arxiv.org/pdf/1909.08593.pdf), [code](https://github.com/openai/lm-human-preferences)].
The Trainer and model classes are largely inspired by the `transformers.Trainer` and `transformers.AutoModel` classes, adapted for RL.
Unsloth is an open‑source framework for fine‑tuning and reinforcement learning that trains LLMs (like Llama, OpenAI gpt-oss, Mistral, Gemma, DeepSeek, and more) up to 2× faster with up to 80% less VRAM. Unsloth allows [training](https://huggingface.co/docs/trl/en/unsloth_integration#Training), evaluation, running and [deployment](https://huggingface.co/docs/trl/en/unsloth_integration#Saving-the-model) with other inference engines like llama.cpp, Ollama and vLLM.
The library provides a streamlined, Hugging Face compatible workflow for training, evaluation, inference and deployment and is fully compatible with [`SFTTrainer`].
## Key Features
- Training support for all transformer compatible models: Text-to-speech (TTS), multimodal, BERT, RL and more
- Supports full fine-tuning, pretraining, LoRA, QLoRA, 8-bit training & more
- Works on Linux, Windows, Colab, Kaggle; NVIDIA GPUs, soon AMD & Intel setups
- Supports most features TRL supports, including RLHF (GSPO, GRPO, DPO etc.)
- Hand-written Triton kernels and a manual backprop engine ensure no accuracy degradation (0% approximation error)
## Installation
### pip install
Local Installation (Linux recommended):
```sh
pip install unsloth
```
You can also install `unsloth` according to the [official documentation](https://docs.unsloth.ai/get-started/installing-+-updating). Once installed, you can incorporate unsloth into your workflow in a very simple manner; instead of loading [`~transformers.AutoModelForCausalLM`], you just need to load a `FastLanguageModel` as follows:
```python
import torch
from trl import SFTConfig, SFTTrainer
from unsloth import FastLanguageModel

max_length = 2048  # Supports automatic RoPE Scaling, so choose any number

# Load the model and tokenizer (the checkpoint name below is only an example)
model, tokenizer = FastLanguageModel.from_pretrained(model_name="unsloth/Qwen3-0.6B", max_seq_length=max_length)
```
The saved model is fully compatible with Hugging Face's transformers library. Learn more about unsloth in their [official repository](https://github.com/unslothai/unsloth).
### Docker Install
```sh
docker run -d -e JUPYTER_PASSWORD="mypassword" \
-p 8888:8888 -p 2222:22 \
-v $(pwd)/work:/workspace/work \
--gpus all \
unsloth/unsloth
```
Access Jupyter Lab at ```http://localhost:8888``` and start fine-tuning!
## Training
These are some core settings you can toggle before training; a configuration sketch follows the list:
- `max_seq_length = 2048` – Controls context length. While Llama-3 supports 8192, we recommend 2048 for testing. Unsloth enables 4× longer context fine-tuning.
- `dtype = None` – Defaults to `None`; use `torch.float16` or `torch.bfloat16` for newer GPUs.
- `load_in_4bit = True` – Enables 4-bit quantization, reducing memory use 4× for fine-tuning. Disabling it enables LoRA 16-bit fine-tuning.
- To enable full fine-tuning (FFT), set `full_finetuning = True`. For 8-bit fine-tuning, set `load_in_8bit = True`. Note: only one training method can be set to `True` at a time.
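As a sketch, these toggles map onto `FastLanguageModel.from_pretrained` arguments roughly as follows (values and model name are illustrative):
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b",  # illustrative model name
    max_seq_length=2048,              # context length used for fine-tuning
    dtype=None,                       # None auto-detects; torch.float16/bfloat16 on newer GPUs
    load_in_4bit=True,                # 4-bit quantization; set to False for LoRA 16-bit
    # full_finetuning=True,           # or enable full fine-tuning (only one method at a time)
    # load_in_8bit=True,              # or 8-bit fine-tuning
)
```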
For more information on configuring Unsloth's hyperparameters and features, read their [documentation guide here](https://docs.unsloth.ai/get-started/fine-tuning-llms-guide).
## Saving the model
Unsloth lets you save the fine-tuned model directly as a small file called a LoRA adapter. You can also push it to the Hugging Face Hub if you want to upload your model. Remember to get a [Hugging Face token](https://huggingface.co/settings/tokens) and add it!
### Saving to GGUF
To save to GGUF, Unsloth uses llama.cpp. To save locally:
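A minimal sketch, assuming Unsloth's `save_pretrained_gguf` helper and a `q4_k_m` quantization method (verify the exact API against the version you have installed):
```python
# `model` and `tokenizer` come from FastLanguageModel.from_pretrained above
model.save_pretrained_gguf("gguf_model", tokenizer, quantization_method="q4_k_m")
```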
Once you have trained a model using either the SFTTrainer, PPOTrainer, or DPOTrainer, you will have a fine-tuned model that can be used for text generation. In this section, we'll walk through the process of loading the fine-tuned model and generating text. If you need to run an inference server with the trained model, you can explore libraries such as [`text-generation-inference`](https://github.com/huggingface/text-generation-inference).
## Load and Generate
If you have fine-tuned a model fully, meaning without using PEFT, you can load it like any other language model in Transformers. For example, the value head that was trained during PPO training is no longer needed, and if you load the model with the original transformers class it will be ignored:
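For example, a minimal sketch with plain Transformers (the checkpoint path is a placeholder):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: replace with your fine-tuned checkpoint directory or Hub repo id
model_name_or_path = "path/to/finetuned-model"

model = AutoModelForCausalLM.from_pretrained(model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)

inputs = tokenizer("The best programming language is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```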
You can also merge the adapters into the base model so you can use the model like a normal transformers model, however the checkpoint will be significantly bigger:
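A sketch of merging with `peft`'s `AutoPeftModelForCausalLM` (paths are placeholders):
```python
from peft import AutoPeftModelForCausalLM

# Placeholder: directory or Hub repo containing the trained adapter
model = AutoPeftModelForCausalLM.from_pretrained("path/to/adapter-checkpoint")
merged_model = model.merge_and_unload()          # fold the LoRA weights into the base model
merged_model.save_pretrained("path/to/merged")   # the saved checkpoint is now full-size
```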
Once you have the model loaded, and have either merged the adapters or kept them separate on top, you can run generation as with a normal model, as outlined above.
This document will guide you through the process of using vLLM with TRL for faster generation in online methods like GRPO and Online DPO. We first summarize a tl;dr on how to use vLLM with TRL, and then we will go into the details of how it works under the hood.
> [!WARNING]
> TRL currently only supports vLLM version `0.10.2`. Please ensure you have this version installed to avoid compatibility issues.
> [!TIP]
> The following trainers currently support generation with vLLM:
>
> - [`GRPOTrainer`]
> - [`OnlineDPOTrainer`]
> - [`NashMDTrainer`]
> - [`XPOTrainer`]
> - [`RLOOTrainer`]
## 🚀 How can I use vLLM with TRL to speed up training?
💡 **Note**: Resources required for this specific example: a single node with 8 GPUs.
> [!WARNING]
> When using vLLM with TRL, the **vLLM server** and the **trainer** must run on **separate CUDA devices** to prevent conflicts.
> For guidance on configuring this properly, see [Modes of using vLLM during training](#modes-of-using-vllm-during-training).
First, install vLLM using the following command:
```bash
pip install "trl[vllm]"
```
Then run the server on specific GPUs (e.g., GPUs 0-3):
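For example, a launch sketch consistent with the sharding described below (adjust GPU ids and parallel sizes to your setup):
```bash
CUDA_VISIBLE_DEVICES=0,1,2,3 trl vllm-serve --model Qwen/Qwen2.5-7B --tensor-parallel-size 2 --data-parallel-size 2
```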
Once the server is running, you can use it to generate completions for training. In the example below, any of the supported trainers can use the vLLM server for generation. The `--tensor-parallel-size` and `--data-parallel-size` arguments control how the model and data are sharded across GPUs.
In this example, we are sharding two copies of the model across 4 GPUs. Increasing data parallelism increases throughput, while increasing tensor parallelism allows for serving larger models. Then, run the training script on different GPUs (e.g., GPUs 4-7) by passing `use_vllm=True` in the training arguments as follows:
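A minimal GRPO sketch, where the dataset and reward function are placeholders for illustration:
```python
# train_grpo.py: launch on the training GPUs, e.g.
#   CUDA_VISIBLE_DEVICES=4,5,6,7 accelerate launch train_grpo.py
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")  # placeholder dataset

def reward_len(completions, **kwargs):
    # Toy reward for illustration: prefer completions close to 50 characters
    return [-abs(50 - len(completion)) for completion in completions]

training_args = GRPOConfig(output_dir="Qwen2.5-7B-GRPO", use_vllm=True)
trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-7B",
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```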
### 🎬 Flashback: Why do we need to use vLLM in online methods?
Online methods like GRPO or Online DPO require the model to generate completions during training, which are then used to compute reward signals. However, generation can be extremely time-consuming, especially with large or reasoning models. In the default setup (without vLLM), completions are generated using the [(unwrapped) model's `generate` method](https://github.com/huggingface/trl/blob/f3e8c2304428ef16e9ae5de9e5741ed84d533b7b/trl/trainer/grpo_trainer.py#L965C39-L965C66). This approach quickly becomes a major bottleneck — generation is slow and inefficient, particularly for large batches or models. As a result, training times increase significantly, and overall efficiency drops. To address this, we turn to vLLM, which enables much faster and more scalable generation, helping eliminate this bottleneck in online methods.
### 🤔 How does vLLM solve the slow generation issue?
If you've ever done autoregressive decoder training, you know all the input tokens to the LLM produce their attention key and value tensors, and these tensors are kept in GPU memory to later generate subsequent tokens based on them. These cached key and value tensors are often referred to as the KV cache. However, storing the KV cache occupies a lot of memory, so vLLM uses a technique called **PagedAttention** to solve this problem. PagedAttention, which is inspired by the OS’s virtual memory concept, stores continuous keys and values in **non-contiguous memory space**, which is much more efficient. The details of this are beyond the scope of this document, but in short, it allows the model to store the keys and values in a more efficient way, reducing the memory footprint and speeding up the generation process. If you are interested, make sure to check out the [vLLM PagedAttention](https://blog.vllm.ai/2023/06/20/vllm.html) for more details.
## How vLLM Works (Under the Hood) 🔍
### 🤔 What exactly happens when you run `trl vllm-serve --model <model_name>`?
1. vLLM first spawns multiple workers to handle incoming requests in parallel. The number of workers is determined by multiplying the `--tensor-parallel-size` and `--data-parallel-size` values. In this example, it spawns 4 workers (1 × 4).
Each worker operates independently and processes a chunk of the incoming requests — which are basically the prompts sent to the server for generation. A key point to understand is that these 4 workers are running in parallel, and each one is responsible for handling a subset of the total incoming load.
2. Once the incoming requests (prompts) are distributed across the workers, the model starts generating completions. Internally, the model’s weights are split across multiple GPUs based on the `--tensor-parallel-size` argument — this is how tensor parallelism is handled. Meanwhile, data parallelism (controlled by `--data-parallel-size`) ensures that different sets of requests are processed independently across the workers. In short: tensor parallelism splits the model across GPUs, and data parallelism splits the batch of requests across different model replicas.
3. Although the GPUs process requests independently and in parallel, they still need to communicate with each other. Remember that each GPU handles only a slice of the incoming prompts (for example, with 4 GPUs and 8 prompts using `--data-parallel-size=4`, each GPU processes 2 prompts).
This GPU-to-GPU communication is managed efficiently by NVIDIA’s NCCL library. The communication mainly ensures that each GPU gets its correct portion of the incoming requests — it’s lightweight and doesn’t interfere with generation itself.
Separately, the number of completions to generate per prompt is controlled by the `num_generations` setting in the GRPO config. For instance, if you set `num_generations=2`, each prompt will have 2 completions. So, with 8 prompts and `num_generations=2`, you would end up with 16 completions total, regardless of the number of GPUs or parallelism settings.
### 🥸 More detail on what happens under the hood when running the server
- The vLLM server starts by running the command: `trl vllm-serve --model Qwen/Qwen2.5-7B`.
- Once the server is running, it generates completions in response to requests from the client (trainer), which calls `vllm_client.generate` ([these lines](https://github.com/huggingface/trl/blob/cc044e35b285be7dc062764b3364e1e684db4c7c/trl/trainer/grpo_trainer.py#L1025-L1035)).
- The client (trainer) then requests these completions from the server.
- These completions are used to compute the reward signal.
- Based on the reward signal and the model’s output, the loss is computed, and the backward pass is performed to update the model’s weights.
- **Note**: The server only handles completion generation — it doesn’t train the model. Therefore, the model’s weights aren’t updated on the server. Once the backward pass is complete, the client sends the updated weights to the server using `vllm_client.update_named_param(name, param.data)`.
When using vLLM, ensure the GPUs assigned for training and generation are separate to avoid NCCL communication conflicts. If you do not set the `CUDA_VISIBLE_DEVICES` environment variable, the training script uses all available GPUs by default, which may lead to device conflicts. Starting from the first TRL release after v0.19.1, the code automatically detects and prevents same-device usage, raising an error in the vLLM server process:
```log
RuntimeError: Attempting to use the same CUDA device for multiple distinct roles/ranks within the same communicator.
Ensure that trainer is using different devices than vLLM server.
```
For example, if you want to use GPUs 4–7 for training while the server runs on GPUs 0-3, set:
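For example (the script name is a placeholder):
```bash
# Trainer on GPUs 4-7; the vLLM server occupies GPUs 0-3
CUDA_VISIBLE_DEVICES=4,5,6,7 accelerate launch train_grpo.py
```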
First and foremost, always remember that the optimal setup depends on:
- The model size
- The number of GPUs you have
- The GPU memory size
- The batch size you are using
- The number of requests you are sending to the server (prompts)
- The `max_model_len` you are using (this is the max length of the input sequence that the model can process, a.k.a. the context window size)
- The number of completions you are generating for each request (`num_generations`)
Given these factors, our experiments on the Qwen model family (3B, 7B, 14B, 32B) using 8 H100 GPUs show that:
- For reasonable-sized models (3B–14B) and a moderate context window (`max_len < 8k`), using full capacity for data parallelism gives better throughput. The setup `(tp=1, dp=8)` yields the best results.
- For larger models (32B) and longer context windows (`max_len > 8k`), a smaller DP size combined with some model-side parallelism performs better. For example, `(tp=2, dp=4)` is a good setup for 32B models with a larger context window, as in the command sketch below.
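For instance, the second setup might be launched as follows (the model name is illustrative):
```bash
trl vllm-serve --model Qwen/Qwen2.5-32B --tensor-parallel-size 2 --data-parallel-size 4
```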
### vLLM with Transformers Backend
vLLM can use the **Transformers backend** for model implementations, which works for both LLMs and VLMs.
To enable this, set `vllm_model_impl="transformers"` in your configuration or pass it via the command-line argument.
For more details, check out [vLLM Transformers Backend](https://blog.vllm.ai/2025/04/11/transformers-backend.html).
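For example, a sketch with [`GRPOConfig`], where the output directory is a placeholder:
```python
from trl import GRPOConfig

training_args = GRPOConfig(
    output_dir="my-model-GRPO",      # placeholder
    use_vllm=True,
    vllm_model_impl="transformers",  # use the Transformers backend inside vLLM
)
```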
Exploratory Preference Optimization (XPO) was proposed in the paper [Exploratory Preference Optimization: Harnessing Implicit Q*-Approximation for Sample-Efficient RLHF](https://huggingface.co/papers/2405.21046) by Tengyang Xie, Dylan J. Foster, Akshay Krishnamurthy, [Corby Rosset](https://huggingface.co/corbyrosset), [Ahmed Awadallah](https://huggingface.co/AhmedAwadallah), and Alexander Rakhlin. It is a simple online preference tuning method based on the DPO loss together with a reward model (RM). XPO augments the DPO objective with an exploration bonus allowing the method to explore outside the support of the initial model and human feedback data.
The abstract from the paper is the following:
> Reinforcement learning from human feedback (RLHF) has emerged as a central tool for language model alignment. We consider online exploration in RLHF, which exploits interactive access to human or AI feedback by deliberately encouraging the model to produce diverse, maximally informative responses. By allowing RLHF to confidently stray from the pre-trained model, online exploration offers the possibility of novel, potentially super-human capabilities, but its full potential as a paradigm for language model training has yet to be realized, owing to computational and statistical bottlenecks in directly adapting existing reinforcement learning techniques. We propose a new algorithm for online exploration in RLHF, Exploratory Preference Optimization (XPO), which is simple and practical -- a one-line change to (online) Direct Preference Optimization (DPO; Rafailov et al., 2023) -- yet enjoys the strongest known provable guarantees and promising empirical performance. XPO augments the DPO objective with a novel and principled exploration bonus, empowering the algorithm to explore outside the support of the initial model and human feedback data. In theory, we show that XPO is provably sample-efficient and converges to a near-optimal language model policy under natural exploration conditions, irrespective of whether the initial model has good coverage. Our analysis, which builds on the observation that DPO implicitly performs a form of Q*-approximation (or, Bellman error minimization), combines previously disparate techniques from language modeling and theoretical reinforcement learning in a serendipitous fashion through the perspective of KL-regularized Markov decision processes. Empirically, we find that XPO is more sample-efficient than non-exploratory DPO variants in a preliminary evaluation.
This post-training method was contributed by [Kashif Rasul](https://huggingface.co/kashif), [Quentin Gallouédec](https://huggingface.co/qgallouedec) and [Lewis Tunstall](https://huggingface.co/lewtun).
## Quick start
This example demonstrates how to train a model using the XPO method. We use the [Qwen 0.5B model](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) as the base model and [`PairRMJudge`] as a judge. We use the prompts from the [UltraFeedback dataset](https://huggingface.co/datasets/openbmb/UltraFeedback); you can browse the prompts in the dataset on the Hugging Face Hub.
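A minimal training sketch consistent with this setup (hyperparameters are omitted; adjust them to your hardware):
```python
# train_xpo.py (sketch)
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import PairRMJudge, XPOConfig, XPOTrainer

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
judge = PairRMJudge()
train_dataset = load_dataset("trl-lib/ultrafeedback-prompt", split="train")

training_args = XPOConfig(output_dir="Qwen2-0.5B-XPO")
trainer = XPOTrainer(
    model=model,
    judge=judge,
    args=training_args,
    processing_class=tokenizer,
    train_dataset=train_dataset,
)
trainer.train()
```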
Distributed across 8 GPUs, the training takes approximately 1 hour.
To see how the [trained model](https://huggingface.co/trl-lib/Qwen2-0.5B-XPO) performs, you can use the [Transformers Chat CLI](https://huggingface.co/docs/transformers/quicktour#chat-with-text-generation-models).
<pre><code>The best programming language depends on individual preferences and familiarity with coding concepts. Some popular languages include Python, Java, C++, and JavaScript.
</code></pre>
## Expected dataset type
XPO requires a [prompt-only dataset](dataset_formats#prompt-only). The [`XPOTrainer`] supports both [conversational](dataset_formats#conversational) and [standard](dataset_formats#standard) dataset formats. When provided with a conversational dataset, the trainer will automatically apply the chat template to the dataset.
## Usage tips
### Use a reward model
Instead of a judge, you can choose to use a reward model -- see [Reward Bench](https://huggingface.co/spaces/allenai/reward-bench) for a leaderboard of public models you can use. Below is a code example showing how to replace a judge with the [trl-lib/Qwen2-0.5B-Reward](https://huggingface.co/trl-lib/Qwen2-0.5B-Reward) model:
```diff
- from trl import PairRMJudge
+ from transformers import AutoModelForSequenceClassification

- judge = PairRMJudge()
+ reward_model = AutoModelForSequenceClassification.from_pretrained("trl-lib/Qwen2-0.5B-Reward", num_labels=1)
```
> Make sure that the SFT model and reward model use the _same_ chat template and the same tokenizer. Otherwise, you may find the model completions are scored incorrectly during training.
### Encourage EOS token generation
When using a reward model, we may want the model to generate completions within a given length. During training, the model will generate completions up to the maximum length specified in the `max_new_tokens` argument of [`XPOConfig`]. If you want to penalize the model for not generating an EOS token before reaching the maximum length, you can use the `missing_eos_penalty` argument of [`XPOConfig`]:
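For example (the values are illustrative):
```python
training_args = XPOConfig(
    ...,
    max_new_tokens=128,       # cap completion length during generation
    missing_eos_penalty=1.0,  # penalty applied when no EOS token is generated
)
```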
We provide an example script to train a model using the XPO method. The script is available in [`examples/scripts/xpo.py`](https://github.com/huggingface/trl/blob/main/examples/scripts/xpo.py).
To test the XPO script with the [Qwen2.5 0.5B model](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the [UltraFeedback dataset](https://huggingface.co/datasets/openbmb/UltraFeedback), run the following command:
```bash
python examples/scripts/xpo.py \
--model_name_or_path Qwen/Qwen2.5-0.5B-Instruct \
--judge pair_rm \
--dataset_name trl-lib/ultrafeedback-prompt \
--learning_rate 5.0e-7 \
--output_dir Qwen2.5-0.5B-XPO-PairRM \
--warmup_ratio 0.1 \
--push_to_hub
```
## Logged metrics
While training and evaluating we record the following reward metrics:
* `loss/xpo`: The mean XPO part of the full loss.
* `loss/dpo`: The mean DPO part of the full loss.
* `objective/kl`: The mean KL divergence between the model and reference data.
* `objective/entropy`: The mean entropy of the model and reference data.
* `objective/model_scores`: The mean scores (according to the reward model) of the model completions.
* `objective/ref_scores`: The mean scores (according to the reward model) of the reference completions.
* `objective/scores_margin`: The mean score margin (according to the external reward model) between the chosen and rejected completions.
* `rewards/chosen`: The mean reward (according to XPO's DPO implicit reward model) of the chosen completions.
* `rewards/rejected`: The mean reward (according to XPO's DPO implicit reward model) of the rejected completions.
* `rewards/accuracies`: The accuracies of XPO's implicit reward model.
* `rewards/margins`: The mean reward margin (according to online DPO's implicit reward model) between the chosen and rejected completions.
* `logps/chosen`: The mean log probabilities of the chosen completions.
* `logps/rejected`: The mean log probabilities of the rejected completions.
* `val/model_contain_eos_token`: The number of times the model's output contains the EOS token.
* `val/ref_contain_eos_token`: The number of times the reference's output contains the EOS token.
* `alpha`: The weight of the XPO loss term. Typically fixed, but can be made dynamic by passing a list to [`XPOConfig`].
* `beta`: The parameter that controls the weight of the loss term representing the deviation from the reference model. Typically fixed, but can be made dynamic by passing a list to [`XPOConfig`].
The notebooks and scripts in these examples show how to fine-tune a model with a sentiment classifier (such as `lvwerra/distilbert-imdb`).
Here's an overview of the notebooks and scripts:
| File | Description |
|---|---|
| `notebooks/gpt2-sentiment.ipynb` | Fine-tune GPT2 to generate positive movie reviews. |
| `notebooks/gpt2-sentiment-control.ipynb` | Fine-tune GPT2 to generate movie reviews with controlled sentiment. |
| `scripts/gpt2-sentiment.py` | Same as the notebook, but easier to use in a multi-GPU setup. |
| `scripts/t5-sentiment.py` | Same as GPT2 script, but for a Seq2Seq model (T5). |
## Installation
```bash
pip install trl
# optional: wandb
pip install wandb
```
Note: if you don't want to log with `wandb`, remove `log_with="wandb"` in the scripts/notebooks. You can also replace it with your favourite experiment tracker that's [supported by `accelerate`](https://huggingface.co/docs/accelerate/usage_guides/tracking).
## Launch scripts
The `trl` library is powered by `accelerate`. As such, it is best to configure and launch training runs with the following commands:
```bash
accelerate config # will prompt you to define the training configuration
accelerate launch scripts/gpt2-sentiment.py # launches training
```
# Summarization Example
The script in this example shows how to train a reward model for summarization, following the OpenAI Learning to Summarize from Human Feedback [paper](https://arxiv.org/abs/2009.01325). We've validated that the script can be used to train a small GPT2 to get slightly over 60% validation accuracy, which is aligned with results from the paper. The model is [here](https://huggingface.co/Tristan/gpt2_reward_summarization).
Here's an overview of the files:
| File | Description |
|---|---|
| `scripts/reward_summarization.py` | For tuning the reward model. |
| `scripts/ds3_reward_summarization_example_config.json` | Can be used with the reward model script to scale it up to arbitrarily big models that don't fit on a single GPU. |
## Installation
```bash
pip install trl
pip install evaluate
# optional: deepspeed
pip install deepspeed
```
```bash
# If you want your reward model to follow the Learning to Summarize from Human Feedback paper closely, then tune a GPT model on summarization and then instantiate the reward model
# with it. In other words, pass in the name of your summarization-finetuned gpt on the hub, instead of the name of the pretrained gpt2 like we do in the following examples of how
# to run this script.
# Example of running this script with the small size gpt2 on a 40GB A100 (A100s support bf16). Here, the global batch size will be 64:
```