25 Commits

SHA1 Message Date
8265800abf Fix trl-internal-testing/tiny-DbrxForCausalLM (#4213) 2025-10-06 15:11:16 -06:00
45ee98b05e Replace unittest with pytest (#4188) 2025-10-06 11:14:54 +02:00
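
A rough sketch of what the unittest-to-pytest conversion in #4188 typically looks like; the helper and test names below are hypothetical, not taken from the TRL suite:

```python
import unittest

import pytest


def double(xs):
    """Trivial stand-in for the code under test."""
    return [2 * x for x in xs]


# Before: unittest style — class-based, assertion methods
class DoubleTest(unittest.TestCase):
    def test_double(self):
        self.assertEqual(double([1, 2, 3]), [2, 4, 6])


# After: pytest style — plain functions, bare asserts, parametrization
@pytest.mark.parametrize("xs, expected", [([1], [2]), ([1, 2, 3], [2, 4, 6])])
def test_double(xs, expected):
    assert double(xs) == expected
```
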
208e9f7df7 📏 torch_dtype to dtype everywhere (#4000)
Co-authored-by: Quentin Gallouédec <gallouedec.quentin@gmail.com>
2025-09-03 15:45:37 -06:00
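
The rename in #4000 appears to track the transformers move from the `torch_dtype` keyword to `dtype` in `from_pretrained`. A hedged before/after sketch, assuming a transformers version recent enough to accept `dtype` (the model ID is only an example):

```python
import torch
from transformers import AutoModelForCausalLM

# Before: the older keyword
model = AutoModelForCausalLM.from_pretrained("gpt2", torch_dtype=torch.float16)

# After: the newer keyword used throughout the tests
model = AutoModelForCausalLM.from_pretrained("gpt2", dtype=torch.float16)
```
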
515e9eb255 [CI] Modify tests to handle device allocation for models (#3962) 2025-08-27 17:23:37 +02:00
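
A minimal sketch of explicit device allocation in a test, assuming the usual CUDA-if-available pattern; the model ID is borrowed from the tiny test model mentioned above and the rest is illustrative, not the code from #3962:

```python
import torch
from transformers import AutoModelForCausalLM

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Place the tiny test model on the chosen device explicitly instead of relying on defaults.
model = AutoModelForCausalLM.from_pretrained("trl-internal-testing/tiny-DbrxForCausalLM").to(device)
input_ids = torch.tensor([[1, 2, 3]], device=device)
logits = model(input_ids).logits
```
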
7ee8f796ff 👔 HF Doc Builder style (#3498) 2025-08-14 18:58:12 -07:00
f5b1ed24a0 Replaced unittest.TestCase with TrlTestCase that handles tmp dir (#3863) 2025-08-12 12:37:19 -07:00
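
A hedged sketch of what a base class that "handles tmp dir" might look like; this is an approximation of the idea in #3863, not the actual `TrlTestCase` implementation:

```python
import shutil
import tempfile
import unittest


class TrlTestCase(unittest.TestCase):
    """Base class that gives every test its own temporary directory."""

    def setUp(self):
        super().setUp()
        self.tmp_dir = tempfile.mkdtemp()

    def tearDown(self):
        shutil.rmtree(self.tmp_dir, ignore_errors=True)
        super().tearDown()


class ExampleTest(TrlTestCase):  # hypothetical subclass
    def test_writes_checkpoint(self):
        # Trainer tests can pass self.tmp_dir as output_dir without manual cleanup.
        path = f"{self.tmp_dir}/checkpoint.txt"
        with open(path, "w") as f:
            f.write("ok")
        self.assertTrue(path.endswith("checkpoint.txt"))
```
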
2f4cb38f28 📐 Fix CI and GeometricMixtureWrapper (#3779) 2025-07-26 16:15:08 -06:00
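
For context, a geometric mixture of two token distributions p and q with coefficient α is proportional to p^(1−α)·q^α, i.e. a linear interpolation of log-probabilities followed by renormalization. A small sketch of that underlying math (not the wrapper's actual code):

```python
import torch
import torch.nn.functional as F


def geometric_mixture_logprobs(model_logits, ref_logits, mixture_coef=0.5):
    """log p_mix ∝ (1 - α)·log p_model + α·log p_ref, renormalized over the vocabulary."""
    mixed = (1 - mixture_coef) * F.log_softmax(model_logits, dim=-1) + mixture_coef * F.log_softmax(ref_logits, dim=-1)
    return F.log_softmax(mixed, dim=-1)  # renormalize so the mixture sums to 1


logits_a = torch.randn(1, 5, 32)  # (batch, seq, vocab) dummy logits
logits_b = torch.randn(1, 5, 32)
print(geometric_mixture_logprobs(logits_a, logits_b).exp().sum(-1))  # ≈ 1 everywhere
```
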
e4b586a389 👔 Apply doc-builder style (#3615)
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
2025-06-19 12:02:51 +02:00
227df8271e ♾️ [CI] Remove test_raise_error_not_causallm (#3265) 2025-04-09 10:39:36 -07:00
9df19e8a75 📜 Fix license and copyrights (#3264) 2025-04-08 15:22:58 -07:00
2fe2337067 🏃 Migrate CI to self-hosted runners (#3174) 2025-03-29 11:56:44 -07:00
1d23ecc36f ©️ Update copyrights year (#2547)
* happy new year

* fix wandb import sort
2025-01-07 14:53:09 +01:00
9410874787 ©️ Copyrights update (#2454)
* First changes

* Other files

* Finally

* rm comment

* fix nashmd

* Fix example

* Fix example [ci skip]
2024-12-10 10:40:00 +01:00
94e4135a17 🔓 Remove lm_head check in AutoModelForCausalLMWithValueHead (#2398)
* Remove lm_head check in `AutoModelForCausalLMWithValueHead`

* Style

* Remove test
2024-11-29 15:52:35 +01:00
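
For reference, the class touched by #2398 wraps a causal LM and adds a scalar value head. A hedged usage sketch (the checkpoint is only an example, and the comment summarizes the commit title rather than the diff):

```python
import torch
from trl import AutoModelForCausalLMWithValueHead

# Wrap an ordinary causal LM checkpoint with a scalar value head; per the commit above,
# the wrapper no longer insists on the base model exposing an `lm_head` attribute by name.
model = AutoModelForCausalLMWithValueHead.from_pretrained("gpt2")
input_ids = torch.tensor([[1, 2, 3]])
lm_logits, loss, value = model(input_ids)
print(value.shape)  # one value estimate per input token
```
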
453db5cd79 🤏 New models for tests (#2287)
* first commit

* uncomment

* other tests adaptations

* Remove unused variable in test_setup_chat_format

* Remove unused import statement

* style

* Add Bart model

* Update BCOTrainerTester class in test_bco_trainer.py

* Update model IDs and tokenizers in test files

* Add new models and processors

* Update model IDs in test files

* Fix formatting issue in test_dataset_formatting.py

* Refactor dataset formatting in test_dataset_formatting.py

* Fix dataset sequence length in SFTTrainerTester

* Remove tokenizer

* Remove print statement

* Add reward_model_path and sft_model_path to PPO trainer

* Fix tokenizer padding issue

* Add chat template for testing purposes in PaliGemma model

* Update PaliGemma model and chat template

* Increase learning rate to speed up test

* Update model names in run_dpo.sh and run_sft.sh scripts

* Update model and dataset names

* Fix formatting issue in test_dataset_formatting.py

* Fix formatting issue in test_dataset_formatting.py

* Remove unused chat template

* Update model generation script

* additional models

* Update model references in test files

* Remove unused imports in test_online_dpo_trainer.py

* Add is_llm_blender_available import and update reward_tokenizer

* Refactor test_online_dpo_trainer.py: Move skipped test case decorator

* remove models without chat templates

* Update model names in scripts and tests

* Update model_id in test_modeling_value_head.py

* Update model versions in test files

* Fix formatting issue in test_dataset_formatting.py

* Update embedding model ID in BCOTrainerTester

* Update test_online_dpo_trainer.py with reward model changes

* Update expected formatted text in test_dataset_formatting.py

* Add reward_tokenizer to TestOnlineDPOTrainer

* fix tests

* Add SIMPLE_CHAT_TEMPLATE to T5 tokenizer

* Fix dummy_text format in test_rloo_trainer.py

* Skip outdated test for chatML data collator

* Add new vision language models

* Commented out unused model IDs in test_vdpo_trainer

* Update model and vision configurations in generate_tiny_models.py and test_dpo_trainer.py

* Update model and tokenizer references

* Don't push if it already exists

* Add comment explaining test skip

* Fix model_exists function call and add new models

* Update LlavaForConditionalGeneration model and processor

* `qgallouedec` -> `trl-internal-testing`
2024-11-25 16:31:56 +01:00
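
A hedged sketch of how such tiny test models are typically generated and uploaded; the config values and repo name are illustrative, not the actual generation script from #2287:

```python
from huggingface_hub import HfApi
from transformers import LlamaConfig, LlamaForCausalLM

# A deliberately tiny configuration so CI downloads and forward passes stay fast.
config = LlamaConfig(
    vocab_size=1024,
    hidden_size=16,
    intermediate_size=32,
    num_hidden_layers=2,
    num_attention_heads=4,
    num_key_value_heads=2,
)
model = LlamaForCausalLM(config)

repo_id = "trl-internal-testing/tiny-LlamaForCausalLM"  # hypothetical repo name
if not HfApi().repo_exists(repo_id):  # "Don't push if it already exists"
    model.push_to_hub(repo_id)
```
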
24fb32733f 🔧 Use standard unittest assertion methods (#2283)
* WIP: Partial unit test update

* Update unittest format

* Update tests/slow/test_sft_slow.py comment

* Refactor unit tests: replace pytest.raises with self.assertRaises

* Fix: Restore accidentally deleted 'ref_model' parameter in DPOTrainer

* Re-run pre-commit

* fix: Incorrectly replacing non-TestCase assert

---------

Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
2024-10-31 15:10:43 +01:00
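
A small before/after sketch of the kind of rewrite #2283 describes, focused on exception checks; the condition being tested is invented for illustration:

```python
import unittest


def validate_batch_size(n):
    """Trivial stand-in for the code under test."""
    if n <= 0:
        raise ValueError("batch size must be positive")
    return n


class ConfigTest(unittest.TestCase):  # hypothetical test case
    def test_rejects_negative_batch_size(self):
        # Before: `with pytest.raises(ValueError): ...` mixed into a TestCase
        # After: the standard unittest assertion methods
        with self.assertRaises(ValueError):
            validate_batch_size(-1)
        self.assertEqual(validate_batch_size(8), 8)
```
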
e615974a03 ♾️ Fix test generation max_new_tokens (#2272)
* `eval_strategy="steps" if eval_dataset else "no"`

* tmp skip test

* drop `eval_strategy` in `test_sft_trainer_uncorrect_data`

* remove eval strategy

* Add parameterized test for generate method

* Revert "`eval_strategy="steps" if eval_dataset else "no"`"

This reverts commit 1e8b331fa2c222a699cb3563f44f5702a7d6f50b.

* Revert "tmp skip test"

This reverts commit 44558f84cc43e20254b567d608b44d059a14913b.

* Revert "drop `eval_strategy` in `test_sft_trainer_uncorrect_data`"

This reverts commit a1ef7016286649fce10b3665159abcbfac2219e3.

* Revert "remove eval strategy"

This reverts commit cb7fafa874b108ba91b29f15944b7c4a41705d6d.

* style

* Refactor test_generate method in test_modeling_value_head.py

* `max_new_tokens=9`
2024-10-24 20:20:01 +02:00
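
A hedged sketch of a parameterized generation test along the lines described above; the model list and assertion are illustrative, with `parameterized.expand` standing in for however the actual test enumerates models:

```python
import unittest

from parameterized import parameterized
from transformers import AutoModelForCausalLM, AutoTokenizer


class GenerateTest(unittest.TestCase):  # hypothetical test class
    @parameterized.expand([("gpt2",)])  # the model list is illustrative
    def test_generate(self, model_id):
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(model_id)
        inputs = tokenizer("Hello", return_tensors="pt")
        # Cap generation length so the test stays fast, in the spirit of `max_new_tokens=9`.
        output = model.generate(**inputs, max_new_tokens=9, do_sample=False)
        self.assertLessEqual(output.shape[1], inputs["input_ids"].shape[1] + 9)
```
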
b580e45c94 [WIP] Drop save/load test on windows (#1897)
* just test modelling

* Trigger CI

* always trigger

* only test_from_save_trl

* parametrize

* just one model

* file

* rm ref model

* assert exists

* style

* Update Makefile

* Update tests.yml

* Update Makefile

* Update test_modeling_value_head.py

* Update test_modeling_value_head.py

* skip windows

* skip test_from_save_transformers

* also skip test_from_save_trl

---------

Co-authored-by: Quentin Gallouédec <quentin.gallouedec@huggingface.co>
2024-08-05 16:01:06 +02:00
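
Skipping a platform-specific test is typically done with a decorator like the following sketch; the test body is illustrative, only the test name is taken from the commit above:

```python
import platform
import unittest


class SaveLoadTest(unittest.TestCase):  # hypothetical test case
    @unittest.skipIf(platform.system() == "Windows", "save/load is flaky on Windows runners")
    def test_from_save_trl(self):
        # The save/load round-trip would go here; it is skipped entirely on Windows.
        self.assertTrue(True)
```
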
6171cddee5 Re-add BigBird Pegasus save/load test (#1882)
Co-authored-by: Quentin Gallouédec <quentin.gallouedec@huggingface.co>
2024-07-28 15:51:38 +02:00
82b07d6b01 Llama in modelling value head tests (#1878) 2024-07-26 11:43:48 +02:00
72bf6c21be Skip BigBird save and load test until next transformers version (#1874) 2024-07-26 11:33:07 +02:00
ae8431bd50 Codemod unittest assertions to bare asserts (#1301)
* Remove stray commas from test data

* Codemod Unittest assertions to bare asserts

* Make `assertAlmostEqual` tests more idiomatic

* DRY some test strings
2024-02-01 23:49:03 +01:00
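
A short sketch of the `assertAlmostEqual` part of that codemod, using `pytest.approx` for the idiomatic bare-assert form; the values are illustrative:

```python
import pytest

measured = 0.1 + 0.2  # floating-point result, not exactly 0.3

# Before: unittest method
#     self.assertAlmostEqual(measured, 0.3, places=7)

# After: bare assert with an explicit tolerance
assert measured == pytest.approx(0.3, abs=1e-7)
```
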
951ca1841f [CI] Fix CI with new transformers release (#946)
* fix CI with transformers release

* final fix
2023-11-03 10:38:58 +01:00
ed87942a47 Add LlaMa in tests + create_reference_model (#261)
* add LlaMa in tests

* Update tests/test_modeling_value_head.py

* add warning message

---------

Co-authored-by: Nathan Lambert <nathan@huggingface.co>
2023-03-30 10:49:46 +02:00
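
For reference, `create_reference_model` builds a frozen copy of a policy model to compare against during training. A hedged usage sketch (the checkpoint is only an example):

```python
from trl import AutoModelForCausalLMWithValueHead, create_reference_model

model = AutoModelForCausalLMWithValueHead.from_pretrained("gpt2")
ref_model = create_reference_model(model)

# The reference copy keeps the original weights and is excluded from training.
assert all(not p.requires_grad for p in ref_model.parameters())
```
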
f1300ec811 add minibatching (#153)
* add minibatching

* all the fixes i missed

* more fixes

* add dedicated variable for mini batch size

* style

* minor fixes

* fix rewards

* unbiased variance estimation

* mask values/returns

* moar fixes

* style

* change structure and add moar tests

* Apply suggestions from code review

Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>

* deprecate `forward_batch_size`

* remove out of date warning about batching s2s and left padding models

* make style

* fixed failed merge

---------

Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
2023-02-23 15:24:20 +01:00
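
A rough sketch of the mini-batching idea this commit introduces: split one rollout batch into smaller optimization batches instead of stepping on the full batch at once. The batch sizes and tensors are placeholders, not the PPO trainer's actual code:

```python
import torch

batch_size = 8        # rollout batch collected per PPO step
mini_batch_size = 2   # dedicated variable, replacing the deprecated `forward_batch_size`

queries = torch.randn(batch_size, 16)   # placeholder rollout tensors
rewards = torch.randn(batch_size)

permutation = torch.randperm(batch_size)
for start in range(0, batch_size, mini_batch_size):
    idx = permutation[start:start + mini_batch_size]
    mb_queries, mb_rewards = queries[idx], rewards[idx]
    # ... forward pass, PPO loss, and optimizer step on the mini-batch only ...
```
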