21 Commits

Author SHA1 Message Date
d936478f07 ENH Make OFT faster and more memory efficient (#2575)
Make OFT faster and more memory efficient. This new version of OFT is
not backwards compatible with older checkpoints and vice versa. To load
older checkpoints, downgrade PEFT to 0.15.2 or lower.
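
A minimal usage sketch (the OFTConfig arguments below are illustrative
and may differ between PEFT versions; model name and target modules are
placeholders, not from this commit):

    from transformers import AutoModelForCausalLM
    from peft import OFTConfig, get_peft_model

    # Apply the reworked OFT implementation to a small model.
    base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
    config = OFTConfig(r=8, target_modules=["q_proj", "v_proj"])
    model = get_peft_model(base, config)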
2025-06-26 14:27:03 +02:00
bd893a8a36 TST Enable some further XPU tests to pass (#2596)
---------

Signed-off-by: YAO Matrix <matrix.yao@intel.com>
2025-06-23 14:51:49 +02:00
6c8c3c386e TST: Refactor remaining common tests to use pytest (#2491)
* Refactor test_adaption_prompt.py

- Did not really use PeftCommonTester, thus removed it
- Removed skip if llama or mistral not available
- Parametrized tests instead of duplicating
- Use small models from Hub instead of creating new ones
- Test coverage misses 3 more lines around checkpoint loading, most
  likely unrelated to adaption prompt itself but rather due to using Hub
  models instead of creating new ones

* Refactor test_feature_extraction.py

Pretty straightforward, test coverage is 100% identical.

* Refactor test_multitask_prompt_tuning

Same arguments apply as for test_adaption_prompt.py

* Refactor test_stablediffusion.py

This was pretty straightforward. After refactoring, the test coverage
was 100% the same.

I noticed, however, that these tests did not cover LoKr; they only
pretended to:

37f8dc3458/tests/test_stablediffusion.py (L113-L114)

Thus I added LoKr to the test matrix, after which the test coverage is
of course different, but that is fine.

* Skip LoKr merging tests when not CUDA

For some reason, the outputs differ after merging. However, I verified
locally that this was already the case before this refactor, so let's
just skip for now, as it is out of scope.
2025-05-02 11:19:32 +02:00
8c8b529b31 CI: More caching in tests to avoid 429 (#2472) 2025-04-02 18:09:31 +02:00
7af5adec29 TST Use different diffusion model for testing (#2345)
So far, the tests have been using
hf-internal-testing/tiny-stable-diffusion-torch for testing diffusion
models. However, this model has some issues:

- still uses pickle (.bin) instead of safetensors
- there is a FutureWarning because of the config

Now, using hf-internal-testing/tiny-sd-pipe instead, which doesn't have
those issues.
2025-01-28 12:31:32 +01:00
2a807359bd FIX Refactor OFT, small changes to BOFT (#1996)
The previous OFT implementation contained a few errors, which are fixed now.

Unfortunately, this makes previous OFT checkpoints invalid, which is why an
error will be raised. Users are instructed to either retrain the OFT adapter or
switch to an old PEFT version.
2024-10-01 16:51:18 +02:00
af275d2d42 ENH: Allow empty initialization of adapter weight (#1961)
This PR allows initializing the adapter weights as empty, i.e. on the
meta device, by passing low_cpu_mem_usage=True.

Why would this be useful? For PEFT training, it is indeed not useful, as
we need the real weights in order to train the model. However, when
loading a trained PEFT adapter, it is unnecessary to initialize the
adapters for real, as we override them with the loaded weights later.

In the grand scheme of things, loading the base model will typically be
much slower, but if the user loads, say, dozens of adapters, the
overhead could add up. Of course, besides loading the model, this has no
performance impact and is thus not a high priority feature.

For the time being, this is completely opt-in. However, it should be
safe to make this the default when loading adapters, so we may change
the default there in the future.
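
A minimal sketch of the opt-in usage when loading a trained adapter
(model name and adapter path are placeholders):

    from transformers import AutoModelForCausalLM
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
    # Adapter weights start out empty on the meta device and are then
    # replaced by the weights loaded from the checkpoint.
    model = PeftModel.from_pretrained(
        base, "path/to/adapter", low_cpu_mem_usage=True
    )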

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-09-23 11:13:51 +02:00
b180ae46f8 TST Fewer inference steps for stable diffusion (#2051)
Reduce the number of inference steps for stable diffusion tests. These
are the slowest tests on CI, so this should help (~3 min on average).
2024-09-06 09:57:56 +02:00
5268495213 FEAT Add HRA: Householder Reflection Adaptation (#1864)
Implements method from https://arxiv.org/abs/2405.17484.
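
A minimal usage sketch (model name and target modules are placeholders;
see the HRAConfig docs for the actual parameters):

    from transformers import AutoModelForCausalLM
    from peft import HRAConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
    # r is the number of Householder reflections per adapted layer.
    config = HRAConfig(r=8, target_modules=["q_proj", "v_proj"])
    model = get_peft_model(base, config)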
2024-07-16 14:37:32 +02:00
811169939f BOFT: Orthogonal Finetuning via Butterfly Factorization (#1326)
Implements https://hf.co/papers/2311.06243.
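
A minimal usage sketch (config values are illustrative, model name and
target modules are placeholders):

    from transformers import AutoModelForCausalLM
    from peft import BOFTConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
    # The butterfly factorization is controlled via boft_block_size (or
    # alternatively boft_block_num).
    config = BOFTConfig(boft_block_size=4, target_modules=["q_proj", "v_proj"])
    model = get_peft_model(base, config)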

---------

Co-authored-by: Zeju Qiu <zeju.qiu@gmail.com>
Co-authored-by: Yuliang Xiu <yuliangxiu@gmail.com>
Co-authored-by: Yao Feng <yaofeng1995@gmail.com>
2024-04-12 13:04:09 +02:00
5f2084698b TST Use plain asserts in tests (#1448)
Use pytest style asserts instead of unittest methods.

Use `pytest.raises` and `pytest.warns` where suitable.
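
For illustration, the conversion looks roughly like this (toy example,
not actual PEFT test code):

    import pytest

    def divide(a, b):
        return a / b

    # Before: self.assertEqual(divide(6, 3), 2)
    assert divide(6, 3) == 2

    # Before: with self.assertRaises(ZeroDivisionError): ...
    with pytest.raises(ZeroDivisionError):
        divide(1, 0)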
2024-02-14 16:43:47 +01:00
fc78a2491e MNT Move code quality fully to ruff (#1421) 2024-02-07 12:52:35 +01:00
a30e006bb2 fix critical bug in diffusers (#1427) 2024-02-01 13:21:29 +01:00
1a7433b136 TST Improve test for SD LoHa and OFT (#1210) 2023-12-05 18:12:39 +01:00
da17ac0f48 [Feature] Support OFT (#1160)
* Support OFT

* add test

* Update README

* fix code quality

* fix test

* Skip 1 test

* fix eps rule and add more test

* feat: added examples to new OFT method

* fix: removed wrong arguments from model example

* fix: changed name of inference file

* fix: changed prompt variable

* fix docs

* fix: dreambooth inference revision based on feedback

* fix: review from BenjaminBossan

* apply safe merge

* del partially

* refactor oft

* refactor oft

* del unused line

* del unused line

* fix skip in windows

* skip test

* Add comments about bias added place

* rename orig_weights to new_weights

* use inverse instead of linalg.inv

* delete alpha and scaling

---------

Co-authored-by: Lukas Kuhn <lukaskuhn.lku@gmail.com>
Co-authored-by: Lukas Kuhn <lukas.kuhn@deutschebahn.com>
2023-11-30 21:28:42 +05:30
884b1ac3a8 Add implementation of LyCORIS LoKr for SD&SDXL models (#978)
KronA-like adapter
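
A minimal sketch of injecting LoKr into the UNet of a small SD pipeline
(pipeline name and config values are placeholders):

    from diffusers import StableDiffusionPipeline
    from peft import LoKrConfig, LoKrModel

    pipe = StableDiffusionPipeline.from_pretrained(
        "hf-internal-testing/tiny-stable-diffusion-torch"
    )
    config = LoKrConfig(r=4, alpha=4, target_modules=["to_q", "to_v"])
    pipe.unet = LoKrModel(pipe.unet, config, "default")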
2023-10-30 15:36:41 +01:00
dfd99f61f8 TST: Comment out flaky LoHA test (#1002)
This test is flaky when running on Windows. It is probably related to
PyTorch 2.1, as this test used to work. Further investigation is needed.
2023-10-09 10:33:54 +02:00
7a5f17f39e FEAT Add LyCORIS LoHa for SD&SDXL models (#956)
https://arxiv.org/abs/2108.06098
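
A minimal sketch, here applied to the pipeline's text encoder (pipeline
name and config values are placeholders):

    from diffusers import StableDiffusionPipeline
    from peft import LoHaConfig, get_peft_model

    pipe = StableDiffusionPipeline.from_pretrained(
        "hf-internal-testing/tiny-stable-diffusion-torch"
    )
    config = LoHaConfig(r=4, alpha=4, target_modules=["q_proj", "v_proj"])
    pipe.text_encoder = get_peft_model(pipe.text_encoder, config)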
2023-10-02 10:44:51 +02:00
61a8e3a3bd [WIP] FIX for disabling adapter, adding tests (#683)
This PR deals with some issues with disabling adapters:

- typo in active.adapter
- prompt encoder could be on wrong device
- when using prompt learning + generate, disabling did not work

For the last point, there is a somewhat ugly fix in place for now,
pending a more comprehensive refactor (a comment was added to that
effect).

Comprehensive tests were added to check that everything works now.
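
For reference, the disabling behavior under test looks roughly like this
(a sketch; model name and config values are placeholders):

    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
    config = LoraConfig(task_type="CAUSAL_LM", target_modules=["q_proj", "v_proj"])
    model = get_peft_model(base, config)
    tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
    inputs = tokenizer("Hello", return_tensors="pt")

    # Inside the context, the adapter is bypassed and outputs should match
    # the base model; afterwards, the adapter is active again.
    with model.disable_adapter():
        disabled = model.generate(**inputs)
    enabled = model.generate(**inputs)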

The following tests are still not working:

- adaption prompt
- seq2seq with prompt tuning/prompt encoding
- stable diffusion is a little flaky, but the test is hopefully robust enough

Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
2023-07-14 14:33:33 +02:00
032fff92fb Fixed LoraConfig alpha modification on add_weighted_adapter (#654)
* Fixed LoraConfig modification on add_weighted_adapter

* Added test for issue with adding weighted adapter for LoRA

* Fixed formatting
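
The affected API, as a rough sketch (adapter names and weights are
placeholders; the fix ensures the source LoraConfig is not mutated):

    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
    model = get_peft_model(base, LoraConfig(target_modules=["q_proj", "v_proj"]))
    model.add_adapter("second", LoraConfig(target_modules=["q_proj", "v_proj"]))

    # Before the fix, combining adapters could modify lora_alpha on the
    # original configs as a side effect.
    model.add_weighted_adapter(
        adapters=["default", "second"],
        weights=[0.7, 0.3],
        adapter_name="merged",
        combination_type="linear",
    )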
2023-07-01 11:13:25 +05:30
66fd087205 [Bugfix] Fixed LoRA conv2d merge (#637)
* Fixed LoRA conv2d merge

* Fixed typo
2023-06-27 12:18:08 +05:30