Compare commits

...

362 Commits

Author SHA1 Message Date
6c624c4a6b WIP 2025-08-04 00:03:01 +00:00
9359a0194f Parallelism config + TP + HSDP + BYODM (Bring Your Own Device Mesh) (#3682)
* Feat: init

* Feat: add validation + init from kwargs

* Fix: minor fixes

* Feat: more cleanup

* Minor refactor

* remove import

* adding support for pre-configured device mesh

* adding device mesh to fsdp2

* moving mesh dim definition to ParallelismConfig

* tests

* WIP device mesh/accelerator validation

* WIP more tests

* Test Driven Development (TDD)

* fixing build_device_mesh

* FSDP dim names

* adding example

* WIP

* fixing HSDP

* Feat: add back old options

* working example

* debugging

* adding parallelism config to partialstate

* Feat: revert ddp changes

* Revert DDP

* Feat: (untested) update mesh dims and some minor tweaks

* adding dp_cp dims

* updating comments

* WIP

* wip 2

* reverting

* storing state in accelerator rather than acceleratorstate

* Fix: minor tweaks

* wip example update

* Fixes for non-fsdp2 case

* Feat: ensure ddp/tp only works

* updating example

* updating example

* updating examples, fixing state

* fixed state

* comments

* fixing partial state check

* linting

* comments

* removing fn

* WIP: fix tp

* comments

* removing return

* reverting upcast

* add guards

* guards for empty self.parallelism_config

* use len on tuple to check if empty

* Feat: cleanup example

* Feat: some cleanup of example

* Feat: add trackio

* Fix: improve trackio

* Feat: TP works

* Feat: some fsdp2 improv

* Feat: working examples

* handle clipping for tensor parallel

* Implicit replicate

* Refactor: move to separate file + cleanup + basic comments

* Fix: add unadded files, fix circular import

* Feat: better readme

* Feat: add blog + ultrascale links

* Tmp: should_save_model now returns only true

* Fix: remove implicit_replication and style

* Fix: remove optional

* add guard on parallelism_config.tp_enabled

* fix import

* fixing empty parallelism_config

* fix import path for test patch

* fixing patch

---------

Co-authored-by: S1ro1 <matej.sirovatka@gmail.com>
Co-authored-by: Salman Mohammadi <salman.mohammadi@outlook.com>
Co-authored-by: Wing Lian <wing@axolotl.ai>
2025-07-30 21:03:13 +02:00
2f075c724c set default submesh_tp_size to prevent unset local variable error (#3687)
* set default submesh_tp_size to prevent unset local variable error

* Apply style fixes

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-07-22 12:31:03 +02:00
7ecc2d7f39 bump to v1.10.0-release 2025-07-16 16:26:03 +00:00
12f89bb754 do not call partial state if not initialized 2025-07-16 13:42:58 +00:00
348aabaaaf Update Gaudi runner image to latest SynapseAI and enable previously disabled tests (#3653)
* update synapse and add tp tests

* only skip regional compile speedup check

* pass sdp test on hpu
2025-07-16 14:33:36 +02:00
3b13453bbf “Stop Halving My Batch!” · Default back-off 0.5 → 0.9 (#3684)
* feat(memory): change find_executable_batch_size's default back-off to reduce the batch size by 10% instead of 50%

* Update test_memory_utils.py

* Apply style fixes
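
For context, a minimal sketch of how `find_executable_batch_size` is typically used (the inner training function is a placeholder, not code from this PR): on an out-of-memory error the decorator frees memory and retries with a reduced batch size, and this change makes each retry shrink the batch by roughly 10% instead of halving it.

```
# Hedged sketch: illustrative use of accelerate's find_executable_batch_size decorator.
from accelerate import Accelerator
from accelerate.utils import find_executable_batch_size


def train(starting_batch_size: int = 128):
    accelerator = Accelerator()

    @find_executable_batch_size(starting_batch_size=starting_batch_size)
    def inner_training_loop(batch_size):
        # Called again with a smaller batch_size after every OOM
        # (previously halved; now reduced by roughly 10% per retry).
        accelerator.print(f"Trying batch size {batch_size}")
        ...  # placeholder: build dataloaders/model for `batch_size` and run the loop

    inner_training_loop()
```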

---------

Co-authored-by: Amit Moryossef <amitmoryossef@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-07-16 12:32:46 +02:00
0408ab12d7 warn for invalid keys (#3613)
* warn for invalid keys

* add test for check_device_map invalid keys

* Apply style fixes

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-07-16 12:23:41 +02:00
55e518a762 accelerate/data_loader.py: do not yield if the base_dataloader is empty (#3659)
* accelerate/data_loader.py: do not yield if the base_dataloader is empty

in the code:
```
        dataloader_iter = self.base_dataloader.__iter__()
        # We iterate one batch ahead to check when we are at the end
        try:
            current_batch = next(dataloader_iter)
        except StopIteration:
            yield
```

If the base dataloader is empty then the exception is raised but `yield`
yields nothing.

Then, at the point of:
```
if self.device is not None:
    current_batch = send_to_device(current_batch, self.device, non_blocking=self._non_blocking)
```

this would lead to an uncaught exception like:

 File "/root/rl-swarm/.venv/lib/python3.10/site-packages/accelerate/data_loader.py", line 575, in __iter__
    current_batch = send_to_device(current_batch, self.device, non_blocking=self._non_blocking)
UnboundLocalError: local variable 'current_batch' referenced before assignment

because `current_batch` was never assigned: `next(dataloader_iter)` raised `StopIteration`.
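
A minimal, self-contained sketch of the one-batch-look-ahead pattern with the guard described above (not the exact upstream patch): returning on `StopIteration` instead of yielding means `current_batch` is never read when the base dataloader is empty.

```
# Hedged sketch of the look-ahead iteration plus the empty-dataloader guard.
# send_to_device is accelerate's real utility; the generator itself is illustrative.
from accelerate.utils import send_to_device


def iterate_with_lookahead(base_dataloader, device=None):
    dataloader_iter = iter(base_dataloader)
    try:
        current_batch = next(dataloader_iter)
    except StopIteration:
        return  # empty dataloader: stop instead of `yield`, so current_batch is never touched
    while True:
        try:
            next_batch = next(dataloader_iter)
        except StopIteration:
            next_batch = None
        if device is not None:
            current_batch = send_to_device(current_batch, device)
        yield current_batch
        if next_batch is None:
            return
        current_batch = next_batch
```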

Signed-off-by: 0xnightwind <nightwind1899@gmail.com>

* Update src/accelerate/data_loader.py

---------

Signed-off-by: 0xnightwind <nightwind1899@gmail.com>
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2025-07-16 12:04:25 +02:00
7e11ac43f0 fix: wandb config not saved in offline mode (#3648)
* fix: wandb config not saved in offline mode

* Apply style fixes

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-07-15 17:51:44 +02:00
e2cc537db8 trackio (#3669)
* trackio

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: Abubakar Abid <abubakar@huggingface.co>

* seven -> eight

* Add trackio as a real tracker instead

* Sort

* Style

* Style

* Remove step

* Disable trackio on Python < 3.10

* Update src/accelerate/tracking.py

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* More style

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: Abubakar Abid <abubakar@huggingface.co>
2025-07-15 17:17:49 +02:00
847ae58c74 Fix FP8 tests, enable FP8 to be used without direct Accelerator() configuring (#3677)
* single-gpu tests passing

* install deepspeed in fp8 container

* revert mixed_precision check
2025-07-15 15:20:57 +02:00
6e104f31de unpin datasets (#3681) 2025-07-15 15:00:35 +02:00
524e5f9828 Speedup model loading by 4-5x in Diffusers (#3674)
* update

* update

* make style

* update

* merge if statements
2025-07-11 16:58:35 +02:00
d6c986c3f2 Bunch of FSDP improvements (#3671)
* Feat: split tests

* Feat: finito

* Fix

* Final, tests pass
2025-07-09 16:05:22 +02:00
1ac8643df7 xpu enablement on left cases (#3654)
* 1. enable XPU for launcher; 2. expand CUDA-only DeepSpeed UTs to XPU; 3. expand profiler example to XPU

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

* fix style

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

* rename

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

* Update profiler.py

* Apply style fixes

---------

Signed-off-by: YAO Matrix <matrix.yao@intel.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-07-07 18:10:53 +02:00
07ce74868c Fix: properly error when DDP + Dtensor model (#3629)
* Feat: add check

* Refactor: nits
2025-06-27 01:33:45 +02:00
175fe91589 Added a check in the no_sync() function to avoid errors when using deepspeed zero2/3. (#3656) 2025-06-26 14:39:04 +02:00
fe16ce8bce Fix fsdp2 example (#3657) 2025-06-26 14:08:51 +02:00
5987d79a53 Update gradient_accumulation.md (#3649) 2025-06-23 11:58:31 +02:00
31af8d4e8e shards (#3645) 2025-06-20 11:24:20 +02:00
b7493a82b1 Add support for e5e2 and default to hybrid when launcher is used (#3640)
* add support for e5e2 and default to hybrid when launcher is used

* style
2025-06-20 11:11:32 +02:00
a16d2bb3c1 bump to v1.9.0dev 2025-06-19 15:13:41 +02:00
cac22ed980 fix grad acc deepspeed (#3638)
* fix grad acc deepspeed

* style
2025-06-19 12:06:21 +02:00
be826a6b7b Fix: correct labels (#3637) 2025-06-19 11:01:56 +02:00
5939640829 Feat: add cpu offload (#3636) 2025-06-18 18:13:45 +02:00
7f9c8cbe34 [DeepSpeed] sync gradient accum steps from deepspeed plugin (#3632)
* sync steps

* add a debug log when overriding

* make grad accum always consistent

* remove debug
2025-06-18 16:45:57 +02:00
9888c7ed23 feat: use datasets.IterableDataset shard if possible (#3635)
* feat: use datasets.IterableDataset shard if possible.

When `accelerator.prepare` is called on a
`datasets.IterableDataset`, use its `shard` method to
split the dataset across the available processes. This
allows for more efficient data loading and processing,
without the load-and-slice overhead of `IterableDatasetShard`
(a minimal sketch follows the list below).

* dataset

* remove unused import

* style
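
A hedged sketch of the sharding idea above, assuming a `datasets` release in which `IterableDataset.shard` exists; the generator-backed dataset is a placeholder and the process indices come from Accelerate's `PartialState`.

```
# Hedged sketch: shard a datasets.IterableDataset across processes directly,
# instead of wrapping it in IterableDatasetShard.
from accelerate import PartialState
from datasets import IterableDataset


def gen():
    for i in range(100):
        yield {"idx": i}


state = PartialState()
stream = IterableDataset.from_generator(gen)
shard = stream.shard(num_shards=state.num_processes, index=state.process_index)

for example in shard.take(4):
    print(f"process {state.process_index} got {example['idx']}")
```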

---------

Co-authored-by: wuwenxu.01 <wuwenxu.01@bytedance.com>
2025-06-18 16:45:17 +02:00
42a68c30dc Fix Typos in Documentation and Comments (#3621)
* Update state.py

* Update tracking.py
2025-06-18 15:53:02 +02:00
6597dae780 Integrate SwanLab for offline/online experiment tracking for Accelerate (#3605)
* add support for SwanLabTracker and update related documentation

* add emoji in FRAMEWORK

* apply the style corrections and quality control

* add support for SwanLabTracker in tests

* fix bug in test_tracking
2025-06-18 15:42:29 +02:00
8878d93745 remove hardcoded cuda from fsdpv2 (#3631) 2025-06-17 14:32:10 +02:00
2eaf5cdbbc remove ipex.optimize in accelerate (#3608)
* remove ipex.optimize in accelerate

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

* fix mis-style

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

* Update intel_cpu.md

* Update launch.py

* fix comments

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

* fix style

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

* add logging

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

* Update launch.py

* Apply style fixes

---------

Signed-off-by: YAO Matrix <matrix.yao@intel.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-06-17 11:08:19 +02:00
23c1d8db89 [Deepspeed] deepspeed auto grad accum (#3630)
* deepspeed auto grad accum

* add tests for grad accum

* use tiny-random-gpt2

* Update tests/deepspeed/test_deepspeed_gradient_accumulation.py

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* fix redundant code

* set_gradient_accumulation_boundary is always there

* remove unused helper

* no need for this

* full revert

* Apply style fixes

* get_global_grad_norm is always there

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-06-16 16:28:24 +02:00
0af621bbec add xpu support in TorchTensorParallelPlugin (#3627)
* add xpu support in TorchTensorParallelPlugin

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

* fix typo

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

---------

Signed-off-by: YAO Matrix <matrix.yao@intel.com>
2025-06-13 17:45:51 +02:00
bee04f1b01 Add fp8_e5m2 support in dtype_byte_size (#3625)
* float8_e5m2 device_map

* remove prints
2025-06-12 16:27:32 +02:00
8a953f08c6 fix xpu 8bit value loading (#3623)
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
2025-06-12 14:55:14 +02:00
3518c03584 small fix (#3619) 2025-06-11 14:02:45 +02:00
2f8fd72e51 Remove device_count (#3587) 2025-06-10 14:50:34 +02:00
d2e6b0313d [FSDP2] Refactor + FP8 (#3585)
* Fix double wrap

* Clocking off, ~equal to torch baseline

* works?

* Working version

* Partial rewrite

* FSDP2 path works

* Fix back prepare

* Almost done, proper AC left

* Feat: should work, cleanup + test more benchmarks left

* Style+quality

* Feat: fp8 example

* Feat: better example

* Feat: add readme

* Docs + should be done

* Fix: typos

* Fix: protect imports

* Feat: address comments

* Feat: add flops image
2025-06-10 14:26:48 +02:00
b9fee48c85 better handle FP8 with and without deepspeed (#3611)
* use the state mixed precision which has undergone all preprocessing

* Update src/accelerate/accelerator.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update src/accelerate/accelerator.py

* accelerator state sets the mixed precision for deepspeed and fp8_enabled

* fix

* fix

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2025-06-10 14:24:43 +02:00
3a82b056cf Fix bf16 training with TP (#3610)
* fix

* Apply style fixes

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-06-10 11:29:59 +02:00
6b61a373a2 fix deepspeed regional compilation (#3609) 2025-06-06 14:48:43 +02:00
682691deac Update Gaudi Runners (#3593)
* test

* fix

* push

* in the morning

* fix backend

* run first

* set habana modules

* dynamo backend

* trigger

* remove on pr

* remove on file change
2025-06-03 12:36:56 +02:00
791055b484 Fix: list object has no attribute keys (#3603) 2025-06-03 12:24:20 +02:00
16bf1d8901 enable torchao and pippy test cases on XPU (#3599)
* enable torchao and pippy test cases on XPU

Signed-off-by: Matrix YAO <matrix.yao@intel.com>

* fix style

Signed-off-by: Matrix YAO <matrix.yao@intel.com>

---------

Signed-off-by: Matrix YAO <matrix.yao@intel.com>
2025-05-30 17:36:34 +02:00
ab3c604e48 enable big_model_inference on xpu (#3595)
* enable big_model_inference on XPU

Signed-off-by: Matrix YAO <matrix.yao@intel.com>

* fix style

Signed-off-by: Matrix YAO <matrix.yao@intel.com>

* fix quality

Signed-off-by: Matrix YAO <matrix.yao@intel.com>

---------

Signed-off-by: Matrix YAO <matrix.yao@intel.com>
2025-05-30 17:23:26 +02:00
273799c85d enable fsdp2 benchmark on XPU (#3590)
* enable fsdp2 benchmark on XPU

Signed-off-by: Matrix YAO <matrix.yao@intel.com>

* add deterministic

Signed-off-by: Matrix YAO <matrix.yao@intel.com>

---------

Signed-off-by: Matrix YAO <matrix.yao@intel.com>
2025-05-27 14:08:59 +02:00
43526c5c08 add device-agnostic GradScaler (#3588)
* add device-agnostic GradScaler

Signed-off-by: Matrix YAO <matrix.yao@intel.com>

* fix bug

Signed-off-by: Matrix YAO <matrix.yao@intel.com>

* fix review comments

Signed-off-by: Matrix YAO <matrix.yao@intel.com>

* fix

Signed-off-by: Matrix YAO <matrix.yao@intel.com>

* format

Signed-off-by: Matrix YAO <matrix.yao@intel.com>

* Apply style fixes

---------

Signed-off-by: Matrix YAO <matrix.yao@intel.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-05-27 11:44:50 +02:00
07f2392f40 change to use torch.device (#3594)
Signed-off-by: Matrix YAO <matrix.yao@intel.com>
2025-05-27 11:17:18 +02:00
ee2f48c2c3 [docs] no hard-coded cuda in the ddp documentation (#3589)
* make device-agnostic

* refactor
2025-05-27 11:16:42 +02:00
4f3abb73a7 Set ccl and KMP param in simple launch (#3575)
* Even a single-CPU machine can run multiple processes

Signed-off-by: jiqing-feng <jiqing.feng@intel.com>

* fix ccl and KMP param setting

Signed-off-by: jiqing-feng <jiqing.feng@intel.com>

* set master addr only when processes > 1

Signed-off-by: jiqing-feng <jiqing.feng@intel.com>

* fix num process check

Signed-off-by: jiqing-feng <jiqing.feng@intel.com>

* fix ccl args check

Signed-off-by: jiqing-feng <jiqing.feng@intel.com>

---------

Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
2025-05-26 15:55:10 +02:00
db536cbfeb Fix: Defer Tracker Initialization to Prevent Premature Distributed Setup (#3581)
* Fix tracker initialize distributed before InitProcessGroupKwargs

* Fix tracker initialize distributed before InitProcessGroupKwargs

* Add test for bug #3550

* Improve test for #3550

* Remove redundant code

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* fix style

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2025-05-26 15:08:13 +02:00
4e9d0deba6 enable regional_compilation benchmark on xpu (#3592)
* enable regional_compilation benchmark on xpu

Signed-off-by: Matrix YAO <matrix.yao@intel.com>

* Apply style fixes

---------

Signed-off-by: Matrix YAO <matrix.yao@intel.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-05-26 15:05:42 +02:00
8cb3ace894 Add kwargs to optimizer, scheduler and dataloader using function accelerator().load_state() (#3540)
* Added artifacts and figure tracking at MLFlow tracker

* Added `log_artifact` to the MLFlowTracker

* Remove changes

* Added kwargs when loading state.

* added doc string

* Adjusted correct default types of kwargs

* Changed the load kwargs to a single one

* removed None value from kwargs

* fix kwargs for loading the model

* removed load_kwargs from optimizer state dict

* make load_kwargs a dictionary

* revert last changes

* reverted load_kwargs

* fix docstring

* added dict initiation

* Fix quality error during PR
2025-05-22 17:21:54 +02:00
b6d97cb856 Resolve logger warnings (#3582)
Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
2025-05-22 16:26:31 +02:00
33967d4733 Add support for standalone mode when default port is occupied on single node (#3576)
* add standalone mode and replace ConnectionError with a warning when the main process port is in use, allowing for automatic port selection

* address review feedback: warn on port conflict only for single-node; raise error for multi-node

* Apply style fixes

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-05-20 12:29:53 +02:00
5b1fcda371 enable test_cli & test_example cases on XPU (#3578)
* enable test_cli & test_example cases on XPU

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

* fix style

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

* fix style

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

* remove print

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

* fix ci issue

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

---------

Signed-off-by: Matrix Yao <matrix.yao@intel.com>
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
2025-05-20 12:04:24 +02:00
f55f0533b5 goodbye torch_ccl (#3580)
Signed-off-by: Matrix Yao <matrix.yao@intel.com>
2025-05-20 12:02:14 +02:00
1ec99f0b58 enable test_load_checkpoint_and_dispatch_with_broadcast cases on XPU (#3579)
* enable test_load_checkpoint_and_dispatch_with_broadcast cases on XPU

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

* fix style

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

* Update test_load_checkpoint_and_dispatch_with_broadcast.py

---------

Signed-off-by: Matrix Yao <matrix.yao@intel.com>
2025-05-19 11:27:40 +02:00
417bc52965 bump to v1.8.0dev 2025-05-15 12:02:44 +02:00
97c93c4809 enable test_dispatch_model_tied_weights_memory_with_nested_offload_cpu on xpu (#3569)
* enable test_dispatch_model_tied_weights_memory_with_nested_offload_cpu
case on XPU

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

* replace hard-coded torch.cuda w/ device-dependent callings

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

* fix style

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

* use device agnostic clear_device_cache

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

* fix style

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

---------

Signed-off-by: Matrix Yao <matrix.yao@intel.com>
2025-05-15 11:40:55 +02:00
cd37bbb629 set backend correctly for CUDA+FSDP2+cpu-offload (#3574)
* set backend correctly for CUDA+FSDP2+cpu-offload

* offload

* format

---------

Co-authored-by: Wing Lian <wing@axolotl.ai>
2025-05-15 11:38:53 +02:00
7aa3b56c80 Fix prevent duplicate GPU usage in distributed processing (#3526)
* check if num_extrs>0 and test

* test pass

* test passes

* make quality fix

* Apply style fixes

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-05-15 11:31:20 +02:00
14f4306ca6 reenable FSDP2+qlora support (#3546) 2025-05-15 11:30:55 +02:00
e6e717589e Add regional compilation to cli tools and env vars (#3572)
* add regional compilation to cli tools and env vars

* added seq parallel to gaudi docs

* explain that lm_head is also compiled separately

* style

* docstring

* style
2025-05-15 11:30:27 +02:00
1f6efcea0b tune env command output (#3570)
Signed-off-by: Matrix Yao <matrix.yao@intel.com>
2025-05-15 10:51:43 +02:00
9fa97f9600 simplify model.to logic (#3562)
* simplify model.to logic

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

* revert device_type == "cuda" changes

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

---------

Signed-off-by: Matrix Yao <matrix.yao@intel.com>
2025-05-15 10:31:08 +02:00
764eee4a48 add xpu synchronize (#3563) 2025-05-14 19:20:24 +02:00
202e6c178a Update dynamic env handling to preserve None when USE_DYNAMIC is unset (#3567)
* Update dynamic env handling to preserve None when USE_DYNAMIC is unset

* Apply suggestions from code review

---------

Co-authored-by: Ilyas Moutawwakil <57442720+IlyasMoutawwakil@users.noreply.github.com>
2025-05-14 16:34:08 +02:00
32874257f3 Add Gaudi doc (#3537)
* Add Gaudi doc

* Address comment from review

* Remove point about region compilation

---------

Co-authored-by: Ilyas Moutawwakil <57442720+IlyasMoutawwakil@users.noreply.github.com>
2025-05-13 18:27:33 +02:00
281314b479 preserve parameter keys when removing prefix (#3564)
* preserve parameter keys when removing prefix

* Apply style fixes

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-05-13 17:11:42 +02:00
3524a504c8 update path (#3561) 2025-05-13 13:57:29 +02:00
f48d95c493 canonicalize fsdp2 names when fixing optimizer (#3560) 2025-05-12 19:40:50 +02:00
f76208f5a8 make env var and dataclass flag consistent (#3307)
Signed-off-by: SumanthRH <sumanthrh@anyscale.com>
2025-05-12 17:57:58 +02:00
ae0499ea96 cast if dtype is not None (#3559)
Co-authored-by: dpappadopulo <dpappadopulo@bloomberg.net>
2025-05-12 15:27:11 +02:00
ddc49f1e9a Fix the issue where set_epoch does not take effect. (#3556)
* Fix the issue where `set_epoch` does not take effect.

* Apply style fixes

---------

Co-authored-by: root <root@hjx-dev-h20-3-0.hjx-dev-h20-3.bcloud.svc.cluster.local>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-05-12 14:30:19 +02:00
9b2d6eaf32 add support for port 0 auto-selection in multi-GPU environments (#3501)
* add support for port 0 auto-selection in multi-GPU environments

* address review feedback: [add implementation for DeepSpeed, simplify code logic]
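
For illustration, a hedged sketch of the standard port-0 trick the title refers to (generic Python, not accelerate internals): binding to port 0 lets the OS pick a free port, which can then be handed to the launcher as the main process port.

```
# Hedged sketch: let the OS pick a free port by binding to port 0.
import socket


def find_free_port() -> int:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("", 0))  # port 0 -> the OS assigns any available port
        return s.getsockname()[1]


print(f"--main_process_port {find_free_port()}")
```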

---------

Co-authored-by: biondi <biondi_lee@htx.ht.gov.sg>
2025-05-12 13:36:45 +02:00
7b5774ac55 Dynamo regional compilation (#3529) 2025-05-12 09:49:29 +02:00
7013365791 fix typos (#3549) 2025-05-08 14:10:12 +02:00
8d8fd83672 fix notebook_launcher for Colab TPU compatibility. (#3541)
* fixes for notebook_launcher for google colab TPU compatibility.

* Apply style fixes

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-05-06 17:55:18 +02:00
3a941d4b4e Fix: param is not a parameter or buffer (#3545) 2025-05-06 14:28:48 +02:00
d02e51cc21 Update big_modeling.md for layerwise casting (#3548)
* Update big_modeling.md for layerwise casting

* doc fix
2025-05-06 09:50:53 +02:00
c5caa11e85 Fix CI due to missing package (#3535)
* fix test

* fix

* fix

* fix

* fix workflow

* check

* revert
2025-04-29 10:48:39 +02:00
39e2bebb12 Update Docker builds to align with CI requirements (#3532) 2025-04-28 10:50:50 +02:00
0af45bf1e8 Fix logic in accelerator.prepare + IPEX for 2+ nn.Models and/or optim.Optimizers (#3517)
* Fix logic in _prepare_ipex

* Add caution about prepare in IPEX docs

* Add suggested workaround to IPEX docs

* Revert unnecessary change

* Update docs/source/usage_guides/ipex.md

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Remove double space

* Simplify logical checks for IPEX availability

* Revert unnecessary change

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2025-04-25 17:31:36 +02:00
806ac848c9 [FSDP2] Issues in Wrap Policy and Mixed Precision (#3528)
* fix fsdp2 wrap policy

* nn.Module doesn't have the dtype attribute

* Revert "nn.Module doesn't have the dtype attribute"

This reverts commit 513c7892876f81ec76ce32bcdce83bfe8556491d.

* Fix dtype handling in fsdp2_prepare_model to accommodate nn.Module without dtype attribute

* fix format problem
2025-04-24 22:59:13 +02:00
23b092507a [FSDP2] Fix memory spike with cpu_ram_efficient_loading=True (#3482)
* Feat: shard on meta device

* Feat: support fqns in get_non_persistent_buffers

* Fix: retie weights after loading
2025-04-24 12:19:49 +02:00
8fb073536a [FSDP2] Enable FULL_STATE_DICT (#3527)
* Feat: enable FULL_STATE_DICT in config

* Feat: support FSDP2 FULL_STATE_DICT

* Refactor: remove deprecated save/load_state_dict

* Docs: add FULL_STATE_DICT as supported to docs

* Feat: update tests

* Feat: change Accelerator.get_state_dict() to use new api
2025-04-23 18:03:45 +02:00
4f35cf713c Solve link error in internal_mechanism documentation (#3506) (#3507)
* Solve link error in internal_mechanism (#3506)

* Link correctly to documentation (#3506)
2025-04-23 17:47:25 +02:00
ada21cfbbd fix cuda init (#3530) 2025-04-23 15:57:40 +02:00
b451956fd6 Add torchao to FP8 error message (#3514) 2025-04-22 14:06:47 +02:00
6a9a61520d [Feat] Layerwise casting hook (#3427)
* start

* method implementation.

* updates.

* updates

* remove print.

* aryan as one of the contributors

Co-authored-by: a-r-r-o-w <contact.aryanvs@gmail.com>

* change to attach_layerwise_casting_hooks

* enable skipping modules.

* tests

* revert style changes to other files.

* feedback

* remove comments

* add example

* fix test case for edges.

* reviewer feedback

---------

Co-authored-by: a-r-r-o-w <contact.aryanvs@gmail.com>
2025-04-22 13:49:43 +02:00
423fbbfdea fix cache (#3513) 2025-04-18 18:07:46 +02:00
34c1779828 Remove deprecated PyTorch/XLA APIs (#3484) 2025-04-15 11:44:14 +02:00
54496571fd Fix: require transformers version for tp tests (#3504) 2025-04-15 11:42:26 +02:00
4a3cbcb63c fix: apply torchfix to set weights_only=True (#3497)
* fix: apply torchfix

* fix: apply torchfix
2025-04-15 11:41:05 +02:00
583b26db3c Add FP8 runners + tweak building FP8 image (#3493)
* Initial test

* Try on push

* Only wf dispatch now

* keep trying

* Try again

* Try again

* source activate?

* Force bash

* Source activate accelerate to make it get the env properly

* try using nightly docker

* Try this?

* Try this?

* Try this, proper output

* Try this, proper output

* Try via full conda activate(?)

* rm conda

* te fp8 tests

* add ao

* ao in setup too

* actually include fp8 deps

* FP8 docker image, use newer version

* Update docker image to take in input

* Test

* prior month

* igpu?

* Use only last 2 digits of year

* Build rest

* Apply style fixes

---------

Co-authored-by: [[ -z $EMAIL ]] && read -e -p "Enter your email (for git configuration): " EMAIL <muellerzr@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-04-15 11:39:43 +02:00
7812d979c3 Fix deepspeed tests (#3503)
* Fix: check for tp size when creating accelerator in tests

* Fix: better error handling in TorchTensorParallelPlugin

* Fix: make tp related args optional in tests (cmt by @kmehant)
2025-04-14 16:16:01 +02:00
67adb473a4 (Part 1) fix: make TP training compatible with new transformers (#3457)
* feat: support new tp refactor for training

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

* fix: @S1ro1 review cmt

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

* fix: @S1ro1 review cmt - tp_plan flag docstr

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

* fix: @SunMarc review cmt on un used flag

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

* fix: pick approach 3 as discussed in the PR

see https://github.com/huggingface/accelerate/pull/3457#discussion_r2037909077 for more details

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

* fix: styling errors

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

* fix: bump up transformers for tp_size feature

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

---------

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>
2025-04-11 18:31:28 +02:00
ee4cab96ed nit: needed sanity checks for fsdp2 (#3499)
Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>
2025-04-11 17:04:34 +02:00
73c2378c55 Use torch.distributed.checkpoint.state_dict.set_model_state_dict in load_checkpoint_in_model (#3432)
* Use torch.distributed.checkpoint.state_dict.set_model_state_dict in load_checkpoint_in_model

load_checkpoint_in_model now supports loading into FSDP2-wrapped models when using device_map=None

For large models in a distributed setting, leveraging broadcast_from_rank0 reduces file-system reads and results in much faster loading (loading a 70B model on a single node of 8 GPUs: 60 seconds vs 90 seconds). A minimal sketch of this loading path follows the list below.

* Guard torch.distributed.checkpoint.state_dict with is_torch_version('>=', '2.2.0')

This should fix issues with slow import and also fixes versioning issues

https://github.com/huggingface/accelerate/pull/3432#discussion_r1989782680
https://github.com/huggingface/accelerate/pull/3432#discussion_r1989946020

* Add test for non-distributed, TP, and DDP for load_checkpoint_and_dispatch(device_map=None) using set_model_state_dict

https://github.com/huggingface/accelerate/pull/3432#discussion_r1989741480
https://github.com/huggingface/accelerate/pull/3432#discussion_r1989960317

* Verify minimum version for broadcast_from_rank0

* Mark transformers as required for broadcast_from_rank0 tests, mark min version of torch to test as 2.4.0

* Add model_devices guard to set_model_state_dict

set_model_state_dict will fail if the model state_dict is not on at most one device

* Move decorators to top of test class

* https://github.com/huggingface/accelerate/pull/3432/files#r1993272280
* https://github.com/huggingface/accelerate/pull/3432/files#r1993268932

* Unindent functions

https://github.com/huggingface/accelerate/pull/3432/files#r1993275663

* Add condition, with explanatory links, for set_model_state_dict model device restrictions

* Fix distribution of 2.2.0 condition

* Remove tensor parallel test

* Fix model materialization example

* Fix materialization example

* Remove old tensor parallel test
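
A minimal sketch of the loading path described in the first item, assuming an initialized process group and torch >= 2.4 (where `broadcast_from_rank0` is available); the checkpoint path is a placeholder.

```
# Hedged sketch: load a full checkpoint into a distributed-wrapped model via
# torch.distributed.checkpoint.state_dict.set_model_state_dict.
import torch
from torch.distributed.checkpoint.state_dict import StateDictOptions, set_model_state_dict


def load_full_checkpoint(model: torch.nn.Module, checkpoint_path: str):
    # Only rank 0 reads the file; other ranks receive the tensors via broadcast,
    # which cuts down on file-system reads.
    if torch.distributed.get_rank() == 0:
        full_sd = torch.load(checkpoint_path, map_location="cpu", weights_only=True)
    else:
        full_sd = {}
    set_model_state_dict(
        model,
        full_sd,
        options=StateDictOptions(full_state_dict=True, broadcast_from_rank0=True),
    )
```
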
2025-04-11 17:01:33 +02:00
b2f937faec Add the HPU into accelerate config (#3495)
* Add the HPU into accelerate config

Signed-off-by: yuanwu <yuan.wu@intel.com>

* Fix the error of make style

Signed-off-by: yuanwu <yuan.wu@intel.com>

---------

Signed-off-by: yuanwu <yuan.wu@intel.com>
2025-04-10 17:41:47 +02:00
3b89987710 [bug] unsafe_serialization option doesn't work (#3496) 2025-04-09 15:16:28 +02:00
a43e4170fc fix warning error (#3491)
* fix warning error

* use logger.warning
2025-04-09 14:26:40 +02:00
334d6ab957 fix fp8 config (#3492) 2025-04-09 14:19:07 +02:00
650b6659c0 add support for custom function for reducing the batch size (#3071)
* add support for custom function for reducing the batch size

* fix scoping

* Apply style fixes

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-04-08 14:08:07 +02:00
fb90996365 Don't create new param for TorchAO sequential offloading due to weak BC guarantees (#3444)
* update

* make style

* use assignment to set device
2025-04-08 12:29:12 +02:00
32b2e1606f Fix check_tied_parameters_in_config for multimodal models (#3479)
* fix

* fix
2025-04-08 12:27:49 +02:00
8c0a29626d Update low_precision_training.md (#3488) 2025-04-08 11:39:58 +02:00
63168b151f Adds style bot (#3478)
* Style bot

* Use reusable style bot

---------

Co-authored-by: [[ -z $EMAIL ]] && read -e -p "Enter your email (for git configuration): " EMAIL <muellerzr@gmail.com>
2025-04-03 17:09:49 +02:00
3cf5e4c802 use device agnostic torch.OutOfMemoryError from pytorch 2.5.0 (#3475)
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
2025-04-02 15:08:22 +02:00
9642a1ac81 bump to v1.7.0dev 2025-04-01 13:55:11 +02:00
3169339f5b Bump ruff to 0.11.2 (#3471)
* ruff format

* Bump ruff to 0.11.2
2025-04-01 11:57:06 +02:00
67a768be07 remove use_xpu to fix ut issues, we don't need this since XPU is OOB … (#3460)
* remove use_xpu to fix UT issues; we don't need this since XPU is supported out of the box now

Signed-off-by: Yao, Matrix <matrix.yao@intel.com>

* fix style

Signed-off-by: Yao, Matrix <matrix.yao@intel.com>

* add deprecate warnings

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

* fix

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

---------

Signed-off-by: Yao, Matrix <matrix.yao@intel.com>
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
2025-04-01 11:55:37 +02:00
531643436e [MLU] fix deepspeed dependency (#3472) 2025-04-01 11:55:23 +02:00
83e09a9331 Update ruff target-version to py39 and apply more fixes (#3470)
Signed-off-by: cyy <cyyever@outlook.com>
2025-03-31 15:00:25 -04:00
9c4eeb9ba8 xpu: enable xccl distributed backend (#3401)
xccl distributed backend is available for XPU device backend starting
from torch 2.7 (requires torch built with `USE_XCCL=1 USE_C10D_XCCL=1`).

This change is verified with the following Transformers tests:
* `tests/extended/test_trainer_ext.py`
* `tests/trainer/test_trainer_distributed.py`

This commit does not impact IPEX which currently remains using custom
distributed backend.
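
A hedged sketch of selecting the backend described above, assuming torch >= 2.7 built with XCCL support and rank/world-size environment variables provided by the launcher.

```
# Hedged sketch: initialize the xccl distributed backend on an XPU device.
import os

import torch
import torch.distributed as dist

local_rank = int(os.environ.get("LOCAL_RANK", 0))
torch.xpu.set_device(local_rank)
dist.init_process_group(backend="xccl")
print(f"rank {dist.get_rank()}/{dist.get_world_size()} on xpu:{local_rank}")
dist.destroy_process_group()
```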

Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
2025-03-31 19:11:47 +02:00
a0edc8dcf2 Apply ruff py39 fixes (#3461)
* Apply ruff py39 fixes

* Ruff format
2025-03-31 19:10:08 +02:00
11a3c0001d Update CometMLTracker to allow re-using experiment (#3328)
* Update CometMLTracker to allow re-using experiment

Update CometMLTracker to use the new `comet_ml.start` function to create
experiments. This way end-users can create online or offline experiments,
append data to an existing experiment, and automatically re-use a running
experiment if one is present rather than creating a new one.

* Add back calling Experiment.end in finish

As `accelerator.end_training` is supposed to be called at the very end of
training by the user, users will still be able to log data after the main
training loop; this is also needed for offline experiments to create the
offline archive.

* Update CometTracker behavior based on the version of the package

Use new method only for recent version of comet_ml
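
A hedged sketch of the `comet_ml.start` entry point mentioned above: it returns a running experiment if one already exists in the process and otherwise creates one (online or offline depending on configuration); credentials and project are assumed to come from the environment.

```
# Hedged sketch: create or re-use a Comet experiment via comet_ml.start.
# Assumes COMET_API_KEY / COMET_PROJECT_NAME are set in the environment.
import comet_ml

experiment = comet_ml.start()
experiment.log_metric("loss", 0.42, step=1)
experiment.end()
```
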
2025-03-31 19:09:34 +02:00
8b31a2fe2c Fix get_balanced_memory for MPS (#3464)
This also fixes a failure in test_get_balanced_memory:

```
assert {0: 215, 1: 300} == {0: 300, 1: 300}
[...]
tests/test_modeling_utils.py:871: AssertionError
```

Signed-off-by: Ihar Hrachyshka <ihar.hrachyshka@gmail.com>
2025-03-31 17:33:33 +02:00
3f636d6260 Fix seeding of new generator for multi GPU (#3459)
* fix new generator seeding

* remaining arbitrary fixed seed

* test
2025-03-28 12:48:05 -04:00
803b6648b4 Update @ (#3466)
* Update @

* DS

* Add marc everywhere, he's always watching
2025-03-28 12:43:06 -04:00
17f9c19f48 Fix: clip grad norm in fsdp2 (#3465) 2025-03-28 15:55:49 +01:00
d7c741a6bc Initial FSDP2 support (#3394)
* Feat: initial conversion tool draft

* Feat: add value mapping to conversion tool

* Refactor: move from os to pathlib

* Feat: add first tests

* Feat: more tests

* Feat: minor fixes + dataclass conversions

* Feat: more remapping

* Fix: namespace has no attribute version + style

* Fix: offload params behavior

* Feat: add option to only rename keys in the config file to

* Fix: wrong attr name

* Fix: partially resolve comments

* Feat: work on config command + minor fixes to reflect changes

* Refactor: style + quality

* Feat: fsdp2 initial work

* Feat: some cleanups and first running fsdp2

* Fix: version checks + mixed precision policy

* Refactor: style + quality

* Remove obsolete todos

* Feat: grad norm clipping

* Fix: tests + rename attrs

* Refactor: style + quality

* Fix: None object is not iterable

* Fix: default cpu_offload for fsdp2

* Fix: cpu offload now behaves correctly

* Feat: apply_activation_checkpointing

* Fix: append to models

* Feat: start on concept guide

* wip: concept guide

* Fix: toctree

* cleanup of the concept guide

* Fix: minor fixes + mp

* Fix: quality + | to union

* Feat: backwards compatibility + args cleanup

* Fix: style + quality

* Feat: enable dropping refs when getting named params

* Fix: memory footprint with fsdp2

* Feat: cpu ram efficient loading

* Fix: mp

* Fix: not warn about sync_modules if fsdp version is 1

* Refactor: minor changes

* Small fixes + refactors

* Feat: docs + cleanup

* Feat: saving works (not sure about optim)

* More loading/saving work

* Feat: disable local_state_dict for fsdp2

* Fix: fsdp2 convergence

* Feat: working comparison script

* Feat: memory tracking fsdp2

* Feat: memory visualizer

* Feat: more work on benchmark

* Fix: raise error if model+optimizer arent prepared together

* Minor fixes

* Style

* More warnings

* Fix: reshard_after_forward vs sharding_strategy conflict

* Refactor: clean up accelerator

* Feat: more testing in fsdp2 benchmark

* Fix: memory visualizer

* Untested: support load/save_state

* Feat: concept guide improvements

* Refactor: concept guide

* Feat: benchmark works

* Feat: more work on fsdp2 benchmark

* Fix: note syntax

* Fix: small fixes + make original tests work

* Fix: grad scaling

* Feat: reshard after forward tests

* Feat: backward prefetch tests

* Feat: tests for fsdp2

* Refactor: minor fixes

* Feat: fsdp_utils docstrings

* Feat: autodoc fsdp.md

* Docs: get_module_children_bottom_up

* Fix: remove unused images

* Refactor: benchmark cleanup

* Fix: docs

* Feat: final doc changes

* Fix: torch.distributed has no attribute tensor

* Fix: style

* Feat: tests include version in failures

* Fix: benchmark force model to load in fp32

* Fix: rename runs

* Feat: last minor fixes

* Feat: new benchmark images
2025-03-27 15:01:18 -04:00
8ab01d32cf Fix device KeyError in tied_params_map (#3403)
Fixes: #3402

Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
2025-03-25 16:25:02 +01:00
140acb356e Fix AMD GPU support with should_reduce_batch_size() (#3405)
* Fix AMD GPU support with should_reduce_batch_size()

Even though torch has both NVIDIA and AMD GPUs operate under the `cuda` namespace, the out-of-memory error for AMD GPUs is different. When trying to determine whether a model can fit on an AMD GPU, this function will evaluate to false for such an out-of-memory error. This PR adds another check for the error string.

Example error message:
```
'HIP out of memory. Tried to allocate 64.00 GiB. GPU 0 has a total capacity of 63.98 GiB of which 48.63 GiB is free. Of the allocated memory 15.02 GiB is allocated by PyTorch, and 129.49 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_HIP_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)'
```

* Missing comma

* Update memory.py

Consolidate OOM error check string
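
A hedged sketch of the kind of check described above (not the exact accelerate helper), assuming torch >= 2.5 where `torch.OutOfMemoryError` exists: a RuntimeError whose message mentions HIP out-of-memory is treated like a CUDA OOM.

```
# Hedged sketch: recognize AMD/HIP out-of-memory errors, whose message differs from CUDA's.
import torch


def is_oom_error(exc: BaseException) -> bool:
    if isinstance(exc, torch.OutOfMemoryError):
        return True
    return isinstance(exc, RuntimeError) and "HIP out of memory" in str(exc)
```
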
2025-03-25 10:32:29 -04:00
8576112bc8 enable 2 UT cases on XPU (#3445)
* enable test_dispatch_model_tied_weights_memory_with_nested_offload_cpu test case on XPU

Signed-off-by: root <root@a4bf01945cfe.jf.intel.com>

* fix style

Signed-off-by: Yao, Matrix <matrix.yao@intel.com>

* enable test_dispatch_model_tied_weights_memory on XPU

Signed-off-by: N <matrix.yao@intel.com>

* fix bug

Signed-off-by: root <root@a4bf01945cfe.jf.intel.com>

* Update src/accelerate/test_utils/testing.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update src/accelerate/test_utils/testing.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update tests/test_big_modeling.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* fix style

Signed-off-by: Yao, Matrix <matrix.yao@intel.com>

---------

Signed-off-by: root <root@a4bf01945cfe.jf.intel.com>
Signed-off-by: Yao, Matrix <matrix.yao@intel.com>
Signed-off-by: N <matrix.yao@intel.com>
Co-authored-by: root <root@a4bf01945cfe.jf.intel.com>
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2025-03-25 14:19:26 +01:00
806f661cd3 remove device index workaround on xpu since xpu supports integer device index as cuda now (#3448)
* remove xpu device index WAs since pytorch xpu supports integer index now

Signed-off-by: root <root@a4bf01945cfe.jf.intel.com>

* remove print

Signed-off-by: Yao, Matrix <matrix.yao@intel.com>

* fix style

Signed-off-by: Yao, Matrix <matrix.yao@intel.com>

---------

Signed-off-by: root <root@a4bf01945cfe.jf.intel.com>
Signed-off-by: Yao, Matrix <matrix.yao@intel.com>
Co-authored-by: root <root@a4bf01945cfe.jf.intel.com>
2025-03-24 14:49:05 +01:00
9015a26f09 Fixup ao module filter func (#3450) 2025-03-21 10:21:54 -04:00
6de900e10a feat: Add no_ssh and slurm multinode launcher options for deepspeed (#3329)
* feat: Add no_ssh multinode launcher option for deepspeed

* fix: Add CLI hints and brief documentation, add slurm launcher, and ensure that deepspeed 0.14.5 version is used for nossh
2025-03-20 10:33:00 -04:00
ffb27138f7 Changed --config arg to --config_file in the slurm multinode fsdp example. (#3447) 2025-03-20 10:14:18 -04:00
4b6be89910 Update build_and_run_tests.yml 2025-03-15 11:33:32 +01:00
a702364256 Fix attribute issue with deepspeed tp (#3443) 2025-03-13 18:27:25 +01:00
a31bd767c1 Fix prod issues (#3441)
* Fix default device

* Use CPU
2025-03-13 11:21:11 -04:00
71036329f7 tensor parallel dataloder for deepspeed accelerator (#3390)
* ds tp change

* update

* format

* add version check

* Update src/accelerate/accelerator.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update src/accelerate/accelerator.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* put device_mesh logic to func + format

* fix comments

* format

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2025-03-13 12:40:34 +01:00
f648feba97 Add log_artifact, log_artifacts and log_figure capabilities to the MLflowTracker. (#3419)
* Added artifacts and figure tracking at MLFlow tracker

* Added `log_artifact` to the MLFlowTracker

* Remove changes

* Added artifact, artifacts and figure tracking to the MLflow tracker

* Improved the docstring

* added require_mlflow function at test_utils

* add test for MLflowTracker

* Bit of linting

* Refactor to a more robust test

* Revised the test asserts to something more robust.

* Removed incorrect import and some linting.

* removed commented code

* initiate tracker using Accelerator

* Added mlflow and matplotlib to setup.py. Guarded and decorated the functions that required them.

* Guarded mlflow import

* added matplotlib required warning.

* ran style and quality
2025-03-12 18:11:29 +01:00
14fc61eeac Bump to 1.6.0.dev0 2025-03-12 10:13:18 -04:00
d9e6af8773 HPU support (#3378)
* init

* style

* is_hpu_available

* fix

* import habana_frameworks.torch.distributed.hccl

* style

* test

* initialize dist proc group

* revert

* set backend to hccl only if hccl initialization sets a local rank

* force backend hccl and multi_hpu type when sure of distributed launch

* style

* pass accelerator tests

* pass big modeling tests with bigger atol/rtol for accelerators

* fix hpu device count and skip tests requiring hpu:x

* hpu autocast

* hpu rng_state

* hpu launch

* hpu special device placement

* hpu launch

* rng state

* distributed data loop tests

* enforce non contiguity after device memory allocation

* pass fsdp tests

* enforce pt_hpu_lazy_mode=0 when fsdp testing

* pass cli tests

* pass and document grad sync tests

* pass kwargs handler and autocast tests

* memory utils

* found source of int64 errors

* skip some modeling utils tests

* enable int64

* skip optimizer tests

* pass checkpointing tests

* pass accelerator tests with safetensors main

* more hpu stuff

* style

* remove PT_HPU_LAZY_MODE and PT_ENABLE_INT64_SUPPORT as they should be in the testing environment

* start testing on gaudi2

* support fp16 on gaudi2

* add testing order

* custom hpu fsdp env dict

* fix torch trace malloc

* test ddp half precision comm hooks

* fix

* fix

* remove lower bound for hpu

* use 0.72 as lower bound

* lower lower bound

* order deepspeed tests

* fix

* deepspeed_use_hpu

* assert non lazy mode with offloaded optimizer

* make patching torch with habana frameworks the default

* less of require_non_hpu

* skip test_multi_device_merge_fsdp_weights for now as it halts

* skip another flaky test

* format

* use habana_visible_modules

* patch torch hpu device count

* avoid setting HABANA_VISIBLE_MODULES

* don't play with habana visible devices/modules

* only with hpu

* fixes and skips

* skip

* fix device ids and add some todos

* skip offloading with generate()

* fix

* reduced atol/rtol for hpu

* fix

* tag deepspeed tests that should run first

* enable a test path that was skipped

* revert a test that was customized for gaudi1

* some patching to enable HABANA_VISIBLE_MODULES

* fix zero3 test

* misc

* test DTensor TP

* remove gaudi1

* test

* style

* comment

* pass pad_across_processes

* require_fp16

* pass memory utils test

* test_ddp_comm_hook

* skip half precision comm hooks on hpu

* fix

* is_fp16_available

* fp16

* tp as part of integration tests

* fix

* write_basic_config

* safetensors

* local sgd and masked_fill_fwd_i64

* fix num_processes in test_load_states_by_steps

* fp8 support

* test

* fix

* add a workflow

* Update src/accelerate/accelerator.py

* review comments

* ci

* style

* comments

* test

* habana_frameworks.torch

* patch device count

* fix

* fix

* require_fp8

* fix

* fix

* gaudi 1

* remove unnecessary

* fixed masked fill error in transformers

* style

* balanced_memory pass on hpu

* remove for now

* run first

* Apply suggestions from code review

* style after merge

* Update src/accelerate/accelerator.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update src/accelerate/utils/transformer_engine.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* empty cache review comments

* test_script.py error messages

* AccelerateTestCase for accelerator state cleanup

* test

* add gaudi1 workflow

* fp8 availability

* fix

* reduce batch size

* concurrency

* check cuda as well

* nits and comments

* mark fsdp tests that require_fp16

* style

* mark deepspeed fp16 tests

* update image

* fix

* updated

* better msgs

* skip pippy

* test

* test on 2 device

* support up to 1% relative error in test_accelerate

* skip hpu fp16

* allow for 1 byte difference

* revert torch_device change

* style

* skip memory release since it's flaky

* add accelerator state cleanup to fixture

* fix

* atol

* fix

* more rtol

* equal grad test

* revert

* pass pippy on gaudi2 and skip on gaudi1

* enable sd 1.5 test with require fp16

* added warning on memory release

* don't log warning in memory release as it requires PartialState to be initialized

* Apply suggestions from code review

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2025-03-11 11:16:57 -04:00
b271eb1365 add distributed example for llava next video (#3417) 2025-03-11 11:07:46 -04:00
4677b8089f Fix quality (#3424)
* Run quality

* Update src/accelerate/test_utils/scripts/test_script.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2025-03-06 12:33:34 +01:00
e456796be8 fix typo : thier -> their (#3423) 2025-03-06 11:27:51 +01:00
ac3749dc11 Add Tecorigin SDAA accelerator support (#3330)
Co-authored-by: siqi <siqi@tecorigin.com>
2025-03-05 10:11:21 +01:00
6e8eea2e73 fix: Add device=torch.get_default_device() in torch.Generators (#3420) 2025-03-05 10:08:49 +01:00
c7b3625592 fix: ensure CLI args take precedence over config file. (#3409)
* fix: ensure CLI args take precedence over config file.

* add test case

* remove inappropriate comment

---------

Co-authored-by: 차영록 <jaycha@ncsoft.com>
2025-02-28 09:15:42 -05:00
90f81986b9 minor doc fixes (#3365) 2025-02-25 15:52:26 +01:00
fa26dc6156 add missing import (#3396) 2025-02-25 11:07:14 +01:00
6fcc8efd2e fix device bug (#3408) 2025-02-24 16:12:14 +01:00
8039158d71 Torchao float8 training (#3348)
* Bookmark

* bookmark

* Add torchao base example

* Currently broken

* Clean

* DDP varient working

* FSDP as well

* Works for all but zero3

* Bookmark: currently zero3 is underperforming

* Bookmark

* Another diff

* Fin

* Fin

* Add req huggingface suite

* update tests for fp8/torchao/ddp

* Log FP8 backend used and adjust typing

* add documentation for convert_to_float8_training

* Rename to convert_model_to_fp8_ao

* Call super init

* Add types

* Clean

* Use filter_first_and_last_linear_layers

* Update usage guide docs

* Actually loop through the zero stages

* Clean
2025-02-17 11:51:47 -05:00
e34db4d0d2 enable xpu (#3397) 2025-02-17 17:41:50 +01:00
526925b48c [memory leak] Replace GradientState -> DataLoader reference with weakrefs (#3391)
* Replace GradientState -> DataLoader reference with weakrefs

So they can be cleaned up. Otherwise, they will always stay in memory, leading to notable memory leaks. Note: even accelerator.free_memory() did not work!

* Add comments; initialize _dataloader_references_ref directly instead of indirectly
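
A generic, hedged illustration of the weak-reference pattern the fix relies on (the class name here is a stand-in, not accelerate's actual GradientState): holding `weakref.ref`s lets the referenced dataloaders be garbage-collected once nothing else keeps them alive.

```
# Hedged sketch: keep dataloaders reachable only through weak references.
import weakref


class GradientStateLike:
    def __init__(self):
        self._dataloader_refs = []

    def register(self, dataloader):
        self._dataloader_refs.append(weakref.ref(dataloader))

    @property
    def active_dataloaders(self):
        # Dereference, dropping entries whose target has already been collected.
        return [dl for ref in self._dataloader_refs if (dl := ref()) is not None]
```
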
2025-02-11 12:47:40 -05:00
24f8d0276c [examples] upgrade code for seed setting (#3387)
* replace set_seed

* update import
2025-02-11 16:31:41 +01:00
5cc99e6e02 fix: typos in documentation files (#3388)
* Update test_scheduler.py

* Update test_big_modeling.py

* Update test_state_checkpointing.py

* Update test_script.py

* Update cli.md

* Update quicktour.md
2025-02-10 13:11:50 -05:00
ce63623421 works for fp8 with deepspeed (#3361)
* works for fp8 with deepspeed

* Add tests

---------

Co-authored-by: [[ -z $EMAIL ]] && read -e -p "Enter your email (for git configuration): " EMAIL <muellerzr@gmail.com>
2025-02-10 09:31:15 -05:00
f19b95700f fix torch_dtype in estimate memory (#3383)
* fix torch_dtype

* style

* add comments

* style
2025-02-07 15:58:13 +01:00
81d8a0356c [tests] Fix bnb cpu error (#3351)
* enable bnb tests

* bug fix

* enable more bnb tests on xpu

* fix on xpu

* fix quality issue

* further fix quality

* fix style

* only use xpu check
2025-02-06 11:26:02 +01:00
f076495580 deepspeed github repo move (#3376) 2025-02-03 13:52:08 -05:00
03153658f4 feat: support tensor parallel & Data loader (#3173)
* feat: add dataloader for TP and n-dim parallel in non-dispatch mode

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

* feat: add support for CLI usage

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

* fix: test cases

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

* fix: when tp not in use fix num_procs

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

---------

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>
2025-01-29 09:44:18 -05:00
675e35bcd4 [tests] enable more bnb tests on XPU (#3350)
* enable bnb tests

* bug fix

* enable more bnb tests on xpu

* fix quality issue

* further fix quality

* fix style
2025-01-23 15:23:38 +01:00
8f2d31c5b9 Support more functionalities for MUSA backend (#3359)
* Support more functionalities for MUSA backend

* fix lint
2025-01-23 15:05:33 +01:00
4c2c89ea90 [tests] remove require_non_xpu test markers (#3301)
* remove non-xpu marker

* fix import
2025-01-22 16:10:17 +01:00
28c171b05a [tests] make cuda-only test work on other hardware accelerators (#3302)
* enable on xpu

* remove require_cuda
2025-01-22 16:09:50 +01:00
65356780d4 [Dev] Update release directions (#3352)
* Update release directions

* Update directions and makefile to account for testpypi fun
2025-01-21 08:59:43 -05:00
78b8126bff v1.4.0.dev0 2025-01-17 10:36:00 -05:00
7e324103c4 [tests] enable BNB test cases in tests/test_quantization.py on XPU (#3349)
* enable bnb tests

* bug fix

* fix quality issue

* further fix quality

* fix style
2025-01-17 10:22:27 -05:00
02d25612a5 fix triton version check (#3345)
* fix triton version check

* add xpu check
2025-01-17 10:21:52 -05:00
fbfa53bc5e dataloader: check that in_order is in kwargs before trying to drop it (#3346)
This fixes tests/test_data_loader.py::StatefulDataLoaderTester tests which
started to fail after 828aae4:
```
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_dataloader_dispatcher_state_dict_num_workers_0 - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_dataloader_dispatcher_state_dict_num_workers_2 - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_dataloader_inheritance - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_dataloader_state_dict_num_workers_0 - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_dataloader_state_dict_num_workers_2 - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_decoupled_stateful_dataloader_adapter_equivalent_to_torchdata_stateful_dataloader_num_workers_0 - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_decoupled_stateful_dataloader_adapter_equivalent_to_torchdata_stateful_dataloader_num_workers_2 - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_end_of_dataloader - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_end_of_dataloader_dispatcher - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_skip_data_loader - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_stateful_dataloader_adapter_equivalent_to_torchdata_stateful_dataloader_num_workers_0 - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_stateful_dataloader_adapter_equivalent_to_torchdata_stateful_dataloader_num_workers_2 - KeyError: 'in_order'
```

The reason for the failure is that "in_order" is added only if data loader
is created with `prepare_data_loader` or `skip_first_batches()`. Tests in
`tests/test_data_loader.py::StatefulDataLoaderTester` however are creating
data loaders directly as classes and "in_order" was not added. Hence the
issue.

Fixes: 828aae4 ("add torchdata version check to avoid in_order error (#3344)")

Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
2025-01-15 17:55:31 -05:00
d09040dfc9 [docs] fix typo, change "backoff_filter" to "backoff_factor" (#3296) 2025-01-15 11:55:38 -05:00
828aae4e32 add torchdata version check to avoid "in_order" error (#3344) 2025-01-15 09:04:03 -05:00
f0b030554c Fix for offloading when using TorchAO >= 0.7.0 (#3332)
* fix

* update

* fix

* apply suggestions from review

Co-Authored-By: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

Co-Authored-By: Xuehai Pan <XuehaiPan@pku.edu.cn>

* make style

---------

Co-authored-by: Xuehai Pan <XuehaiPan@pku.edu.cn>
2025-01-13 16:54:28 +01:00
80973430ee latest bnb no longer has optim_args attribute on optimizer (#3311)
* latest bnb no longer has optim_args attribute on optimizer

* update the other bnb based optimizer checks
2025-01-13 16:53:02 +01:00
c67d47ae79 [tests] make cuda-only test case device-agnostic (#3340)
* enable on xpu

* bug fix
2025-01-13 09:59:35 -05:00
8c423cff79 Fix offload generate tests (#3334)
* Fix tests

* format
2025-01-13 15:45:46 +01:00
95f34d6243 feat(tpu): remove nprocs from xla.spawn (#3324)
This parameter will cause issues on recent versions of torch_xla.
2025-01-13 04:37:00 -05:00
ba90f85627 Fixup docker build err (#3333) 2025-01-10 04:54:05 -05:00
b13aadcb67 Bye bye torch <2 (#3331)
* Bye bye torch <1

* Add 2.6.0 dl args

* Rm require fsdp

* Adjust imports + 2.0 specific modeling code

* Bring back is_bf16
2025-01-09 12:11:08 -05:00
58f14364d5 Ensure that tied parameter is children of module (#3327)
Ensure that tied parameters are assigned to their parent module in
get_module_size_with_ties

Fixes: https://github.com/huggingface/accelerate/issues/3308
2025-01-09 12:03:51 -05:00
54370d4504 Adding keep_torch_compile argument to unwrap_model and extract_model_from_parallel. (#3282) 2025-01-08 12:45:22 -05:00
d6d3e03cd4 Use torch.xpu.mem_get_info for XPU (#3275)
torch.xpu.mem_get_info API is available starting from PyTorch 2.6 (and
in nightly 2.6.0.dev20241206+xpu or later). To work properly, this method
requires PyTorch built with a SYCL runtime that supports the API to query
device memory stats; if not available, an exception is raised.
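
A hedged sketch of querying XPU memory with this API, guarded for availability and assuming the same (free, total) return order as `torch.cuda.mem_get_info`.

```
# Hedged sketch: query free/total device memory via torch.xpu.mem_get_info (torch >= 2.6).
import torch

if hasattr(torch, "xpu") and torch.xpu.is_available():
    free_bytes, total_bytes = torch.xpu.mem_get_info(0)
    print(f"XPU 0: {free_bytes / 2**30:.1f} GiB free of {total_bytes / 2**30:.1f} GiB")
else:
    print("No XPU device available (or torch lacks SYCL memory-stats support)")
```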

Requires: https://github.com/pytorch/pytorch/pull/141230
Fixes: #2929
Fixes: https://github.com/huggingface/transformers/issues/31922

Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
2024-12-24 16:48:00 +01:00
acfbf72a7f Give example on how to handle gradient accumulation with cross-entropy (#3193)
* Add cross-entropy example in the gradient accumulation docs

* add example of logs

* correct skeleton code

* replace gather_for_metrics with gather

* batch_size -> per_device_batch_size

* remove main_process_only=True

* add autoregressive example in examples/

* Update docs/source/usage_guides/gradient_accumulation.md

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* ruff format

* add grad accum test

* update docs

* Update examples/by_feature/gradient_accumulation_for_autoregressive_models.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* update tests

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-12-24 12:26:45 +01:00
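The pitfall the example above addresses is that averaging the cross-entropy per micro-batch and then dividing by the number of accumulation steps weights micro-batches with different token counts unevenly. A minimal sketch of the token-count-aware variant (the batch format and the -100 ignore index are assumptions, not the documented example itself):

```python
import torch
import torch.nn.functional as F


def accumulated_step(model, optimizer, micro_batches):
    """One optimizer step over several micro-batches, normalizing by total token count."""
    optimizer.zero_grad()
    total_tokens = sum((mb["labels"] != -100).sum() for mb in micro_batches)
    for mb in micro_batches:
        logits = model(mb["input_ids"])
        # Sum the per-token losses so every token carries equal weight, then
        # normalize by the token count of the whole accumulated batch.
        loss = F.cross_entropy(
            logits.view(-1, logits.size(-1)),
            mb["labels"].view(-1),
            ignore_index=-100,
            reduction="sum",
        ) / total_tokens
        loss.backward()
    optimizer.step()
```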
200c9eb783 fix: add max_memory to _init_infer_auto_device_map's return statement (#3279) 2024-12-13 10:47:33 -05:00
7b2edc0bf2 Fix test_nested_hook (#3289) 2024-12-11 10:00:45 -05:00
b92fb4774f fix load_state_dict for npu (#3211)
* fix load_state_dict for npu

* update
2024-12-10 21:38:00 -05:00
3e62fbb09c [docs] no hard-coding cuda (#3270)
* no hard-coding cuda

* Update docs/source/usage_guides/big_modeling.md

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* update device_type

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-12-10 21:32:10 -05:00
cb8b7c637a Fixed typos for Tutorials and Guides docs (#3274) 2024-12-06 10:39:45 -05:00
aa16d69561 [docs] use real path for checkpoint (#3220)
* fix bug

* update
2024-12-06 10:39:29 -05:00
f9a2e7902f fix typo (#3221) 2024-12-06 10:39:15 -05:00
51fd482d6e [docs] update set-seed (#3228)
* update set-seed

* update comment
2024-12-06 10:38:59 -05:00
60461ff7c4 Fix: Resolve #3060, preload_module_classes is lost for nested modules (#3248)
* resolve 3060

* format

* add tests

* fix

* fix

* format
2024-12-03 13:44:59 +01:00
f8c77f0522 Revert default behavior of get_state_dict_from_offload (#3253)
* change default to None

Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>

* introduce move_to_device argument

Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>

* remove move_to_device

Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>

---------

Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
2024-12-02 13:47:02 -05:00
b626ef5f00 Select the DeepSpeedCPUOptimizer based on the original optimizer class. (#3255)
* Select the DeepSpeedCPUOptimizer based on the original optimizer class.

* abstract out optimizer selection to a deepspeed util

* add deepspeed cpu Adam & AdamW
2024-12-02 13:45:30 -05:00
dd68af886a Update troubleshooting.md (#3259)
The set_breakpoint and check_breakpoint terminology has become set_trigger and check_trigger
2024-12-02 13:41:10 -05:00
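For reference, the renamed API is typically used to stop every process once any one of them hits a condition. A minimal single-process sketch (the simulated NaN loss is only for illustration):

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()

for step in range(100):
    # Stand-in for a real training step; a NaN loss here simulates divergence.
    loss = torch.tensor(float("nan")) if step == 3 else torch.tensor(0.5)
    if not torch.isfinite(loss):
        accelerator.set_trigger()    # flag the problem on this process
    if accelerator.check_trigger():  # becomes True on all processes once any one triggered
        print(f"stopping early at step {step}")
        break
```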
11818e657b Fix: Resolve #3257 (#3261) 2024-12-02 13:41:00 -05:00
1f508a6df6 Update deferring_execution.md (#3262) 2024-12-02 13:40:33 -05:00
4a100eef43 support for wrapped schedulefree optimizer when using deepspeed (#3266)
* support for wrapped schedulefree optimizer when using deepspeed

* add comment and lint
2024-12-02 13:40:20 -05:00
c6f34a060f add xpu check (#3268) 2024-12-02 13:39:20 -05:00
29be478862 [WIP] FEAT Decorator to purge accelerate env vars (#3252)
* [WIP] FEAT Decorator to purge accelerate env vars

In some circumstances, calling certain classes or functions can result
in accelerate env vars being set and not being cleaned up afterwards. As
an example, when calling:

TrainingArguments(fp16=True, ...)

The following env var will be set:

ACCELERATE_MIXED_PRECISION=fp16

This can affect subsequent code, since the env var takes precedence over
TrainingArguments(fp16=False). This is especially relevant for unit testing,
where we want to avoid individual tests having side effects on one another.
Decorate the unit test function or the whole class with this decorator to
ensure that the env vars are cleaned up after each test. This works for both
unittest.TestCase and normal (pytest) classes; it also works when decorating
the parent class.

In its current state, this PR adds the new decorator and tests it, but
the decorator is not yet applied to potentially problematic functions or
classes.

* Linter

* Refactor code to be more readable

---------

Co-authored-by: [[ -z $EMAIL ]] && read -e -p "Enter your email (for git configuration): " EMAIL <muellerzr@gmail.com>
2024-11-25 12:04:56 -05:00
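A minimal sketch of what such a decorator can look like for a plain function, assuming the goal is simply to snapshot and restore every ACCELERATE_* variable; the real accelerate utility may differ in name and scope:

```python
import functools
import os


def purge_accelerate_env_vars(func):
    """Run `func`, then restore ACCELERATE_* env vars to their pre-call state (sketch)."""

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        before = {k: v for k, v in os.environ.items() if k.startswith("ACCELERATE_")}
        try:
            return func(*args, **kwargs)
        finally:
            for key in [k for k in os.environ if k.startswith("ACCELERATE_")]:
                if key in before:
                    os.environ[key] = before[key]  # restore the original value
                else:
                    del os.environ[key]            # drop vars added during the call

    return wrapper
```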
e11d3ceff3 Allow for full dynamo config passed to Accelerator (#3251)
* Allow for full dynamo config

* Clean
2024-11-22 15:18:15 -05:00
08101b9dde Use numpy._core instead of numpy.core (#3247)
* Update other.py

* Update other.py

* add missing import

* use Version instead of version.parse

* Update np_core import in save function
2024-11-21 17:06:21 +01:00
5f96369161 v1.2.0.dev 2024-11-20 19:24:51 -05:00
069743775e [docs] add instruction to install bnb on non-cuda devices (#3227)
* ad bnb installation link

* add period

* add xpu comment and fix some bugs

* style fix
2024-11-20 16:58:46 -05:00
77f2b6235e [data_loader] Optionally also propagate set_epoch to batch sampler (#3246)
* Optionally also propagate set_epoch to batch sampler

* Add simple batch sampler set_epoch test
2024-11-20 16:58:04 -05:00
d7b1b368e9 Add warnings and fallback for unassigned devices in infer_auto_device_map (#3066)
* feat: feat: Add warning for unassigned main devices

* refactor: Improve warning for unassigned main devices

* feat: impl fallback_allocate; fix output format

* fix: include last dot index in the iteration

* feat: incorporate fallback allocation into infer_auto_device_map

* Revert "feat: incorporate fallback allocation into infer_auto_device_map"

This reverts commit d607bfb530517478b90aa89c2a87a03c318a2e58.

* refactor: add helper functions and eliminate redundant variables

The fallback allocation will be reintroduced once the branching logic is fully refactored. This commit prepares the function infer_auto_device_map for further refactoring.

* refactor: simplify allocation logic by removing duplicates and reducing nesting

* feat: incorporate fallback allocation into infer_auto_device_map

Implemented fallback allocation to allow modules to be allocated to devices using BFS when regular allocation fails. This enhancement improves the allocation process by ensuring that at least one module is assigned to the device, even under tight memory constraints.

* fix: fix module splitting logic

* styles: fix styling errors

* test: add test coverage for no-warning cases

test_infer_auto_device_map and test_infer_auto_device_map_with_fallback_allocation now each have a no-warning test case.

Simplified and rewrote code sections that were made unreadable by the linter.

* refactor: simplify control flow in infer_auto_device_map

Added complete return type hinting for _init_infer_auto_device_map

* refactor: replace warnings.warn with logger.info for allocation failures

* fix: use assertLogs to capture no allocation warning messages correctly
2024-11-20 10:10:01 -05:00
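A toy sketch of the idea behind the fallback: when the regular allocation cannot place a top-level module, walk the module tree breadth-first and place the first submodule that fits in the remaining memory. The memory accounting below is deliberately simplified and is not the accelerate implementation:

```python
from collections import deque

import torch.nn as nn


def module_size_bytes(module: nn.Module) -> int:
    return sum(p.numel() * p.element_size() for p in module.parameters())


def bfs_fallback_allocate(model: nn.Module, free_bytes: int):
    """Return (name, module) of the first submodule that fits in `free_bytes`, in BFS order."""
    queue = deque(model.named_children())
    while queue:
        name, module = queue.popleft()
        if module_size_bytes(module) <= free_bytes:
            return name, module
        queue.extend((f"{name}.{child}", sub) for child, sub in module.named_children())
    return None  # nothing fits, even at the leaf level
```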
8ad2b3b8e7 [docs] update code in tracking documentation (#3235)
* update example code

* revert
2024-11-20 10:04:07 -05:00
e724c9a97f take care of case when "_tied_weights_keys" is not an attribute (#3226)
* take care of case when "_tied_weights_keys" is not an attribute

Signed-off-by: Yu Chin Fabian Lim <flim@sg.ibm.com>

* fix style

Signed-off-by: Yu Chin Fabian Lim <flim@sg.ibm.com>

---------

Signed-off-by: Yu Chin Fabian Lim <flim@sg.ibm.com>
2024-11-20 09:57:42 -05:00
cf169a1ae6 enable find_executable_batch_size on XPU (#3236)
* enable on XPU

* Update src/accelerate/utils/memory.py

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2024-11-19 12:29:05 -05:00
8ade23cc6a remove hook for bnb 4-bit (#3223)
* relax dispatch for bnb

* style
2024-11-15 17:29:41 +01:00
c0552c9012 Fix align_module_device, ensure only cpu tensors for get_state_dict_offloaded_model (#3217)
* only onload direct parameter descendants, move buffers to cpu, add tests

* remove no longer applicable comment
2024-11-05 13:39:53 +01:00
bf4572b6ce [Utils] align_module_device (#3204)
* implement align_module

* add docs

* move to modeling utils, integrate into existing source code

* update source, expose through utils

* Suggested docstring

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Rewrite for readability, add try finally

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Use try-finally when aligning with hook

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* apply style

* improve get_state_dict_from_offload readability

* Update docstring

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* rename to align_module_device, update docstring

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2024-11-01 09:05:50 -04:00
a4a44aca1f Update big_modeling.py (#3207) 2024-11-01 08:41:01 -04:00
b0e5fd353c add xpu (#3163) 2024-10-31 10:50:51 -04:00
8159c98d43 Models With Tied Weights Need Re-Tieing After FSDP Param Init (#3154)
* add fsdp_tool to retie after param init

* make it handle generic param_init_fn

* fix quality

Signed-off-by: Yu Chin Fabian Lim <flim@sg.ibm.com>

---------

Signed-off-by: Yu Chin Fabian Lim <flim@sg.ibm.com>
2024-10-31 10:50:28 -04:00
497eb3cf86 fix bug (#3166) 2024-10-31 09:08:20 -04:00
87732a4c32 take torch.nn.Module model into account when moving to device (#3167)
* bug fix

* update code
2024-10-31 09:08:00 -04:00
ffbca15979 eliminate dead code (#3198)
* eliminate dead code

* make style
2024-10-31 09:01:07 -04:00
ba7ab93f5e Update transformers.deepspeed references from transformers 4.46.0 release (#3196)
* Update dataclasses.py

* Update test_deepspeed.py
2024-10-24 19:42:45 -04:00
85f35647db 🚨 🚨 🚨 Goodbye Python 3.8! 🚨 🚨 🚨 (#3194) 2024-10-24 10:16:47 -04:00
2f39575bbd update Megatron-LM plugin code to version 0.8.0 or higher. (#3174)
* I have adapted the Megatron-LM plugin code to version 0.8.0 or higher.

* update megatron import in set_tensorboard_logging_options
2024-10-24 10:03:53 -04:00
1ace241db4 MLU devices : Checks if mlu is available via a cndev-based check which won't trigger the drivers and leave mlu uninitialized (#3187)
* Add Cambricon MLU accelerator support

* up mlu support for test

* fix mlu device MULTI_MLU

* Update src/accelerate/utils/imports.py

it's beautiful !

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* up mlu for quality check

* fix mlu device longTensor error

* fix mlu device tensor dtype check

* fix mlu device send_to_device with torch dynamo error

* Refactor AcceleratorState

* Should be near complete now

* Last missing piece

* Make my way to the acceleratorstate

* Include update to global var

* Don't use global

* gpu -> cuda

* Don't use update for dict, easier to read

* Fix tests

* stash

* Getting closer...

* Needed to spawn at the very end after env was setup

* Explain set_device before deepspeed

* Make docstring more accurate

* Early return instead

* Delineate blocks

* Make prepare_backend return state + backend for clarity/less magic

* fix mlu longtensor.to() bugs.

* fix MLU devices rng state save and load.

* Cambricon MLU features, Checks if `mlu` is available via an `cndev-based` check which won't trigger the drivers and leave mlu uninitialized.

* MLU devices : Checks if mlu is available via a cndev-based check which won't trigger the drivers and leave mlu uninitialized

* fix code style and quality

* fix is_cuda_available error

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-10-24 09:30:59 -04:00
78e1bdd088 Fix typo (#3191) 2024-10-23 14:11:15 -04:00
4dda5797bd [docs] use nn.module instead of tensor as model (#3157)
* use nn.module instead of tensor

Signed-off-by: Lin, Fanli <fanli.lin@intel.com>

* fix neptune

---------

Signed-off-by: Lin, Fanli <fanli.lin@intel.com>
2024-10-23 12:23:16 -04:00
1f4fbb77a2 docs: fix a wrong word in comment in src/accelerate/accelerate.py:1255 (#3183) 2024-10-23 12:15:00 -04:00
c809f8e45c [docs] update neptune API (#3181) 2024-10-23 12:14:52 -04:00
39dc2b120f fix bnb (#3186)
* bnb_4bit_compute_dtype is str

* fix error message

* fix _replace_with_bnb_layers of bnb.py in case of meta device

* undo with meta device in bnb.py
2024-10-23 17:08:52 +02:00
735dfa3018 [Utils] has_offloaded_params (#3188)
* implement has_offloaded_params

* update docstring

* expose to utils

* add docs

* apply style, quality

* add tests
2024-10-23 16:44:02 +02:00
a84327e596 enable cpu bnb distributed lora finetune (#3159)
* enable cpu bnb distributed lora finetune

* check bnb multi-backend
2024-10-15 13:56:55 +02:00
292954b547 fix version check bug in get_xpu_available_memory (#3165) 2024-10-14 10:21:25 -04:00
0e61127b5a Remove broken dynamo test (#3155) 2024-10-11 06:55:18 -04:00
6f79b63b86 Trigger weights_only=True by default for all compatible objects (#3036)
* rebase

* Update torch v

* Rename

* Prop to docs

* Actually reverse states

* Rebase fully

* Restore old state

* Keep as load()

* No need for explicit anymore

* Check numpy version, dtypes was added in 1.25

* Clean up diff

* Fix hang
2024-10-10 14:08:24 -04:00
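The PyTorch switch this relies on is the `weights_only` flag of `torch.load`, which restricts unpickling to tensors and plain containers. A small self-contained example:

```python
import torch

# Save a plain state dict...
torch.save({"weight": torch.zeros(2, 2)}, "checkpoint.pt")

# ...and load it with the safer code path: only tensors and simple containers are
# allowed, so an unexpected pickled object raises instead of executing code.
state = torch.load("checkpoint.pt", weights_only=True)
```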
1d2ca747f1 Fixup Zero3 + save_model (#3146)
* Fixup + test

* Easier diff

* Move os.makedirs to under return statement
2024-10-10 12:54:14 -04:00
cba3f2d5e0 support torch dynamo for deepspeed>=0.14.4 (#3069)
* compile after deepspeed 0.14.4

* fix

* fmt

* add test
2024-10-10 18:53:07 +02:00
f1f2b4d1a8 Adding multi gpu speech generation (#3149)
* skeleton code

* fix some errors for downloading the model

* fix some tqdm error

* fix some error

* fix some gpu errors with torch

* fix some gpu errors with torch

* testing simple way

* testing simple way

* testing simple way

* testing simple way

* actual code

* actual code

* final testing with serialization

* add multi_gpu speech generation

* fix some comments

* fix some style and quality
2024-10-10 12:40:15 -04:00
fd9880da91 POC: Allow for a data_seed (#3150) 2024-10-09 12:12:04 -04:00
21c994c298 Merge branch 'main' of https://github.com/huggingface/accelerate 2024-10-09 10:50:19 -04:00
52581c3f01 Change version 2024-10-09 10:50:12 -04:00
f4ee5a2dc7 Florence2 distributed inference example (#3123)
* Florence2 distributed inference example

* optimized

* Documentation
2024-10-09 05:49:05 -04:00
55136b8dc4 DS fix, continued (#3145) 2024-10-08 14:31:14 -04:00
fb68cb9d0e Refactor scaler to util (#3142)
* Refactor scaler to util

* Document

* Use the distributed_type directly
2024-10-08 11:07:01 -04:00
506d732230 Fixup DS issue with weakref (#3143)
* Fixup DS issue with weakref

* Clean
2024-10-08 11:04:13 -04:00
ae9cb6e4db Handle negative values for dim input in pad_across_processes (#3114)
* Handle negative values for dim

* Add tests for negative dimension
2024-10-08 16:01:26 +02:00
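The core of supporting a negative `dim` is normalizing it against the tensor rank before computing the padding. A minimal single-tensor sketch under that assumption (the distributed gathering of sizes in pad_across_processes is omitted):

```python
import torch
import torch.nn.functional as F


def pad_dim_to(tensor: torch.Tensor, dim: int, target_size: int, pad_value: float = 0.0) -> torch.Tensor:
    """Pad `tensor` along `dim` (negative values allowed) up to `target_size`."""
    dim = dim if dim >= 0 else dim + tensor.dim()  # e.g. -1 -> last dimension
    missing = target_size - tensor.size(dim)
    if missing <= 0:
        return tensor
    # F.pad lists pads starting from the last dimension, two values per dimension.
    pad = [0, 0] * (tensor.dim() - dim - 1) + [0, missing]
    return F.pad(tensor, pad, value=pad_value)
```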
127818fc27 MNT Permission for PRs for GH token in stale.yml (#3112)
Continuation of #3102.

The equivalent PR in PEFT (https://github.com/huggingface/peft/pull/2064)
successfully restored the stale bot's handling of PRs as well, hence the same
change is made for accelerate.
2024-10-07 09:35:36 -04:00
bcc13c00b5 typo of "scalar" instead of "scaler" (#3116) 2024-10-07 09:34:34 -04:00
d4d6b6e7f5 fix tip brackets typo (#3129) 2024-10-07 09:34:24 -04:00
1077611552 only move model to device when model is in cpu and target device is xpu (#3133) 2024-10-07 09:34:08 -04:00
YH
cd93e35e08 🐛 [HotFix] Handle Profiler Activities Based on PyTorch Version (#3136) 2024-10-07 09:33:23 -04:00
e93b056687 fix deprecated torch.cuda.amp.GradScaler FutureWarning for pytorch 2.4+ (#3132)
* fix deprecated FutureWarning for pytorch 2.4+

* perform `make style` and `make quality`

* try to fix `Quality Check` on `actions/workflows/quality.yml`

* undo changes for `src/accelerate/utils/memory.py`

* adapt scaler for pytorch.__version__

* fix scaler warning for npu device depending on pytorch 2.4 version check

* fallback to default npu scaler

* fallback to default `GradScaler` doc
2024-10-07 09:26:59 -04:00
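The gist of the fix is to pick the non-deprecated scaler class when it exists. A small sketch of such a version switch (the actual accelerate logic, including the NPU fallback mentioned above, is more involved):

```python
import torch
from packaging.version import Version


def make_grad_scaler(device_type: str = "cuda", **kwargs):
    """Return a GradScaler without tripping the PyTorch 2.4+ deprecation warning (sketch)."""
    if Version(torch.__version__.split("+")[0]) >= Version("2.4.0"):
        return torch.amp.GradScaler(device_type, **kwargs)  # device-generic class
    return torch.cuda.amp.GradScaler(**kwargs)              # older releases
```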
5060574827 remove cpu restriction for bnb training (#3062)
* rm cpu restriction for 8-bit training

* check bnb version

* def is bnb multi backend available

* fix log
2024-09-30 14:50:29 +02:00
018a99e5f6 Fixup multiple model DS tests (#3131)
* Multiple model multi GPU fixed, different issues than torch

* Fix multiple-model issues
2024-09-26 12:57:16 -04:00
4305033f80 add xpu skip (#3119) 2024-09-18 19:13:16 +02:00
4617be3760 Switch to XLA instead of TPU (#3118) 2024-09-18 04:13:32 +02:00
521eb5bee4 Fixup test_sync w/ deprecated stuff (#3109) 2024-09-13 10:16:52 -04:00
9f9951325c Patch: fix cpu flag never being set as true 2024-09-13 08:47:05 -04:00
e9e5a73fcc POC: multiple model/configuration DeepSpeed support (#3097)
* Bookmark

* Migratory

* Uncomment

* Rm name to model for now

* Rm container

* Left: test

* Allow only wrapping one model

* Add warning but only ref once

* Refine

* Update src/accelerate/accelerator.py

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

* Finish stas nits

* Clean

* Fixup test + test writing

* Fully working

* Fin

* Nit

* Quality

* Update src/accelerate/accelerator.py

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

* Actionable error

* Make note of when its enabled

* Apply suggestions from code review

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

* Merge tests

* Merge

* Add currently broken test script

* Push the working implementation

* Fin

* Add guards for user behavior

* Test nits

* TODO: finish knowledge distillation example

* Update tests/deepspeed/test_deepspeed_multiple_model.py

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Allow for dict-like interface

* Get rid of disable

* Uncomment

* Complete rewrite to force a dict to be used

* Working tests/fin

* Use name as stas suggestion

* Clean

* docnit

* toctree

* toctree

* Missing ref

* Put in break

* Smaller diff

* Make note on how to use zeroinit

* Make note about accelerator ds plugin

* More docnits

* Apply suggestions from code review

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Limit users to not pass in another ds plugin to another accelerator

* not implemented err + Make a note about why no params

* Apply suggestions from code review from Stas

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

* Add deepspeed_plugins arg + update doc

* Plugin -> plugins

* Change enable() -> select()

* Update ref properly + test

* Be consistent, model1,model2...

* first_, second_

* A few more auto values

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>

---------

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
2024-09-13 07:28:06 -04:00
79a8426416 🚨🚨🚨 The Great Deprecation 🚨🚨🚨 (#3098)
* The great purge

* Clean

* Some more fixings

* Some more deprecations Benjamin found

* Fix kwarghandler test
2024-09-12 21:12:32 -04:00
8a43837cc9 [docs] More docstrings (#3108) 2024-09-12 15:28:36 -04:00
a768b2b753 No more t5 (#3107) 2024-09-12 13:27:15 -04:00
85b1a03552 Update image ref for docs (#3105)
* Update image

* Fin
2024-09-11 15:44:39 -04:00
fc52fa969e [docs] Doc sprint (#3099)
* docs sprint

* youtube id

* feedback
2024-09-11 13:31:47 -04:00
3a670bd0da MAINT: Permission for GH token in stale.yml (#3102)
See https://github.com/huggingface/peft/pull/2061 in PEFT.

This restores the functionality of the stale bot after permissions for
the token have been limited. The action still shows errors for PEFT but
the bot appears to work fine.
2024-09-11 13:27:15 -04:00
b32d8bcb75 [docs] DataLoaderConfiguration docstring (#3103) 2024-09-11 13:26:56 -04:00
d5b7b70e06 MS-AMP support (w/o FSDP) (#3093)
* MS-AMP support sans FSDP

* Fix import

* Fixings

* Last Benjamin nit

* New ruff version cleaning

* Update src/accelerate/accelerator.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2024-09-10 12:25:45 -04:00
1ce2eb6385 Revert "Enable Unwrapping for Model State Dicts (FSDP) (#2959)" (#3096)
This reverts commit f35cbd1f023db2c7a4972388df3a34274cca7939.
2024-09-10 17:37:22 +02:00
3fd02e60dc MAINT: Upgrade ruff to v0.6.4 (#3095)
* MNT Upgrade ruff to 0.6.4

Currently used version, 0.2.1, is quite old at this point.

Not a lot needed to be changed:

- Change ruff version in setup.py
- Remove deprecated ignore-init-module-imports option for ruff
- Type comparison should use is and not ==
- Use f-string instead of % formatting
- Some line wrapping and empty lines

* Oops
2024-09-10 10:43:37 -04:00
ed9a574564 Update README.md to include distributed image generation gist (#3077)
* Update README.md to include distributed image generation gist

* add script
2024-09-10 10:42:35 -04:00
7d3bbe721b fix skip_keys usage in forward hooks (#3088)
* fix skip_keys

* fix linting
2024-09-10 14:12:17 +02:00
4b4c036933 use the correct available memory API for XPU (#3076)
* fix

* update

* remove blank line

* update

* add check

* add  imports

* warning for both

* reformat
2024-09-09 10:31:31 -04:00
e7e01812df fix bug in _get_named_modules (#3052)
* bug fix

* bug fix
2024-09-06 18:30:45 +02:00
5ad982ac51 Support sequential cpu offloading with torchao quantized tensors (#3085) 2024-09-06 08:49:23 +02:00
9d67867ad9 Re-enable setting state dict type (#3084) 2024-09-05 12:56:26 -04:00
52b3421d8f Fix three typos in src/accelerate/data_loader.py (#3082)
* Update data_loader.py

Fix a typo in line 678: "datalaoder" -> "dataloader"

* Fix typos in data_loader.py
2024-09-05 11:38:47 -04:00
f1ca8ac78f Allow DataLoaderAdapter subclasses to be pickled by implementing __reduce__ (#3074)
* initial fix for breaking accelerator pickling

* cleanup

* skip_first_batches should be used on raw dls

* multigpu sanity test

* bugs

* does this work with iterable dsets?

* fix typo

* ignore these commits, i'm just syncing the origin so i can test on my cloud workstation

* comment out failing tests, unsure if those are existing bugs or a recent regression

* torch 2.4.0?

* pickling generator issues

* test_pickle_accelerator

* test_pickle_accelerator should work now)

* base.__len__() -> len(base)

* undo reduce

* undo super().__reduce__() again

* pass args through superclass

* remove prints

* doc changes + make style && make quality
2024-09-05 11:25:37 -04:00
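A generic sketch of the pattern in the PR title: the wrapper remembers how it was constructed, and `__reduce__` rebuilds it from those arguments instead of trying to pickle iterator state. Class and helper names here are illustrative, not the accelerate `DataLoaderAdapter`:

```python
import pickle

import torch
from torch.utils.data import DataLoader, TensorDataset


def _rebuild_loader(cls, args, kwargs):
    return cls(*args, **kwargs)


class PicklableLoader(DataLoader):
    """Toy DataLoader subclass that stays picklable by remembering its constructor args."""

    def __init__(self, dataset, **kwargs):
        super().__init__(dataset, **kwargs)
        self._init_args = (dataset,)
        self._init_kwargs = kwargs

    def __reduce__(self):
        # Rebuild from the original constructor arguments instead of pickling internals.
        return (_rebuild_loader, (self.__class__, self._init_args, self._init_kwargs))


loader = PicklableLoader(TensorDataset(torch.arange(8)), batch_size=2)
restored = pickle.loads(pickle.dumps(loader))
```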
ab89fc7e1d Fix FSDP auto_wrap using characters instead of full str for layers (#3075) 2024-09-04 12:44:32 -04:00
b5235f21d8 0.35.0.dev 2024-09-02 18:18:42 -04:00
8931e5e48c Remove skip_first_batches support for StatefulDataloader and fix all the tests (#3068)
* Pippy tests - good

* Fix dataloader example tests

* SD issue

* Rm test

* Docs

* Rm from doc
2024-09-02 18:14:24 -04:00
a84859242d Speed up tests by shaving off subprocess when not needed (#3042)
* bookmark

* Continue making improvements

* Bookmark

* More

* Format
2024-09-02 12:12:55 -04:00
758d6243a7 add set_epoch for MpDeviceLoaderWrapper (#3053)
* add set_epoch for MpDeviceLoaderWrapper

* fix one over-indented space
2024-09-02 11:47:39 -04:00
b07ad2adf2 Fix typo in comment (#3045)
* Fix typo in comment

* Fix typo in comment: quality check
2024-09-02 11:47:04 -04:00
1d09a20fc1 use duck-typing to ensure underlying optimizer supports schedulefree hooks (#3055)
* use duck-typing to ensure underlying optimizer supports schedulefree hooks

* fixup
2024-09-02 11:43:18 -04:00
3fcc9461c4 Do not import transformer_engine on import (#3056)
* Do not import `transformer_engine` on import

* fix message

* add test

* Update test_imports.py

* resolve comment 1/2

* resolve comment 1.5/2

* lint

* more lint

* Update tests/test_imports.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* fmt

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-08-28 09:06:13 -04:00
939ce400cb Update torchpippy (#2938)
* rm warning

* Take 3

* Take 4

* Annotate

* Take 6

* Updated

* Spec

* Last fix

* Don't pad input

* Finished

* Continue refactor

* Rm comment

* Adjust the err

* Start adjustment

* GPT2 works, T5 does not

* llama too now I think

* Flag the t5 example
2024-08-26 14:21:13 -04:00
c2120927b0 Add FP8 docker images (#3048)
* Add fp8 docker images

* Add more docker images

* Rv

* bring back ds

* Less diffy

* No need for sep tag
2024-08-26 12:12:34 -04:00
654e1d9984 Add a SLURM example with minimal config (#2950)
* Add an example with minimal config

* Improve

* Even more minimal

* Rm slurm arg

* Update examples/slurm/submit_multinode_fsdp.sh

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2024-08-26 10:38:10 -04:00
8c3aded21a Update CONTRIBUTING.md Setup Instructions (#3046) 2024-08-26 10:22:29 -04:00
2789933938 Decouple prepare_data_loader() from Accelerator (#3047) 2024-08-26 10:19:59 -04:00
726140cad2 Fixup dataloader state dict bugs + incorporate load/save_state API (#3034)
* v1

* More testing, need to try on H100

* Bigger batch for h100 test

* test tweak

* Fixup all tests!

* Bookmark

* Fix issues, working now

* rm num samples

* Uncomment

* Give stateful dl end of dl

* Make skip DL stateful

* Migrate to update_state_dict

* try/finally

* Add comments to test

* rm comment

* Document

* refactor out for eventual override

* Doc nit

* Brute force it
2024-08-23 15:13:33 -04:00
2d4f1dda7e Fix batch_sampler maybe None error (#3025)
* Fix batch_sampler maybe None

For more details, see: https://github.com/huggingface/accelerate/issues/3011

* Update test_data_loader.py

Add unit test for dataloader with batch_size=None when using Iterabledataset

* Update tests/test_data_loader.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Fix inconsistent indentation

Fix inconsistent indentation

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-08-22 20:02:33 -04:00
c0cf860dc6 Fix fp8 benchmark on single GPU (#3032) 2024-08-22 16:54:32 -04:00
ad3f574a3b Add early support for torchdata.stateful_dataloader.StatefulDataLoader within the Accelerator (#2895)
* temporary commit

* checkout?

* dataloader wrapper

* tmp

* weird failing test

* trying multiple inheritance

* DataLoaderAdapter

* make style

* Some dark magic dynamic reflection (for backwards compat)

* typo

* some tests

* more mixin stuff

* maybe found broken test?

* this is a very invasive feature

* i think the feature is done?

* add xpu support (#2864)

* better tests

* discovered a bug

* maybe fixed bug?

* make style

* hopefully this is PR ready

* properly skip tests

* parameterize

* temporary commit

* checkout?

* dataloader wrapper

* tmp

* weird failing test

* trying multiple inheritance

* DataLoaderAdapter

* make style

* Some dark magic dynamic reflection (for backwards compat)

* typo

* some tests

* more mixin stuff

* maybe found broken test?

* this is a very invasive feature

* i think the feature is done?

* better tests

* discovered a bug

* maybe fixed bug?

* make style

* hopefully this is PR ready

* properly skip tests

* parameterize

* Update src/accelerate/utils/dataclasses.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update src/accelerate/data_loader.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* merge conflicts

* move imports

* make style

* merges are breaking tests

* fix test name

* Require safetensors>=0.4.3

* undo last commit

* minor style

* address pr comments

* Torchdata version 0.8.0 is stable now

* added docs and require torchdata>=0.8.0 for testing

* test base_dataloader attr doesn't cause infinite recursion

* address pr

* replace super().__iter__ with self.base_dataloader.__iter__

---------

Co-authored-by: Fanli Lin <fanli.lin@intel.com>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-08-22 08:43:45 -04:00
1a6af0bd6d Improve config handling and add a zoo (#3029)
* Improve config handling and add a zoo

* Docs

* rm comment

* Tweak doc
2024-08-20 10:40:21 -04:00
52fae0960c Add end_training/destroy_pg to everything and unpin numpy (#3030)
* Add end_training/destroy_pg to everything

* Carry over to AcceleratorState

* If forked, ignore

* More numpy fun

* Skip only init
2024-08-20 10:40:12 -04:00
7ffe7662ca Fix torch version check (#3024)
* Fix torch version check

* Adjust to simply change the FSDP pytorch v

* Forgot one, but keep consistent
2024-08-19 11:42:20 -04:00
5536a3a893 Set correct NPU backend and distributed_type when using transfer_to_npu (#3021)
* fix npu setting

* fix npu setting

* add code comments

---------

Co-authored-by: yangyuanhang7 <yangyuanhang7@jd.com>
2024-08-19 11:18:16 -04:00
7ec8eab955 Tweak defaults for quantized-typed FP8 TE weights (#3018)
* Tweak defaults

* Can't forget about CLI

* Update docs
2024-08-19 07:47:54 -04:00
589fddd317 destroy process group in end_training (#3012)
* destroy process group

* rephrase

* style

* fix on_main_process
2024-08-15 08:31:21 -04:00
99c69aaf73 Wrong import check for TE (#3016) 2024-08-15 07:06:38 -04:00
00785cd9fc fix default value for rank size in cpu threads_per_process assignment logic (#3009)
* fix default value for rank size

* fix style

* apply int in case ratio is decimal

* style quality fix
2024-08-14 21:49:38 -04:00
a452327e8e Enable FSDP & Deepspeed + FP8 (#2983)
* Working version rebased from main

* kwargs

* Clean

* Fix more nits

* Fin

* Delay autocast flag

* Enable FP8 autocast during eval only if specified

* Fin

* Rm comment

* All done

* Zero3 works!

* Let the wrapper come off during unwrap_model

* Add import check

* Migrate all to benchmarks folder and make TE import check work

* Add readme

* Add README to benchmarks folder

* Update CLI to now include fp8 args

* Add test config for 0_34

* Finish adding to config yaml

* Write docs

* Expound docs w/ FP8

* Add to toctree
2024-08-14 14:57:01 -04:00
851cf34351 Fix find_tied_params for models with shared layers (#2986)
* Add test case

* Fix find_tied_params

* Sort params in test

* Refactor variable naming, add comments

* Apply suggestions from code review

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Fix docstrings quality

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-08-13 08:27:26 -04:00
cd5698bb32 update version to 0.34.dev0 (#3007) 2024-08-12 12:13:37 -04:00
90d5023901 Add small util to enable FSDP offloading quickly (#3006)
* Wrap up util

* Add small util

* Update doc

* Don't req

* Clean
2024-08-12 11:53:02 -04:00
3bde615607 Make env variables optional for FSDP (#2998)
* Bookmark

* Tests pass!

* Fix imports

* Try with raw dict

* Make diff easier

* Add defaults to all relevant areas

* Rest of refactor

* Fix all of benjamin's nits

* Adjust logic based on Benjamin's feedback

* Adjust for new logic
2024-08-12 11:01:50 -04:00
dc3b5ad82e Fix deepspeed tests (#3003)
* Unpin deepspeed

* Include proper branch for docker image

* Properly working

* Revert all other changes
2024-08-09 15:35:25 -04:00
12a5befdd6 clear memory after offload (#2994) 2024-08-09 09:36:33 +02:00
79ca85c27d Support skip_first_batches for XLA (#2966)
* Fix skip_first_batches for XLA

* Use state to check XLA

* Change to PartialState
2024-08-08 08:55:44 -04:00
13d93c4f50 Fix typo on warning str: "meta device device" -> "meta device" (#2997) 2024-08-07 13:30:48 +02:00
d982751aec Explicit check for step when loading the state (#2992)
* Explicit check

* Nit
2024-08-06 12:26:51 -04:00
95edc68cb3 Fix gated test (#2993)
* Fix gated test

* Clean

* Finally, adjust test
2024-08-06 11:50:15 -04:00
288accc0ec Fix bug of clip_grad_norm_ for xla fsdp (#2941)
* fix bug of clip_grad_norm_ for xla

* modify
2024-08-01 16:58:21 -04:00
83b0610155 remove .md to allow proper linking (#2977) 2024-08-01 11:52:59 -04:00
386f7d2825 add MLU devices for rng state saving and loading. (#2940)
* Add Cambricon MLU accelerator support

* up mlu support for test

* fix mlu device MULTI_MLU

* Update src/accelerate/utils/imports.py

it's beautiful !

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* up mlu for quality check

* fix mlu device longTensor error

* fix mlu device tensor dtype check

* fix mlu device send_to_device with torch dynamo error

* Refactor AcceleratorState

* Should be near complete now

* Last missing piece

* Make my way to the acceleratorstate

* Include update to global var

* Don't use global

* gpu -> cuda

* Don't use update for dict, easier to read

* Fix tests

* stash

* Getting closer...

* Needed to spawn at the very end after env was setup

* Explain set_device before deepspeed

* Make docstring more accurate

* Early return instead

* Delineate blocks

* Make prepare_backend return state + backend for clarity/less magic

* fix mlu longtensor.to() bugs.

* fix MLU devices rng state save and load.

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-07-31 16:33:15 -04:00
308a8e9689 chore: Update runs-on configuration for CI workflows (#2981)
Signed-off-by: Adrien <adrien@huggingface.co>
2024-07-31 16:24:36 -04:00
f35cbd1f02 Enable Unwrapping for Model State Dicts (FSDP) (#2959)
Signed-off-by: Alex-Brooks <Alex.Brooks@ibm.com>
2024-07-31 16:03:23 -04:00
a14260c9da Fix torchvision to be compatible with torch version in CI (#2982)
* skip test due to torchvision issue

* Revert "skip test due to torchvision issue"

This reverts commit b12b6b4ffafea6ec6c65b9721a30b8a54bf7af1e.

* change min version

* test upgrade

* exact version

* update

* add back
2024-07-31 18:16:12 +02:00
32f368ec3f Require safetensors>=0.4.3 (#2957) 2024-07-29 07:35:34 -04:00
415eddf1be feat(ci): add pip caching in CI (#2952) 2024-07-22 16:55:08 -04:00
230857691a Properly handle Params4bit in set_module_tensor_to_device (#2934)
* Properly handle Params4bit in set_module_tensor_to_device

* Add comment to explain Params4bit skipping shape check for set_module_tensor_to_device
2024-07-22 08:42:49 -04:00
a5a3e57125 Add torch.float8_e4m3fn format dtype_byte_size (#2945)
* add new format

* check torch version

* style
2024-07-20 03:07:07 +02:00
0af1d8b8de delete CCL env var setting (#2927)
* delete CCL env var setting

* fix format
2024-07-17 22:15:46 -04:00
d16d7371a1 Improve test reliability for Accelerator.free_memory() (#2935) 2024-07-16 08:40:51 -04:00
7a5c231b9e Consider pynvml available when installed through the nvidia-ml-py distribution (#2936) 2024-07-16 08:40:16 -04:00
4f02bb764a Fix import test (#2931)
* Fix import test

* Tweak threshold
2024-07-15 11:13:23 -04:00
YH
709fd1e42b Hotfix PyTorch Version Installation in CI Workflow for Minimum Version Matrix (#2889)
* Fix ci torch version matrix

* Patch torch minor version
2024-07-15 10:31:12 -04:00
f4f1260a0e Correct loading of models with shared tensors when using accelerator.load_state() (#2875)
* Enabled correct loading of models with shared tensors when using accelerator.load_state()

* removed unused import

* added a test for a model with shared weights

* removed unnecessary bits

* fixed linting errors
2024-07-15 10:29:17 -04:00
c6da9f8693 Allow multiple process per device (#2916)
* Allow more processes than devices

* Accept suggestion

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-07-15 10:18:15 -04:00
3ebbe573ad Add huggingface_hub version to setup.py (#2932) 2024-07-15 10:11:41 -04:00
24bf5ec546 add xpu device check before moving tensor directly to xpu device (#2928)
* add ipex check

* fix type

* fix bug
2024-07-15 09:30:22 -04:00
e1247de01e Better error when a bad directory is given for weight merging (#2852) 2024-07-12 13:20:00 -04:00
12a007d559 Support MUSA (Moore Threads GPU) backend in accelerate (#2917) 2024-07-10 13:42:28 +02:00
5bdcd7e169 fix: bug where multi_gpu was being set and warning being printed even with num_processes=1 (#2921)
Signed-off-by: Harikrishnan Balagopal <harikrishmenon@gmail.com>
2024-07-08 12:06:30 -04:00
2471eacdd6 Fix slowdown on init with device_map="auto" (#2914) 2024-07-04 09:10:21 -04:00
167cb5eb20 [tests] fix bug in torch_device (#2909) 2024-07-04 06:44:40 -04:00
947f64ee62 Version update 2024-07-03 13:27:34 -04:00
8330b375d4 Fix get_backend bug and add clear_device_cache function (#2857)
* added clear_device_cache

* set lambda: 0 for mps and cpu
2024-07-03 06:59:10 -04:00
92404fbf5f fix load_state_dict for xpu and refine xpu safetensor version check (#2879)
* add fix

* update warning

* no and
2024-07-03 06:36:36 -04:00
3a02754915 add require_triton and enable test_dynamo work on xpu (#2878) 2024-07-03 04:52:09 -04:00
fec1170e35 fix mlu device longTensor bugs (#2887)
* Add Cambricon MLU accelerator support

* up mlu support for test

* fix mlu device MULTI_MLU

* Update src/accelerate/utils/imports.py

it's beautiful !

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* up mlu for quality check

* fix mlu device longTensor error

* fix mlu device tensor dtype check

* fix mlu device send_to_device with torch dynamo error

* Refactor AcceleratorState

* Should be near complete now

* Last missing piece

* Make my way to the acceleratorstate

* Include update to global var

* Don't use global

* gpu -> cuda

* Don't use update for dict, easier to read

* Fix tests

* stash

* Getting closer...

* Needed to spawn at the very end after env was setup

* Explain set_device before deepspeed

* Make docstring more accurate

* Early return instead

* Delineate blocks

* Make prepare_backend return state + backend for clarity/less magic

* fix mlu longtensor.to() bugs.

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-07-03 04:50:11 -04:00
eac206f063 make more cuda-only tests device-agnostic (#2876)
* enable 3 cases

* add ests

* add 2 more

* revert 1 back

* revert 1 more

* enable on xpu
2024-07-03 04:49:53 -04:00
6882ff2bea Added a MultiCPU SLURM example using Accelerate Launch and MPIRun (#2902)
* initial commit for slurm multicpu script

* changed output path

* Added multicpu example using accelerate + mpirun + slurm

* removed file

* rename file

* deleted file

* refactored for cleanliness

* updated docs

* fixed variable names

* quality update

* test fix

* addressed review comments

* fix typo for activateEnvironment.sh

* added ACCELERATE path

* Edit wording

Co-authored-by: Dina Suehiro Jones <dina.s.jones@intel.com>

* added back mistakenly deleted line

---------

Co-authored-by: Dina Suehiro Jones <dina.s.jones@intel.com>
2024-07-03 04:14:02 -04:00
57a4c7465e Add XLA Dynamo backends for training and inference (#2892) 2024-07-03 04:10:13 -04:00
YH
404510a5ec Make log_line_prefix_template Optional in Elastic Launcher for Backward Compatibility (#2888)
* Fix unexpected keyword argument err for elastic launch config

* Update torch version flow

* Del log prefix template from env vars
2024-07-03 04:06:08 -04:00
3086e26db9 Speed up imports and add a CI (#2845)
* Working test

* Timing cleanup

* Add CI

* Fix nits

* Mixup imports

* Clean

* tuna -> tuna-interpreter

* Refactor pippy imports

* Accelerator

* Fin

* Fin

* Keep specific ones for docs
2024-07-01 18:50:18 -04:00
YH
5d5d07abfc Add Profiler Support for Performance Analysis (#2883)
* Add torch profiler

* Add example

* Fix rank 0 saving

* Add docstring

* Add profile readme

* Fix minor

* Fix example path

* Add exp test code

* Rename profile dir

* Change readme

* Change save format

* Minor

* Enhance docstring example

* Add user guide

* Add memory profile guide

* Enhance error msg

* Fix type hinting

* Minor refactor

* Fix hf tag

* Fix copyright year

* Mv toctree

* Fix image path

* Fix license year

* Change profiler pattern name

* Update package reference

* Add slow decorator

* Check output value
2024-07-01 18:01:09 -04:00
5a0b7dc597 Support saving and loading of step while saving and loading state (#2765)
* Add feature to save step when saving state

* Update docstring for `load_accelerate_state`
2024-07-01 14:57:19 -04:00
c799c198e9 add xpu support (#2864) 2024-06-26 14:56:13 +02:00
1f7a79b428 Potentially fix tests (#2862)
* Potentially fix tests

* Try again with numpy sub 2
2024-06-18 11:38:30 +02:00
4cc3530b64 [tests] skip bnb-related tests instead of failing on xpu (#2860)
* fix requirement

* add one more

* add one more case

* remove files

* remove more file

* bug fix

* revert
2024-06-18 11:22:03 +02:00
5d4a3beb01 [tests] use torch_device instead of 0 for device check (#2861)
* bug fix

* fix one more case

* add more cases

* refine
2024-06-18 10:01:52 +02:00
0284f9a9f6 [tests] fix bug in test_tracking.ClearMLTest (#2863) 2024-06-17 21:40:45 +02:00
573d22d48f Default FSDP weights merge to safetensors (#2853) 2024-06-17 11:23:17 +02:00
13ca7dccb6 Drop torch re-imports in npu and mlu paths (#2856)
Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
2024-06-14 07:13:59 -04:00
3b5a00e048 xpu: support xpu backend from stock pytorch (>=2.4) (#2825)
Fixes: https://github.com/huggingface/transformers/issues/31237

The XPU backend is available in stock PyTorch starting from
version 2.4, see [1]. This commit extends Hugging Face Accelerate
to support XPU from both IPEX and stock PyTorch, with IPEX being
tried first.

See: https://github.com/pytorch/pytorch/issues/114842

Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
2024-06-13 11:20:30 -04:00
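A rough sketch of the detection order the commit describes, trying IPEX first and then falling back to the stock torch.xpu backend; the function name is illustrative:

```python
import importlib.util

import torch


def xpu_is_available() -> bool:
    """Detect an XPU device, preferring IPEX, then stock PyTorch >= 2.4 (illustrative)."""
    if importlib.util.find_spec("intel_extension_for_pytorch") is not None:
        import intel_extension_for_pytorch  # noqa: F401  (registers torch.xpu on older PyTorch)
    return hasattr(torch, "xpu") and torch.xpu.is_available()
```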
3c4eaedd46 Refactor logging to use logger in dispatch_model (#2855) 2024-06-13 11:18:48 -04:00
YH
c0faec766c Add DDP Communication Hooks (#2841)
* Add ddp comm hook

* Fix dataclass order

* Merge ddp grad hook to ddp kwargs handler

* Reset ddp kwargs key

* Add test

* Fix test case

* Split ddp grad test

* Fix test case

* Enhance docstring

* Minor

* Use naive baseenum for ddp comm hook type

* Add by feature example

* Add multi device deco

* Add user guide

* Update examples/by_feature/ddp_comm_hook.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update examples/by_feature/ddp_comm_hook.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Add wrapper and state option details

* Update toctree

* Update docs/source/usage_guides/ddp_comm_hook.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/usage_guides/ddp_comm_hook.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/usage_guides/ddp_comm_hook.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/usage_guides/ddp_comm_hook.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/usage_guides/ddp_comm_hook.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/usage_guides/ddp_comm_hook.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/usage_guides/ddp_comm_hook.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Mv ddp comm hook index

* Fix ddp comm hook user guid

* Del empty line

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2024-06-13 10:34:20 -04:00
91a2599f93 Auto create dir when merging FSDP weights (#2854) 2024-06-13 05:32:22 -04:00
5f9235a731 Remove underlines between badges (#2851) 2024-06-12 15:30:28 -04:00
7a36a75c7c remove warning hook addede during dispatch_model (#2843)
* remove-warning-hook

* add _accelerate_added_attributes

* add comments
2024-06-12 16:24:45 +02:00
f62854a281 Revert "Slight rename" (#2850)
This reverts commit a9869ea0dc49652e49607d5f111caed79ed5cb67.
2024-06-12 08:10:13 -04:00
a9869ea0dc Slight rename 2024-06-11 10:15:28 -04:00
6d59614603 doc: fix link (#2844) 2024-06-11 07:41:09 -04:00
2d74c0c077 fix(ci): remove unnecessary permissions (#2842) 2024-06-10 05:35:19 -04:00
40007b4e97 feat(ci): add trufflehog secrets detection (#2836) 2024-06-07 18:29:14 +02:00
7141881b1f Push new release version 2024-06-07 10:05:51 -04:00
f0049b2cfb Use shard saving from huggingface_hub (#2795)
* use shard saving from huggingface hub

* move import

* add shard_checkpoint back but with deprecation msg

* add shard_checkpoint back
2024-06-07 10:03:46 -04:00
271 changed files with 22102 additions and 3319 deletions

View File

@ -37,11 +37,11 @@ members/contributors who may be interested in your PR.
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
- Big modeling: @SunMarc
- Fully-Sharded Data Parallism: @muellerzr
- DeepSpeed: @muellerzr
- Command Line Interface: @muellerzr
- Documentation: @muellerzr
- Core parts of the library: @muellerzr @BenjaminBossan @SunMarc
- Maintained examples: @muellerzr or @SunMarc
- Fully-Sharded Data Parallism: @SunMarc @zach-huggingface
- DeepSpeed: @SunMarc @zach-huggingface
- Command Line Interface: @SunMarc @zach-huggingface
- Documentation: @SunMarc @zach-huggingface
- Core parts of the library: @BenjaminBossan @SunMarc @zach-huggingface
- Maintained examples: @SunMarc or @zach-huggingface
-->

View File

@ -15,13 +15,14 @@ jobs:
outputs:
version: ${{ steps.step1.outputs.version }}
steps:
- uses: actions/checkout@v3.1.0
- uses: actions/checkout@4
- id: step1
run: echo "version=$(python setup.py --version)" >> $GITHUB_OUTPUT
version-cpu:
name: "Latest Accelerate CPU [version]"
runs-on: [self-hosted, intel-cpu, 8-cpu, ci]
runs-on:
group: aws-general-8-plus
needs: get-version
steps:
- name: Set up Docker Buildx
@ -41,7 +42,8 @@ jobs:
version-cuda:
name: "Latest Accelerate GPU [version]"
runs-on: [self-hosted, single-gpu, nvidia-gpu, t4, ci]
runs-on:
group: aws-g6-4xlarge-plus
needs: get-version
steps:
- name: Set up Docker Buildx
@ -61,7 +63,8 @@ jobs:
version-cuda-deepspeed:
name: "Latest Accelerate GPU DeepSpeed [version]"
runs-on: [self-hosted, single-gpu, nvidia-gpu, t4, ci]
runs-on:
group: aws-g6-4xlarge-plus
needs: get-version
steps:
- name: Set up Docker Buildx
@ -79,3 +82,23 @@ jobs:
push: true
tags: huggingface/accelerate:gpu-deepspeed-release-${{needs.get-version.outputs.version}}
version-cuda-fp8-transformerengine:
name: "Latest Accelerate GPU FP8 TransformerEngine [version]"
runs-on:
group: aws-g6-4xlarge-plus
needs: get-version
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Build and Push GPU
uses: docker/build-push-action@v4
with:
file: docker/accelerate-gpu/Dockerfile
push: true
tags: huggingface/accelerate:gpu-fp8-transformerengine-release-${{needs.get-version.outputs.version}}

View File

@ -16,13 +16,13 @@ jobs:
outputs:
changed: ${{ steps.was_changed.outputs.changed }}
steps:
- uses: actions/checkout@v3.1.0
- uses: actions/checkout@v4
with:
fetch-depth: "2"
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v41
uses: tj-actions/changed-files@3f54ebb830831fc121d3263c1857cfbdc310cdb9 #v42
- name: Was setup changed
id: was_changed
@ -47,4 +47,4 @@ jobs:
run-integration-tests:
needs: build-docker-containers
if: always()
uses: ./.github/workflows/self_hosted_integration_tests.yml
uses: ./.github/workflows/self_hosted_integration_tests.yml

View File

@ -13,7 +13,8 @@ concurrency:
jobs:
latest-cpu:
name: "Latest Accelerate CPU [dev]"
runs-on: [self-hosted, intel-cpu, 8-cpu, ci]
runs-on:
group: aws-general-8-plus
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
@ -29,7 +30,7 @@ jobs:
- name: Build and Push CPU
uses: docker/build-push-action@v4
with:
file: docker/accelerate-cpu/Dockerfile
file: docker/accelerate-cpu/Dockerfile
push: true
tags: |
huggingface/accelerate:cpu-nightly
@ -37,7 +38,8 @@ jobs:
latest-cuda:
name: "Latest Accelerate GPU [dev]"
runs-on: [self-hosted, nvidia-gpu, t4, ci]
runs-on:
group: aws-g6-4xlarge-plus
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
@ -53,7 +55,7 @@ jobs:
- name: Build and Push GPU
uses: docker/build-push-action@v4
with:
file: docker/accelerate-gpu/Dockerfile
file: docker/accelerate-gpu/Dockerfile
push: true
tags: |
huggingface/accelerate:gpu-nightly
@ -61,7 +63,8 @@ jobs:
latest-cuda-deepspeed:
name: "Latest Accelerate GPU DeepSpeed [dev]"
runs-on: [self-hosted, nvidia-gpu, t4, ci]
runs-on:
group: aws-g6-4xlarge-plus
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
@ -77,9 +80,37 @@ jobs:
- name: Build and Push GPU
uses: docker/build-push-action@v4
with:
file: docker/accelerate-gpu-deepspeed/Dockerfile
file: docker/accelerate-gpu-deepspeed/Dockerfile
push: true
tags: |
huggingface/accelerate:gpu-deepspeed-nightly
huggingface/accelerate:gpu-deepspeed-nightly-${{ env.date }}
latest-cuda-fp8-transformerengine:
name: "Latest Accelerate GPU FP8 TransformerEngine [dev]"
runs-on:
group: aws-g6-4xlarge-plus
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Get current date
id: date
run: |
echo "date=$(date '+%Y-%m-%d')" >> $GITHUB_ENV
# Get the previous month
echo "base_year=$(date -d 'last month' '+%y')" >> $GITHUB_ENV
echo "base_month=$(date -d 'last month' '+%m')" >> $GITHUB_ENV
- name: Build and Push GPU
uses: docker/build-push-action@v4
with:
file: benchmarks/fp8/transformer_engine/Dockerfile
push: true
tags: huggingface/accelerate:gpu-fp8-transformerengine-nightly-${{ env.date }}
build-args: |
BASE_YEAR=${{ env.base_year }}
BASE_MONTH=${{ env.base_month }}

.github/workflows/fp8_runner.yml (new file, 37 lines)
View File

@ -0,0 +1,37 @@
name: Test FP8 Runner
on:
workflow_dispatch:
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
jobs:
set-prev-day:
runs-on: ubuntu-latest
outputs:
prev-day: ${{ steps.set-prev-day.outputs.prev-day }}
steps:
- name: Set PREV_DAY
id: set-prev-day
run: |
PREV_DAY=$(date -d "yesterday" '+%Y-%m-%d')
echo "prev-day=$PREV_DAY" >> $GITHUB_OUTPUT
run-fp8-tests:
needs: set-prev-day
runs-on:
group: aws-g6e-12xlarge
container:
image: huggingface/accelerate:gpu-fp8-transformerengine-nightly-${{ needs.set-prev-day.outputs.prev-day }}
options: --gpus all --shm-size "16gb"
steps:
- uses: actions/checkout@v3
- name: Install the library
run: |
pip install -e .[test_prod,test_fp8]
- name: Show installed libraries
run: |
pip freeze
- name: Run TE FP8 tests
run: |
python -m pytest -s -v ./tests/test_fp8.py

.github/workflows/gaudi3_scheduled.yml (new file, 87 lines)
View File

@ -0,0 +1,87 @@
name: Gaudi3 tests (scheduled)
on:
workflow_dispatch:
schedule: # every day at 6 AM UTC
- cron: "0 6 * * *"
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
run-gaudi3-tests:
runs-on:
group: itac-bm-emr-gaudi3-dell-2gaudi
container:
image: docker://vault.habana.ai/gaudi-docker/1.21.1/ubuntu22.04/habanalabs/pytorch-installer-2.6.0:latest
options: --runtime=habana --shm-size=64G --cap-add=sys_nice --env HABANA_VISIBLE_DEVICES
env:
OMPI_MCA_btl_vader_single_copy_mechanism: none
PT_ENABLE_INT64_SUPPORT: 1
PT_HPU_LAZY_MODE: 0
RUN_SLOW: 1
steps:
- name: HL-SMI (1)
run: |
hl-smi
echo "HABANA_VISIBLE_DEVICES=${HABANA_VISIBLE_DEVICES}"
echo "HABANA_VISIBLE_MODULES=${HABANA_VISIBLE_MODULES}"
- name: Extract HPU visible modules
id: add-modules
run: |
export HABANA_VISIBLE_MODULES=$(hl-smi -Q module_id -f csv,noheader | tr '\n' ',' | sed 's/,$//')
echo "HABANA_VISIBLE_MODULES=${HABANA_VISIBLE_MODULES}" >> $GITHUB_ENV
- name: HL-SMI (2)
run: |
hl-smi
echo "HABANA_VISIBLE_DEVICES=${HABANA_VISIBLE_DEVICES}"
echo "HABANA_VISIBLE_MODULES=${HABANA_VISIBLE_MODULES}"
- name: Checkout to Accelerate
uses: actions/checkout@v4
- name: Install Accelerate with Transformers & DeepSpeed
run: |
pip install -e .[testing] \
git+https://github.com/HabanaAI/DeepSpeed.git@1.20.0 \
git+https://github.com/huggingface/transformers.git
- name: Run CLI tests
if: ${{ !cancelled() && (success() || failure()) }}
run: |
make test_cli
- name: Run Core tests
if: ${{ !cancelled() && (success() || failure()) }}
run: |
make test_core
- name: Run Big Modeling tests
if: ${{ !cancelled() && (success() || failure()) }}
run: |
make test_big_modeling
- name: Run DeepSpeed integration tests
if: ${{ !cancelled() && (success() || failure()) }}
run: |
make test_deepspeed
- name: Run FSDP integration tests
if: ${{ !cancelled() && (success() || failure()) }}
run: |
make test_fsdp
- name: Run TP integration tests
if: ${{ !cancelled() && (success() || failure()) }}
run: |
make test_tp
- name: Run Examples tests
if: ${{ !cancelled() && (success() || failure()) }}
run: |
make test_examples

View File

@ -26,11 +26,13 @@ jobs:
strategy:
fail-fast: false
steps:
- uses: actions/checkout@v3.1.0
- name: Set up python 3.8
uses: actions/setup-python@v3
- uses: actions/checkout@v4
- name: Set up python 3.9
uses: actions/setup-python@v5
with:
python-version: 3.8
python-version: 3.9
cache: 'pip'
cache-dependency-path: 'setup.py'
- name: Install Accelerate from source
run: |

View File

@ -13,7 +13,8 @@ env:
jobs:
run_core_tests_single_gpu:
runs-on: [self-hosted, single-gpu, nvidia-gpu, t4, ci]
runs-on:
group: aws-g6-4xlarge-plus
env:
CUDA_VISIBLE_DEVICES: "0"
TEST_TYPE: "single_gpu"
@ -43,7 +44,7 @@ jobs:
run: |
source activate accelerate
make test
- name: Run examples on GPUs
working-directory: accelerate
if: always()
@ -51,7 +52,7 @@ jobs:
source activate accelerate
pip uninstall comet_ml -y
make test_examples
- name: Generate Report
working-directory: accelerate
if: always()
@ -60,7 +61,8 @@ jobs:
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
run_deepspeed_tests_single_gpu:
runs-on: [self-hosted, single-gpu, nvidia-gpu, t4, ci]
runs-on:
group: aws-g6-4xlarge-plus
env:
CUDA_VISIBLE_DEVICES: "0"
TEST_TYPE: "single_gpu_deepspeed"
@ -105,7 +107,7 @@ jobs:
source activate accelerate
pip uninstall comet_ml -y
make test_examples
- name: Generate Report
working-directory: accelerate
if: always()
@ -114,7 +116,8 @@ jobs:
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
run_core_tests_multi_gpu:
runs-on: [self-hosted, multi-gpu, nvidia-gpu, t4, ci]
runs-on:
group: aws-g6-12xlarge-plus
env:
CUDA_VISIBLE_DEVICES: "0,1"
TEST_TYPE: "multi_gpu"
@ -170,7 +173,8 @@ jobs:
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
run_deepspeed_tests_multi_gpu:
runs-on: [self-hosted, multi-gpu, nvidia-gpu, t4, ci]
runs-on:
group: aws-g6-12xlarge-plus
env:
CUDA_VISIBLE_DEVICES: "0,1"
TEST_TYPE: "multi_gpu_deepspeed"
@ -223,7 +227,7 @@ jobs:
pip install slack_sdk tabulate
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
run-integration-tests:
if: always()
uses: ./.github/workflows/self_hosted_integration_tests.yml
uses: ./.github/workflows/self_hosted_integration_tests.yml

.github/workflows/pr_style_bot.yml (new file, 19 lines)
View File

@ -0,0 +1,19 @@
# To run this bot, comment "@bot /style" on a PR
name: Style Bot
on:
issue_comment:
types: [created]
permissions:
contents: write
pull-requests: write
jobs:
style:
uses: huggingface/huggingface_hub/.github/workflows/style-bot-action.yml@main
with:
python_quality_dependencies: "[quality]"
style_command_type: "default"
secrets:
bot_token: ${{ secrets.GITHUB_TOKEN }}

View File

@ -6,11 +6,13 @@ jobs:
quality:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3.1.0
- name: Set up Python 3.8
uses: actions/setup-python@v3
- uses: actions/checkout@v4
- name: Set up Python 3.9
uses: actions/setup-python@v5
with:
python-version: 3.8
python-version: 3.9
cache: 'pip'
cache-dependency-path: 'setup.py'
- name: Install Python dependencies
run: pip install -e .[quality]
- name: Run Quality check

View File

@ -10,7 +10,8 @@ env:
jobs:
run_core_tests_single_gpu:
runs-on: [self-hosted, single-gpu, nvidia-gpu, t4, ci]
runs-on:
group: aws-g6-4xlarge-plus
env:
CUDA_VISIBLE_DEVICES: "0"
container:
@ -39,7 +40,7 @@ jobs:
run: |
source activate accelerate;
make test_cli
- name: Run test on GPUs
working-directory: accelerate
if: always()
@ -62,7 +63,8 @@ jobs:
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
run_deepspeed_tests_single_gpu:
runs-on: [self-hosted, single-gpu, nvidia-gpu, t4, ci]
runs-on:
group: aws-g6-4xlarge-plus
env:
CUDA_VISIBLE_DEVICES: "0"
container:
@ -85,7 +87,7 @@ jobs:
run: |
source activate accelerate;
pip freeze
- name: Run test on GPUs
working-directory: accelerate
if: always()
@ -101,7 +103,8 @@ jobs:
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
run_core_tests_multi_gpu:
runs-on: [self-hosted, multi-gpu, nvidia-gpu, t4, ci]
runs-on:
group: aws-g6-12xlarge-plus
env:
CUDA_VISIBLE_DEVICES: 0,1
container:
@ -147,7 +150,8 @@ jobs:
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
run_deepspeed_tests_multi_gpu:
runs-on: [self-hosted, multi-gpu, nvidia-gpu, t4, ci]
runs-on:
group: aws-g6-12xlarge-plus
container:
image: huggingface/accelerate:gpu-deepspeed-nightly
options: --gpus all --shm-size "16gb"
@ -181,4 +185,4 @@ jobs:
if: always()
run: |
pip install tabulate;
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY


@ -1,7 +1,7 @@
# CI for specifically ensuring integrations work fine (`transformers` mainly) on GPUs
# Useful tips:
# - `working-directory` should be set to the root of the repo, which is cloned on the actual CI runner.
# It follows the directory structure of `actions-runner/_work/{repo_name}/{repo_name}/{cloned_repo} on
# It follows the directory structure of `actions-runner/_work/{repo_name}/{repo_name}/{cloned_repo} on
# prem, but in Actions setting `working-directory` looks just in the `{repo_name}` level.
# - New integrations to test should have its own job, and follow a strategy method where we check both
# the pypi and github versions.
@ -25,12 +25,13 @@ jobs:
container:
image: huggingface/accelerate:gpu-deepspeed-nightly
options: --gpus all --shm-size "16gb"
runs-on: [self-hosted, multi-gpu, nvidia-gpu, t4, ci]
runs-on:
group: aws-g6-12xlarge-plus
strategy:
fail-fast: false
matrix:
cuda_visible_devices: [
"0",
"0",
"0,1"
]
steps:
@ -51,7 +52,7 @@ jobs:
pip install -e .[testing];
pip uninstall comet_ml wandb dvclive -y
cd ..;
- name: Show installed libraries
run: |
source activate accelerate;
@ -90,12 +91,13 @@ jobs:
container:
image: huggingface/accelerate:gpu-nightly
options: --gpus all --shm-size "16gb"
runs-on: [self-hosted, multi-gpu, nvidia-gpu, t4, ci]
runs-on:
group: aws-g6-12xlarge-plus
strategy:
fail-fast: false
steps:
- name: Install accelerate
run:
run:
source activate accelerate;
git clone https://github.com/huggingface/accelerate;
cd accelerate;
@ -122,4 +124,4 @@ jobs:
working-directory: skorch/
run: |
source activate accelerate;
pytest -sv -k TestAccelerate
pytest -sv -k TestAccelerate


@ -10,19 +10,24 @@ jobs:
name: Close Stale Issues
if: github.repository == 'huggingface/accelerate'
runs-on: ubuntu-latest
permissions:
issues: write
pull-requests: write
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
steps:
- uses: actions/checkout@v3.1.0
- uses: actions/checkout@v4
- name: Setup Python
uses: actions/setup-python@v3
uses: actions/setup-python@v5
with:
python-version: 3.8
python-version: 3.9
cache: 'pip'
cache-dependency-path: 'setup.py'
- name: Install requirements
run: |
pip install PyGithub
- name: Close stale issues
run: |
python utils/stale.py
python utils/stale.py


@ -38,19 +38,21 @@ jobs:
test_rest
]
steps:
- uses: actions/checkout@v3.1.0
- name: Set up python 3.8
uses: actions/setup-python@v3
- uses: actions/checkout@v4
- name: Set up python 3.9
uses: actions/setup-python@v5
with:
python-version: 3.8
python-version: 3.9
cache: 'pip'
cache-dependency-path: 'setup.py'
- name: Install the library
run: |
if [[ ${{ matrix.test-kind }} = test_prod ]]; then pip install -e .[test_prod]; fi
if [[ ${{ matrix.test-kind }} != test_prod ]]; then pip install -e .[testing,test_trackers]; fi
if [[ ${{ matrix.test-kind }} = test_rest ]]; then pip uninstall comet_ml -y; fi
if [[ ${{ matrix.test-kind }} = minimum ]]; then pip install torch==1.10.0; fi
pip install pytest-reportlog tabulate setuptools
if [[ ${{ matrix.pytorch-version }} = minimum ]]; then pip install torchvision==0.18.1 torch==2.3.1; fi
pip install pytest-reportlog tabulate setuptools importlib_metadata
- name: Show installed libraries
run: |
@ -65,4 +67,4 @@ jobs:
- name: Generate Report
if: always()
run: |
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY

.github/workflows/test_imports.yml (new file, +55 lines)

@ -0,0 +1,55 @@
name: Run Import Tests
on:
pull_request:
paths:
- "src/**"
- "tests/**"
- ".github/**"
- "examples/**"
- "setup.py"
types: [opened, synchronize, reopened]
env:
HF_HOME: ~/hf_cache
TESTING_MOCKED_DATALOADERS: "1"
IS_GITHUB_CI: "1"
jobs:
run-tests:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
pytorch-version: [
latest,
minimum,
]
steps:
- uses: actions/checkout@v4
- name: Set up python 3.9
uses: actions/setup-python@v5
with:
python-version: 3.9
cache: 'pip'
cache-dependency-path: 'setup.py'
- name: Install the library
run: |
pip install -e .
pip install pytest-reportlog tabulate setuptools git+https://github.com/muellerzr/import-timer
- name: Show installed libraries
run: |
pip freeze
- name: Run Import Tests
env:
PYTORCH_VERSION: ${{ matrix.pytorch-version }}
run: |
pytest -sv tests/test_imports.py
- name: Generate Report
if: always()
run: |
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY

.github/workflows/trufflehog.yml (new file, +15 lines)

@ -0,0 +1,15 @@
on:
push:
name: Secret Leaks
jobs:
trufflehog:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Secret Scanning
uses: trufflesecurity/trufflehog@main

.gitignore (+3 lines)

@ -142,3 +142,6 @@ wandb
# ruff
.ruff_cache
models


@ -123,12 +123,15 @@ Follow these steps to start contributing:
4. Set up a development environment by running the following command in a conda or a virtual environment you've created for working on this library:
```bash
$ pip install -e ".[quality]"
$ pip install -e ".[dev]"
```
This will install all testing and linting/code quality dependencies for the library (see `quality`, `test_dev`,
`test_prod` targets in [`setup.py`](./setup.py)).
(If accelerate was already installed in the virtual environment, remove
it with `pip uninstall accelerate` before reinstalling it in editable
mode with the `-e` flag.)
mode with the `-e` flag).
Alternatively, if you are using [Visual Studio Code](https://code.visualstudio.com/Download), the fastest way to get set up is by using
the provided Dev Container. Documentation on how to get started with dev containers is available [here](https://code.visualstudio.com/docs/remote/containers).


@ -23,22 +23,32 @@ style:
doc-builder style src/accelerate docs/source --max_len 119
# Run tests for the library
test_big_modeling:
python -m pytest -s -v ./tests/test_big_modeling.py ./tests/test_modeling_utils.py $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_big_modeling.log",)
test_core:
python -m pytest -s -v ./tests/ --ignore=./tests/test_examples.py --ignore=./tests/deepspeed --ignore=./tests/test_big_modeling.py \
--ignore=./tests/fsdp --ignore=./tests/test_cli.py $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_core.log",)
python -m pytest -s -v ./tests/ \
--ignore=./tests/test_big_modeling.py \
--ignore=./tests/test_modeling_utils.py \
--ignore=./tests/test_examples.py \
--ignore=./tests/test_cli.py \
--ignore=./tests/deepspeed \
--ignore=./tests/fsdp \
--ignore=./tests/tp \
$(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_core.log",)
test_cli:
python -m pytest -s -v ./tests/test_cli.py $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_cli.log",)
test_big_modeling:
python -m pytest -s -v ./tests/test_big_modeling.py ./tests/test_modeling_utils.py $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_big_modeling.log",)
test_deepspeed:
python -m pytest -s -v ./tests/deepspeed $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_deepspeed.log",)
test_fsdp:
python -m pytest -s -v ./tests/fsdp $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_fsdp.log",)
test_tp:
python -m pytest -s -v ./tests/tp $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_tp.log",)
# Since the new version of pytest will *change* how things are collected, we need `deepspeed` to
# run after test_core and test_cli
test:
@ -47,13 +57,14 @@ test:
$(MAKE) test_big_modeling
$(MAKE) test_deepspeed
$(MAKE) test_fsdp
$(MAKE) test_tp
test_examples:
python -m pytest -s -v ./tests/test_examples.py $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_examples.log",)
# Broken down example tests for the CI runners
test_integrations:
python -m pytest -s -v ./tests/deepspeed ./tests/fsdp $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_integrations.log",)
python -m pytest -s -v ./tests/deepspeed ./tests/fsdp ./tests/tp $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_integrations.log",)
test_example_differences:
python -m pytest -s -v ./tests/test_examples.py::ExampleDifferenceTests $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_example_diff.log",)
@ -70,3 +81,21 @@ test_prod:
test_rest:
python -m pytest -s -v ./tests/test_examples.py::FeatureExamplesTests -k "not by_step and not by_epoch" $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_rest.log",)
# For developers to prepare a release
prepare_release:
rm -rf dist build
python setup.py bdist_wheel sdist
# Make sure this is run in a fresh venv of some form
install_test_release:
pip uninstall accelerate -y
pip install -i https://testpypi.python.org/pypi --extra-index-url https://pypi.org/simple accelerate$(if $(version),==$(version),)
# Run as `make target=testpypi upload_release`
upload_release:
@if [ "$(target)" != "testpypi" ] && [ "$(target)" != "pypi" ]; then \
echo "Error: target must be either 'testpypi' or 'pypi'"; \
exit 1; \
fi
twine upload dist/* -r $(target)


@ -22,22 +22,12 @@ limitations under the License.
<p align="center">
<!-- Uncomment when CircleCI is set up
<a href="https://circleci.com/gh/huggingface/accelerate">
<img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/master">
</a>
<a href="https://circleci.com/gh/huggingface/accelerate"><img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/master"></a>
-->
<a href="https://github.com/huggingface/accelerate/blob/main/LICENSE">
<img alt="License" src="https://img.shields.io/github/license/huggingface/accelerate.svg?color=blue">
</a>
<a href="https://huggingface.co/docs/accelerate/index.html">
<img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/accelerate/index.html.svg?down_color=red&down_message=offline&up_message=online">
</a>
<a href="https://github.com/huggingface/accelerate/releases">
<img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/accelerate.svg">
</a>
<a href="https://github.com/huggingface/accelerate/blob/main/CODE_OF_CONDUCT.md">
<img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
</a>
<a href="https://github.com/huggingface/accelerate/blob/main/LICENSE"><img alt="License" src="https://img.shields.io/github/license/huggingface/accelerate.svg?color=blue"></a>
<a href="https://huggingface.co/docs/accelerate/index.html"><img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/accelerate/index.html.svg?down_color=red&down_message=offline&up_message=online"></a>
<a href="https://github.com/huggingface/accelerate/releases"><img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/accelerate.svg"></a>
<a href="https://github.com/huggingface/accelerate/blob/main/CODE_OF_CONDUCT.md"><img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"></a>
</p>
<h3 align="center">
@ -167,6 +157,8 @@ accelerate launch --multi_gpu --num_processes 2 examples/nlp_example.py
To learn more, check the CLI documentation available [here](https://huggingface.co/docs/accelerate/package_reference/cli).
Or view the configuration zoo [here](https://github.com/huggingface/accelerate/blob/main/examples/config_yaml_templates/)
## Launching multi-CPU run using MPI
🤗 Here is another way to launch multi-CPU run using MPI. You can learn how to install Open MPI on [this page](https://www.open-mpi.org/faq/?category=building#easy-build). You can use Intel MPI or MVAPICH as well.
@ -266,7 +258,7 @@ pip install accelerate
- multi-GPU on several nodes (machines)
- TPU
- FP16/BFloat16 mixed precision
- FP8 mixed precision with [Transformer Engine](https://github.com/NVIDIA/TransformerEngine)
- FP8 mixed precision with [Transformer Engine](https://github.com/NVIDIA/TransformerEngine) or [MS-AMP](https://github.com/Azure/MS-AMP/)
- DeepSpeed support (Experimental)
- PyTorch Fully Sharded Data Parallel (FSDP) support (Experimental)
- Megatron-LM support (Experimental)


@ -1,46 +1,5 @@
# Big model inference benchmarks
# Benchmarks
Running inference with Accelerate on big models.
The folders below contain suites to test various functionalities in Accelerate.
## Setup
These benchmarks use the `transformers` library:
```bash
pip install transformers
```
To reproduce or test a new setup, run
```py
python inference_acc.py model_name
```
This script supports `gpt-j-6b`, `gpt-neox`, `opt` (30B version) and `T0pp` out of the box, but you can specify any valid checkpoint for `model_name`.
To force a different `torch_dtype` than the one in the config: `--torch_dtype xxx`.
If you get an error linked to disk offload, you need to add the option `--disk-offload`
## Results
On a setup with two Titan RTXs (24GB of RAM) and 32GB of RAM, we get the following benchmarks (T0pp does not run in float16, which is why it's not included).
| Model | Model load time | Generation time | dtype | GPU 0 use | GPU 1 use | CPU use | Disk offload |
|:-----:|:---------------:|:---------------:|:-----:|:---------:|:---------:|:-------:|:------------:|
| GPT-J-6B | 8.7s | 0.05s per token | float16 | 11.7GB | 0GB | 0GB | no |
| GPT-J-6B | 12.4s | 0.06s per token | float32 | 21.9GB | 1.5GB | 0GB | no |
| GPT-Neo-X-20B | 30.9s | 0.08s per token | float16 | 21.5GB | 18GB | 0GB | no |
| GPT-Neo-X-20B | 78.2s | 10.72s per token | float32 | 20.3GB | 22.7 GB | 24.4GB | yes |
| T0pp (11B) | 29.4s | 0.05s per token | float32 | 21.1GB | 21.3GB | 0GB | no |
| OPT-30B | 34.5s | 2.37s per token | float16 | 20.7GB | 22.3GB | 14.1GB | no |
| OPT-30B | 112.3s | 33.9s per token | float32 | 20.2GB | 21.2GB | 23.5GB | yes |
Note on the results:
- using two GPUs instead of one does not slow down generation
- using CPU offload slows down a bit (see OPT-30b)
- using disk offload slows down a lot (need to implement prefetching)
You will also note that Accelerate does not use anymore GPU and CPU RAM than necessary:
- peak GPU memory is exactly the size of the model put on a given GPU
- peak CPU memory is either the size of the biggest checkpoint shard or the part of the model offloaded on CPU, whichever is bigger.
See their relevant README.md's for more information.


@ -0,0 +1,46 @@
# Big model inference benchmarks
Running inference with Accelerate on big models.
## Setup
These benchmarks use the `transformers` library:
```bash
pip install transformers
```
To reproduce or test a new setup, run
```py
python big_model_inference.py model_name
```
This script supports `gpt-j-6b`, `gpt-neox`, `opt` (30B version) and `T0pp` out of the box, but you can specify any valid checkpoint for `model_name`.
To force a different `torch_dtype` than the one in the config: `--torch_dtype xxx`.
If you get an error linked to disk offload, you need to add the option `--disk-offload`
## Results
On a setup with two Titan RTXs (24GB of RAM) and 32GB of RAM, we get the following benchmarks (T0pp does not run in float16, which is why it's not included).
| Model | Model load time | Generation time | dtype | GPU 0 use | GPU 1 use | CPU use | Disk offload |
|:-----:|:---------------:|:---------------:|:-----:|:---------:|:---------:|:-------:|:------------:|
| GPT-J-6B | 8.7s | 0.05s per token | float16 | 11.7GB | 0GB | 0GB | no |
| GPT-J-6B | 12.4s | 0.06s per token | float32 | 21.9GB | 1.5GB | 0GB | no |
| GPT-Neo-X-20B | 30.9s | 0.08s per token | float16 | 21.5GB | 18GB | 0GB | no |
| GPT-Neo-X-20B | 78.2s | 10.72s per token | float32 | 20.3GB | 22.7 GB | 24.4GB | yes |
| T0pp (11B) | 29.4s | 0.05s per token | float32 | 21.1GB | 21.3GB | 0GB | no |
| OPT-30B | 34.5s | 2.37s per token | float16 | 20.7GB | 22.3GB | 14.1GB | no |
| OPT-30B | 112.3s | 33.9s per token | float32 | 20.2GB | 21.2GB | 23.5GB | yes |
Note on the results:
- using two GPUs instead of one does not slow down generation
- using CPU offload slows down a bit (see OPT-30b)
- using disk offload slows down a lot (need to implement prefetching)
You will also note that Accelerate does not use any more GPU and CPU RAM than necessary:
- peak GPU memory is exactly the size of the model put on a given GPU
- peak CPU memory is either the size of the biggest checkpoint shard or the part of the model offloaded on CPU, whichever is bigger.


@ -18,6 +18,12 @@ import time
import psutil
import torch
from accelerate.test_utils.testing import get_backend
torch_device_type, _, _ = get_backend()
torch_accelerator_module = getattr(torch, torch_device_type, torch.cuda)
class PeakCPUMemory:
def __init__(self):
@ -54,16 +60,16 @@ def start_measure():
measures = {"time": time.time()}
gc.collect()
torch.cuda.empty_cache()
torch_accelerator_module.empty_cache()
# CPU mem
measures["cpu"] = psutil.Process().memory_info().rss
cpu_peak_tracker.start()
# GPU mem
for i in range(torch.cuda.device_count()):
measures[str(i)] = torch.cuda.memory_allocated(i)
torch.cuda.reset_peak_memory_stats()
for i in range(torch_accelerator_module.device_count()):
measures[str(i)] = torch_accelerator_module.memory_allocated(i)
torch_accelerator_module.reset_peak_memory_stats()
return measures
@ -73,16 +79,16 @@ def end_measure(start_measures):
measures = {"time": time.time() - start_measures["time"]}
gc.collect()
torch.cuda.empty_cache()
torch_accelerator_module.empty_cache()
# CPU mem
measures["cpu"] = (psutil.Process().memory_info().rss - start_measures["cpu"]) / 2**20
measures["cpu-peak"] = (cpu_peak_tracker.stop() - start_measures["cpu"]) / 2**20
# GPU mem
for i in range(torch.cuda.device_count()):
measures[str(i)] = (torch.cuda.memory_allocated(i) - start_measures[str(i)]) / 2**20
measures[f"{i}-peak"] = (torch.cuda.max_memory_allocated(i) - start_measures[str(i)]) / 2**20
for i in range(torch_accelerator_module.device_count()):
measures[str(i)] = (torch_accelerator_module.memory_allocated(i) - start_measures[str(i)]) / 2**20
measures[f"{i}-peak"] = (torch_accelerator_module.max_memory_allocated(i) - start_measures[str(i)]) / 2**20
return measures
@ -90,9 +96,9 @@ def end_measure(start_measures):
def log_measures(measures, description):
print(f"{description}:")
print(f"- Time: {measures['time']:.2f}s")
for i in range(torch.cuda.device_count()):
print(f"- GPU {i} allocated: {measures[str(i)]:.2f}MiB")
for i in range(torch_accelerator_module.device_count()):
print(f"- {torch_device_type} {i} allocated: {measures[str(i)]:.2f}MiB")
peak = measures[f"{i}-peak"]
print(f"- GPU {i} peak: {peak:.2f}MiB")
print(f"- {torch_device_type} {i} peak: {peak:.2f}MiB")
print(f"- CPU RAM allocated: {measures['cpu']:.2f}MiB")
print(f"- CPU RAM peak: {measures['cpu-peak']:.2f}MiB")


@ -0,0 +1,12 @@
FROM ghcr.io/azure/msamp
RUN pip install transformers evaluate datasets
RUN git clone https://github.com/huggingface/accelerate
RUN cd accelerate && \
pip install -e . && \
cd benchmarks/fp8
CMD ["bash"]


@ -0,0 +1,123 @@
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This script tests to ensure that `accelerate` performs at the same level as raw `MS-AMP`.
This particular script verifies this for DDP training.
"""
import evaluate
import msamp
import torch
from fp8_utils import evaluate_model, get_training_utilities
from torch.nn.parallel import DistributedDataParallel as DDP
from accelerate import Accelerator
from accelerate.state import AcceleratorState
from accelerate.utils import FP8RecipeKwargs, get_grad_scaler, set_seed
MODEL_NAME = "bert-base-cased"
METRIC = evaluate.load("glue", "mrpc")
def train_baseline(opt_level="O2"):
set_seed(42)
scaler = get_grad_scaler()
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(MODEL_NAME)
accelerator = Accelerator()
device = accelerator.device
model, optimizer = msamp.initialize(model, optimizer, opt_level=opt_level)
model.to(device)
# Convert the model to DDP
device_ids, output_device = [accelerator.local_process_index], accelerator.local_process_index
model = DDP(model, device_ids=device_ids, output_device=output_device)
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.train()
for i, batch in enumerate(train_dataloader):
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
outputs = model(**batch)
loss = outputs.loss
scaler.scale(loss).backward()
optimizer.step()
optimizer.zero_grad()
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
def train_integration(opt_level="O2"):
kwargs_handlers = [FP8RecipeKwargs(backend="msamp", opt_level=opt_level)]
AcceleratorState()._reset_state(True)
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=kwargs_handlers)
set_seed(42)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
MODEL_NAME, accelerator=accelerator
)
model, optimizer = accelerator.prepare(model, optimizer)
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.train()
for i, batch in enumerate(train_dataloader):
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
outputs = model(**batch)
loss = outputs.loss
accelerator.backward(loss)
optimizer.step()
optimizer.zero_grad()
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
if __name__ == "__main__":
for opt_level in ["O1", "O2"]:
baseline_not_trained, baseline_trained = train_baseline(opt_level)
accelerator_not_trained, accelerator_trained = train_integration(opt_level)
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
f"Accuracy not the same for untrained baseline and accelerator using opt_level={opt_level}: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
)
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
f"F1 not the same for untrained baseline and accelerator using opt_level={opt_level}: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
)
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
f"Accuracy not the same for trained baseline and accelerator using opt_level={opt_level}: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
)
assert baseline_trained["f1"] == accelerator_trained["f1"], (
f"F1 not the same for trained baseline and accelerator using opt_level={opt_level}: {baseline_trained['f1']} == {accelerator_trained['f1']}"
)


@ -0,0 +1,161 @@
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This script tests to ensure that `accelerate` performs at the same level as raw `MS-AMP`.
This particular script verifies this for DeepSpeed training.
NOTE: MS-AMP does *not* support ZeRO-3.
"""
# import msamp.deepspeed as msamp_deepspeed
import evaluate
import torch
from fp8_utils import evaluate_model, get_training_utilities
from msamp import deepspeed as msamp_deepspeed
from accelerate import Accelerator, DeepSpeedPlugin
from accelerate.state import AcceleratorState
from accelerate.utils import set_seed
MODEL_NAME = "bert-base-cased"
METRIC = evaluate.load("glue", "mrpc")
def train_baseline(zero_stage: int = 1, opt_level: str = "O1"):
set_seed(42)
accelerator = Accelerator()
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
MODEL_NAME, accelerator=accelerator
)
import numpy as np
config = {
"train_batch_size": 32,
"train_micro_batch_size_per_gpu": 16,
"gradient_accumulation_steps": 1,
"zero_optimization": {
"stage": zero_stage,
"offload_optimizer": {"device": "none", "nvme_path": None},
"offload_param": {"device": "none", "nvme_path": None},
},
"gradient_clipping": 1.0,
"steps_per_print": np.inf,
"bf16": {"enabled": True},
"fp16": {"enabled": False},
"zero_allow_untested_optimizer": True,
"msamp": {
"enabled": True,
"opt_level": opt_level,
},
}
(
model,
optimizer,
_,
_,
) = msamp_deepspeed.initialize(
model=model,
optimizer=optimizer,
config_params=config,
)
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.train()
for _ in range(2):
for batch in train_dataloader:
outputs = model(**batch)
loss = outputs.loss
model.backward(loss)
model.step()
for _ in range(accelerator.num_processes):
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.destroy()
torch.cuda.empty_cache()
AcceleratorState()._reset_state(True)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
def train_integration(zero_stage: int = 1, opt_level: str = "O1"):
set_seed(42)
deepspeed_plugin = DeepSpeedPlugin(
zero_stage=zero_stage,
enable_msamp=True,
msamp_opt_level=opt_level,
)
accelerator = Accelerator(mixed_precision="fp8", deepspeed_plugin=deepspeed_plugin)
accelerator.state.deepspeed_plugin.deepspeed_config["train_micro_batch_size_per_gpu"] = 16
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
MODEL_NAME, accelerator=accelerator
)
model, optimizer, lr_scheduler = accelerator.prepare(model, optimizer, lr_scheduler)
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.train()
for _ in range(2):
for batch in train_dataloader:
outputs = model(**batch)
loss = outputs.loss
accelerator.backward(loss)
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.destroy()
torch.cuda.empty_cache()
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
AcceleratorState()._reset_state(True)
return base_model_results, trained_model_results
if __name__ == "__main__":
for zero_stage in [1, 2]:
for opt_level in ["O1", "O2", "O3"]:
baseline_not_trained, baseline_trained = train_baseline(zero_stage, opt_level)
accelerator_not_trained, accelerator_trained = train_integration(zero_stage, opt_level)
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
f"ZERO stage {zero_stage}, opt_level={opt_level}:\nAccuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
)
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
f"ZERO stage {zero_stage}, opt_level={opt_level}:\nF1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
)
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
f"ZERO stage {zero_stage}, opt_level={opt_level}:\nAccuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
)
assert baseline_trained["f1"] == accelerator_trained["f1"], (
f"ZERO stage {zero_stage}, opt_level={opt_level}:\nF1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
)
torch.distributed.destroy_process_group()


@ -0,0 +1,118 @@
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import torch
def get_dataloaders(model_name: str, batch_size: int = 16):
from datasets import load_dataset
from torch.utils.data import DataLoader
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)
datasets = load_dataset("glue", "mrpc")
def tokenize_function(examples):
# max_length=None => use the model max length (it's actually the default)
outputs = tokenizer(examples["sentence1"], examples["sentence2"], truncation=True, max_length=None)
return outputs
# Apply the method we just defined to all the examples in all the splits of the dataset
# starting with the main process first:
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
remove_columns=["idx", "sentence1", "sentence2"],
)
# We also rename the 'label' column to 'labels' which is the expected name for labels by the models of the
# transformers library
tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
def collate_fn(examples):
return tokenizer.pad(
examples,
padding="longest",
pad_to_multiple_of=16, # Specific for FP8
return_tensors="pt",
)
# Instantiate dataloaders.
train_dataloader = DataLoader(
tokenized_datasets["train"], shuffle=True, collate_fn=collate_fn, batch_size=batch_size, drop_last=True
)
eval_dataloader = DataLoader(
tokenized_datasets["validation"],
shuffle=False,
collate_fn=collate_fn,
batch_size=16,
drop_last=True,
)
return train_dataloader, eval_dataloader
def get_training_utilities(model_name: str, batch_size: int = 16, accelerator=None):
"""
Returns a tuple of:
- Model
- Optimizer
- Train dataloader (prepared)
- Eval dataloader (prepared)
- LR Scheduler
Suitable for training on the MRPC dataset
"""
from torch.optim import AdamW
from transformers import AutoModelForSequenceClassification, get_linear_schedule_with_warmup
from accelerate import Accelerator
if accelerator is None:
accelerator = Accelerator()
model = AutoModelForSequenceClassification.from_pretrained(model_name)
train_dataloader, eval_dataloader = get_dataloaders(model_name, batch_size)
optimizer = AdamW(model.parameters(), lr=0.0001)
lr_scheduler = get_linear_schedule_with_warmup(
optimizer=optimizer,
num_warmup_steps=100,
num_training_steps=len(train_dataloader) * 2,
)
train_dataloader, eval_dataloader = accelerator.prepare(train_dataloader, eval_dataloader)
return model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
def get_named_parameters(model):
"""
Same thing as `Accelerator.get_named_parameters` Returns a list of the named parameters of the model (extracted
from parallel)
"""
from accelerate.utils import extract_model_from_parallel
model = extract_model_from_parallel(model)
return {n: p for n, p in model.named_parameters()}
def evaluate_model(model, dataloader, metric, accelerator=None):
"Turns model to .eval(), runs dataloader, calculates metric, then turns eval back on"
model.eval()
for step, batch in enumerate(dataloader):
with torch.no_grad():
# W/ MS-AMP, we need to cast while evaluating
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
outputs = model(**batch)
predictions = outputs.logits.argmax(dim=-1)
references = batch["labels"]
if accelerator is not None and accelerator.num_processes > 1:
predictions, references = accelerator.gather_for_metrics((predictions, references))
metric.add_batch(predictions=predictions, references=references)
return metric.compute()
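For orientation, a minimal usage sketch of the helpers above, mirroring the calling pattern of the benchmark scripts in this diff; the file is assumed to be saved as `fp8_utils.py` next to the calling script so it can be imported directly:
```python
# Usage sketch under the assumption that the module above is importable as fp8_utils.
import evaluate

from accelerate import Accelerator
from fp8_utils import evaluate_model, get_training_utilities

metric = evaluate.load("glue", "mrpc")
accelerator = Accelerator()
model, optimizer, train_dl, eval_dl, lr_scheduler = get_training_utilities(
    "bert-base-cased", accelerator=accelerator
)
# Dataloaders come back already prepared; the model still needs to be.
model, optimizer = accelerator.prepare(model, optimizer)

# Baseline metrics before any training step.
baseline_results = evaluate_model(model, eval_dl, metric, accelerator=accelerator)
print(baseline_results)  # e.g. {"accuracy": ..., "f1": ...}
```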


@ -0,0 +1,118 @@
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This script tests to ensure that `accelerate` performs at the same level as raw `MS-AMP`.
This particular script verifies this for single GPU training.
"""
import evaluate
import msamp
import torch
from fp8_utils import evaluate_model, get_training_utilities
from accelerate import Accelerator
from accelerate.state import AcceleratorState
from accelerate.utils import FP8RecipeKwargs, get_grad_scaler, set_seed
MODEL_NAME = "bert-base-cased"
METRIC = evaluate.load("glue", "mrpc")
def train_baseline(opt_level="O2"):
set_seed(42)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(MODEL_NAME)
model, optimizer = msamp.initialize(model, optimizer, opt_level=opt_level)
model.to("cuda")
base_model_results = evaluate_model(model, eval_dataloader, METRIC)
model.train()
scaler = get_grad_scaler()
for batch in train_dataloader:
batch = batch.to("cuda")
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
outputs = model(**batch)
loss = outputs.loss
loss = scaler.scale(loss)
loss.backward()
optimizer.step()
optimizer.zero_grad()
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
def train_integration(opt_level="O2"):
kwargs_handlers = [FP8RecipeKwargs(backend="msamp", opt_level=opt_level)]
AcceleratorState()._reset_state(True)
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=kwargs_handlers)
set_seed(42)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
MODEL_NAME, accelerator=accelerator
)
model, optimizer, lr_scheduler = accelerator.prepare(model, optimizer, lr_scheduler)
base_model_results = evaluate_model(model, eval_dataloader, METRIC)
model.train()
for batch in train_dataloader:
outputs = model(**batch)
loss = outputs.loss
accelerator.backward(loss)
optimizer.step()
optimizer.zero_grad()
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
if __name__ == "__main__":
for opt_level in ["O1", "O2"]:
baseline_not_trained, baseline_trained = train_baseline(opt_level)
accelerator_not_trained, accelerator_trained = train_integration(opt_level)
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
)
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
)
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
)
assert baseline_trained["f1"] == accelerator_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
)


@ -0,0 +1,12 @@
FROM nvcr.io/nvidia/pytorch:24.07-py3
RUN pip install transformers evaluate datasets
RUN git clone https://github.com/huggingface/accelerate.git
RUN cd accelerate && \
pip install -e . && \
cd benchmarks/fp8
RUN /bin/bash


@ -0,0 +1,32 @@
# FP8 Benchmarks
Comparing and running [torchao](https://github.com/pytorch/ao/tree/main/torchao/float8) FP8 with accelerate
## Overview
This repo provides scripts which compare native `torchao` model training against `accelerate`'s own integration. Each modeling type is segmented out via a script, supporting the following:
* Single GPU training (`non_distributed.py`)
* Multi-GPU training via DistributedDataParallelism (`ddp.py`)
* Fully Sharded Data Parallelism (`fsdp.py`)
* DeepSpeed ZeRO 1-3 (`deepspeed.py`)
To run them, it's recommended to use a docker image (see the attached `Dockerfile`) and not install `torchao` manually.
## Running:
There are official Docker images located at `huggingface/accelerate:gpu-fp8-torchao-nightly` which can be used.
You can run all scripts using the core `accelerate launch` command without any `accelerate config` being needed.
For single GPU, run it via `python`:
```bash
python non_distributed.py
```
For the rest, run it via `accelerate launch`:
```bash
accelerate launch ddp.py # or fsdp.py, deepspeed.py
```
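For context on what "accelerate's own integration" refers to here: each of the scripts below enables FP8 the same way, via `mixed_precision="fp8"` plus `AORecipeKwargs`. A minimal sketch of that setup, with the model, optimizer, dataloader, and scheduler assumed to come from the usual `fp8_utils` helpers:
```python
# Sketch of the accelerate-side FP8 setup used by the scripts in this folder;
# the loop body follows the same shape as the DDP/FSDP/DeepSpeed scripts below.
from accelerate import Accelerator
from accelerate.utils import AORecipeKwargs


def train_with_accelerate_fp8(model, optimizer, train_dataloader, lr_scheduler):
    accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=[AORecipeKwargs()])
    model, optimizer = accelerator.prepare(model, optimizer)
    model.train()
    for batch in train_dataloader:
        loss = model(**batch).loss
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
        lr_scheduler.step()
    return model
```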


@ -0,0 +1,158 @@
# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This script tests to ensure that `accelerate` performs at the same level as raw `torchao`.
This particular script verifies this for DDP training.
"""
from functools import partial
import evaluate
import torch
from fp8_utils import get_training_utilities
from torch.nn.parallel import DistributedDataParallel as DDP
from torchao.float8 import convert_to_float8_training
from accelerate import Accelerator
from accelerate.state import AcceleratorState
from accelerate.utils import AORecipeKwargs, set_seed
MODEL_NAME = "bert-base-cased"
METRIC = evaluate.load("glue", "mrpc")
def evaluate_model(model, dataloader, metric, accelerator=None):
"Turns model to .eval(), runs dataloader, calculates metric, then turns eval back on"
model.eval()
for step, batch in enumerate(dataloader):
with torch.no_grad():
outputs = model(**batch)
predictions = outputs.logits.argmax(dim=-1)
references = batch["labels"]
if accelerator is not None and accelerator.num_processes > 1:
predictions, references = accelerator.gather_for_metrics((predictions, references))
metric.add_batch(predictions=predictions, references=references)
return metric.compute()
def filter_linear_layers(module, fqn, first_layer_name=None, last_layer_name=None):
if isinstance(module, torch.nn.Linear):
if module.in_features % 16 != 0 or module.out_features % 16 != 0:
return False
# For stability reasons, we skip the first and last linear layers
# Otherwise can lead to the model not training or converging properly
if fqn in (first_layer_name, last_layer_name):
return False
return True
def train_baseline():
set_seed(42)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(MODEL_NAME)
first_linear = None
last_linear = None
for name, module in model.named_modules():
if isinstance(module, torch.nn.Linear):
if first_linear is None:
first_linear = name
last_linear = name
func = partial(filter_linear_layers, first_layer_name=first_linear, last_layer_name=last_linear)
accelerator = Accelerator()
device = accelerator.device
model.to(device)
convert_to_float8_training(model, module_filter_fn=func)
# Convert the model to DDP
device_ids, output_device = [accelerator.local_process_index], accelerator.local_process_index
model = DDP(model, device_ids=device_ids, output_device=output_device)
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.train()
for batch in train_dataloader:
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
batch = batch.to(device)
outputs = model(**batch)
loss = outputs.loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
def train_integration():
AcceleratorState()._reset_state(True)
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=[AORecipeKwargs()])
set_seed(42)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
MODEL_NAME, accelerator=accelerator
)
model, optimizer = accelerator.prepare(model, optimizer)
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.train()
for batch in train_dataloader:
outputs = model(**batch)
loss = outputs.loss
accelerator.backward(loss)
optimizer.step()
optimizer.zero_grad()
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
if __name__ == "__main__":
baseline_not_trained, baseline_trained = train_baseline()
accelerator_not_trained, accelerator_trained = train_integration()
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
)
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
)
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
)
assert baseline_trained["f1"] == accelerator_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
)
torch.distributed.destroy_process_group()


@ -0,0 +1,213 @@
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This script tests to ensure that `accelerate` performs at the same level as raw `torchao`.
This particular script verifies this for deepspeed training.
"""
from functools import partial
from unittest.mock import patch
import deepspeed
import evaluate
import torch
from fp8_utils import evaluate_model, get_training_utilities
from torchao.float8 import convert_to_float8_training
from transformers.integrations import HfDeepSpeedConfig
from accelerate import Accelerator, DeepSpeedPlugin
from accelerate.state import AcceleratorState
from accelerate.utils import AORecipeKwargs, set_seed
MODEL_NAME = "bert-base-cased"
METRIC = evaluate.load("glue", "mrpc")
def filter_linear_layers(module, fqn, first_layer_name=None, last_layer_name=None):
if isinstance(module, torch.nn.Linear):
if module.in_features % 16 != 0 or module.out_features % 16 != 0:
return False
# For stability reasons, we skip the first and last linear layers
# Otherwise can lead to the model not training or converging properly
if fqn in (first_layer_name, last_layer_name):
return False
return True
def train_baseline(zero_stage: int = 1):
set_seed(42)
# This forces transformers to think Zero-3 Init should be used
with patch("transformers.integrations.deepspeed.is_deepspeed_zero3_enabled") as mock:
mock.return_value = zero_stage == 3
config = HfDeepSpeedConfig(
{
"train_micro_batch_size_per_gpu": 16,
"gradient_accumulation_steps": 1,
"zero_optimization": {"stage": zero_stage},
}
)
plugin = DeepSpeedPlugin(hf_ds_config=config)
accelerator = Accelerator(deepspeed_plugin=plugin)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
MODEL_NAME, accelerator=accelerator
)
first_linear = None
last_linear = None
for name, module in model.named_modules():
if isinstance(module, torch.nn.Linear):
if first_linear is None:
first_linear = name
last_linear = name
func = partial(filter_linear_layers, first_layer_name=first_linear, last_layer_name=last_linear)
convert_to_float8_training(model, module_filter_fn=func)
import numpy as np
config = {
"train_batch_size": 32,
"train_micro_batch_size_per_gpu": 16,
"gradient_accumulation_steps": 1,
"zero_optimization": {
"stage": zero_stage,
"offload_optimizer": {"device": "none", "nvme_path": None},
"offload_param": {"device": "none", "nvme_path": None},
"stage3_gather_16bit_weights_on_model_save": False,
},
"gradient_clipping": 1.0,
"steps_per_print": np.inf,
"bf16": {"enabled": True},
"fp16": {"enabled": False},
"zero_allow_untested_optimizer": True,
}
(
model,
optimizer,
_,
lr_scheduler,
) = deepspeed.initialize(
model=model,
optimizer=optimizer,
lr_scheduler=lr_scheduler,
config_params=config,
)
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.train()
model_outputs = []
data = []
for batch in train_dataloader:
outputs = model(**batch)
data.append(batch.to("cpu"))
model_outputs.append(outputs.logits.to("cpu"))
loss = outputs.loss
model.backward(loss)
model.step()
for _ in range(accelerator.num_processes):
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.destroy()
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
del config
return base_model_results, trained_model_results, model_outputs, data
def train_integration(zero_stage: int = 1):
set_seed(42)
AcceleratorState()._reset_state(True)
config = HfDeepSpeedConfig(
{
"train_micro_batch_size_per_gpu": 16,
"gradient_accumulation_steps": 1,
"zero_optimization": {"stage": zero_stage},
}
)
deepspeed_plugin = DeepSpeedPlugin(
hf_ds_config=config,
)
# This forces transformers to think Zero-3 Init should be used
with patch("transformers.integrations.deepspeed.is_deepspeed_zero3_enabled") as mock:
mock.return_value = zero_stage == 3
accelerator = Accelerator(
mixed_precision="fp8", kwargs_handlers=[AORecipeKwargs()], deepspeed_plugin=deepspeed_plugin
)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
MODEL_NAME, accelerator=accelerator
)
model, optimizer, lr_scheduler, train_dataloader, eval_dataloader = accelerator.prepare(
model, optimizer, lr_scheduler, train_dataloader, eval_dataloader
)
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.train()
model_outputs = []
data = []
for batch in train_dataloader:
outputs = model(**batch)
data.append(batch.to("cpu"))
model_outputs.append(outputs.logits.to("cpu"))
loss = outputs.loss
accelerator.backward(loss)
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.destroy()
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
del config
return base_model_results, trained_model_results, model_outputs, data
if __name__ == "__main__":
for zero_stage in [1, 2, 3]:
baseline_not_trained, baseline_trained, baseline_outputs, baseline_data = train_baseline(zero_stage)
accelerator_not_trained, accelerator_trained, accelerator_outputs, accelerator_data = train_integration(
zero_stage
)
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
f"ZERO stage {zero_stage}: Accuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
)
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
f"ZERO stage {zero_stage}: F1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
)
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
f"ZERO stage {zero_stage}: Accuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
)
assert baseline_trained["f1"] == accelerator_trained["f1"], (
f"ZERO stage {zero_stage}: F1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
)
AcceleratorState()._reset_state(True)
torch.distributed.destroy_process_group()


@ -0,0 +1,116 @@
# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import torch
def get_dataloaders(model_name: str, batch_size: int = 16):
from datasets import load_dataset
from torch.utils.data import DataLoader
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)
datasets = load_dataset("glue", "mrpc")
def tokenize_function(examples):
# max_length=None => use the model max length (it's actually the default)
outputs = tokenizer(examples["sentence1"], examples["sentence2"], truncation=True, max_length=None)
return outputs
# Apply the method we just defined to all the examples in all the splits of the dataset
# starting with the main process first:
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
remove_columns=["idx", "sentence1", "sentence2"],
)
# We also rename the 'label' column to 'labels' which is the expected name for labels by the models of the
# transformers library
tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
def collate_fn(examples):
return tokenizer.pad(
examples,
padding="longest",
pad_to_multiple_of=16, # Specific for FP8
return_tensors="pt",
)
# Instantiate dataloaders.
train_dataloader = DataLoader(
tokenized_datasets["train"], shuffle=True, collate_fn=collate_fn, batch_size=batch_size, drop_last=True
)
eval_dataloader = DataLoader(
tokenized_datasets["validation"],
shuffle=False,
collate_fn=collate_fn,
batch_size=16,
drop_last=True,
)
return train_dataloader, eval_dataloader
def get_training_utilities(model_name: str, batch_size: int = 16, accelerator=None, prepare=True):
"""
Returns a tuple of:
- Model
- Optimizer
- Train dataloader (prepared)
- Eval dataloader (prepared)
- LR Scheduler
Suitable for training on the MRPC dataset
"""
from torch.optim import AdamW
from transformers import AutoModelForSequenceClassification, get_linear_schedule_with_warmup
from accelerate import Accelerator
if accelerator is None:
accelerator = Accelerator()
model = AutoModelForSequenceClassification.from_pretrained(model_name)
train_dataloader, eval_dataloader = get_dataloaders(model_name, batch_size)
optimizer = AdamW(model.parameters(), lr=0.0001)
lr_scheduler = get_linear_schedule_with_warmup(
optimizer=optimizer,
num_warmup_steps=100,
num_training_steps=len(train_dataloader) * 2,
)
train_dataloader, eval_dataloader = accelerator.prepare(train_dataloader, eval_dataloader)
return model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
def get_named_parameters(model):
"""
Same thing as `Accelerator.get_named_parameters` Returns a list of the named parameters of the model (extracted
from parallel)
"""
from accelerate.utils import extract_model_from_parallel
model = extract_model_from_parallel(model)
return {n: p for n, p in model.named_parameters()}
def evaluate_model(model, dataloader, metric, accelerator=None):
"Turns model to .eval(), runs dataloader, calculates metric, then turns eval back on"
model.eval()
for step, batch in enumerate(dataloader):
with torch.no_grad():
outputs = model(**batch)
predictions = outputs.logits.argmax(dim=-1)
references = batch["labels"]
if accelerator is not None and accelerator.num_processes > 1:
predictions, references = accelerator.gather_for_metrics((predictions, references))
metric.add_batch(predictions=predictions, references=references)
return metric.compute()

View File

@@ -0,0 +1,173 @@
# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This script tests to ensure that `accelerate` performs at the same level as raw `torchao`.
This particular script verifies this for FSDP training.
"""
from functools import partial
import evaluate
import torch
from fp8_utils import get_training_utilities
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import MixedPrecision
from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy
from torchao.float8 import convert_to_float8_training
from transformers.models.bert import BertLayer
from accelerate import Accelerator
from accelerate import FullyShardedDataParallelPlugin as FSDPPlugin
from accelerate.state import AcceleratorState
from accelerate.utils import AORecipeKwargs, set_seed
MODEL_NAME = "bert-base-cased"
METRIC = evaluate.load("glue", "mrpc")
FSDP_WRAP_POLICY = partial(transformer_auto_wrap_policy, transformer_layer_cls={BertLayer})
def filter_linear_layers(module, fqn, first_layer_name=None, last_layer_name=None):
if isinstance(module, torch.nn.Linear):
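# FP8 matmuls require dimensions divisible by 16, so skip layers that don't satisfy this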
if module.in_features % 16 != 0 or module.out_features % 16 != 0:
return False
# For stability reasons, we skip the first and last linear layers
# Otherwise can lead to the model not training or converging properly
if fqn in (first_layer_name, last_layer_name):
return False
return True
def evaluate_model(model, dataloader, metric, accelerator=None):
"Turns model to .eval(), runs dataloader, calculates metric, then turns eval back on"
model.eval()
for step, batch in enumerate(dataloader):
with torch.no_grad():
outputs = model(**batch)
predictions = outputs.logits.argmax(dim=-1)
references = batch["labels"]
if accelerator is not None and accelerator.num_processes > 1:
predictions, references = accelerator.gather_for_metrics((predictions, references))
metric.add_batch(predictions=predictions, references=references)
return metric.compute()
def train_baseline():
set_seed(42)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(MODEL_NAME)
first_linear = None
last_linear = None
for name, module in model.named_modules():
if isinstance(module, torch.nn.Linear):
if first_linear is None:
first_linear = name
last_linear = name
func = partial(filter_linear_layers, first_layer_name=first_linear, last_layer_name=last_linear)
accelerator = Accelerator()
device = accelerator.device
model.to(device)
convert_to_float8_training(model, module_filter_fn=func)
# Convert the model to FSDP
model = FSDP(
model,
use_orig_params=True,
mixed_precision=MixedPrecision(param_dtype=torch.bfloat16, reduce_dtype=torch.float32),
auto_wrap_policy=FSDP_WRAP_POLICY,
)
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.train()
for batch in train_dataloader:
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
batch = batch.to(device)
outputs = model(**batch)
loss = outputs.loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
def train_integration():
AcceleratorState()._reset_state(True)
fsdp_plugin = FSDPPlugin(
auto_wrap_policy=FSDP_WRAP_POLICY,
use_orig_params=True,
mixed_precision_policy=MixedPrecision(param_dtype=torch.bfloat16, reduce_dtype=torch.float32),
)
accelerator = Accelerator(mixed_precision="fp8", fsdp_plugin=fsdp_plugin, kwargs_handlers=[AORecipeKwargs()])
set_seed(42)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
MODEL_NAME, accelerator=accelerator
)
model, optimizer = accelerator.prepare(model, optimizer)
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.train()
for batch in train_dataloader:
outputs = model(**batch)
loss = outputs.loss
accelerator.backward(loss)
optimizer.step()
optimizer.zero_grad()
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
if __name__ == "__main__":
baseline_not_trained, baseline_trained = train_baseline()
accelerator_not_trained, accelerator_trained = train_integration()
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
)
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
)
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
)
assert baseline_trained["f1"] == accelerator_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
)
torch.distributed.destroy_process_group()

View File

@@ -0,0 +1,145 @@
# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This script tests to ensure that `accelerate` performs at the same level as raw `torchao`.
This particular script verifies this for single GPU training.
"""
from functools import partial
import evaluate
import torch
from fp8_utils import get_training_utilities
from torchao.float8 import convert_to_float8_training
from accelerate import Accelerator
from accelerate.state import AcceleratorState
from accelerate.utils import AORecipeKwargs, set_seed
MODEL_NAME = "bert-base-cased"
METRIC = evaluate.load("glue", "mrpc")
def evaluate_model(model, dataloader, metric, accelerator=None):
"Turns model to .eval(), runs dataloader, calculates metric, then turns eval back on"
model.eval()
for step, batch in enumerate(dataloader):
with torch.no_grad():
outputs = model(**batch)
predictions = outputs.logits.argmax(dim=-1)
references = batch["labels"]
if accelerator is not None and accelerator.num_processes > 1:
predictions, references = accelerator.gather_for_metrics((predictions, references))
metric.add_batch(predictions=predictions, references=references)
return metric.compute()
def filter_linear_layers(module, fqn, first_layer_name=None, last_layer_name=None):
if isinstance(module, torch.nn.Linear):
if module.in_features % 16 != 0 or module.out_features % 16 != 0:
return False
# For stability reasons, we skip the first and last linear layers
# Otherwise can lead to the model not training or converging properly
if fqn in (first_layer_name, last_layer_name):
return False
return True
def train_baseline():
set_seed(42)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(MODEL_NAME)
first_linear = None
last_linear = None
for name, module in model.named_modules():
if isinstance(module, torch.nn.Linear):
if first_linear is None:
first_linear = name
last_linear = name
func = partial(filter_linear_layers, first_layer_name=first_linear, last_layer_name=last_linear)
model.to("cuda")
convert_to_float8_training(model, module_filter_fn=func)
base_model_results = evaluate_model(model, eval_dataloader, METRIC)
model.train()
for batch in train_dataloader:
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
outputs = model(**batch)
loss = outputs.loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
def train_integration():
set_seed(42)
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=[AORecipeKwargs()])
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
MODEL_NAME, accelerator=accelerator
)
model = accelerator.prepare(model)
base_model_results = evaluate_model(model, eval_dataloader, METRIC)
model.train()
for batch in train_dataloader:
outputs = model(**batch)
loss = outputs.loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
if __name__ == "__main__":
baseline_not_trained, baseline_trained = train_baseline()
AcceleratorState._reset_state(True)
accelerator_not_trained, accelerator_trained = train_integration()
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
)
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
)
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
)
assert baseline_trained["f1"] == accelerator_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
)

View File

@@ -0,0 +1,15 @@
ARG BASE_YEAR=25
ARG BASE_MONTH=03
FROM nvcr.io/nvidia/pytorch:${BASE_YEAR}.${BASE_MONTH}-py3
RUN pip install transformers evaluate datasets
RUN git clone https://github.com/huggingface/accelerate.git
RUN cd accelerate && \
    pip install -e .[deepspeed]
WORKDIR accelerate/benchmarks/fp8
CMD ["/bin/bash"]

View File

@@ -0,0 +1,32 @@
# FP8 Benchmarks
Comparing and running [TransformerEngine](https://github.com/NVIDIA/TransformerEngine) FP8 with accelerate
## Overview
This repo provides scripts which compare native TransformerEngine model training against `accelerate`'s own integration. Each training configuration has its own script, covering the following:
* Single GPU training (`non_distributed.py`)
* Multi-GPU training via DistributedDataParallel (`ddp.py`)
* Fully Sharded Data Parallelism (`fsdp.py`)
* DeepSpeed ZeRO 1-3 (`deepspeed.py`)
To run them, it's recommended to use a docker image (see the attached `Dockerfile`) and not install `TransformerEngine` manually.
## Running:
There are official Docker images located at `huggingface/accelerate:gpu-fp8-transformerengine-nightly` which can be used.
You can run all scripts using the core `accelerate launch` command without any `accelerate config` being needed.
For single GPU, run it via `python`:
```bash
python non_distributed.py
```
For the rest, run it via `accelerate launch`:
```bash
accelerate launch ddp.py # or fsdp.py, deepspeed.py
```
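For context, the `accelerate` side of these comparisons boils down to roughly the following user code (a minimal sketch, not one of the benchmark scripts; the toy model, data, and hyperparameters are placeholders):
```python
# Minimal sketch: enabling the TransformerEngine FP8 backend through accelerate.
import torch
from accelerate import Accelerator
from accelerate.utils import FP8RecipeKwargs

kwargs_handlers = [FP8RecipeKwargs(backend="TE", fp8_format="HYBRID", amax_history_len=32, amax_compute_algo="max")]
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=kwargs_handlers)

# Placeholder model and optimizer; the benchmark scripts use BERT on GLUE/MRPC instead.
model = torch.nn.Sequential(torch.nn.Linear(128, 128), torch.nn.ReLU(), torch.nn.Linear(128, 128))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
model, optimizer = accelerator.prepare(model, optimizer)

inputs = torch.randn(16, 128, device=accelerator.device)
loss = model(inputs).pow(2).mean()
accelerator.backward(loss)
optimizer.step()
optimizer.zero_grad()
```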

View File

@@ -0,0 +1,144 @@
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This script tests to ensure that `accelerate` performs at the same level as raw `TransformerEngine`.
This particular script verifies this for DDP training.
"""
import evaluate
import torch
import transformer_engine.common.recipe as te_recipe
import transformer_engine.pytorch as te
from fp8_utils import evaluate_model, get_named_parameters, get_training_utilities
from torch.nn.parallel import DistributedDataParallel as DDP
from transformer_engine.common.recipe import DelayedScaling
from accelerate import Accelerator
from accelerate.state import AcceleratorState
from accelerate.utils import FP8RecipeKwargs, set_seed
from accelerate.utils.transformer_engine import convert_model
MODEL_NAME = "bert-base-cased"
METRIC = evaluate.load("glue", "mrpc")
def train_baseline():
set_seed(42)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(MODEL_NAME)
accelerator = Accelerator()
device = accelerator.device
model.to(device)
# Convert the model to TE
old_named_params = get_named_parameters(model)
with torch.no_grad():
convert_model(model)
FP8_RECIPE_KWARGS = {"fp8_format": te_recipe.Format.HYBRID, "amax_history_len": 32, "amax_compute_algo": "max"}
fp8_recipe = DelayedScaling(**FP8_RECIPE_KWARGS)
new_named_params = get_named_parameters(model)
# Convert the model to DDP
device_ids, output_device = [accelerator.local_process_index], accelerator.local_process_index
model = DDP(model, device_ids=device_ids, output_device=output_device)
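# `convert_model` created new parameter objects, so the optimizer's param_groups are remapped to them below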
mapping = {p: new_named_params[n] for n, p in old_named_params.items()}
for param_group in optimizer.param_groups:
param_group["params"] = [mapping[p] for p in param_group["params"]]
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.train()
for _ in range(2):
for batch in train_dataloader:
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
batch = batch.to(device)
outputs = model(**batch)
loss = outputs.loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
def train_integration():
FP8_RECIPE_KWARGS = {"fp8_format": "HYBRID", "amax_history_len": 32, "amax_compute_algo": "max"}
kwargs_handlers = [FP8RecipeKwargs(backend="TE", **FP8_RECIPE_KWARGS)]
AcceleratorState()._reset_state(True)
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=kwargs_handlers)
set_seed(42)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
MODEL_NAME, accelerator=accelerator
)
model, optimizer = accelerator.prepare(model, optimizer)
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.train()
for _ in range(2):
for batch in train_dataloader:
outputs = model(**batch)
loss = outputs.loss
accelerator.backward(loss)
optimizer.step()
optimizer.zero_grad()
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
if __name__ == "__main__":
baseline_not_trained, baseline_trained = train_baseline()
accelerator_not_trained, accelerator_trained = train_integration()
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
)
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
)
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
)
assert baseline_trained["f1"] == accelerator_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
)
torch.distributed.destroy_process_group()

View File

@@ -0,0 +1,191 @@
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This script tests to ensure that `accelerate` performs at the same level as raw `TransformerEngine`.
This particular script verifies this for DeepSpeed (ZeRO stages 1-3) training.
"""
from unittest.mock import patch
import deepspeed
import evaluate
import torch
import transformer_engine.common.recipe as te_recipe
import transformer_engine.pytorch as te
from fp8_utils import evaluate_model, get_named_parameters, get_training_utilities
from transformer_engine.common.recipe import DelayedScaling
from accelerate import Accelerator, DeepSpeedPlugin
from accelerate.state import AcceleratorState
from accelerate.utils import FP8RecipeKwargs, set_seed
from accelerate.utils.transformer_engine import convert_model
MODEL_NAME = "bert-base-cased"
METRIC = evaluate.load("glue", "mrpc")
def train_baseline(zero_stage: int = 1):
# This forces transformers to think Zero-3 Init should be used
with patch("transformers.integrations.deepspeed.is_deepspeed_zero3_enabled") as mock:
mock.return_value = zero_stage == 3
set_seed(42)
accelerator = Accelerator()
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
MODEL_NAME, accelerator=accelerator
)
# Convert the model to TE
old_named_params = get_named_parameters(model)
with torch.no_grad():
convert_model(model)
new_named_params = get_named_parameters(model)
mapping = {p: new_named_params[n] for n, p in old_named_params.items()}
for param_group in optimizer.param_groups:
param_group["params"] = [mapping[p] for p in param_group["params"]]
FP8_RECIPE_KWARGS = {"fp8_format": te_recipe.Format.HYBRID, "amax_history_len": 32, "amax_compute_algo": "max"}
fp8_recipe = DelayedScaling(**FP8_RECIPE_KWARGS)
import numpy as np
config = {
"train_batch_size": 16,
"train_micro_batch_size_per_gpu": 16,
"gradient_accumulation_steps": 1,
"zero_optimization": {
"stage": zero_stage,
"offload_optimizer": {"device": "none", "nvme_path": None},
"offload_param": {"device": "none", "nvme_path": None},
"stage3_gather_16bit_weights_on_model_save": False,
},
"gradient_clipping": 1.0,
"steps_per_print": np.inf,
"bf16": {"enabled": True},
"fp16": {"enabled": False},
"zero_allow_untested_optimizer": True,
}
(
model,
optimizer,
_,
_,
) = deepspeed.initialize(
model=model,
optimizer=optimizer,
config_params=config,
)
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.train()
model_outputs = []
data = []
for _ in range(2):
for batch in train_dataloader:
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
outputs = model(**batch)
data.append(batch.to("cpu"))
model_outputs.append(outputs.logits.to("cpu"))
loss = outputs.loss
model.backward(loss)
model.step()
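# The integration's accelerate-prepared scheduler advances once per process for each optimizer step, so the baseline mirrors that here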
for _ in range(accelerator.num_processes):
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.destroy()
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results, model_outputs, data
def train_integration(zero_stage: int = 1):
set_seed(42)
FP8_RECIPE_KWARGS = {"fp8_format": "HYBRID", "amax_history_len": 32, "amax_compute_algo": "max"}
kwargs_handlers = [FP8RecipeKwargs(backend="TE", **FP8_RECIPE_KWARGS)]
AcceleratorState()._reset_state(True)
deepspeed_plugin = DeepSpeedPlugin(
zero_stage=zero_stage,
zero3_init_flag=zero_stage == 3,
)
accelerator = Accelerator(
mixed_precision="fp8", kwargs_handlers=kwargs_handlers, deepspeed_plugin=deepspeed_plugin
)
accelerator.state.deepspeed_plugin.deepspeed_config["train_micro_batch_size_per_gpu"] = 16
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
MODEL_NAME, accelerator=accelerator
)
model, optimizer, lr_scheduler = accelerator.prepare(model, optimizer, lr_scheduler)
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.train()
model_outputs = []
data = []
for _ in range(2):
for batch in train_dataloader:
outputs = model(**batch)
data.append(batch.to("cpu"))
model_outputs.append(outputs.logits.to("cpu"))
loss = outputs.loss
accelerator.backward(loss)
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.destroy()
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results, model_outputs, data
if __name__ == "__main__":
for zero_stage in [1, 2, 3]:
baseline_not_trained, baseline_trained, baseline_outputs, baseline_data = train_baseline(zero_stage)
accelerator_not_trained, accelerator_trained, accelerator_outputs, accelerator_data = train_integration(
zero_stage
)
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
f"ZERO stage {zero_stage}: Accuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
)
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
f"ZERO stage {zero_stage}: F1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
)
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
f"ZERO stage {zero_stage}: Accuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
)
assert baseline_trained["f1"] == accelerator_trained["f1"], (
f"ZERO stage {zero_stage}: F1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
)
torch.distributed.destroy_process_group()

View File

@@ -0,0 +1,116 @@
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import torch
def get_dataloaders(model_name: str, batch_size: int = 16):
from datasets import load_dataset
from torch.utils.data import DataLoader
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)
datasets = load_dataset("glue", "mrpc")
def tokenize_function(examples):
# max_length=None => use the model max length (it's actually the default)
outputs = tokenizer(examples["sentence1"], examples["sentence2"], truncation=True, max_length=None)
return outputs
# Apply the method we just defined to all the examples in all the splits of the dataset
# starting with the main process first:
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
remove_columns=["idx", "sentence1", "sentence2"],
)
# We also rename the 'label' column to 'labels' which is the expected name for labels by the models of the
# transformers library
tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
def collate_fn(examples):
return tokenizer.pad(
examples,
padding="longest",
pad_to_multiple_of=16, # Specific for FP8
return_tensors="pt",
)
# Instantiate dataloaders.
train_dataloader = DataLoader(
tokenized_datasets["train"], shuffle=True, collate_fn=collate_fn, batch_size=batch_size, drop_last=True
)
eval_dataloader = DataLoader(
tokenized_datasets["validation"],
shuffle=False,
collate_fn=collate_fn,
batch_size=16,
drop_last=True,
)
return train_dataloader, eval_dataloader
def get_training_utilities(model_name: str, batch_size: int = 16, accelerator=None):
"""
Returns a tuple of:
- Model
- Optimizer
- Train dataloader (prepared)
- Eval dataloader (prepared)
- LR Scheduler
Suitable for training on the MRPC dataset
"""
from torch.optim import AdamW
from transformers import AutoModelForSequenceClassification, get_linear_schedule_with_warmup
from accelerate import Accelerator
if accelerator is None:
accelerator = Accelerator()
model = AutoModelForSequenceClassification.from_pretrained(model_name)
train_dataloader, eval_dataloader = get_dataloaders(model_name, batch_size)
optimizer = AdamW(model.parameters(), lr=0.0001)
lr_scheduler = get_linear_schedule_with_warmup(
optimizer=optimizer,
num_warmup_steps=100,
num_training_steps=len(train_dataloader) * 2,
)
train_dataloader, eval_dataloader = accelerator.prepare(train_dataloader, eval_dataloader)
return model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
def get_named_parameters(model):
"""
Same thing as `Accelerator.get_named_parameters`. Returns a dict of the named parameters of the model
(after extracting it from any parallel wrapper).
"""
from accelerate.utils import extract_model_from_parallel
model = extract_model_from_parallel(model)
return {n: p for n, p in model.named_parameters()}
def evaluate_model(model, dataloader, metric, accelerator=None):
"Turns model to .eval(), runs dataloader, calculates metric, then turns eval back on"
model.eval()
for step, batch in enumerate(dataloader):
with torch.no_grad():
outputs = model(**batch)
predictions = outputs.logits.argmax(dim=-1)
references = batch["labels"]
if accelerator is not None and accelerator.num_processes > 1:
predictions, references = accelerator.gather_for_metrics((predictions, references))
metric.add_batch(predictions=predictions, references=references)
return metric.compute()

View File

@@ -0,0 +1,161 @@
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This script tests to ensure that `accelerate` performs at the same level as raw `TransformerEngine`.
This particular script verifies this for FSDP training.
"""
from functools import partial
import evaluate
import torch
import transformer_engine.common.recipe as te_recipe
import transformer_engine.pytorch as te
from fp8_utils import evaluate_model, get_named_parameters, get_training_utilities
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import MixedPrecision
from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy
from transformer_engine.common.recipe import DelayedScaling
from transformers.models.bert import BertLayer
from accelerate import Accelerator
from accelerate import FullyShardedDataParallelPlugin as FSDPPlugin
from accelerate.state import AcceleratorState
from accelerate.utils import FP8RecipeKwargs, set_seed
from accelerate.utils.transformer_engine import convert_model
MODEL_NAME = "bert-base-cased"
METRIC = evaluate.load("glue", "mrpc")
FSDP_WRAP_POLICY = partial(transformer_auto_wrap_policy, transformer_layer_cls={BertLayer})
def train_baseline():
set_seed(42)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(MODEL_NAME)
accelerator = Accelerator()
device = accelerator.device
model.to(device)
# Convert the model to TE
old_named_params = get_named_parameters(model)
with torch.no_grad():
convert_model(model)
FP8_RECIPE_KWARGS = {"fp8_format": te_recipe.Format.HYBRID, "amax_history_len": 32, "amax_compute_algo": "max"}
fp8_recipe = DelayedScaling(**FP8_RECIPE_KWARGS)
new_named_params = get_named_parameters(model)
# Convert the model to FSDP
model = FSDP(
model,
use_orig_params=True,
mixed_precision=MixedPrecision(param_dtype=torch.bfloat16, reduce_dtype=torch.float32),
auto_wrap_policy=FSDP_WRAP_POLICY,
)
mapping = {p: new_named_params[n] for n, p in old_named_params.items()}
for param_group in optimizer.param_groups:
param_group["params"] = [mapping[p] for p in param_group["params"]]
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.train()
for _ in range(2):
for batch in train_dataloader:
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
batch = batch.to(device)
outputs = model(**batch)
loss = outputs.loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
def train_integration():
FP8_RECIPE_KWARGS = {"fp8_format": "HYBRID", "amax_history_len": 32, "amax_compute_algo": "max"}
kwargs_handlers = [FP8RecipeKwargs(backend="TE", **FP8_RECIPE_KWARGS)]
AcceleratorState()._reset_state(True)
fsdp_plugin = FSDPPlugin(
auto_wrap_policy=FSDP_WRAP_POLICY,
use_orig_params=True,
mixed_precision_policy=MixedPrecision(param_dtype=torch.bfloat16, reduce_dtype=torch.float32),
)
accelerator = Accelerator(mixed_precision="fp8", fsdp_plugin=fsdp_plugin, kwargs_handlers=kwargs_handlers)
set_seed(42)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
MODEL_NAME, accelerator=accelerator
)
model, optimizer = accelerator.prepare(model, optimizer)
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.train()
for _ in range(2):
for batch in train_dataloader:
outputs = model(**batch)
loss = outputs.loss
accelerator.backward(loss)
optimizer.step()
optimizer.zero_grad()
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
if __name__ == "__main__":
baseline_not_trained, baseline_trained = train_baseline()
accelerator_not_trained, accelerator_trained = train_integration()
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
)
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
)
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
)
assert baseline_trained["f1"] == accelerator_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
)
torch.distributed.destroy_process_group()

View File

@@ -0,0 +1,132 @@
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This script tests to ensure that `accelerate` performs at the same level as raw `TransformerEngine`.
This particular script verifies this for single GPU training.
"""
import evaluate
import torch
import transformer_engine.common.recipe as te_recipe
import transformer_engine.pytorch as te
from fp8_utils import evaluate_model, get_named_parameters, get_training_utilities
from transformer_engine.common.recipe import DelayedScaling
from accelerate import Accelerator
from accelerate.state import AcceleratorState
from accelerate.utils import FP8RecipeKwargs, set_seed
from accelerate.utils.transformer_engine import convert_model
MODEL_NAME = "bert-base-cased"
METRIC = evaluate.load("glue", "mrpc")
def train_baseline():
set_seed(42)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(MODEL_NAME)
# Convert the model to TE
old_named_params = get_named_parameters(model)
with torch.no_grad():
convert_model(model)
new_named_params = get_named_parameters(model)
mapping = {p: new_named_params[n] for n, p in old_named_params.items()}
for param_group in optimizer.param_groups:
param_group["params"] = [mapping[p] for p in param_group["params"]]
FP8_RECIPE_KWARGS = {"fp8_format": te_recipe.Format.HYBRID, "amax_history_len": 32, "amax_compute_algo": "max"}
fp8_recipe = DelayedScaling(**FP8_RECIPE_KWARGS)
model.to("cuda")
base_model_results = evaluate_model(model, eval_dataloader, METRIC)
model.train()
for batch in train_dataloader:
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
batch = batch.to("cuda")
outputs = model(**batch)
loss = outputs.loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
def train_integration():
FP8_RECIPE_KWARGS = {"fp8_format": "HYBRID", "amax_history_len": 32, "amax_compute_algo": "max"}
kwargs_handlers = [FP8RecipeKwargs(backend="TE", **FP8_RECIPE_KWARGS)]
AcceleratorState()._reset_state(True)
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=kwargs_handlers)
set_seed(42)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
MODEL_NAME, accelerator=accelerator
)
model, optimizer, lr_scheduler = accelerator.prepare(model, optimizer, lr_scheduler)
base_model_results = evaluate_model(model, eval_dataloader, METRIC)
model.train()
for batch in train_dataloader:
outputs = model(**batch)
loss = outputs.loss
accelerator.backward(loss)
optimizer.step()
optimizer.zero_grad()
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
if __name__ == "__main__":
baseline_not_trained, baseline_trained = train_baseline()
accelerator_not_trained, accelerator_trained = train_integration()
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
)
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
)
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
)
assert baseline_trained["f1"] == accelerator_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
)

View File

@@ -0,0 +1,74 @@
# FSDP2 Benchmarks
This benchmark showcases `FSDP2` in 🤗 `accelerate` and compares it to a raw `torch` baseline.
## Overview
This benchmark consists of two parts:
- `main.py` is the main script that runs the benchmark
- `visualize.py` is the script that visualizes the results (if `--output_dir` was specified for the previous command)
## Motivation
We want to showcase that 🤗 `accelerate`'s integration of `FSDP2` is on par with raw PyTorch, and to highlight a "broken" part of PyTorch: creating an optimizer before applying `FSDP2` **doesn't result in a working training loop** (more on this later).
This script showcases **matching memory usage and convergence between `accelerate` and the `torch` baseline.**
To deal with this breaking change (and maintain backward compatibility with FSDP1 in terms of API), `accelerate` had to come up with a workaround, since `accelerate` assumes that the user will nearly always create a model, optimizer, scheduler, etc. beforehand and bring them themselves. This led to a stark increase in memory, and to the model not training at all, if the user created the optimizer before sharding.
To work around this, we replace the parameters inside the optimizer with the newly created FSDP2 sharded ones, as sketched below. More about this can be found in this [blog post (TBD)](TODO)
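The sketch below illustrates the idea under simplified assumptions (the helper names are illustrative; the actual implementation is `replace_optimizer_params` and `swap_back_optimizer_params` in `benchmarks/fsdp2/utils.py`):
```python
# Simplified sketch of the optimizer fix described above (not the benchmark code itself).
import torch

def capture_param_ptrs(model: torch.nn.Module) -> dict[str, int]:
    # Remember each original parameter's storage pointer before sharding.
    return {name: p.data_ptr() for name, p in model.named_parameters()}

def detach_optimizer_params(optimizer: torch.optim.Optimizer) -> None:
    # Replace the optimizer's references with placeholders so FSDP2 can reallocate
    # the parameters, but keep the old pointer around for the later swap.
    for group in optimizer.param_groups:
        for i, p in enumerate(group["params"]):
            placeholder = torch.empty_like(p)
            placeholder.old_ptr = p.data_ptr()
            group["params"][i] = placeholder

def attach_sharded_params(model: torch.nn.Module, optimizer: torch.optim.Optimizer, old_ptrs: dict[str, int]) -> None:
    # After `fully_shard(model)`, point the optimizer at the new sharded parameters.
    ptr_to_sharded = {old_ptrs[name]: p for name, p in model.named_parameters()}
    for group in optimizer.param_groups:
        group["params"] = [ptr_to_sharded[p.old_ptr] for p in group["params"]]
```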
> [!WARNING]
> This script is intended to fit on 2x 24GB GPUs, though on so few GPUs it's not possible to see the memory difference (discrepancies in grad allocation result in lower memory usage in the non-fixed case), only the difference in convergence. Below are attached results from 8x H100 GPUs where the difference is visible.
> TLDR: more GPUs = bigger memory difference between fixed and non-fixed cases.
## Results
Here are the results from running the benchmark on 8x H100 GPUs:
<p align="center">
<img src="imgs/allocated_memory.png" width="80%" alt="Allocated Memory Usage">
</p>
<p align="center">
<img src="imgs/reserved_memory.png" width="80%" alt="Reserved Memory Usage">
</p>
As you can see, the memory usage of `accelerate` and `torch_post_shard` (the **intended** way) are very similar, while `torch_pre_shard_not_fixed` uses significantly more memory. Our fix in `torch_pre_shard_fixed` brings the memory usage back in line with the **intended** approach.
> [!WARNING]
> Timing discrepancies are due to the benchmarks being run in a single script.
## Running
To run the benchmark, you can either use `accelerate launch` or `torchrun`:
```bash
accelerate launch main.py
```
```bash
# For two GPUs
torchrun --nproc_per_node 2 main.py
```
This supports multiple configurable options, you can learn about them by running:
```bash
python3 main.py --help
```
This script will run 4 different benchmarks:
- `torch_optimizer_after_fsdp`: `torch` baseline where optimizer is created after applying `FSDP2`, this is the **intended** way to do it
- `torch_optimizer_before_fsdp_not_fixed`: `torch` baseline where optimizer is created before applying `FSDP2` without fixing the optimizer parameters
- `torch_optimizer_before_fsdp_fixed`: `torch` baseline where optimizer is created before applying `FSDP2` with our fix to the optimizer
- `accelerate`: `accelerate`'s own integration of `FSDP2` where optimizer is created before applying `FSDP2`, but we apply our fix to the optimizer
Memory results are saved in the folder specified by the `--output_dir` argument.
Optionally, you can specify `--save_memory_snapshot` to save the torch memory snapshot, which can then be viewed using [`torch memory viz`](https://pytorch.org/memory_viz).
## Visualizing results
To visualize the results, you can run:
```bash
python3 visualize.py --dir <path_to_output_dir>
```
This will then create two plots, showcasing allocated and reserved memory usage between all the different benchmarks discussed above.

Binary image files added (not shown): the two plots referenced in the README under `benchmarks/fsdp2/imgs/` (124 KiB and 56 KiB).

122
benchmarks/fsdp2/main.py Normal file
View File

@@ -0,0 +1,122 @@
# Copyright 2025 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import functools
from typing import Callable
import torch
from accelerate import Accelerator
from utils import parse_args, prepare_accelerate, prepare_torch
MODEL_NAME = "Qwen/Qwen2.5-1.5B-Instruct"
LEARNING_RATE = 3e-5
CONFIG = {
"model_name": MODEL_NAME,
"learning_rate": LEARNING_RATE,
}
def train(
model: torch.nn.Module,
optimizer: torch.optim.Optimizer,
train_dataloader: torch.utils.data.DataLoader,
accelerator: Accelerator,
) -> torch.Tensor:
losses = []
for batch in train_dataloader:
optimizer.zero_grad()
outputs = model(**batch, use_cache=False)
loss = outputs.loss
losses.append(loss.item())
accelerator.backward(loss)
optimizer.step()
return torch.tensor(losses)
def evaluate(args, config: dict, init_fn: Callable, run_name: str) -> torch.Tensor:
model, optimizer, dataloader, accelerator, memory_tracker = init_fn(args, config)
loss = train(model, optimizer, dataloader, accelerator)
memory_tracker.stop()
msg = f"""Results for {run_name} (rank 0):
Loss: {loss[-1].item()}
Peak Allocated Memory: {float(memory_tracker.peak_allocated_memory):.2f} MB
Peak Reserved Memory: {float(memory_tracker.peak_reserved_memory):.2f} MB
{"-" * 34}"""
accelerator.print(msg)
return loss
def main():
args = parse_args()
evaluations = [
functools.partial(
evaluate,
init_fn=functools.partial(prepare_torch, post_shard_optimizer=False, apply_optimizer_fix=True),
run_name="Optimizer Before FSDP (w/ fix)",
),
functools.partial(
evaluate,
init_fn=functools.partial(prepare_torch, post_shard_optimizer=False, apply_optimizer_fix=False),
run_name="Optimizer Before FSDP (w/o fix)",
),
functools.partial(
evaluate,
init_fn=functools.partial(prepare_torch, post_shard_optimizer=True),
run_name="Optimizer After FSDP",
),
functools.partial(evaluate, init_fn=prepare_accelerate, run_name="Accelerate"),
]
labels = [
"Optimizer Before FSDP (w/ fix)",
"Optimizer Before FSDP (w/o fix)",
"Optimizer After FSDP",
"Accelerate",
]
results = {}
torch.use_deterministic_algorithms(True)
for evaluation, label in zip(evaluations, labels):
results[label] = evaluation(args, CONFIG)
torch.testing.assert_close(
results["Optimizer After FSDP"],
results["Optimizer Before FSDP (w/ fix)"],
msg="Optimizer After FSDP and Optimizer Before FSDP (w/ fix) should be the same",
)
torch.testing.assert_close(
results["Optimizer After FSDP"],
results["Accelerate"],
msg="Optimizer After FSDP and Accelerate should be the same",
)
torch.testing.assert_close(
results["Accelerate"],
results["Optimizer Before FSDP (w/ fix)"],
msg="Accelerate and Optimizer Before FSDP (w/ fix) should be the same",
)
torch.distributed.destroy_process_group()
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,130 @@
# Copyright 2025 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import gc
import json
import os
import threading
import time
import psutil
import torch
from accelerate import PartialState
class MemoryTracker:
def __init__(
self,
device: torch.device,
output_directory: str,
run_name: str,
save_memory_snapshot: bool,
log_interval: float = 0.01,
):
"""Class for tracking gpu and cpu memory usage of the process.
Args:
device (`torch.device`):
PyTorch device to monitor.
output_directory (`str`):
Directory to save the memory usage data to, will be created if it doesn't exist.
run_name (`str`):
Name of the run, will be used to name the output files.
save_memory_snapshot (`bool`):
Whether to also save `torch.cuda.memory._dump_snapshot` to the output directory.
log_interval (`float`, *optional*):
Interval in seconds between memory measurements. Defaults to 0.01.
"""
self.log_interval = log_interval
self.save_memory_snapshot = save_memory_snapshot
self.output_directory = output_directory
self.run_name = run_name
self.timestamps = []
self.allocated_memory = []
self.reserved_memory = []
self.virtual_memory = []
self.start_time = None
self.running = False
self._thread = None
self._state = PartialState()
self._process = psutil.Process()
self._device = device
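# Resolve the device-specific memory API (e.g. `torch.cuda` or `torch.xpu`), falling back to `torch.cuda`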
self.torch_accelerator_module = getattr(torch, device.type, torch.cuda)
def _monitor(self):
self.start_time = time.time()
while self.running:
allocated = self.torch_accelerator_module.memory_allocated(self._device) / (1024 * 1024)
reserved = self.torch_accelerator_module.memory_reserved(self._device) / (1024 * 1024)
virtual_memory = self._process.memory_info().rss / (1024 * 1024)
self.allocated_memory.append(allocated)
self.reserved_memory.append(reserved)
self.virtual_memory.append(virtual_memory)
self.timestamps.append(time.time() - self.start_time)
time.sleep(self.log_interval)
def start(self):
gc.collect()
self.torch_accelerator_module.empty_cache()
if self.output_directory:
os.makedirs(self.output_directory, exist_ok=True)
if self.save_memory_snapshot:
self.torch_accelerator_module.memory._record_memory_history()
self.running = True
self._thread = threading.Thread(target=self._monitor)
self._thread.daemon = True
self._thread.start()
def stop(self):
self.running = False
if self._thread:
self._thread.join()
if self.save_memory_snapshot and self._state.is_main_process and self.output_directory:
output_file = os.path.join(self.output_directory, f"{self.run_name}_memory_snapshot.pkl")
self.torch_accelerator_module.memory._dump_snapshot(output_file)
if self._state.is_main_process and self.output_directory:
path = os.path.join(self.output_directory, f"{self.run_name}_memory_usage.json")
with open(path, "w") as f:
json.dump(
{
"timestamps": self.timestamps,
"allocated_memory": self.allocated_memory,
"reserved_memory": self.reserved_memory,
"virtual_memory": self.virtual_memory,
},
f,
)
if self.save_memory_snapshot:
self.torch_accelerator_module.memory._record_memory_history(False)
self.torch_accelerator_module.empty_cache()
@property
def peak_allocated_memory(self):
return max(self.allocated_memory)
@property
def peak_reserved_memory(self):
return max(self.reserved_memory)
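# A minimal usage sketch (illustrative; in the benchmark the tracker is created by the
# `prepare_*` helpers in `utils.py` and stopped in `main.py`). Paths and names below are placeholders.
if __name__ == "__main__":
    device = torch.device("cuda", 0)
    tracker = MemoryTracker(device, output_directory="results", run_name="demo", save_memory_snapshot=False)
    tracker.start()
    # Placeholder workload standing in for a few training steps.
    _ = torch.randn(1024, 1024, device=device) @ torch.randn(1024, 1024, device=device)
    tracker.stop()
    print(f"Peak allocated memory: {tracker.peak_allocated_memory:.2f} MB")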

290
benchmarks/fsdp2/utils.py Normal file
View File

@@ -0,0 +1,290 @@
# Copyright 2025 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
from types import MethodType
from typing import Union
import torch
from datasets import load_dataset
from measure_utils import MemoryTracker
from torch.distributed.fsdp import MixedPrecisionPolicy, fully_shard
from torch.optim import AdamW
from torch.utils.data import DataLoader
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer, DataCollatorForLanguageModeling
from transformers.models.qwen2.modeling_qwen2 import Qwen2DecoderLayer
from accelerate import Accelerator, FullyShardedDataParallelPlugin
from accelerate.state import AcceleratorState, is_initialized
from accelerate.utils import convert_outputs_to_fp32, set_seed
SEED = 421
def get_named_parameters(model: torch.nn.Module, drop_refs: bool = False) -> dict[str, Union[torch.Tensor, int]]:
"""
This function returns a dictionary mapping the parameter names to their data pointers or
the original parameters if `drop_refs` is `False`.
It is used to get the original parameter names before `fully_shard` is applied.
When `drop_refs` is `True`, we only keep the data pointers and drop the references to the original parameters,
so `fully_shard` will then trigger a new allocation for the sharded ones.
Args:
model (`torch.nn.Module`): Model instance to get the named parameters from
drop_refs (`bool`, *optional*, defaults to `False`): Whether to drop the references to the original parameters
Returns:
`dict[str, Union[torch.Tensor, int]]`: Dictionary mapping the parameter names to their data pointers or the original parameters if `drop_refs` is `False`
"""
named_parameters = {}
for n, p in model.named_parameters():
# We only preserve the data pointers to have the unique 1:1 mapping between the original and the sharded parameters
named_parameters[n] = p.data_ptr() if drop_refs else p
return named_parameters
def replace_optimizer_params(optimizer: torch.optim.Optimizer):
"""
This function is called before using `fully_shard` on the model. It replaces the parameters of the optimizer with
empty tensors, so `fully_shard` can trigger a new allocation for the sharded ones. After this, we store the original
`data_ptr` on each placeholder, so we can use it later to map the sharded parameters back to the original ones.
This function modifies the optimizer in-place.
Args:
optimizer (torch.optim.Optimizer): Optimizer instance which contains the original model parameters
"""
for param_group in optimizer.param_groups:
for i, p in enumerate(param_group["params"]):
# We drop a reference to the original param here, so that _move_states_to_device triggers a reallocation
# This is required or else the `fully_shard` -> `_move_states_to_device` uses the original memory address
# for the sharded parameters, and we get a weird/undefined behavior.
param_group["params"][i] = torch.empty_like(p)
# We save the original data_ptr, so we can swap back the parameters later
param_group["params"][i].data_ptr = p.data_ptr()
def swap_back_optimizer_params(
model: torch.nn.Module, optimizer: torch.optim.Optimizer, old_named_parameter_pointers: dict[str, int]
):
"""
This function is the counterpart of `replace_optimizer_params`. It is called after `fully_shard` has been applied to
the model. It swaps the optimizer's parameters for their sharded counterparts.
It is done using the `data_ptr` mapping prepared in `replace_optimizer_params` and `get_named_parameters`.
Args:
model (`torch.nn.Module`): Model instance to get the new named parameters from
optimizer (`torch.optim.Optimizer`): Optimizer instance to swap the parameters of
old_named_parameter_pointers (`dict[str, int]`): Dictionary mapping the original parameter names: data_ptrs to the new ones
"""
# We get the new named parameters after `fully_shard` being applied
# We don't drop the references as we need the sharded parameters now
new_named_parameters = get_named_parameters(model, drop_refs=False)
# We create a mapping from the original data_ptr to the new sharded param corresponding to it
mapping = {p: new_named_parameters[n] for n, p in old_named_parameter_pointers.items()}
for param_group in optimizer.param_groups:
# We swap the parameters of the optimizer to the new sharded ones
param_group["params"] = [mapping[p.data_ptr] for p in param_group["params"]]
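# Typical ordering of the three helpers above (a sketch of the intended flow; see `prepare_torch` below for the real usage):
#   1. old_ptrs = get_named_parameters(model, drop_refs=True)   # record the original data_ptrs, dropping references
#   2. replace_optimizer_params(optimizer)                      # detach the optimizer from the original storage
#   3. fully_shard(model, ...)                                  # shard the model, triggering new allocations
#   4. swap_back_optimizer_params(model, optimizer, old_ptrs)   # point the optimizer at the sharded parameters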
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument(
"--output_dir",
type=str,
help="Directory to save the benchmarking results.",
)
parser.add_argument(
"--save_memory_snapshot",
action="store_true",
default=False,
help="If True, `torch.cuda.memory._dump_snapshot` will be used to additionally save the memory trace.",
)
######################
# Training arguments #
######################
parser.add_argument(
"--batch_size",
type=int,
default=2,
help="Batch size for the training loop.",
)
parser.add_argument(
"--block_size",
type=int,
default=128,
help="The maximum sequence length to use with the model.",
)
parser.add_argument(
"--dataset_fraction",
type=float,
default=1.0,
help="Fraction of the dataset to use.",
)
return parser.parse_args()
def prepare_dataloader(tokenizer, args, accelerator: Accelerator) -> DataLoader:
dataset = load_dataset("tiny_shakespeare", split="train", trust_remote_code=True)
def tokenize_function(example):
return tokenizer(
example["text"],
)
dataset = dataset.map(
tokenize_function,
batched=True,
remove_columns=["text"],
)
block_size = min(tokenizer.model_max_length, args.block_size)
def group_texts(examples):
concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
total_length = len(concatenated_examples[list(examples.keys())[0]])
total_length = (total_length // block_size) * block_size
result = {
k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
for k, t in concatenated_examples.items()
}
result["labels"] = result["input_ids"].copy()
return result
dataset = dataset.map(group_texts, batched=True)
dataset = dataset.select(range(int(len(dataset) * args.dataset_fraction)))
def collate_fn(examples):
return DataCollatorForLanguageModeling(
tokenizer=tokenizer,
mlm=False,
)(examples)
dataloader = DataLoader(
dataset,
batch_size=args.batch_size,
collate_fn=collate_fn,
)
dataloader = accelerator.prepare(dataloader)
return dataloader
def get_model(model_name: str):
# We require the model to be loaded in fp32, otherwise the benchmarks don't match, as accelerate upcasts parameters to fp32
config = AutoConfig.from_pretrained(model_name, trust_remote_code=True, torch_dtype=torch.float32)
model = AutoModelForCausalLM.from_config(config)
return model
def get_tokenizer(model_name: str):
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
return tokenizer
def prepare_torch(
args, config: dict, post_shard_optimizer: bool = False, apply_optimizer_fix: bool = False
) -> tuple[torch.nn.Module, torch.optim.Optimizer, torch.utils.data.DataLoader, Accelerator]:
mp_policy = MixedPrecisionPolicy(
param_dtype=torch.bfloat16,
reduce_dtype=torch.bfloat16,
output_dtype=torch.bfloat16,
)
accelerator = Accelerator(mixed_precision="bf16")
set_seed(SEED)
is_fixed = "fixed" if apply_optimizer_fix else "not_fixed"
is_post_shard = "optimizer_after_fsdp" if post_shard_optimizer else "optimizer_before_fsdp"
run_name = f"torch_{is_post_shard}" if post_shard_optimizer else f"torch_{is_post_shard}_{is_fixed}"
tokenizer = get_tokenizer(config["model_name"])
train_dataloader = prepare_dataloader(tokenizer, args, accelerator)
memory_tracker = MemoryTracker(accelerator.device, args.output_dir, run_name, args.save_memory_snapshot)
memory_tracker.start()
model = get_model(config["model_name"])
optimizer = None
if not post_shard_optimizer:
optimizer = AdamW(model.parameters(), lr=config["learning_rate"])
if apply_optimizer_fix:
# We drop the references to the original parameters, so that `fully_shard` can trigger a new allocation
# Then we get the `module_name: data_ptr` mapping, so we can swap back the parameters later
old_named_parameters = get_named_parameters(model, drop_refs=True)
# We replace the parameters of the optimizer with empty tensors, so that `fully_shard` can trigger a new allocation
# We also change the `data_ptr` of the parameters to the original ones, so we can swap back the parameters later
replace_optimizer_params(optimizer)
for module in model.modules():
if isinstance(module, Qwen2DecoderLayer):
fully_shard(module, mp_policy=mp_policy)
fully_shard(model, mp_policy=mp_policy)
# We do this to imitate how accelerate forces outputs to be in fp32 via `convert_outputs_to_fp32`
autocast_context = torch.autocast(device_type=accelerator.state.device.type, dtype=torch.bfloat16)
model_forward_func = model.forward.__func__
new_forward = autocast_context(model_forward_func)
model.forward = MethodType(new_forward, model)
model.forward = MethodType(convert_outputs_to_fp32(model.forward.__func__), model)
if post_shard_optimizer:
optimizer = AdamW(model.parameters(), lr=config["learning_rate"])
if not post_shard_optimizer and apply_optimizer_fix:
# We swap back the parameters of the optimizer to the original ones
swap_back_optimizer_params(model, optimizer, old_named_parameters)
return model, optimizer, train_dataloader, accelerator, memory_tracker
def prepare_accelerate(
args, config: dict
) -> tuple[torch.nn.Module, torch.optim.Optimizer, torch.utils.data.DataLoader, Accelerator]:
if is_initialized():
AcceleratorState()._reset_state(True)
fsdp_plugin = FullyShardedDataParallelPlugin(
fsdp_version=2,
auto_wrap_policy="transformer_based_wrap",
transformer_cls_names_to_wrap=["Qwen2DecoderLayer"],
)
accelerator = Accelerator(
fsdp_plugin=fsdp_plugin,
mixed_precision="bf16",
)
set_seed(SEED)
tokenizer = get_tokenizer(config["model_name"])
train_dataloader = prepare_dataloader(tokenizer, args, accelerator)
memory_tracker = MemoryTracker(accelerator.device, args.output_dir, "accelerate", args.save_memory_snapshot)
memory_tracker.start()
model = get_model(config["model_name"])
optimizer = AdamW(model.parameters(), lr=config["learning_rate"])
model, optimizer = accelerator.prepare(model, optimizer)
return model, optimizer, train_dataloader, accelerator, memory_tracker

View File

@ -0,0 +1,114 @@
# Copyright 2025 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import json
import matplotlib.pyplot as plt
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument("--dir", type=str, help="Directory containing the memory usage data")
parser.add_argument(
"--memory_threshold",
type=int,
default=0,
help="Memory threshold used to filter out data points below this value (only the first `--filter_partition` of the points is filtered, which should roughly correspond to the model loading)",
)
parser.add_argument(
"--filter_partition",
type=float,
default=1 / 3,
help="Fraction of the initial points from which data below the memory threshold is dropped",
)
return parser.parse_args()
def filter_data(data, memory_threshold, filter_partition, key):
timestamps = data["timestamps"]
memory = data[key]
mid_point = int(len(timestamps) * filter_partition)
filtered_times = []
filtered_memory = []
for i, (t, m) in enumerate(zip(timestamps, memory)):
if i < mid_point and m < memory_threshold:
continue
filtered_times.append(t)
filtered_memory.append(m)
return filtered_times, filtered_memory
def compare_memory_usage(data, labels, memory_threshold, filter_partition):
plt.style.use("seaborn-v0_8")
colors = ["#2ecc71", "#e74c3c", "#3498db", "#f1c40f"]
fig1, ax1 = plt.subplots(figsize=(15, 5))
for data_item, label, color in zip(data, labels, colors):
timestamps, allocated = filter_data(data_item, memory_threshold, filter_partition, "allocated_memory")
ax1.plot(timestamps, allocated, label=label, color=color, linewidth=2)
ax1.set_xlabel("Time (s)", fontsize=12)
ax1.set_ylabel("Allocated Memory (GB)", fontsize=12)
ax1.set_title("Allocated Memory Usage Over Time", fontsize=14, pad=15)
ax1.grid(True, linestyle="--", alpha=0.7)
ax1.legend(frameon=True, fancybox=True, shadow=True, fontsize=10)
ax1.spines["top"].set_visible(False)
ax1.spines["right"].set_visible(False)
plt.tight_layout()
fig2, ax2 = plt.subplots(figsize=(15, 5))
for data_item, label, color in zip(data, labels, colors):
timestamps, reserved = filter_data(data_item, memory_threshold, filter_partition, "reserved_memory")
ax2.plot(timestamps, reserved, label=label, color=color, linewidth=2)
ax2.set_xlabel("Time (s)", fontsize=12)
ax2.set_ylabel("Reserved Memory (GB)", fontsize=12)
ax2.set_title("Reserved Memory Usage Over Time", fontsize=14, pad=15)
ax2.grid(True, linestyle="--", alpha=0.7)
ax2.legend(frameon=True, fancybox=True, shadow=True, fontsize=10)
ax2.spines["top"].set_visible(False)
ax2.spines["right"].set_visible(False)
plt.tight_layout()
return fig1, fig2
if __name__ == "__main__":
args = parse_args()
DIR = args.dir
with open(f"{DIR}/torch_optimizer_before_fsdp_not_fixed_memory_usage.json") as f:
optimizer_before_fsdp_not_fixed = json.load(f)
with open(f"{DIR}/torch_optimizer_after_fsdp_memory_usage.json") as f:
optimizer_after_fsdp = json.load(f)
with open(f"{DIR}/torch_optimizer_before_fsdp_fixed_memory_usage.json") as f:
optimizer_before_fsdp_fixed = json.load(f)
with open(f"{DIR}/accelerate_memory_usage.json") as f:
accelerate = json.load(f)
data = [optimizer_before_fsdp_not_fixed, optimizer_before_fsdp_fixed, optimizer_after_fsdp, accelerate]
labels = [
"Optimizer Before FSDP (w/o fix)",
"Optimizer Before FSDP (w/ fix)",
"Optimizer After FSDP",
"Accelerate",
]
fig1, fig2 = compare_memory_usage(data, labels, args.memory_threshold, args.filter_partition)
fig1.savefig(f"{DIR}/allocated_memory.png")
fig2.savefig(f"{DIR}/reserved_memory.png")

View File

@ -0,0 +1,111 @@
# Regional Compilation Benchmark
This benchmark compares different compilation strategies using PyTorch's `torch.compile` and Accelerate's `compile_regions` utility, which is based on the recipe in [PyTorch documentation](https://pytorch.org/tutorials/recipes/regional_compilation.html).
## Overview
The benchmark evaluates three approaches:
- **Baseline**: No compilation, standard PyTorch eager execution.
- **Full compilation**: Using PyTorch's `torch.compile()` on the entire model.
- **Regional compilation**: Using `accelerate.utils.compile_regions()` which targets specific blocks of the model to optimize compilation time.
Each approach is tested with batch sizes of 1 and 4 at a sequence length of 128 on various LLaMA-based models ranging from 1B to 13B parameters. We purposefully run the forward pass outside of the `torch.no_grad()` context to simulate performance in a training environment, where gradients are needed.
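The two compiled variants differ only in how the model is wrapped; the minimal sketch below mirrors the benchmark script further down (the model name is taken from the result tables, device placement omitted):
```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM
from accelerate.utils import compile_regions

config = AutoConfig.from_pretrained("NousResearch/Llama-3.2-1B")
model = AutoModelForCausalLM.from_config(config).eval()

full_compilation_model = torch.compile(model)        # compiles the whole model as a single graph on first call
regional_compilation_model = compile_regions(model)  # compiles each repeated block (e.g. a decoder layer) and reuses it
```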
## Usage
To run this benchmark:
```bash
python regional_compilation.py
```
The script will automatically download the model configurations, create models, and benchmark both compilation and inference times across different scenarios.
## Requirements
- Suitable GPU memory for the models being tested.
- PyTorch with CUDA support.
- Transformers library.
- Accelerate library.
## Results
The benchmark results are summarized in the following figures:
- Compilation time is how long it takes to run the first forward pass.
- Speedup factor is the ratio of non-compiled baseline inference time to the fully/regionally compiled inference time.
<p align="center">
<img src="imgs/compilation_time.png" width="80%" alt="Compilation Time">
</p>
<p align="center">
<img src="imgs/speedup_factor.png" width="80%" alt="Speedup Factor">
</p>
Full results are available in the tables below:
```markdown
[-------------------------------------------------- NousResearch/Llama-3.2-1B ---------------------------------------------------]
| Inference time (1x128) | Inference time (4x128) | Compile time (1x128) | Compile time (4x128)
1 threads: -----------------------------------------------------------------------------------------------------------------------
Baseline | 18.3 | 18.4 | |
Full compilation | 6.3 | 10.0 | 10696.4 | 10248.0
Regional compilation | 9.7 | 10.0 | 1952.7 | 2903.9
Times are in milliseconds (ms).
[---------------------------------------------- NousResearch/Hermes-3-Llama-3.2-3B ----------------------------------------------]
| Inference time (1x128) | Inference time (4x128) | Compile time (1x128) | Compile time (4x128)
1 threads: -----------------------------------------------------------------------------------------------------------------------
Baseline | 33.4 | 33.6 | |
Full compilation | 11.2 | 23.9 | 17857.5 | 17736.5
Regional compilation | 17.3 | 23.7 | 2993.2 | 2478.8
Times are in milliseconds (ms).
[---------------------------------------------- NousResearch/Hermes-3-Llama-3.1-8B ----------------------------------------------]
| Inference time (1x128) | Inference time (4x128) | Compile time (1x128) | Compile time (4x128)
1 threads: -----------------------------------------------------------------------------------------------------------------------
Baseline | 40.3 | 59.5 | |
Full compilation | 18.9 | 54.4 | 20437.8 | 20152.3
Regional compilation | 19.7 | 54.0 | 2903.1 | 2438.0
Times are in milliseconds (ms).
[--------------------------------------------- NousResearch/Nous-Hermes-Llama2-13b ----------------------------------------------]
| Inference time (1x128) | Inference time (4x128) | Compile time (1x128) | Compile time (4x128)
1 threads: -----------------------------------------------------------------------------------------------------------------------
Baseline | 45.5 | 100.4 | |
Full compilation | 29.4 | 89.7 | 23099.4 | 22885.9
Regional compilation | 29.4 | 87.5 | 2945.5 | 2526.2
Times are in milliseconds (ms).
```
## Results Summary
### Compilation Time
Regional compilation provides significantly faster compilation times compared to full model compilation:
- **Full compilation**: Takes ~10-23 seconds depending on model size.
- **Regional compilation**: Takes only ~2-3 seconds across all model sizes.
- **Speed improvement**: Regional compilation is **5-9x faster** to compile.
### Inference Time
Regional compilation delivers inference performance close to full compilation:
- For batch size 1:
- For smaller models (1B-3B): Full compilation has a slight edge over regional compilation.
- For larger models (8B-13B): Regional compilation performs similarly to full compilation.
- For batch size 4: Regional compilation performs similarly to full compilation across all models.
## Key Takeaways
1. **Comparable Performance**: Regional compilation delivers performance speedups similar to full compilation, especially for larger models.
2. **Faster Compilation**: Regional compilation significantly reduces the time taken to compile models, making it a more efficient choice for deployment.
3. **Batch Size Impact**: At batch size 4, full compilation and regional compilation perform nearly identically.
4. **Model Size Impact**: Even with a small batch size, full compilation and regional compilation perform similarly for larger models (8B-13B).
5. **Practical Application**: For real-world applications, regional compilation is a practical choice for optimizing training cold start times, especially when working with large models.

[Two binary image files added (242 KiB and 218 KiB); previews not shown.]

View File

@ -0,0 +1,77 @@
# Copyright 2025 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import torch
from torch.utils.benchmark import Compare, Timer
from transformers import AutoConfig, AutoModelForCausalLM
from accelerate.test_utils.testing import get_backend
from accelerate.utils import compile_regions
torch.set_float32_matmul_precision("high")
COMPILE_ITERS = 2
INFERENCE_ITERS = 100
BASELINE = "Baseline"
COMPILE_TIME = "Compile time"
INFERENCE_TIME = "Inference time"
FULL_COMPILATION = "Full compilation"
REGIONAL_COMPILATION = "Regional compilation"
INFERENCE_STMT = "model(input_ids, use_cache=False)"
COMPILE_STMT = f"torch._dynamo.reset(); torch._inductor.utils.clear_inductor_caches(); {INFERENCE_STMT}"
torch_device_type, _, _ = get_backend()
results = []
for model_id in [
# non-gated llama models
"NousResearch/Llama-3.2-1B",
"NousResearch/Hermes-3-Llama-3.2-3B",
"NousResearch/Hermes-3-Llama-3.1-8B",
"NousResearch/Nous-Hermes-Llama2-13b",
]:
with torch.device(torch_device_type):
config = AutoConfig.from_pretrained(model_id)
model = AutoModelForCausalLM.from_config(config).to(dtype=torch.float16).eval()
full_compilation_model = torch.compile(model)
regional_compilation_model = compile_regions(model)
for model, sub_label, description, stmt, iters in [
(model, BASELINE, INFERENCE_TIME, INFERENCE_STMT, INFERENCE_ITERS),
(full_compilation_model, FULL_COMPILATION, COMPILE_TIME, COMPILE_STMT, COMPILE_ITERS),
(full_compilation_model, FULL_COMPILATION, INFERENCE_TIME, INFERENCE_STMT, INFERENCE_ITERS),
(regional_compilation_model, REGIONAL_COMPILATION, COMPILE_TIME, COMPILE_STMT, COMPILE_ITERS),
(regional_compilation_model, REGIONAL_COMPILATION, INFERENCE_TIME, INFERENCE_STMT, INFERENCE_ITERS),
]:
for batch_size, sequence_length in [(1, 128), (4, 128)]:
input_ids = torch.randint(
0, 1000, size=(batch_size, sequence_length), dtype=torch.int64, device=torch_device_type
)
results.append(
Timer(
label=model_id,
sub_label=sub_label,
description=f"{description} ({batch_size}x{sequence_length})",
globals={"model": model, "input_ids": input_ids},
stmt=stmt,
).timeit(number=iters)
)
compare = Compare(results)
compare.colorize()
compare.print()

View File

@ -33,6 +33,7 @@ huggingface/accelerate:{accelerator}-{nightly,release}
* `cpu`: Comes compiled off of `python:3.9-slim` and is designed for non-CUDA based workloads.
* More to come soon
* `gpu-deepspeed`: Comes compiled off of the `nvidia/cuda` image and includes core parts like `bitsandbytes` as well as the latest `deepspeed` version. Runs off python 3.10.
* `gpu-fp8-transformerengine`: Comes compiled off of `nvcr.io/nvidia/pytorch` and is specifically for running the `benchmarks/fp8` scripts on devices which support FP8 operations using the `TransformerEngine` library (RTX 4090, H100, etc.)
## Nightlies vs Releases

View File

@ -1,7 +1,7 @@
# Builds CPU-only Docker image of PyTorch
# Uses multi-staged approach to reduce size
# Stage 1
FROM python:3.8-slim as compile-image
FROM python:3.9-slim as compile-image
ARG DEBIAN_FRONTEND=noninteractive
@ -25,7 +25,7 @@ RUN python3 -m pip install --no-cache-dir \
--extra-index-url https://download.pytorch.org/whl/cpu
# Stage 2
FROM python:3.8-slim AS build-image
FROM python:3.9-slim AS build-image
COPY --from=compile-image /opt/venv /opt/venv
RUN useradd -ms /bin/bash user
USER user

View File

@ -25,12 +25,12 @@ RUN source activate accelerate && conda install -c conda-forge mpi4py
RUN source activate accelerate && \
python3 -m pip install --no-cache-dir \
git+https://github.com/huggingface/accelerate#egg=accelerate[testing,test_trackers,deepspeed] \
--extra-index-url https://download.pytorch.org/whl/cu117
--extra-index-url https://download.pytorch.org/whl/cu126
RUN python3 -m pip install --no-cache-dir bitsandbytes
# Stage 2
FROM nvidia/cuda:12.1.0-cudnn8-devel-ubuntu20.04 AS build-image
FROM nvidia/cuda:12.6.3-cudnn-devel-ubuntu22.04 AS build-image
COPY --from=compile-image /opt/conda /opt/conda
ENV PATH /opt/conda/bin:$PATH

View File

@ -24,12 +24,12 @@ RUN source activate accelerate && conda install -c conda-forge mpi4py
RUN source activate accelerate && \
python3 -m pip install --no-cache-dir \
git+https://github.com/huggingface/accelerate#egg=accelerate[testing,test_trackers] \
--extra-index-url https://download.pytorch.org/whl/cu117
--extra-index-url https://download.pytorch.org/whl/cu126
RUN python3 -m pip install --no-cache-dir bitsandbytes
# Stage 2
FROM nvidia/cuda:12.1.0-cudnn8-devel-ubuntu20.04 AS build-image
FROM nvidia/cuda:12.6.3-cudnn-devel-ubuntu22.04 AS build-image
COPY --from=compile-image /opt/conda /opt/conda
ENV PATH /opt/conda/bin:$PATH

View File

@ -16,7 +16,7 @@
- local: basic_tutorials/tpu
title: TPU training
- local: basic_tutorials/launch
title: Launching distributed code
title: Launching Accelerate scripts
- local: basic_tutorials/notebook
title: Launching distributed training from Jupyter Notebooks
title: Tutorials
@ -31,8 +31,10 @@
title: Model quantization
- local: usage_guides/tracking
title: Experiment trackers
- local: usage_guides/profiler
title: Profiler
- local: usage_guides/checkpoint
title: Save and load training states
title: Checkpointing
- local: basic_tutorials/troubleshooting
title: Troubleshoot
- local: usage_guides/training_zoo
@ -48,16 +50,24 @@
title: Low precision (FP8) training
- local: usage_guides/deepspeed
title: DeepSpeed
- local: usage_guides/deepspeed_multiple_model
title: Using multiple models with DeepSpeed
- local: usage_guides/ddp_comm_hook
title: DDP Communication Hooks
- local: usage_guides/fsdp
title: Fully Sharded Data Parallelism
title: Fully Sharded Data Parallel
- local: usage_guides/megatron_lm
title: Megatron-LM
- local: usage_guides/sagemaker
title: Amazon SageMaker
- local: usage_guides/mps
title: Apple M1 GPUs
- local: usage_guides/ipex
title: IPEX training with CPU
- local: usage_guides/intel_cpu
title: Intel CPU
- local: usage_guides/gaudi
title: Intel Gaudi
- local: usage_guides/compilation
title: Compilation
title: Training
- isExpanded: true
sections:
@ -69,7 +79,7 @@
title: How to guides
- sections:
- local: concept_guides/internal_mechanism
title: 🤗 Accelerate's internal mechanism
title: Accelerate's internal mechanism
- local: concept_guides/big_model_inference
title: Loading big models into memory
- local: concept_guides/performance
@ -80,24 +90,26 @@
title: Gradient synchronization
- local: concept_guides/fsdp_and_deepspeed
title: FSDP vs DeepSpeed
- local: concept_guides/fsdp1_vs_fsdp2
title: FSDP1 vs FSDP2
- local: concept_guides/low_precision_training
title: How training in low-precision environments is possible (FP8)
title: Low precision training methods
- local: concept_guides/training_tpu
title: TPU best practices
title: Training on TPUs
title: Concepts and fundamentals
- sections:
- sections:
- local: package_reference/accelerator
title: Accelerator
- local: package_reference/state
title: Stateful configuration classes
title: Stateful classes
- local: package_reference/cli
title: The Command Line
- local: package_reference/torch_wrappers
title: Torch wrapper classes
title: DataLoaders, Optimizers, Schedulers
- local: package_reference/tracking
title: Experiment trackers
- local: package_reference/launchers
title: Distributed launchers
title: Launchers
- local: package_reference/deepspeed
title: DeepSpeed utilities
- local: package_reference/logging
@ -105,13 +117,15 @@
- local: package_reference/big_modeling
title: Working with large models
- local: package_reference/inference
title: Distributed inference with big models
title: Pipeline parallelism
- local: package_reference/kwargs
title: Kwargs handlers
- local: package_reference/fp8
title: FP8
- local: package_reference/utilities
title: Utility functions and classes
- local: package_reference/megatron_lm
title: Megatron-LM Utilities
title: Megatron-LM utilities
- local: package_reference/fsdp
title: Fully Sharded Data Parallelism Utilities
title: Fully Sharded Data Parallel utilities
title: "Reference"

View File

@ -13,31 +13,29 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Installation and Configuration
# Installation
Before you start, you will need to setup your environment, install the appropriate packages, and configure 🤗 Accelerate. 🤗 Accelerate is tested on **Python 3.8+**.
Before you start, you will need to set up your environment, install the appropriate packages, and configure Accelerate. Accelerate is tested on **Python 3.8+**.
## Installing 🤗 Accelerate
Accelerate is available on pypi and conda, as well as on GitHub. Details to install from each are below:
🤗 Accelerate is available on pypi and conda, as well as on GitHub. Details to install from each are below:
## pip
### pip
To install 🤗 Accelerate from pypi, perform:
To install Accelerate from pypi, perform:
```bash
pip install accelerate
```
### conda
## conda
🤗 Accelerate can also be installed with conda with:
Accelerate can also be installed with conda with:
```bash
conda install -c conda-forge accelerate
```
### Source
## Source
New features are added every day that haven't been released yet. To try them out yourself, install
from the GitHub repository:
@ -56,9 +54,9 @@ cd accelerate
pip install -e .
```
## Configuring 🤗 Accelerate
## Configuration
After installing, you need to configure 🤗 Accelerate for how the current system is setup for training.
After installing, you need to configure Accelerate for how the current system is set up for training.
To do so run the following and answer the questions prompted to you:
```bash
@ -70,7 +68,8 @@ To write a barebones configuration that doesn't include options such as DeepSpee
```bash
python -c "from accelerate.utils import write_basic_config; write_basic_config(mixed_precision='fp16')"
```
🤗 Accelerate will automatically utilize the maximum number of GPUs available and set the mixed precision mode.
Accelerate will automatically utilize the maximum number of GPUs available and set the mixed precision mode.
To check that your configuration looks fine, run:
@ -80,23 +79,36 @@ accelerate env
An example output is shown below, which describes two GPUs on a single machine with no mixed precision being used:
```bash
- `Accelerate` version: 0.11.0.dev0
- Platform: Linux-5.10.0-15-cloud-amd64-x86_64-with-debian-11.3
- Python version: 3.7.12
- Numpy version: 1.19.5
- PyTorch version (GPU?): 1.12.0+cu102 (True)
- `Accelerate` version: 1.2.0.dev0
- Platform: Linux-6.8.0-47-generic-x86_64-with-glibc2.35
- `accelerate` bash location: /home/zach/miniconda3/envs/accelerate/bin/accelerate
- Python version: 3.10.13
- Numpy version: 1.26.4
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- PyTorch XPU available: False
- PyTorch NPU available: False
- PyTorch MLU available: False
- PyTorch MUSA available: False
- System RAM: 187.91 GB
- GPU type: NVIDIA GeForce RTX 4090
- `Accelerate` default config:
- compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_GPU
- mixed_precision: no
- use_cpu: False
- debug: False
- num_processes: 2
- machine_rank: 0
- num_machines: 1
- main_process_ip: None
- main_process_port: None
- gpu_ids: all
- rdzv_backend: static
- same_network: True
- main_training_function: main
- deepspeed_config: {}
- fsdp_config: {}
```
- enable_cpu_affinity: False
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
```

View File

@ -13,9 +13,9 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Launching your 🤗 Accelerate scripts
# Launching Accelerate scripts
In the previous tutorial, you were introduced to how to modify your current training script to use 🤗 Accelerate.
In the previous tutorial, you were introduced to how to modify your current training script to use Accelerate.
The final version of that code is shown below:
```python
@ -69,14 +69,14 @@ Next, you need to launch it with `accelerate launch`.
<Tip warning={true}>
It's recommended you run `accelerate config` before using `accelerate launch` to configure your environment to your liking.
Otherwise 🤗 Accelerate will use very basic defaults depending on your system setup.
Otherwise Accelerate will use very basic defaults depending on your system setup.
</Tip>
## Using accelerate launch
🤗 Accelerate has a special CLI command to help you launch your code in your system through `accelerate launch`.
Accelerate has a special CLI command to help you launch your code in your system through `accelerate launch`.
This command wraps around all of the different commands needed to launch your script on various platforms, without you having to remember what each of them is.
<Tip>
@ -97,11 +97,14 @@ Since this runs the various torch spawn methods, all of the expected environment
For example, here is how to use `accelerate launch` with a single GPU:
```bash
# for cuda device:
CUDA_VISIBLE_DEVICES="0" accelerate launch {script_name.py} --arg1 --arg2 ...
# for xpu device:
ZE_AFFINITY_MASK="0" accelerate launch {script_name.py} --arg1 --arg2 ...
```
You can also use `accelerate launch` without performing `accelerate config` first, but you may need to manually pass in the right configuration parameters.
In this case, 🤗 Accelerate will make some hyperparameter decisions for you, e.g., if GPUs are available, it will use all of them by default without the mixed precision.
In this case, Accelerate will make some hyperparameter decisions for you, e.g., if GPUs are available, it will use all of them by default without the mixed precision.
Here is how you would use all GPUs and train with mixed precision disabled:
```bash
@ -129,14 +132,14 @@ accelerate launch -h
<Tip>
Even if you are not using 🤗 Accelerate in your code, you can still use the launcher for starting your scripts!
Even if you are not using Accelerate in your code, you can still use the launcher for starting your scripts!
</Tip>
For a visualization of this difference, that earlier `accelerate launch` on multi-gpu would look something like so with `torchrun`:
```bash
MIXED_PRECISION="fp16" torchrun --nproc_per_node=2 --num_machines=1 {script_name.py} {--arg1} {--arg2} ...
MIXED_PRECISION="fp16" torchrun --nproc_per_node=2 --nnodes=1 {script_name.py} {--arg1} {--arg2} ...
```
You can also launch your script utilizing the launch CLI as a python module itself, enabling the ability to pass in other python-specific
@ -178,7 +181,7 @@ accelerate launch {script_name.py} {--arg1} {--arg2} ...
## Custom Configurations
As briefly mentioned earlier, `accelerate launch` should be mostly used through combining set configurations
made with the `accelerate config` command. These configs are saved to a `default_config.yaml` file in your cache folder for 🤗 Accelerate.
made with the `accelerate config` command. These configs are saved to a `default_config.yaml` file in your cache folder for Accelerate.
This cache folder is located at (with decreasing order of priority):
- The content of your environment variable `HF_HOME` suffixed with `accelerate`.
@ -211,7 +214,7 @@ accelerate launch --config_file {path/to/config/my_config_file.yaml} {script_nam
```
## Multi-node training
Multi-node training with 🤗Accelerate is similar to [multi-node training with torchrun](https://pytorch.org/tutorials/intermediate/ddp_series_multinode.html). The simplest way to launch a multi-node training run is to do the following:
Multi-node training with Accelerate is similar to [multi-node training with torchrun](https://pytorch.org/tutorials/intermediate/ddp_series_multinode.html). The simplest way to launch a multi-node training run is to do the following:
- Copy your codebase and data to all nodes. (or place them on a shared filesystem)
- Setup your python packages on all nodes.

View File

@ -145,7 +145,7 @@ Set the mixed precision type to use in the [`Accelerator`], and then use the [`~
```diff
+ accelerator = Accelerator(mixed_precision="fp16")
+ with accelerator.autocast():
loss = complex_loss_function(outputs, target):
loss = complex_loss_function(outputs, target)
```
## Save and load
@ -219,3 +219,6 @@ During training, you may want to save the current state of the model, optimizer,
To further customize where and how states are saved through [`~Accelerator.save_state`], use the [`~utils.ProjectConfiguration`] class. For example, if `automatic_checkpoint_naming` is enabled, each saved checkpoint is stored at `Accelerator.project_dir/checkpoints/checkpoint_{checkpoint_number}`.
Any other stateful items to be stored should be registered with the [`~Accelerator.register_for_checkpointing`] method so they can be saved and loaded. Every object passed to this method must expose a `load_state_dict` and a `state_dict` function.
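As a minimal sketch (the tracker class and checkpoint path below are purely illustrative, not part of the library):
```py
from accelerate import Accelerator

class RunningLossTracker:
    # any custom object exposing `state_dict` and `load_state_dict` can be registered
    def __init__(self):
        self.total, self.steps = 0.0, 0

    def state_dict(self):
        return {"total": self.total, "steps": self.steps}

    def load_state_dict(self, state):
        self.total, self.steps = state["total"], state["steps"]

accelerator = Accelerator()
tracker = RunningLossTracker()
accelerator.register_for_checkpointing(tracker)
accelerator.save_state("checkpoints/step_100")  # tracker state is stored next to the model/optimizer/RNG states
accelerator.load_state("checkpoints/step_100")  # and restored from the same folder
```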
> [!TIP]
> If you have [`torchdata>=0.8.0`](https://github.com/pytorch/data/tree/main) installed, you can additionally pass `use_stateful_dataloader=True` into your [`~utils.DataLoaderConfiguration`]. This extends Accelerate's DataLoader classes with a `load_state_dict` and `state_dict` function, and makes it so `Accelerator.save_state` and `Accelerator.load_state` also track how far into the training dataset it has read when persisting the model.

View File

@ -13,7 +13,7 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Launching Multi-GPU Training from a Jupyter Environment
# Launching distributed training from Jupyter Notebooks
This tutorial teaches you how to fine tune a computer vision model with 🤗 Accelerate from a Jupyter Notebook on a distributed system.
You will also learn how to setup a few requirements needed for ensuring your environment is configured properly, your data has been prepared properly, and finally how to launch training.
@ -26,13 +26,13 @@ You will also learn how to setup a few requirements needed for ensuring your env
## Configuring the Environment
Before any training can be performed, a 🤗 Accelerate config file must exist in the system. Usually this can be done by running the following in a terminal and answering the prompts:
Before any training can be performed, an Accelerate config file must exist in the system. Usually this can be done by running the following in a terminal and answering the prompts:
```bash
accelerate config
```
However, if general defaults are fine and you are *not* running on a TPU, 🤗Accelerate has a utility to quickly write your GPU configuration into a config file via [`utils.write_basic_config`].
However, if general defaults are fine and you are *not* running on a TPU, Accelerate has a utility to quickly write your GPU configuration into a config file via [`utils.write_basic_config`].
The following code will restart Jupyter after writing the configuration, as CUDA code was called to perform this.
@ -52,7 +52,7 @@ os._exit(00) # Restart the notebook
## Preparing the Dataset and Model
Next you should prepare your dataset. As mentioned at earlier, great care should be taken when preparing the `DataLoaders` and model to make sure that **nothing** is put on *any* GPU.
Next you should prepare your dataset. As mentioned earlier, great care should be taken when preparing the `DataLoaders` and model to make sure that **nothing** is put on *any* GPU.
If you do, it is recommended to put that specific code into a function and call that from within the notebook launcher interface, which will be shown later.
@ -327,7 +327,7 @@ def training_loop(mixed_precision="fp16", seed: int = 42, batch_size: int = 64):
# Build dataloaders
train_dataloader, eval_dataloader = get_dataloaders(batch_size)
# Instantiate the model (you build the model here so that the seed also controls new weight initaliziations)
# Instantiate the model (you build the model here so that the seed also controls new weight initializations)
model = create_model("resnet50d", pretrained=True, num_classes=len(label_to_id))
# Freeze the base model
@ -454,7 +454,7 @@ epoch 4: 94.71
And that's it!
Please note that [`notebook_launcher`] ignores the 🤗 Accelerate config file, to launch based on the config use:
Please note that [`notebook_launcher`] ignores the Accelerate config file, to launch based on the config use:
```bash
accelerate launch

View File

@ -15,10 +15,10 @@ rendered properly in your Markdown viewer.
# Overview
Welcome to the 🤗 Accelerate tutorials! These introductory guides will help catch you up to speed on working with 🤗 Accelerate.
Welcome to the Accelerate tutorials! These introductory guides will help catch you up to speed on working with Accelerate.
You'll learn how to modify your code to have it work with the API seamlessly, how to launch your script properly,
and more!
These tutorials assume some basic knowledge of Python and familiarity with the PyTorch framework.
If you have any questions about 🤗 Accelerate, feel free to join and ask the community on our [forum](https://discuss.huggingface.co/c/accelerate/18).
If you have any questions about Accelerate, feel free to join and ask the community on our [forum](https://discuss.huggingface.co/c/accelerate/18).

View File

@ -111,17 +111,17 @@ Input shapes:
For early stopping in distributed training, if each process has a specific stopping condition (e.g. validation loss), it may not be synchronized across all processes. As a result, a break can happen on process 0 but not on process 1 which will cause your code to hang indefinitely until a timeout occurs.
If you have early stopping conditionals, use the `set_breakpoint` and `check_breakpoint` methods to make sure all the processes
If you have early stopping conditionals, use the `set_trigger` and `check_trigger` methods to make sure all the processes
are ended correctly.
```py
# Assume `should_do_breakpoint` is a custom defined function that returns a conditional,
# and that conditional might be true only on process 1
if should_do_breakpoint(loss):
accelerator.set_breakpoint()
accelerator.set_trigger()
# Later in the training script when we need to check for the breakpoint
if accelerator.check_breakpoint():
if accelerator.check_trigger():
break
```
@ -142,9 +142,9 @@ hostnames for each of the nodes.
mpirun -f hostfile -n {number of nodes} -ppn 1 hostname
```
## CUDA Out-of-Memory
## Out-of-Memory
One of the most frustrating errors when it comes to running training scripts is hitting "CUDA Out-of-Memory". The entire script needs to be restarted and any progress is lost.
One of the most frustrating errors when it comes to running training scripts is hitting "Out-of-Memory" on devices like CUDA, XPU or CPU. The entire script needs to be restarted and any progress is lost.
To address this problem, Accelerate provides the [`find_executable_batch_size`] utility that is heavily based on [toma](https://github.com/BlackHC/toma).
This utility retries code that fails due to OOM (out-of-memory) conditions and automatically lowers batch sizes. For each OOM condition, the algorithm decreases the batch size by half and retries the code until it succeeds.
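A minimal sketch of that pattern (the starting batch size and the function names are illustrative):
```py
from accelerate import Accelerator
from accelerate.utils import find_executable_batch_size

def training_function():
    accelerator = Accelerator()

    @find_executable_batch_size(starting_batch_size=128)
    def inner_training_loop(batch_size):
        nonlocal accelerator       # objects holding device memory are (re)built inside the inner function
        accelerator.free_memory()  # release references left over from a previous, failed attempt
        # build the model, optimizer and dataloaders with `batch_size` here, then run the training loop
        ...

    inner_training_loop()
```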
@ -153,7 +153,7 @@ To use [`find_executable_batch_size`], restructure your training function to inc
<Tip warning={true}>
The inner function **must** take batch size as the first parameter, but we do not pass one to it when called. The wrapper will handles this for you. Any object (models, optimizers) that consumes CUDA memory and is passed to the [`Accelerator`] also **must** be declared inside the inner function.
The inner function **must** take batch size as the first parameter, but we do not pass one to it when called. The wrapper will handle this for you. Any object (models, optimizers) that consumes device memory and is passed to the [`Accelerator`] also **must** be declared inside the inner function.
</Tip>
@ -204,8 +204,8 @@ Vastly different GPUs within the same setup can lead to performance bottlenecks.
If none of the solutions and advice here helped resolve your issue, you can always reach out to the community and Accelerate team for help.
- Ask for help on the Hugging Face forums by posting your question in the [🤗 Accelerate category](https://discuss.huggingface.co/c/accelerate/18). Make sure to write a descriptive post with relevant context about your setup and reproducible code to maximize the likelihood that your problem is solved!
- Ask for help on the Hugging Face forums by posting your question in the [Accelerate category](https://discuss.huggingface.co/c/accelerate/18). Make sure to write a descriptive post with relevant context about your setup and reproducible code to maximize the likelihood that your problem is solved!
- Post a question on [Discord](http://hf.co/join/discord), and let the team and the community help you.
- Create an Issue on the 🤗 Accelerate [GitHub repository](https://github.com/huggingface/accelerate/issues) if you think you've found a bug related to the library. Include context regarding the bug and details about your distributed setup to help us better figure out what's wrong and how we can fix it.
- Create an Issue on the Accelerate [GitHub repository](https://github.com/huggingface/accelerate/issues) if you think you've found a bug related to the library. Include context regarding the bug and details about your distributed setup to help us better figure out what's wrong and how we can fix it.

View File

@ -13,7 +13,7 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Handling big models for inference
# Loading big models into memory
When loading a pre-trained model in PyTorch, the usual workflow looks like this:
@ -46,7 +46,7 @@ This API is quite new and still in its experimental stage. While we strive to pr
### Instantiating an empty model
The first tool 🤗 Accelerate introduces to help with big models is a context manager [`init_empty_weights`] that helps you initialize a model without using any RAM so that step 1 can be done on models of any size. Here is how it works:
The first tool Accelerate introduces to help with big models is a context manager [`init_empty_weights`] that helps you initialize a model without using any RAM so that step 1 can be done on models of any size. Here is how it works:
```py
from accelerate import init_empty_weights
@ -74,7 +74,7 @@ initializes an empty model with a bit more than 100B parameters. Behind the scen
It's possible your model is so big that even a single copy won't fit in RAM. That doesn't mean it can't be loaded: if you have one or several GPUs, this is more memory available to store your model. In this case, it's better if your checkpoint is split into several smaller files that we call checkpoint shards.
🤗 Accelerate will handle sharded checkpoints as long as you follow the following format: your checkpoint should be in a folder, with several files containing the partial state dicts, and there should be an index in the JSON format that contains a dictionary mapping parameter names to the file containing their weights. You can easily shard your model with [`~Accelerator.save_model`]. For instance, we could have a folder containing:
Accelerate will handle sharded checkpoints as long as you follow the following format: your checkpoint should be in a folder, with several files containing the partial state dicts, and there should be an index in the JSON format that contains a dictionary mapping parameter names to the file containing their weights. You can easily shard your model with [`~Accelerator.save_model`]. For instance, we could have a folder containing:
```bash
first_state_dict.bin
@ -97,9 +97,9 @@ and `first_state_dict.bin` containing the weights for `"linear1.weight"` and `"l
### Loading weights
The second tool 🤗 Accelerate introduces is a function [`load_checkpoint_and_dispatch`], that will allow you to load a checkpoint inside your empty model. This supports full checkpoints (a single file containing the whole state dict) as well as sharded checkpoints. It will also automatically dispatch those weights across the devices you have available (GPUs, CPU RAM), so if you are loading a sharded checkpoint, the maximum RAM usage will be the size of the biggest shard.
The second tool Accelerate introduces is a function [`load_checkpoint_and_dispatch`], that will allow you to load a checkpoint inside your empty model. This supports full checkpoints (a single file containing the whole state dict) as well as sharded checkpoints. It will also automatically dispatch those weights across the devices you have available (GPUs, CPU RAM), so if you are loading a sharded checkpoint, the maximum RAM usage will be the size of the biggest shard.
If you want to use big model inference with 🤗 Transformers models, check out this [documentation](https://huggingface.co/docs/transformers/main/en/main_classes/model#large-model-loading).
If you want to use big model inference with Transformers models, check out this [documentation](https://huggingface.co/docs/transformers/main/en/main_classes/model#large-model-loading).
Here is how we can use this to load the [GPT2-1.5B](https://huggingface.co/marcsun13/gpt2-xl-linear-sharded) model.
@ -145,7 +145,7 @@ model = load_checkpoint_and_dispatch(
)
```
By passing `device_map="auto"`, we tell 🤗 Accelerate to determine automatically where to put each layer of the model depending on the available resources:
By passing `device_map="auto"`, we tell Accelerate to determine automatically where to put each layer of the model depending on the available resources:
- first, we use the maximum space available on the GPU(s)
- if we still need space, we store the remaining weights on the CPU
- if there is not enough RAM, we store the remaining weights on the hard drive as memory-mapped tensors
@ -159,7 +159,7 @@ include a residual connection of some kind.
#### The `device_map`
You can see the `device_map` that 🤗 Accelerate picked by accessing the `hf_device_map` attribute of your model:
You can see the `device_map` that Accelerate picked by accessing the `hf_device_map` attribute of your model:
```py
model.hf_device_map
@ -210,7 +210,7 @@ outputs = model.generate(x1, max_new_tokens=10, do_sample=False)[0]
tokenizer.decode(outputs.cpu().squeeze())
```
Behind the scenes, 🤗 Accelerate added hooks to the model, so that:
Behind the scenes, Accelerate added hooks to the model, so that:
- at each layer, the inputs are put on the right device (so even if your model is spread across several GPUs, it works)
- for the weights offloaded on the CPU, they are put on a GPU just before the forward pass and cleaned up just after
- for the weights offloaded on the hard drive, they are loaded in RAM then put on a GPU just before the forward pass and cleaned up just after
@ -225,7 +225,7 @@ This way, your model can run for inference even if it doesn't fit on one of the
### Designing a device map
You can let 🤗 Accelerate handle the device map computation by setting `device_map` to one of the supported options (`"auto"`, `"balanced"`, `"balanced_low_0"`, `"sequential"`) or create one yourself if you want more control over where each layer should go.
You can let Accelerate handle the device map computation by setting `device_map` to one of the supported options (`"auto"`, `"balanced"`, `"balanced_low_0"`, `"sequential"`) or create one yourself if you want more control over where each layer should go.
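A hand-written map is simply a dictionary from module names to devices. A minimal sketch, assuming an empty-initialized model as above and using hypothetical module names (each value can be a GPU index, `"cpu"`, or `"disk"`):
```py
device_map = {
    "transformer.wte": 0,
    "transformer.h": 0,
    "transformer.ln_f": 1,
    "lm_head": "cpu",
}
model = load_checkpoint_and_dispatch(model, checkpoint="path/to/checkpoint", device_map=device_map)
```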
<Tip>

View File

@ -13,9 +13,9 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Deferring Executions
# Executing and deferring jobs
When you run your usual script, instructions are executed in order. Using 🤗 Accelerate to deploy your script on several
When you run your usual script, instructions are executed in order. Using Accelerate to deploy your script on several
GPUs at the same time introduces a complication: while each process executes all instructions in order, some may be
faster than others.
@ -127,4 +127,4 @@ for (x,y) in data_loader:
# Later in the training script when we need to check for the breakpoint
if accelerator.check_trigger():
break
```
```

View File

@ -0,0 +1,105 @@
<!--Copyright 2025 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# FSDP1 vs FSDP2
This guide explains the key differences between `FSDP1` and `FSDP2` and helps you migrate your existing code to use `FSDP2` with minimal changes.
## How is FSDP2 better than FSDP1?
First, let's look at how `FSDP1` and `FSDP2` work internally to see what sets them apart. This also helps us understand the limitations of `FSDP1` and how `FSDP2` solves them.
We'll be discussing a scenario where we have a single `Layer` that contains 3 `Linear` layers and is wrapped using `FSDP` to be sharded across 2 GPUs.
<div align="center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/accelerate/layer.png" alt="Layer">
</div>
### FSDP1
First, we have to understand the original `FSDP1` and the limitations it brings. It represents each `FSDP` module as a single `FlatParameter`: a 1D tensor that contains all of the module's parameters, which then gets sharded across ranks. That is, if you wrap the `Layer` with `FSDP1`, you get something like this:
<div align="center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/accelerate/fsdp1.png" alt="FSDP1">
</div>
You might notice a problem: the whole `Layer` gets flattened into a single `FlatParameter`, which then gets sharded across ranks. But if it's a single `FlatParameter` object, how do we store metadata? This is one of `FSDP1`'s limitations: properly storing per-parameter metadata such as `dtype`, `requires_grad`, etc. is not possible without some ugly hacks.
### FSDP2
This is why `FSDP2` was introduced. It doesn't use `FlatParameter`; instead, it uses `DTensor`, short for "Distributed Tensor". Each `DTensor` represents a vanilla `torch.Tensor` that has been sharded across ranks and carries metadata about the original `torch.Tensor`: how it's sharded, what its [placement type](https://pytorch.org/docs/stable/distributed.tensor.html#module-torch.distributed.tensor.placement_types) is, and so on. This is why the approach is called per-parameter sharding. The following figure shows the difference:
<div align="center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/accelerate/fsdp2.png" alt="FSDP2">
</div>
Each Parameter of the original `Layer` is sharded across the 0th dimension, and split between 2 GPUs. Now, each `Linear` layer is a separate `DTensor` and storing metadata per-parameter is possible and straightforward.
> [!TIP]
> In the image above, the tensors were sharded across the 1st dimension only for the sake of fitting the image on the screen; in reality, they are sharded across the 0th dimension, as stated above.
## What does FSDP2 offer?
`FSDP2` is a new and improved version of PyTorch's fully-sharded data parallel training API. Its main advantage is using `DTensor` to represent sharded parameters. Compared to `FSDP1`, it offers:
- Simpler internal implementation, where each `Parameter` is a separate `DTensor` (see the short sketch after this list)
- Enables simple partial parameter freezing because of the above, which makes methods such as [`LoRA`](https://arxiv.org/abs/2106.09685) work out of the box
- With `DTensor`, `FSDP2` supports mixing `fp8` and other parameter types in the same model out of the box
- Faster and simpler checkpointing without extra communication across ranks using `SHARDED_STATE_DICT` and [`torch.distributed.checkpoint`](https://pytorch.org/docs/stable/distributed.checkpoint.html); this way, each rank only saves its own shard and the corresponding metadata
- For loading, it uses a `state_dict` of the sharded model to directly load the sharded parameters
- Support for asynchronous checkpointing, where parameters are first copied to CPU memory; the main thread then continues training while another thread stores the parameters on disk
- Memory efficiency and deterministic memory usage: `FSDP2` no longer uses `recordStream` and relies on stream-to-stream synchronization instead (for more technical details see [this forum post](https://dev-discuss.pytorch.org/t/fsdp-cudacachingallocator-an-outsider-newb-perspective/1486) and [this issue](https://github.com/pytorch/pytorch/issues/114299))
- In the future, optimizations of the communication patterns via `torch.compile` are planned, further improving the performance and memory efficiency
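A minimal sketch of that first point, assuming the script is launched with a distributed launcher (e.g. `torchrun`) so a process group is already initialized, and using a toy module:
```python
import torch
from torch.distributed.fsdp import fully_shard
from torch.distributed.tensor import DTensor

model = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.Linear(16, 16))
fully_shard(model)  # shards the parameters across the default device mesh

for name, param in model.named_parameters():
    # every parameter is now a DTensor that keeps its own dtype/requires_grad plus sharding metadata
    assert isinstance(param, DTensor)
```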
## API Differences
We have already discussed the internal differences; now let's go over the differences that you, as a user, will need to know.
Here are the main changes in configuration options when using `FSDP2` through the `accelerate` CLI:
Previous (`FSDP1`) | New (`FSDP2`) | What Changed
-- | -- | --
`--fsdp_sharding_strategy` | `--fsdp_reshard_after_forward` | replaces `--fsdp_sharding_strategy`, changed to `true` (previously `FULL_SHARD`) or `false` (previously `SHARD_GRAD_OP`)
`--fsdp_backward_prefetch` | \*\***REMOVED**\*\* | `FSDP2` uses previous `BACKWARD_PRE` option by default, as only this allows communication and computation overlap
`--fsdp_forward_prefetch` | \*\***NOT YET IMPLEMENTED**\*\* | How to implement this is under active discussion, for now it is not supported in `FSDP2`
`--fsdp_sync_module_states` | \*\***REMOVED**\*\* | with `FSDP2`, this parameter becomes redundant
`--fsdp_cpu_ram_efficient_loading` | `--fsdp_cpu_ram_efficient_loading` | if `true`, `FSDP2` will similarly load the model only on rank 0 and then sync the parameters to the other ranks; this is the same behavior as `FSDP1`, but setting `--fsdp_sync_module_states` is no longer required
`--fsdp_state_dict_type` | `--fsdp_state_dict_type` | `LOCAL_STATE_DICT` becomes obsolete; with `FSDP2`, `SHARDED_STATE_DICT` is the default option, which incurs no extra communication and has each rank save its own shard. The other possible option is `FULL_STATE_DICT`, which incurs extra communication and a memory spike but saves the full model from rank 0
`--fsdp_use_orig_params` | \*\***REMOVED**\*\* | `FSDP2` uses the `DTensor` class under the hood, which means it *always* uses the original parameters by default
\*\***NEW**\*\* | `--fsdp_version` | `1` is the default option, to not break existing code, set to `2` to use `FSDP2`
For all other options that remain unchanged, see the [`FSDP` documentation](../usage_guides/fsdp.md).
## How to Switch to FSDP2
### If using Python code:
Simply set `fsdp_version=2` when creating your plugin and replace options according to the table above.
```python
from accelerate import FullyShardedDataParallelPlugin, Accelerator
fsdp_plugin = FullyShardedDataParallelPlugin(
fsdp_version=2
# other options...
)
accelerator = Accelerator(fsdp_plugin=fsdp_plugin)
```
### If using YAML config:
Use our conversion tool:
```bash
accelerate to-fsdp2 --config_file config.yaml --output_file new_config.yaml
```
This will automatically convert all FSDP1 settings to their FSDP2 equivalents. Use `--overwrite` to update the existing file instead of creating a new one.

View File

@ -13,15 +13,15 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Moving between FSDP And DeepSpeed
# FSDP vs DeepSpeed
🤗 Accelerate offers flexibilty of training frameworks, by integrating two extremely powerful tools for distributed training, namely [Pytorch FSDP](../usage_guides/fsdp.md) and [Microsoft DeepSpeed](../usage_guides/deepspeed.md). The aim of this tutorial is to draw parallels, as well as to outline potential differences, to empower the user to switch seamlessly between these two frameworks.
Accelerate offers flexibility of training frameworks by integrating two extremely powerful tools for distributed training, namely [Pytorch FSDP](../usage_guides/fsdp) and [Microsoft DeepSpeed](../usage_guides/deepspeed). The aim of this tutorial is to draw parallels, as well as to outline potential differences, to empower the user to switch seamlessly between these two frameworks.
<Tip>
To switch between the frameworks, we recommend launching code with 🤗 `accelerate launch`, passing in the correct config file with `--config_file`, or passing in the respective arguments directly for [FSDP and DeepSpeed](../package_reference/cli#accelerate-launch).
To switch between the frameworks, we recommend launching code with `accelerate launch`, passing in the correct config file with `--config_file`, or passing in the respective arguments directly for [FSDP and DeepSpeed](../package_reference/cli#accelerate-launch).
Example 🤗 Accelerate configurations can be found here for [DeepSpeed](../usage_guides/deepspeed#accelerate-deepspeed-plugin) and [FSDP](../usage_guides/fsdp#how-it-works-out-of-the-box), or in the [example zoo under "Launch Configurations"](../usage_guides/explore)
Example Accelerate configurations can be found here for [DeepSpeed](../usage_guides/deepspeed#accelerate-deepspeed-plugin) and [FSDP](../usage_guides/fsdp#how-it-works-out-of-the-box), or in the [example zoo under "Launch Configurations"](../usage_guides/explore)
</Tip>
@ -47,7 +47,7 @@ parameters summoning | FSDP<br>DeepSpeed | `--fsdp_use_orig_params`<br>None | `t
parameters syncing | FSDP<br>DeepSpeed | `--fsdp_sync_module_states`<br>None | `true` |
training | FSDP<br>DeepSpeed | None<br>`--gradient_accumulation_steps`<br>`--gradient_clipping` | <br>`auto`<br>`auto` | Transparent to user
For detailed descriptions of the above, refer to [🤗 `Accelerate` launch documentation](../package_reference/cli#accelerate-launch).
For detailed descriptions of the above, refer to [`Accelerate` launch documentation](../package_reference/cli#accelerate-launch).
<Tip>
@ -94,7 +94,7 @@ FSDP only allows *all-or-nothing* offload (i.e., either offload parameters, grad
### Prefetching
FSDP allows two prefetching configurations `--fsdp_forward_prefetch` and `--fsdp_backward_prefetch` to improve overlap of comms / computation at a cost of extra memory, see [FSDP documentation](https://pytorch.org/docs/stable/fsdp.html).
For DeepSpeed, the prefetching will be turned on when needed, and it turns on depending on certain hyper-params like `stage3_param_persistence_threshold`, `stage3_max_reuse_distance`, etc, [that can be configured for Zero3](https://www.deepspeed.ai/docs/config-json/#parameter-offloading); 🤗 `accelerate` may set these hyper-params automatically if you don't set those explicitly in the deepspeed config file.
For DeepSpeed, the prefetching will be turned on when needed, and it turns on depending on certain hyper-params like `stage3_param_persistence_threshold`, `stage3_max_reuse_distance`, etc, [that can be configured for Zero3](https://www.deepspeed.ai/docs/config-json/#parameter-offloading); `accelerate` may set these hyper-params automatically if you don't set those explicitly in the deepspeed config file.
<Tip>
@ -104,12 +104,12 @@ For DeepSpeed, the prefetching will be turned on when needed, and it turns on de
### Model Loading
While FSDP requires an explicit `--fsdp_cpu_ram_efficient_loading true` to activate efficient model loading, 🤗 `transformers` will activate a similar feature whenever DeepSpeed Zero3 is used.
While FSDP requires an explicit `--fsdp_cpu_ram_efficient_loading true` to activate efficient model loading, `transformers` will activate a similar feature whenever DeepSpeed Zero3 is used.
<Tip>
For FSDP, whenever setting `--fsdp_cpu_ram_efficient_loading true`, 🤗 `accelerate` will automatically set `sync_module_states` to true.
For RAM efficient loading the weights will be loaded only in a singe rank, and thus requires `sync_module_states` to broadcast weights to other ranks.
For FSDP, whenever setting `--fsdp_cpu_ram_efficient_loading true`, `accelerate` will automatically set `sync_module_states` to true.
For RAM efficient loading the weights will be loaded only in a single rank, and thus requires `sync_module_states` to broadcast weights to other ranks.
</Tip>
@ -125,7 +125,7 @@ FSDP requires an explicit `--fsdp_auto_wrap_policy` for the algorithm to decide
### Parameters Summoning
FSDP requires an explicit `--fsdp_use_orig_params` flag if using `torch.compile`, see [the pytorch documenation](https://pytorch.org/docs/stable/fsdp.html#module-torch.distributed.fsdp). For DeepSpeed this is transparent to the user.
FSDP requires an explicit `--fsdp_use_orig_params` flag if using `torch.compile`, see [the pytorch documentation](https://pytorch.org/docs/stable/fsdp.html#module-torch.distributed.fsdp). For DeepSpeed this is transparent to the user.
<Tip>
@ -147,7 +147,7 @@ Deepspeed requires explicit `--gradient_accumulation_steps` and `--gradient_clip
## On Differences in Data Precision Handling
To discuss the how data precision is handled in both FSDP and Deepspeed, it is instructive to first give an overview of how model parameters are handled in these frameworks. Before the model / optimizer parameters are distributed across GPUs, parameter preparation is involved to first "flatten" them to one-dimensional [`torch.Tensor`](https://pytorch.org/docs/stable/tensors.html#torch-tensor). The implementation of FSDP / DeepSpeed varies in the respect of the `dtype` in which these "flattened" parameters are stored, and there are ramifications with regards to how [`torch.Optimizer`](https://pytorch.org/docs/stable/optim.html#module-torch.optim) allocate their `dtype`s. The table below outlines the processes for both frameworks; the "Local" column indicates the process occurring at a per-gpu level, therefore any memory overheads by upcasting should be understood to be amortized by the number of gpus used.
To discuss how data precision is handled in both FSDP and Deepspeed, it is instructive to first give an overview of how model parameters are handled in these frameworks. Before the model / optimizer parameters are distributed across GPUs, parameter preparation is involved to first "flatten" them to one-dimensional [`torch.Tensor`](https://pytorch.org/docs/stable/tensors.html#torch-tensor). The implementation of FSDP / DeepSpeed varies in the respect of the `dtype` in which these "flattened" parameters are stored, and there are ramifications with regards to how [`torch.Optimizer`](https://pytorch.org/docs/stable/optim.html#module-torch.optim) allocate their `dtype`s. The table below outlines the processes for both frameworks; the "Local" column indicates the process occurring at a per-gpu level, therefore any memory overheads by upcasting should be understood to be amortized by the number of gpus used.
<Tip>
@ -166,7 +166,7 @@ Optimizer (Actual Step) | ✅ | FSDP<br>DeepSpeed | occurs in `torch_dtype` <br
<Tip warning={true}>
Therefore when using DeepSpeed a small number of GPUs, be aware of potentially significant memory overheads due to the upcasting during preperation.
Therefore when using DeepSpeed a small number of GPUs, be aware of potentially significant memory overheads due to the upcasting during preparation.
</Tip>
@ -189,4 +189,4 @@ Framework | Model Loading (`torch_dtype`) | Mixed Precision | Preparation (Local
--|--|--|--|--|--
FSDP | bf16 | default (none) | bf16 | bf16 | bf16
FSDP | bf16 | bf16 | fp32 | bf16 | fp32
DeepSpeed | bf16 | bf16 | fp32 | bf16 | fp32
DeepSpeed | bf16 | bf16 | fp32 | bf16 | fp32

View File

@ -13,7 +13,7 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Gradient Synchronization
# Gradient synchronization
PyTorch's distributed module operates by communicating back and forth between all of the GPUs in your system.
This communication takes time, and ensuring all processes know the states of each other happens at particular trigger points
@ -28,7 +28,7 @@ from torch.nn.parallel import DistributedDataParallel
model = nn.Linear(10, 10)
ddp_model = DistributedDataParallel(model)
```
In 🤗 Accelerate this conversion happens automatically when calling [`~Accelerator.prepare`] and passing in your model.
In Accelerate this conversion happens automatically when calling [`~Accelerator.prepare`] and passing in your model.
```diff
+ from accelerate import Accelerator
@ -90,7 +90,7 @@ for index, batch in enumerate(dataloader):
optimizer.step()
```
In 🤗 Accelerate to make this an API that can be called no matter the training device (though it may not do anything if you are not in a distributed system!),
In Accelerate to make this an API that can be called no matter the training device (though it may not do anything if you are not in a distributed system!),
`ddp_model.no_sync` gets replaced with [`~Accelerator.no_sync`] and operates the same way:
```diff

View File

@ -13,9 +13,9 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# 🤗 Accelerate's internal mechanisms
# Accelerate's internal mechanisms
Internally, 🤗 Accelerate works by first analyzing the environment in which the script is launched to determine which
Internally, Accelerate works by first analyzing the environment in which the script is launched to determine which
kind of distributed setup is used, how many different processes there are and which one the current script is in. All
that information is stored in the [`~AcceleratorState`].
@ -69,4 +69,6 @@ setting the same seed in the main random number generator in all processes.
</Tip>
For more details about the internals, see the [Internals page](package_reference/torch_wrappers).
If you have [`torchdata>=0.8.0`](https://github.com/pytorch/data/tree/main) installed, and you have passed `use_stateful_dataloader=True` into your [`~utils.DataLoaderConfiguration`], these classes will directly inherit from `StatefulDataLoader` instead, and maintain a `state_dict`.
For more details about the internals, see the [Internals page](../package_reference/torch_wrappers).
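For illustration, a minimal sketch of opting into this behavior (assuming `torchdata>=0.8.0` is installed; the tiny dataset below is just a stand-in):
```python
import torch
from torch.utils.data import DataLoader, TensorDataset

from accelerate import Accelerator
from accelerate.utils import DataLoaderConfiguration

# Opt into StatefulDataLoader-backed dataloaders (requires torchdata>=0.8.0)
dataloader_config = DataLoaderConfiguration(use_stateful_dataloader=True)
accelerator = Accelerator(dataloader_config=dataloader_config)

dataloader = DataLoader(TensorDataset(torch.randn(8, 4)), batch_size=2)
dataloader = accelerator.prepare(dataloader)

# The prepared dataloader now maintains a state_dict that can be checkpointed
state = dataloader.state_dict()
```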

View File

@ -13,12 +13,12 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Low Precision Training Methods
# Low precision training methods
The release of new kinds of hardware led to the emergence of new training paradigms that better utilize them. Currently, this is in the form of training
in 8-bit precision using packages such as [TransformersEngine](https://github.com/NVIDIA/TransformerEngine) (TE) or [MS-AMP](https://github.com/Azure/MS-AMP/tree/main).
For an introduction to the topics discussed today, we recommend reviewing the [low-precision usage guide](../usage_guides/low_precision_training.md) as this documentation will reference it regularly.
For an introduction to the topics discussed today, we recommend reviewing the [low-precision usage guide](../usage_guides/low_precision_training) as this documentation will reference it regularly.
## A Quick Chart
@ -36,7 +36,7 @@ MS-AMP O3 | FP8 | FP8 | FP8 | FP16 | FP8 | FP8+FP16
`TransformersEngine` is the first solution for training in 8-bit floating point. It works by using drop-in replacements for certain layers in a model, which utilize its FP8 engine to reduce the number of bits (such as 32 to 8) without degrading the final accuracy of the model.
Specifically, 🤗 Accelerate will find and replace the following layers with `TransformersEngine` versions:
Specifically, Accelerate will find and replace the following layers with `TransformersEngine` versions:
* `nn.LayerNorm` for `te.LayerNorm`
* `nn.Linear` for `te.Linear`
@ -50,7 +50,7 @@ The `TransformerEngine` can receive many different arguments that customize how
* `margin`: The margin to use for the gradient scaling.
* `interval`: The interval to use for how often the scaling factor is recomputed.
* `fp8_format`: The format to use for the FP8 recipe. Must be one of `E4M3` or `HYBRID`.
* `fp8_format`: The format to use for the FP8 recipe. Must be one of `HYBRID` or `E4M3`. (Generally `HYBRID` for training, `E4M3` for evaluation)
* `amax_history_len`: The length of the history to use for the scaling factor computation
* `amax_compute_algo`: The algorithm to use for the scaling factor computation. Must be one of `max` or `most_recent`.
* `override_linear_precision`: Whether or not to execute `fprop`, `dgrad`, and `wgrad` GEMMS in higher precision.
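As a rough sketch, these arguments can be passed through [`utils.FP8RecipeKwargs`]; the values below are purely illustrative, and the parameter names are assumed to mirror the list above:
```python
from accelerate import Accelerator
from accelerate.utils import FP8RecipeKwargs

# Illustrative values only; see the argument descriptions above
fp8_kwargs = FP8RecipeKwargs(
    backend="TE",
    fp8_format="HYBRID",      # generally HYBRID for training, E4M3 for evaluation
    amax_history_len=32,
    amax_compute_algo="max",
)
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=[fp8_kwargs])
```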
@ -67,7 +67,7 @@ MS-AMP takes a different approach to `TransformersEngine` by providing three dif
* The second optimization level (`O2`) improves upon this by also reducing the precision of the optimizer states. One is in FP8 while the other is in FP16. Generally it's been shown that this will only provide a net-gain of no degraded end accuracy, increased training speed, and reduced memory as now every state is either in FP16 or FP8.
* Finally, MS-AMP has a third optimization level (`O3`) which helps during DDP scenarios such as DeepSpeed. The weights of the model in memory are fully cast to FP8, and the master weights are now stored in FP16. This fully reduces memory by the highest factor as now not only is almost everything in FP8, only two states are left in FP16. Currently, only DeepSpeed versions up through 0.9.2 are supported, so this capability is not included in the 🤗 Accelerate integration
* Finally, MS-AMP has a third optimization level (`O3`) which helps during DDP scenarios such as DeepSpeed. The weights of the model in memory are fully cast to FP8, and the master weights are now stored in FP16. This fully reduces memory by the highest factor as now not only is almost everything in FP8, only two states are left in FP16. Currently, only DeepSpeed versions up through 0.9.2 are supported, so this capability is not included in the Accelerate integration
## Combining the two

View File

@ -13,7 +13,7 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Comparing performance between different device setups
# Comparing performance across distributed setups
Evaluating and comparing the performance from different setups can be quite tricky if you don't know what to look for.
For example, you cannot run the same script with the same batch size across TPU, multi-GPU, and single-GPU with Accelerate
@ -43,13 +43,13 @@ Why is this important? Under the hood this will set **5** different seed setting
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.cuda.manual_seed_all(seed) # or torch.xpu.manual_seed_all, etc
# ^^ safe to call this function even if cuda is not available
if is_torch_xla_available():
xm.set_rng_state(seed)
```
The random state, numpy's state, torch, torch's cuda state, and if TPUs are available torch_xla's cuda state.
The random state, numpy's state, torch, torch's device state, and if TPUs are available torch_xla's cuda state.
## Observed Batch Sizes

View File

@ -13,9 +13,9 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Training on TPUs with 🤗 Accelerate
# Training on TPUs
Training on TPUs can be slightly different from training on multi-gpu, even with 🤗 Accelerate. This guide aims to show you
Training on TPUs can be slightly different from training on multi-gpu, even with Accelerate. This guide aims to show you
where you should be careful and why, as well as the best practices in general.
## Training in a Notebook
@ -81,7 +81,7 @@ notebook_launcher(training_function)
<Tip>
The `notebook_launcher` will default to 8 processes if 🤗 Accelerate has been configured for a TPU
The `notebook_launcher` will default to 8 processes if Accelerate has been configured for a TPU
</Tip>
@ -128,10 +128,10 @@ And finally calling the training function with:
## Mixed Precision and Global Variables
As mentioned in the [mixed precision tutorial](../usage_guides/mixed_precision), 🤗 Accelerate supports fp16 and bf16, both of which can be used on TPUs.
As mentioned in the [mixed precision tutorial](../usage_guides/mixed_precision), Accelerate supports fp16 and bf16, both of which can be used on TPUs.
That being said, ideally `bf16` should be utilized as it is extremely efficient to use.
There are two "layers" when using `bf16` and 🤗 Accelerate on TPUs, at the base level and at the operation level.
There are two "layers" when using `bf16` and Accelerate on TPUs, at the base level and at the operation level.
At the base level, this is enabled when passing `mixed_precision="bf16"` to `Accelerator`, such as:
```python
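# NOTE: the original snippet is truncated in this diff view; a minimal sketch of
# the base-level setup described above would simply be:
from accelerate import Accelerator

accelerator = Accelerator(mixed_precision="bf16")
```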

Binary file not shown (image added, 105 KiB)

View File

@ -15,7 +15,7 @@ rendered properly in your Markdown viewer.
# Accelerate
🤗 Accelerate is a library that enables the same PyTorch code to be run across any distributed configuration by adding just four lines of code! In short, training and inference at scale made simple, efficient and adaptable.
Accelerate is a library that enables the same PyTorch code to be run across any distributed configuration by adding just four lines of code! In short, training and inference at scale made simple, efficient and adaptable.
```diff
+ from accelerate import Accelerator
@ -37,7 +37,7 @@ rendered properly in your Markdown viewer.
scheduler.step()
```
Built on `torch_xla` and `torch.distributed`, 🤗 Accelerate takes care of the heavy lifting, so you don't have to write any custom code to adapt to these platforms.
Built on `torch_xla` and `torch.distributed`, Accelerate takes care of the heavy lifting, so you don't have to write any custom code to adapt to these platforms.
Convert existing codebases to utilize [DeepSpeed](usage_guides/deepspeed), perform [fully sharded data parallelism](usage_guides/fsdp), and have automatic support for mixed-precision training!
<Tip>
@ -56,11 +56,11 @@ accelerate launch {my_script.py}
<div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5">
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./basic_tutorials/overview"
><div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Tutorials</div>
<p class="text-gray-700">Learn the basics and become familiar with using 🤗 Accelerate. Start here if you are using 🤗 Accelerate for the first time!</p>
<p class="text-gray-700">Learn the basics and become familiar with using Accelerate. Start here if you are using Accelerate for the first time!</p>
</a>
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./usage_guides/explore"
><div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">How-to guides</div>
<p class="text-gray-700">Practical guides to help you achieve a specific goal. Take a look at these guides to learn how to use 🤗 Accelerate to solve real-world problems.</p>
<p class="text-gray-700">Practical guides to help you achieve a specific goal. Take a look at these guides to learn how to use Accelerate to solve real-world problems.</p>
</a>
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./concept_guides/gradient_synchronization"
><div class="w-full text-center bg-gradient-to-br from-pink-400 to-pink-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Conceptual guides</div>
@ -68,7 +68,7 @@ accelerate launch {my_script.py}
</a>
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./package_reference/accelerator"
><div class="w-full text-center bg-gradient-to-br from-purple-400 to-purple-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Reference</div>
<p class="text-gray-700">Technical descriptions of how 🤗 Accelerate classes and methods work.</p>
<p class="text-gray-700">Technical descriptions of how Accelerate classes and methods work.</p>
</a>
</div>
</div>

View File

@ -15,33 +15,96 @@ rendered properly in your Markdown viewer.
# Working with large models
## Dispatching and Offloading Models
## Dispatch and offload
### init_empty_weights
[[autodoc]] big_modeling.init_empty_weights
### cpu_offload
[[autodoc]] big_modeling.cpu_offload
### cpu_offload_with_hook
[[autodoc]] big_modeling.cpu_offload_with_hook
### disk_offload
[[autodoc]] big_modeling.disk_offload
### dispatch_model
[[autodoc]] big_modeling.dispatch_model
### load_checkpoint_and_dispatch
[[autodoc]] big_modeling.load_checkpoint_and_dispatch
### load_checkpoint_in_model
[[autodoc]] big_modeling.load_checkpoint_in_model
### infer_auto_device_map
[[autodoc]] utils.infer_auto_device_map
## Model Hooks
## Hooks
### Hook Classes
### ModelHook
[[autodoc]] hooks.ModelHook
### AlignDevicesHook
[[autodoc]] hooks.AlignDevicesHook
### SequentialHook
[[autodoc]] hooks.SequentialHook
### Adding Hooks
### LayerwiseCastingHook
[[autodoc]] hooks.LayerwiseCastingHook
## Adding Hooks
### add_hook_to_module
[[autodoc]] hooks.add_hook_to_module
### attach_execution_device_hook
[[autodoc]] hooks.attach_execution_device_hook
### attach_align_device_hook
[[autodoc]] hooks.attach_align_device_hook
### attach_align_device_hook_on_blocks
[[autodoc]] hooks.attach_align_device_hook_on_blocks
### Removing Hooks
### attach_layerwise_casting_hooks
[[autodoc]] big_modeling.attach_layerwise_casting_hooks
## Removing Hooks
### remove_hook_from_module
[[autodoc]] hooks.remove_hook_from_module
[[autodoc]] hooks.remove_hook_from_submodules
### remove_hook_from_submodules
[[autodoc]] hooks.remove_hook_from_submodules
## Utilities
### has_offloaded_params
[[autodoc]] utils.has_offloaded_params
### align_module_device
[[autodoc]] utils.align_module_device

View File

@ -139,16 +139,17 @@ values. They can also be passed in manually.
* `--cpu` (`bool`) -- Whether or not to force the training on the CPU.
* `--multi_gpu` (`bool`) -- Whether or not this should launch a distributed GPU training.
* `--tpu` (`bool`) -- Whether or not this should launch a TPU training.
* `--ipex` (`bool`) -- Whether or not this should launch an Intel Pytorch Extension (IPEX) training.
* `--ipex` (`bool`) -- Whether or not this should launch an Intel Pytorch Extension (IPEX) training. **This argument is deprecated, will be removed in Accelerate v1.10**
**Resource Selection Arguments**:
The following arguments are useful for fine-tuning how available hardware should be used
* `--mixed_precision {no,fp16,bf16}` (`str`) -- Whether or not to use mixed precision training. Choose between FP16 and BF16 (bfloat16) training. BF16 training is only supported on Nvidia Ampere GPUs and PyTorch 1.10 or later.
* `--mixed_precision {no,fp16,bf16,fp8}` (`str`) -- Whether or not to use mixed precision training. Choose between FP16, BF16 (bfloat16), and FP8 training. BF16 training is only supported on Nvidia Ampere GPUs and PyTorch 1.10 or later.
* `--num_processes NUM_PROCESSES` (`int`) -- The total number of processes to be launched in parallel.
* `--num_machines NUM_MACHINES` (`int`) -- The total number of machines used in this training.
* `--num_cpu_threads_per_process NUM_CPU_THREADS_PER_PROCESS` (`int`) -- The number of CPU threads per process. Can be tuned for optimal performance.
* `--enable_cpu_affinity` (`bool`) -- Whether or not CPU affinity and balancing should be enabled. Currently only supported on NVIDIA hardware.
**Training Paradigm Arguments**:
@ -157,27 +158,34 @@ The following arguments are useful for selecting which training paradigm to use.
* `--use_deepspeed` (`bool`) -- Whether or not to use DeepSpeed for training.
* `--use_fsdp` (`bool`) -- Whether or not to use FullyShardedDataParallel for training.
* `--use_megatron_lm` (`bool`) -- Whether or not to use Megatron-LM for training.
* `--use_xpu` (`bool`) -- Whether to use IPEX plugin to speed up training on XPU specifically.
* `--use_xpu` (`bool`) -- Whether to use IPEX plugin to speed up training on XPU specifically. **This argument is deprecated and ignored, will be removed in Accelerate v1.10**
**Distributed GPU Arguments**:
The following arguments are only useful when `multi_gpu` is passed or multi-gpu training is configured through `accelerate config`:
* `--gpu_ids` (`str`) -- What GPUs (by id) should be used for training on this machine as a comma-seperated list
* `--gpu_ids` (`str`) -- What GPUs (by id) should be used for training on this machine as a comma-separated list
* `--same_network` (`bool`) -- Whether all machines used for multinode training exist on the same local network.
* `--machine_rank MACHINE_RANK` (`int`) -- The rank of the machine on which this script is launched.
* `--main_process_ip MAIN_PROCESS_IP` (`str`) -- The IP address of the machine of rank 0.
* `--main_process_port MAIN_PROCESS_PORT` (`int`) -- The port to use to communicate with the machine of rank 0.
* `--rdzv_backend` (`str`) -- The rendezvous method to use, such as "static" or "c10d"
* `--machine_rank` (`int`) -- The rank of the machine on which this script is launched.
* `--main_process_ip` (`str`) -- The IP address of the machine of rank 0.
* `--main_process_port` (`int`) -- The port to use to communicate with the machine of rank 0.
* `-t`, `--tee` (`str`) -- Tee std streams into a log file and also to console.
* `--log_dir` (`str`) -- Base directory to use for log files when using torchrun/torch.distributed.run as launcher. Use with --tee to redirect std streams info log files.
* `--role` (`str`) -- User-defined role for the workers.
* `--rdzv_backend` (`str`) -- The rendezvous method to use, such as 'static' (the default) or 'c10d'
* `--rdzv_conf` (`str`) -- Additional rendezvous configuration (<key1>=<value1>,<key2>=<value2>,...).
* `--max_restarts` (`int`) -- Maximum number of worker group restarts before failing.
* `--monitor_interval` (`float`) -- Interval, in seconds, to monitor the state of workers.
* `--monitor_interval` (`int`) -- Interval, in seconds, to monitor the state of workers.
**TPU Arguments**:
The following arguments are only useful when `tpu` is passed or TPU training is configured through `accelerate config`:
* `--main_training_function MAIN_TRAINING_FUNCTION` (`str`) -- The name of the main function to be executed in your script.
* `--tpu_cluster` (`bool`) -- Whether to use a GCP TPU pod for training.
* `--tpu_use_sudo` (`bool`) -- Whether to use `sudo` when running the TPU training script in each pod.
* `--vm` (`str`) -- List of single Compute VM instance names. If not provided we assume usage of instance groups. For TPU pods.
* `--env` (`str`) -- List of environment variables to set on the Compute VM instances. For TPU pods.
* `--main_training_function` (`str`) -- The name of the main function to be executed in your script (only for TPU training).
* `--downcast_bf16` (`bool`) -- Whether when using bf16 precision on TPUs if both float and double tensors are cast to bfloat16 or if double tensors remain as float32.
**DeepSpeed Arguments**:
@ -188,14 +196,16 @@ The following arguments are only useful when `use_deepspeed` is passed or `deeps
* `--zero_stage` (`int`) -- DeepSpeed's ZeRO optimization stage.
* `--offload_optimizer_device` (`str`) -- Decides where (none|cpu|nvme) to offload optimizer states.
* `--offload_param_device` (`str`) -- Decides where (none|cpu|nvme) to offload parameters.
* `--offload_optimizer_nvme_path` (`str`) -- Decides Nvme Path to offload optimizer states.
* `--gradient_accumulation_steps` (`int`) -- Number of gradient accumulation steps used in your training script.
* `--gradient_clipping` (`float`) -- Gradient clipping value used in your training script.
* `--zero3_init_flag` (`str`) -- Decides Whether (true|false) to enable `deepspeed.zero.Init` for constructing massive models. Only applicable with DeepSpeed ZeRO Stage-3.
* `--zero3_save_16bit_model` (`str`) -- Decides Whether (true|false) to save 16-bit model weights when using ZeRO Stage-3. Only applicable with DeepSpeed ZeRO Stage-3.
* `--deepspeed_hostfile` (`str`) -- DeepSpeed hostfile for configuring multi-node compute resources.
* `--deepspeed_exclusion_filter` (`str`) -- DeepSpeed exclusion filter string when using mutli-node setup.
* `--deepspeed_inclusion_filter` (`str`) -- DeepSpeed inclusion filter string when using mutli-node setup.
* `--deepspeed_exclusion_filter` (`str`) -- DeepSpeed exclusion filter string when using multi-node setup.
* `--deepspeed_inclusion_filter` (`str`) -- DeepSpeed inclusion filter string when using multi-node setup.
* `--deepspeed_multinode_launcher` (`str`) -- DeepSpeed multi-node launcher to use.
* `--deepspeed_moe_layer_cls_names` (`str`) -- comma-separated list of transformer MoE layer class names (case-sensitive) to wrap, e.g, `MixtralSparseMoeBlock` `Qwen2MoeSparseMoeBlock`, `JetMoEAttention,JetMoEBlock`
**Fully Sharded Data Parallelism Arguments**:
@ -210,8 +220,9 @@ The following arguments are only useful when `use_fsdp` is passed or Fully Shard
* `--fsdp_state_dict_type` (`str`) -- FSDP's state dict type.
* `--fsdp_forward_prefetch` (`str`) -- FSDP forward prefetch.
* `--fsdp_use_orig_params` (`str`) -- If True, allows non-uniform `requires_grad` mixed in a FSDP unit.
* `--fsdp_cpu_ram_efficient_loading` (`str`) - If true, only the first process loads the pretrained model checkpoint while all other processes have empty weights. When using this, `--fsdp_sync_module_states` needs to be True.
* `--fsdp_sync_module_states` (`str`) - If true, each individually wrapped FSDP unit will broadcast module parameters from rank 0.
* `--fsdp_cpu_ram_efficient_loading` (`str`) -- If true, only the first process loads the pretrained model checkpoint while all other processes have empty weights. When using this, `--fsdp_sync_module_states` needs to be True.
* `--fsdp_sync_module_states` (`str`) -- If true, each individually wrapped FSDP unit will broadcast module parameters from rank 0.
* `--fsdp_activation_checkpointing` (`bool`) -- Decides Whether intermediate activations are freed during the forward pass, and a checkpoint is left as a placeholder
**Megatron-LM Arguments**:
@ -225,6 +236,18 @@ The following arguments are only useful when `use_megatron_lm` is passed or Mega
* `--megatron_lm_use_distributed_optimizer` (``) -- Decides Whether (true|false) to use distributed optimizer which shards optimizer state and gradients across Data Parallel (DP) ranks.
* `--megatron_lm_gradient_clipping` (``) -- Megatron-LM's gradient clipping value based on global L2 Norm (0 to disable).
**FP8 Arguments**:
* `--fp8_backend` (`str`) -- Choose a backend to train with FP8 (`te` or `msamp`)
* `--fp8_use_autocast_during_eval` (`bool`) -- Whether to use FP8 autocast during eval mode (useful only when `--fp8_backend=te` is passed). Generally better metrics are found when this is not passed.
* `--fp8_margin` (`int`) -- The margin to use for the gradient scaling (useful only when `--fp8_backend=te` is passed).
* `--fp8_interval` (`int`) -- The interval to use for how often the scaling factor is recomputed (useful only when `--fp8_backend=te` is passed).
* `--fp8_format` (`str`) -- The format to use for the FP8 recipe (useful only when `--fp8_backend=te` is passed).
* `--fp8_amax_history_len` (`int`) -- The length of the history to use for the scaling factor computation (useful only when `--fp8_backend=te` is passed).
* `--fp8_amax_compute_algo` (`str`) -- The algorithm to use for the scaling factor computation. (useful only when `--fp8_backend=te` is passed).
* `--fp8_override_linear_precision` (`Tuple[bool, bool, bool]`) -- Whether or not to execute `fprop`, `dgrad`, and `wgrad` GEMMS in higher precision.
* `--fp8_opt_level` (`str`) -- What level of 8-bit collective communication should be used with MS-AMP (useful only when `--fp8_backend=msamp` is passed)
**AWS SageMaker Arguments**:
The following arguments are only useful when training in SageMaker

View File

@ -13,16 +13,32 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Utilities for DeepSpeed
# DeepSpeed utilities
## DeepSpeedPlugin
## get_active_deepspeed_plugin
[[autodoc]] utils.get_active_deepspeed_plugin
[[autodoc]] utils.DeepSpeedPlugin
[[autodoc]] utils.deepspeed.DummyOptim
[[autodoc]] utils.deepspeed.DummyScheduler
## DeepSpeedEngineWrapper
[[autodoc]] utils.deepspeed.DeepSpeedEngineWrapper
## DeepSpeedOptimizerWrapper
[[autodoc]] utils.deepspeed.DeepSpeedOptimizerWrapper
## DeepSpeedSchedulerWrapper
[[autodoc]] utils.deepspeed.DeepSpeedSchedulerWrapper
## DummyOptim
[[autodoc]] utils.deepspeed.DummyOptim
## DummyScheduler

View File

@ -0,0 +1,38 @@
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# FP8
Below are functions and classes relative to the underlying FP8 implementation
## FP8RecipeKwargs
[[autodoc]] utils.FP8RecipeKwargs
## convert_model
[[autodoc]] utils.convert_model
## has_transformer_engine_layers
[[autodoc]] utils.has_transformer_engine_layers
## contextual_fp8_autocast
[[autodoc]] utils.contextual_fp8_autocast
## apply_fp8_autowrap
[[autodoc]] utils.apply_fp8_autowrap

View File

@ -13,8 +13,34 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Utilities for Fully Sharded Data Parallelism
# Fully Sharded Data Parallel utilities
## enable_fsdp_ram_efficient_loading
[[autodoc]] utils.enable_fsdp_ram_efficient_loading
## disable_fsdp_ram_efficient_loading
[[autodoc]] utils.disable_fsdp_ram_efficient_loading
## merge_fsdp_weights
[[autodoc]] utils.merge_fsdp_weights
[[autodoc]] utils.FullyShardedDataParallelPlugin
## FullyShardedDataParallelPlugin
[[autodoc]] utils.FullyShardedDataParallelPlugin
## fsdp2_load_full_state_dict
[[autodoc]] utils.fsdp2_load_full_state_dict
## fsdp2_switch_optimizer_parameters
[[autodoc]] utils.fsdp2_switch_optimizer_parameters
## fsdp2_prepare_model
[[autodoc]] utils.fsdp2_prepare_model
## fsdp2_prepare_auto_wrap_policy

View File

@ -13,8 +13,10 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# The inference API
# Pipeline parallelism
These docs refer to the [PiPPy](https://github.com/PyTorch/PiPPy) integration.
Accelerate supports pipeline parallelism for large-scale training with the PyTorch [torch.distributed.pipelining](https://pytorch.org/docs/stable/distributed.pipelining.html) API.
## prepare_pippy
[[autodoc]] inference.prepare_pippy

View File

@ -13,7 +13,7 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Kwargs Handlers
# Kwargs handlers
The following objects can be passed to the main [`Accelerator`] to customize how some PyTorch objects
related to distributed training or mixed precision are created.
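For example, a minimal sketch of passing one of these handlers (here `DistributedDataParallelKwargs`) to the [`Accelerator`]:
```python
from accelerate import Accelerator, DistributedDataParallelKwargs

# Customize how the underlying DistributedDataParallel wrapper is constructed
ddp_kwargs = DistributedDataParallelKwargs(find_unused_parameters=True)
accelerator = Accelerator(kwargs_handlers=[ddp_kwargs])
```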
@ -30,6 +30,10 @@ related to distributed training or mixed precision are created.
[[autodoc]] utils.FP8RecipeKwargs
## ProfileKwargs
[[autodoc]] utils.ProfileKwargs
## GradScalerKwargs
[[autodoc]] GradScalerKwargs

View File

@ -17,6 +17,10 @@ rendered properly in your Markdown viewer.
Functions for launching training on distributed processes.
## notebook_launcher
[[autodoc]] accelerate.notebook_launcher
## debug_launcher
[[autodoc]] accelerate.debug_launcher

View File

@ -13,9 +13,9 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Logging with Accelerate
# Logging
Refer to the [Troubleshooting guide](../usage_guides/troubleshooting#logging) or to the example below to learn
how to use 🤗 Accelerate's logger.
how to use Accelerate's logger.
[[autodoc]] logging.get_logger
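A minimal sketch of the usage referred to above:
```python
from accelerate.logging import get_logger

logger = get_logger(__name__)
# By default only the main process emits the record; set main_process_only=False
# to log on every process
logger.info("Starting training", main_process_only=True)
```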

View File

@ -13,20 +13,36 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Utilities for Megatron-LM
# Megatron-LM utilities
## MegatronLMPlugin
[[autodoc]] utils.MegatronLMPlugin
## MegatronLMDummyScheduler
[[autodoc]] utils.MegatronLMDummyScheduler
## MegatronLMDummyDataLoader
[[autodoc]] utils.MegatronLMDummyDataLoader
## AbstractTrainStep
[[autodoc]] utils.AbstractTrainStep
## GPTTrainStep
[[autodoc]] utils.GPTTrainStep
## BertTrainStep
[[autodoc]] utils.BertTrainStep
## T5TrainStep
[[autodoc]] utils.T5TrainStep
## avg_losses_across_data_parallel_group
[[autodoc]] utils.avg_losses_across_data_parallel_group

View File

@ -21,8 +21,14 @@ instances share the same state, which is initialized on the first instantiation.
These classes are immutable and store information about certain configurations or
states.
## PartialState
[[autodoc]] state.PartialState
## AcceleratorState
[[autodoc]] state.AcceleratorState
## GradientState
[[autodoc]] state.GradientState

View File

@ -13,25 +13,36 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Wrapper classes for torch Dataloaders, Optimizers, and Schedulers
# DataLoaders, Optimizers, and Schedulers
The internal classes Accelerate uses to prepare objects for distributed training
when calling [`~Accelerator.prepare`].
## Datasets and DataLoaders
## DataLoader utilities
[[autodoc]] data_loader.prepare_data_loader
[[autodoc]] data_loader.skip_first_batches
## BatchSamplerShard
[[autodoc]] data_loader.BatchSamplerShard
## IterableDatasetShard
[[autodoc]] data_loader.IterableDatasetShard
## DataLoaderShard
[[autodoc]] data_loader.DataLoaderShard
## DataLoaderDispatcher
[[autodoc]] data_loader.DataLoaderDispatcher
## Optimizers
## AcceleratedOptimizer
[[autodoc]] optimizer.AcceleratedOptimizer
## Schedulers
## AcceleratedScheduler
[[autodoc]] scheduler.AcceleratedScheduler

View File

@ -13,23 +13,48 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Experiment Tracking
# Experiment Trackers
## The Base Tracker Class
## GeneralTracker
[[autodoc]] tracking.GeneralTracker
## Integrated Trackers
## TensorBoardTracker
[[autodoc]] tracking.TensorBoardTracker
- __init__
## WandBTracker
[[autodoc]] tracking.WandBTracker
- __init__
## Trackio
[[autodoc]] tracking.TrackioTracker
- __init__
## CometMLTracker
[[autodoc]] tracking.CometMLTracker
- __init__
## AimTracker
[[autodoc]] tracking.AimTracker
- __init__
## MLflowTracker
[[autodoc]] tracking.MLflowTracker
- __init__
## ClearMLTracker
[[autodoc]] tracking.ClearMLTracker
- __init__
## SwanLabTracker
[[autodoc]] tracking.SwanLabTracker
- __init__

View File

@ -13,7 +13,7 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Helpful Utilities
# Utility functions and classes
Below are a variety of utility functions that 🤗 Accelerate provides, broken down by use-case.
@ -126,6 +126,10 @@ These include data operations that mimic the same `torch` ops but can be used on
[[autodoc]] utils.gather_object
[[autodoc]] utils.get_grad_scaler
[[autodoc]] utils.get_mixed_precision_context_manager
[[autodoc]] utils.listify
[[autodoc]] utils.pad_across_processes
@ -170,6 +174,8 @@ When setting up 🤗 Accelerate for the first time, rather than running `acceler
[[autodoc]] utils.environment.override_numa_affinity
[[autodoc]] utils.purge_accelerate_environment
## Memory
[[autodoc]] utils.find_executable_batch_size
@ -202,8 +208,7 @@ These utilities relate to interacting with PyTorch models
[[autodoc]] utils.set_module_tensor_to_device
[[autodoc]] utils.shard_checkpoint
[[autodoc]] utils.get_module_children_bottom_up
## Parallel
@ -213,6 +218,8 @@ These include general utilities that should be used when working in parallel.
[[autodoc]] utils.save
[[autodoc]] utils.load
[[autodoc]] utils.wait_for_everyone

View File

@ -53,6 +53,8 @@ accelerate launch path_to_script.py --args_for_the_script
To learn more, check out the [Launch distributed code](basic_tutorials/launch) tutorial for more information about launching your scripts.
We also have a [configuration zoo](https://github.com/huggingface/accelerate/blob/main/examples/config_yaml_templates) which showcases a number of premade **minimal** example configurations for a variety of setups you can run.
## Adapt training code
The next main feature of Accelerate is the [`Accelerator`] class which adapts your PyTorch code to run on different distributed setups.
@ -166,13 +168,14 @@ with init_empty_weights():
The [`~accelerate.load_checkpoint_and_dispatch`] function loads full or sharded checkpoints into the empty model, and automatically distributes weights across all available devices.
The `device_map` parameter determines where to place each model layer, and specifiying `"auto"` places them on the GPU first, then the CPU, and finally the hard drive as memory-mapped tensors if there's still not enough memory. Use the `no_split_module_classes` parameter to indicate which modules shouldn't be split across devices (typically those with a residual connection).
The `device_map` parameter determines where to place each model layer, and specifying `"auto"` places them on the GPU first, then the CPU, and finally the hard drive as memory-mapped tensors if there's still not enough memory. Use the `no_split_module_classes` parameter to indicate which modules shouldn't be split across devices (typically those with a residual connection).
```py
from accelerate import load_checkpoint_and_dispatch
model_checkpoint = "your-local-model-folder"
model = load_checkpoint_and_dispatch(
model, checkpoint="mistralai/Mixtral-8x7B-Instruct-v0.1", device_map="auto", no_split_module_classes=['Block']
model, checkpoint=model_checkpoint, device_map="auto", no_split_module_classes=['Block']
)
```

View File

@ -13,15 +13,15 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Handling big models for inference
# Big Model Inference
One of the biggest advancements 🤗 Accelerate provides is the concept of [large model inference](../concept_guides/big_model_inference) wherein you can perform *inference* on models that cannot fully fit on your graphics card.
One of the biggest advancements Accelerate provides is [Big Model Inference](../concept_guides/big_model_inference), which allows you to perform inference with models that don't fully fit on your graphics card.
This tutorial will be broken down into two parts showcasing how to use both 🤗 Accelerate and 🤗 Transformers (a higher API-level) to make use of this idea.
This tutorial will show you how to use Big Model Inference in Accelerate and the Hugging Face ecosystem.
## Using 🤗 Accelerate
## Accelerate
For these tutorials, we'll assume a typical workflow for loading your model in such that:
A typical workflow for loading a PyTorch model is shown below. `ModelClass` is a model that exceeds the GPU memory of your device (mps or cuda or xpu).
```py
import torch
@ -31,9 +31,7 @@ state_dict = torch.load(checkpoint_file)
my_model.load_state_dict(state_dict)
```
Note that here we assume that `ModelClass` is a model that takes up more video-card memory than what can fit on your device (be it `mps` or `cuda`).
The first step is to init an empty skeleton of the model which won't take up any RAM using the [`init_empty_weights`] context manager:
With Big Model Inference, the first step is to init an empty skeleton of the model with the `init_empty_weights` context manager. This doesn't require any memory because `my_model` is "parameterless".
```py
from accelerate import init_empty_weights
@ -41,22 +39,14 @@ with init_empty_weights():
my_model = ModelClass(...)
```
With this `my_model` currently is "parameterless", hence leaving the smaller footprint than what one would normally get loading this onto the CPU directly.
Next, the weights are loaded into the model for inference.
Next we need to load in the weights to our model so we can perform inference.
The [`load_checkpoint_and_dispatch`] method loads a checkpoint inside your empty model and dispatches the weights for each layer across all available devices, starting with the fastest devices (GPU, MPS, XPU, NPU, MLU, SDAA, MUSA) first before moving to the slower ones (CPU and hard drive).
For this we will use [`load_checkpoint_and_dispatch`], which as the name implies will load a checkpoint inside your empty model and dispatch the weights for each layer across all the devices you have available (GPU/MPS and CPU RAM).
Setting `device_map="auto"` automatically fills all available space on the GPU(s) first, then the CPU, and finally, the hard drive (the absolute slowest option) if there is still not enough memory.
To determine how this `dispatch` can be performed, generally specifying `device_map="auto"` will be good enough as 🤗 Accelerate
will attempt to fill all the space in your GPU(s), then loading them to the CPU, and finally if there is not enough RAM it will be loaded to the disk (the absolute slowest option).
<Tip>
For more details on designing your own device map, see this section of the [concept guide](../concept_guides/big_model_inference#designing-a-device-map)
</Tip>
See an example below:
> [!TIP]
> Refer to the [Designing a device map](../concept_guides/big_model_inference#designing-a-device-map) guide for more details on how to design your own device map.
```py
from accelerate import load_checkpoint_and_dispatch
@ -66,42 +56,29 @@ model = load_checkpoint_and_dispatch(
)
```
<Tip>
If there are certain “chunks” of layers that shouldn't be split, pass them to `no_split_module_classes` (see [here](../concept_guides/big_model_inference#loading-weights) for more details).
If there are certain "chunks" of layers that shouldn't be split, you can pass them in as `no_split_module_classes`. Read more about it [here](../concept_guides/big_model_inference#loading-weights)
A model's weights can also be sharded into multiple checkpoints to save memory, such as when the `state_dict` doesn't fit in memory (see [here](../concept_guides/big_model_inference#sharded-checkpoints) for more details).
</Tip>
<Tip>
Also to save on memory (such as if the `state_dict` will not fit in RAM), a model's weights can be divided and split into multiple checkpoint files. Read more about it [here](../concept_guides/big_model_inference#sharded-checkpoints)
</Tip>
Now that the model is dispatched fully, you can perform inference as normal with the model:
Now that the model is fully dispatched, you can perform inference.
```py
input = torch.randn(2,3)
input = input.to("cuda")
device_type = next(iter(model.parameters())).device.type
input = input.to(device_type)
output = model(input)
```
What will happen now is each time the input gets passed through a layer, it will be sent from the CPU to the GPU (or disk to CPU to GPU), the output is calculated, and then the layer is pulled back off the GPU going back down the line. While this adds some overhead to the inference being performed, through this method it is possible to run **any size model** on your system, as long as the largest layer is capable of fitting on your GPU.
Each time an input is passed through a layer, it is sent from the CPU to the GPU (or disk to CPU to GPU), the output is calculated, and the layer is removed from the GPU going back down the line. While this adds some overhead to inference, it enables you to run any size model on your system, as long as the largest layer fits on your GPU.
<Tip>
Multiple GPUs, or "model parallelism", can be utilized but only one GPU will be active at any given moment. This forces the GPU to wait for the previous GPU to send it the output. You should launch your script normally with Python instead of other tools like torchrun and accelerate launch.
Multiple GPUs can be utilized, however this is considered "model parallelism" and as a result only one GPU will be active at a given moment, waiting for the prior one to send it the output. You should launch your script normally with `python`
and not need `torchrun`, `accelerate launch`, etc.
> [!TIP]
> You may also be interested in *pipeline parallelism*, which utilizes all available GPUs at once, instead of only having one GPU active at a time. This approach is less flexible though. For more details, refer to the [Memory-efficient pipeline parallelism](./distributed_inference#memory-efficient-pipeline-parallelism-experimental) guide.
</Tip>
<Youtube id="MWCSGj9jEAo"/>
For a visual representation of this, check out the animation below:
<Youtube id="MWCSGj9jEAo" />
### Complete Example
Below is the full example showcasing what we performed above:
Take a look at a full example of Big Model Inference below.
```py
import torch
@ -115,17 +92,18 @@ model = load_checkpoint_and_dispatch(
)
input = torch.randn(2,3)
input = input.to("cuda")
device_type = next(iter(model.parameters())).device.type
input = input.to(device_type)
output = model(input)
```
## Using 🤗 Transformers, 🤗 Diffusers, and other 🤗 Open Source Libraries
## Hugging Face ecosystem
Libraries that support 🤗 Accelerate big model inference include all of the earlier logic in their `from_pretrained` constructors.
Other libraries in the Hugging Face ecosystem, like Transformers or Diffusers, support Big Model Inference in their [`~transformers.PreTrainedModel.from_pretrained`] constructors.
These operate by specifying a string representing the model to download from the [🤗 Hub](https://hf.co/models) and then denoting `device_map="auto"` along with a few extra parameters.
You just need to add `device_map="auto"` in [`~transformers.PreTrainedModel.from_pretrained`] to enable Big Model Inference.
As a brief example, we will look at using `transformers` and loading in Big Science's T0pp model.
For example, load Big Science's T0pp 11 billion parameter model with Big Model Inference.
```py
from transformers import AutoModelForSeq2SeqLM
@ -133,9 +111,7 @@ from transformers import AutoModelForSeq2SeqLM
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", device_map="auto")
```
After loading the model in, the initial steps from before to prepare a model have all been done and the model is fully
ready to make use of all the resources in your machine. Through these constructors, you can also save *more* memory by
specifying the precision the model is loaded into as well, through the `torch_dtype` parameter, such as:
After loading the model, the empty init and smart dispatch steps from before are executed and the model is fully ready to make use of all the resources in your machine. Through these constructors, you can also save more memory by specifying the `torch_dtype` parameter to load a model in a lower precision.
```py
from transformers import AutoModelForSeq2SeqLM
@ -143,8 +119,6 @@ from transformers import AutoModelForSeq2SeqLM
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", device_map="auto", torch_dtype=torch.float16)
```
To learn more about this, check out the 🤗 Transformers documentation available [here](https://huggingface.co/docs/transformers/main/en/main_classes/model#large-model-loading).
## Next steps
## Where to go from here
For a much more detailed look at big model inference, be sure to check out the [Conceptual Guide on it](../concept_guides/big_model_inference)
For a more detailed explanation of Big Model Inference, make sure to check out the [conceptual guide](../concept_guides/big_model_inference)!

View File

@ -15,8 +15,8 @@ rendered properly in your Markdown viewer.
# Checkpointing
When training a PyTorch model with 🤗 Accelerate, you may often want to save and continue a state of training. Doing so requires
saving and loading the model, optimizer, RNG generators, and the GradScaler. Inside 🤗 Accelerate are two convenience functions to achieve this quickly:
When training a PyTorch model with Accelerate, you may often want to save and continue a state of training. Doing so requires
saving and loading the model, optimizer, RNG generators, and the GradScaler. Inside Accelerate are two convenience functions to achieve this quickly:
- Use [`~Accelerator.save_state`] for saving everything mentioned above to a folder location
- Use [`~Accelerator.load_state`] for loading everything stored from an earlier `save_state`
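As a minimal sketch of the two calls (the checkpoint directory name and the tiny model below are just illustrative):
```python
import torch
from torch.utils.data import DataLoader, TensorDataset

from accelerate import Accelerator

accelerator = Accelerator()
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
dataloader = DataLoader(TensorDataset(torch.randn(8, 4)), batch_size=2)
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

# Save the model, optimizer, RNG generators, and GradScaler to a folder
accelerator.save_state("my_checkpoint")

# ...and later restore that exact training state from the same folder
accelerator.load_state("my_checkpoint")
```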

View File

@ -0,0 +1,76 @@
# Compilation
## Overview
PyTorch 2.0 introduced `torch.compile`, a powerful feature that makes PyTorch code run faster by JIT-compiling it into optimized kernels. Key features of `torch.compile` include:
- **Performance Improvement**: Significantly speeds up model execution by optimizing the computation graph.
- **Ease of Use**: Requires minimal code changes to implement, making it highly accessible.
- **Compatibility**: Works seamlessly with existing PyTorch code and models.
When used with Accelerate, `torch.compile` integrates smoothly into distributed training workflows, allowing you to benefit from both distributed execution and compilation optimizations simultaneously.
The first execution of compiled code typically takes longer as it includes the compilation time, but subsequent runs are significantly faster. For optimal performance in different scenarios, `torch.compile` offers various modes like `"default"`, `"reduce-overhead"` (which uses CUDA graphs to further reduce overhead), and `"max-autotune"` (which performs extensive autotuning to find the best kernels for your model).
## Using `torch.compile` with Accelerate
Accelerate provides `TorchDynamoPlugin` for easy and seamless integration of `torch.compile` into your training scripts.
```python
from accelerate import Accelerator
from accelerate.utils import TorchDynamoPlugin
# Configure the compilation backend
dynamo_plugin = TorchDynamoPlugin(
backend="inductor", # Options: "inductor", "aot_eager", "aot_nvfuser", etc.
mode="default", # Options: "default", "reduce-overhead", "max-autotune"
fullgraph=True,
dynamic=False
)
# Initialize accelerator with the plugin
accelerator = Accelerator(dynamo_plugin=dynamo_plugin)
# This will apply torch.compile to your model
model = accelerator.prepare(model)
```
It is compatible with all other features and plugins of Accelerate, including mixed precision, distributed training (DDP, FSDP, Deepspeed), etc.
## Regional Compilation
Instead of trying to compile the whole model, which usually has a big problem space for optimization, regional compilation targets repeated blocks of the same class and compiles them sequentially to hit the compiler's cache. For example, in `GPT2LMHeadModel`, the repeated block/class is `GPT2Block`, and it can be accessed as `model.transformer.h[0]`. The rest of the model (e.g. `model.lm_head`) is compiled separately.
This reduces the compilation overhead / cold start of models like LLMs and Transformers in general.
See <https://pytorch.org/tutorials/recipes/regional_compilation.html> for more details.
### How to Use Regional Compilation
It can be enabled by setting `use_regional_compilation=True` in the `TorchDynamoPlugin` configuration:
```python
# Configure the compilation backend
dynamo_plugin = TorchDynamoPlugin(
use_regional_compilation=True,
... # other parameters
)
# Initialize accelerator with the plugin
accelerator = Accelerator(dynamo_plugin=dynamo_plugin)
# This will apply compile_regions to your model
model = accelerator.prepare(model)
```
You could also use the `accelerate.utils.compile_regions` utility directly the same way you would use `torch.compile`.
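For instance, a minimal sketch of that direct usage (assuming `compile_regions` accepts the same keyword arguments you would pass to `torch.compile`):
```python
import torch

from accelerate.utils import compile_regions

model = torch.nn.Sequential(*[torch.nn.Linear(16, 16) for _ in range(4)])

# Compiles each repeated block individually (and the rest of the model separately),
# mirroring what use_regional_compilation=True does through TorchDynamoPlugin
compiled_model = compile_regions(model, mode="reduce-overhead")
```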
### Benefits of Regional Compilation
We have conducted extensive benchmarks comparing full compilation and regional compilation using the `torch.compile` feature in PyTorch. The full results are available in the [accelerate repository](https://github.com/huggingface/accelerate/tree/main/benchmarks/torch.compile/regional_compilation). The key findings from our benchmarks are:
1. **Comparable Performance**: Regional compilation delivers performance speedups similar to full compilation, especially for larger models.
2. **Faster Compilation**: Regional compilation significantly reduces the time taken to compile models, making it a more efficient choice for deployment.
3. **Batch Size Impact**: The performance difference between compilation strategies diminishes with larger batch sizes, indicating that the overhead of compilation is less impactful in those scenarios.
4. **Model Size Consideration**: The benefits of regional compilation are more pronounced in larger models, where the compilation time savings can be substantial.
5. **Practical Application**: For real-world applications, regional compilation is a practical choice for optimizing training cold start times, especially when working with large models.
## Conclusion
Both full and regional compilation can significantly speed up your models. Regional compilation offers a practical balance between compilation time and runtime performance, especially for training large models with substantial batch sizes.

View File

@ -0,0 +1,337 @@
<!--
Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# DDP Communication Hooks
Distributed Data Parallel (DDP) communication hooks provide a generic interface to control how gradients are communicated across workers by overriding the vanilla allreduce in `DistributedDataParallel`. A few built-in communication hooks are provided, and users can easily apply any of these hooks to optimize communication.
- **FP16 Compression Hook**: Compresses gradients by casting them to half-precision floating-point format (`torch.float16`), reducing communication overhead.
- **BF16 Compression Hook**: Similar to FP16, but uses the Brain Floating Point format (`torch.bfloat16`), which can be more efficient on certain hardware.
- **PowerSGD Hook**: An advanced gradient compression algorithm that provides high compression rates and can accelerate bandwidth-bound distributed training.
In this tutorial, you will see how to quickly set up DDP communication hooks to optimize gradient communication in distributed training with the utilities provided in Accelerate, which can be as simple as adding just one new line of code!
## FP16 Compression Hook
<hfoptions id="fp16">
<hfoption id="PyTorch">
```python
import torch
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.distributed.algorithms.ddp_comm_hooks import default_hooks
from accelerate.test_utils.testing import get_backend

device_type, _, _ = get_backend()
device_id = getattr(torch, device_type, torch.cuda).current_device()

class MyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(10, 10)

    def forward(self, x):
        return self.layer(x)

model = MyModel()
model = DDP(model, device_ids=[device_id])
model.register_comm_hook(state=None, hook=default_hooks.fp16_compress_hook)

# Training loop
for data, targets in data_loader:
    outputs = model(data)
    loss = criterion(outputs, targets)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```
</hfoption>
<hfoption id="Accelerate">
```python
from accelerate import Accelerator, DDPCommunicationHookType, DistributedDataParallelKwargs
import torch
from torch.utils.data import DataLoader

class MyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(10, 10)

    def forward(self, x):
        return self.layer(x)

# DDP Communication Hook setup
ddp_kwargs = DistributedDataParallelKwargs(comm_hook=DDPCommunicationHookType.FP16)
accelerator = Accelerator(kwargs_handlers=[ddp_kwargs])

model = MyModel()
optimizer = torch.optim.Adam(model.parameters())
data_loader = DataLoader(dataset, batch_size=16)

model, optimizer, data_loader = accelerator.prepare(model, optimizer, data_loader)

# Training loop
for data, targets in data_loader:
    outputs = model(data)
    loss = criterion(outputs, targets)
    accelerator.backward(loss)
    optimizer.step()
    optimizer.zero_grad()
```
</hfoption>
</hfoptions>
## BF16 Compression Hook
<Tip warning={true}>
BF16 Compression Hook API is experimental, and it requires NCCL version later than 2.9.6.
</Tip>
<hfoptions id="bf16">
<hfoption id="PyTorch">
```python
import torch
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.distributed.algorithms.ddp_comm_hooks import default_hooks
from accelerate.test_utils.testing import get_backend

device_type, _, _ = get_backend()
device_id = getattr(torch, device_type, torch.cuda).current_device()

class MyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(10, 10)

    def forward(self, x):
        return self.layer(x)

model = MyModel()
model = DDP(model, device_ids=[device_id])
model.register_comm_hook(state=None, hook=default_hooks.bf16_compress_hook)

# Training loop
for data, targets in data_loader:
    outputs = model(data)
    loss = criterion(outputs, targets)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```
</hfoption>
<hfoption id="Accelerate">
```python
from accelerate import Accelerator, DDPCommunicationHookType, DistributedDataParallelKwargs
import torch
from torch.utils.data import DataLoader

class MyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(10, 10)

    def forward(self, x):
        return self.layer(x)

# DDP Communication Hook setup
ddp_kwargs = DistributedDataParallelKwargs(comm_hook=DDPCommunicationHookType.BF16)
accelerator = Accelerator(kwargs_handlers=[ddp_kwargs])

model = MyModel()
optimizer = torch.optim.Adam(model.parameters())
data_loader = DataLoader(dataset, batch_size=16)

model, optimizer, data_loader = accelerator.prepare(model, optimizer, data_loader)

# Training loop
for data, targets in data_loader:
    outputs = model(data)
    loss = criterion(outputs, targets)
    accelerator.backward(loss)
    optimizer.step()
    optimizer.zero_grad()
```
</hfoption>
</hfoptions>
## PowerSGD Hook
<Tip warning={true}>
PowerSGD typically requires extra memory of the same size as the model's gradients to enable error feedback, which can compensate for biased compressed communication and improve accuracy.
</Tip>
<hfoptions id="powerSGD">
<hfoption id="PyTorch">
```python
import torch
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.distributed.algorithms.ddp_comm_hooks import powerSGD_hook
from accelerate.test_utils.testing import get_backend

device_type, _, _ = get_backend()
device_id = getattr(torch, device_type, torch.cuda).current_device()

class MyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(10, 10)

    def forward(self, x):
        return self.layer(x)

model = MyModel()
model = DDP(model, device_ids=[device_id])
state = powerSGD_hook.PowerSGDState(process_group=None)
model.register_comm_hook(state=state, hook=powerSGD_hook.powerSGD_hook)

# Training loop
for data, targets in data_loader:
    outputs = model(data)
    loss = criterion(outputs, targets)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```
</hfoption>
<hfoption id="Accelerate">
```python
from accelerate import Accelerator, DDPCommunicationHookType, DistributedDataParallelKwargs
import torch
from torch.utils.data import DataLoader

class MyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(10, 10)

    def forward(self, x):
        return self.layer(x)

# DDP Communication Hook setup
ddp_kwargs = DistributedDataParallelKwargs(comm_hook=DDPCommunicationHookType.POWER_SGD)
accelerator = Accelerator(kwargs_handlers=[ddp_kwargs])

model = MyModel()
optimizer = torch.optim.Adam(model.parameters())
data_loader = DataLoader(dataset, batch_size=16)

model, optimizer, data_loader = accelerator.prepare(model, optimizer, data_loader)

# Training loop
for data, targets in data_loader:
    outputs = model(data)
    loss = criterion(outputs, targets)
    accelerator.backward(loss)
    optimizer.step()
    optimizer.zero_grad()
```
</hfoption>
</hfoptions>
## DDP Communication Hooks utilities
There are two additional utilities for supporting optional functionalities with the communication hooks.
### comm_wrapper
`comm_wrapper` is an option to wrap a communication hook with additional functionality. For example, it can be used to combine FP16 compression with other communication strategies. Currently supported wrappers are `no`, `fp16`, and `bf16`.
```python
from accelerate import Accelerator, DDPCommunicationHookType, DistributedDataParallelKwargs
import torch
from torch.utils.data import DataLoader

class MyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(10, 10)

    def forward(self, x):
        return self.layer(x)

# DDP Communication Hook setup
ddp_kwargs = DistributedDataParallelKwargs(
    comm_hook=DDPCommunicationHookType.POWER_SGD,
    comm_wrapper=DDPCommunicationHookType.FP16
)
accelerator = Accelerator(kwargs_handlers=[ddp_kwargs])

model = MyModel()
optimizer = torch.optim.Adam(model.parameters())
data_loader = DataLoader(dataset, batch_size=16)

model, optimizer, data_loader = accelerator.prepare(model, optimizer, data_loader)

# Training loop
for data, targets in data_loader:
    outputs = model(data)
    loss = criterion(outputs, targets)
    accelerator.backward(loss)
    optimizer.step()
    optimizer.zero_grad()
```
### comm_state_option
`comm_state_option` allows you to pass additional state information required by certain communication hooks. This is particularly useful for stateful hooks like `PowerSGD`, which require maintaining hyperparameters and internal states across training steps. Below is an example showcasing the use of `comm_state_option` with the `PowerSGD` hook.
```python
from accelerate import Accelerator, DDPCommunicationHookType, DistributedDataParallelKwargs
import torch
from torch.utils.data import DataLoader

class MyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(10, 10)

    def forward(self, x):
        return self.layer(x)

# DDP Communication Hook setup
ddp_kwargs = DistributedDataParallelKwargs(
    comm_hook=DDPCommunicationHookType.POWER_SGD,
    comm_state_option={"matrix_approximation_rank": 2}
)
accelerator = Accelerator(kwargs_handlers=[ddp_kwargs])

model = MyModel()
optimizer = torch.optim.Adam(model.parameters())
data_loader = DataLoader(dataset, batch_size=16)

model, optimizer, data_loader = accelerator.prepare(model, optimizer, data_loader)

# Training loop
for data, targets in data_loader:
    outputs = model(data)
    loss = criterion(outputs, targets)
    accelerator.backward(loss)
    optimizer.step()
    optimizer.zero_grad()
```
For more advanced usage and additional hooks, refer to the [PyTorch DDP Communication Hooks documentation](https://pytorch.org/docs/stable/ddp_comm_hooks.html).
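As an example of such advanced usage, PyTorch also lets you register a hand-written hook directly on the DDP-wrapped module. Below is a minimal sketch of a custom averaging allreduce hook, following the hook signature described in the PyTorch documentation (`model` is the DDP-wrapped module from the examples above; the hook name is illustrative):
```python
import torch
import torch.distributed as dist

def averaging_allreduce_hook(state, bucket):
    # Divide by the world size first, then allreduce (sum) asynchronously and
    # return a future that resolves to the bucket's flattened gradient tensor.
    tensor = bucket.buffer()
    tensor.div_(dist.get_world_size())
    fut = dist.all_reduce(tensor, async_op=True).get_future()
    return fut.then(lambda f: f.value()[0])

model.register_comm_hook(state=None, hook=averaging_allreduce_hook)
```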

View File

@ -15,7 +15,7 @@ rendered properly in your Markdown viewer.
# DeepSpeed
[DeepSpeed](https://github.com/microsoft/DeepSpeed) implements everything described in the [ZeRO paper](https://arxiv.org/abs/1910.02054). Some of the salient optimizations are:
[DeepSpeed](https://github.com/deepspeedai/DeepSpeed) implements everything described in the [ZeRO paper](https://arxiv.org/abs/1910.02054). Some of the salient optimizations are:
1. Optimizer state partitioning (ZeRO stage 1)
2. Gradient partitioning (ZeRO stage 2)
@ -33,7 +33,7 @@ DeepSpeed ZeRO-2 is primarily used only for training, as its features are of no
DeepSpeed ZeRO-3 can be used for inference as well since it allows huge models to be loaded on multiple GPUs, which
won't be possible on a single GPU.
🤗 Accelerate integrates [DeepSpeed](https://github.com/microsoft/DeepSpeed) via 2 options:
Accelerate integrates [DeepSpeed](https://github.com/deepspeedai/DeepSpeed) via 2 options:
1. Integration of the DeepSpeed features via `deepspeed config file` specification in `accelerate config` . You just supply your custom config file or use our template. Most of
this document is focused on this feature. This supports all the core features of DeepSpeed and gives user a lot of flexibility.
@ -45,7 +45,7 @@ won't be possible on a single GPU.
Training:
1. 🤗 Accelerate integrates all features of DeepSpeed ZeRO. This includes all the ZeRO stages 1, 2 and 3 as well as ZeRO-Offload, ZeRO-Infinity (which can offload to disk/NVMe) and ZeRO++.
1. Accelerate integrates all features of DeepSpeed ZeRO. This includes all the ZeRO stages 1, 2 and 3 as well as ZeRO-Offload, ZeRO-Infinity (which can offload to disk/NVMe) and ZeRO++.
Below is a short description of Data Parallelism using ZeRO - Zero Redundancy Optimizer along with diagram from this [blog post](https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/)
![ZeRO Data Parallelism](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-zero.png)
@ -74,7 +74,7 @@ Inference:
## How it works?
**Pre-Requisites**: Install DeepSpeed version >=0.6.5. Please refer to the [DeepSpeed Installation details](https://github.com/microsoft/DeepSpeed#installation)
**Pre-Requisites**: Install DeepSpeed version >=0.6.5. Please refer to the [DeepSpeed Installation details](https://github.com/deepspeedai/DeepSpeed#installation)
for more information.
We will first look at easy to use integration via `accelerate config`.
@ -167,7 +167,7 @@ Currently, `Accelerate` supports following config through the CLI:
`deepspeed_hostfile`: DeepSpeed hostfile for configuring multi-node compute resources.
`deepspeed_exclusion_filter`: DeepSpeed exclusion filter string when using multi-node setup.
`deepspeed_inclusion_filter`: DeepSpeed inclusion filter string when using multi-node setup.
`deepspeed_multinode_launcher`: DeepSpeed multi-node launcher to use. If unspecified, will default to `pdsh`.
`deepspeed_multinode_launcher`: DeepSpeed multi-node launcher to use, e.g. `pdsh`, `standard`, `openmpi`, `mvapich`, `mpich`, `slurm`, `nossh` (requires DeepSpeed >= 0.14.5). If unspecified, will default to `pdsh`.
`deepspeed_config_file`: path to the DeepSpeed config file in `json` format. See the next section for more details on this.
```
To be able to tweak more options, you will need to use a DeepSpeed config file.
@ -194,7 +194,7 @@ For instance, here is how you would run the NLP example `examples/by_feature/dee
```bash
compute_environment: LOCAL_MACHINE
deepspeed_config:
deepspeed_config_file: /home/ubuntu/accelerate/examples/configs/deepspeed_config_templates/zero_stage2_config.json
deepspeed_config_file: /home/ubuntu/accelerate/examples/deepspeed_config_templates/zero_stage2_config.json
zero3_init_flag: true
distributed_type: DEEPSPEED
fsdp_config: {}
@ -275,7 +275,7 @@ accelerate launch examples/by_feature/deepspeed_with_config_support.py \
```bash
compute_environment: LOCAL_MACHINE
deepspeed_config:
deepspeed_config_file: /home/ubuntu/accelerate/examples/configs/deepspeed_config_templates/zero_stage3_offload_config.json
deepspeed_config_file: /home/ubuntu/accelerate/examples/deepspeed_config_templates/zero_stage3_offload_config.json
zero3_init_flag: true
distributed_type: DEEPSPEED
fsdp_config: {}
@ -710,11 +710,18 @@ model, eval_dataloader = accelerator.prepare(model, eval_dataloader)
2. Current integration doesn't support `mpu`, limiting the tensor parallelism which is supported in Megatron-LM.
3. Current integration doesn't support multiple models.
## Multi-node DeepSpeed
DeepSpeed supports multi-node inference and training over a variety of different launchers. You can specify a different launcher by setting the `deepspeed_multinode_launcher` config in the CLI or in the DeepSpeed config file.
Currently, accelerate supports passing configuration for the following DeepSpeed multi-node launchers: `pdsh` (default), `standard`, `openmpi`, `mvapich`, `mpich`, `slurm`, `nossh` (requires DeepSpeed >= 0.14.5).
Please read the [DeepSpeed documentation](https://www.deepspeed.ai/getting-started/#resource-configuration-multi-node) for more information on the different launchers. By default, DeepSpeed will attempt to use passwordless SSH from the main machine node to the other nodes to perform the launcher command. In this configuration, the accelerate launch command only needs to be run on the main node. If using the `nossh` launcher, you will need to run the accelerate launch command on every node using copied configuration.
## DeepSpeed Resources
The documentation for the internals related to deepspeed can be found [here](../package_reference/deepspeed).
- [Project's github](https://github.com/microsoft/deepspeed)
- [Project's github](https://github.com/deepspeedai/DeepSpeed)
- [Usage docs](https://www.deepspeed.ai/getting-started/)
- [API docs](https://deepspeed.readthedocs.io/en/latest/index.html)
- [Blog posts](https://www.microsoft.com/en-us/research/search/?q=deepspeed)
@ -727,12 +734,12 @@ Papers:
- [ZeRO++: Extremely Efficient Collective Communication for Giant Model Training](https://arxiv.org/abs/2306.10209)
Finally, please, remember that 🤗 `Accelerate` only integrates DeepSpeed, therefore if you
have any problems or questions with regards to DeepSpeed usage, please, file an issue with [DeepSpeed GitHub](https://github.com/microsoft/DeepSpeed/issues).
Finally, please, remember that `Accelerate` only integrates DeepSpeed, therefore if you
have any problems or questions with regards to DeepSpeed usage, please, file an issue with [DeepSpeed GitHub](https://github.com/deepspeedai/DeepSpeed/issues).
<Tip>
For those interested in the similarities and differences between FSDP and DeepSpeed, please check out the [concept guide here](../concept_guides/fsdp_and_deepspeed.md)!
For those interested in the similarities and differences between FSDP and DeepSpeed, please check out the [concept guide here](../concept_guides/fsdp_and_deepspeed)!
</Tip>

View File

@ -0,0 +1,246 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Using multiple models with DeepSpeed
<Tip warning={true}>
This guide assumes that you have read and understood the [DeepSpeed usage guide](./deepspeed.md).
</Tip>
Running multiple models with Accelerate and DeepSpeed is useful for:
* Knowledge distillation
* Post-training techniques like RLHF (see the [TRL](https://github.com/huggingface/trl) library for more examples)
* Training multiple models at once
Currently, Accelerate has a **very experimental API** to help you use multiple models.
This tutorial will focus on two common use cases:
1. Knowledge distillation, where a smaller student model is trained to mimic a larger, better-performing teacher. If the student model fits on a single GPU, we can use ZeRO-2 for training and ZeRO-3 to shard the teacher for inference. This is significantly faster than using ZeRO-3 for both models.
2. Training multiple *disjoint* models at once.
## Knowledge distillation
Knowledge distillation is a good example of using multiple models, but only training one of them.
Normally, you would use a single [`utils.DeepSpeedPlugin`] for both models. However, in this case, there are two separate configurations. Accelerate allows you to create and use multiple plugins **if and only if** they are in a `dict` so that you can reference and enable the proper plugin when needed.
```python
from accelerate.utils import DeepSpeedPlugin
zero2_plugin = DeepSpeedPlugin(hf_ds_config="zero2_config.json")
zero3_plugin = DeepSpeedPlugin(hf_ds_config="zero3_config.json")
deepspeed_plugins = {"student": zero2_plugin, "teacher": zero3_plugin}
```
The `zero2_config.json` should be configured for full training (so specify `scheduler` and `optimizer` if you are not utilizing your own), while `zero3_config.json` should only be configured for the inference model, as shown in the example below.
```json
{
    "bf16": {
        "enabled": "auto"
    },
    "zero_optimization": {
        "stage": 3,
        "overlap_comm": true,
        "reduce_bucket_size": "auto",
        "stage3_prefetch_bucket_size": "auto",
        "stage3_param_persistence_threshold": "auto",
        "stage3_max_live_parameters": "auto",
        "stage3_max_reuse_distance": "auto"
    },
    "train_micro_batch_size_per_gpu": 1
}
```
An example `zero2_config.json` configuration is shown below.
```json
{
    "bf16": {
        "enabled": "auto"
    },
    "optimizer": {
        "type": "AdamW",
        "params": {
            "lr": "auto",
            "weight_decay": "auto",
            "torch_adam": true,
            "adam_w_mode": true
        }
    },
    "scheduler": {
        "type": "WarmupLR",
        "params": {
            "warmup_min_lr": "auto",
            "warmup_max_lr": "auto",
            "warmup_num_steps": "auto"
        }
    },
    "zero_optimization": {
        "stage": 2,
        "offload_optimizer": {
            "device": "cpu",
            "pin_memory": true
        }
    },
    "gradient_accumulation_steps": 1,
    "gradient_clipping": "auto",
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto"
}
```
<Tip>
DeepSpeed will raise an error if `train_micro_batch_size_per_gpu` isn't specified, even if this particular model isn't being trained.
</Tip>
From here, create a single [`Accelerator`] and pass in both configurations.
```python
from accelerate import Accelerator
accelerator = Accelerator(deepspeed_plugins=deepspeed_plugins)
```
Now let's see how to use them.
### Student model
By default, Accelerate sets the first item in the `dict` as the default or enabled plugin (`"student"` plugin). Verify this by using the [`utils.deepspeed.get_active_deepspeed_plugin`] function to see which plugin is enabled.
```python
from accelerate.utils.deepspeed import get_active_deepspeed_plugin

active_plugin = get_active_deepspeed_plugin(accelerator.state)
assert active_plugin is deepspeed_plugins["student"]
```
[`AcceleratorState`] also keeps the active DeepSpeed plugin saved in `state.deepspeed_plugin`.
```python
assert active_plugin is accelerator.deepspeed_plugin
```
Since `student` is the currently active plugin, let's go ahead and prepare the model, optimizer, and scheduler.
```python
student_model, optimizer, scheduler = ...
student_model, optimizer, scheduler, train_dataloader = accelerator.prepare(student_model, optimizer, scheduler, train_dataloader)
```
Now it's time to deal with the teacher model.
### Teacher model
First, you need to specify in [`Accelerator`] that the `zero3_config.json` configuration should be used.
```python
accelerator.state.select_deepspeed_plugin("teacher")
```
This disables the `"student"` plugin and enables the `"teacher"` plugin instead. The
DeepSpeed stateful config inside of Transformers is updated, and it changes which plugin configuration gets called when using
`deepspeed.initialize()`. This allows you to use the automatic `deepspeed.zero.Init` context manager integration Transformers provides.
```python
teacher_model = AutoModel.from_pretrained(...)
teacher_model = accelerator.prepare(teacher_model)
```
Otherwise, you should manually initialize the model with `deepspeed.zero.Init`.
```python
import deepspeed

with deepspeed.zero.Init(accelerator.deepspeed_plugin.config):
    model = MyModel(...)
```
### Training
From here, your training loop can be whatever you like, as long as `teacher_model` is never trained.
```python
teacher_model.eval()
student_model.train()

for batch in train_dataloader:
    with torch.no_grad():
        output_teacher = teacher_model(**batch)
    output_student = student_model(**batch)

    # Combine the losses or modify it in some way
    loss = output_teacher.loss + output_student.loss
    accelerator.backward(loss)
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```
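If you want a more conventional distillation objective than simply adding the two losses, a common choice is a temperature-scaled KL divergence between the student and teacher logits. Below is a minimal sketch (the helper name, temperature, and weighting are illustrative, not part of the Accelerate API):
```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both distributions with the temperature, then measure how far the
    # student's predictions are from the teacher's with a KL divergence.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # The temperature**2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * temperature**2
```
You could then replace the combined loss above with something like `loss = output_student.loss + distillation_loss(output_student.logits, output_teacher.logits)`.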
## Train multiple disjoint models
Training multiple models is a more complicated scenario.
In its current state, we assume each model is **completely disjoint** from the other during training.
This scenario still requires two [`utils.DeepSpeedPlugin`]'s to be made. However, you also need a second [`Accelerator`], since different `deepspeed` engines are being called at different times. A single [`Accelerator`] can only carry one instance at a time.
Since the [`state.AcceleratorState`] is a stateful object though, it is already aware of both [`utils.DeepSpeedPlugin`]'s available. You can just instantiate a second [`Accelerator`] with no extra arguments.
```python
first_accelerator = Accelerator(deepspeed_plugins=deepspeed_plugins)
second_accelerator = Accelerator()
```
You can call `first_accelerator.state.select_deepspeed_plugin()` or `second_accelerator.state.select_deepspeed_plugin()` to enable
a particular plugin (disabling the other), and then call [`prepare`].
```python
# can be called on either accelerator's state, or via `AcceleratorState().select_deepspeed_plugin(...)`
first_accelerator.state.select_deepspeed_plugin("first_model")
first_model = AutoModel.from_pretrained(...)
# For this example, `get_training_items` is a nonexistent function that gets the setup we need for training
first_optimizer, first_scheduler, train_dl, eval_dl = get_training_items(first_model)
first_model, first_optimizer, first_scheduler, train_dl, eval_dl = first_accelerator.prepare(
    first_model, first_optimizer, first_scheduler, train_dl, eval_dl
)

second_accelerator.state.select_deepspeed_plugin("second_model")
second_model = AutoModel.from_pretrained(...)
# For this example, `get_training_items` is a nonexistent function that gets the setup we need for training
second_optimizer, second_scheduler, _, _ = get_training_items(second_model)
second_model, second_optimizer, second_scheduler = second_accelerator.prepare(
    second_model, second_optimizer, second_scheduler
)
```
And now you can train:
```python
for batch in train_dl:
    outputs1 = first_model(**batch)
    first_accelerator.backward(outputs1.loss)
    first_optimizer.step()
    first_scheduler.step()
    first_optimizer.zero_grad()

    outputs2 = second_model(**batch)
    second_accelerator.backward(outputs2.loss)
    second_optimizer.step()
    second_scheduler.step()
    second_optimizer.zero_grad()
```
## Resources
To see more examples, please check out the [related tests](https://github.com/huggingface/accelerate/blob/main/src/accelerate/test_utils/scripts/external_deps/test_ds_multiple_model.py) currently in Accelerate.

View File

@ -13,7 +13,7 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Distributed Inference with 🤗 Accelerate
# Distributed inference
Distributed inference can fall into three brackets:
@ -56,19 +56,20 @@ def run_inference(rank, world_size):
```
One will notice how we have to check the rank to know what prompt to send, which can be a bit tedious.
A user might then also think that with 🤗 Accelerate, using the `Accelerator` to prepare a dataloader for such a task might also be
A user might then also think that with Accelerate, using the `Accelerator` to prepare a dataloader for such a task might also be
a simple way to manage this. (To learn more, check out the relevant section in the [Quick Tour](../quicktour#distributed-evaluation))
Can it manage it? Yes. Does it add unneeded extra code, however? Also yes.
With 🤗 Accelerate, we can simplify this process by using the [`Accelerator.split_between_processes`] context manager (which also exists in `PartialState` and `AcceleratorState`).
With Accelerate, we can simplify this process by using the [`Accelerator.split_between_processes`] context manager (which also exists in `PartialState` and `AcceleratorState`).
This function will automatically split whatever data you pass to it (be it a prompt, a set of tensors, a dictionary of the prior data, etc.) across all the processes (with a potential
to be padded) for you to use right away.
Let's rewrite the above example using this context manager:
```python
import torch
from accelerate import PartialState # Can also be Accelerator or AcceleratorState
from diffusers import DiffusionPipeline
@ -82,7 +83,7 @@ with distributed_state.split_between_processes(["a dog", "a cat"]) as prompt:
result.save(f"result_{distributed_state.process_index}.png")
```
And then to launch the code, we can use the 🤗 Accelerate:
And then to launch the code, we can use Accelerate:
If you have generated a config file to be used using `accelerate config`:
@ -125,6 +126,7 @@ needs to be the same length. Basic inference does not require this.
For instance:
```python
import torch
from accelerate import PartialState # Can also be Accelerator or AcceleratorState
from diffusers import DiffusionPipeline
@ -144,22 +146,20 @@ You can find more complex examples [here](https://github.com/huggingface/acceler
## Memory-efficient pipeline parallelism (experimental)
This next part will discuss using *pipeline parallelism*. This is an **experimental** API utilizing the [PiPPy library by PyTorch](https://github.com/pytorch/PiPPy/) as a native solution.
This next part will discuss using *pipeline parallelism*. This is an **experimental** API that utilizes [torch.distributed.pipelining](https://pytorch.org/docs/stable/distributed.pipelining.html#) as a native solution.
The general idea with pipeline parallelism is: say you have 4 GPUs and a model big enough it can be *split* on four GPUs using `device_map="auto"`. With this method you can send in 4 inputs at a time (for example here, any amount works) and each model chunk will work on an input, then receive the next input once the prior chunk finished, making it *much* more efficient **and faster** than the method described earlier. Here's a visual taken from the PyTorch repository:
![PiPPy example](https://camo.githubusercontent.com/681d7f415d6142face9dd1b837bdb2e340e5e01a58c3a4b119dea6c0d99e2ce0/68747470733a2f2f692e696d6775722e636f6d2f657955633934372e706e67)
![Pipeline parallelism example](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/accelerate/pipeline_parallel.png)
To illustrate how you can use this with Accelerate, we have created an [example zoo](https://github.com/huggingface/accelerate/tree/main/examples/inference) showcasing a number of different models and situations. In this tutorial, we'll show this method for GPT2 across two GPUs.
Before you proceed, please make sure you have the latest pippy installed by running the following:
Before you proceed, please make sure you have the latest PyTorch version installed by running the following:
```bash
pip install torchpippy
pip install torch
```
We require at least version 0.2.0. To confirm that you have the correct version, run `pip show torchpippy`.
Start by creating the model on the CPU:
```{python}
@ -170,7 +170,7 @@ model = GPT2ForSequenceClassification(config)
model.eval()
```
Next you'll need to create some example inputs to use. These help PiPPy trace the model.
Next you'll need to create some example inputs to use. These help `torch.distributed.pipelining` trace the model.
<Tip warning={true}>
However you make this example will determine the relative batch size that will be used/passed

View File

@ -13,14 +13,14 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Learning how to incorporate 🤗 Accelerate features quickly!
# Start Here!
Please use the interactive tool below to help you get started with learning about a particular
feature of 🤗 Accelerate and how to utilize it! It will provide you with a code diff, an explanation
feature of Accelerate and how to utilize it! It will provide you with a code diff, an explanation
towards what is going on, as well as provide you with some useful links to explore more within
the documentation!
Most code examples start from the following python code before integrating 🤗 Accelerate in some way:
Most code examples start from the following python code before integrating Accelerate in some way:
```python
for batch in dataloader:

View File

@ -79,7 +79,7 @@ Currently, `Accelerate` supports the following config through the CLI:
`fsdp_auto_wrap_policy`: [1] TRANSFORMER_BASED_WRAP, [2] SIZE_BASED_WRAP, [3] NO_WRAP
`fsdp_transformer_layer_cls_to_wrap`: Only applicable for 🤗 Transformers. When using `fsdp_auto_wrap_policy=TRANSFORMER_BASED_WRAP`, a user may provide a comma-separated string of transformer layer class names (case-sensitive) to wrap, e.g., `BertLayer`, `GPTJBlock`, `T5Block`, `BertLayer,BertEmbeddings,BertSelfOutput`. This is important because submodules that share weights (e.g., embedding layers) should not end up in different FSDP wrapped units. Using this policy, wrapping happens for each block containing Multi-Head Attention followed by a couple of MLP layers. Remaining layers including the shared embeddings are conveniently wrapped in same outermost FSDP unit. Therefore, use this for transformer-based models. You can use the `model._no_split_modules` for 🤗 Transformer models by answering `yes` to `Do you want to use the model's `_no_split_modules` to wrap. It will try to use `model._no_split_modules` when possible.
`fsdp_transformer_layer_cls_to_wrap`: Only applicable for Transformers. When using `fsdp_auto_wrap_policy=TRANSFORMER_BASED_WRAP`, a user may provide a comma-separated string of transformer layer class names (case-sensitive) to wrap, e.g., `BertLayer`, `GPTJBlock`, `T5Block`, `BertLayer,BertEmbeddings,BertSelfOutput`. This is important because submodules that share weights (e.g., embedding layers) should not end up in different FSDP wrapped units. Using this policy, wrapping happens for each block containing Multi-Head Attention followed by a couple of MLP layers. Remaining layers including the shared embeddings are conveniently wrapped in same outermost FSDP unit. Therefore, use this for transformer-based models. You can use the `model._no_split_modules` for Transformer models by answering `yes` to `Do you want to use the model's `_no_split_modules` to wrap. It will try to use `model._no_split_modules` when possible.
`fsdp_min_num_params`: minimum number of parameters when using `fsdp_auto_wrap_policy=SIZE_BASED_WRAP`.
@ -91,7 +91,7 @@ Currently, `Accelerate` supports the following config through the CLI:
`fsdp_use_orig_params`: If True, allows non-uniform `requires_grad` during init, which means support for interspersed frozen and trainable parameters. This setting is useful in cases such as parameter-efficient fine-tuning as discussed in [this post](https://dev-discuss.pytorch.org/t/rethinking-pytorch-fully-sharded-data-parallel-fsdp-from-first-principles/1019). This option also allows one to have multiple optimizer param groups. This should be `True` when creating an optimizer before preparing/wrapping the model with FSDP.
`fsdp_cpu_ram_efficient_loading`: Only applicable for 🤗 Transformers models. If True, only the first process loads the pretrained model checkpoint while all other processes have empty weights. This should be set to False if you experience errors when loading the pretrained 🤗 Transformers model via `from_pretrained` method. When this setting is True `fsdp_sync_module_states` also must to be True, otherwise all the processes except the main process would have random weights leading to unexpected behaviour during training. For this to work, make sure the distributed process group is initialized before calling Transformers `from_pretrained` method. When using 🤗 Trainer API, the distributed process group is initialized when you create an instance of `TrainingArguments` class.
`fsdp_cpu_ram_efficient_loading`: Only applicable for Transformers models. If True, only the first process loads the pretrained model checkpoint while all other processes have empty weights. This should be set to False if you experience errors when loading the pretrained Transformers model via `from_pretrained` method. When this setting is True `fsdp_sync_module_states` also must to be True, otherwise all the processes except the main process would have random weights leading to unexpected behaviour during training. For this to work, make sure the distributed process group is initialized before calling Transformers `from_pretrained` method. When using Trainer API, the distributed process group is initialized when you create an instance of `TrainingArguments` class.
`fsdp_sync_module_states`: If True, each individually wrapped FSDP unit will broadcast module parameters from rank 0.
@ -187,7 +187,7 @@ accelerate merge-weights pytorch_model_fsdp_0/ output_path
## A few caveats to be aware of
- In case of multiple models, pass the optimizers to the prepare call in the same order as corresponding models else `accelerator.save_state()` and `accelerator.load_state()` will result in wrong/unexpected behaviour.
- This feature is incompatible with `--predict_with_generate` in the `run_translation.py` script of 🤗 `Transformers` library.
- This feature is incompatible with `--predict_with_generate` in the `run_translation.py` script of `Transformers` library.
For more control, users can leverage the `FullyShardedDataParallelPlugin`. After creating an instance of this class, users can pass it to the Accelerator class instantiation.
For more information on these options, please refer to the PyTorch [FullyShardedDataParallel](https://github.com/pytorch/pytorch/blob/0df2e863fbd5993a7b9e652910792bd21a516ff3/torch/distributed/fsdp/fully_sharded_data_parallel.py#L236) code.
@ -195,6 +195,6 @@ For more information on these options, please refer to the PyTorch [FullySharded
<Tip>
For those interested in the similarities and differences between FSDP and DeepSpeed, please check out the [concept guide here](../concept_guides/fsdp_and_deepspeed.md)!
For those interested in the similarities and differences between FSDP and DeepSpeed, please check out the [concept guide here](../concept_guides/fsdp_and_deepspeed)!
</Tip>

Some files were not shown because too many files have changed in this diff Show More