Compare commits

...

152 Commits

Author SHA1 Message Date
4b5b838185 other modif 2025-06-12 17:37:50 +00:00
7729f44040 some modif 2025-06-12 17:23:23 +00:00
65a3bc0beb test 2025-06-12 14:53:39 +00:00
bee04f1b01 Add fp8_e5m2 support in dtype_byte_size (#3625)
* float8_e5m2 device_map

* remove prints
2025-06-12 16:27:32 +02:00
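For reference, `dtype_byte_size` maps a torch dtype to its per-element size in bytes; the point of the fix above is that 8-bit float names like `float8_e5m2` need to be parsed too. A rough, illustrative sketch (not Accelerate's exact code):

```python
import re

import torch


def dtype_byte_size_sketch(dtype: torch.dtype) -> float:
    """Return the number of bytes used by one element of `dtype`."""
    if dtype == torch.bool:
        return 1 / 8
    # Pull the bit width out of names such as float8_e5m2, float8_e4m3fn, int8, bfloat16.
    match = re.search(r"[^\d](\d+)(_e\d+m\d+(fn)?)?$", str(dtype))
    if match is None:
        raise ValueError(f"`dtype` is not a valid dtype: {dtype}.")
    return int(match.group(1)) / 8


print(dtype_byte_size_sketch(torch.float8_e5m2))  # 1.0
```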
8a953f08c6 fix xpu 8bit value loading (#3623)
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
2025-06-12 14:55:14 +02:00
3518c03584 small fix (#3619) 2025-06-11 14:02:45 +02:00
2f8fd72e51 Remove device_count (#3587) 2025-06-10 14:50:34 +02:00
d2e6b0313d [FSDP2] Refactor + FP8 (#3585)
* Fix double wrap

* Clocking off, ~equal to torch baseline

* works?

* Working version

* Partial rewrite

* FSDP2 path works

* Fix back prepare

* Almost done, proper AC left

* Feat: should work, cleanup + test more benchmarks left

* Style+quality

* Feat: fp8 example

* Feat: better example

* Feat: add readme

* Docs + should be done

* Fix: typos

* Fix: protect imports

* Feat: address comments

* Feat: add flops image
2025-06-10 14:26:48 +02:00
b9fee48c85 better handle FP8 with and without deepspeed (#3611)
* use the state mixed precision which has undergone all preprocessing

* Update src/accelerate/accelerator.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update src/accelerate/accelerator.py

* accelerator state sets the mixed precision for deepspeed and fp8_enabled

* fix

* fix

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2025-06-10 14:24:43 +02:00
3a82b056cf Fix bf16 training with TP (#3610)
* fix

* Apply style fixes

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-06-10 11:29:59 +02:00
6b61a373a2 fix deepspeed regional compilation (#3609) 2025-06-06 14:48:43 +02:00
682691deac Update Gaudi Runners (#3593)
* test

* fix

* push

* in the morning

* fix backend

* run first

* set habana modules

* dynamo backend

* trigger

* remove on pr

* remove on file change
2025-06-03 12:36:56 +02:00
791055b484 Fix: list object has no attribute keys (#3603) 2025-06-03 12:24:20 +02:00
16bf1d8901 enable torchao and pippy test cases on XPU (#3599)
* enable torchao and pippy test cases on XPU

Signed-off-by: Matrix YAO <matrix.yao@intel.com>

* fix style

Signed-off-by: Matrix YAO <matrix.yao@intel.com>

---------

Signed-off-by: Matrix YAO <matrix.yao@intel.com>
2025-05-30 17:36:34 +02:00
ab3c604e48 enable big_model_inference on xpu (#3595)
* enable big_model_inference on XPU

Signed-off-by: Matrix YAO <matrix.yao@intel.com>

* fix style

Signed-off-by: Matrix YAO <matrix.yao@intel.com>

* fix quality

Signed-off-by: Matrix YAO <matrix.yao@intel.com>

---------

Signed-off-by: Matrix YAO <matrix.yao@intel.com>
2025-05-30 17:23:26 +02:00
273799c85d enable fsdp2 benchmark on XPU (#3590)
* enable fsdp2 benchmark on XPU

Signed-off-by: Matrix YAO <matrix.yao@intel.com>

* add deterministic

Signed-off-by: Matrix YAO <matrix.yao@intel.com>

---------

Signed-off-by: Matrix YAO <matrix.yao@intel.com>
2025-05-27 14:08:59 +02:00
43526c5c08 add device-agnostic GradScaler (#3588)
* add device-agnostic GradScaler

Signed-off-by: Matrix YAO <matrix.yao@intel.com>

* fix bug

Signed-off-by: Matrix YAO <matrix.yao@intel.com>

* fix review comments

Signed-off-by: Matrix YAO <matrix.yao@intel.com>

* fix

Signed-off-by: Matrix YAO <matrix.yao@intel.com>

* format

Signed-off-by: Matrix YAO <matrix.yao@intel.com>

* Apply style fixes

---------

Signed-off-by: Matrix YAO <matrix.yao@intel.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-05-27 11:44:50 +02:00
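As a rough illustration of what "device-agnostic" means here (not the exact code from the PR): recent PyTorch ships a generic `torch.amp.GradScaler` that takes the device type as its first argument, so one helper can serve CUDA, XPU, and other backends.

```python
import torch


def get_grad_scaler_sketch(device_type: str = "cuda", **kwargs):
    # torch.amp.GradScaler (PyTorch >= 2.3) is device-generic; older releases
    # only provide torch.cuda.amp.GradScaler.
    if hasattr(torch.amp, "GradScaler"):
        return torch.amp.GradScaler(device_type, **kwargs)
    return torch.cuda.amp.GradScaler(**kwargs)


device_type = "xpu" if hasattr(torch, "xpu") and torch.xpu.is_available() else "cuda"
scaler = get_grad_scaler_sketch(device_type)
```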
07f2392f40 change to use torch.device (#3594)
Signed-off-by: Matrix YAO <matrix.yao@intel.com>
2025-05-27 11:17:18 +02:00
ee2f48c2c3 [docs] no hard-coded cuda in the ddp documentation (#3589)
* make device-agnostic

* refactor
2025-05-27 11:16:42 +02:00
4f3abb73a7 Set ccl and KMP param in simple launch (#3575)
* Even a 1-CPU machine can also run multi-process

Signed-off-by: jiqing-feng <jiqing.feng@intel.com>

* fix ccl and kml param setting

Signed-off-by: jiqing-feng <jiqing.feng@intel.com>

* set master addr only when processes > 1

Signed-off-by: jiqing-feng <jiqing.feng@intel.com>

* fix num process check

Signed-off-by: jiqing-feng <jiqing.feng@intel.com>

* fix ccl args check

Signed-off-by: jiqing-feng <jiqing.feng@intel.com>

---------

Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
2025-05-26 15:55:10 +02:00
db536cbfeb Fix: Defer Tracker Initialization to Prevent Premature Distributed Setup (#3581)
* Fix tracker initialize distributed before InitProcessGroupKwargs

* Fix tracker initialize distributed before InitProcessGroupKwargs

* Add test for bug #3550

* Improve test for #3550

* Remove redundant code

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* fix style

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2025-05-26 15:08:13 +02:00
4e9d0deba6 enable regional_compilation benchmark on xpu (#3592)
* enable regional_compilation benchmark on xpu

Signed-off-by: Matrix YAO <matrix.yao@intel.com>

* Apply style fixes

---------

Signed-off-by: Matrix YAO <matrix.yao@intel.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-05-26 15:05:42 +02:00
8cb3ace894 Add kwargs to optimizer, scheduler and dataloader using function accelerator().load_state() (#3540)
* Added artifacts and figure tracking at MLFlow tracker

* Added `log_artifact` to the MLFlowTracker

* Remove changes

* Added kwargs when loading state.

* added doc string

* Adjusted correct default types of kwargs

* Changed the load kwargs to a single one

* removed None value from kwargs

* fix kwargs for loading the model

* removed load_kwargs from optimizer state dict

* make load_kwargs a dictionary

* revert last changes

* reverted load_kwargs

* fix docstring

* added dict initiation

* Fix quality error during PR
2025-05-22 17:21:54 +02:00
b6d97cb856 Resolve logger warnings (#3582)
Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
2025-05-22 16:26:31 +02:00
33967d4733 Add support for standalone mode when default port is occupied on single node (#3576)
* add standalone mode and replace ConnectionError with a warning when the main process port is in use, allowing for automatic port selection

* address review feedback: warn on port conflict only for single-node; raise error for multi-node

* Apply style fixes

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-05-20 12:29:53 +02:00
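The behaviour described above boils down to probing the main process port and only treating a conflict as fatal on multi-node setups. An illustrative sketch (not the PR's actual code):

```python
import socket
import warnings


def port_in_use_sketch(port: int, host: str = "127.0.0.1") -> bool:
    # True if something is already listening on (host, port).
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        return sock.connect_ex((host, port)) == 0


main_process_port, num_machines = 29500, 1
if port_in_use_sketch(main_process_port):
    if num_machines > 1:
        # Multi-node: every node must agree on the port, so this stays an error.
        raise ConnectionError(f"Port {main_process_port} is already in use.")
    # Single node: warn and let the launcher pick a free port automatically.
    warnings.warn(f"Port {main_process_port} is in use; another port will be selected.")
```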
5b1fcda371 enable test_cli & test_example cases on XPU (#3578)
* enable test_cli & test_example cases on XPU

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

* fix style

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

* fix style

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

* remove print

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

* fix ci issue

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

---------

Signed-off-by: Matrix Yao <matrix.yao@intel.com>
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
2025-05-20 12:04:24 +02:00
f55f0533b5 goodbye torch_ccl (#3580)
Signed-off-by: Matrix Yao <matrix.yao@intel.com>
2025-05-20 12:02:14 +02:00
1ec99f0b58 enable test_load_checkpoint_and_dispatch_with_broadcast cases on XPU (#3579)
* enable test_load_checkpoint_and_dispatch_with_broadcast cases on XPU

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

* fix style

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

* Update test_load_checkpoint_and_dispatch_with_broadcast.py

---------

Signed-off-by: Matrix Yao <matrix.yao@intel.com>
2025-05-19 11:27:40 +02:00
417bc52965 bump to v1.8.0dev 2025-05-15 12:02:44 +02:00
97c93c4809 enable test_dispatch_model_tied_weights_memory_with_nested_offload_cpu on xpu (#3569)
* enable test_dispatch_model_tied_weights_memory_with_nested_offload_cpu
case on XPU

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

* replace hard-coded torch.cuda w/ device-dependent callings

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

* fix style

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

* use device agnostic clear_device_cache

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

* fix style

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

---------

Signed-off-by: Matrix Yao <matrix.yao@intel.com>
2025-05-15 11:40:55 +02:00
cd37bbb629 set backend correctly for CUDA+FSDP2+cpu-offload (#3574)
* set backend correctly for CUDA+FSDP2+cpu-offload

* offload

* format

---------

Co-authored-by: Wing Lian <wing@axolotl.ai>
2025-05-15 11:38:53 +02:00
7aa3b56c80 Fix prevent duplicate GPU usage in distributed processing (#3526)
* check if num_extrs>0 and test

* test pass

* test passes

* make quality fix

* Apply style fixes

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-05-15 11:31:20 +02:00
14f4306ca6 reenable FSDP2+qlora support (#3546) 2025-05-15 11:30:55 +02:00
e6e717589e Add regional compilation to cli tools and env vars (#3572)
* add regional compilation to cli tools and env vars

* added seq parallel to gaudi docs

* explain that lm_head is also compiled separately

* style

* docstring

* style
2025-05-15 11:30:27 +02:00
1f6efcea0b tune env command output (#3570)
Signed-off-by: Matrix Yao <matrix.yao@intel.com>
2025-05-15 10:51:43 +02:00
9fa97f9600 simplify model.to logic (#3562)
* simplify model.to logic

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

* revert device_type == "cuda" changes

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

---------

Signed-off-by: Matrix Yao <matrix.yao@intel.com>
2025-05-15 10:31:08 +02:00
764eee4a48 add xpu synchronize (#3563) 2025-05-14 19:20:24 +02:00
202e6c178a Update dynamic env handling to preserve None when USE_DYNAMIC is unset (#3567)
* Update dynamic env handling to preserve None when USE_DYNAMIC is unset

* Apply suggestions from code review

---------

Co-authored-by: Ilyas Moutawwakil <57442720+IlyasMoutawwakil@users.noreply.github.com>
2025-05-14 16:34:08 +02:00
32874257f3 Add Gaudi doc (#3537)
* Add Gaudi doc

* Address comment from review

* Remove point about region compilation

---------

Co-authored-by: Ilyas Moutawwakil <57442720+IlyasMoutawwakil@users.noreply.github.com>
2025-05-13 18:27:33 +02:00
281314b479 preserve parameter keys when removing prefix (#3564)
* preserve parameter keys when removing  prefix

* Apply style fixes

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-05-13 17:11:42 +02:00
3524a504c8 update path (#3561) 2025-05-13 13:57:29 +02:00
f48d95c493 canonicalize fsdp2 names when fixing optimizer (#3560) 2025-05-12 19:40:50 +02:00
f76208f5a8 make env var and dataclass flag consistent (#3307)
Signed-off-by: SumanthRH <sumanthrh@anyscale.com>
2025-05-12 17:57:58 +02:00
ae0499ea96 cast if dtype is not None (#3559)
Co-authored-by: dpappadopulo <dpappadopulo@bloomberg.net>
2025-05-12 15:27:11 +02:00
ddc49f1e9a Fix the issue where set_epoch does not take effect. (#3556)
* Fix the issue where `set_epoch` does not take effect.

* Apply style fixes

---------

Co-authored-by: root <root@hjx-dev-h20-3-0.hjx-dev-h20-3.bcloud.svc.cluster.local>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-05-12 14:30:19 +02:00
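For context, the underlying PyTorch pattern this fix wires up is the sampler's `set_epoch` call, which has to run once per epoch for the shuffling order to change (the sketch below assumes an initialized process group):

```python
import torch
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

dataset = TensorDataset(torch.arange(32))
sampler = DistributedSampler(dataset, shuffle=True)  # needs torch.distributed initialized
loader = DataLoader(dataset, batch_size=8, sampler=sampler)

for epoch in range(3):
    # Without this call every epoch reuses the same shuffled order, which is the
    # kind of silent no-op the commit above fixes inside Accelerate's DataLoader.
    sampler.set_epoch(epoch)
    for batch in loader:
        ...
```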
9b2d6eaf32 add support for port 0 auto-selection in multi-GPU environments (#3501)
* add support for port 0 auto-selection in multi-GPU environments

* address review feedback: [add implementation for DeepSpeed, simplify code logic]

---------

Co-authored-by: biondi <biondi_lee@htx.ht.gov.sg>
2025-05-12 13:36:45 +02:00
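Port 0 auto-selection relies on the OS assigning any free port when a socket binds to port 0; reading the assigned port back gives a value that can be handed to the launcher. A minimal sketch:

```python
import socket


def find_free_port_sketch() -> int:
    # Binding to port 0 asks the OS for any free port; getsockname() reveals it.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.bind(("", 0))
        return sock.getsockname()[1]


print(find_free_port_sketch())
```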
7b5774ac55 Dynamo regional compilation (#3529) 2025-05-12 09:49:29 +02:00
7013365791 fix typos (#3549) 2025-05-08 14:10:12 +02:00
8d8fd83672 fix notebook_launcher for Colab TPU compatibility. (#3541)
* fixes for notebook_launcher for google colab TPU compatibility.

* Apply style fixes

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-05-06 17:55:18 +02:00
3a941d4b4e Fix: param is not a parameter or buffer (#3545) 2025-05-06 14:28:48 +02:00
d02e51cc21 Update big_modeling.md for layerwise casting (#3548)
* Update big_modeling.md for layerwise casting

* doc fix
2025-05-06 09:50:53 +02:00
c5caa11e85 Fix CI due to missing package (#3535)
* fix test

* fix

* fix

* fix

* fix workflow

* check

* revert
2025-04-29 10:48:39 +02:00
39e2bebb12 Update Docker builds to align with CI requirements (#3532) 2025-04-28 10:50:50 +02:00
0af45bf1e8 Fix logic in accelerator.prepare + IPEX for 2+ nn.Models and/or optim.Optimizers (#3517)
* Fix logic in _prepare_ipex

* Add caution about prepare in IPEX docs

* Add suggested workaround to IPEX docs

* Revert unnecessary change

* Update docs/source/usage_guides/ipex.md

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Remove double space

* Simplify logical checks for IPEX availability

* Revert unnecessary change

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2025-04-25 17:31:36 +02:00
806ac848c9 [FSDP2] Issues in Wrap Policy and Mixed Precision (#3528)
* fix fsdp2 wrap policy

* nn.Module doesn't have the dtype attribute

* Revert "nn.Module doesn't have the dtype attribute"

This reverts commit 513c7892876f81ec76ce32bcdce83bfe8556491d.

* Fix dtype handling in fsdp2_prepare_model to accommodate nn.Module without dtype attribute

* fix format problem
2025-04-24 22:59:13 +02:00
23b092507a [FSDP2] Fix memory spike with cpu_ram_efficient_loading=True (#3482)
* Feat: shard on meta device

* Feat: support fqns in get_non_persistent_buffers

* Fix: retie weights after loading
2025-04-24 12:19:49 +02:00
8fb073536a [FSDP2] Enable FULL_STATE_DICT (#3527)
* Feat: enable FULL_STATE_DICT in config

* Feat: support FSDP2 FULL_STATE_DICT

* Refactor: remove deprecated save/load_state_dict

* Docs: add FULL_STATE_DICT as supported to docs

* Feat: update tests

* Feat: change Accelerator.get_state_dict() to use new api
2025-04-23 18:03:45 +02:00
4f35cf713c Solve link error in internal_mechanism documentation (#3506) (#3507)
* Solve link error in internal_mechanism (#3506)

* Link correctly to documentation (#3506)
2025-04-23 17:47:25 +02:00
ada21cfbbd fix cuda init (#3530) 2025-04-23 15:57:40 +02:00
b451956fd6 Add torchao to FP8 error message (#3514) 2025-04-22 14:06:47 +02:00
6a9a61520d [Feat] Layerwise casting hook (#3427)
* start

* method implementation.

* updates.

* updates

* remove print.

* aryan as one of the contributors

Co-authored-by: a-r-r-o-w <contact.aryanvs@gmail.com>

* change to attach_layerwise_casting_hooks

* enable skipping modules.

* tests

* revert style changes to other files.

* feedback

* remove comments

* add example

* fix test case for edges.

* reviewer feedback

---------

Co-authored-by: a-r-r-o-w <contact.aryanvs@gmail.com>
2025-04-22 13:49:43 +02:00
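Usage ends up looking roughly like the sketch below. This is a hedged example: the hook is exposed as `attach_layerwise_casting_hooks` in `accelerate.hooks`, but the exact keyword names should be checked against the released API.

```python
import torch
from torch import nn

from accelerate.hooks import attach_layerwise_casting_hooks

model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 8)).to(torch.bfloat16)

# Store weights in fp8 to save memory; the hooks upcast each layer on the fly for its
# forward pass (keyword names here are assumptions, see the note above).
attach_layerwise_casting_hooks(
    model,
    storage_dtype=torch.float8_e4m3fn,
    compute_dtype=torch.bfloat16,
)

with torch.no_grad():
    out = model(torch.randn(2, 64, dtype=torch.bfloat16))
```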
423fbbfdea fix cache (#3513) 2025-04-18 18:07:46 +02:00
34c1779828 Remove deprecated PyTorch/XLA APIs (#3484) 2025-04-15 11:44:14 +02:00
54496571fd Fix: require transformers version for tp tests (#3504) 2025-04-15 11:42:26 +02:00
4a3cbcb63c fix: apply torchfix to set weights_only=True (#3497)
* fix: apply torchfix

* fix: apply torchfix
2025-04-15 11:41:05 +02:00
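The torchfix change amounts to preferring the safer form of `torch.load`, e.g.:

```python
import torch

torch.save({"w": torch.zeros(2)}, "checkpoint.pt")

# weights_only=True restricts unpickling to tensors and other allow-listed types,
# which avoids arbitrary code execution from untrusted checkpoint files.
state_dict = torch.load("checkpoint.pt", map_location="cpu", weights_only=True)
```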
583b26db3c Add FP8 runners + tweak building FP8 image (#3493)
* Initial test

* Try on push

* Only wf dispatch now

* keep trying

* Try again

* Try again

* source activate?

* Force bash

* Source activate accelerate to make it get the env properly

* try using nightly docker

* Try this?

* Try this?

* Try this, proper output

* Try this, proper output

* Try via full conda activate(?)

* rm conda

* te fp8 tests

* add ao

* ao in setup too

* actually include fp8 deps

* FP8 docker image, use newer version

* Update docker image to take in input

* Test

* prior month

* igpu?

* Use only last 2 digits of year

* Build rest

* Apply style fixes

---------

Co-authored-by: [[ -z $EMAIL ]] && read -e -p "Enter your email (for git configuration): " EMAIL <muellerzr@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-04-15 11:39:43 +02:00
7812d979c3 Fix deepspeed tests (#3503)
* Fix: check for tp size when creating accelerator in tests

* Fix: better error handling in TorchTensorParallelPlugin

* Fix: make tp related args optional in tests (cmt by @kmehant)
2025-04-14 16:16:01 +02:00
67adb473a4 (Part 1) fix: make TP training compatible with new transformers (#3457)
* feat: support new tp refactor for training

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

* fix: @S1ro1 review cmt

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

* fix: @S1ro1 review cmt - tp_plan flag docstr

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

* fix: @SunMarc review cmt on un used flag

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

* fix: pick approach 3 as discussed in the PR

see https://github.com/huggingface/accelerate/pull/3457#discussion_r2037909077 for more details

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

* fix: styling errors

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

* fix: bump up transformers for tp_size feature

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

---------

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>
2025-04-11 18:31:28 +02:00
ee4cab96ed nit: needed sanity checks for fsdp2 (#3499)
Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>
2025-04-11 17:04:34 +02:00
73c2378c55 Use torch.distributed.checkpoint.state_dict.set_model_state_dict in load_checkpoint_in_model (#3432)
* Use torch.distributed.checkpoint.state_dict.set_model_state_dict in load_checkpoint_in_model

load_checkpoint_in_model now supports loading into FSDP2-wrapped models when using device_map=None

For large models in a distributed setting, leveraging broadcast_from_rank0 reduces file-system reads and makes loading much faster (60 seconds vs 90 seconds when loading a 70B model on a single node of 8 GPUs)

* Guard torch.distributed.checkpoint.state_dict with is_torch_version('>=', '2.2.0')

This should fix issues with slow import and also fixes versioning issues

https://github.com/huggingface/accelerate/pull/3432#discussion_r1989782680
https://github.com/huggingface/accelerate/pull/3432#discussion_r1989946020

* Add test for non-distributed, TP, and DDP for load_checkpoint_and_dispatch(device_map=None) using set_model_state_dict

https://github.com/huggingface/accelerate/pull/3432#discussion_r1989741480
https://github.com/huggingface/accelerate/pull/3432#discussion_r1989960317

* Verify minimum version for broadcast_from_rank0

* Mark transformers as required for broadcast_from_rank0 tests, mark min version of torch to test as 2.4.0

* Add model_devices guard to set_model_state_dict

set_model_state_dict will fail if the model state_dict is not on at most one device

* Move decorators to top of test class

* https://github.com/huggingface/accelerate/pull/3432/files#r1993272280
* https://github.com/huggingface/accelerate/pull/3432/files#r1993268932

* Unindent functions

https://github.com/huggingface/accelerate/pull/3432/files#r1993275663

* Add condition for w/ explanatory links for set_model_state_dict model device restrictions

* Fix distribution of 2.2.0 condition

* Remove tensor parallel test

* Fix model materialization example

* Fix materialization example

* Remove old tensor parallel test
2025-04-11 17:01:33 +02:00
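For reference, a hedged sketch of the PyTorch API this commit builds on (the options Accelerate actually passes may differ; it assumes an initialized process group, an FSDP2-wrapped `model`, and a placeholder checkpoint path):

```python
import torch
import torch.distributed as dist
from torch.distributed.checkpoint.state_dict import StateDictOptions, set_model_state_dict

# Only rank 0 reads the checkpoint from disk; other ranks pass an empty dict and
# receive the tensors via broadcast, cutting file-system reads.
full_state_dict = (
    torch.load("model.bin", map_location="cpu", weights_only=True) if dist.get_rank() == 0 else {}
)
set_model_state_dict(
    model,  # assumed: an FSDP2-wrapped module
    full_state_dict,
    options=StateDictOptions(full_state_dict=True, broadcast_from_rank0=True),
)
```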
b2f937faec Add the HPU into accelerate config (#3495)
* Add the HPU into accelerate config

Signed-off-by: yuanwu <yuan.wu@intel.com>

* Fix the error of make style

Signed-off-by: yuanwu <yuan.wu@intel.com>

---------

Signed-off-by: yuanwu <yuan.wu@intel.com>
2025-04-10 17:41:47 +02:00
3b89987710 [bug] unsafe_serialization option doesn't work (#3496) 2025-04-09 15:16:28 +02:00
a43e4170fc fix warning error (#3491)
* fix warning error

* use logger.warning
2025-04-09 14:26:40 +02:00
334d6ab957 fix fp8 config (#3492) 2025-04-09 14:19:07 +02:00
650b6659c0 add support for custom function for reducing the batch size (#3071)
* add support for custom function for reducing the batch size

* fix scoping

* Apply style fixes

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-04-08 14:08:07 +02:00
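The pattern is the usual retry-on-OOM loop, with the shrink policy made pluggable. A generic sketch (the actual parameter name in Accelerate's `find_executable_batch_size` may differ):

```python
def find_executable_batch_size_sketch(function, starting_batch_size=128, reduce_fn=lambda b: b // 2):
    """Retry `function(batch_size)` with smaller batch sizes until it stops running out of memory."""
    batch_size = starting_batch_size
    while batch_size > 0:
        try:
            return function(batch_size)
        except RuntimeError as e:
            # Real code should check more carefully that this is an out-of-memory error.
            if "out of memory" not in str(e).lower():
                raise
            batch_size = reduce_fn(batch_size)  # custom reduction policy
    raise RuntimeError("No executable batch size found, reached zero.")


# Example: shrink by 25% instead of halving.
# find_executable_batch_size_sketch(train_step, 256, reduce_fn=lambda b: int(b * 0.75))
```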
fb90996365 Don't create new param for TorchAO sequential offloading due to weak BC guarantees (#3444)
* update

* make style

* use assignment to set device
2025-04-08 12:29:12 +02:00
32b2e1606f Fix check_tied_parameters_in_config for multimodal models (#3479)
* fix

* fix
2025-04-08 12:27:49 +02:00
8c0a29626d Update low_precision_training.md (#3488) 2025-04-08 11:39:58 +02:00
63168b151f Adds style bot (#3478)
* Style bot

* Use reusable style bot

---------

Co-authored-by: [[ -z $EMAIL ]] && read -e -p "Enter your email (for git configuration): " EMAIL <muellerzr@gmail.com>
2025-04-03 17:09:49 +02:00
3cf5e4c802 use device agnostic torch.OutOfMemoryError from pytorch 2.5.0 (#3475)
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
2025-04-02 15:08:22 +02:00
9642a1ac81 bump to v1.7.0dev 2025-04-01 13:55:11 +02:00
3169339f5b Bump ruff to 0.11.2 (#3471)
* ruff format

* Bump ruff to 0.11.2
2025-04-01 11:57:06 +02:00
67a768be07 remove use_xpu to fix ut issues, we don't need this since XPU is OOB … (#3460)
* remove use_xpu to fix UT issues; we don't need this since XPU is supported out of the box now

Signed-off-by: Yao, Matrix <matrix.yao@intel.com>

* fix style

Signed-off-by: Yao, Matrix <matrix.yao@intel.com>

* add deprecate warnings

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

* fix

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

---------

Signed-off-by: Yao, Matrix <matrix.yao@intel.com>
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
2025-04-01 11:55:37 +02:00
531643436e [MLU] fix deepspeed dependency (#3472) 2025-04-01 11:55:23 +02:00
83e09a9331 Update ruff target-version to py39 and apply more fixes (#3470)
Signed-off-by: cyy <cyyever@outlook.com>
2025-03-31 15:00:25 -04:00
9c4eeb9ba8 xpu: enable xccl distributed backend (#3401)
xccl distributed backend is available for XPU device backend starting
from torch 2.7 (requires torch built with `USE_XCCL=1 USE_C10D_XCCL=1`).

This change is verified with the following Transformers tests:
* `tests/extended/test_trainer_ext.py`
* `tests/trainer/test_trainer_distributed.py`

This commit does not impact IPEX which currently remains using custom
distributed backend.

Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
2025-03-31 19:11:47 +02:00
a0edc8dcf2 Apply ruff py39 fixes (#3461)
* Apply ruff py39 fixes

* Ruff format
2025-03-31 19:10:08 +02:00
11a3c0001d Update CometMLTracker to allow re-using experiment (#3328)
* Update CometMLTracker to allow re-using experiment

Update CometMLTracker to use the new `comet_ml.start` function to create Experiments. This way, end-users can create online or offline experiments, append data to an existing experiment, and automatically re-use a running experiment if one is present rather than creating a new one.

* Add back calling Experiment.end in finish

As `accelerator.end_training` is supposed to be called at the very end of
training by the user, users will still be able to log data after the main
training loop and this is needed for Offline Experiment to create the offline
archive.

* Update CometTracker behavior based on the version of the package

Use new method only for recent version of comet_ml
2025-03-31 19:09:34 +02:00
8b31a2fe2c Fix get_balanced_memory for MPS (#3464)
This also fixes a failure in test_get_balanced_memory:

```
assert {0: 215, 1: 300} == {0: 300, 1: 300}
[...]
tests/test_modeling_utils.py:871: AssertionError
```

Signed-off-by: Ihar Hrachyshka <ihar.hrachyshka@gmail.com>
2025-03-31 17:33:33 +02:00
3f636d6260 Fix seeding of new generator for multi GPU (#3459)
* fix new generator seeding

* remaining arbitrary fixed seed

* test
2025-03-28 12:48:05 -04:00
803b6648b4 Update @ (#3466)
* Update @

* DS

* Add marc everywhere, he's always watching
2025-03-28 12:43:06 -04:00
17f9c19f48 Fix: clip grad norm in fsdp2 (#3465) 2025-03-28 15:55:49 +01:00
d7c741a6bc Initial FSDP2 support (#3394)
* Feat: initial conversion tool draft

* Feat: add value mapping to conversion tool

* Refactor: move from os to pathlib

* Feat: add first tests

* Feat: more tests

* Feat: minor fixes + dataclass conversions

* Feat: more remapping

* Fix: namespace has no attribute version + style

* Fix: offload params behavior

* Feat: add option to only rename keys in the config file to

* Fix: wrong attr name

* Fix: partially resolve comments

* Feat: work on config command + minor fixes to reflect changes

* Refactor: style + quality

* Feat: fsdp2 initial work

* Feat: some cleanups and first running fsdp2

* Fix: version checks + mixed precision policy

* Refactor: style + quality

* Remove obsolete todos

* Feat: grad norm clipping

* Fix: tests + rename attrs

* Refactor: style + quality

* Fix: None object is not iterable

* Fix: default cpu_offload for fsdp2

* Fix: cpu offload now behaves correctly

* Feat: apply_activation_checkpointing

* Fix: append to models

* Feat: start on concept guide

* wip: concept guide

* Fix: toctree

* cleanup of the concept guide

* Fix: minor fixes + mp

* Fix: quality + | to union

* Feat: backwards compatibility + args cleanup

* Fix: style + quality

* Feat: enable dropping refs when getting named params

* Fix: memory footprint with fsdp2

* Feat: cpu ram efficient loading

* Fix: mp

* Fix: not warn about sync_modules if fsdp version is 1

* Refactor: minor changes

* Small fixes + refactors

* Feat: docs + cleanup

* Feat: saving works (not sure about optim)

* More loading/saving work

* Feat: disable local_state_dict for fsdp2

* Fix: fsdp2 convergence

* Feat: working comparison script

* Feat: memory tracking fsdp2

* Feat: memory visualizer

* Feat: more work on benchmark

* Fix: raise error if model+optimizer arent prepared together

* Minor fixes

* Style

* More warnings

* Fix: reshard_after_forward vs sharding_strategy conflict

* Refactor: clean up accelerator

* Feat: more testing in fsdp2 benchmark

* Fix: memory visualizer

* Untested: support load/save_state

* Feat: concept guide improvements

* Refactor: concept guide

* Feat: benchmark works

* Feat: more work on fsdp2 benchmark

* Fix: note syntax

* Fix: small fixes + make original tests work

* Fix: grad scaling

* Feat: reshard after forward tests

* Feat: backward prefetch tests

* Feat: tests for fsdp2

* Refactor: minor fixes

* Feat: fsdp_utils docstrings

* Feat: autodoc fsdp.md

* Docs: get_module_children_bottom_up

* Fix: remove unused images

* Refactor: benchmark cleanup

* Fix: docs

* Feat: final doc changes

* Fix: torch.distributed has no attribute tensor

* Fix: style

* Feat: tests include version in failures

* Fix: benchmark force model to load in fp32

* Fix: rename runs

* Feat: last minor fixes

* Feat: new benchmark images
2025-03-27 15:01:18 -04:00
8ab01d32cf Fix device KeyError in tied_params_map (#3403)
Fixes: #3402

Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
2025-03-25 16:25:02 +01:00
140acb356e Fix AMD GPU support with should_reduce_batch_size() (#3405)
* Fix AMD GPU support with should_reduce_batch_size()

Even though NVIDIA and AMD GPUs both operate under torch's cuda namespace, the out-of-memory error for AMD GPUs is different. When trying to determine if a model can fit on an AMD GPU, this function will evaluate to false for a `torch.OutOfMemoryError`. This PR adds another check for the error string.

Example error message:
```
'HIP out of memory. Tried to allocate 64.00 GiB. GPU 0 has a total capacity of 63.98 GiB of which 48.63 GiB is free. Of the allocated memory 15.02 GiB is allocated by PyTorch, and 129.49 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_HIP_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)'
```

* Missing comma

* Update memory.py

Consolidate OOM error check string
2025-03-25 10:32:29 -04:00
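The heart of the fix is classifying the exception by its message, since both vendors raise through the cuda namespace but with different strings. An illustrative sketch:

```python
def should_reduce_batch_size_sketch(exception: Exception) -> bool:
    # NVIDIA raises "CUDA out of memory", ROCm raises "HIP out of memory";
    # matching on the message covers both backends.
    oom_markers = ("CUDA out of memory", "HIP out of memory")
    return isinstance(exception, RuntimeError) and any(m in str(exception) for m in oom_markers)
```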
8576112bc8 enable 2 UT cases on XPU (#3445)
* enable test_dispatch_model_tied_weights_memory_with_nested_offload_cpu test case on XPU

Signed-off-by: root <root@a4bf01945cfe.jf.intel.com>

* fix style

Signed-off-by: Yao, Matrix <matrix.yao@intel.com>

* enable test_dispatch_model_tied_weights_memory on XPU

Signed-off-by: N <matrix.yao@intel.com>

* fix bug

Signed-off-by: root <root@a4bf01945cfe.jf.intel.com>

* Update src/accelerate/test_utils/testing.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update src/accelerate/test_utils/testing.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update tests/test_big_modeling.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* fix style

Signed-off-by: Yao, Matrix <matrix.yao@intel.com>

---------

Signed-off-by: root <root@a4bf01945cfe.jf.intel.com>
Signed-off-by: Yao, Matrix <matrix.yao@intel.com>
Signed-off-by: N <matrix.yao@intel.com>
Co-authored-by: root <root@a4bf01945cfe.jf.intel.com>
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2025-03-25 14:19:26 +01:00
806f661cd3 remove device index workaround on xpu since xpu supports integer device index as cuda now (#3448)
* remove xpu device index WAs since pytorch xpu supports integer index now

Signed-off-by: root <root@a4bf01945cfe.jf.intel.com>

* remove print

Signed-off-by: Yao, Matrix <matrix.yao@intel.com>

* fix style

Signed-off-by: Yao, Matrix <matrix.yao@intel.com>

---------

Signed-off-by: root <root@a4bf01945cfe.jf.intel.com>
Signed-off-by: Yao, Matrix <matrix.yao@intel.com>
Co-authored-by: root <root@a4bf01945cfe.jf.intel.com>
2025-03-24 14:49:05 +01:00
9015a26f09 Fixup ao module filter func (#3450) 2025-03-21 10:21:54 -04:00
6de900e10a feat: Add no_ssh and slurm multinode launcher options for deepspeed (#3329)
* feat: Add no_ssh multinode launcher option for deepspeed

* fix: Add CLI hints and brief documentation, add slurm launcher, and ensure that deepspeed 0.14.5 version is used for nossh
2025-03-20 10:33:00 -04:00
ffb27138f7 Changed --config arg to --config_file in the slurm multinode fsdp example. (#3447) 2025-03-20 10:14:18 -04:00
4b6be89910 Update build_and_run_tests.yml 2025-03-15 11:33:32 +01:00
a702364256 Fix attribute issue with deepspeed tp (#3443) 2025-03-13 18:27:25 +01:00
a31bd767c1 Fix prod issues (#3441)
* Fix default device

* Use CPU
2025-03-13 11:21:11 -04:00
71036329f7 tensor parallel dataloder for deepspeed accelerator (#3390)
* ds tp change

* update

* format

* add version check

* Update src/accelerate/accelerator.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update src/accelerate/accelerator.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* put device_mesh logic to func + format

* fix comments

* format

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2025-03-13 12:40:34 +01:00
f648feba97 Add log_artifact, log_artifacts and log_figure capabilities to the MLflowTracker. (#3419)
* Added artifacts and figure tracking at MLFlow tracker

* Added `log_artifact` to the MLFlowTracker

* Remove changes

* Added artifacts, artifacts and figure tracking at MLFlow tracker

* Improved the docstring

* added require_mlflow function at test_utils

* add test for MLflowTracker

* Bit of linting

* Refactor to a more robust test

* Revised the test asserts to something more robust.

* Removed incorrect import and some linting.

* removed commented code

* initiate tracker using Accelerator

* Added mlflow and matplotlib to setup.py. Guarded and decorated the functions that required them.

* Guarded mlflow import

* added matplotlib required warning.

* ran style and quality
2025-03-12 18:11:29 +01:00
14fc61eeac Bump to 1.6.0.dev0 2025-03-12 10:13:18 -04:00
d9e6af8773 HPU support (#3378)
* init

* style

* is_hpu_available

* fix

* import habana_frameworks.torch.distributed.hccl

* style

* test

* initialize dist proc group

* revert

* set backend to hccl only if hccl initialization sets a local rank

* force backend hccl and multi_hpu type when sure of distributed launch

* style

* pass accelerator tests

* pas big modeling tests with bigger atol/rtol for accelerators

* fix hpu device count and skip tests requiring hpu:x

* hpu autocast

* hpu rng_state

* hpu launch

* hpu special device placement

* hpu launch

* rng state

* distributed data loop tests

* enforce non contiguity after device memory allocation

* pass fsdp tests

* enforce pt_hpu_lazy_mode=0 when fsdp testing

* pass cli tests

* pass and document grad sync tests

* pass kwargs handler and autocast tests

* memory utils

* found source of int64 errors

* skip some modeling utils tests

* enable int64

* skip optimizer tests

* pass checkpointing tests

* pass accelerator tests with safetensors main

* more hpu stuff

* style

* remove PT_HPU_LAZY_MODE and PT_ENABLE_INT64_SUPPORT as they should be in the testing environment

* start testing on gaudi2

* support fp16 on gaudi2

* add testing order

* custom hpu fsdp env dict

* fix torch trace malloc

* test ddp half precision comm hooks

* fix

* fix

* remove lower bound for hpu

* use 0.72 as lower bound

* lower lower bound

* order deepspeed tests

* fix

* deepspeed_use_hpu

* assert non lazy mode with offloaded optimizer

* make patching torch with habana frameworks the default

* less of require_non_hpu

* skip test_multi_device_merge_fsdp_weights for now as it halts

* skip another flaky test

* format

* use habana_visible_modules

* patch torch hpu device count

* avoid setting HABANA_VISIBLE_MODULES

* don't play with habana visible devices/modules

* only with hpu

* fixes and skips

* skip

* fix device ids and add some todos

* skip offloading with generate()

* fix

* reduced atol/rtol for hpu

* fix

* tag deepspeed tests that should run first

* enable a test path that was skipped

* revert a test that was customized for gaudi1

* some patching to enable HABANA_VISIBLE_MODULES

* fix zero3 test

* misc

* test DTensor TP

* remove gaudi1

* test

* style

* comment

* pass pad_across_processes

* require_fp16

* pass memory utils test

* test_ddp_comm_hook

* skip half precision comm hooks on hpu

* fix

* is_fp16_available

* fp16

* tp as part of integration tests

* fix

* write_basic_config

* safetensors

* local sgd and masked_fill_fwd_i64

* fix num_processes in test_load_states_by_steps

* fp8 support

* test

* fix

* add a workflow

* Update src/accelerate/accelerator.py

* review comments

* ci

* style

* comments

* test

* habana_frameworks.torch

* patch device count

* fix

* fix

* require_fp8

* fix

* fix

* gaudi 1

* remove unnecessary

* fixed maskd fill error in transformers

* style

* balanced_memory pass on hpu

* remove for now

* run first

* Apply suggestions from code review

* style after merge

* Update src/accelerate/accelerator.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update src/accelerate/utils/transformer_engine.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* empty cache review comments

* test_script.py error messages

* AccelerateTestCase for accelerator state cleanup

* test

* add gaudi1 workflow

* fp8 availability

* fix

* reduce batch size

* concurrency

* check cuda as well

* nits and comments

* mark fsdp tests that require_fp16

* style

* mark deepspeed fp16 tests

* update image

* fix

* updated

* better msgs

* skip pippy

* test

* test on 2 device

* support up to 1% relative error in test_accelerate

* skip hpu fp16

* allow for 1 byte difference

* revert torch_device change

* style

* skip memory release since it's flaky

* add accelerator state cleanup to fixture

* fix

* atol

* fix

* more rtol

* equal grad test

* revert

* pass pippy on gaudi2 and skip on gaudi1

* enable sd 1.5 test with require fp16

* added warning on memory release

* don't log warning in memory release as it requires PartialState to be initialized

* Apply suggestions from code review

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2025-03-11 11:16:57 -04:00
b271eb1365 add distributed example for llava next video (#3417) 2025-03-11 11:07:46 -04:00
4677b8089f Fix quality (#3424)
* Run quality

* Update src/accelerate/test_utils/scripts/test_script.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2025-03-06 12:33:34 +01:00
e456796be8 fix typo : thier -> their (#3423) 2025-03-06 11:27:51 +01:00
ac3749dc11 Add Tecorigin SDAA accelerator support (#3330)
Co-authored-by: siqi <siqi@tecorigin.com>
2025-03-05 10:11:21 +01:00
6e8eea2e73 fix: Add device=torch.get_default_device() in torch.Generators (#3420) 2025-03-05 10:08:49 +01:00
c7b3625592 fix: ensure CLI args take precedence over config file. (#3409)
* fix: ensure CLI args take precedence over config file.

* add test case

* remove inappropriate comment

---------

Co-authored-by: 차영록 <jaycha@ncsoft.com>
2025-02-28 09:15:42 -05:00
90f81986b9 minor doc fixes (#3365) 2025-02-25 15:52:26 +01:00
fa26dc6156 add missing import (#3396) 2025-02-25 11:07:14 +01:00
6fcc8efd2e fix device bug (#3408) 2025-02-24 16:12:14 +01:00
8039158d71 Torchao float8 training (#3348)
* Bookmark

* bookmark

* Add torchao base example

* Currently broken

* Clean

* DDP variant working

* FSDP as well

* Works for all but zero3

* Bookmark: currently zero3 is underperforming

* Bookmark

* Another diff

* Fin

* Fin

* Add req huggingface suite

* update tests for fp8/torchao/ddp

* Log FP8 backend used and adjust typing

* add documentation for convert_to_float8_training

* Rename to convert_model_to_fp8_ao

* Call superinit"

* Add types

* Clean

* Use filter_first_and_last_linear_layers

* Update usage guide docs

* Actually loop through the zero stages

* Clean
2025-02-17 11:51:47 -05:00
e34db4d0d2 enable xpu (#3397) 2025-02-17 17:41:50 +01:00
526925b48c [memory leak] Replace GradientState -> DataLoader reference with weakrefs (#3391)
* Replace GradientState -> DataLoader reference with weakrefs

So they can be cleaned up. Otherwise, they will always stay in memory, leading to notable memory leaks. Note: even accelerator.free_memory() did not work!

* Add comments; initialize _dataloader_references_ref directly instead of indirectly
2025-02-11 12:47:40 -05:00
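A minimal sketch of the weakref pattern described above (illustrative, not the actual GradientState code):

```python
import weakref


class GradientStateSketch:
    def __init__(self):
        self._dataloader_references = []  # weak references, not the loaders themselves

    def register_dataloader(self, dataloader):
        # A weak reference does not keep the DataLoader alive, so once user code
        # drops the loader the garbage collector can reclaim it.
        self._dataloader_references.append(weakref.ref(dataloader))

    @property
    def active_dataloaders(self):
        # Dereference and skip entries whose DataLoader has already been collected.
        return [ref() for ref in self._dataloader_references if ref() is not None]
```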
24f8d0276c [examples] upgrade code for seed setting (#3387)
* replace set_seed

* update import
2025-02-11 16:31:41 +01:00
5cc99e6e02 fix: typos in documentation files (#3388)
* Update test_scheduler.py

* Update test_big_modeling.py

* Update test_state_checkpointing.py

* Update test_script.py

* Update cli.md

* Update quicktour.md
2025-02-10 13:11:50 -05:00
ce63623421 works for fp8 with deepspeed (#3361)
* works for fp8 with deepspeed

* Add tests

---------

Co-authored-by: [[ -z $EMAIL ]] && read -e -p "Enter your email (for git configuration): " EMAIL <muellerzr@gmail.com>
2025-02-10 09:31:15 -05:00
f19b95700f fix torch_dtype in estimate memory (#3383)
* fix torch_dtype

* style

* add comments

* style
2025-02-07 15:58:13 +01:00
81d8a0356c [tests] Fix bnb cpu error (#3351)
* enable bnb tests

* bug fix

* enable more bnb tests on xpu

* fix on xpu

* fix quality issue

* further fix quality

* fix style

* only use xpu check
2025-02-06 11:26:02 +01:00
f076495580 deepspeed github repo move (#3376) 2025-02-03 13:52:08 -05:00
03153658f4 feat: support tensor parallel & Data loader (#3173)
* feat: add dataloader for TP and n-dim parallel in non-dispatch mode

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

* feat: add support for CLI usage

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

* fix: test cases

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

* fix: when tp not in use fix num_procs

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

---------

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>
2025-01-29 09:44:18 -05:00
675e35bcd4 [tests] enable more bnb tests on XPU (#3350)
* enable bnb tests

* bug fix

* enable more bnb tests on xpu

* fix quality issue

* further fix quality

* fix style
2025-01-23 15:23:38 +01:00
8f2d31c5b9 Support more functionalities for MUSA backend (#3359)
* Support more functionalities for MUSA backend

* fix lint
2025-01-23 15:05:33 +01:00
4c2c89ea90 [tests] remove require_non_xpu test markers (#3301)
* remove non-xpu marker

* fix import
2025-01-22 16:10:17 +01:00
28c171b05a [tests] make cuda-only test work on other hardware accelerators (#3302)
* enable on xpu

* remove require_cuda
2025-01-22 16:09:50 +01:00
65356780d4 [Dev] Update release directions (#3352)
* Update release directions

* Update directions and makefile to account for testpypi fun
2025-01-21 08:59:43 -05:00
78b8126bff v1.4.0.dev0 2025-01-17 10:36:00 -05:00
7e324103c4 [tests] enable BNB test cases in tests/test_quantization.py on XPU (#3349)
* enable bnb tests

* bug fix

* fix quality issue

* further fix quality

* fix style
2025-01-17 10:22:27 -05:00
02d25612a5 fix triton version check (#3345)
* fix triton version check

* add xpu check
2025-01-17 10:21:52 -05:00
fbfa53bc5e dataloader: check that in_order is in kwargs before trying to drop it (#3346)
This fixes tests/test_data_loader.py::StatefulDataLoaderTester tests which
started to fail after 828aae4:
```
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_dataloader_dispatcher_state_dict_num_workers_0 - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_dataloader_dispatcher_state_dict_num_workers_2 - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_dataloader_inheritance - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_dataloader_state_dict_num_workers_0 - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_dataloader_state_dict_num_workers_2 - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_decoupled_stateful_dataloader_adapter_equivalent_to_torchdata_stateful_dataloader_num_workers_0 - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_decoupled_stateful_dataloader_adapter_equivalent_to_torchdata_stateful_dataloader_num_workers_2 - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_end_of_dataloader - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_end_of_dataloader_dispatcher - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_skip_data_loader - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_stateful_dataloader_adapter_equivalent_to_torchdata_stateful_dataloader_num_workers_0 - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_stateful_dataloader_adapter_equivalent_to_torchdata_stateful_dataloader_num_workers_2 - KeyError: 'in_order'
```

The reason for the failure is that "in_order" is added only if data loader
is created with `prepare_data_loader` or `skip_first_batches()`. Tests in
`tests/test_data_loader.py::StatefulDataLoaderTester` however are creating
data loaders directly as classes and "in_order" was not added. Hence the
issue.

Fixes: 828aae4 ("add torchdata version check to avoid in_order error (#3344)")

Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
2025-01-15 17:55:31 -05:00
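The fix boils down to not assuming the Accelerate-specific kwarg is present. A minimal sketch of the pattern:

```python
from torch.utils.data import DataLoader


class DataLoaderSketch(DataLoader):
    def __init__(self, *args, **kwargs):
        # "in_order" is Accelerate-specific and must not reach torch's DataLoader.
        # pop() with a default avoids the KeyError when the class is instantiated
        # directly instead of via prepare_data_loader() (default value illustrative).
        self.in_order = kwargs.pop("in_order", True)
        super().__init__(*args, **kwargs)


loader = DataLoaderSketch(list(range(8)), batch_size=2)  # works without "in_order"
```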
d09040dfc9 [docs] fix typo, change "backoff_filter" to "backoff_factor" (#3296) 2025-01-15 11:55:38 -05:00
828aae4e32 add torchdata version check to avoid "in_order" error (#3344) 2025-01-15 09:04:03 -05:00
f0b030554c Fix for offloading when using TorchAO >= 0.7.0 (#3332)
* fix

* update

* fix

* apply suggestions from review

Co-Authored-By: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

Co-Authored-By: Xuehai Pan <XuehaiPan@pku.edu.cn>

* make style

---------

Co-authored-by: Xuehai Pan <XuehaiPan@pku.edu.cn>
2025-01-13 16:54:28 +01:00
80973430ee latest bnb no longer has optim_args attribute on optimizer (#3311)
* latest bnb no longer has optim_args attribute on optimizer

* update the other bnb based optimizer checks
2025-01-13 16:53:02 +01:00
c67d47ae79 [tests] make cuda-only test case device-agnostic (#3340)
* enable on xpu

* bug fix
2025-01-13 09:59:35 -05:00
8c423cff79 Fix offload generate tests (#3334)
* Fix tests

* format
2025-01-13 15:45:46 +01:00
95f34d6243 feat(tpu): remove nprocs from xla.spawn (#3324)
This parameter will cause issues on recent versions of torch_xla.
2025-01-13 04:37:00 -05:00
ba90f85627 Fixup docker build err (#3333) 2025-01-10 04:54:05 -05:00
b13aadcb67 Bye bye torch <2 (#3331)
* Bye bye torch <1

* Add 2.6.0 dl args

* Rm require fsdp

* Adjust imports + 2.0 specific modeling code

* Bring back is_bf16
2025-01-09 12:11:08 -05:00
58f14364d5 Ensure that tied parameter is children of module (#3327)
Ensure that tied parameters are assigned to their parent module in
get_module_size_with_ties

Fixes: https://github.com/huggingface/accelerate/issues/3308
2025-01-09 12:03:51 -05:00
54370d4504 Adding keep_torch_compile argument to unwrap_model and extract_model_from_parallel. (#3282) 2025-01-08 12:45:22 -05:00
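A hedged usage sketch of the new flag (flag name as in the commit title):

```python
import torch
from torch import nn

from accelerate import Accelerator

accelerator = Accelerator()
model = accelerator.prepare(torch.compile(nn.Linear(4, 4)))

# keep_torch_compile=True keeps the torch.compile wrapper on the returned module;
# passing False would also strip the compilation wrapper.
unwrapped = accelerator.unwrap_model(model, keep_torch_compile=True)
```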
d6d3e03cd4 Use torch.xpu.mem_get_info for XPU (#3275)
The torch.xpu.mem_get_info API is available starting from PyTorch 2.6 (and in nightly 2.6.0.dev20241206+xpu or later). To work properly, this method requires PyTorch built with the SYCL runtime, which supports the API to query device memory stats. If not available, an exception will be raised.

Requires: https://github.com/pytorch/pytorch/pull/141230
Fixes: #2929
Fixes: https://github.com/huggingface/transformers/issues/31922

Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
2024-12-24 16:48:00 +01:00
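A device-agnostic sketch of how such a query can be wrapped (illustrative, not Accelerate's exact helper):

```python
import torch


def mem_get_info_sketch(device_index: int = 0):
    # torch.xpu.mem_get_info mirrors torch.cuda.mem_get_info but needs PyTorch >= 2.6
    # built with a SYCL runtime that can report device memory stats.
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        return torch.xpu.mem_get_info(device_index)
    return torch.cuda.mem_get_info(device_index)


free_bytes, total_bytes = mem_get_info_sketch()
```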
acfbf72a7f Give example on how to handle gradient accumulation with cross-entropy (#3193)
* Add cross-entropy example in the gradient accumulation docs

* add example of logs

* correct skeleton code

* replace gather_for_metrics with gather

* batch_size -> per_device_batch_size

* remove main_process_only=True

* add autoregressive example in examples/

* Update docs/source/usage_guides/gradient_accumulation.md

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* ruff format

* add grad accum test

* update docs

* Update examples/by_feature/gradient_accumulation_for_autoregressive_models.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* update tests

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-12-24 12:26:45 +01:00
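The gist of the documented recipe is to sum the token-level loss and normalize by the number of trainable tokens in the whole accumulation window (gathered across processes) rather than taking a per-micro-batch mean. A heavily hedged sketch, assuming `accelerator`, `model`, and `dataloader` are already prepared and `num_items_in_window` was counted over the window beforehand:

```python
import torch.nn.functional as F

for batch in dataloader:
    with accelerator.accumulate(model):
        logits = model(batch["input_ids"])
        loss = F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),
            batch["labels"].reshape(-1),
            reduction="sum",
            ignore_index=-100,
        )
        # num_items_in_window: total non-padding labels across the micro-batches of
        # this accumulation window, summed over all processes (assumed precomputed).
        accelerator.backward(loss / num_items_in_window)
```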
200c9eb783 fix: add max_memory to _init_infer_auto_device_map's return statement (#3279) 2024-12-13 10:47:33 -05:00
7b2edc0bf2 Fix test_nested_hook (#3289) 2024-12-11 10:00:45 -05:00
b92fb4774f fix load_state_dict for npu (#3211)
* fix load_state_dict for npu

* update
2024-12-10 21:38:00 -05:00
3e62fbb09c [docs] no hard-coding cuda (#3270)
* no hard-coding cuda

* Update docs/source/usage_guides/big_modeling.md

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* update device_type

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-12-10 21:32:10 -05:00
180 changed files with 9678 additions and 1844 deletions

View File

@@ -37,11 +37,11 @@ members/contributors who may be interested in your PR.
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
- Big modeling: @SunMarc
- Fully-Sharded Data Parallism: @muellerzr
- DeepSpeed: @muellerzr
- Command Line Interface: @muellerzr
- Documentation: @muellerzr
- Core parts of the library: @muellerzr @BenjaminBossan @SunMarc
- Maintained examples: @muellerzr or @SunMarc
- Fully-Sharded Data Parallism: @SunMarc @zach-huggingface
- DeepSpeed: @SunMarc @zach-huggingface
- Command Line Interface: @SunMarc @zach-huggingface
- Documentation: @SunMarc @zach-huggingface
- Core parts of the library: @BenjaminBossan @SunMarc @zach-huggingface
- Maintained examples: @SunMarc or @zach-huggingface
-->

View File

@@ -15,7 +15,7 @@ jobs:
outputs:
version: ${{ steps.step1.outputs.version }}
steps:
- uses: actions/checkout@v3.1.0
- uses: actions/checkout@v4
- id: step1
run: echo "version=$(python setup.py --version)" >> $GITHUB_OUTPUT

View File

@@ -16,13 +16,13 @@ jobs:
outputs:
changed: ${{ steps.was_changed.outputs.changed }}
steps:
- uses: actions/checkout@v3.1.0
- uses: actions/checkout@v4
with:
fetch-depth: "2"
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v41
uses: tj-actions/changed-files@3f54ebb830831fc121d3263c1857cfbdc310cdb9 #v42
- name: Was setup changed
id: was_changed
@@ -47,4 +47,4 @@ jobs:
run-integration-tests:
needs: build-docker-containers
if: always()
uses: ./.github/workflows/self_hosted_integration_tests.yml
uses: ./.github/workflows/self_hosted_integration_tests.yml

View File

@@ -102,9 +102,15 @@ jobs:
id: date
run: |
echo "date=$(date '+%Y-%m-%d')" >> $GITHUB_ENV
# Get the previous month
echo "base_year=$(date -d 'last month' '+%y')" >> $GITHUB_ENV
echo "base_month=$(date -d 'last month' '+%m')" >> $GITHUB_ENV
- name: Build and Push GPU
uses: docker/build-push-action@v4
with:
file: benchmarks/fp8/Dockerfile
file: benchmarks/fp8/transformer_engine/Dockerfile
push: true
tags: huggingface/accelerate:gpu-fp8-transformerengine-nightly-${{ env.date }}
tags: huggingface/accelerate:gpu-fp8-transformerengine-nightly-${{ env.date }}
build-args: |
BASE_YEAR=${{ env.base_year }}
BASE_MONTH=${{ env.base_month }}

.github/workflows/fp8_runner.yml (new file, +37 lines)
View File

@@ -0,0 +1,37 @@
name: Test FP8 Runner
on:
workflow_dispatch:
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
jobs:
set-prev-day:
runs-on: ubuntu-latest
outputs:
prev-day: ${{ steps.set-prev-day.outputs.prev-day }}
steps:
- name: Set PREV_DAY
id: set-prev-day
run: |
PREV_DAY=$(date -d "yesterday" '+%Y-%m-%d')
echo "prev-day=$PREV_DAY" >> $GITHUB_OUTPUT
run-fp8-tests:
needs: set-prev-day
runs-on:
group: aws-g6e-12xlarge
container:
image: huggingface/accelerate:gpu-fp8-transformerengine-nightly-${{ needs.set-prev-day.outputs.prev-day }}
options: --gpus all --shm-size "16gb"
steps:
- uses: actions/checkout@v3
- name: Install the library
run: |
pip install -e .[test_prod,test_fp8]
- name: Show installed libraries
run: |
pip freeze
- name: Run TE FP8 tests
run: |
python -m pytest -s -v ./tests/test_fp8.py

.github/workflows/gaudi3_scheduled.yml (new file, +82 lines)
View File

@@ -0,0 +1,82 @@
name: Gaudi3 tests (scheduled)
on:
workflow_dispatch:
schedule: # every day at 6 AM UTC
- cron: "0 6 * * *"
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
run-gaudi3-tests:
runs-on:
group: itac-bm-emr-gaudi3-dell-2gaudi
container:
image: docker://vault.habana.ai/gaudi-docker/1.20.0/ubuntu22.04/habanalabs/pytorch-installer-2.6.0:latest
options: --runtime=habana --shm-size=64G --cap-add=sys_nice --env HABANA_VISIBLE_DEVICES
env:
OMPI_MCA_btl_vader_single_copy_mechanism: none
PT_ENABLE_INT64_SUPPORT: 1
PT_HPU_LAZY_MODE: 0
RUN_SLOW: 1
steps:
- name: HL-SMI (1)
run: |
hl-smi
echo "HABANA_VISIBLE_DEVICES=${HABANA_VISIBLE_DEVICES}"
echo "HABANA_VISIBLE_MODULES=${HABANA_VISIBLE_MODULES}"
- name: Extract HPU visible modules
id: add-modules
run: |
export HABANA_VISIBLE_MODULES=$(hl-smi -Q module_id -f csv,noheader | tr '\n' ',' | sed 's/,$//')
echo "HABANA_VISIBLE_MODULES=${HABANA_VISIBLE_MODULES}" >> $GITHUB_ENV
- name: HL-SMI (2)
run: |
hl-smi
echo "HABANA_VISIBLE_DEVICES=${HABANA_VISIBLE_DEVICES}"
echo "HABANA_VISIBLE_MODULES=${HABANA_VISIBLE_MODULES}"
- name: Checkout to Accelerate
uses: actions/checkout@v4
- name: Install Accelerate with Transformers & DeepSpeed
run: |
pip install -e .[testing] \
git+https://github.com/HabanaAI/DeepSpeed.git@1.20.0 \
git+https://github.com/huggingface/transformers.git
- name: Run CLI tests
if: ${{ !cancelled() && (success() || failure()) }}
run: |
make test_cli
- name: Run Core tests
if: ${{ !cancelled() && (success() || failure()) }}
run: |
make test_core
- name: Run Big Modeling tests
if: ${{ !cancelled() && (success() || failure()) }}
run: |
make test_big_modeling
- name: Run FSDP integration tests
if: ${{ !cancelled() && (success() || failure()) }}
run: |
make test_fsdp
- name: Run DeepSpeed integration tests
if: ${{ !cancelled() && (success() || failure()) }}
run: |
make test_deepspeed
- name: Run Examples tests
if: ${{ !cancelled() && (success() || failure()) }}
run: |
make test_examples

View File

@@ -26,9 +26,9 @@ jobs:
strategy:
fail-fast: false
steps:
- uses: actions/checkout@v3.1.0
- uses: actions/checkout@v4
- name: Set up python 3.9
uses: actions/setup-python@v3
uses: actions/setup-python@v5
with:
python-version: 3.9
cache: 'pip'

.github/workflows/pr_style_bot.yml (new file, +19 lines)
View File

@@ -0,0 +1,19 @@
# To run this bot, comment "@bot /style" on a PR
name: Style Bot
on:
issue_comment:
types: [created]
permissions:
contents: write
pull-requests: write
jobs:
style:
uses: huggingface/huggingface_hub/.github/workflows/style-bot-action.yml@main
with:
python_quality_dependencies: "[quality]"
style_command_type: "default"
secrets:
bot_token: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -6,9 +6,9 @@ jobs:
quality:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3.1.0
- uses: actions/checkout@v4
- name: Set up Python 3.9
uses: actions/setup-python@v3
uses: actions/setup-python@v5
with:
python-version: 3.9
cache: 'pip'

View File

@@ -16,10 +16,10 @@ jobs:
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
steps:
- uses: actions/checkout@v3.1.0
- uses: actions/checkout@v4
- name: Setup Python
uses: actions/setup-python@v3
uses: actions/setup-python@v5
with:
python-version: 3.9
cache: 'pip'

View File

@@ -38,9 +38,9 @@ jobs:
test_rest
]
steps:
- uses: actions/checkout@v3.1.0
- uses: actions/checkout@v4
- name: Set up python 3.9
uses: actions/setup-python@v3
uses: actions/setup-python@v5
with:
python-version: 3.9
cache: 'pip'
@@ -52,7 +52,7 @@ jobs:
if [[ ${{ matrix.test-kind }} != test_prod ]]; then pip install -e .[testing,test_trackers]; fi
if [[ ${{ matrix.test-kind }} = test_rest ]]; then pip uninstall comet_ml -y; fi
if [[ ${{ matrix.pytorch-version }} = minimum ]]; then pip install torchvision==0.18.1 torch==2.3.1; fi
pip install pytest-reportlog tabulate setuptools
pip install pytest-reportlog tabulate setuptools importlib_metadata
- name: Show installed libraries
run: |

View File

@@ -26,9 +26,9 @@ jobs:
minimum,
]
steps:
- uses: actions/checkout@v3.1.0
- uses: actions/checkout@v4
- name: Set up python 3.9
uses: actions/setup-python@v3
uses: actions/setup-python@v5
with:
python-version: 3.9
cache: 'pip'

View File

@@ -28,7 +28,7 @@ test_big_modeling:
test_core:
python -m pytest -s -v ./tests/ --ignore=./tests/test_examples.py --ignore=./tests/deepspeed --ignore=./tests/test_big_modeling.py \
--ignore=./tests/fsdp --ignore=./tests/test_cli.py $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_core.log",)
--ignore=./tests/fsdp --ignore=./tests/tp --ignore=./tests/test_cli.py $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_core.log",)
test_cli:
python -m pytest -s -v ./tests/test_cli.py $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_cli.log",)
@@ -39,6 +39,9 @@ test_deepspeed:
test_fsdp:
python -m pytest -s -v ./tests/fsdp $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_fsdp.log",)
test_tp:
python -m pytest -s -v ./tests/tp $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_tp.log",)
# Since the new version of pytest will *change* how things are collected, we need `deepspeed` to
# run after test_core and test_cli
test:
@@ -47,13 +50,14 @@ test:
$(MAKE) test_big_modeling
$(MAKE) test_deepspeed
$(MAKE) test_fsdp
$(MAKE) test_tp
test_examples:
python -m pytest -s -v ./tests/test_examples.py $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_examples.log",)
# Broken down example tests for the CI runners
test_integrations:
python -m pytest -s -v ./tests/deepspeed ./tests/fsdp $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_integrations.log",)
python -m pytest -s -v ./tests/deepspeed ./tests/fsdp ./tests/tp $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_integrations.log",)
test_example_differences:
python -m pytest -s -v ./tests/test_examples.py::ExampleDifferenceTests $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_example_diff.log",)
@@ -70,3 +74,21 @@ test_prod:
test_rest:
python -m pytest -s -v ./tests/test_examples.py::FeatureExamplesTests -k "not by_step and not by_epoch" $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_rest.log",)
# For developers to prepare a release
prepare_release:
rm -rf dist build
python setup.py bdist_wheel sdist
# Make sure this is run in a fresh venv of some form
install_test_release:
pip uninstall accelerate -y
pip install -i https://testpypi.python.org/pypi --extra-index-url https://pypi.org/simple accelerate$(if $(version),==$(version),)
# Run as `make target=testpypi upload_release`
upload_release:
@if [ "$(target)" != "testpypi" ] && [ "$(target)" != "pypi" ]; then \
echo "Error: target must be either 'testpypi' or 'pypi'"; \
exit 1; \
fi
twine upload dist/* -r $(target)
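Taken together, a typical release flow built from these targets might look like the sketch below (the version number is purely an illustrative placeholder):
```bash
# Build fresh wheel + sdist artifacts
make prepare_release
# Upload to TestPyPI first, then sanity-check the install from a fresh venv
make target=testpypi upload_release
make install_test_release version=1.8.0  # hypothetical version, adjust to the actual release
# Once everything checks out, publish to PyPI
make target=pypi upload_release
```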

View File

@ -13,7 +13,7 @@ pip install transformers
To reproduce or test a new setup, run
```py
python inference_acc.py model_name
python big_model_inference.py model_name
```
This script supports `gpt-j-6b`, `gpt-neox`, `opt` (30B version) and `T0pp` out of the box, but you can specify any valid checkpoint for `model_name`.
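For instance (any valid checkpoint works in place of the ones shown; the Hub id below is only an example):
```bash
# One of the built-in model names
python big_model_inference.py gpt-j-6b
# Or any checkpoint id from the Hugging Face Hub
python big_model_inference.py facebook/opt-30b
```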
@ -43,4 +43,4 @@ Note on the results:
You will also note that Accelerate does not use any more GPU and CPU RAM than necessary:
- peak GPU memory is exactly the size of the model put on a given GPU
- peak CPU memory is either the size of the biggest checkpoint shard or the part of the model offloaded on CPU, whichever is bigger.
- peak CPU memory is either the size of the biggest checkpoint shard or the part of the model offloaded on CPU, whichever is bigger.

View File

@ -18,6 +18,12 @@ import time
import psutil
import torch
from accelerate.test_utils.testing import get_backend
torch_device_type, _, _ = get_backend()
torch_accelerator_module = getattr(torch, torch_device_type, torch.cuda)
class PeakCPUMemory:
def __init__(self):
@ -54,16 +60,16 @@ def start_measure():
measures = {"time": time.time()}
gc.collect()
torch.cuda.empty_cache()
torch_accelerator_module.empty_cache()
# CPU mem
measures["cpu"] = psutil.Process().memory_info().rss
cpu_peak_tracker.start()
# GPU mem
for i in range(torch.cuda.device_count()):
measures[str(i)] = torch.cuda.memory_allocated(i)
torch.cuda.reset_peak_memory_stats()
for i in range(torch_accelerator_module.device_count()):
measures[str(i)] = torch_accelerator_module.memory_allocated(i)
torch_accelerator_module.reset_peak_memory_stats()
return measures
@ -73,16 +79,16 @@ def end_measure(start_measures):
measures = {"time": time.time() - start_measures["time"]}
gc.collect()
torch.cuda.empty_cache()
torch_accelerator_module.empty_cache()
# CPU mem
measures["cpu"] = (psutil.Process().memory_info().rss - start_measures["cpu"]) / 2**20
measures["cpu-peak"] = (cpu_peak_tracker.stop() - start_measures["cpu"]) / 2**20
# GPU mem
for i in range(torch.cuda.device_count()):
measures[str(i)] = (torch.cuda.memory_allocated(i) - start_measures[str(i)]) / 2**20
measures[f"{i}-peak"] = (torch.cuda.max_memory_allocated(i) - start_measures[str(i)]) / 2**20
for i in range(torch_accelerator_module.device_count()):
measures[str(i)] = (torch_accelerator_module.memory_allocated(i) - start_measures[str(i)]) / 2**20
measures[f"{i}-peak"] = (torch_accelerator_module.max_memory_allocated(i) - start_measures[str(i)]) / 2**20
return measures
@ -90,9 +96,9 @@ def end_measure(start_measures):
def log_measures(measures, description):
print(f"{description}:")
print(f"- Time: {measures['time']:.2f}s")
for i in range(torch.cuda.device_count()):
print(f"- GPU {i} allocated: {measures[str(i)]:.2f}MiB")
for i in range(torch_accelerator_module.device_count()):
print(f"- {torch_device_type} {i} allocated: {measures[str(i)]:.2f}MiB")
peak = measures[f"{i}-peak"]
print(f"- GPU {i} peak: {peak:.2f}MiB")
print(f"- {torch_device_type} {i} peak: {peak:.2f}MiB")
print(f"- CPU RAM allocated: {measures['cpu']:.2f}MiB")
print(f"- CPU RAM peak: {measures['cpu-peak']:.2f}MiB")

View File

@ -62,12 +62,12 @@ def train_baseline(opt_level="O2"):
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
assert (
trained_model_results["accuracy"] > base_model_results["accuracy"]
), f'Accuracy should be higher for the trained model: {trained_model_results["accuracy"]} > {base_model_results["accuracy"]}'
assert (
trained_model_results["f1"] > base_model_results["f1"]
), f'F1 score should be higher for the trained model: {trained_model_results["f1"]} > {base_model_results["f1"]}'
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
@ -95,12 +95,12 @@ def train_integration(opt_level="O2"):
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
assert (
trained_model_results["accuracy"] > base_model_results["accuracy"]
), f'Accuracy should be higher for the trained model: {trained_model_results["accuracy"]} > {base_model_results["accuracy"]}'
assert (
trained_model_results["f1"] > base_model_results["f1"]
), f'F1 score should be higher for the trained model: {trained_model_results["f1"]} > {base_model_results["f1"]}'
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
@ -109,15 +109,15 @@ if __name__ == "__main__":
for opt_level in ["O1", "O2"]:
baseline_not_trained, baseline_trained = train_baseline(opt_level)
accelerator_not_trained, accelerator_trained = train_integration(opt_level)
assert (
baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"]
), f'Accuracy not the same for untrained baseline and accelerator using opt_level={opt_level}: {baseline_not_trained["accuracy"]} == {accelerator_not_trained["accuracy"]}'
assert (
baseline_not_trained["f1"] == accelerator_not_trained["f1"]
), f'F1 not the same for untrained baseline and accelerator using opt_level={opt_level}: {baseline_not_trained["f1"]} == {accelerator_not_trained["f1"]}'
assert (
baseline_trained["accuracy"] == accelerator_trained["accuracy"]
), f'Accuracy not the same for trained baseline and accelerator using opt_level={opt_level}: {baseline_trained["accuracy"]} == {accelerator_trained["accuracy"]}'
assert (
baseline_trained["f1"] == accelerator_trained["f1"]
), f'F1 not the same for trained baseline and accelerator using opt_level={opt_level}: {baseline_trained["f1"]} == {accelerator_trained["f1"]}'
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
f"Accuracy not the same for untrained baseline and accelerator using opt_level={opt_level}: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
)
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
f"F1 not the same for untrained baseline and accelerator using opt_level={opt_level}: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
)
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
f"Accuracy not the same for trained baseline and accelerator using opt_level={opt_level}: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
)
assert baseline_trained["f1"] == accelerator_trained["f1"], (
f"F1 not the same for trained baseline and accelerator using opt_level={opt_level}: {baseline_trained['f1']} == {accelerator_trained['f1']}"
)

View File

@ -90,12 +90,12 @@ def train_baseline(zero_stage: int = 1, opt_level: str = "O1"):
model.destroy()
torch.cuda.empty_cache()
AcceleratorState()._reset_state(True)
assert (
trained_model_results["accuracy"] > base_model_results["accuracy"]
), f'Accuracy should be higher for the trained model: {trained_model_results["accuracy"]} > {base_model_results["accuracy"]}'
assert (
trained_model_results["f1"] > base_model_results["f1"]
), f'F1 score should be higher for the trained model: {trained_model_results["f1"]} > {base_model_results["f1"]}'
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
@ -129,12 +129,12 @@ def train_integration(zero_stage: int = 1, opt_level: str = "O1"):
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.destroy()
torch.cuda.empty_cache()
assert (
trained_model_results["accuracy"] > base_model_results["accuracy"]
), f'Accuracy should be higher for the trained model: {trained_model_results["accuracy"]} > {base_model_results["accuracy"]}'
assert (
trained_model_results["f1"] > base_model_results["f1"]
), f'F1 score should be higher for the trained model: {trained_model_results["f1"]} > {base_model_results["f1"]}'
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
AcceleratorState()._reset_state(True)
return base_model_results, trained_model_results
@ -145,17 +145,17 @@ if __name__ == "__main__":
for opt_level in ["O1", "O2", "O3"]:
baseline_not_trained, baseline_trained = train_baseline(zero_stage, opt_level)
accelerator_not_trained, accelerator_trained = train_integration(zero_stage, opt_level)
assert (
baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"]
), f'ZERO stage {zero_stage}, opt_level={opt_level}:\nAccuracy should be the same for the baseline and accelerator: {baseline_not_trained["accuracy"]} == {accelerator_not_trained["accuracy"]}'
assert (
baseline_not_trained["f1"] == accelerator_not_trained["f1"]
), f'ZERO stage {zero_stage}, opt_level={opt_level}:\nF1 score should be the same for the baseline and accelerator: {baseline_not_trained["f1"]} == {accelerator_not_trained["f1"]}'
assert (
baseline_trained["accuracy"] == accelerator_trained["accuracy"]
), f'ZERO stage {zero_stage}, opt_level={opt_level}:\nAccuracy should be the same for the baseline and accelerator: {baseline_trained["accuracy"]} == {accelerator_trained["accuracy"]}'
assert (
baseline_trained["f1"] == accelerator_trained["f1"]
), f'ZERO stage {zero_stage}, opt_level={opt_level}:\nF1 score should be the same for the baseline and accelerator: {baseline_trained["f1"]} == {accelerator_trained["f1"]}'
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
f"ZERO stage {zero_stage}, opt_level={opt_level}:\nAccuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
)
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
f"ZERO stage {zero_stage}, opt_level={opt_level}:\nF1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
)
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
f"ZERO stage {zero_stage}, opt_level={opt_level}:\nAccuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
)
assert baseline_trained["f1"] == accelerator_trained["f1"], (
f"ZERO stage {zero_stage}, opt_level={opt_level}:\nF1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
)
torch.distributed.destroy_process_group()

View File

@ -56,12 +56,12 @@ def train_baseline(opt_level="O2"):
trained_model_results = evaluate_model(model, eval_dataloader, METRIC)
assert (
trained_model_results["accuracy"] > base_model_results["accuracy"]
), f'Accuracy should be higher for the trained model: {trained_model_results["accuracy"]} > {base_model_results["accuracy"]}'
assert (
trained_model_results["f1"] > base_model_results["f1"]
), f'F1 score should be higher for the trained model: {trained_model_results["f1"]} > {base_model_results["f1"]}'
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
@ -89,12 +89,12 @@ def train_integration(opt_level="O2"):
trained_model_results = evaluate_model(model, eval_dataloader, METRIC)
assert (
trained_model_results["accuracy"] > base_model_results["accuracy"]
), f'Accuracy should be higher for the trained model: {trained_model_results["accuracy"]} > {base_model_results["accuracy"]}'
assert (
trained_model_results["f1"] > base_model_results["f1"]
), f'F1 score should be higher for the trained model: {trained_model_results["f1"]} > {base_model_results["f1"]}'
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
@ -104,15 +104,15 @@ if __name__ == "__main__":
baseline_not_trained, baseline_trained = train_baseline(opt_level)
accelerator_not_trained, accelerator_trained = train_integration(opt_level)
assert (
baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"]
), f'Accuracy should be the same for the baseline and accelerator: {baseline_not_trained["accuracy"]} == {accelerator_not_trained["accuracy"]}'
assert (
baseline_not_trained["f1"] == accelerator_not_trained["f1"]
), f'F1 score should be the same for the baseline and accelerator: {baseline_not_trained["f1"]} == {accelerator_not_trained["f1"]}'
assert (
baseline_trained["accuracy"] == accelerator_trained["accuracy"]
), f'Accuracy should be the same for the baseline and accelerator: {baseline_trained["accuracy"]} == {accelerator_trained["accuracy"]}'
assert (
baseline_trained["f1"] == accelerator_trained["f1"]
), f'F1 score should be the same for the baseline and accelerator: {baseline_trained["f1"]} == {accelerator_trained["f1"]}'
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
)
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
)
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
)
assert baseline_trained["f1"] == accelerator_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
)

View File

@ -0,0 +1,12 @@
FROM nvcr.io/nvidia/pytorch:24.07-py3
RUN pip install transformers evaluate datasets
RUN git clone https://github.com/huggingface/accelerate.git
RUN cd accelerate && \
pip install -e . && \
cd benchmarks/fp8
CMD ["/bin/bash"]

View File

@ -0,0 +1,32 @@
# FP8 Benchmarks
Comparing and running [torchao](https://github.com/pytorch/ao/tree/main/torchao/float8) FP8 with accelerate
## Overview
This repo provides scripts that compare native `torchao` model training against `accelerate`'s own integration. Each training setup has its own script, covering the following:
* Single GPU training (`non_distributed.py`)
* Multi-GPU training via DistributedDataParallelism (`ddp.py`)
* Fully Sharded Data Parallelism (`fsdp.py`)
* DeepSpeed ZeRO 1-3 (`deepspeed.py`)
To run them, it is recommended to use a Docker image (see the attached `Dockerfile`) rather than installing `torchao` manually.
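For example, a hypothetical build-and-run sequence with the attached `Dockerfile` (image tag chosen arbitrarily) could be:
```bash
# Build the benchmark image from the directory containing the Dockerfile
docker build -t accelerate-fp8-benchmarks .
# Start an interactive container with GPU access
docker run --gpus all -it --rm accelerate-fp8-benchmarks
```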
## Running
An official Docker image is available at `huggingface/accelerate:gpu-fp8-torchao-nightly`.
You can run all scripts with the core `accelerate launch` command; no `accelerate config` is needed.
For single GPU, run it via `python`:
```bash
python non_distributed.py
```
For the rest, run it via `accelerate launch`:
```bash
accelerate launch ddp.py # or fsdp.py, deepspeed.py
```
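`accelerate launch` also lets you pin the number of processes explicitly if you do not want it inferred from the available devices; two processes here is just an example:
```bash
accelerate launch --num_processes 2 fsdp.py
```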

View File

@ -0,0 +1,158 @@
# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This script tests to ensure that `accelerate` performs at the same level as raw `torchao`.
This particular script verifies this for DDP training.
"""
from functools import partial
import evaluate
import torch
from fp8_utils import get_training_utilities
from torch.nn.parallel import DistributedDataParallel as DDP
from torchao.float8 import convert_to_float8_training
from accelerate import Accelerator
from accelerate.state import AcceleratorState
from accelerate.utils import AORecipeKwargs, set_seed
MODEL_NAME = "bert-base-cased"
METRIC = evaluate.load("glue", "mrpc")
def evaluate_model(model, dataloader, metric, accelerator=None):
"Turns model to .eval(), runs dataloader, calculates metric, then turns eval back on"
model.eval()
for step, batch in enumerate(dataloader):
with torch.no_grad():
outputs = model(**batch)
predictions = outputs.logits.argmax(dim=-1)
references = batch["labels"]
if accelerator is not None and accelerator.num_processes > 1:
predictions, references = accelerator.gather_for_metrics((predictions, references))
metric.add_batch(predictions=predictions, references=references)
return metric.compute()
def filter_linear_layers(module, fqn, first_layer_name=None, last_layer_name=None):
if isinstance(module, torch.nn.Linear):
if module.in_features % 16 != 0 or module.out_features % 16 != 0:
return False
# For stability reasons, we skip the first and last linear layers
# Otherwise this can lead to the model not training or converging properly
if fqn in (first_layer_name, last_layer_name):
return False
return True
def train_baseline():
set_seed(42)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(MODEL_NAME)
first_linear = None
last_linear = None
for name, module in model.named_modules():
if isinstance(module, torch.nn.Linear):
if first_linear is None:
first_linear = name
last_linear = name
func = partial(filter_linear_layers, first_layer_name=first_linear, last_layer_name=last_linear)
accelerator = Accelerator()
device = accelerator.device
model.to(device)
convert_to_float8_training(model, module_filter_fn=func)
# Convert the model to DDP
device_ids, output_device = [accelerator.local_process_index], accelerator.local_process_index
model = DDP(model, device_ids=device_ids, output_device=output_device)
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.train()
for batch in train_dataloader:
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
batch = batch.to(device)
outputs = model(**batch)
loss = outputs.loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
def train_integration():
AcceleratorState()._reset_state(True)
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=[AORecipeKwargs()])
set_seed(42)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
MODEL_NAME, accelerator=accelerator
)
model, optimizer = accelerator.prepare(model, optimizer)
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.train()
for batch in train_dataloader:
outputs = model(**batch)
loss = outputs.loss
accelerator.backward(loss)
optimizer.step()
optimizer.zero_grad()
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
if __name__ == "__main__":
baseline_not_trained, baseline_trained = train_baseline()
accelerator_not_trained, accelerator_trained = train_integration()
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
)
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
)
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
)
assert baseline_trained["f1"] == accelerator_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
)
torch.distributed.destroy_process_group()

View File

@ -0,0 +1,213 @@
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This script tests to ensure that `accelerate` performs at the same level as raw `torchao`.
This particular script verifies this for deepspeed training.
"""
from functools import partial
from unittest.mock import patch
import deepspeed
import evaluate
import torch
from fp8_utils import evaluate_model, get_training_utilities
from torchao.float8 import convert_to_float8_training
from transformers.integrations import HfDeepSpeedConfig
from accelerate import Accelerator, DeepSpeedPlugin
from accelerate.state import AcceleratorState
from accelerate.utils import AORecipeKwargs, set_seed
MODEL_NAME = "bert-base-cased"
METRIC = evaluate.load("glue", "mrpc")
def filter_linear_layers(module, fqn, first_layer_name=None, last_layer_name=None):
if isinstance(module, torch.nn.Linear):
if module.in_features % 16 != 0 or module.out_features % 16 != 0:
return False
# For stability reasons, we skip the first and last linear layers
# Otherwise this can lead to the model not training or converging properly
if fqn in (first_layer_name, last_layer_name):
return False
return True
def train_baseline(zero_stage: int = 1):
set_seed(42)
# This forces transformers to think Zero-3 Init should be used
with patch("transformers.integrations.deepspeed.is_deepspeed_zero3_enabled") as mock:
mock.return_value = zero_stage == 3
config = HfDeepSpeedConfig(
{
"train_micro_batch_size_per_gpu": 16,
"gradient_accumulation_steps": 1,
"zero_optimization": {"stage": zero_stage},
}
)
plugin = DeepSpeedPlugin(hf_ds_config=config)
accelerator = Accelerator(deepspeed_plugin=plugin)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
MODEL_NAME, accelerator=accelerator
)
first_linear = None
last_linear = None
for name, module in model.named_modules():
if isinstance(module, torch.nn.Linear):
if first_linear is None:
first_linear = name
last_linear = name
func = partial(filter_linear_layers, first_layer_name=first_linear, last_layer_name=last_linear)
convert_to_float8_training(model, module_filter_fn=func)
import numpy as np
config = {
"train_batch_size": 32,
"train_micro_batch_size_per_gpu": 16,
"gradient_accumulation_steps": 1,
"zero_optimization": {
"stage": zero_stage,
"offload_optimizer": {"device": "none", "nvme_path": None},
"offload_param": {"device": "none", "nvme_path": None},
"stage3_gather_16bit_weights_on_model_save": False,
},
"gradient_clipping": 1.0,
"steps_per_print": np.inf,
"bf16": {"enabled": True},
"fp16": {"enabled": False},
"zero_allow_untested_optimizer": True,
}
(
model,
optimizer,
_,
lr_scheduler,
) = deepspeed.initialize(
model=model,
optimizer=optimizer,
lr_scheduler=lr_scheduler,
config_params=config,
)
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.train()
model_outputs = []
data = []
for batch in train_dataloader:
outputs = model(**batch)
data.append(batch.to("cpu"))
model_outputs.append(outputs.logits.to("cpu"))
loss = outputs.loss
model.backward(loss)
model.step()
for _ in range(accelerator.num_processes):
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.destroy()
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
del config
return base_model_results, trained_model_results, model_outputs, data
def train_integration(zero_stage: int = 1):
set_seed(42)
AcceleratorState()._reset_state(True)
config = HfDeepSpeedConfig(
{
"train_micro_batch_size_per_gpu": 16,
"gradient_accumulation_steps": 1,
"zero_optimization": {"stage": zero_stage},
}
)
deepspeed_plugin = DeepSpeedPlugin(
hf_ds_config=config,
)
# This forces transformers to think Zero-3 Init should be used
with patch("transformers.integrations.deepspeed.is_deepspeed_zero3_enabled") as mock:
mock.return_value = zero_stage == 3
accelerator = Accelerator(
mixed_precision="fp8", kwargs_handlers=[AORecipeKwargs()], deepspeed_plugin=deepspeed_plugin
)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
MODEL_NAME, accelerator=accelerator
)
model, optimizer, lr_scheduler, train_dataloader, eval_dataloader = accelerator.prepare(
model, optimizer, lr_scheduler, train_dataloader, eval_dataloader
)
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.train()
model_outputs = []
data = []
for batch in train_dataloader:
outputs = model(**batch)
data.append(batch.to("cpu"))
model_outputs.append(outputs.logits.to("cpu"))
loss = outputs.loss
accelerator.backward(loss)
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.destroy()
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
del config
return base_model_results, trained_model_results, model_outputs, data
if __name__ == "__main__":
for zero_stage in [1, 2, 3]:
baseline_not_trained, baseline_trained, baseline_outputs, baseline_data = train_baseline(zero_stage)
accelerator_not_trained, accelerator_trained, accelerator_outputs, accelerator_data = train_integration(
zero_stage
)
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
f"ZERO stage {zero_stage}: Accuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
)
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
f"ZERO stage {zero_stage}: F1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
)
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
f"ZERO stage {zero_stage}: Accuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
)
assert baseline_trained["f1"] == accelerator_trained["f1"], (
f"ZERO stage {zero_stage}: F1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
)
AcceleratorState()._reset_state(True)
torch.distributed.destroy_process_group()

View File

@ -0,0 +1,116 @@
# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import torch
def get_dataloaders(model_name: str, batch_size: int = 16):
from datasets import load_dataset
from torch.utils.data import DataLoader
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)
datasets = load_dataset("glue", "mrpc")
def tokenize_function(examples):
# max_length=None => use the model max length (it's actually the default)
outputs = tokenizer(examples["sentence1"], examples["sentence2"], truncation=True, max_length=None)
return outputs
# Apply the method we just defined to all the examples in all the splits of the dataset
# starting with the main process first:
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
remove_columns=["idx", "sentence1", "sentence2"],
)
# We also rename the 'label' column to 'labels' which is the expected name for labels by the models of the
# transformers library
tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
def collate_fn(examples):
return tokenizer.pad(
examples,
padding="longest",
pad_to_multiple_of=16, # Specific for FP8
return_tensors="pt",
)
# Instantiate dataloaders.
train_dataloader = DataLoader(
tokenized_datasets["train"], shuffle=True, collate_fn=collate_fn, batch_size=batch_size, drop_last=True
)
eval_dataloader = DataLoader(
tokenized_datasets["validation"],
shuffle=False,
collate_fn=collate_fn,
batch_size=16,
drop_last=True,
)
return train_dataloader, eval_dataloader
def get_training_utilities(model_name: str, batch_size: int = 16, accelerator=None, prepare=True):
"""
Returns a tuple of:
- Model
- Optimizer
- Train dataloader (prepared)
- Eval dataloader (prepared)
- LR Scheduler
Suitable for training on the MRPC dataset
"""
from torch.optim import AdamW
from transformers import AutoModelForSequenceClassification, get_linear_schedule_with_warmup
from accelerate import Accelerator
if accelerator is None:
accelerator = Accelerator()
model = AutoModelForSequenceClassification.from_pretrained(model_name)
train_dataloader, eval_dataloader = get_dataloaders(model_name, batch_size)
optimizer = AdamW(model.parameters(), lr=0.0001)
lr_scheduler = get_linear_schedule_with_warmup(
optimizer=optimizer,
num_warmup_steps=100,
num_training_steps=len(train_dataloader) * 2,
)
train_dataloader, eval_dataloader = accelerator.prepare(train_dataloader, eval_dataloader)
return model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
def get_named_parameters(model):
"""
Same thing as `Accelerator.get_named_parameters`. Returns a dict of the named parameters of the model (extracted
from its parallel wrapper).
"""
from accelerate.utils import extract_model_from_parallel
model = extract_model_from_parallel(model)
return {n: p for n, p in model.named_parameters()}
def evaluate_model(model, dataloader, metric, accelerator=None):
"Turns model to .eval(), runs dataloader, calculates metric, then turns eval back on"
model.eval()
for step, batch in enumerate(dataloader):
with torch.no_grad():
outputs = model(**batch)
predictions = outputs.logits.argmax(dim=-1)
references = batch["labels"]
if accelerator is not None and accelerator.num_processes > 1:
predictions, references = accelerator.gather_for_metrics((predictions, references))
metric.add_batch(predictions=predictions, references=references)
return metric.compute()

View File

@ -0,0 +1,173 @@
# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This script tests to ensure that `accelerate` performs at the same level as raw `torchao`.
This particular script verifies this for FSDP training.
"""
from functools import partial
import evaluate
import torch
from fp8_utils import get_training_utilities
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import MixedPrecision
from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy
from torchao.float8 import convert_to_float8_training
from transformers.models.bert import BertLayer
from accelerate import Accelerator
from accelerate import FullyShardedDataParallelPlugin as FSDPPlugin
from accelerate.state import AcceleratorState
from accelerate.utils import AORecipeKwargs, set_seed
MODEL_NAME = "bert-base-cased"
METRIC = evaluate.load("glue", "mrpc")
FSDP_WRAP_POLICY = partial(transformer_auto_wrap_policy, transformer_layer_cls={BertLayer})
def filter_linear_layers(module, fqn, first_layer_name=None, last_layer_name=None):
if isinstance(module, torch.nn.Linear):
if module.in_features % 16 != 0 or module.out_features % 16 != 0:
return False
# For stability reasons, we skip the first and last linear layers
# Otherwise this can lead to the model not training or converging properly
if fqn in (first_layer_name, last_layer_name):
return False
return True
def evaluate_model(model, dataloader, metric, accelerator=None):
"Turns model to .eval(), runs dataloader, calculates metric, then turns eval back on"
model.eval()
for step, batch in enumerate(dataloader):
with torch.no_grad():
outputs = model(**batch)
predictions = outputs.logits.argmax(dim=-1)
references = batch["labels"]
if accelerator is not None and accelerator.num_processes > 1:
predictions, references = accelerator.gather_for_metrics((predictions, references))
metric.add_batch(predictions=predictions, references=references)
return metric.compute()
def train_baseline():
set_seed(42)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(MODEL_NAME)
first_linear = None
last_linear = None
for name, module in model.named_modules():
if isinstance(module, torch.nn.Linear):
if first_linear is None:
first_linear = name
last_linear = name
func = partial(filter_linear_layers, first_layer_name=first_linear, last_layer_name=last_linear)
accelerator = Accelerator()
device = accelerator.device
model.to(device)
convert_to_float8_training(model, module_filter_fn=func)
# Convert the model to FSDP
model = FSDP(
model,
use_orig_params=True,
mixed_precision=MixedPrecision(param_dtype=torch.bfloat16, reduce_dtype=torch.float32),
auto_wrap_policy=FSDP_WRAP_POLICY,
)
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.train()
for batch in train_dataloader:
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
batch = batch.to(device)
outputs = model(**batch)
loss = outputs.loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
def train_integration():
AcceleratorState()._reset_state(True)
fsdp_plugin = FSDPPlugin(
auto_wrap_policy=FSDP_WRAP_POLICY,
use_orig_params=True,
mixed_precision_policy=MixedPrecision(param_dtype=torch.bfloat16, reduce_dtype=torch.float32),
)
accelerator = Accelerator(mixed_precision="fp8", fsdp_plugin=fsdp_plugin, kwargs_handlers=[AORecipeKwargs()])
set_seed(42)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
MODEL_NAME, accelerator=accelerator
)
model, optimizer = accelerator.prepare(model, optimizer)
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.train()
for batch in train_dataloader:
outputs = model(**batch)
loss = outputs.loss
accelerator.backward(loss)
optimizer.step()
optimizer.zero_grad()
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
if __name__ == "__main__":
baseline_not_trained, baseline_trained = train_baseline()
accelerator_not_trained, accelerator_trained = train_integration()
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
)
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
)
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
)
assert baseline_trained["f1"] == accelerator_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
)
torch.distributed.destroy_process_group()

View File

@ -0,0 +1,145 @@
# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This script tests to ensure that `accelerate` performs at the same level as raw `torchao`.
This particular script verifies this for single GPU training.
"""
from functools import partial
import evaluate
import torch
from fp8_utils import get_training_utilities
from torchao.float8 import convert_to_float8_training
from accelerate import Accelerator
from accelerate.state import AcceleratorState
from accelerate.utils import AORecipeKwargs, set_seed
MODEL_NAME = "bert-base-cased"
METRIC = evaluate.load("glue", "mrpc")
def evaluate_model(model, dataloader, metric, accelerator=None):
"Turns model to .eval(), runs dataloader, calculates metric, then turns eval back on"
model.eval()
for step, batch in enumerate(dataloader):
with torch.no_grad():
outputs = model(**batch)
predictions = outputs.logits.argmax(dim=-1)
references = batch["labels"]
if accelerator is not None and accelerator.num_processes > 1:
predictions, references = accelerator.gather_for_metrics((predictions, references))
metric.add_batch(predictions=predictions, references=references)
return metric.compute()
def filter_linear_layers(module, fqn, first_layer_name=None, last_layer_name=None):
if isinstance(module, torch.nn.Linear):
if module.in_features % 16 != 0 or module.out_features % 16 != 0:
return False
# For stability reasons, we skip the first and last linear layers
# Otherwise this can lead to the model not training or converging properly
if fqn in (first_layer_name, last_layer_name):
return False
return True
def train_baseline():
set_seed(42)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(MODEL_NAME)
first_linear = None
last_linear = None
for name, module in model.named_modules():
if isinstance(module, torch.nn.Linear):
if first_linear is None:
first_linear = name
last_linear = name
func = partial(filter_linear_layers, first_layer_name=first_linear, last_layer_name=last_linear)
model.to("cuda")
convert_to_float8_training(model, module_filter_fn=func)
base_model_results = evaluate_model(model, eval_dataloader, METRIC)
model.train()
for batch in train_dataloader:
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
outputs = model(**batch)
loss = outputs.loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
def train_integration():
set_seed(42)
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=[AORecipeKwargs()])
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
MODEL_NAME, accelerator=accelerator
)
model = accelerator.prepare(model)
base_model_results = evaluate_model(model, eval_dataloader, METRIC)
model.train()
for batch in train_dataloader:
outputs = model(**batch)
loss = outputs.loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
if __name__ == "__main__":
baseline_not_trained, baseline_trained = train_baseline()
AcceleratorState._reset_state(True)
accelerator_not_trained, accelerator_trained = train_integration()
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
)
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
)
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
)
assert baseline_trained["f1"] == accelerator_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
)

View File

@ -1,4 +1,7 @@
FROM nvcr.io/nvidia/pytorch:24.07-py3
ARG BASE_YEAR=25
ARG BASE_MONTH=03
FROM nvcr.io/nvidia/pytorch:${BASE_YEAR}.${BASE_MONTH}-py3
RUN pip install transformers evaluate datasets
RUN git clone https://github.com/huggingface/accelerate.git

View File

@ -79,12 +79,12 @@ def train_baseline():
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
assert (
trained_model_results["accuracy"] > base_model_results["accuracy"]
), f'Accuracy should be higher for the trained model: {trained_model_results["accuracy"]} > {base_model_results["accuracy"]}'
assert (
trained_model_results["f1"] > base_model_results["f1"]
), f'F1 score should be higher for the trained model: {trained_model_results["f1"]} > {base_model_results["f1"]}'
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
@ -114,12 +114,12 @@ def train_integration():
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
assert (
trained_model_results["accuracy"] > base_model_results["accuracy"]
), f'Accuracy should be higher for the trained model: {trained_model_results["accuracy"]} > {base_model_results["accuracy"]}'
assert (
trained_model_results["f1"] > base_model_results["f1"]
), f'F1 score should be higher for the trained model: {trained_model_results["f1"]} > {base_model_results["f1"]}'
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
@ -128,17 +128,17 @@ if __name__ == "__main__":
baseline_not_trained, baseline_trained = train_baseline()
accelerator_not_trained, accelerator_trained = train_integration()
assert (
baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"]
), f'Accuracy should be the same for the baseline and accelerator: {baseline_not_trained["accuracy"]} == {accelerator_not_trained["accuracy"]}'
assert (
baseline_not_trained["f1"] == accelerator_not_trained["f1"]
), f'F1 score should be the same for the baseline and accelerator: {baseline_not_trained["f1"]} == {accelerator_not_trained["f1"]}'
assert (
baseline_trained["accuracy"] == accelerator_trained["accuracy"]
), f'Accuracy should be the same for the baseline and accelerator: {baseline_trained["accuracy"]} == {accelerator_trained["accuracy"]}'
assert (
baseline_trained["f1"] == accelerator_trained["f1"]
), f'F1 score should be the same for the baseline and accelerator: {baseline_trained["f1"]} == {accelerator_trained["f1"]}'
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
)
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
)
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
)
assert baseline_trained["f1"] == accelerator_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
)
torch.distributed.destroy_process_group()

View File

@ -66,7 +66,7 @@ def train_baseline(zero_stage: int = 1):
import numpy as np
config = {
"train_batch_size": 32,
"train_batch_size": 16,
"train_micro_batch_size_per_gpu": 16,
"gradient_accumulation_steps": 1,
"zero_optimization": {
@ -113,12 +113,12 @@ def train_baseline(zero_stage: int = 1):
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.destroy()
assert (
trained_model_results["accuracy"] > base_model_results["accuracy"]
), f'Accuracy should be higher for the trained model: {trained_model_results["accuracy"]} > {base_model_results["accuracy"]}'
assert (
trained_model_results["f1"] > base_model_results["f1"]
), f'F1 score should be higher for the trained model: {trained_model_results["f1"]} > {base_model_results["f1"]}'
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results, model_outputs, data
@ -159,32 +159,33 @@ def train_integration(zero_stage: int = 1):
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.destroy()
assert (
trained_model_results["accuracy"] > base_model_results["accuracy"]
), f'Accuracy should be higher for the trained model: {trained_model_results["accuracy"]} > {base_model_results["accuracy"]}'
assert (
trained_model_results["f1"] > base_model_results["f1"]
), f'F1 score should be higher for the trained model: {trained_model_results["f1"]} > {base_model_results["f1"]}'
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results, model_outputs, data
if __name__ == "__main__":
# for zero_stage in [1, 2, 3]:
zero_stage = 1
baseline_not_trained, baseline_trained, baseline_outputs, baseline_data = train_baseline(zero_stage)
accelerator_not_trained, accelerator_trained, accelerator_outputs, accelerator_data = train_integration(zero_stage)
assert (
baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"]
), f'ZERO stage {zero_stage}: Accuracy should be the same for the baseline and accelerator: {baseline_not_trained["accuracy"]} == {accelerator_not_trained["accuracy"]}'
assert (
baseline_not_trained["f1"] == accelerator_not_trained["f1"]
), f'ZERO stage {zero_stage}: F1 score should be the same for the baseline and accelerator: {baseline_not_trained["f1"]} == {accelerator_not_trained["f1"]}'
assert (
baseline_trained["accuracy"] == accelerator_trained["accuracy"]
), f'ZERO stage {zero_stage}: Accuracy should be the same for the baseline and accelerator: {baseline_trained["accuracy"]} == {accelerator_trained["accuracy"]}'
assert (
baseline_trained["f1"] == accelerator_trained["f1"]
), f'ZERO stage {zero_stage}: F1 score should be the same for the baseline and accelerator: {baseline_trained["f1"]} == {accelerator_trained["f1"]}'
for zero_stage in [1, 2, 3]:
baseline_not_trained, baseline_trained, baseline_outputs, baseline_data = train_baseline(zero_stage)
accelerator_not_trained, accelerator_trained, accelerator_outputs, accelerator_data = train_integration(
zero_stage
)
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
f"ZERO stage {zero_stage}: Accuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
)
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
f"ZERO stage {zero_stage}: F1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
)
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
f"ZERO stage {zero_stage}: Accuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
)
assert baseline_trained["f1"] == accelerator_trained["f1"], (
f"ZERO stage {zero_stage}: F1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
)
torch.distributed.destroy_process_group()
torch.distributed.destroy_process_group()

View File

@ -91,12 +91,12 @@ def train_baseline():
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
assert (
trained_model_results["accuracy"] > base_model_results["accuracy"]
), f'Accuracy should be higher for the trained model: {trained_model_results["accuracy"]} > {base_model_results["accuracy"]}'
assert (
trained_model_results["f1"] > base_model_results["f1"]
), f'F1 score should be higher for the trained model: {trained_model_results["f1"]} > {base_model_results["f1"]}'
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
@ -131,12 +131,12 @@ def train_integration():
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
assert (
trained_model_results["accuracy"] > base_model_results["accuracy"]
), f'Accuracy should be higher for the trained model: {trained_model_results["accuracy"]} > {base_model_results["accuracy"]}'
assert (
trained_model_results["f1"] > base_model_results["f1"]
), f'F1 score should be higher for the trained model: {trained_model_results["f1"]} > {base_model_results["f1"]}'
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
@ -145,17 +145,17 @@ if __name__ == "__main__":
baseline_not_trained, baseline_trained = train_baseline()
accelerator_not_trained, accelerator_trained = train_integration()
assert (
baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"]
), f'Accuracy should be the same for the baseline and accelerator: {baseline_not_trained["accuracy"]} == {accelerator_not_trained["accuracy"]}'
assert (
baseline_not_trained["f1"] == accelerator_not_trained["f1"]
), f'F1 score should be the same for the baseline and accelerator: {baseline_not_trained["f1"]} == {accelerator_not_trained["f1"]}'
assert (
baseline_trained["accuracy"] == accelerator_trained["accuracy"]
), f'Accuracy should be the same for the baseline and accelerator: {baseline_trained["accuracy"]} == {accelerator_trained["accuracy"]}'
assert (
baseline_trained["f1"] == accelerator_trained["f1"]
), f'F1 score should be the same for the baseline and accelerator: {baseline_trained["f1"]} == {accelerator_trained["f1"]}'
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
)
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
)
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
)
assert baseline_trained["f1"] == accelerator_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
)
torch.distributed.destroy_process_group()

View File

@ -70,12 +70,12 @@ def train_baseline():
trained_model_results = evaluate_model(model, eval_dataloader, METRIC)
assert (
trained_model_results["accuracy"] > base_model_results["accuracy"]
), f'Accuracy should be higher for the trained model: {trained_model_results["accuracy"]} > {base_model_results["accuracy"]}'
assert (
trained_model_results["f1"] > base_model_results["f1"]
), f'F1 score should be higher for the trained model: {trained_model_results["f1"]} > {base_model_results["f1"]}'
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
@ -104,12 +104,12 @@ def train_integration():
trained_model_results = evaluate_model(model, eval_dataloader, METRIC)
assert (
trained_model_results["accuracy"] > base_model_results["accuracy"]
), f'Accuracy should be higher for the trained model: {trained_model_results["accuracy"]} > {base_model_results["accuracy"]}'
assert (
trained_model_results["f1"] > base_model_results["f1"]
), f'F1 score should be higher for the trained model: {trained_model_results["f1"]} > {base_model_results["f1"]}'
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
@ -118,15 +118,15 @@ if __name__ == "__main__":
baseline_not_trained, baseline_trained = train_baseline()
accelerator_not_trained, accelerator_trained = train_integration()
assert (
baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"]
), f'Accuracy should be the same for the baseline and accelerator: {baseline_not_trained["accuracy"]} == {accelerator_not_trained["accuracy"]}'
assert (
baseline_not_trained["f1"] == accelerator_not_trained["f1"]
), f'F1 score should be the same for the baseline and accelerator: {baseline_not_trained["f1"]} == {accelerator_not_trained["f1"]}'
assert (
baseline_trained["accuracy"] == accelerator_trained["accuracy"]
), f'Accuracy should be the same for the baseline and accelerator: {baseline_trained["accuracy"]} == {accelerator_trained["accuracy"]}'
assert (
baseline_trained["f1"] == accelerator_trained["f1"]
), f'F1 score should be the same for the baseline and accelerator: {baseline_trained["f1"]} == {accelerator_trained["f1"]}'
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
)
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
)
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
)
assert baseline_trained["f1"] == accelerator_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
)

View File

@ -0,0 +1,74 @@
# FSDP2 Benchmarks
This benchmark showcases `FSDP2` in 🤗 `accelerate` and compares it to a raw `torch` baseline.
## Overview
This benchmark consists of two parts:
- `main.py` is the main script that runs the benchmark
- `visualize.py` is the script that visualizes the results (if `--output_dir` was specified for the previous command)
## Motivation
We want to showcase that 🤗 `accelerate`'s integration of `FSDP2` is on par with raw PyTorch, and highlight a "broken" part of PyTorch: creating an optimizer before applying `FSDP2` **doesn't result in a working training loop** (more on this later).
This script showcases **matching memory usage and convergence between `accelerate` and `torch`'s baseline.**
To deal with this breaking change (and keep the API backward compatible with `FSDP1`), `accelerate` had to come up with a workaround, since `accelerate` assumes that the user will nearly always create the model, optimizer, scheduler, etc. beforehand and bring them along themselves. Without a fix, creating the optimizer beforehand leads to a stark increase in memory as well as the model not training at all.
To work around this, we replace the parameters inside the optimizer with the newly created FSDP2 sharded ones. More about this can be found in this [blog post (TBD)](TODO)
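Below is a minimal sketch of that idea, assuming the process group is already initialized and the whole model is sharded in a single `fully_shard` call; the actual implementation in this benchmark's `utils.py` splits the same steps into `replace_optimizer_params` and `swap_back_optimizer_params` and shards each decoder layer separately:
```python
import torch
from torch.distributed.fsdp import fully_shard


def shard_with_existing_optimizer(model: torch.nn.Module, optimizer: torch.optim.Optimizer):
    # Remember only the storage pointers of the original parameters
    old_ptrs = {name: p.data_ptr() for name, p in model.named_parameters()}

    # Drop the optimizer's references so `fully_shard` triggers a fresh allocation,
    # stashing the original pointer on each placeholder for the mapping below
    for group in optimizer.param_groups:
        for i, p in enumerate(group["params"]):
            placeholder = torch.empty_like(p)
            placeholder.data_ptr = p.data_ptr()
            group["params"][i] = placeholder

    fully_shard(model)  # parameters are now freshly allocated, sharded DTensors

    # Point the optimizer at the new sharded parameters via the saved pointers
    new_params = dict(model.named_parameters())
    mapping = {ptr: new_params[name] for name, ptr in old_ptrs.items()}
    for group in optimizer.param_groups:
        group["params"] = [mapping[p.data_ptr] for p in group["params"]]
```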
> [!WARNING]
> This script is intended to fit on 2x 24GB GPUs, though with so few GPUs it is only possible to see the difference in convergence, not the memory difference (discrepancies in gradient allocation result in lower memory usage in the non-fixed case). Results from 8x H100 GPUs, where the difference is visible, are attached below.
> TLDR: more GPUs = bigger memory difference between fixed and non-fixed cases.
## Results
Here are the results from running the benchmark on 8x H100 GPUs:
<p align="center">
<img src="imgs/allocated_memory.png" width="80%" alt="Allocated Memory Usage">
</p>
<p align="center">
<img src="imgs/reserved_memory.png" width="80%" alt="Reserved Memory Usage">
</p>
As you can see, the memory usage of `accelerate` and `torch_optimizer_after_fsdp` (the **intended** way) is very similar, while `torch_optimizer_before_fsdp_not_fixed` uses significantly more memory. Our fix in `torch_optimizer_before_fsdp_fixed` brings the memory usage back in line with the **intended** approach.
> [!WARNING]
> Timing discrepancies are due to all benchmarks being run in a single script.
## Running
To run the benchmark, you can either use `accelerate launch` or `torchrun`:
```bash
accelerate launch main.py
```
```bash
# For two GPUs
torchrun --nproc_per_node 2 main.py
```
The script supports multiple configurable options; you can learn about them by running:
```bash
python3 main.py --help
```
This script will run 4 different benchmarks:
- `torch_optimizer_after_fsdp`: `torch` baseline where optimizer is created after applying `FSDP2`, this is the **intended** way to do it
- `torch_optimizer_before_fsdp_not_fixed`: `torch` baseline where optimizer is created before applying `FSDP2` without fixing the optimizer parameters
- `torch_optimizer_before_fsdp_fixed`: `torch` baseline where optimizer is created before applying `FSDP2` with our fix to the optimizer
- `accelerate`: `accelerate`'s own integration of `FSDP2` where optimizer is created before applying `FSDP2`, but we apply our fix to the optimizer
Memory results are saved in the folder specified by the `--output_dir` argument.
Optionally, you can specify `--save_memory_snapshot` to also save a torch memory snapshot, which can then be viewed using [`torch memory viz`](https://pytorch.org/memory_viz).
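For example, a run on two GPUs that saves both the memory traces and the snapshots might look like this (the output directory name is illustrative):
```bash
torchrun --nproc_per_node 2 main.py --output_dir benchmark_results --save_memory_snapshot
```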
## Visualizing results
To visualize the results, you can run:
```bash
python3 visualize.py --dir <path_to_output_dir>
```
This will create two plots showing allocated and reserved memory usage across all the benchmarks discussed above.
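The visualizer also exposes `--memory_threshold` and `--filter_partition` (see its argument parser in `visualize.py` below) to drop early low-memory samples that roughly correspond to model loading; an illustrative invocation:
```bash
python3 visualize.py --dir benchmark_results --memory_threshold 1000 --filter_partition 0.33
```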

Binary file not shown.

After

Width:  |  Height:  |  Size: 124 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 56 KiB

122
benchmarks/fsdp2/main.py Normal file
View File

@ -0,0 +1,122 @@
# Copyright 2025 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import functools
from typing import Callable
import torch
from accelerate import Accelerator
from utils import parse_args, prepare_accelerate, prepare_torch
MODEL_NAME = "Qwen/Qwen2.5-1.5B-Instruct"
LEARNING_RATE = 3e-5
CONFIG = {
"model_name": MODEL_NAME,
"learning_rate": LEARNING_RATE,
}
def train(
model: torch.nn.Module,
optimizer: torch.optim.Optimizer,
train_dataloader: torch.utils.data.DataLoader,
accelerator: Accelerator,
) -> torch.Tensor:
losses = []
for batch in train_dataloader:
optimizer.zero_grad()
outputs = model(**batch, use_cache=False)
loss = outputs.loss
losses.append(loss.item())
accelerator.backward(loss)
optimizer.step()
return torch.tensor(losses)
def evaluate(args, config: dict, init_fn: Callable, run_name: str) -> torch.Tensor:
model, optimizer, dataloader, accelerator, memory_tracker = init_fn(args, config)
loss = train(model, optimizer, dataloader, accelerator)
memory_tracker.stop()
msg = f"""Results for {run_name} (rank 0):
Loss: {loss[-1].item()}
Peak Allocated Memory: {float(memory_tracker.peak_allocated_memory):.2f} MB
Peak Reserved Memory: {float(memory_tracker.peak_reserved_memory):.2f} MB
{"-" * 34}"""
accelerator.print(msg)
return loss
def main():
args = parse_args()
evaluations = [
functools.partial(
evaluate,
init_fn=functools.partial(prepare_torch, post_shard_optimizer=False, apply_optimizer_fix=True),
run_name="Optimizer Before FSDP (w/ fix)",
),
functools.partial(
evaluate,
init_fn=functools.partial(prepare_torch, post_shard_optimizer=False, apply_optimizer_fix=False),
run_name="Optimizer Before FSDP (w/o fix)",
),
functools.partial(
evaluate,
init_fn=functools.partial(prepare_torch, post_shard_optimizer=True),
run_name="Optimizer After FSDP",
),
functools.partial(evaluate, init_fn=prepare_accelerate, run_name="Accelerate"),
]
labels = [
"Optimizer Before FSDP (w/ fix)",
"Optimizer Before FSDP (w/o fix)",
"Optimizer After FSDP",
"Accelerate",
]
results = {}
torch.use_deterministic_algorithms(True)
for evaluation, label in zip(evaluations, labels):
results[label] = evaluation(args, CONFIG)
torch.testing.assert_close(
results["Optimizer After FSDP"],
results["Optimizer Before FSDP (w/ fix)"],
msg="Optimizer After FSDP and Optimizer Before FSDP (w/ fix) should be the same",
)
torch.testing.assert_close(
results["Optimizer After FSDP"],
results["Accelerate"],
msg="Optimizer After FSDP and Accelerate should be the same",
)
torch.testing.assert_close(
results["Accelerate"],
results["Optimizer Before FSDP (w/ fix)"],
msg="Accelerate and Optimizer Before FSDP (w/ fix) should be the same",
)
torch.distributed.destroy_process_group()
if __name__ == "__main__":
main()

View File

@ -0,0 +1,130 @@
# Copyright 2025 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import gc
import json
import os
import threading
import time
import psutil
import torch
from accelerate import PartialState
class MemoryTracker:
def __init__(
self,
device: torch.device,
output_directory: str,
run_name: str,
save_memory_snapshot: bool,
log_interval: float = 0.01,
):
"""Class for tracking gpu and cpu memory usage of the process.
Args:
device (`torch.device`):
PyTorch device to monitor.
output_directory (`str`):
Directory to save the memory usage data to, will be created if it doesn't exist.
run_name (`str`):
Name of the run, will be used to name the output files.
save_memory_snapshot (`bool`):
Whether to also save `torch.cuda.memory._dump_snapshot` to the output directory.
log_interval (`float`, *optional*):
Interval in seconds between memory measurements. Defaults to 0.01.
"""
self.log_interval = log_interval
self.save_memory_snapshot = save_memory_snapshot
self.output_directory = output_directory
self.run_name = run_name
self.timestamps = []
self.allocated_memory = []
self.reserved_memory = []
self.virtual_memory = []
self.start_time = None
self.running = False
self._thread = None
self._state = PartialState()
self._process = psutil.Process()
self._device = device
self.torch_accelerator_module = getattr(torch, device.type, torch.cuda)
def _monitor(self):
self.start_time = time.time()
while self.running:
allocated = self.torch_accelerator_module.memory_allocated(self._device) / (1024 * 1024)
reserved = self.torch_accelerator_module.memory_reserved(self._device) / (1024 * 1024)
virtual_memory = self._process.memory_info().rss / (1024 * 1024)
self.allocated_memory.append(allocated)
self.reserved_memory.append(reserved)
self.virtual_memory.append(virtual_memory)
self.timestamps.append(time.time() - self.start_time)
time.sleep(self.log_interval)
def start(self):
gc.collect()
self.torch_accelerator_module.empty_cache()
if self.output_directory:
os.makedirs(self.output_directory, exist_ok=True)
if self.save_memory_snapshot:
self.torch_accelerator_module.memory._record_memory_history()
self.running = True
self._thread = threading.Thread(target=self._monitor)
self._thread.daemon = True
self._thread.start()
def stop(self):
self.running = False
if self._thread:
self._thread.join()
if self.save_memory_snapshot and self._state.is_main_process and self.output_directory:
output_file = os.path.join(self.output_directory, f"{self.run_name}_memory_snapshot.pkl")
self.torch_accelerator_module.memory._dump_snapshot(output_file)
if self._state.is_main_process and self.output_directory:
path = os.path.join(self.output_directory, f"{self.run_name}_memory_usage.json")
with open(path, "w") as f:
json.dump(
{
"timestamps": self.timestamps,
"allocated_memory": self.allocated_memory,
"reserved_memory": self.reserved_memory,
"virtual_memory": self.virtual_memory,
},
f,
)
if self.save_memory_snapshot:
self.torch_accelerator_module.memory._record_memory_history(False)
self.torch_accelerator_module.empty_cache()
@property
def peak_allocated_memory(self):
return max(self.allocated_memory)
@property
def peak_reserved_memory(self):
return max(self.reserved_memory)

290
benchmarks/fsdp2/utils.py Normal file
View File

@ -0,0 +1,290 @@
# Copyright 2025 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
from types import MethodType
from typing import Union
import torch
from datasets import load_dataset
from measure_utils import MemoryTracker
from torch.distributed.fsdp import MixedPrecisionPolicy, fully_shard
from torch.optim import AdamW
from torch.utils.data import DataLoader
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer, DataCollatorForLanguageModeling
from transformers.models.qwen2.modeling_qwen2 import Qwen2DecoderLayer
from accelerate import Accelerator, FullyShardedDataParallelPlugin
from accelerate.state import AcceleratorState, is_initialized
from accelerate.utils import convert_outputs_to_fp32, set_seed
SEED = 421
def get_named_parameters(model: torch.nn.Module, drop_refs: bool = False) -> dict[str, Union[torch.Tensor, int]]:
"""
This function returns a dictionary mapping the parameter names to their data pointers or
the original parameters if `drop_refs` is `False`.
It is used to get the original parameter names before `fully_shard` is applied.
We only return the data pointers, so we drop the references to the original parameters
and `fully_shard` will then trigger a new allocation for the sharded ones.
Args:
model (`torch.nn.Module`): Model instance to get the named parameters from
drop_refs (`bool`, *optional*, defaults to `False`): Whether to drop the references to the original parameters
Returns:
`dict[str, Union[torch.Tensor, int]]`: Dictionary mapping the parameter names to their data pointers or the original parameters if `drop_refs` is `False`
"""
named_parameters = {}
for n, p in model.named_parameters():
# We only preserve the data pointers to have the unique 1:1 mapping between the original and the sharded parameters
named_parameters[n] = p.data_ptr() if drop_refs else p
return named_parameters
def replace_optimizer_params(optimizer: torch.optim.Optimizer):
"""
This function is called before using `fully_shard` on the model. It replaces the parameters of the optimizer with
empty tensors, so `fully_shard` can trigger a new allocation for the sharded ones. After this, we swap the parameters
`data_ptr` to the original one, so we can reuse that later to map the sharded parameters to the original ones.
This function modifies the optimizer in-place.
Args:
optimizer (torch.optim.Optimizer): Optimizer instance which contains the original model parameters
"""
for param_group in optimizer.param_groups:
for i, p in enumerate(param_group["params"]):
# We drop a reference to the original param here, so that _move_states_to_device triggers a reallocation
# This is required or else the `fully_shard` -> `_move_states_to_device` uses the original memory address
# for the sharded parameters, and we get a weird/undefined behavior.
param_group["params"][i] = torch.empty_like(p)
# We save the original data_ptr, so we can swap back the parameters later
param_group["params"][i].data_ptr = p.data_ptr()
def swap_back_optimizer_params(
model: torch.nn.Module, optimizer: torch.optim.Optimizer, old_named_parameter_pointers: dict[str, int]
):
"""
This function is the counterpart of `replace_optimizer_params`. It is called after `fully_shard` being applied to
the model. It swaps the parameters of the optimizer to their sharded counterparts.
It is done using the `data_ptr` mapping prepared in `replace_optimizer_params` and `get_named_parameters`.
Args:
model (`torch.nn.Module`): Model instance to get the new named parameters from
optimizer (`torch.optim.Optimizer`): Optimizer instance to swap the parameters of
old_named_parameter_pointers (`dict[str, int]`): Dictionary mapping the original parameter names: data_ptrs to the new ones
"""
# We get the new named parameters after `fully_shard` being applied
# We don't drop the references as we need the sharded parameters now
new_named_parameters = get_named_parameters(model, drop_refs=False)
# We create a mapping from the original data_ptr to the new sharded param corresponding to it
mapping = {p: new_named_parameters[n] for n, p in old_named_parameter_pointers.items()}
for param_group in optimizer.param_groups:
# We swap the parameters of the optimizer to the new sharded ones
param_group["params"] = [mapping[p.data_ptr] for p in param_group["params"]]
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument(
"--output_dir",
type=str,
help="Directory to save the benchmarking results.",
)
parser.add_argument(
"--save_memory_snapshot",
action="store_true",
default=False,
help="If True, `torch.cuda.memory._dump_snapshot` will be used to additionaly save the memory trace.",
)
######################
# Training arguments #
######################
parser.add_argument(
"--batch_size",
type=int,
default=2,
help="Batch size for the training loop.",
)
parser.add_argument(
"--block_size",
type=int,
default=128,
help="The maximum sequence length to use with the model.",
)
parser.add_argument(
"--dataset_fraction",
type=float,
default=1.0,
help="Fraction of the dataset to use.",
)
return parser.parse_args()
def prepare_dataloader(tokenizer, args, accelerator: Accelerator) -> DataLoader:
dataset = load_dataset("tiny_shakespeare", split="train", trust_remote_code=True)
def tokenize_function(example):
return tokenizer(
example["text"],
)
dataset = dataset.map(
tokenize_function,
batched=True,
remove_columns=["text"],
)
block_size = min(tokenizer.model_max_length, args.block_size)
def group_texts(examples):
concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
total_length = len(concatenated_examples[list(examples.keys())[0]])
total_length = (total_length // block_size) * block_size
result = {
k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
for k, t in concatenated_examples.items()
}
result["labels"] = result["input_ids"].copy()
return result
dataset = dataset.map(group_texts, batched=True)
dataset = dataset.select(range(int(len(dataset) * args.dataset_fraction)))
def collate_fn(examples):
return DataCollatorForLanguageModeling(
tokenizer=tokenizer,
mlm=False,
)(examples)
dataloader = DataLoader(
dataset,
batch_size=args.batch_size,
collate_fn=collate_fn,
)
dataloader = accelerator.prepare(dataloader)
return dataloader
def get_model(model_name: str):
# We require the model to be loaded in fp32, otherwise benchmarks don't match, as accelerate upcasts the parameters to fp32
config = AutoConfig.from_pretrained(model_name, trust_remote_code=True, torch_dtype=torch.float32)
model = AutoModelForCausalLM.from_config(config)
return model
def get_tokenizer(model_name: str):
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
return tokenizer
def prepare_torch(
args, config: dict, post_shard_optimizer: bool = False, apply_optimizer_fix: bool = False
) -> tuple[torch.nn.Module, torch.optim.Optimizer, torch.utils.data.DataLoader, Accelerator, MemoryTracker]:
mp_policy = MixedPrecisionPolicy(
param_dtype=torch.bfloat16,
reduce_dtype=torch.bfloat16,
output_dtype=torch.bfloat16,
)
accelerator = Accelerator(mixed_precision="bf16")
set_seed(SEED)
is_fixed = "fixed" if apply_optimizer_fix else "not_fixed"
is_post_shard = "optimizer_after_fsdp" if post_shard_optimizer else "optimizer_before_fsdp"
run_name = f"torch_{is_post_shard}" if post_shard_optimizer else f"torch_{is_post_shard}_{is_fixed}"
tokenizer = get_tokenizer(config["model_name"])
train_dataloader = prepare_dataloader(tokenizer, args, accelerator)
memory_tracker = MemoryTracker(accelerator.device, args.output_dir, run_name, args.save_memory_snapshot)
memory_tracker.start()
model = get_model(config["model_name"])
optimizer = None
if not post_shard_optimizer:
optimizer = AdamW(model.parameters(), lr=config["learning_rate"])
if apply_optimizer_fix:
# We drop the references to the original parameters, so that `fully_shard` can trigger a new allocation
# Then we get the `module_name: data_ptr` mapping, so we can swap back the parameters later
old_named_parameters = get_named_parameters(model, drop_refs=True)
# We replace the parameters of the optimizer with empty tensors, so that `fully_shard` can trigger a new allocation
# We also change the `data_ptr` of the parameters to the original ones, so we can swap back the parameters later
replace_optimizer_params(optimizer)
for module in model.modules():
if isinstance(module, Qwen2DecoderLayer):
fully_shard(module, mp_policy=mp_policy)
fully_shard(model, mp_policy=mp_policy)
# We do this to imitate how accelerate forces outputs to be in fp32 via `convert_outputs_to_fp32`
autocast_context = torch.autocast(device_type=accelerator.state.device.type, dtype=torch.bfloat16)
model_forward_func = model.forward.__func__
new_forward = autocast_context(model_forward_func)
model.forward = MethodType(new_forward, model)
model.forward = MethodType(convert_outputs_to_fp32(model.forward.__func__), model)
if post_shard_optimizer:
optimizer = AdamW(model.parameters(), lr=config["learning_rate"])
if not post_shard_optimizer and apply_optimizer_fix:
# We swap back the parameters of the optimizer to the original ones
swap_back_optimizer_params(model, optimizer, old_named_parameters)
return model, optimizer, train_dataloader, accelerator, memory_tracker
def prepare_accelerate(
args, config: dict
) -> tuple[torch.nn.Module, torch.optim.Optimizer, torch.utils.data.DataLoader, Accelerator, MemoryTracker]:
if is_initialized():
AcceleratorState()._reset_state(True)
fsdp_plugin = FullyShardedDataParallelPlugin(
fsdp_version=2,
auto_wrap_policy="transformer_based_wrap",
transformer_cls_names_to_wrap=["Qwen2DecoderLayer"],
)
accelerator = Accelerator(
fsdp_plugin=fsdp_plugin,
mixed_precision="bf16",
)
set_seed(SEED)
tokenizer = get_tokenizer(config["model_name"])
train_dataloader = prepare_dataloader(tokenizer, args, accelerator)
memory_tracker = MemoryTracker(accelerator.device, args.output_dir, "accelerate", args.save_memory_snapshot)
memory_tracker.start()
model = get_model(config["model_name"])
optimizer = AdamW(model.parameters(), lr=config["learning_rate"])
model, optimizer = accelerator.prepare(model, optimizer)
return model, optimizer, train_dataloader, accelerator, memory_tracker

View File

@ -0,0 +1,114 @@
# Copyright 2025 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import json
import matplotlib.pyplot as plt
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument("--dir", type=str, help="Directory containing the memory usage data")
parser.add_argument(
"--memory_threshold",
type=int,
default=0,
help="Memory threshold to filter data that is below this value (only filters 1st `--filter_partition` of the points which should roughtly correspond to the model loading)",
)
parser.add_argument(
"--filter_partition",
type=float,
default=1 / 3,
help="Partition to drop data from that are below the memory threshold",
)
return parser.parse_args()
def filter_data(data, memory_threshold, filter_partition, key):
timestamps = data["timestamps"]
memory = data[key]
mid_point = int(len(timestamps) * filter_partition)
filtered_times = []
filtered_memory = []
for i, (t, m) in enumerate(zip(timestamps, memory)):
if i < mid_point and m < memory_threshold:
continue
filtered_times.append(t)
filtered_memory.append(m)
return filtered_times, filtered_memory
def compare_memory_usage(data, labels, memory_threshold, filter_partition):
plt.style.use("seaborn-v0_8")
colors = ["#2ecc71", "#e74c3c", "#3498db", "#f1c40f"]
fig1, ax1 = plt.subplots(figsize=(15, 5))
for data_item, label, color in zip(data, labels, colors):
timestamps, allocated = filter_data(data_item, memory_threshold, filter_partition, "allocated_memory")
ax1.plot(timestamps, allocated, label=label, color=color, linewidth=2)
ax1.set_xlabel("Time (s)", fontsize=12)
ax1.set_ylabel("Allocated Memory (GB)", fontsize=12)
ax1.set_title("Allocated Memory Usage Over Time", fontsize=14, pad=15)
ax1.grid(True, linestyle="--", alpha=0.7)
ax1.legend(frameon=True, fancybox=True, shadow=True, fontsize=10)
ax1.spines["top"].set_visible(False)
ax1.spines["right"].set_visible(False)
plt.tight_layout()
fig2, ax2 = plt.subplots(figsize=(15, 5))
for data_item, label, color in zip(data, labels, colors):
timestamps, reserved = filter_data(data_item, memory_threshold, filter_partition, "reserved_memory")
ax2.plot(timestamps, reserved, label=label, color=color, linewidth=2)
ax2.set_xlabel("Time (s)", fontsize=12)
ax2.set_ylabel("Reserved Memory (GB)", fontsize=12)
ax2.set_title("Reserved Memory Usage Over Time", fontsize=14, pad=15)
ax2.grid(True, linestyle="--", alpha=0.7)
ax2.legend(frameon=True, fancybox=True, shadow=True, fontsize=10)
ax2.spines["top"].set_visible(False)
ax2.spines["right"].set_visible(False)
plt.tight_layout()
return fig1, fig2
if __name__ == "__main__":
args = parse_args()
DIR = args.dir
with open(f"{DIR}/torch_optimizer_before_fsdp_not_fixed_memory_usage.json") as f:
optimizer_before_fsdp_not_fixed = json.load(f)
with open(f"{DIR}/torch_optimizer_after_fsdp_memory_usage.json") as f:
optimizer_after_fsdp = json.load(f)
with open(f"{DIR}/torch_optimizer_before_fsdp_fixed_memory_usage.json") as f:
optimizer_before_fsdp_fixed = json.load(f)
with open(f"{DIR}/accelerate_memory_usage.json") as f:
accelerate = json.load(f)
data = [optimizer_before_fsdp_not_fixed, optimizer_before_fsdp_fixed, optimizer_after_fsdp, accelerate]
labels = [
"Optimizer Before FSDP (w/o fix)",
"Optimizer Before FSDP (w/ fix)",
"Optimizer After FSDP",
"Accelerate",
]
fig1, fig2 = compare_memory_usage(data, labels, args.memory_threshold, args.filter_partition)
fig1.savefig(f"{DIR}/allocated_memory.png")
fig2.savefig(f"{DIR}/reserved_memory.png")

View File

@ -0,0 +1,111 @@
# Regional Compilation Benchmark
This benchmark compares different compilation strategies using PyTorch's `torch.compile` and Accelerate's `compile_regions` utility, which is based on the recipe in [PyTorch documentation](https://pytorch.org/tutorials/recipes/regional_compilation.html).
## Overview
The benchmark evaluates three approaches:
- **Baseline**: No compilation, standard PyTorch eager execution.
- **Full compilation**: Using PyTorch's `torch.compile()` on the entire model.
- **Regional compilation**: Using `accelerate.utils.compile_regions()` which targets specific blocks of the model to optimize compilation time.
Each approach is tested with batch sizes of 1 and 4 at a sequence length of 128 on various LLaMA-based models ranging from 1B to 13B parameters. We purposefully run the forward pass outside of the `torch.no_grad()` context to simulate performance in a training environment, where gradients are needed.
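A minimal sketch of the three paths, assuming a CUDA device and one of the model IDs benchmarked below:
```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM

from accelerate.utils import compile_regions

config = AutoConfig.from_pretrained("NousResearch/Llama-3.2-1B")
model = AutoModelForCausalLM.from_config(config).to(device="cuda", dtype=torch.float16).eval()

baseline = model                           # eager execution
full = torch.compile(model)                # compiles the whole model on its first call
regional = compile_regions(model)          # compiles repeated blocks (e.g. decoder layers) separately

input_ids = torch.randint(0, 1000, size=(1, 128), device="cuda")
_ = regional(input_ids, use_cache=False)   # the first call pays the (much smaller) compilation cost
```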
## Usage
To run this benchmark:
```bash
python regional_compilation.py
```
The script will automatically download the model configurations, create models, and benchmark both compilation and inference times across different scenarios.
## Requirements
- Suitable GPU memory for the models being tested.
- PyTorch with CUDA support.
- Transformers library.
- Accelerate library.
## Results
The benchmark results are summarized in the following figures:
- Compilation time is how long it takes to run the first forward pass.
- Speedup factor is the ratio of non-compiled baseline inference time to the fully/regionally compiled inference time.
<p align="center">
<img src="imgs/compilation_time.png" width="80%" alt="Compilation Time">
</p>
<p align="center">
<img src="imgs/speedup_factor.png" width="80%" alt="Speedup Factor">
</p>
Full results are available in the tables below:
```markdown
[-------------------------------------------------- NousResearch/Llama-3.2-1B ---------------------------------------------------]
| Inference time (1x128) | Inference time (4x128) | Compile time (1x128) | Compile time (4x128)
1 threads: -----------------------------------------------------------------------------------------------------------------------
Baseline | 18.3 | 18.4 | |
Full compilation | 6.3 | 10.0 | 10696.4 | 10248.0
Regional compilation | 9.7 | 10.0 | 1952.7 | 2903.9
Times are in milliseconds (ms).
[---------------------------------------------- NousResearch/Hermes-3-Llama-3.2-3B ----------------------------------------------]
| Inference time (1x128) | Inference time (4x128) | Compile time (1x128) | Compile time (4x128)
1 threads: -----------------------------------------------------------------------------------------------------------------------
Baseline | 33.4 | 33.6 | |
Full compilation | 11.2 | 23.9 | 17857.5 | 17736.5
Regional compilation | 17.3 | 23.7 | 2993.2 | 2478.8
Times are in milliseconds (ms).
[---------------------------------------------- NousResearch/Hermes-3-Llama-3.1-8B ----------------------------------------------]
| Inference time (1x128) | Inference time (4x128) | Compile time (1x128) | Compile time (4x128)
1 threads: -----------------------------------------------------------------------------------------------------------------------
Baseline | 40.3 | 59.5 | |
Full compilation | 18.9 | 54.4 | 20437.8 | 20152.3
Regional compilation | 19.7 | 54.0 | 2903.1 | 2438.0
Times are in milliseconds (ms).
[--------------------------------------------- NousResearch/Nous-Hermes-Llama2-13b ----------------------------------------------]
| Inference time (1x128) | Inference time (4x128) | Compile time (1x128) | Compile time (4x128)
1 threads: -----------------------------------------------------------------------------------------------------------------------
Baseline | 45.5 | 100.4 | |
Full compilation | 29.4 | 89.7 | 23099.4 | 22885.9
Regional compilation | 29.4 | 87.5 | 2945.5 | 2526.2
Times are in milliseconds (ms).
```
## Results Summary
### Compilation Time
Regional compilation provides significantly faster compilation times compared to full model compilation:
- **Full compilation**: Takes ~10-23 seconds depending on model size.
- **Regional compilation**: Takes only ~2-3 seconds across all model sizes.
- **Speed improvement**: Regional compilation is **5-9x faster** to compile.
### Inference Time
Regional compilation delivers inference performance close to full compilation:
- For batch size 1:
- For smaller models (1B-3B): Full compilation has a slight edge over regional compilation.
- For larger models (8B-13B): Regional compilation performs similarly to full compilation.
- For batch size 4: Regional compilation performs similarly to full compilation across all models.
## Key Takeaways
1. **Comparable Performance**: Regional compilation delivers performance speedups similar to full compilation, especially for larger models.
2. **Faster Compilation**: Regional compilation significantly reduces the time taken to compile models, making it a more efficient choice for deployment.
3. **Batch Size Impact**: At batch size 4, full compilation and regional compilation perform nearly identically.
4. **Model Size Impact**: Even with a small batch size, full compilation and regional compilation perform similarly for larger models (8B-13B).
5. **Practical Application**: For real-world applications, regional compilation is a practical choice for optimizing training cold start times, especially when working with large models.

Binary file not shown.

After

Width:  |  Height:  |  Size: 242 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 218 KiB

View File

@ -0,0 +1,77 @@
# Copyright 2025 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import torch
from torch.utils.benchmark import Compare, Timer
from transformers import AutoConfig, AutoModelForCausalLM
from accelerate.test_utils.testing import get_backend
from accelerate.utils import compile_regions
torch.set_float32_matmul_precision("high")
COMPILE_ITERS = 2
INFERENCE_ITERS = 100
BASELINE = "Baseline"
COMPILE_TIME = "Compile time"
INFRENCE_TIME = "Inference time"
FULL_COMPILATION = "Full compilation"
REGIONAL_COMPILATION = "Regional compilation"
INFRENCE_STMT = "model(input_ids, use_cache=False)"
COMPILE_STMT = f"torch._dynamo.reset(); torch._inductor.utils.clear_inductor_caches(); {INFRENCE_STMT}"
torch_device_type, _, _ = get_backend()
results = []
for model_id in [
# non-gated llama models
"NousResearch/Llama-3.2-1B",
"NousResearch/Hermes-3-Llama-3.2-3B",
"NousResearch/Hermes-3-Llama-3.1-8B",
"NousResearch/Nous-Hermes-Llama2-13b",
]:
with torch.device(torch_device_type):
config = AutoConfig.from_pretrained(model_id)
model = AutoModelForCausalLM.from_config(config).to(dtype=torch.float16).eval()
full_compilation_model = torch.compile(model)
regional_compilation_model = compile_regions(model)
for model, sub_label, description, stmt, iters in [
(model, BASELINE, INFRENCE_TIME, INFRENCE_STMT, INFERENCE_ITERS),
(full_compilation_model, FULL_COMPILATION, COMPILE_TIME, COMPILE_STMT, COMPILE_ITERS),
(full_compilation_model, FULL_COMPILATION, INFRENCE_TIME, INFRENCE_STMT, INFERENCE_ITERS),
(regional_compilation_model, REGIONAL_COMPILATION, COMPILE_TIME, COMPILE_STMT, COMPILE_ITERS),
(regional_compilation_model, REGIONAL_COMPILATION, INFRENCE_TIME, INFRENCE_STMT, INFERENCE_ITERS),
]:
for batch_size, sequence_length in [(1, 128), (4, 128)]:
input_ids = torch.randint(
0, 1000, size=(batch_size, sequence_length), dtype=torch.int64, device=torch_device_type
)
results.append(
Timer(
label=model_id,
sub_label=sub_label,
description=f"{description} ({batch_size}x{sequence_length})",
globals={"model": model, "input_ids": input_ids},
stmt=stmt,
).timeit(number=iters)
)
compare = Compare(results)
compare.colorize()
compare.print()

View File

@ -25,12 +25,12 @@ RUN source activate accelerate && conda install -c conda-forge mpi4py
RUN source activate accelerate && \
python3 -m pip install --no-cache-dir \
git+https://github.com/huggingface/accelerate#egg=accelerate[testing,test_trackers,deepspeed] \
--extra-index-url https://download.pytorch.org/whl/cu117
--extra-index-url https://download.pytorch.org/whl/cu126
RUN python3 -m pip install --no-cache-dir bitsandbytes
# Stage 2
FROM nvidia/cuda:12.1.0-cudnn8-devel-ubuntu20.04 AS build-image
FROM nvidia/cuda:12.6.3-cudnn-devel-ubuntu22.04 AS build-image
COPY --from=compile-image /opt/conda /opt/conda
ENV PATH /opt/conda/bin:$PATH

View File

@ -24,12 +24,12 @@ RUN source activate accelerate && conda install -c conda-forge mpi4py
RUN source activate accelerate && \
python3 -m pip install --no-cache-dir \
git+https://github.com/huggingface/accelerate#egg=accelerate[testing,test_trackers] \
--extra-index-url https://download.pytorch.org/whl/cu117
--extra-index-url https://download.pytorch.org/whl/cu126
RUN python3 -m pip install --no-cache-dir bitsandbytes
# Stage 2
FROM nvidia/cuda:12.1.0-cudnn8-devel-ubuntu20.04 AS build-image
FROM nvidia/cuda:12.6.3-cudnn-devel-ubuntu22.04 AS build-image
COPY --from=compile-image /opt/conda /opt/conda
ENV PATH /opt/conda/bin:$PATH

View File

@ -64,6 +64,10 @@
title: Apple M1 GPUs
- local: usage_guides/ipex
title: IPEX training with CPU
- local: usage_guides/gaudi
title: Intel Gaudi
- local: usage_guides/compilation
title: Compilation
title: Training
- isExpanded: true
sections:
@ -86,12 +90,14 @@
title: Gradient synchronization
- local: concept_guides/fsdp_and_deepspeed
title: FSDP vs DeepSpeed
- local: concept_guides/fsdp1_vs_fsdp2
title: FSDP1 vs FSDP2
- local: concept_guides/low_precision_training
title: Low precision training methods
- local: concept_guides/training_tpu
title: Training on TPUs
title: Concepts and fundamentals
- sections:
- sections:
- local: package_reference/accelerator
title: Accelerator
- local: package_reference/state

View File

@ -79,23 +79,36 @@ accelerate env
An example output is shown below, which describes two GPUs on a single machine with no mixed precision being used:
```bash
- `Accelerate` version: 0.11.0.dev0
- Platform: Linux-5.10.0-15-cloud-amd64-x86_64-with-debian-11.3
- Python version: 3.7.12
- Numpy version: 1.19.5
- PyTorch version (GPU?): 1.12.0+cu102 (True)
- `Accelerate` version: 1.2.0.dev0
- Platform: Linux-6.8.0-47-generic-x86_64-with-glibc2.35
- `accelerate` bash location: /home/zach/miniconda3/envs/accelerate/bin/accelerate
- Python version: 3.10.13
- Numpy version: 1.26.4
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- PyTorch XPU available: False
- PyTorch NPU available: False
- PyTorch MLU available: False
- PyTorch MUSA available: False
- System RAM: 187.91 GB
- GPU type: NVIDIA GeForce RTX 4090
- `Accelerate` default config:
- compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_GPU
- mixed_precision: no
- use_cpu: False
- debug: False
- num_processes: 2
- machine_rank: 0
- num_machines: 1
- main_process_ip: None
- main_process_port: None
- gpu_ids: all
- rdzv_backend: static
- same_network: True
- main_training_function: main
- deepspeed_config: {}
- fsdp_config: {}
- enable_cpu_affinity: False
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
```

View File

@ -26,7 +26,7 @@ You will also learn how to setup a few requirements needed for ensuring your env
## Configuring the Environment
Before any training can be performed, a Accelerate config file must exist in the system. Usually this can be done by running the following in a terminal and answering the prompts:
Before any training can be performed, an Accelerate config file must exist in the system. Usually this can be done by running the following in a terminal and answering the prompts:
```bash
accelerate config
@ -52,7 +52,7 @@ os._exit(00) # Restart the notebook
## Preparing the Dataset and Model
Next you should prepare your dataset. As mentioned at earlier, great care should be taken when preparing the `DataLoaders` and model to make sure that **nothing** is put on *any* GPU.
Next you should prepare your dataset. As mentioned earlier, great care should be taken when preparing the `DataLoaders` and model to make sure that **nothing** is put on *any* GPU.
If you do, it is recommended to put that specific code into a function and call that from within the notebook launcher interface, which will be shown later.

View File

@ -153,7 +153,7 @@ To use [`find_executable_batch_size`], restructure your training function to inc
<Tip warning={true}>
The inner function **must** take batch size as the first parameter, but we do not pass one to it when called. The wrapper will handles this for you. Any object (models, optimizers) that consumes device memory and is passed to the [`Accelerator`] also **must** be declared inside the inner function.
The inner function **must** take batch size as the first parameter, but we do not pass one to it when called. The wrapper will handle this for you. Any object (models, optimizers) that consumes device memory and is passed to the [`Accelerator`] also **must** be declared inside the inner function.
</Tip>

View File

@ -0,0 +1,105 @@
<!--Copyright 2025 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# FSDP1 vs FSDP2
This guide explains the key differences between `FSDP1` and `FSDP2` and helps you migrate your existing code to use `FSDP2` with minimal changes.
## How is FSDP2 better than FSDP1?
First, we want to understand how `FSDP1` and `FSDP2` work internally, as that makes the differences between them clear. It also helps us understand the limitations of `FSDP1` and how `FSDP2` solves them.
We'll be discussing a scenario where we have a single `Layer` that contains 3 `Linear` layers and is wrapped using `FSDP` to be sharded across 2 GPUs.
<div align="center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/accelerate/layer.png" alt="Layer">
</div>
### FSDP1
First, we have to understand the original `FSDP1` and the limitations it brings. It represents each `FSDP` module as a single `FlatParameter`, a 1D tensor that contains all of the module's parameters and is then sharded across ranks. I.e., if you wrap the `Layer` with `FSDP1`, you end up with something like this:
<div align="center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/accelerate/fsdp1.png" alt="FSDP1">
</div>
You might notice a problem. The whole `Layer` gets flattened into a single `FlatParameter`, which then gets sharded across ranks. But if it's a single `FlatParameter` object, how do we store metadata? That is one of the limitations. Properly storing per-parameter metadata such as `dtype`, `requires_grad`, etc. is not possible without some ugly hacks.
### FSDP2
This is why `FSDP2` was introduced. It doesn't use `FlatParameter`; instead, it uses `DTensor`, short for "Distributed Tensor". Each `DTensor` represents a vanilla `torch.Tensor` that has been sharded across ranks and carries metadata about the original `torch.Tensor`: how it's sharded, what its [placement type](https://pytorch.org/docs/stable/distributed.tensor.html#module-torch.distributed.tensor.placement_types) is, and so on. This is why the approach is called `per-parameter sharding`. The following figure shows the difference:
<div align="center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/accelerate/fsdp2.png" alt="FSDP2">
</div>
Each Parameter of the original `Layer` is sharded across the 0th dimension, and split between 2 GPUs. Now, each `Linear` layer is a separate `DTensor` and storing metadata per-parameter is possible and straightforward.
> [!TIP]
> In the image above, the tensors were sharded across the 1st dimension only to fit the image on the screen; in reality, they are sharded across the 0th dimension, as stated above
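As a rough sketch (assuming `torch.distributed` is already initialized, e.g. across 2 ranks), you can inspect the per-parameter `DTensor`s right after sharding:
```python
import torch
from torch.distributed.fsdp import fully_shard
from torch.distributed.tensor import DTensor

layer = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.Linear(16, 16), torch.nn.Linear(16, 16))
fully_shard(layer)

for name, param in layer.named_parameters():
    # each parameter is its own DTensor, sharded on dim 0, with per-parameter metadata intact
    assert isinstance(param, DTensor)
    print(name, param.placements, param.dtype, param.requires_grad)
```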
## What does FSDP2 offer?
`FSDP2` is a new and improved version of PyTorch's fully-sharded data parallel training API. Its main advantage is using `DTensor` to represent sharded parameters. Compared to `FSDP1`, it offers:
- Simpler internal implementation, where each `Parameter` is a separate `DTensor`
- Enables simple partial parameter freezing because of the above, which makes methods such as [`LoRA`](https://arxiv.org/abs/2106.09685) work out of the box (see the sketch after this list)
- With `DTensor`, `FSDP2` supports mixing `fp8` and other parameter types in the same model out of the box
- Faster and simpler checkpointing without extra communication across ranks, using `SHARDED_STATE_DICT` and [`torch.distributed.checkpoint`](https://pytorch.org/docs/stable/distributed.checkpoint.html); this way, each rank only saves its own shard and the corresponding metadata
- For loading, it uses a `state_dict` of the sharded model to directly load the sharded parameters
- Support for asynchronous checkpointing, where parameters are first copied to CPU memory; after this, the main thread continues training while another thread stores the parameters on disk
- Memory efficiency and deterministic memory usage: `FSDP2` no longer uses `recordStream` and relies on stream-to-stream synchronization instead (for more technical details see [this forum post](https://dev-discuss.pytorch.org/t/fsdp-cudacachingallocator-an-outsider-newb-perspective/1486) and [this issue](https://github.com/pytorch/pytorch/issues/114299))
- In the future, optimizations of the communication patterns via `torch.compile` are planned, further improving the performance and memory efficiency
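As an example of the partial parameter freezing mentioned above, here is a minimal, hedged sketch (assuming an initialized process group); frozen and trainable parameters can be mixed without any special handling:
```python
import torch
from torch.distributed.fsdp import fully_shard

model = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.Linear(16, 16))

# Freeze everything except the last layer, the way a LoRA-style setup would
for param in model[0].parameters():
    param.requires_grad = False

fully_shard(model)

optimizer = torch.optim.AdamW([p for p in model.parameters() if p.requires_grad], lr=1e-4)
```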
## API Differences
We have already discussed the internal differences; now let's discuss the differences that you, as a user, will need to know.
Here are the main changes in configuration options when using `FSDP2` through the `accelerate` CLI:
Previous (`FSDP1`) | New (`FSDP2`) | What Changed
-- | -- | --
`--fsdp_sharding_strategy` | `--fsdp_reshard_after_forward` | replaces `--fsdp_sharding_strategy`, changed to `true` (previously `FULL_SHARD`) or `false` (previously `SHARD_GRAD_OP`)
`--fsdp_backward_prefetch` | \*\***REMOVED**\*\* | `FSDP2` uses previous `BACKWARD_PRE` option by default, as only this allows communication and computation overlap
`--fsdp_forward_prefetch` | \*\***NOT YET IMPLEMENTED**\*\* | How to implement this is under active discussion, for now it is not supported in `FSDP2`
`--fsdp_sync_module_states` | \*\***REMOVED**\*\* | with `FSDP2`, this parameter becomes redundant
`--fsdp_cpu_ram_efficient_loading` | `--fsdp_cpu_ram_efficient_loading` | if `true`, `FSDP2` will similarly load the model only on rank 0 and then sync the parameters to the other ranks; this is the same behavior as `FSDP1`, however setting `--fsdp_sync_module_states` isn't required anymore
`--fsdp_state_dict_type` | `--fsdp_state_dict_type` | `LOCAL_STATE_DICT` becomes obsolete; with `FSDP2`, `SHARDED_STATE_DICT` is the default option, which results in no extra communication and each rank saving its own shard. The other possible option is `FULL_STATE_DICT`, which results in extra communication and a spike in memory usage but saves the full model from rank 0.
`--fsdp_use_orig_params` | \*\***REMOVED**\*\* | `FSDP2` uses the `DTensor` class in the background, which means it *always* uses the original parameters by default
\*\***NEW**\*\* | `--fsdp_version` | `1` is the default option, to not break existing code, set to `2` to use `FSDP2`
For all other options that remain unchanged, see the [`FSDP` documentation](../usage_guides/fsdp.md).
## How to Switch to FSDP2
### If using Python code:
Simply set `fsdp_version=2` when creating your plugin and replace options according to the table above.
```python
from accelerate import FullyShardedDataParallelPlugin, Accelerator
fsdp_plugin = FullyShardedDataParallelPlugin(
fsdp_version=2
# other options...
)
accelerator = Accelerator(fsdp_plugin=fsdp_plugin)
```
### If using YAML config:
Use our conversion tool:
```bash
accelerate to-fsdp2 --config_file config.yaml --output_file new_config.yaml
```
This will automatically convert all FSDP1 settings to their FSDP2 equivalents. Use `--overwrite` to update the existing file instead of creating a new one.
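For example, to update an existing file in place:
```bash
accelerate to-fsdp2 --config_file config.yaml --overwrite
```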

View File

@ -109,7 +109,7 @@ While FSDP require an explicit `--fsdp_cpu_ram_efficient_loading true` to activa
<Tip>
For FSDP, whenever setting `--fsdp_cpu_ram_efficient_loading true`, `accelerate` will automatically set `sync_module_states` to true.
For RAM efficient loading the weights will be loaded only in a singe rank, and thus requires `sync_module_states` to broadcast weights to other ranks.
For RAM efficient loading the weights will be loaded only in a single rank, and thus requires `sync_module_states` to broadcast weights to other ranks.
</Tip>
@ -125,7 +125,7 @@ FSDP requires an explicit `--fsdp_auto_wrap_policy` for the algorithm to decide
### Parameters Summoning
FSDP requires an explicit `--fsdp_use_orig_params` flag if using `torch.compile`, see [the pytorch documenation](https://pytorch.org/docs/stable/fsdp.html#module-torch.distributed.fsdp). For DeepSpeed this is transparent to the user.
FSDP requires an explicit `--fsdp_use_orig_params` flag if using `torch.compile`, see [the pytorch documentation](https://pytorch.org/docs/stable/fsdp.html#module-torch.distributed.fsdp). For DeepSpeed this is transparent to the user.
<Tip>
@ -147,7 +147,7 @@ Deepspeed requires explicit `--gradient_accumulation_steps` and `--gradient_clip
## On Differences in Data Precision Handling
To discuss the how data precision is handled in both FSDP and Deepspeed, it is instructive to first give an overview of how model parameters are handled in these frameworks. Before the model / optimizer parameters are distributed across GPUs, parameter preparation is involved to first "flatten" them to one-dimensional [`torch.Tensor`](https://pytorch.org/docs/stable/tensors.html#torch-tensor). The implementation of FSDP / DeepSpeed varies in the respect of the `dtype` in which these "flattened" parameters are stored, and there are ramifications with regards to how [`torch.Optimizer`](https://pytorch.org/docs/stable/optim.html#module-torch.optim) allocate their `dtype`s. The table below outlines the processes for both frameworks; the "Local" column indicates the process occurring at a per-gpu level, therefore any memory overheads by upcasting should be understood to be amortized by the number of gpus used.
To discuss how data precision is handled in both FSDP and Deepspeed, it is instructive to first give an overview of how model parameters are handled in these frameworks. Before the model / optimizer parameters are distributed across GPUs, parameter preparation is involved to first "flatten" them to one-dimensional [`torch.Tensor`](https://pytorch.org/docs/stable/tensors.html#torch-tensor). The implementation of FSDP / DeepSpeed varies in the respect of the `dtype` in which these "flattened" parameters are stored, and there are ramifications with regards to how [`torch.Optimizer`](https://pytorch.org/docs/stable/optim.html#module-torch.optim) allocate their `dtype`s. The table below outlines the processes for both frameworks; the "Local" column indicates the process occurring at a per-gpu level, therefore any memory overheads by upcasting should be understood to be amortized by the number of gpus used.
<Tip>
@ -166,7 +166,7 @@ Optimizer (Actual Step) | ✅ | FSDP<br>DeepSpeed | occurs in `torch_dtype` <br
<Tip warning={true}>
Therefore when using DeepSpeed a small number of GPUs, be aware of potentially significant memory overheads due to the upcasting during preperation.
Therefore when using DeepSpeed a small number of GPUs, be aware of potentially significant memory overheads due to the upcasting during preparation.
</Tip>

View File

@ -71,4 +71,4 @@ setting the same seed in the main random number generator in all processes.
If you have [`torchdata>=0.8.0`](https://github.com/pytorch/data/tree/main) installed, and you have passed `use_stateful_dataloader=True` into your [`~utils.DataLoaderConfiguration`], these classes will directly inherit from `StatefulDataLoader` instead, and maintain a `state_dict`.
For more details about the internals, see the [Internals page](package_reference/torch_wrappers).
For more details about the internals, see the [Internals page](../package_reference/torch_wrappers).

View File

@ -63,6 +63,10 @@ rendered properly in your Markdown viewer.
[[autodoc]] hooks.SequentialHook
### LayerwiseCastingHook
[[autodoc]] hooks.LayerwiseCastingHook
## Adding Hooks
### add_hook_to_module
@ -81,6 +85,10 @@ rendered properly in your Markdown viewer.
[[autodoc]] hooks.attach_align_device_hook_on_blocks
### attach_layerwise_casting_hooks
[[autodoc]] big_modeling.attach_layerwise_casting_hooks
## Removing Hooks
### remove_hook_from_module
@ -99,4 +107,4 @@ rendered properly in your Markdown viewer.
### align_module_device
[[autodoc]] utils.align_module_device
[[autodoc]] utils.align_module_device

View File

@ -158,13 +158,13 @@ The following arguments are useful for selecting which training paradigm to use.
* `--use_deepspeed` (`bool`) -- Whether or not to use DeepSpeed for training.
* `--use_fsdp` (`bool`) -- Whether or not to use FullyShardedDataParallel for training.
* `--use_megatron_lm` (`bool`) -- Whether or not to use Megatron-LM for training.
* `--use_xpu` (`bool`) -- Whether to use IPEX plugin to speed up training on XPU specifically.
* `--use_xpu` (`bool`) -- Whether to use IPEX plugin to speed up training on XPU specifically. **This argument is deprecated and ignored, will be removed in Accelerate v1.20**
**Distributed GPU Arguments**:
The following arguments are only useful when `multi_gpu` is passed or multi-gpu training is configured through `accelerate config`:
* `--gpu_ids` (`str`) -- What GPUs (by id) should be used for training on this machine as a comma-seperated list
* `--gpu_ids` (`str`) -- What GPUs (by id) should be used for training on this machine as a comma-separated list
* `--same_network` (`bool`) -- Whether all machines used for multinode training exist on the same local network.
* `--machine_rank` (`int`) -- The rank of the machine on which this script is launched.
* `--main_process_ip` (`str`) -- The IP address of the machine of rank 0.
@ -202,8 +202,8 @@ The following arguments are only useful when `use_deepspeed` is passed or `deeps
* `--zero3_init_flag` (`str`) -- Decides Whether (true|false) to enable `deepspeed.zero.Init` for constructing massive models. Only applicable with DeepSpeed ZeRO Stage-3.
* `--zero3_save_16bit_model` (`str`) -- Decides Whether (true|false) to save 16-bit model weights when using ZeRO Stage-3. Only applicable with DeepSpeed ZeRO Stage-3.
* `--deepspeed_hostfile` (`str`) -- DeepSpeed hostfile for configuring multi-node compute resources.
* `--deepspeed_exclusion_filter` (`str`) -- DeepSpeed exclusion filter string when using mutli-node setup.
* `--deepspeed_inclusion_filter` (`str`) -- DeepSpeed inclusion filter string when using mutli-node setup.
* `--deepspeed_exclusion_filter` (`str`) -- DeepSpeed exclusion filter string when using multi-node setup.
* `--deepspeed_inclusion_filter` (`str`) -- DeepSpeed inclusion filter string when using multi-node setup.
* `--deepspeed_multinode_launcher` (`str`) -- DeepSpeed multi-node launcher to use.
* `--deepspeed_moe_layer_cls_names` (`str`) -- comma-separated list of transformer MoE layer class names (case-sensitive) to wrap, e.g, `MixtralSparseMoeBlock` `Qwen2MoeSparseMoeBlock`, `JetMoEAttention,JetMoEBlock`

View File

@ -30,3 +30,17 @@ rendered properly in your Markdown viewer.
## FullyShardedDataParallelPlugin
[[autodoc]] utils.FullyShardedDataParallelPlugin
## fsdp2_load_full_state_dict
[[autodoc]] utils.fsdp2_load_full_state_dict
## fsdp2_switch_optimizer_parameters
[[autodoc]] utils.fsdp2_switch_optimizer_parameters
## fsdp2_prepare_model
[[autodoc]] utils.fsdp2_prepare_model
## fsdp2_prepare_auto_wrap_policy

View File

@ -208,6 +208,7 @@ These utilities relate to interacting with PyTorch models
[[autodoc]] utils.set_module_tensor_to_device
[[autodoc]] utils.get_module_children_bottom_up
## Parallel

View File

@ -168,7 +168,7 @@ with init_empty_weights():
The [`~accelerate.load_checkpoint_and_dispatch`] function loads full or sharded checkpoints into the empty model, and automatically distributes weights across all available devices.
The `device_map` parameter determines where to place each model layer, and specifiying `"auto"` places them on the GPU first, then the CPU, and finally the hard drive as memory-mapped tensors if there's still not enough memory. Use the `no_split_module_classes` parameter to indicate which modules shouldn't be split across devices (typically those with a residual connection).
The `device_map` parameter determines where to place each model layer, and specifying `"auto"` places them on the GPU first, then the CPU, and finally the hard drive as memory-mapped tensors if there's still not enough memory. Use the `no_split_module_classes` parameter to indicate which modules shouldn't be split across devices (typically those with a residual connection).
```py
from accelerate import load_checkpoint_and_dispatch

View File

@ -21,7 +21,7 @@ This tutorial will show you how to use Big Model Inference in Accelerate and the
## Accelerate
A typical workflow for loading a PyTorch model is shown below. `ModelClass` is a model that exceeds the GPU memory of your device (mps or cuda).
A typical workflow for loading a PyTorch model is shown below. `ModelClass` is a model that exceeds the GPU memory of your device (mps or cuda or xpu).
```py
import torch
@ -41,7 +41,7 @@ with init_empty_weights():
Next, the weights are loaded into the model for inference.
The [`load_checkpoint_and_dispatch`] method loads a checkpoint inside your empty model and dispatches the weights for each layer across all available devices, starting with the fastest devices (GPU, MPS, XPU, NPU, MLU, MUSA) first before moving to the slower ones (CPU and hard drive).
The [`load_checkpoint_and_dispatch`] method loads a checkpoint inside your empty model and dispatches the weights for each layer across all available devices, starting with the fastest devices (GPU, MPS, XPU, NPU, MLU, SDAA, MUSA) first before moving to the slower ones (CPU and hard drive).
Setting `device_map="auto"` automatically fills all available space on the GPU(s) first, then the CPU, and finally, the hard drive (the absolute slowest option) if there is still not enough memory.
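For instance, a sketch of such a call, continuing from the empty `model` above (the checkpoint path and `no_split_module_classes` value here are illustrative):
```python
from accelerate import load_checkpoint_and_dispatch

# "auto" fills the GPU(s) first, then the CPU, then falls back to disk offload;
# "Block" stands in for whichever residual block class your model uses
model = load_checkpoint_and_dispatch(
    model, checkpoint="path/to/checkpoint", device_map="auto", no_split_module_classes=["Block"]
)
```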
@ -64,7 +64,8 @@ Now that the model is fully dispatched, you can perform inference.
```py
input = torch.randn(2,3)
input = input.to("cuda")
device_type = next(iter(model.parameters())).device.type
input = input.to(device_type)
output = model(input)
```
@ -91,7 +92,8 @@ model = load_checkpoint_and_dispatch(
)
input = torch.randn(2,3)
input = input.to("cuda")
device_type = next(iter(model.parameters())).device.type
input = input.to(device_type)
output = model(input)
```

View File

@ -0,0 +1,76 @@
# Compilation
## Overview
PyTorch 2.0 introduced `torch.compile`, a powerful feature that makes PyTorch code run faster by JIT-compiling it into optimized kernels. Key features of `torch.compile` include:
- **Performance Improvement**: Significantly speeds up model execution by optimizing the computation graph.
- **Ease of Use**: Requires minimal code changes to implement, making it highly accessible.
- **Compatibility**: Works seamlessly with existing PyTorch code and models.
When used with Accelerate, `torch.compile` integrates smoothly into distributed training workflows, allowing you to benefit from both distributed execution and compilation optimizations simultaneously.
The first execution of compiled code typically takes longer as it includes the compilation time, but subsequent runs are significantly faster. For optimal performance in different scenarios, `torch.compile` offers various modes like `"default"`, `"reduce-overhead"` (which uses CUDA graphs to further reduce overhead), and `"max-autotune"` (which performs extensive autotuning to find the best kernels for your model).
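To make the mode trade-off concrete, here is a minimal sketch using plain `torch.compile` on a toy module (the module itself is just a placeholder):
```python
import torch

model = torch.nn.Linear(8, 8)

# "reduce-overhead" leans on CUDA graphs to cut per-call launch overhead,
# while "max-autotune" spends more compile time searching for faster kernels
compiled_model = torch.compile(model, mode="reduce-overhead")
output = compiled_model(torch.randn(4, 8))
```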
## Using `torch.compile` with Accelerate
Accelerate provides `TorchDynamoPlugin` for easy and seamless integration of `torch.compile` into your training scripts.
```python
from accelerate import Accelerator
from accelerate.utils import TorchDynamoPlugin
# Configure the compilation backend
dynamo_plugin = TorchDynamoPlugin(
backend="inductor", # Options: "inductor", "aot_eager", "aot_nvfuser", etc.
mode="default", # Options: "default", "reduce-overhead", "max-autotune"
fullgraph=True,
dynamic=False
)
# Initialize accelerator with the plugin
accelerator = Accelerator(dynamo_plugin=dynamo_plugin)
# This will apply torch.compile to your model
model = accelerator.prepare(model)
```
It is compatible with all other features and plugins of Accelerate, including mixed precision, distributed training (DDP, FSDP, Deepspeed), etc.
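For example, a sketch combining compilation with mixed precision (the `bf16` choice is illustrative):
```python
from accelerate import Accelerator
from accelerate.utils import TorchDynamoPlugin

dynamo_plugin = TorchDynamoPlugin(backend="inductor", mode="default")
# the dynamo plugin composes with the usual mixed-precision setting
accelerator = Accelerator(mixed_precision="bf16", dynamo_plugin=dynamo_plugin)
```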
## Regional Compilation
Instead of trying to compile the whole model, which usually has a big problem space for optimization, regional compilation targets repeated blocks of the same class and compiles them sequentially to hit the compiler's cache. For example, in `GPT2LMHeadModel`, the repeated block/class is `GPT2Block`, which can be accessed as `model.transformer.h[0]`. The rest of the model (e.g. `model.lm_head`) is compiled separately.
This reduces the compilation overhead / cold start of models like LLMs and Transformers in general.
See <https://pytorch.org/tutorials/recipes/regional_compilation.html> for more details.
### How to Use Regional Compilation
It can be enabled by setting `use_regional_compilation=True` in the `TorchDynamoPlugin` configuration:
```python
# Configure the compilation backend
dynamo_plugin = TorchDynamoPlugin(
use_regional_compilation=True,
... # other parameters
)
# Initialize accelerator with the plugin
accelerator = Accelerator(dynamo_plugin=dynamo_plugin)
# This will apply compile_regions to your model
model = accelerator.prepare(model)
```
You can also use the `accelerate.utils.compile_regions` utility directly, in the same way you would use `torch.compile`.
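For instance, a minimal sketch (assuming `compile_regions` forwards extra keyword arguments to `torch.compile`):
```python
from accelerate.utils import compile_regions

# compiles each repeated block (e.g. every GPT2Block) and the remaining
# top-level modules separately, instead of the whole model at once
model = compile_regions(model, mode="reduce-overhead")
```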
### Benefits of Regional Compilation
We have conducted extensive benchmarks comparing full compilation and regional compilation using the `torch.compile` feature in PyTorch. The full results are available in the [accelerate repository](https://github.com/huggingface/accelerate/tree/main/benchmarks/torch.compile/regional_compilation). The key findings from our benchmarks are:
1. **Comparable Performance**: Regional compilation delivers performance speedups similar to full compilation, especially for larger models.
2. **Faster Compilation**: Regional compilation significantly reduces the time taken to compile models, making it a more efficient choice for deployment.
3. **Batch Size Impact**: The performance difference between compilation strategies diminishes with larger batch sizes, indicating that the overhead of compilation is less impactful in those scenarios.
4. **Model Size Consideration**: The benefits of regional compilation are more pronounced in larger models, where the compilation time savings can be substantial.
5. **Practical Application**: For real-world applications, regional compilation is a practical choice for optimizing training cold start times, especially when working with large models.
## Conclusion
Both full and regional compilation can significantly speed up your models. Regional compilation offers a practical balance between compilation time and runtime performance, especially for training large models with substantial batch sizes.

View File

@ -34,6 +34,10 @@ In this tutorial, you will see how to quickly set up DDP communication hooks and
import torch
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.distributed.algorithms.ddp_comm_hooks import default_hooks
from accelerate.test_utils.testing import get_backend
device_type, _, _ = get_backend()
device_id = getattr(torch, device_type, torch.cuda).current_device()
class MyModel(torch.nn.Module):
def __init__(self):
@ -44,7 +48,7 @@ class MyModel(torch.nn.Module):
return self.layer(x)
model = MyModel()
model = DDP(model, device_ids=[torch.cuda.current_device()])
model = DDP(model, device_ids=[device_id])
model.register_comm_hook(state=None, hook=default_hooks.fp16_compress_hook)
# Training loop
@ -108,6 +112,10 @@ BF16 Compression Hook API is experimental, and it requires NCCL version later th
import torch
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.distributed.algorithms.ddp_comm_hooks import default_hooks
from accelerate.test_utils.testing import get_backend
device_type, _, _ = get_backend()
device_id = getattr(torch, device_type, torch.cuda).current_device()
class MyModel(torch.nn.Module):
def __init__(self):
@ -118,7 +126,7 @@ class MyModel(torch.nn.Module):
return self.layer(x)
model = MyModel()
model = DDP(model, device_ids=[torch.cuda.current_device()])
model = DDP(model, device_ids=[device_id])
model.register_comm_hook(state=None, hook=default_hooks.bf16_compress_hook)
# Training loop
@ -182,6 +190,10 @@ PowerSGD typically requires extra memory of the same size as the models gradi
import torch
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.distributed.algorithms.ddp_comm_hooks import powerSGD_hook
from accelerate.test_utils.testing import get_backend
device_type, _, _ = get_backend()
device_id = getattr(torch, device_type, torch.cuda).current_device()
class MyModel(torch.nn.Module):
def __init__(self):
@ -192,7 +204,7 @@ class MyModel(torch.nn.Module):
return self.layer(x)
model = MyModel()
model = DDP(model, device_ids=[torch.cuda.current_device()])
model = DDP(model, device_ids=[device_id])
state = powerSGD_hook.PowerSGDState(process_group=None)
model.register_comm_hook(state=state, hook=powerSGD_hook.powerSGD_hook)

View File

@ -15,7 +15,7 @@ rendered properly in your Markdown viewer.
# DeepSpeed
[DeepSpeed](https://github.com/microsoft/DeepSpeed) implements everything described in the [ZeRO paper](https://arxiv.org/abs/1910.02054). Some of the salient optimizations are:
[DeepSpeed](https://github.com/deepspeedai/DeepSpeed) implements everything described in the [ZeRO paper](https://arxiv.org/abs/1910.02054). Some of the salient optimizations are:
1. Optimizer state partitioning (ZeRO stage 1)
2. Gradient partitioning (ZeRO stage 2)
@ -33,7 +33,7 @@ DeepSpeed ZeRO-2 is primarily used only for training, as its features are of no
DeepSpeed ZeRO-3 can be used for inference as well since it allows huge models to be loaded on multiple GPUs, which
won't be possible on a single GPU.
Accelerate integrates [DeepSpeed](https://github.com/microsoft/DeepSpeed) via 2 options:
Accelerate integrates [DeepSpeed](https://github.com/deepspeedai/DeepSpeed) via 2 options:
1. Integration of the DeepSpeed features via `deepspeed config file` specification in `accelerate config` . You just supply your custom config file or use our template. Most of
this document is focused on this feature. This supports all the core features of DeepSpeed and gives user a lot of flexibility.
@ -74,7 +74,7 @@ Inference:
## How it works?
**Pre-Requisites**: Install DeepSpeed version >=0.6.5. Please refer to the [DeepSpeed Installation details](https://github.com/microsoft/DeepSpeed#installation)
**Pre-Requisites**: Install DeepSpeed version >=0.6.5. Please refer to the [DeepSpeed Installation details](https://github.com/deepspeedai/DeepSpeed#installation)
for more information.
We will first look at easy to use integration via `accelerate config`.
@ -167,7 +167,7 @@ Currently, `Accelerate` supports following config through the CLI:
`deepspeed_hostfile`: DeepSpeed hostfile for configuring multi-node compute resources.
`deepspeed_exclusion_filter`: DeepSpeed exclusion filter string when using mutli-node setup.
`deepspeed_inclusion_filter`: DeepSpeed inclusion filter string when using mutli-node setup.
`deepspeed_multinode_launcher`: DeepSpeed multi-node launcher to use. If unspecified, will default to `pdsh`.
`deepspeed_multinode_launcher`: DeepSpeed multi-node launcher to use, e.g. `pdsh`, `standard`, `openmpi`, `mvapich`, `mpich`, `slurm`, `nossh` (requires DeepSpeed >= 0.14.5). If unspecified, will default to `pdsh`.
`deepspeed_config_file`: path to the DeepSpeed config file in `json` format. See the next section for more details on this.
```
To be able to tweak more options, you will need to use a DeepSpeed config file.
@ -194,7 +194,7 @@ For instance, here is how you would run the NLP example `examples/by_feature/dee
```bash
compute_environment: LOCAL_MACHINE
deepspeed_config:
deepspeed_config_file: /home/ubuntu/accelerate/examples/configs/deepspeed_config_templates/zero_stage2_config.json
deepspeed_config_file: /home/ubuntu/accelerate/examples/deepspeed_config_templates/zero_stage2_config.json
zero3_init_flag: true
distributed_type: DEEPSPEED
fsdp_config: {}
@ -275,7 +275,7 @@ accelerate launch examples/by_feature/deepspeed_with_config_support.py \
```bash
compute_environment: LOCAL_MACHINE
deepspeed_config:
deepspeed_config_file: /home/ubuntu/accelerate/examples/configs/deepspeed_config_templates/zero_stage3_offload_config.json
deepspeed_config_file: /home/ubuntu/accelerate/examples/deepspeed_config_templates/zero_stage3_offload_config.json
zero3_init_flag: true
distributed_type: DEEPSPEED
fsdp_config: {}
@ -710,11 +710,18 @@ model, eval_dataloader = accelerator.prepare(model, eval_dataloader)
2. Current integration doesn't support `mpu`, limiting the tensor parallelism which is supported in Megatron-LM.
3. Current integration doesn't support multiple models.
## Multi-node DeepSpeed
DeepSpeed supports multi-node inference and training over a variety of different launchers. You can specify a different launcher by setting the `deepspeed_multinode_launcher` config in the CLI or in the DeepSpeed config file.
Currently, accelerate supports passing configuration for the following DeepSpeed multi-node launchers: `pdsh` (default), `standard`, `openmpi`, `mvapich`, `mpich`, `slurm`, `nossh` (requires DeepSpeed >= 0.14.5).
Please read the [DeepSpeed documentation](https://www.deepspeed.ai/getting-started/#resource-configuration-multi-node) for more information on the different launchers. By default, DeepSpeed will attempt to use passwordless SSH from the main machine node to the other nodes to perform the launcher command. In this configuration, the accelerate launch command only needs to be run on the main node. If using the `nossh` launcher, you will need to run the accelerate launch command on every node using copied configuration.
## DeepSpeed Resources
The documentation for the internals related to deepspeed can be found [here](../package_reference/deepspeed).
- [Project's github](https://github.com/microsoft/deepspeed)
- [Project's github](https://github.com/deepspeedai/DeepSpeed)
- [Usage docs](https://www.deepspeed.ai/getting-started/)
- [API docs](https://deepspeed.readthedocs.io/en/latest/index.html)
- [Blog posts](https://www.microsoft.com/en-us/research/search/?q=deepspeed)
@ -728,7 +735,7 @@ Papers:
Finally, please, remember that `Accelerate` only integrates DeepSpeed, therefore if you
have any problems or questions with regards to DeepSpeed usage, please, file an issue with [DeepSpeed GitHub](https://github.com/microsoft/DeepSpeed/issues).
have any problems or questions with regards to DeepSpeed usage, please, file an issue with [DeepSpeed GitHub](https://github.com/deepspeedai/DeepSpeed/issues).
<Tip>

View File

@ -69,6 +69,7 @@ to be padded) for you to use right away.
Let's rewrite the above example using this context manager:
```python
import torch
from accelerate import PartialState # Can also be Accelerator or AcceleratorState
from diffusers import DiffusionPipeline
@ -125,6 +126,7 @@ needs to be the same length. Basic inference does not require this.
For instance:
```python
import torch
from accelerate import PartialState # Can also be Accelerator or AcceleratorState
from diffusers import DiffusionPipeline

View File

@ -0,0 +1,38 @@
<!--Copyright 2025 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Intel Gaudi
Users can take advantage of Intel Gaudi AI accelerators for significantly faster and more cost-effective model training and inference.
The Intel Gaudi AI accelerator family currently includes three product generations: [Intel Gaudi 1](https://habana.ai/products/gaudi/), [Intel Gaudi 2](https://habana.ai/products/gaudi2/), and [Intel Gaudi 3](https://habana.ai/products/gaudi3/). Each server is equipped with 8 devices, known as Habana Processing Units (HPUs), providing 128GB of memory on Gaudi 3, 96GB on Gaudi 2, and 32GB on the first-gen Gaudi. For more details on the underlying hardware architecture, check out the [Gaudi Architecture Overview](https://docs.habana.ai/en/latest/Gaudi_Overview/Gaudi_Architecture.html).
## How it works out of the box
It is enabled by default if an Intel Gaudi device is detected.
To disable it, pass the `--cpu` flag to the `accelerate launch` command or answer the corresponding question when running the `accelerate config` questionnaire.
You can directly run the following script to test it out on Intel Gaudi:
```bash
accelerate launch /examples/cv_example.py --data_dir images
```
## Limitations
The following features are not part of the Accelerate library and require [Optimum for Intel Gaudi](https://huggingface.co/docs/optimum/main/en/habana/index):
- `fast_ddp` which implements DDP by applying an all-reduce on gradients instead of the Torch DDP wrapper.
- `minimize_memory` which is used for fp8 training and enables keeping fp8 weights in memory between the forward and backward passes, leading to a smaller memory footprint at the cost of additional fp8 casts.
- `context_parallel_size` which is used for Context/Sequence Parallelism (CP/SP) and partitions the network inputs and activations along sequence dimension to reduce memory footprint and increase throughput.

View File

@ -187,11 +187,11 @@ set_seed(0)
x = torch.tensor([1., 2., 3., 4., 5., 6., 7., 8.])
y = torch.tensor([2., 4., 6., 8., 10., 12., 14., 16.])
gradient_accumulation_steps = 4
batch_size = len(x) // gradient_accumulation_steps
per_device_batch_size = len(x) // gradient_accumulation_steps
# define dataset and dataloader
dataset = TensorDataset(x, y)
dataloader = DataLoader(dataset, batch_size=batch_size)
dataloader = DataLoader(dataset, batch_size=per_device_batch_size)
# define model, optimizer and loss function
class SimpleLinearModel(torch.nn.Module):
@ -238,3 +238,233 @@ initial model weight is 0.00000
w/ accumulation, the final model weight is 2.04000
w/o accumulation, the final model weight is 2.04000
```
## Gradient accumulation on training samples of variable size
This [blog post](https://huggingface.co/blog/gradient_accumulation) points out a common error that occurs when performing gradient accumulation on training samples of variable size:
> [...] for gradient accumulation across token-level tasks like causal LM training, the correct loss should be computed by the **total loss across all batches in a gradient accumulation step** divided by the **total number of all non padding tokens in those batches**. This is not the same as the average of the per-batch loss values.
In other words, some adjustments must be made to losses that operate on a token-level basis.
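Written out, for one optimizer update spanning the accumulated batches $b = 1, \dots, B$ gathered across all devices, the loss we want is

$$
\mathcal{L} = \frac{\sum_{b=1}^{B} \sum_{t \in b} \ell_t}{\sum_{b=1}^{B} N_b}
$$

where $\ell_t$ is the per-token loss and $N_b$ is the number of non-padding tokens in batch $b$. The skeleton below recovers this by summing (not averaging) each per-batch loss and rescaling it before `accelerator.backward`, to undo the averaging that DDP and Accelerate's gradient accumulation would otherwise apply.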
### Skeleton code
```python
from accelerate import Accelerator
import math
import contextlib
gradient_accumulation_steps = 2
accelerator = Accelerator(gradient_accumulation_steps=gradient_accumulation_steps)
model, optimizer, training_dataloader, scheduler = accelerator.prepare(
model, optimizer, training_dataloader, scheduler
)
training_iterator = iter(training_dataloader)
num_samples_in_epoch = len(training_dataloader)
remainder = num_samples_in_epoch % gradient_accumulation_steps
remainder = remainder if remainder != 0 else gradient_accumulation_steps
total_updates = math.ceil(num_samples_in_epoch / gradient_accumulation_steps)
total_batched_samples = 0
for update_step in range(total_updates):
# In order to correctly compute the total number of non-padded tokens on which we'll compute the cross-entropy loss
# we need to pre-load the full local batch - i.e the next per_device_batch_size * accumulation_steps samples
batch_samples = []
num_batches_in_step = gradient_accumulation_steps if update_step != (total_updates - 1) else remainder
for _ in range(num_batches_in_step):
batch_samples += [next(training_iterator)]
# get local num items in batch
num_items_in_batch = sum([(batch["labels"].ne(-100)).sum() for batch in batch_samples])
# to compute it correctly in a multi-device DDP training, we need to gather the total number of items in the full batch.
num_items_in_batch = accelerator.gather(num_items_in_batch).sum().item()
for i, batch in enumerate(batch_samples):
# if we perform gradient accumulation in a multi-device set-up, we want to avoid unnecessary communications when accumulating
# cf: https://muellerzr.github.io/blog/gradient_accumulation.html
if (i < len(batch_samples) - 1 and accelerator.num_processes > 1):
ctx = model.no_sync
else:
ctx = contextlib.nullcontext
total_batched_samples += 1
with ctx():
inputs, targets = batch
outputs = model(inputs)
loss = loss_function(outputs, targets) # the loss function should sum over samples rather than averaging
# We multiply by num_processes because the DDP calculates the average gradient across all devices whereas dividing by num_items_in_batch already takes into account all devices
# Same reason for gradient_accumulation_steps, but this time it's Accelerate that calculates the average gradient across the accumulated steps
loss = (loss * gradient_accumulation_steps * accelerator.num_processes) / num_items_in_batch
accelerator.backward(loss)
# Sync gradients and perform optimization steps once every gradient_accumulation_steps
optimizer.step()
scheduler.step()
optimizer.zero_grad()
```
### Self-contained causal LM example
```py
import torch
import copy
from accelerate import Accelerator
from accelerate.utils import set_seed
from accelerate.logging import get_logger
from torch.utils.data import Dataset, DataLoader
import math
import contextlib
# seed
set_seed(0)
logger = get_logger(__name__)
class MyDataset(Dataset):
def __init__(self, num_samples):
super().__init__()
self.len = num_samples
def __getitem__(self, index):
input_ids = torch.arange(1, index+2, dtype=torch.float32)
labels = torch.remainder(input_ids, 2)
return {"input_ids": input_ids, "labels": labels}
def __len__(self):
return self.len
def collate_fn(features):
input_ids = torch.nn.utils.rnn.pad_sequence([f["input_ids"] for f in features], batch_first=True, padding_value=-100)
labels = torch.nn.utils.rnn.pad_sequence([f["labels"] for f in features], batch_first=True, padding_value=-100)
return {"input_ids": input_ids[..., None], "labels": labels[..., None]}
# define toy inputs and labels
gradient_accumulation_steps = 2
per_device_batch_size = 4
# define accelerator
accelerator = Accelerator(gradient_accumulation_steps=gradient_accumulation_steps)
# define dataset and dataloader
# for this toy example, we'll compute gradient descent over one single global batch
dataset = MyDataset(per_device_batch_size*gradient_accumulation_steps*accelerator.num_processes)
dataloader = DataLoader(dataset, batch_size=per_device_batch_size, collate_fn=collate_fn)
# define model, model_optimizer and loss function
model = torch.nn.Linear(1, 2, bias=False)
model_clone = copy.deepcopy(model)
criterion = torch.nn.CrossEntropyLoss(reduction="sum") # must sum over samples rather than averaging
model_optimizer = torch.optim.SGD(model.parameters(), lr=0.08)
logger.warning(f"initial model weight is {model.weight.detach().cpu().squeeze()}")
logger.warning(f"initial model clone weight is {model_clone.weight.detach().cpu().squeeze()}")
# prepare artifacts - accelerator handles device placement and dataloader splitting
model, model_optimizer = accelerator.prepare(model, model_optimizer)
dataloader = accelerator.prepare_data_loader(dataloader, device_placement=True)
training_iterator = iter(dataloader)
num_samples_in_epoch = len(dataloader)
remainder = num_samples_in_epoch % gradient_accumulation_steps
remainder = remainder if remainder != 0 else gradient_accumulation_steps
total_gradient_updates = math.ceil(num_samples_in_epoch / gradient_accumulation_steps)
total_batched_samples = 0
for update_step in range(total_gradient_updates):
# In order to correctly compute the total number of non-padded tokens on which we'll compute the cross-entropy loss
# we need to pre-load the full local batch - i.e the next per_device_batch_size * accumulation_steps samples
batch_samples = []
num_batches_in_step = gradient_accumulation_steps if update_step != (total_gradient_updates - 1) else remainder
for _ in range(num_batches_in_step):
batch_samples += [next(training_iterator)]
# get local num items in batch
local_num_items_in_batch = sum([(batch["labels"].ne(-100)).sum() for batch in batch_samples])
logger.warning(f"Step {update_step} - Device {accelerator.process_index} - num items in the local batch {local_num_items_in_batch}", main_process_only=False)
# to compute it correctly in a multi-device DDP training, we need to gather the total number of items in the full batch.
num_items_in_batch = accelerator.gather(local_num_items_in_batch).sum().item()
logger.warning(f"Total num items {num_items_in_batch}")
for i, batch in enumerate(batch_samples):
inputs, labels = batch["input_ids"], batch["labels"]
total_batched_samples += 1
# if we perform gradient accumulation in a multi-device set-up, we want to avoid unnecessary communications when accumulating
# cf: https://muellerzr.github.io/blog/gradient_accumulation.html
if (i < len(batch_samples) - 1 and accelerator.num_processes > 1):
ctx = model.no_sync
else:
ctx = contextlib.nullcontext
with ctx():
outputs = model(inputs)
loss = criterion(outputs.view(-1, 2), labels.view(-1).to(torch.int64))
# We multiply by num_processes because the DDP calculates the average gradient across all devices whereas dividing by num_items_in_batch already takes into account all devices
# Same reason for gradient_accumulation_steps, but this time it's Accelerate that calculates the average gradient across the accumulated steps
loss = (loss * gradient_accumulation_steps * accelerator.num_processes) / num_items_in_batch
accelerator.backward(loss)
model_optimizer.step()
model_optimizer.zero_grad()
logger.warning(f"Device {accelerator.process_index} - w/ accumulation, the final model weight is {accelerator.unwrap_model(model).weight.detach().cpu().squeeze()}", main_process_only=False)
# We now do the same operation but on a single device and without gradient accumulation
if accelerator.is_main_process:
# prepare one single entire batch
dataloader = DataLoader(dataset, batch_size=len(dataset), collate_fn=collate_fn)
full_batch_without_accum = next(iter(dataloader))
total_inputs, total_labels = full_batch_without_accum["input_ids"], full_batch_without_accum["labels"]
model_clone_optimizer = torch.optim.SGD(model_clone.parameters(), lr=0.08)
# train the cloned model
loss = torch.nn.CrossEntropyLoss(reduction="mean")(model_clone(total_inputs).view(-1, 2), total_labels.view(-1).to(torch.int64))
model_clone_optimizer.zero_grad()
loss.backward()
model_clone_optimizer.step()
# We should have the same final weights.
logger.warning(f"w/o accumulation, the final model weight is {model_clone.weight.detach().cpu().squeeze()}")
```
Results on a single device - gradient accumulation steps set to 1 and batch_size set to 8:
```
initial model weight is tensor([-0.0075, 0.5364])
initial model clone weight is tensor([-0.0075, 0.5364])
Step 0 - Device 0 - num items in the local batch 36
Total num items 36
Device 0 - w/ accumulation, the final model weight is tensor([0.0953, 0.4337])
w/o accumulation, the final model weight is tensor([0.0953, 0.4337])
```
Results on a two-device set-up - gradient accumulation steps set to 2 and batch_size set to 4:
```
initial model weight is tensor([-0.0075, 0.5364])
initial model clone weight is tensor([-0.0075, 0.5364])
Step 0 - Device 0 - num items in the local batch 52
Step 0 - Device 1 - num items in the local batch 84
Total num items 136
Device 1 - w/ accumulation, the final model weight is tensor([0.2117, 0.3172])
Device 0 - w/ accumulation, the final model weight is tensor([0.2117, 0.3172])
w/o accumulation, the final model weight is tensor([0.2117, 0.3172])
```
### To go further:
Please find a complete example script on a real world training run in the examples folder at the path [`accelerate/examples/by_feature/gradient_accumulation_for_autoregressive_models.py`](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/gradient_accumulation_for_autoregressive_models.py).
Running it on several training configurations with constant global batch size equal to 32 gives the following graph:
<div style="text-align: center">
<img src="https://huggingface.co/datasets/hf-audio/gradient_accumulation_example/resolve/main/training_losses.png">
</div>
Note that the training losses are exactly the same up to training step 20. The small deviation after this training step occurs at the very end of the first epoch, because, by [default](https://huggingface.co/docs/accelerate/en/package_reference/torch_wrappers#accelerate.data_loader.prepare_data_loader.even_batches), the dataloader duplicates the samples at the beginning of the dataset when the total batch size doesn't exactly divide the dataset.
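If that duplication is undesirable, one option (a sketch, not used in the script above) is to disable `even_batches` through the dataloader configuration:
```python
from accelerate import Accelerator
from accelerate.utils import DataLoaderConfiguration

# do not pad the last global batch by re-using samples from the start of the dataset
dataloader_config = DataLoaderConfiguration(even_batches=False)
accelerator = Accelerator(dataloader_config=dataloader_config)
```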

View File

@ -94,6 +94,9 @@ use_cpu: true
accelerate launch examples/nlp_example.py
```
> [!CAUTION]
> `accelerator.prepare` can currently only handle simultaneously preparing multiple models (and no optimizer) OR a single model-optimizer pair for training. Other attempts (e.g., two model-optimizer pairs) will raise a verbose error. To work around this limitation, consider separately using `accelerator.prepare` for each model-optimizer pair.
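For example, the workaround from the caution above could look like this (the two model-optimizer pairs are illustrative):
```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()
model_a, model_b = torch.nn.Linear(4, 4), torch.nn.Linear(4, 4)
optimizer_a = torch.optim.SGD(model_a.parameters(), lr=1e-3)
optimizer_b = torch.optim.SGD(model_b.parameters(), lr=1e-3)

# prepare each model-optimizer pair in its own call, rather than all at once
model_a, optimizer_a = accelerator.prepare(model_a, optimizer_a)
model_b, optimizer_b = accelerator.prepare(model_b, optimizer_b)
```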
**Scenario 2**: Acceleration of distributed CPU training
We use Intel oneCCL for communication, combined with the Intel® MPI library, to deliver flexible, efficient, scalable cluster messaging on Intel® architecture. You can refer to [this guide](https://huggingface.co/docs/transformers/perf_train_cpu_many) for installation instructions.

View File

@ -92,7 +92,7 @@ Under the hood, the Local SGD code **disables** automatic gradient synchronizati
## Limitations
The current implementation works only with basic multi-GPU (or multi-CPU) training without, e.g., [DeepSpeed.](https://github.com/microsoft/DeepSpeed).
The current implementation works only with basic multi-GPU (or multi-CPU) training without, e.g., [DeepSpeed.](https://github.com/deepspeedai/DeepSpeed).
## References

View File

@ -15,7 +15,7 @@ rendered properly in your Markdown viewer.
# Low Precision Training Methods
Accelerate provides integrations to train on lower precision methods using specified supported hardware through the `TransformersEngine` and `MS-AMP` packages. This documentation will help guide you through what hardware is supported, how to configure your [`Accelerator`] to leverage the low precision methods, and what you can expect when training.
Accelerate provides integrations to train on lower precision methods using specified supported hardware through the `TransformersEngine`, `MS-AMP`, and `torchao` packages. This documentation will help guide you through what hardware is supported, how to configure your [`Accelerator`] to leverage the low precision methods, and what you can expect when training.
## What training on FP8 means
@ -26,11 +26,11 @@ This is only enabled on specific NVIDIA hardware, namely:
* Anything after the 3000 series consumer graphics cards (such as the 4090)
* Hopper-based GPU architectures (such as the `H100` and `H200`)
What this will result in is some gain in the memory used (as we've cut the needed memory in half for some parts of training) and an increase in throughput *should* be seen as well for larger models that can replace certain layers with FP8-enabled ones.
What this will result in is some reduction in the memory used (as we've cut the needed memory in half for some parts of training) and an increase in throughput *should* be seen as well for larger models that can replace certain layers with FP8-enabled ones.
## Configuring the Accelerator
Currently two different backends for FP8 are supported (`TransformersEngine` and `MS-AMP`), each with different capabilities and configurations.
Currently three different backends for FP8 are supported (`TransformersEngine`, `torchao`, and `MS-AMP`), each with different capabilities and configurations.
To use either, the same core API is used. Just pass `mixed_precision="fp8"` to either the [`Accelerator`], during `accelerate config` when prompted about mixed precision, or as part of your `config.yaml` file in the `mixed_precision` key:
@ -39,27 +39,29 @@ from accelerate import Accelerator
accelerator = Accelerator(mixed_precision="fp8")
```
By default, if `MS-AMP` is available in your environment, Accelerate will automatically utilize it as a backend. To specify it yourself (and customize other parts of the FP8 mixed precision setup), you can utilize the [`utils.FP8RecipeKwargs`] or clarify it in your config `yaml`/during `accelerate launch`:
By default, if `MS-AMP` is available in your environment, Accelerate will automatically utilize it as a backend. To specify it yourself (and customize other parts of the FP8 mixed precision setup), you can utilize one of the `RecipeKwargs` dataclasses such as [`utils.AORecipeKwargs`], [`utils.TERecipeKwargs`], or [`utils.MSAMPRecipeKwargs`]; you can also clarify it in your config `yaml`/during `accelerate launch`:
```{python}
from accelerate import Accelerator
from accelerate.utils import FP8RecipeKwargs
kwargs = [FP8RecipeKwargs(backend="msamp")]
from accelerate.utils import MSAMPRecipeKwargs
kwargs = [MSAMPRecipeKwargs()]
# Or to specify the backend as `TransformersEngine` even if MS-AMP is installed
# kwargs = [FP8RecipeKwargs(backend="te")]
# kwargs = [TERecipeKwargs()]
# Or to use torchao
# kwargs = [AORecipeKwargs()]
accelerator = Accelerator(mixed_precision="fp8", kwarg_handlers=kwargs)
```
```{yaml}
mixed_precision: fp8
fp8_config:
amax_compute_algorithm: max
amax_history_length: 1024
amax_compute_algo: max
amax_history_len: 1024
backend: TE
fp8_format: HYBRID
interval: 1
margin: 0
override_linear_precision: false
override_linear_precision: (false, false, false)
use_autocast_during_eval: false
```
@ -94,7 +96,7 @@ fp8_config:
## Configuring TransformersEngine
TransformersEngine has much more available for customizing how and what FP8 calculations are performed. A full list of supported arguments and what they mean are available in [NVIDIA's documentation](https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/api/common.html), however they are restated as part of [`FP8KwargsHandler`]'s docstring for your convenience.
TransformersEngine has many options for customizing how and what FP8 calculations are performed. A full list of supported arguments and what they mean are available in [NVIDIA's documentation](https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/api/common.html), however they are restated as part of [`FP8KwargsHandler`]'s docstring for your convenience.
Accelerate tries to set sensible defaults, but exploring and tweaking the various parameters yourself can potentially lead to better performance.
@ -114,16 +116,32 @@ Similarly this can be set in your `config.yaml`:
```{yaml}
mixed_precision: fp8
fp8_config:
amax_compute_algorithm: max
amax_history_length: 1024
amax_compute_algo: max
amax_history_len: 1024
backend: TE
fp8_format: HYBRID
interval: 1
margin: 0
override_linear_precision: false
override_linear_precision: (false, false, false)
use_autocast_during_eval: false
```
## Configuring `torchao`
`torchao` is a [PyTorch-driven](https://github.com/pytorch/ao/tree/main/torchao/float8) hackable FP8 backend, aiming to be more approachable than the prior two engines. One of the core differences with `ao` compared to the prior two is that, for numerical stability, it's generally better to keep the first *and* last layers of the model at regular precision (be it FP32 or BF16) and quantize the other layers down to FP8. As a result, a config for `ao` looks a bit different:
> Note: this API is experimental and is subject to change
```{python}
from accelerate import Accelerator
from accelerate.utils import AORecipeKwargs
kwargs = [AORecipeKwargs()]
accelerator = Accelerator(mixed_precision="fp8", kwarg_handlers=kwargs)
```
To learn more about the specific parameters to be used, please see the official `torchao` repo.
## Example Zoo
We have examples showcasing training with FP8 both with accelerate and its underlying implementation available in the accelerate repo.
@ -143,3 +161,4 @@ To learn more about training in FP8 please check out the following resources:
* [Our concept guide](../concept_guides/low_precision_training) detailing into more about both TransformersEngine and MS-AMP
* [The `transformers-engine` documentation](https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/api/common.html)
* [The `MS-AMP` documentation](https://azure.github.io/MS-AMP/docs/)
* [The `torchao` documentation](https://github.com/pytorch/ao/tree/main/torchao/float8)

View File

@ -19,7 +19,7 @@ rendered properly in your Markdown viewer.
[Megatron-LM](https://github.com/NVIDIA/Megatron-LM) enables training large transformer language models at scale.
It provides efficient tensor, pipeline and sequence based model parallelism for pre-training transformer based
Language Models such as [GPT](https://arxiv.org/abs/2005.14165) (Decoder Only), [BERT](https://arxiv.org/pdf/1810.04805.pdf) (Encoder Only) and [T5](https://arxiv.org/abs/1910.10683) (Encoder-Decoder).
For detailed information and how things work behind the scene please refer the github [repo](https://github.com/NVIDIA/Megatron-LM).
For detailed information and how things work behind the scene please refer to the github [repo](https://github.com/NVIDIA/Megatron-LM).
## What is integrated?
@ -30,7 +30,7 @@ a. **Tensor Parallelism (TP)**: Reduces memory footprint without much additional
Each tensor is split into multiple chunks with each shard residing on separate GPU. At each step, the same mini-batch of data is processed
independently and in parallel by each shard followed by syncing across all GPUs (`all-reduce` operation).
In a simple transformer layer, this leads to 2 `all-reduces` in the forward path and 2 in the backward path.
For more details, please refer research paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using
For more details, please refer to the research paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using
Model Parallelism](https://arxiv.org/pdf/1909.08053.pdf) and
this section of blogpost [The Technology Behind BLOOM Training](https://huggingface.co/blog/bloom-megatron-deepspeed#tensor-parallelism).
@ -45,7 +45,7 @@ this section of blogpost [The Technology Behind BLOOM Training](https://huggingf
c. **Sequence Parallelism (SP)**: Reduces memory footprint without any additional communication. Only applicable when using TP.
It reduces activation memory required as it prevents the same copies to be on the tensor parallel ranks
post `all-reduce` by replacing then with `reduce-scatter` and `no-op` operation would be replaced by `all-gather`.
post `all-reduce` by replacing them with `reduce-scatter` and `no-op` operation would be replaced by `all-gather`.
As `all-reduce = reduce-scatter + all-gather`, this saves a ton of activation memory at no added communication cost.
To put it simply, it shards the outputs of each transformer layer along sequence dimension, e.g.,
if the sequence length is `1024` and the TP size is `4`, each GPU will have `256` tokens (1024/4) for each sample.
@ -56,7 +56,7 @@ d. **Data Parallelism (DP)** via Distributed Optimizer: Reduces the memory footp
(versus the traditional method of replicating the optimizer state across data parallel ranks).
For example, when using Adam optimizer with mixed-precision training, each parameter accounts for 12 bytes of memory.
This gets distributed equally across the GPUs, i.e., each parameter would account for 3 bytes (12/4) if we have 4 GPUs.
For more details, please refer the research paper [ZeRO: Memory Optimizations Toward Training Trillion
For more details, please refer to the research paper [ZeRO: Memory Optimizations Toward Training Trillion
Parameter Models](https://arxiv.org/pdf/1910.02054.pdf) and following section of blog
[The Technology Behind BLOOM Training](https://huggingface.co/blog/bloom-megatron-deepspeed#zero-data-parallelism).
@ -66,7 +66,7 @@ For example, for GPT-3, this leads to 70% reduction in required memory for activ
only 2.7% FLOPs overhead for recomputation of activations. For more details, please refer to the research paper
[Reducing Activation Recomputation in Large Transformer Models](https://arxiv.org/pdf/2205.05198.pdf).
f. **Fused Kernels**: Fused Softmax, Mixed Precision Fused Layer Norm and Fused gradient accumulation to weight gradient computation of linear layer.
f. **Fused Kernels**: Fused Softmax, Mixed Precision Fused Layer Norm and Fused gradient accumulation to weight gradient computation of linear layer.
PyTorch JIT compiled Fused GeLU and Fused Bias+Dropout+Residual addition.
g. **Support for Indexed datasets**: Efficient binary format of datasets for large scale training. Support for the `mmap`, `cached` index file and the `lazy` loader format.
@ -445,7 +445,7 @@ python checkpoint_utils/megatgron_gpt2/checkpoint_reshaping_and_interoperability
## Megatron-LM GPT models support returning logits and `megatron_generate` function for text generation
1. Returning logits require setting `require_logits=True` in MegatronLMPlugin as shown below.
These would be available on the in the last stage of pipeline.
These would be available in the last stage of pipeline.
```python
megatron_lm_plugin = MegatronLMPlugin(return_logits=True)
```
@ -569,7 +569,7 @@ setting is synonymous with gradient accumulation.
7. When using Megatron-LM, use `accelerator.save_state` and `accelerator.load_state` for saving and loading checkpoints.
8. Below are the mapping from Megatron-LM model architectures to the the equivalent transformers model architectures.
8. Below are the mapping from Megatron-LM model architectures to the equivalent transformers model architectures.
Only these transformers model architectures are supported.
a. Megatron-LM [BertModel](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/bert_model.py) :

View File

@ -44,10 +44,7 @@ accelerate launch /examples/cv_example.py --data_dir images
## A few caveats to be aware of
1. We strongly recommend to install PyTorch >= 1.13 (nightly version at the time of writing) on your MacOS machine.
It has major fixes related to model correctness and performance improvements for transformer based models.
Please refer to https://github.com/pytorch/pytorch/issues/82707 for more details.
2. Distributed setups `gloo` and `nccl` are not working with `mps` device.
1. Distributed setups `gloo` and `nccl` are not working with `mps` device.
This means that currently only single GPU of `mps` device type can be used.
Finally, please, remember that, `Accelerate` only integrates MPS backend, therefore if you

View File

@ -86,7 +86,7 @@ To quantize your empty model with the selected configuration, you need to use [`
```py
from accelerate.utils import load_and_quantize_model
quantized_model = load_and_quantize_model(empty_model, weights_location=weights_location, bnb_quantization_config=bnb_quantization_config, device_map = "auto")
quantized_model = load_and_quantize_model(empty_model, weights_location=weights_location, bnb_quantization_config=bnb_quantization_config)
```
### Saving and loading 8-bit model

View File

@ -145,8 +145,8 @@ image_uri: null
mixed_precision: fp16
num_machines: 1
profile: xxxxx
py_version: py38
pytorch_version: 1.10.2
py_version: py10
pytorch_version: 2.5.0
region: us-east-1
transformers_version: 4.17.0
use_cpu: false

View File

@ -225,7 +225,7 @@ In [/slurm/submit_multinode.sh](./slurm/submit_multinode.sh) we must specify the
In [/slurm/submit_multicpu.sh](./slurm/submit_multicpu.sh) we must specify the number of nodes that will be part of the training (`--num_machines`), how many CPU processes we will use in total (`--num_processes`), the [`backend`](https://pytorch.org/docs/stable/elastic/run.html#note-on-rendezvous-backend), `--main_process_ip` which will be the address of the master node, and the `--main_process_port`. `mpirun_hostfile` specifies to run the job using MPIRun.
In both scripts, we run `activateEnviroment.sh` at the beginning. This script should contain the necessary instructions to initialize the environment for execution. Below, we show an example that loads the necessary libraries ([Environment modules](https://github.com/cea-hpc/modules)), activates the Python environment, and sets up various environment variables, most of them to run the scripts in offline mode in case we don't have internet connection from the cluster.
In both scripts, we run `activateEnvironment.sh` at the beginning. This script should contain the necessary instructions to initialize the environment for execution. Below, we show an example that loads the necessary libraries ([Environment modules](https://github.com/cea-hpc/modules)), activates the Python environment, and sets up various environment variables, most of them to run the scripts in offline mode in case we don't have internet connection from the cluster.
```bash
# activateEnvironment.sh

View File

@ -12,7 +12,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
from typing import List
import evaluate
import numpy as np
@ -61,7 +60,7 @@ EVAL_BATCH_SIZE = 32
def get_fold_dataloaders(
accelerator: Accelerator, dataset: DatasetDict, train_idxs: List[int], valid_idxs: List[int], batch_size: int = 16
accelerator: Accelerator, dataset: DatasetDict, train_idxs: list[int], valid_idxs: list[int], batch_size: int = 16
):
"""
Gets a set of train, valid, and test dataloaders for a particular fold

View File

@ -0,0 +1,341 @@
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import contextlib
import math
import os
import torch
from datasets import load_dataset
from torch.optim import AdamW
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer, get_constant_schedule, set_seed
from accelerate import Accelerator, DistributedType
########################################################################
# This is a fully working simple example to use Accelerate
# and perform gradient accumulation on samples of variable size
#
# This example trains a SmolLM base model on WikiText-2 v1
# in any of the following settings (with the same script):
# - single CPU or single GPU
# - multi GPUS (using PyTorch distributed mode)
# - (multi) TPUs
# - fp16 (mixed-precision) or fp32 (normal precision)
#
# To run it in each of these various modes, follow the instructions
# in the readme for examples:
# https://github.com/huggingface/accelerate/tree/main/examples
#
########################################################################
EVAL_BATCH_SIZE = 32
def get_dataloaders(accelerator: Accelerator, batch_size: int = 16, max_training_samples=500):
"""
Creates a set of `DataLoader`s for the `Salesforce/wikitext` dataset,
using "HuggingFaceTB/SmolLM-360M" as the tokenizer.
Args:
accelerator (`Accelerator`):
An `Accelerator` object
batch_size (`int`, *optional*):
The batch size for the train and validation DataLoaders.
"""
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM-360M")
tokenizer.pad_token = tokenizer.eos_token
with accelerator.local_main_process_first():
datasets = load_dataset("Salesforce/wikitext", "wikitext-2-v1")
datasets["train"] = datasets["train"].select(range(max_training_samples))
def tokenize_function(examples):
# max_length=None => use the model max length (it's actually the default)
outputs = tokenizer(examples["text"], truncation=True, max_length=None, return_attention_mask=False)
return outputs
# Filter out empty texts
with accelerator.main_process_first():
datasets = datasets.filter(
lambda x: len(x) > 0,
input_columns="text",
)
# Apply the method we just defined to all the examples in all the splits of the dataset
# starting with the main process first:
with accelerator.main_process_first():
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
remove_columns=["text"],
)
# Filter out empty samples
with accelerator.main_process_first():
tokenized_datasets = tokenized_datasets.filter(
lambda x: len(x) > 0,
input_columns="input_ids",
)
def collate_fn(examples):
# On TPU it's best to pad everything to the same length or training will be very slow.
max_length = (
128
if accelerator.distributed_type == DistributedType.XLA
else max([len(e["input_ids"]) for e in examples])
)
# When using mixed precision we want round multiples of 8/16
if accelerator.mixed_precision == "fp8":
pad_to_multiple_of = 16
elif accelerator.mixed_precision != "no":
pad_to_multiple_of = 8
else:
pad_to_multiple_of = None
batch = tokenizer.pad(
examples,
padding="max_length",
max_length=max_length + 1,
pad_to_multiple_of=pad_to_multiple_of,
return_tensors="pt",
)
batch["labels"] = batch["input_ids"][:, 1:]
batch["input_ids"] = batch["input_ids"][:, :-1]
batch["labels"] = torch.where(batch["labels"] == tokenizer.pad_token_id, -100, batch["labels"])
return batch
# Instantiate dataloaders.
train_dataloader = DataLoader(
tokenized_datasets["train"], shuffle=False, collate_fn=collate_fn, batch_size=batch_size
)
eval_dataloader = DataLoader(
tokenized_datasets["validation"], shuffle=False, collate_fn=collate_fn, batch_size=EVAL_BATCH_SIZE
)
return train_dataloader, eval_dataloader
# For testing only
if os.environ.get("TESTING_MOCKED_DATALOADERS", None) == "1":
from accelerate.test_utils.training import mocked_dataloaders_for_autoregressive_models
get_dataloaders = mocked_dataloaders_for_autoregressive_models # noqa: F811
def training_function(config, args):
# For testing only
if os.environ.get("TESTING_MOCKED_DATALOADERS", None) == "1":
config["num_epochs"] = 2
gradient_accumulation_steps = int(args.gradient_accumulation_steps)
# Initialize accelerator
if args.with_wandb_tracking:
accelerator = Accelerator(
cpu=args.cpu,
mixed_precision=args.mixed_precision,
gradient_accumulation_steps=gradient_accumulation_steps,
log_with="wandb",
)
else:
accelerator = Accelerator(
cpu=args.cpu, mixed_precision=args.mixed_precision, gradient_accumulation_steps=gradient_accumulation_steps
)
if accelerator.distributed_type == DistributedType.XLA and gradient_accumulation_steps > 1:
raise NotImplementedError(
"Gradient accumulation on TPUs is currently not supported. Pass `gradient_accumulation_steps=1`"
)
# Sample hyper-parameters for learning rate, batch size, seed and a few other HPs
lr = config["lr"]
num_epochs = int(config["num_epochs"])
seed = int(config["seed"])
batch_size = int(config["batch_size"])
max_grad_norm = config["max_grad_norm"]
# We need to initialize the trackers we use, and also store our configuration
if args.with_wandb_tracking:
run = os.path.split(__file__)[-1].split(".")[0]
run_name = f"{accelerator.num_processes}GPU-grad{gradient_accumulation_steps}-bs{batch_size}"
accelerator.init_trackers(
run,
config,
init_kwargs={"wandb": {"name": run_name}},
)
set_seed(seed)
train_dataloader, eval_dataloader = get_dataloaders(accelerator, batch_size)
# Instantiate the model (we build the model here so that the seed also controls the initialization of new weights)
model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM-360M")
# We could avoid this line since the accelerator is set with `device_placement=True` (default value).
# Note that if you are placing tensors on devices manually, this line absolutely needs to be before the optimizer
# creation otherwise training will not work on TPU (`accelerate` will kindly throw an error to make us aware of that).
model = model.to(accelerator.device)
# Instantiate optimizer
optimizer = AdamW(params=model.parameters(), lr=lr)
# Instantiate scheduler
lr_scheduler = get_constant_schedule(
optimizer=optimizer,
)
# Prepare everything
# There is no specific order to remember, we just need to unpack the objects in the same order we gave them to the
# prepare method.
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
)
num_samples_in_epoch = len(train_dataloader)
remainder = num_samples_in_epoch % gradient_accumulation_steps
remainder = remainder if remainder != 0 else gradient_accumulation_steps
total_gradient_updates = math.ceil(num_samples_in_epoch / gradient_accumulation_steps)
total_batched_samples = 0
# Now we train the model
for epoch in range(num_epochs):
model.train()
training_iterator = iter(train_dataloader)
for update_step in range(total_gradient_updates):
# In order to correctly compute the total number of non-padded tokens on which we'll compute the cross-entropy loss,
# we need to pre-load the full local batch - i.e. the next per_device_batch_size * accumulation_steps samples
batch_samples = []
num_batches_in_step = (
gradient_accumulation_steps if update_step != (total_gradient_updates - 1) else remainder
)
for _ in range(num_batches_in_step):
batch_samples += [next(training_iterator)]
# get local num items in batch
local_num_items_in_batch = sum([(batch["labels"].ne(-100)).sum() for batch in batch_samples])
# to compute it correctly in a multi-device DDP training, we need to gather the total number of items in the full batch.
num_items_in_batch = accelerator.gather(local_num_items_in_batch).sum().item()
losses = []
for i, batch in enumerate(batch_samples):
# if we perform gradient accumulation in a multi-device setup, we want to avoid unnecessary communications when accumulating
# cf: https://muellerzr.github.io/blog/gradient_accumulation.html
ctx = (
model.no_sync
if (i < len(batch_samples) - 1 and accelerator.num_processes > 1)
else contextlib.nullcontext
)
with ctx():
total_batched_samples += 1
outputs = model(**batch, use_cache=False, num_items_in_batch=num_items_in_batch)
loss = outputs.loss
# We multiply by num_processes because DDP averages the gradients across all devices, whereas dividing by num_items_in_batch already takes all devices into account
# Same reason for gradient_accumulation_steps, but this time it's Accelerate that averages the gradient across the accumulated steps
# Because the loss is already divided by `num_items_in_batch` in the `transformers` code, we don't need to do it again
loss = loss * gradient_accumulation_steps * accelerator.num_processes
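# Net effect: each backward pass contributes sum(per-token losses) / num_items_in_batch, so the accumulated gradient matches that of one large batch spanning all devices and accumulation steps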
accelerator.backward(loss)
losses.append(loss.detach())
# Sync gradients and perform optimization steps once every gradient_accumulation_steps
grad_norm = accelerator.clip_grad_norm_(model.parameters(), max_grad_norm)
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
losses = accelerator.gather(sum(losses)).sum().item() / (
accelerator.num_processes * gradient_accumulation_steps
)
grad_norm = grad_norm.detach().item() if isinstance(grad_norm, torch.Tensor) else grad_norm
accelerator.print(
f"epoch {epoch} - update step {update_step}:: grad norm: {grad_norm} ::train loss: {losses}"
)
if args.with_wandb_tracking:
accelerator.log(
{
"train/grad_norm": grad_norm,
"train/epoch": epoch,
"train/loss": losses,
},
step=update_step + total_gradient_updates * epoch,
)
model.eval()
losses = []
for step, batch in enumerate(eval_dataloader):
with torch.no_grad():
outputs = model(**batch, use_cache=False)
eval_loss = outputs.loss
losses.append(accelerator.gather_for_metrics(eval_loss.repeat(EVAL_BATCH_SIZE)))
losses = torch.cat(losses)
try:
eval_loss = torch.mean(losses)
perplexity = math.exp(eval_loss)
except OverflowError:
perplexity = float("inf")
# Use accelerator.print to print only on the main process.
accelerator.print(f"epoch {epoch}:: eval perplexity: {perplexity} eval_loss: {eval_loss}")
if args.with_wandb_tracking:
accelerator.log(
{
"eval/perplexity": perplexity,
"eval/loss": eval_loss,
"eval/epoch": epoch,
},
step=update_step + total_gradient_updates * epoch,
)
accelerator.end_training()
def main():
parser = argparse.ArgumentParser(description="Simple example of training script.")
parser.add_argument(
"--mixed_precision",
type=str,
default=None,
choices=["no", "fp16", "bf16", "fp8"],
help="Whether to use mixed precision. Choose"
"between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
"and an Nvidia Ampere GPU.",
)
parser.add_argument(
"--gradient_accumulation_steps",
type=int,
default=1,
help="The number of minibatches to be ran before gradients are accumulated.",
)
parser.add_argument(
"--per_device_batch_size",
type=int,
default=2,
help="The size of each minibatch",
)
parser.add_argument("--cpu", action="store_true", help="If passed, will train on the CPU.")
parser.add_argument(
"--with_wandb_tracking",
action="store_true",
help="Whether to load in wandb from the environment and use them for logging.",
)
args = parser.parse_args()
config = {"lr": 2e-5, "num_epochs": 3, "seed": 42, "batch_size": args.per_device_batch_size, "max_grad_norm": 1.0}
training_function(config, args)
if __name__ == "__main__":
main()

View File

@ -611,7 +611,7 @@ def main():
if isinstance(checkpointing_steps, int):
if completed_steps % checkpointing_steps == 0:
output_dir = f"step_{completed_steps }"
output_dir = f"step_{completed_steps}"
if args.output_dir is not None:
output_dir = os.path.join(args.output_dir, output_dir)
accelerator.save_state(output_dir)

View File

@ -7,11 +7,11 @@ fp8_config:
backend: TE # Can be TE | MS-AMP
# The following are TE specific arguments.
# See https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/api/common.html#common-api for more details
amax_history_length: 1024
amax_history_len: 1024
fp8_format: E4M3
interval: 1
margin: 0
override_linear_precision: false
override_linear_precision: (false, false, false)
# Generally this should always be set to `false` to have the most realistic fp8 eval performance
use_autocast_during_eval: false
# If using MS-AMP, we ignore all of the prior and set an opt_level

View File

@ -24,6 +24,7 @@ from torch.utils.data import DataLoader, Dataset
from torchvision.transforms import Compose, RandomResizedCrop, Resize, ToTensor
from accelerate import Accelerator
from accelerate.utils import set_seed
########################################################################
@ -93,10 +94,7 @@ def training_function(config, args):
label_to_id = {lbl: i for i, lbl in enumerate(id_to_label)}
# Set the seed before splitting the data.
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
set_seed(seed)
# Split our filenames between train and validation
random_perm = np.random.permutation(len(file_names))
cut = int(0.8 * len(file_names))

36
examples/fsdp2/README.md Normal file
View File

@ -0,0 +1,36 @@
## FSDP2 Examples
This folder contains examples of using FSDP2 with Accelerate, utilizing extra methods to improve training speed, performance or accuracy.
### FSDP2 + ao Float8Linear
In file `fsdp2_fp8.py` we use `Float8Linear` from `ao` to train a model partially in FP8 precision. We utilize `AORecipeKwargs` to pass the `Float8LinearConfig` to the accelerator,
which replaces the default `torch.nn.Linear` with `Float8Linear`. We also utilize `TorchDynamoPlugin` together with regional compilation to compile the model,
gaining even more speed and memory savings; since `ao` doesn't ship with any kernels by default, the performance gains come from compiling the model.
Replacing linear layers with `Float8Linear` can greatly improve performance, if used correctly on hardware that supports FP8 tensor cores. The gains depend heavily on the model dimensions and the sequence length used for training.
You can view the performance of `Float8Linear` as a function of matrix dimensions in [this document](https://github.com/pytorch/ao/blob/main/torchao/float8/README.md#performance).
In our example, we use an 8B Llama 3.1 model, which has a hidden dimension of 4096, and we train on a sequence length of 8192. In the images below, we can see that this improves performance by ~25% compared to `bf16`, reaching ~10,000 tokens per second per device on 8x H100 GPUs, compared to ~8,000 tokens per second using `bf16`, while the loss stays roughly the same. We can also see that the achieved FLOPS increase when using FP8.
<div style="display: flex; gap: 25px;">
<div style="text-align: center; width: 49%;">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/accelerate/examples/fsdp2/fp8_tps.png" alt="tps" style="width: 100%;">
<p style="text-align: center; margin-top: 8px;">TPs per device, bf16 vs fp8</p>
</div>
<div style="text-align: center; width: 49%;">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/accelerate/examples/fsdp2/fp8_tflops.png" alt="tflops" style="width: 100%;">
<p style="text-align: center; margin-top: 8px;">TFLOPS per device, bf16 vs fp8. We cannot really compare MFU as fp8 tensor cores are used as well.</p>
</div>
<div style="text-align: center; width: 49%;">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/accelerate/examples/fsdp2/fp8_loss.png" alt="loss" style="width: 100%; max-width: 900px;">
<p style="text-align: center; margin-top: 8px;">Loss curve, bf16 vs fp8, it's hard to see the difference as the curves mostly overlap</p>
</div>
</div>
The figures above were generated on 8x H100 SXM GPUs, with 8192 sequence length and 1000 steps. To run the example, you can use the following command, where you can specify the precision to train in:
```bash
accelerate launch fsdp2_fp8.py --sequence-length 8192 --num-steps 1000 --log-with wandb --precision [fp8 | bf16]
```
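The key pieces of the wiring, shown here as a minimal sketch that omits the mixed-precision and tracker setup (see `fsdp2_fp8.py` in this folder for the full, runnable example):
```python
import torch
from torchao.float8 import Float8LinearConfig
from transformers import AutoConfig, AutoModelForCausalLM

from accelerate import Accelerator
from accelerate.utils import AORecipeKwargs, FullyShardedDataParallelPlugin, TorchDynamoPlugin

# FSDP2 sharding + regional compilation, as described above
fsdp2_plugin = FullyShardedDataParallelPlugin(
    fsdp_version=2,
    cpu_ram_efficient_loading=False,  # cannot be combined with torchao fp8
    auto_wrap_policy="transformer_based_wrap",
    transformer_cls_names_to_wrap=["LlamaDecoderLayer"],
)
dynamo_plugin = TorchDynamoPlugin(backend="inductor", use_regional_compilation=True)

# AORecipeKwargs carries the Float8LinearConfig; Accelerate then swaps `torch.nn.Linear` for `Float8Linear`
accelerator = Accelerator(
    fsdp_plugin=fsdp2_plugin,
    dynamo_plugin=dynamo_plugin,
    kwargs_handlers=[AORecipeKwargs(config=Float8LinearConfig(enable_fsdp_float8_all_gather=True))],
)

model = AutoModelForCausalLM.from_config(
    AutoConfig.from_pretrained("NousResearch/Hermes-3-Llama-3.1-8B", use_cache=False),
    torch_dtype=torch.bfloat16,
)
model, optimizer = accelerator.prepare(model, torch.optim.AdamW(model.parameters(), lr=1e-5))
```
Run this under `accelerate launch` so the FSDP2 sharding and compilation are applied on every process.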

255
examples/fsdp2/fsdp2_fp8.py Normal file
View File

@ -0,0 +1,255 @@
# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Minimal example of training with FP8 precision using FSDP2 via Accelerate.
This example demonstrates how to use torchao's Float8LinearConfig with Accelerate's AORecipeKwargs.
"""
import argparse
import time
import torch
from datasets import Dataset, load_dataset
from torch.utils.data import DataLoader
from torchao.float8 import Float8LinearConfig
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer
from accelerate import Accelerator
from accelerate.utils import AORecipeKwargs, FullyShardedDataParallelPlugin, TorchDynamoPlugin, set_seed
WARMUP_STEPS = 10
MODEL_ID = "NousResearch/Hermes-3-Llama-3.1-8B"
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument("--sequence-length", type=int, default=8192, help="Sequence length for the dataset")
parser.add_argument("--num-steps", type=int, default=1000, help="Number of steps to train for")
parser.add_argument("--precision", type=str, default="fp8", choices=["fp8", "bf16"], help="Precision to train in")
parser.add_argument("--log-with", type=str, default="wandb", help="Log with wandb or tensorboard")
return parser.parse_args()
def get_model_flops_per_token(model: AutoModelForCausalLM, args: argparse.Namespace) -> float:
"""
Get the number of flops per token for the model.
Args:
model (AutoModelForCausalLM): Model to get the flops for
"""
cfg = model.config
head_dim = cfg.hidden_size // cfg.num_attention_heads
# MLP: 3 matmuls
mlp_flops = 18 * cfg.hidden_size * cfg.intermediate_size
# Attention projections (q, k, v, o) - excluding the attention dot product
attn_flops = 12 * cfg.hidden_size * head_dim * (cfg.num_attention_heads + cfg.num_key_value_heads)
# attn (dotproduct) - this scales quadratically with sequence length, therefore we have to account for it here too
attn_dotproduct_flops = 12 * cfg.num_attention_heads * head_dim * args.sequence_length
# we also ignore embeddings and layernorms, etc
return (mlp_flops + attn_flops + attn_dotproduct_flops) * cfg.num_hidden_layers
def get_dataset(accelerator: Accelerator, tokenizer: AutoTokenizer, seq_len: int) -> Dataset:
"""
Load and prepare TinyStories dataset.
Args:
accelerator (Accelerator): Accelerate accelerator instance
tokenizer (AutoTokenizer): Hugging Face tokenizer
seq_len (int): Sequence length for the dataset
Returns:
Dataset: Packed dataset
"""
raw_dataset = load_dataset("roneneldan/TinyStories", split="train[:50%]")
def tokenize_function(examples):
tokenized_batch = tokenizer(
examples["text"],
padding=False,
truncation=True,
max_length=seq_len,
return_tensors=None,
)
tokenized_batch["labels"] = tokenized_batch["input_ids"].copy()
return tokenized_batch
with accelerator.main_process_first():
tokenized_dataset = raw_dataset.map(tokenize_function, batched=True, remove_columns=["text"])
def create_packed_sequences(examples):
all_tokens = []
for input_ids in examples["input_ids"]:
all_tokens.extend(input_ids)
num_sequences = len(all_tokens) // (seq_len + 1)
packed_input_ids = []
packed_labels = []
for i in range(num_sequences):
start_idx = i * (seq_len + 1)
end_idx = start_idx + (seq_len + 1)
full_sequence = all_tokens[start_idx:end_idx]
packed_input_ids.append(full_sequence[:-1])
packed_labels.append(full_sequence[1:])
return {"input_ids": packed_input_ids, "labels": packed_labels}
with accelerator.main_process_first():
packed_dataset = tokenized_dataset.map(
create_packed_sequences,
batched=True,
remove_columns=tokenized_dataset.column_names,
batch_size=1000,
)
return packed_dataset.shuffle(seed=42)
def main():
"""
Main function to train the model.
"""
set_seed(42)
args = parse_args()
fsdp2_plugin = FullyShardedDataParallelPlugin(
fsdp_version=2,
cpu_ram_efficient_loading=False, # CPU RAM efficient loading CANNOT work with fp8 torchao
auto_wrap_policy="transformer_based_wrap",
transformer_cls_names_to_wrap=["LlamaDecoderLayer"],
)
fsdp2_plugin.set_mixed_precision(args.precision)
dynamo_plugin = TorchDynamoPlugin(
backend="inductor",
use_regional_compilation=True, # We use regional compilation to compile the model way faster
)
fp8_config = Float8LinearConfig(
enable_fsdp_float8_all_gather=True, # extra saving by gathering parameters in fp8 and upcasting after
force_recompute_fp8_weight_in_bwd=True,
)
kwargs = []
if args.precision == "fp8":
kwargs = [AORecipeKwargs(config=fp8_config)]
accelerator = Accelerator(
fsdp_plugin=fsdp2_plugin,
dynamo_plugin=dynamo_plugin,
kwargs_handlers=kwargs,
log_with=args.log_with,
)
accelerator.init_trackers(
project_name="FSDP2_torchao_fp8",
config={"sequence_length": args.sequence_length, "num_steps": args.num_steps},
)
model = AutoModelForCausalLM.from_config(
AutoConfig.from_pretrained(MODEL_ID, use_cache=False),
torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
if tokenizer.pad_token is None:
tokenizer.pad_token = tokenizer.eos_token
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model, optimizer = accelerator.prepare(model, optimizer)
dataset = get_dataset(accelerator, tokenizer, args.sequence_length)
def collate_fn(batch):
input_ids = torch.tensor([item["input_ids"] for item in batch], dtype=torch.long)
labels = torch.tensor([item["labels"] for item in batch], dtype=torch.long)
return {"input_ids": input_ids, "labels": labels}
# We keep the batch size at 1; since samples are packed to the full sequence length, the sequence length effectively plays the role of the batch size
dataloader = DataLoader(dataset, batch_size=1, collate_fn=collate_fn)
dataloader = accelerator.prepare(dataloader)
model.train()
total_num_steps = min(args.num_steps, len(dataloader))
num_tokens = 0
is_in_warmup = True
model_flops_per_token = get_model_flops_per_token(model, args)
accelerator.print(f"Warming up for {WARMUP_STEPS} steps...")
for step, batch in enumerate(dataloader):
if step == WARMUP_STEPS:
accelerator.print("Warm up completed! Starting training")
start_time = time.perf_counter()
num_tokens = 0
is_in_warmup = False
if step >= total_num_steps:
break
outputs = model(**batch)
loss = outputs.loss
accelerator.backward(loss)
optimizer.step()
optimizer.zero_grad()
steps_from_warmup = step - WARMUP_STEPS
print_msg = f"Step {step}/{total_num_steps}, Loss: {loss.item():.4f}"
metrics = {"loss": loss.item()}
if not is_in_warmup and steps_from_warmup > 0:
num_tokens += batch["input_ids"].shape[1]
total_time = time.perf_counter() - start_time
tps = num_tokens / total_time
tflops = num_tokens * model_flops_per_token / (total_time * 1e12)
# It's rather hard to get a good estimate of MFU when training with FP8, as both FP8 and BF16 tensor cores are used, so we just report TFLOPS (tera floating-point operations per second)
# Given H100 SXM, the theoretical peak flops are ~990 TFLOPS for bf16 and ~1980 TFLOPS for fp8 [https://resources.nvidia.com/en-us-gpu-resources/h100-datasheet-24306]
# This is WITH sparsity, so we divide by 2 to get the answer w/o sparsity
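# i.e. roughly ~495 TFLOPS (bf16) and ~990 TFLOPS (fp8) of dense (non-sparse) peak per device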
print_msg += f", Average steps/s: {steps_from_warmup / total_time:.2f}, TPS per device: {tps:.2f}, TFLOPS per device: {tflops:.2f}"
metrics.update(
{
"steps_per_second": steps_from_warmup / total_time,
"tps_per_device": tps,
"tflops_per_device": tflops,
}
)
if steps_from_warmup % 10 == 0 or step == total_num_steps:
accelerator.print(print_msg)
accelerator.log(metrics)
accelerator.wait_for_everyone()
accelerator.end_training()
accelerator.print("Training completed!")
if __name__ == "__main__":
main()

View File

@ -0,0 +1,192 @@
# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import os
import pathlib
import queue
import time
from concurrent.futures import ThreadPoolExecutor
import av
import fire
import numpy as np
import torch
from huggingface_hub import snapshot_download
from tqdm import tqdm
from transformers import LlavaNextVideoForConditionalGeneration, LlavaNextVideoProcessor
from accelerate import PartialState
START_TIME = time.strftime("%Y%m%d_%H%M%S")
DTYPE_MAP = {"fp32": torch.float32, "fp16": torch.float16, "bf16": torch.bfloat16}
"""
Example:
accelerate launch llava_next_video.py
"""
def save_results(output_queue: queue.Queue, output_dir: pathlib.Path):
count = 0
while True:
try:
item = output_queue.get(timeout=5)
if item is None:
break
prompt, video, generated_text = item
example_file = f"example_{count}"
temp_dir = os.path.join(output_dir, example_file)
metadata = {"prompt": prompt, "video": video, "generated_text": generated_text}
with open(temp_dir, "w") as f:
json.dump(metadata, f, indent=4)
count += 1
except queue.Empty:
continue
def get_batches(processed_videos, batch_size):
num_batches = (len(processed_videos) + batch_size - 1) // batch_size
batches = []
for i in range(num_batches):
start_index = i * batch_size
end_index = min((i + 1) * batch_size, len(processed_videos))
batch = processed_videos[start_index:end_index]
batches.append(batch)
return batches
def read_video_pyav(container, indices):
"""
Decode the video with PyAV decoder.
Args:
container (`av.container.input.InputContainer`): PyAV container.
indices (`List[int]`): List of frame indices to decode.
Returns:
result (np.ndarray): np array of decoded frames of shape (num_frames, height, width, 3).
"""
frames = []
container.seek(0)
start_index = indices[0]
end_index = indices[-1]
for i, frame in enumerate(container.decode(video=0)):
if i > end_index:
break
if i >= start_index and i in indices:
frames.append(frame)
return np.stack([x.to_ndarray(format="rgb24") for x in frames])
def get_video_paths(video_dir):
"""Get paths to all video files in the directory and its subdirectories."""
video_extensions = (".mp4", ".avi", ".mov", ".mkv") # Add more extensions if needed
video_paths = []
for root, _, files in os.walk(video_dir):
for file in files:
if file.lower().endswith(video_extensions):
video_paths.append(os.path.join(root, file))
return video_paths
def process_videos(video_paths, processor, prompt, frames_per_video):
"""Process a batch of videos and prepare them for the model."""
batch_inputs = []
for video_path in video_paths:
try:
with av.open(video_path) as container:
total_frames = container.streams.video[0].frames
indices = np.arange(0, total_frames, total_frames / frames_per_video).astype(int)
clip = read_video_pyav(container, indices)
processed = processor(text=prompt, videos=clip, return_tensors="pt")
batch_inputs.append(
{
"input_ids": processed["input_ids"],
"pixel_values_videos": processed["pixel_values_videos"],
"video": video_path,
}
)
except Exception as e:
print(f"Error processing video {video_path}: {str(e)}")
continue
return batch_inputs
def main(
model_name: str = "llava-hf/LLaVA-NeXT-Video-7B-hf",
save_dir: str = "./evaluation/examples",
prompt: str = "USER: <video>\nGenerate caption ASSISTANT:",
frames_per_video: int = 8,
max_new_tokens: int = 100,
batch_size: int = 4,
dtype: str = "fp16",
num_workers: int = 1,
low_mem: bool = True,
):
# Start up the distributed environment without needing the Accelerator.
distributed_state = PartialState()
processor = LlavaNextVideoProcessor.from_pretrained(model_name)
model = LlavaNextVideoForConditionalGeneration.from_pretrained(
model_name, torch_dtype=DTYPE_MAP[dtype], low_cpu_mem_usage=low_mem, device_map=distributed_state.device
)
if distributed_state.is_main_process:
if not os.path.exists(save_dir):
os.makedirs(save_dir)
print(f"Directory '{save_dir}' created successfully.")
else:
print(f"Directory '{save_dir}' already exists.")
videos_dir = snapshot_download(repo_id="malterei/LLaVA-Video-small-swift", repo_type="dataset")
video_paths = get_video_paths(videos_dir)
processed_videos = process_videos(video_paths, processor, prompt, frames_per_video)
batches = get_batches(processed_videos, batch_size)
output_queue = queue.Queue()
save_thread = ThreadPoolExecutor(max_workers=num_workers)
save_future = save_thread.submit(save_results, output_queue, save_dir)
for _, batch_raw in tqdm(enumerate(batches), total=len(batches)):
try:
with distributed_state.split_between_processes(batch_raw) as batched_inputs:
for batch in batched_inputs:
output = model.generate(
input_ids=batch["input_ids"].to(distributed_state.device),
pixel_values_videos=batch["pixel_values_videos"].to(distributed_state.device, model.dtype),
max_new_tokens=max_new_tokens,
)
generated_text = processor.batch_decode(output, skip_special_tokens=True)
output_queue.put((prompt, batch["video"], generated_text))
finally:
output_queue.put(None)
save_thread.shutdown(wait=True)
save_future.result()
distributed_state.destroy_process_group()
if __name__ == "__main__":
fire.Fire(main)

View File

@ -18,7 +18,7 @@ from diffusers import DiffusionPipeline
from accelerate import PartialState # Can also be Accelerator or AcceleratorState
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16)
distributed_state = PartialState()
pipe.to(distributed_state.device)

View File

@ -17,9 +17,12 @@ import torch
from transformers import AutoModelForMaskedLM
from accelerate import PartialState, prepare_pippy
from accelerate.test_utils import torch_device
from accelerate.utils import set_seed
synchronize_func = getattr(torch, torch_device, torch.cuda).synchronize
# Set the random seed to have reproducible outputs
set_seed(42)
@ -60,25 +63,25 @@ input = torch.randint(
)
# Move the inputs to the first device
input = input.to("cuda:0")
input = input.to(torch_device)
# Take an average of 5 times
# Measure first batch
torch.cuda.synchronize()
synchronize_func()
start_time = time.time()
with torch.no_grad():
output = model(input)
torch.cuda.synchronize()
synchronize_func()
end_time = time.time()
first_batch = end_time - start_time
# Now that CUDA is init, measure after
torch.cuda.synchronize()
# Now that hpu is init, measure after
synchronize_func()
start_time = time.time()
for i in range(5):
with torch.no_grad():
output = model(input)
torch.cuda.synchronize()
synchronize_func()
end_time = time.time()
# The outputs are only on the final process by default

View File

@ -17,9 +17,12 @@ import torch
from transformers import AutoModelForSequenceClassification
from accelerate import PartialState, prepare_pippy
from accelerate.test_utils import torch_device
from accelerate.utils import set_seed
synchronize_func = getattr(torch, torch_device, torch.cuda).synchronize
# Set the random seed to have reproducible outputs
set_seed(42)
@ -59,25 +62,25 @@ input = torch.randint(
)
# Move the inputs to the first device
input = input.to("cuda:0")
input = input.to(torch_device)
# Take an average of 5 times
# Measure first batch
torch.cuda.synchronize()
synchronize_func()
start_time = time.time()
with torch.no_grad():
output = model(input)
torch.cuda.synchronize()
synchronize_func()
end_time = time.time()
first_batch = end_time - start_time
# Now that CUDA is init, measure after
torch.cuda.synchronize()
# Now that device/backend is init, measure after
synchronize_func()
start_time = time.time()
for i in range(5):
with torch.no_grad():
output = model(input)
torch.cuda.synchronize()
synchronize_func()
end_time = time.time()
# The outputs are only on the final process by default

View File

@ -8,7 +8,7 @@
#SBATCH --error=E-%x.%j
######################
### Set enviroment ###
### Set environment ###
######################
source activateEnvironment.sh

View File

@ -11,7 +11,7 @@
#SBATCH --time=01:59:00 # maximum execution time (HH:MM:SS)
######################
### Set enviroment ###
### Set environment ###
######################
source activateEnvironment.sh
export GPUS_PER_NODE=4

View File

@ -11,7 +11,7 @@
#SBATCH --time=01:59:00 # maximum execution time (HH:MM:SS)
######################
### Set enviroment ###
### Set environment ###
######################
source activateEnvironment.sh
export GPUS_PER_NODE=4

View File

@ -11,7 +11,7 @@
#SBATCH --time=01:59:00 # maximum execution time (HH:MM:SS)
######################
### Set enviroment ###
### Set environment ###
######################
source activateEnvironment.sh
export GPUS_PER_NODE=4
@ -25,7 +25,7 @@ head_node_ip=$(scontrol show hostnames $SLURM_JOB_NODELIST | head -n 1)
export ACCELERATE_DIR="${ACCELERATE_DIR:-/accelerate}"
export LAUNCHER="accelerate launch \
--config ${ACCELERATE_DIR}/examples/slurm/fsdp_config.yaml \
--config_file ${ACCELERATE_DIR}/examples/slurm/fsdp_config.yaml \
--num_processes $((SLURM_NNODES * GPUS_PER_NODE)) \
--num_machines $SLURM_NNODES \
--rdzv_backend c10d \

View File

@ -1,6 +1,6 @@
[tool.ruff]
line-length = 119
target-version = "py38"
target-version = "py39"
[tool.ruff.lint]
preview = true

View File

@ -19,10 +19,10 @@ extras = {}
extras["quality"] = [
"black ~= 23.1", # hf-doc-builder has a hidden dependency on `black`
"hf-doc-builder >= 0.3.0",
"ruff ~= 0.6.4",
"ruff ~= 0.11.2",
]
extras["docs"] = []
extras["test_prod"] = ["pytest>=7.2.0,<=8.0.0", "pytest-xdist", "pytest-subtests", "parameterized"]
extras["test_prod"] = ["pytest>=7.2.0,<=8.0.0", "pytest-xdist", "pytest-subtests", "parameterized", "pytest-order"]
extras["test_dev"] = [
"datasets",
"diffusers",
@ -40,7 +40,8 @@ extras["testing"] = extras["test_prod"] + extras["test_dev"]
extras["deepspeed"] = ["deepspeed"]
extras["rich"] = ["rich"]
extras["test_trackers"] = ["wandb", "comet-ml", "tensorboard", "dvclive"]
extras["test_fp8"] = ["torchao"] # note: TE for now needs to be done via pulling down the docker image directly
extras["test_trackers"] = ["wandb", "comet-ml", "tensorboard", "dvclive", "mlflow", "matplotlib"]
extras["dev"] = extras["quality"] + extras["testing"] + extras["rich"]
extras["sagemaker"] = [
@ -49,7 +50,7 @@ extras["sagemaker"] = [
setup(
name="accelerate",
version="1.2.0.dev0",
version="1.8.0.dev0",
description="Accelerate",
long_description=open("README.md", encoding="utf-8").read(),
long_description_content_type="text/markdown",
@ -75,7 +76,7 @@ setup(
"packaging>=20.0",
"psutil",
"pyyaml",
"torch>=1.10.0",
"torch>=2.0.0",
"huggingface_hub>=0.21.0",
"safetensors>=0.4.3",
],
@ -88,7 +89,7 @@ setup(
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
],
)
@ -103,20 +104,15 @@ setup(
# git tag v<VERSION> -m 'Adds tag v<VERSION> for pypi'
# Push the tag and release commit to git: git push --tags origin vXX.xx-release
# 5. Run the following commands in the top-level directory:
# rm -rf dist
# rm -rf build
# python setup.py bdist_wheel
# python setup.py sdist
# make prepare_release
# 6. Upload the package to the pypi test server first:
# twine upload dist/* -r testpypi
# make target=testpypi upload_release
# 7. Check that you can install it in a virtualenv by running:
# pip install accelerate
# pip uninstall accelerate
# pip install -i https://testpypi.python.org/pypi accelerate
# make install_test_release
# accelerate env
# accelerate test
# 8. Upload the final version to actual pypi:
# twine upload dist/* -r pypi
# make target=pypi upload_release
# 9. Add release notes to the tag in github once everything is looking hunky-dory.
# 10. Go back to the main branch and update the version in __init__.py, setup.py to the new version ".dev" and push to
# main.

View File

@ -11,7 +11,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
__version__ = "1.2.0.dev0"
__version__ = "1.8.0.dev0"
from .accelerator import Accelerator
from .big_modeling import (

File diff suppressed because it is too large Load Diff

View File

@ -14,9 +14,10 @@
import logging
import os
import re
from contextlib import contextmanager
from functools import wraps
from typing import Dict, List, Optional, Union
from typing import Optional, Union
import torch
import torch.nn as nn
@ -24,6 +25,7 @@ import torch.nn as nn
from .hooks import (
AlignDevicesHook,
CpuOffload,
LayerwiseCastingHook,
UserCpuOffloadHook,
add_hook_to_module,
attach_align_device_hook,
@ -41,13 +43,14 @@ from .utils import (
is_mlu_available,
is_musa_available,
is_npu_available,
is_torch_version,
is_sdaa_available,
is_xpu_available,
load_checkpoint_in_model,
offload_state_dict,
parse_flag_from_env,
retie_parameters,
)
from .utils.constants import SUPPORTED_PYTORCH_LAYERS_FOR_UPCASTING
from .utils.other import recursive_getattr
@ -114,8 +117,7 @@ def init_on_device(device: torch.device, include_buffers: bool = None):
if include_buffers is None:
include_buffers = parse_flag_from_env("ACCELERATE_INIT_INCLUDE_BUFFERS", False)
# TODO(shingjan): remove the torch version check once older versions are deprecated
if is_torch_version(">=", "2.0") and include_buffers:
if include_buffers:
with device:
yield
return
@ -172,8 +174,8 @@ def cpu_offload(
model: nn.Module,
execution_device: Optional[torch.device] = None,
offload_buffers: bool = False,
state_dict: Optional[Dict[str, torch.Tensor]] = None,
preload_module_classes: Optional[List[str]] = None,
state_dict: Optional[dict[str, torch.Tensor]] = None,
preload_module_classes: Optional[list[str]] = None,
):
"""
Activates full CPU offload for a model. As a result, all parameters of the model will be offloaded and only one
@ -263,7 +265,7 @@ def disk_offload(
offload_dir: Union[str, os.PathLike],
execution_device: Optional[torch.device] = None,
offload_buffers: bool = False,
preload_module_classes: Optional[List[str]] = None,
preload_module_classes: Optional[list[str]] = None,
):
"""
Activates full disk offload for a model. As a result, all parameters of the model will be offloaded as
@ -306,14 +308,14 @@ def disk_offload(
def dispatch_model(
model: nn.Module,
device_map: Dict[str, Union[str, int, torch.device]],
device_map: dict[str, Union[str, int, torch.device]],
main_device: Optional[torch.device] = None,
state_dict: Optional[Dict[str, torch.Tensor]] = None,
state_dict: Optional[dict[str, torch.Tensor]] = None,
offload_dir: Optional[Union[str, os.PathLike]] = None,
offload_index: Optional[Dict[str, str]] = None,
offload_index: Optional[dict[str, str]] = None,
offload_buffers: bool = False,
skip_keys: Optional[Union[str, List[str]]] = None,
preload_module_classes: Optional[List[str]] = None,
skip_keys: Optional[Union[str, list[str]]] = None,
preload_module_classes: Optional[list[str]] = None,
force_hooks: bool = False,
):
"""
@ -468,6 +470,8 @@ def dispatch_model(
model.npu = add_warning(model.npu, model)
elif is_mlu_available():
model.mlu = add_warning(model.mlu, model)
elif is_sdaa_available():
model.sdaa = add_warning(model.sdaa, model)
elif is_musa_available():
model.musa = add_warning(model.musa, model)
elif is_xpu_available():
@ -490,10 +494,10 @@ def dispatch_model(
device = f"npu:{device}"
elif is_mlu_available() and isinstance(device, int):
device = f"mlu:{device}"
elif is_sdaa_available() and isinstance(device, int):
device = f"sdaa:{device}"
elif is_musa_available() and isinstance(device, int):
device = f"musa:{device}"
elif is_xpu_available() and isinstance(device, int):
device = f"xpu:{device}"
if device != "disk":
model.to(device)
else:
@ -508,17 +512,19 @@ def dispatch_model(
def load_checkpoint_and_dispatch(
model: nn.Module,
checkpoint: Union[str, os.PathLike],
device_map: Optional[Union[str, Dict[str, Union[int, str, torch.device]]]] = None,
max_memory: Optional[Dict[Union[int, str], Union[int, str]]] = None,
no_split_module_classes: Optional[List[str]] = None,
device_map: Optional[Union[str, dict[str, Union[int, str, torch.device]]]] = None,
max_memory: Optional[dict[Union[int, str], Union[int, str]]] = None,
no_split_module_classes: Optional[list[str]] = None,
offload_folder: Optional[Union[str, os.PathLike]] = None,
offload_buffers: bool = False,
dtype: Optional[Union[str, torch.dtype]] = None,
offload_state_dict: Optional[bool] = None,
skip_keys: Optional[Union[str, List[str]]] = None,
preload_module_classes: Optional[List[str]] = None,
skip_keys: Optional[Union[str, list[str]]] = None,
preload_module_classes: Optional[list[str]] = None,
force_hooks: bool = False,
strict: bool = False,
full_state_dict: bool = True,
broadcast_from_rank0: bool = False,
):
"""
Loads a (potentially sharded) checkpoint inside a model, potentially sending weights to a given device as they are
@ -568,6 +574,12 @@ def load_checkpoint_and_dispatch(
strict (`bool`, *optional*, defaults to `False`):
Whether to strictly enforce that the keys in the checkpoint state_dict match the keys of the model's
state_dict.
full_state_dict (`bool`, *optional*, defaults to `True`): if this is set to `True`, all the tensors in the
loaded state_dict will be gathered. No ShardedTensor or DTensor will be in the loaded state_dict.
broadcast_from_rank0 (`bool`, *optional*, defaults to `False`): when the option is `True`, a distributed
`ProcessGroup` must be initialized. rank0 should receive a full state_dict and will broadcast the tensors
in the state_dict one by one to other ranks. Other ranks will receive the tensors and shard (if applicable)
according to the local shards in the model.
Example:
@ -593,8 +605,7 @@ def load_checkpoint_and_dispatch(
"""
if isinstance(device_map, str) and device_map not in ["auto", "balanced", "balanced_low_0", "sequential"]:
raise ValueError(
"If passing a string for `device_map`, please choose 'auto', 'balanced', 'balanced_low_0' or "
"'sequential'."
"If passing a string for `device_map`, please choose 'auto', 'balanced', 'balanced_low_0' or 'sequential'."
)
if isinstance(device_map, str):
if device_map != "sequential":
@ -623,6 +634,8 @@ def load_checkpoint_and_dispatch(
offload_state_dict=offload_state_dict,
offload_buffers=offload_buffers,
strict=strict,
full_state_dict=full_state_dict,
broadcast_from_rank0=broadcast_from_rank0,
)
if device_map is None:
return model
@ -635,3 +648,102 @@ def load_checkpoint_and_dispatch(
preload_module_classes=preload_module_classes,
force_hooks=force_hooks,
)
def attach_layerwise_casting_hooks(
module: torch.nn.Module,
storage_dtype: torch.dtype,
compute_dtype: torch.dtype,
skip_modules_pattern: Union[str, tuple[str, ...]] = None,
skip_modules_classes: Optional[tuple[type[torch.nn.Module], ...]] = None,
non_blocking: bool = False,
) -> None:
r"""
Applies layerwise casting to a given module. The module expected here is a PyTorch `nn.Module`. This is helpful for
reducing memory requirements when one doesn't want to fully quantize a model. Model params can be kept in, say,
`torch.float8_e4m3fn` and upcast to a higher precision like `torch.bfloat16` during the forward pass, then downcast
back to `torch.float8_e4m3fn` to realize memory savings.
Args:
module (`torch.nn.Module`):
The module whose leaf modules will be cast to a high precision dtype for computation, and to a low
precision dtype for storage.
storage_dtype (`torch.dtype`):
The dtype to cast the module to before/after the forward pass for storage.
compute_dtype (`torch.dtype`):
The dtype to cast the module to during the forward pass for computation.
skip_modules_pattern (`tuple[str, ...]`, defaults to `None`):
A list of patterns to match the names of the modules to skip during the layerwise casting process. If set
to `None` alongside `skip_modules_classes` being `None`, the layerwise casting is applied directly to the
module instead of its internal submodules.
skip_modules_classes (`tuple[type[torch.nn.Module], ...]`, defaults to `None`):
A list of module classes to skip during the layerwise casting process.
non_blocking (`bool`, defaults to `False`):
If `True`, the weight casting operations are non-blocking.
Example:
```python
>>> from accelerate.hooks import attach_layerwise_casting_hooks
>>> from transformers import AutoModelForCausalLM
>>> import torch
>>> # Model
>>> checkpoint = "EleutherAI/gpt-j-6B"
>>> model = AutoModelForCausalLM.from_pretrained(checkpoint)
>>> # Attach hooks and perform inference
>>> attach_layerwise_casting_hooks(model, storage_dtype=torch.float8_e4m3fn, compute_dtype=torch.bfloat16)
>>> with torch.no_grad():
... model(...)
```
Users can also pass modules they want to avoid from getting downcasted.
```py
>>> attach_layerwise_casting_hooks(
... model, storage_dtype=torch.float8_e4m3fn, compute_dtype=torch.bfloat16, skip_modules_pattern=["norm"]
... )
```
"""
_attach_layerwise_casting_hooks(
module, storage_dtype, compute_dtype, skip_modules_pattern, skip_modules_classes, non_blocking
)
def _attach_layerwise_casting_hooks(
module: torch.nn.Module,
storage_dtype: torch.dtype,
compute_dtype: torch.dtype,
skip_modules_pattern: Union[str, tuple[str, ...]] = None,
skip_modules_classes: Optional[tuple[type[torch.nn.Module], ...]] = None,
non_blocking: bool = False,
_prefix: str = "",
):
should_skip = (skip_modules_classes is not None and isinstance(module, skip_modules_classes)) or (
skip_modules_pattern is not None and any(re.search(pattern, _prefix) for pattern in skip_modules_pattern)
)
if should_skip:
logger.debug(f'Skipping layerwise casting for layer "{_prefix}"')
return
if isinstance(module, SUPPORTED_PYTORCH_LAYERS_FOR_UPCASTING):
logger.debug(f'Applying layerwise casting to layer "{_prefix}"')
add_hook_to_module(
module,
LayerwiseCastingHook(storage_dtype=storage_dtype, compute_dtype=compute_dtype, non_blocking=non_blocking),
append=True,
)
return
for name, submodule in module.named_children():
layer_name = f"{_prefix}.{name}" if _prefix else name
_attach_layerwise_casting_hooks(
submodule,
storage_dtype,
compute_dtype,
skip_modules_pattern,
skip_modules_classes,
non_blocking,
_prefix=layer_name,
)

View File

@ -14,12 +14,10 @@
import random
from pathlib import Path
from typing import List
import numpy as np
import torch
from safetensors.torch import load_model
from torch.cuda.amp import GradScaler
from .utils import (
MODEL_NAME,
@ -32,7 +30,12 @@ from .utils import (
SCHEDULER_NAME,
WEIGHTS_NAME,
get_pretty_name,
is_cuda_available,
is_hpu_available,
is_mlu_available,
is_musa_available,
is_sdaa_available,
is_torch_version,
is_torch_xla_available,
is_xpu_available,
load,
@ -40,6 +43,11 @@ from .utils import (
)
if is_torch_version(">=", "2.4.0"):
from torch.amp import GradScaler
else:
from torch.cuda.amp import GradScaler
if is_torch_xla_available():
import torch_xla.core.xla_model as xm
@ -52,7 +60,7 @@ logger = get_logger(__name__)
def save_accelerator_state(
output_dir: str,
model_states: List[dict],
model_states: list[dict],
optimizers: list,
schedulers: list,
dataloaders: list,
@ -152,7 +160,13 @@ def save_accelerator_state(
states["torch_xpu_manual_seed"] = torch.xpu.get_rng_state_all()
if is_mlu_available():
states["torch_mlu_manual_seed"] = torch.mlu.get_rng_state_all()
else:
elif is_sdaa_available():
states["torch_sdaa_manual_seed"] = torch.sdaa.get_rng_state_all()
elif is_musa_available():
states["torch_musa_manual_seed"] = torch.musa.get_rng_state_all()
if is_hpu_available():
states["torch_hpu_manual_seed"] = torch.hpu.get_rng_state_all()
if is_cuda_available():
states["torch_cuda_manual_seed"] = torch.cuda.get_rng_state_all()
if is_torch_xla_available():
states["xm_seed"] = xm.get_rng_state()
@ -171,6 +185,7 @@ def load_accelerator_state(
process_index,
scaler=None,
map_location=None,
load_kwargs=None,
**load_model_func_kwargs,
):
"""
@ -191,6 +206,8 @@ def load_accelerator_state(
An optional *GradScaler* instance to load
map_location (`str`, *optional*):
What device to load the optimizer state onto. Should be one of either "cpu" or "on_device".
load_kwargs (`dict`, *optional*):
Additional arguments that can be passed to the `load` function.
load_model_func_kwargs (`dict`, *optional*):
Additional arguments that can be passed to the model's `load_state_dict` method.
@ -208,6 +225,9 @@ def load_accelerator_state(
elif map_location == "on_device":
map_location = PartialState().device
if load_kwargs is None:
load_kwargs = {}
input_dir = Path(input_dir)
# Model states
for i, model in enumerate(models):
@ -226,7 +246,7 @@ def load_accelerator_state(
for i, opt in enumerate(optimizers):
optimizer_name = f"{OPTIMIZER_NAME}.bin" if i == 0 else f"{OPTIMIZER_NAME}_{i}.bin"
input_optimizer_file = input_dir.joinpath(optimizer_name)
optimizer_state = load(input_optimizer_file, map_location=map_location)
optimizer_state = load(input_optimizer_file, map_location=map_location, **load_kwargs)
optimizers[i].load_state_dict(optimizer_state)
logger.info("All optimizer states loaded successfully")
@ -234,7 +254,7 @@ def load_accelerator_state(
for i, scheduler in enumerate(schedulers):
scheduler_name = f"{SCHEDULER_NAME}.bin" if i == 0 else f"{SCHEDULER_NAME}_{i}.bin"
input_scheduler_file = input_dir.joinpath(scheduler_name)
scheduler_state = load(input_scheduler_file)
scheduler_state = load(input_scheduler_file, **load_kwargs)
scheduler.load_state_dict(scheduler_state)
logger.info("All scheduler states loaded successfully")
@ -252,7 +272,7 @@ def load_accelerator_state(
dataloader_state_dict_name = "dl_state_dict.bin" if i == 0 else f"dl_state_dict_{i}.bin"
input_dataloader_state_dict_file = input_dir.joinpath(dataloader_state_dict_name)
if input_dataloader_state_dict_file.exists():
state_dict = load(input_dataloader_state_dict_file)
state_dict = load(input_dataloader_state_dict_file, **load_kwargs)
dataloader.load_state_dict(state_dict)
logger.info("All dataloader sampler states loaded successfully")
@ -275,6 +295,10 @@ def load_accelerator_state(
torch.xpu.set_rng_state_all(states["torch_xpu_manual_seed"])
if is_mlu_available():
torch.mlu.set_rng_state_all(states["torch_mlu_manual_seed"])
elif is_sdaa_available():
torch.sdaa.set_rng_state_all(states["torch_sdaa_manual_seed"])
elif is_musa_available():
torch.musa.set_rng_state_all(states["torch_musa_manual_seed"])
else:
torch.cuda.set_rng_state_all(states["torch_cuda_manual_seed"])
if is_torch_xla_available():

View File

@ -20,6 +20,7 @@ from accelerate.commands.estimate import estimate_command_parser
from accelerate.commands.launch import launch_command_parser
from accelerate.commands.merge import merge_command_parser
from accelerate.commands.test import test_command_parser
from accelerate.commands.to_fsdp2 import to_fsdp2_command_parser
from accelerate.commands.tpu import tpu_command_parser
from accelerate.commands.utils import CustomArgumentParser
@ -36,6 +37,7 @@ def main():
merge_command_parser(subparsers=subparsers)
tpu_command_parser(subparsers=subparsers)
test_command_parser(subparsers=subparsers)
to_fsdp2_command_parser(subparsers=subparsers)
# Let's go
args = parser.parse_args()

View File

@ -21,17 +21,20 @@ from ...utils import (
DistributedType,
is_deepspeed_available,
is_fp8_available,
is_hpu_available,
is_mlu_available,
is_mps_available,
is_msamp_available,
is_musa_available,
is_npu_available,
is_sdaa_available,
is_transformer_engine_available,
is_transformers_available,
is_xpu_available,
)
from ...utils.constants import (
DEEPSPEED_MULTINODE_LAUNCHERS,
FSDP2_STATE_DICT_TYPE,
FSDP_AUTO_WRAP_POLICY,
FSDP_BACKWARD_PREFETCH,
FSDP_SHARDING_STRATEGY,
@ -58,9 +61,11 @@ def get_cluster_input():
"No distributed training",
"multi-CPU",
"multi-XPU",
"multi-HPU",
"multi-GPU",
"multi-NPU",
"multi-MLU",
"multi-SDAA",
"multi-MUSA",
"TPU",
],
@ -80,10 +85,12 @@ def get_cluster_input():
if distributed_type in [
DistributedType.MULTI_GPU,
DistributedType.MULTI_MLU,
DistributedType.MULTI_SDAA,
DistributedType.MULTI_MUSA,
DistributedType.MULTI_NPU,
DistributedType.MULTI_XPU,
DistributedType.MULTI_CPU,
DistributedType.MULTI_HPU,
]:
num_machines = _ask_field(
"How many different machines will you use (use more than 1 for multi-node training)? [1]: ",
@ -134,13 +141,15 @@ def get_cluster_input():
ipex_config = {}
mpirun_config = {}
if use_cpu:
if use_cpu or is_xpu_available():
ipex_config["ipex"] = _ask_field(
"Do you want to use Intel PyTorch Extension (IPEX) to speed up training on CPU? [yes/NO]:",
"Do you want to use Intel PyTorch Extension (IPEX) to speed up training on CPU/XPU? [yes/NO]:",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
if use_cpu:
if distributed_type == DistributedType.MULTI_CPU:
use_mpirun = _ask_field(
"Do you want accelerate to launch mpirun? [yes/NO]: ",
@ -156,24 +165,6 @@ def get_cluster_input():
)
mpirun_config["mpirun_hostfile"] = os.path.expanduser(mpirun_hostfile.strip())
mpirun_config["mpirun_ccl"] = _ask_field("Enter the number of oneCCL worker threads [1]: ", default=1)
if (
not use_cpu
and is_xpu_available()
and distributed_type
not in [
DistributedType.MULTI_GPU,
DistributedType.MULTI_NPU,
DistributedType.MULTI_MLU,
DistributedType.XLA,
DistributedType.MULTI_MUSA,
]
):
ipex_config["use_xpu"] = _ask_field(
"Do you want to use XPU plugin to speed up training on XPU? [yes/NO]:",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
dynamo_config = {}
use_dynamo = _ask_field(
@ -216,6 +207,12 @@ def get_cluster_input():
default=False,
error_message="Please enter yes or no.",
)
dynamo_config[prefix + "use_regional_compilation"] = _ask_field(
"Do you want to enable regional compilation? [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
use_mps = not use_cpu and is_mps_available()
deepspeed_config = {}
@ -224,8 +221,10 @@ def get_cluster_input():
in [
DistributedType.MULTI_GPU,
DistributedType.MULTI_XPU,
DistributedType.MULTI_HPU,
DistributedType.MULTI_NPU,
DistributedType.MULTI_MLU,
DistributedType.MULTI_SDAA,
DistributedType.MULTI_MUSA,
DistributedType.NO,
]
@ -239,9 +238,9 @@ def get_cluster_input():
)
if use_deepspeed:
distributed_type = DistributedType.DEEPSPEED
assert (
is_deepspeed_available()
), "DeepSpeed is not installed => run `pip3 install deepspeed` or build it from source"
assert is_deepspeed_available(), (
"DeepSpeed is not installed => run `pip3 install deepspeed` or build it from source"
)
if distributed_type == DistributedType.DEEPSPEED:
use_deepspeed_config = _ask_field(
@ -376,12 +375,15 @@ def get_cluster_input():
)
fsdp_config = {}
if distributed_type in [
DistributedType.MULTI_GPU,
DistributedType.MULTI_NPU,
DistributedType.MULTI_MLU,
DistributedType.MULTI_SDAA,
DistributedType.MULTI_MUSA,
DistributedType.MULTI_XPU,
DistributedType.MULTI_HPU,
]:
use_fsdp = _ask_field(
"Do you want to use FullyShardedDataParallel? [yes/NO]: ",
@ -392,18 +394,36 @@ def get_cluster_input():
if use_fsdp:
distributed_type = DistributedType.FSDP
if distributed_type == DistributedType.FSDP:
sharding_strategy_query = "What should be your sharding strategy?"
fsdp_config["fsdp_sharding_strategy"] = _ask_options(
sharding_strategy_query,
FSDP_SHARDING_STRATEGY,
lambda x: FSDP_SHARDING_STRATEGY[int(x)],
fsdp_config["fsdp_version"] = _ask_options(
"What should be your FSDP version? [2]: ",
[1, 2],
lambda x: int(x) + 1,
default=1,
)
fsdp_version = fsdp_config["fsdp_version"] # extract to a variable to simplify usage later
if fsdp_version == 1:
sharding_strategy_query = "What should be your sharding strategy?"
fsdp_config["fsdp_reshard_after_forward"] = _ask_options(
sharding_strategy_query,
FSDP_SHARDING_STRATEGY,
lambda x: FSDP_SHARDING_STRATEGY[int(x)],
)
else:
fsdp_config["fsdp_reshard_after_forward"] = _ask_field(
"Do you want to enable resharding after forward? [YES/no]: ",
_convert_yes_no_to_bool,
default=True,
error_message="Please enter yes or no.",
)
fsdp_config["fsdp_offload_params"] = _ask_field(
"Do you want to offload parameters and gradients to CPU? [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
fsdp_wrap_query = "What should be your auto wrap policy?"
fsdp_config["fsdp_auto_wrap_policy"] = _ask_options(
fsdp_wrap_query,
@ -429,46 +449,55 @@ def get_cluster_input():
int,
default=100000000,
)
fsdp_backward_prefetch_query = "What should be your FSDP's backward prefetch policy?"
fsdp_config["fsdp_backward_prefetch"] = _ask_options(
fsdp_backward_prefetch_query,
FSDP_BACKWARD_PREFETCH,
lambda x: FSDP_BACKWARD_PREFETCH[int(x)],
)
# Removed in FSDP2, ask for user input for FSDP1
if fsdp_version == 1:
fsdp_backward_prefetch_query = "What should be your FSDP's backward prefetch policy?"
fsdp_config["fsdp_backward_prefetch"] = _ask_options(
fsdp_backward_prefetch_query,
FSDP_BACKWARD_PREFETCH,
lambda x: FSDP_BACKWARD_PREFETCH[int(x)],
)
fsdp_state_dict_type_query = "What should be your FSDP's state dict type?"
fsdp_config["fsdp_state_dict_type"] = _ask_options(
fsdp_state_dict_type_query,
FSDP_STATE_DICT_TYPE,
lambda x: FSDP_STATE_DICT_TYPE[int(x)],
default=2,
)
fsdp_config["fsdp_forward_prefetch"] = _ask_field(
"Do you want to enable FSDP's forward prefetch policy? [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
fsdp_config["fsdp_use_orig_params"] = _ask_field(
"Do you want to enable FSDP's `use_orig_params` feature? [YES/no]: ",
_convert_yes_no_to_bool,
default=True,
error_message="Please enter yes or no.",
FSDP_STATE_DICT_TYPE if fsdp_version == 1 else FSDP2_STATE_DICT_TYPE,
lambda x: FSDP_STATE_DICT_TYPE[int(x)] if fsdp_version == 1 else FSDP2_STATE_DICT_TYPE[int(x)],
default=0,
)
# Not implemented in FSDP2, ask for user input for FSDP1
if fsdp_version == 1:
fsdp_config["fsdp_forward_prefetch"] = _ask_field(
"Do you want to enable FSDP's forward prefetch policy? [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
# Obsolete in FSDP2, ask for user input for FSDP1
if fsdp_version == 1:
fsdp_config["fsdp_use_orig_params"] = _ask_field(
"Do you want to enable FSDP's `use_orig_params` feature? [YES/no]: ",
_convert_yes_no_to_bool,
default=True,
error_message="Please enter yes or no.",
)
fsdp_config["fsdp_cpu_ram_efficient_loading"] = _ask_field(
"Do you want to enable CPU RAM efficient model loading? Only applicable for 🤗 Transformers models. [YES/no]: ",
_convert_yes_no_to_bool,
default=True,
error_message="Please enter yes or no.",
)
if fsdp_config["fsdp_cpu_ram_efficient_loading"]:
fsdp_config["fsdp_sync_module_states"] = True
else:
fsdp_config["fsdp_sync_module_states"] = _ask_field(
"Do you want each individually wrapped FSDP unit to broadcast module parameters from rank 0 at the start? [YES/no]: ",
_convert_yes_no_to_bool,
default=True,
error_message="Please enter yes or no.",
)
# Obsolete in FSDP2, ask for user input for FSDP1
if fsdp_version == 1:
if fsdp_config["fsdp_cpu_ram_efficient_loading"]:
fsdp_config["fsdp_sync_module_states"] = True
else:
fsdp_config["fsdp_sync_module_states"] = _ask_field(
"Do you want each individually wrapped FSDP unit to broadcast module parameters from rank 0 at the start? [YES/no]: ",
_convert_yes_no_to_bool,
default=True,
error_message="Please enter yes or no.",
)
fsdp_config["fsdp_activation_checkpointing"] = _ask_field(
"Do you want to enable FSDP activation checkpointing? [yes/NO]: ",
_convert_yes_no_to_bool,
@ -550,8 +579,10 @@ def get_cluster_input():
if distributed_type in [
DistributedType.MULTI_CPU,
DistributedType.MULTI_XPU,
DistributedType.MULTI_HPU,
DistributedType.MULTI_GPU,
DistributedType.MULTI_MLU,
DistributedType.MULTI_SDAA,
DistributedType.MULTI_MUSA,
DistributedType.MULTI_NPU,
DistributedType.XLA,
@ -589,9 +620,11 @@ def get_cluster_input():
in [
DistributedType.MULTI_GPU,
DistributedType.MULTI_MLU,
DistributedType.MULTI_SDAA,
DistributedType.MULTI_MUSA,
DistributedType.MULTI_NPU,
DistributedType.MULTI_XPU,
DistributedType.MULTI_HPU,
DistributedType.NO,
]
and not use_cpu
@ -601,14 +634,18 @@ def get_cluster_input():
machine_type = "NPU(s)"
elif is_mlu_available():
machine_type = "MLU(s)"
elif is_sdaa_available():
machine_type = "SDAA(s)"
elif is_musa_available():
machine_type = "MUSA(s)"
elif is_xpu_available():
machine_type = "XPU(s)"
elif is_hpu_available():
machine_type = "HPU(s)"
else:
machine_type = "GPU(s)"
gpu_ids = _ask_field(
f"What {machine_type} (by id) should be used for training on this machine as a comma-seperated list? [all]:",
f"What {machine_type} (by id) should be used for training on this machine as a comma-separated list? [all]:",
default="all",
)
@ -672,7 +709,7 @@ def get_cluster_input():
)
tpu_command_file = os.path.abspath(tpu_command_file)
else:
print("Please enter each command seperately you wish to run on startup in each pod.")
print("Please enter each command separately you wish to run on startup in each pod.")
tpu_commands = []
another_command = True
while another_command:
@ -690,11 +727,11 @@ def get_cluster_input():
error_message="Please enter yes or no.",
)
tpu_vm = _ask_field(
"If not using an instance group, what are the names of the Compute VM instances to be used, seperated by a comma: ",
"If not using an instance group, what are the names of the Compute VM instances to be used, separated by a comma: ",
default="",
).split(",")
tpu_env = _ask_field(
"What environment variables do you wish to set in each pod, seperated by a comma: ",
"What environment variables do you wish to set in each pod, separated by a comma: ",
default="",
).split(",")
@ -774,6 +811,8 @@ def get_cluster_input():
default=False,
)
fp8_config["override_linear_precision"] = (fprop, dgrad, wgrad)
else:
fp8_config["override_linear_precision"] = (False, False, False)
elif fp8_config["backend"] == "MSAMP":
if not is_msamp_available():

View File

@ -18,7 +18,7 @@ import json
import os
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional, Union
from typing import Optional, Union
import yaml
@ -209,9 +209,9 @@ class ClusterConfig(BaseConfig):
tpu_use_cluster: bool = False
tpu_use_sudo: bool = False
command_file: str = None
commands: List[str] = None
tpu_vm: List[str] = None
tpu_env: List[str] = None
commands: list[str] = None
tpu_vm: list[str] = None
tpu_env: list[str] = None
# args for dynamo
dynamo_config: dict = None
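The switch from `List[str]` to `list[str]` above relies on Python 3.9+ built-in generics; a minimal, self-contained illustration (the class name here is made up):

```python
# Minimal illustration of PEP 585 built-in generics; `PodCommands` is a hypothetical example.
from dataclasses import dataclass, field

@dataclass
class PodCommands:
    commands: list[str] = field(default_factory=list)  # no `from typing import List` needed
    tpu_env: list[str] = field(default_factory=list)
```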

View File

@@ -72,7 +72,18 @@ def _convert_compute_environment(value):
def _convert_distributed_mode(value):
value = int(value)
return DistributedType(
["NO", "MULTI_CPU", "MULTI_XPU", "MULTI_GPU", "MULTI_NPU", "MULTI_MLU", "MULTI_MUSA", "XLA"][value]
[
"NO",
"MULTI_CPU",
"MULTI_XPU",
"MULTI_HPU",
"MULTI_GPU",
"MULTI_NPU",
"MULTI_MLU",
"MULTI_SDAA",
"MULTI_MUSA",
"XLA",
][value]
)
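Because the option list above gained `MULTI_HPU` and `MULTI_SDAA`, the integer answers from the interactive prompt now resolve to different members. A small sanity-check sketch, assuming this Accelerate version exposes those `DistributedType` members (as the diff implies):

```python
# Sketch: with the new ordering, index 3 maps to MULTI_HPU and index 7 to MULTI_SDAA.
from accelerate.utils import DistributedType

order = ["NO", "MULTI_CPU", "MULTI_XPU", "MULTI_HPU", "MULTI_GPU",
         "MULTI_NPU", "MULTI_MLU", "MULTI_SDAA", "MULTI_MUSA", "XLA"]
assert DistributedType(order[3]) is DistributedType.MULTI_HPU
assert DistributedType(order[7]) is DistributedType.MULTI_SDAA
```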

View File

@@ -18,7 +18,14 @@ from pathlib import Path
import torch
from ...utils import is_mlu_available, is_musa_available, is_npu_available, is_xpu_available
from ...utils import (
is_hpu_available,
is_mlu_available,
is_musa_available,
is_npu_available,
is_sdaa_available,
is_xpu_available,
)
from .config_args import ClusterConfig, default_json_config_file
from .config_utils import SubcommandHelpFormatter
@@ -26,7 +33,7 @@ from .config_utils import SubcommandHelpFormatter
description = "Create a default config file for Accelerate with only a few flags set."
def write_basic_config(mixed_precision="no", save_location: str = default_json_config_file, use_xpu: bool = False):
def write_basic_config(mixed_precision="no", save_location: str = default_json_config_file):
"""
Creates and saves a basic cluster config to be used on a local machine with potentially multiple GPUs. Will also
set CPU if it is a CPU-only machine.
@@ -36,10 +43,8 @@ def write_basic_config(mixed_precision="no", save_location: str = default_json_config_c
Mixed Precision to use. Should be one of "no", "fp16", or "bf16"
save_location (`str`, *optional*, defaults to `default_json_config_file`):
Optional custom save location. Should be passed to `--config_file` when using `accelerate launch`. Default
location is inside the huggingface cache folder (`~/.cache/huggingface`) but can be overriden by setting
location is inside the huggingface cache folder (`~/.cache/huggingface`) but can be overridden by setting
the `HF_HOME` environmental variable, followed by `accelerate/default_config.yaml`.
use_xpu (`bool`, *optional*, defaults to `False`):
Whether to use XPU if available.
"""
path = Path(save_location)
path.parent.mkdir(parents=True, exist_ok=True)
@@ -65,6 +70,14 @@ def write_basic_config(mixed_precision="no", save_location: str = default_json_config_c
config["distributed_type"] = "MULTI_MLU"
else:
config["distributed_type"] = "NO"
if is_sdaa_available():
num_sdaas = torch.sdaa.device_count()
config["num_processes"] = num_sdaas
config["use_cpu"] = False
if num_sdaas > 1:
config["distributed_type"] = "MULTI_SDAA"
else:
config["distributed_type"] = "NO"
elif is_musa_available():
num_musas = torch.musa.device_count()
config["num_processes"] = num_musas
@@ -73,6 +86,14 @@ def write_basic_config(mixed_precision="no", save_location: str = default_json_config_c
config["distributed_type"] = "MULTI_MUSA"
else:
config["distributed_type"] = "NO"
elif is_hpu_available():
num_hpus = torch.hpu.device_count()
config["num_processes"] = num_hpus
config["use_cpu"] = False
if num_hpus > 1:
config["distributed_type"] = "MULTI_HPU"
else:
config["distributed_type"] = "NO"
elif torch.cuda.is_available():
num_gpus = torch.cuda.device_count()
config["num_processes"] = num_gpus
@@ -81,7 +102,7 @@ def write_basic_config(mixed_precision="no", save_location: str = default_json_config_c
config["distributed_type"] = "MULTI_GPU"
else:
config["distributed_type"] = "NO"
elif is_xpu_available() and use_xpu:
elif is_xpu_available():
num_xpus = torch.xpu.device_count()
config["num_processes"] = num_xpus
config["use_cpu"] = False

View File

@@ -212,6 +212,13 @@ def get_sagemaker_input():
default=False,
error_message="Please enter yes or no.",
)
dynamo_config[prefix + "use_regional_compilation"] = _ask_field(
"Do you want to enable regional compilation? [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
ec2_instance_query = "Which EC2 instance type you want to use for your training?"
if distributed_type != SageMakerDistributedType.NO:
ec2_instance_type = _ask_options(
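For context on the new regional-compilation question, a hedged sketch of the programmatic counterpart; the `use_regional_compilation` keyword on `TorchDynamoPlugin` is assumed to mirror the new CLI/config option rather than confirmed from this diff:

```python
# Hedged sketch: the keyword name is assumed to mirror --dynamo_use_regional_compilation.
from accelerate import Accelerator
from accelerate.utils import TorchDynamoPlugin

dynamo_plugin = TorchDynamoPlugin(backend="inductor", use_regional_compilation=True)
accelerator = Accelerator(dynamo_plugin=dynamo_plugin)
```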

View File

@@ -26,7 +26,7 @@ import torch
from accelerate import __version__ as version
from accelerate.commands.config import default_config_file, load_config_from_file
from ..utils import is_mlu_available, is_musa_available, is_npu_available, is_xpu_available
from ..utils import is_mlu_available, is_musa_available, is_npu_available, is_sdaa_available, is_xpu_available
def env_command_parser(subparsers=None):
@@ -49,9 +49,24 @@ def env_command(args):
pt_cuda_available = torch.cuda.is_available()
pt_xpu_available = is_xpu_available()
pt_mlu_available = is_mlu_available()
pt_sdaa_available = is_sdaa_available()
pt_musa_available = is_musa_available()
pt_npu_available = is_npu_available()
accelerator = "N/A"
if pt_cuda_available:
accelerator = "CUDA"
elif pt_xpu_available:
accelerator = "XPU"
elif pt_mlu_available:
accelerator = "MLU"
elif pt_sdaa_available:
accelerator = "SDAA"
elif pt_musa_available:
accelerator = "MUSA"
elif pt_npu_available:
accelerator = "NPU"
accelerate_config = "Not found"
# Get the default from the config file.
if args.config_file is not None or os.path.isfile(default_config_file):
@@ -72,18 +87,21 @@ def env_command(args):
"`accelerate` bash location": bash_location,
"Python version": platform.python_version(),
"Numpy version": np.__version__,
"PyTorch version (GPU?)": f"{pt_version} ({pt_cuda_available})",
"PyTorch XPU available": str(pt_xpu_available),
"PyTorch NPU available": str(pt_npu_available),
"PyTorch MLU available": str(pt_mlu_available),
"PyTorch MUSA available": str(pt_musa_available),
"System RAM": f"{psutil.virtual_memory().total / 1024 ** 3:.2f} GB",
"PyTorch version": f"{pt_version}",
"PyTorch accelerator": accelerator,
"System RAM": f"{psutil.virtual_memory().total / 1024**3:.2f} GB",
}
if pt_cuda_available:
info["GPU type"] = torch.cuda.get_device_name()
if pt_mlu_available:
elif pt_xpu_available:
info["XPU type"] = torch.xpu.get_device_name()
elif pt_mlu_available:
info["MLU type"] = torch.mlu.get_device_name()
if pt_npu_available:
elif pt_sdaa_available:
info["SDAA type"] = torch.sdaa.get_device_name()
elif pt_musa_available:
info["MUSA type"] = torch.musa.get_device_name()
elif pt_npu_available:
info["CANN version"] = torch.version.cann
print("\nCopy-and-paste the text below in your GitHub issue\n")

View File

@@ -13,6 +13,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import torch
from huggingface_hub import model_info
from huggingface_hub.utils import GatedRepoError, RepositoryNotFoundError
@@ -62,7 +63,8 @@ def check_has_model(error):
def create_empty_model(model_name: str, library_name: str, trust_remote_code: bool = False, access_token: str = None):
"""
Creates an empty model from its parent library on the `Hub` to calculate the overall memory consumption.
Creates an empty model in full precision from its parent library on the `Hub` to calculate the overall memory
consumption.
Args:
model_name (`str`):
@@ -120,7 +122,8 @@ def create_empty_model(model_name: str, library_name: str, trust_remote_code: bo
break
if value is not None:
constructor = getattr(transformers, value)
model = constructor.from_config(config, trust_remote_code=trust_remote_code)
# we need to pass the dtype, otherwise it is going to use the torch_dtype that is saved in the config
model = constructor.from_config(config, torch_dtype=torch.float32, trust_remote_code=trust_remote_code)
elif library_name == "timm":
if not is_timm_available():
raise ImportError(
@@ -172,7 +175,7 @@ def create_ascii_table(headers: list, rows: list, title: str):
for i, line in enumerate(rows):
centered_line = [t.center(column_widths[i]) for i, t in enumerate(line)]
table += f"{pattern % tuple(centered_line)}\n"
table += f'{"".join([in_between * n for n in column_widths])}'
table += f"{''.join([in_between * n for n in column_widths])}"
return table

View File

@@ -39,12 +39,13 @@ from accelerate.utils import (
convert_dict_to_env_variables,
is_bf16_available,
is_deepspeed_available,
is_hpu_available,
is_mlu_available,
is_musa_available,
is_npu_available,
is_rich_available,
is_sagemaker_available,
is_torch_version,
is_sdaa_available,
is_torch_xla_available,
is_xpu_available,
patch_environment,
@@ -245,6 +246,12 @@ def launch_command_parser(subparsers=None):
action="store_true",
help="Whether to enable dynamic shape tracing.",
)
resource_args.add_argument(
"--dynamo_use_regional_compilation",
default=False,
action="store_true",
help="Whether to enable regional compilation.",
)
# Training Paradigm arguments
paradigm_args = parser.add_argument_group(
@@ -268,11 +275,12 @@ def launch_command_parser(subparsers=None):
action="store_true",
help="Whether to use Megatron-LM.",
)
paradigm_args.add_argument(
"--use_xpu",
default=False,
default=None,
action="store_true",
help="Whether to use IPEX plugin to speed up training on XPU specifically.",
help="Whether to use IPEX plugin to speed up training on XPU specifically. This argument is deprecated and ignored, will be removed in Accelerate v1.20.",
)
# distributed GPU training arguments
@@ -280,7 +288,7 @@ def launch_command_parser(subparsers=None):
distributed_args.add_argument(
"--gpu_ids",
default=None,
help="What GPUs (by id) should be used for training on this machine as a comma-seperated list",
help="What GPUs (by id) should be used for training on this machine as a comma-separated list",
)
distributed_args.add_argument(
"--same_network",
@@ -498,7 +506,7 @@ def launch_command_parser(subparsers=None):
"--deepspeed_multinode_launcher",
default=None,
type=str,
help="DeepSpeed multi-node launcher to use. If unspecified, will default to `pdsh`.",
help="DeepSpeed multi-node launcher to use, e.g. `pdsh`, `standard`, `openmpi`, `mvapich`, `mpich`, `slurm`, `nossh` (requires DeepSpeed >= 0.14.5). If unspecified, will default to `pdsh`.",
)
deepspeed_args.add_argument(
"--deepspeed_moe_layer_cls_names",
@@ -510,6 +518,13 @@ def launch_command_parser(subparsers=None):
# fsdp arguments
fsdp_args = parser.add_argument_group("FSDP Arguments", "Arguments related to Fully Shared Data Parallelism.")
fsdp_args.add_argument(
"--fsdp_version",
type=str,
default="1",
choices=["1", "2"],
help="FSDP version to use. (useful only when `use_fsdp` flag is passed).",
)
fsdp_args.add_argument(
"--fsdp_offload_params",
default="false",
@@ -522,11 +537,18 @@ def launch_command_parser(subparsers=None):
default=1e8,
help="FSDP's minimum number of parameters for Default Auto Wrapping. (useful only when `use_fsdp` flag is passed).",
)
# We enable this for backwards compatibility, throw a warning if this is set in `FullyShardedDataParallelPlugin`
fsdp_args.add_argument(
"--fsdp_sharding_strategy",
type=str,
default="FULL_SHARD",
help="FSDP's Sharding Strategy. (useful only when `use_fsdp` flag is passed).",
help="FSDP's sharding strategy. (useful only when `use_fsdp` flag is passed and `fsdp_version=1`).",
)
fsdp_args.add_argument(
"--fsdp_reshard_after_forward",
type=str,
default="true",
help="FSDP's Reshard After Forward Strategy. (useful only when `use_fsdp` flag is passed). Supports either boolean (FSDP2) or `FULL_SHARD | SHARD_GRAD_OP | NO_RESHARD` (FSDP1).",
)
fsdp_args.add_argument(
"--fsdp_auto_wrap_policy",
@@ -691,7 +713,7 @@ def launch_command_parser(subparsers=None):
"--fp8_override_linear_precision",
type=lambda x: tuple(map(str_to_bool, x.split(","))),
default=(False, False, False),
help="Whether or not to execute `fprop`, `dgrad`, and `wgrad` GEMMS in higher precision. Should be passed in a comma-seperated string of booleans (useful only when `--fp8_backend=te` is passed).",
help="Whether or not to execute `fprop`, `dgrad`, and `wgrad` GEMMS in higher precision. Should be passed in a comma-separated string of booleans (useful only when `--fp8_backend=te` is passed).",
)
fp8_args.add_argument(
"--fp8_opt_level",
@@ -881,7 +903,7 @@ def tpu_launcher(args):
main_function = getattr(mod, args.main_training_function)
with patch_environment(**current_env):
xmp.spawn(PrepareForLaunch(main_function), args=(), nprocs=args.num_processes)
xmp.spawn(PrepareForLaunch(main_function), args=())
def tpu_pod_launcher(args):
@@ -994,8 +1016,10 @@ def _validate_launch_command(args):
DistributedType.MULTI_GPU,
DistributedType.MULTI_NPU,
DistributedType.MULTI_MLU,
DistributedType.MULTI_SDAA,
DistributedType.MULTI_MUSA,
DistributedType.MULTI_XPU,
DistributedType.MULTI_HPU,
)
else False
)
@@ -1021,28 +1045,20 @@ def _validate_launch_command(args):
# Update args with the defaults
for name, attr in defaults.__dict__.items():
if isinstance(attr, dict):
for k in defaults.deepspeed_config:
setattr(args, k, defaults.deepspeed_config[k])
for k in defaults.fsdp_config:
arg_to_set = k
if "fsdp" not in arg_to_set:
arg_to_set = "fsdp_" + arg_to_set
setattr(args, arg_to_set, defaults.fsdp_config[k])
for k in defaults.megatron_lm_config:
setattr(args, k, defaults.megatron_lm_config[k])
for k in defaults.dynamo_config:
setattr(args, k, defaults.dynamo_config[k])
for k in defaults.ipex_config:
setattr(args, k, defaults.ipex_config[k])
for k in defaults.mpirun_config:
setattr(args, k, defaults.mpirun_config[k])
continue
# Those args are handled separately
if (
# Copy defaults.somedict.somearg to args.somearg and
# defaults.fsdp_config.x to args.fsdp_x
for key, value in attr.items():
if name == "fsdp_config" and not key.startswith("fsdp"):
key = "fsdp_" + key
elif name == "fp8_config" and not key.startswith("fp8"):
key = "fp8_" + key
if hasattr(args, "nondefault") and key not in args.nondefault:
setattr(args, key, value)
elif (
name not in ["compute_environment", "mixed_precision", "distributed_type"]
and getattr(args, name, None) is None
):
# Those args are handled separately
setattr(args, name, attr)
if not args.debug:
args.debug = defaults.debug
@@ -1054,10 +1070,7 @@ def _validate_launch_command(args):
args.mixed_precision = defaults.mixed_precision
mp_from_config_flag = True
else:
if args.use_cpu or (args.use_xpu and torch.xpu.is_available()):
native_amp = is_torch_version(">=", "1.10")
else:
native_amp = is_bf16_available(True)
native_amp = is_bf16_available(True)
if (
args.mixed_precision == "bf16"
and not native_amp
@@ -1072,14 +1085,18 @@ def _validate_launch_command(args):
raise ValueError("You need to manually pass in `--num_processes` using this config yaml.")
else:
if args.num_processes is None:
if args.use_xpu and is_xpu_available():
if is_xpu_available():
args.num_processes = torch.xpu.device_count()
elif is_mlu_available():
args.num_processes = torch.mlu.device_count()
elif is_sdaa_available():
args.num_processes = torch.sdaa.device_count()
elif is_musa_available():
args.num_processes = torch.musa.device_count()
elif is_npu_available():
args.num_processes = torch.npu.device_count()
elif is_hpu_available():
args.num_processes = torch.hpu.device_count()
else:
args.num_processes = torch.cuda.device_count()
warned.append(f"\t`--num_processes` was set to a value of `{args.num_processes}`")
@@ -1089,11 +1106,13 @@ def _validate_launch_command(args):
not args.multi_gpu
and args.num_processes > 1
and (
(args.use_xpu and is_xpu_available() and torch.xpu.device_count() > 1)
or (is_mlu_available() and torch.mlu.device_count() > 1)
or (is_musa_available() and torch.musa.device_count() > 1)
(is_xpu_available() and torch.xpu.device_count() > 1)
or (is_npu_available() and torch.npu.device_count() > 1)
or (torch.cuda.device_count() > 1)
or (is_hpu_available() and torch.hpu.device_count() > 1)
or (is_mlu_available() and torch.mlu.device_count() > 1)
or (is_sdaa_available() and torch.sdaa.device_count() > 1)
or (is_musa_available() and torch.musa.device_count() > 1)
or (torch.cuda.is_available() and torch.cuda.device_count() > 1)
)
):
warned.append(
@@ -1132,6 +1151,12 @@ def _validate_launch_command(args):
f"\t`--num_cpu_threads_per_process` was set to `{args.num_cpu_threads_per_process}` to improve out-of-box performance when training on CPUs"
)
if args.use_xpu is not None:
logger.warning(
"use_xpu is deprecated and ignored, will be removed in Accelerate v1.20. "
"XPU is a PyTorch native citizen now, we don't need extra argument to enable it any more."
)
if any(warned):
message = "The following values were not passed to `accelerate launch` and had defaults used instead:\n"
message += "\n".join(warned)
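To make the rewritten defaults-merging loop above concrete, here is a standalone toy version (the config values are invented) showing how nested config keys are flattened onto `args`, with bare FSDP/FP8 keys gaining their section prefix:

```python
# Toy reproduction of the prefixing rule from the loop above; values are made up.
from argparse import Namespace

defaults = {
    "fsdp_config": {"version": 2, "fsdp_offload_params": False},
    "fp8_config": {"backend": "TE"},
}
args = Namespace(nondefault=set())

for name, section in defaults.items():
    for key, value in section.items():
        if name == "fsdp_config" and not key.startswith("fsdp"):
            key = "fsdp_" + key
        elif name == "fp8_config" and not key.startswith("fp8"):
            key = "fp8_" + key
        if key not in args.nondefault:
            setattr(args, key, value)

assert args.fsdp_version == 2 and args.fp8_backend == "TE" and args.fsdp_offload_params is False
```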

Some files were not shown because too many files have changed in this diff.