Compare commits

...

1340 Commits

Author SHA1 Message Date
50042518db Some stuff 2025-09-16 12:08:49 +00:00
571ca0200d Merge branch 'main' into feat/async-checkpointing 2025-09-13 14:16:07 +00:00
0cb1a33475 fix Multi node CUDA error: invalid device ordinal #3775 (#3779) 2025-09-13 15:32:47 +02:00
dfdc219018 use reset_peak_memory_stats on xpu (#3772)
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
2025-09-12 15:05:31 +02:00
45959d7b96 fix FSDP2 test case failure on XPU (#3771)
* fix FSDP2 test case failure on XPU

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

* fix style

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

---------

Signed-off-by: YAO Matrix <matrix.yao@intel.com>
2025-09-12 15:05:05 +02:00
8b493524c8 Fix: typo makes tests fail (#3765) 2025-09-09 12:06:05 +02:00
9ead94e556 fix: torch_npu import error (#3764) 2025-09-09 11:38:57 +02:00
a0bc36e8ed feat: allow mixed precision policy as dtype (#3751)
* feat: allow mixed precision as dtype

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

* feat: allow mixed precision as dtype

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

* feat: allow mixed precision as dtype

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

* test: extend test for MP as str dtype

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

* Fix: style

---------

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>
Co-authored-by: S1ro1 <matej.sirovatka@gmail.com>
2025-09-08 23:29:20 +02:00
8830e58a91 Fix typos (#3753)
* Fix typos

Signed-off-by: cyy <cyyever@outlook.com>

* Fix: style

---------

Signed-off-by: cyy <cyyever@outlook.com>
Co-authored-by: S1ro1 <matej.sirovatka@gmail.com>
2025-09-08 13:33:18 +02:00
40ebb4bea3 make torch_native_parallelism examples device agnostic (#3759)
* make torch_native_parallelism examples device agnostic

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

* xxx

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

* xxx

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

* Style + deprecation warning

---------

Signed-off-by: YAO Matrix <matrix.yao@intel.com>
Co-authored-by: S1ro1 <matej.sirovatka@gmail.com>
2025-09-08 12:16:56 +02:00
ec92b1af7a fix: model.set_requires_gradient_sync(False) should be called to turn off gradient synchronization in FSDP2 (#3762)
* fix: `model.set_requires_gradient_sync(False)` should be called to turn off gradient synchronization in FSDP2.

* fix: remove trailing whitespace
2025-09-06 23:57:46 +02:00
62ede1ed2a CP docs typos fixed (#3761) 2025-09-05 12:23:33 +02:00
9f9c490c6b fix: specify device for process_tensor in example usage (#3755) 2025-09-03 11:05:24 +02:00
8b55e62b2c xpu INT64 all_gather issue fixed in 2.9 (#3756)
* xpu gather issue fixed in 2.9 and validated config_yamls on XPU

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

* xxx

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

---------

Signed-off-by: YAO Matrix <matrix.yao@intel.com>
2025-09-03 10:56:14 +02:00
0e4419b347 Add bf16/fp16 support for amp with mps device (#3373)
* Fix tests

* format

* amp mps support for fp16/bf16

* add error

* revert

* revert

* fix

* ruff
2025-08-28 14:20:56 +02:00
3b67c21696 Add support for TE MXFP8 recipe in accelerate (#3688)
* Add support for MXFP8 recipe in accelerate

* ruff reformat

* add and fix test for deepspeed / fp8 from config

* minor lints

Signed-off-by: Peter St. John <pstjohn@nvidia.com>

---------

Signed-off-by: Peter St. John <pstjohn@nvidia.com>
2025-08-27 14:08:34 +02:00
7b981788ca [ND Parallel] Update examples, cleanup (#3737)
* Fix: update cp example

* Feat: add rename examples

* WIP: Cleanup with_trainer

* Feat: more cleanup

* Feat: more refactor + better readme + more configs

* Fin
2025-08-26 14:41:14 +02:00
c4460e33ef fix: specify device_ids in torch.distributed.barrier for PartialState (#3744) 2025-08-26 14:05:33 +02:00
5dd3d0b690 Protect import for device_mesh (#3742) 2025-08-22 15:44:56 +02:00
5fe4460ccd Feat: add to_json (#3743) 2025-08-22 15:25:38 +02:00
979d81e4a9 fix: cpu ram efficient loading for nd or hsdp parallelisms (#3740)
Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>
2025-08-21 13:40:06 +02:00
7c25f696b8 Fix convert LayerNorm without bias to fp8 (#3725) 2025-08-18 22:28:48 +02:00
a7d6f28f99 feat: add ignored_params support for fsdp2 (#3731)
* feat: add ignored_params support for fsdp2

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

* feat: add ignored_params support for fsdp2

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

* feat: add ignored_params support for fsdp2

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

* feat: add ignored_params support for fsdp2

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

* test: update testcase for fsdp2 ignored_params

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

* fix: add defensive use of ignored params

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

* fix: styling errors

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

---------

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>
2025-08-18 14:31:19 +02:00
23cf4ef8a3 Fix tests (#3722)
* fix tests

* fix skorch tests

* fix deepspeed

* pin torch as compile tests don't pass and create segmentation fault

* skip compile tests

* fix

* forgot v ...

* style
2025-08-07 16:59:29 +02:00
ff872f5f71 bump to 1.11.0dev0 2025-08-07 12:58:08 +02:00
2941a6b0fb remove (#3721) 2025-08-07 12:48:11 +02:00
c0a3aefea8 feature: CpuOffload pre_forward don't attempt to move if already on device (#3695)
* feature: added optimisation to not attempt to move devices if already on that device. This is more noticeable in large step iterations on diffusion loops when the pre_forward can get called many times

* fix: linting

* Apply style fixes

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-08-06 19:46:13 +02:00
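
The idea behind this change can be shown with a short, hedged sketch (not accelerate's actual hook code; `maybe_move` is a made-up helper name): skip the `.to()` call entirely when the module already sits on the target device, so a pre-forward hook that fires on every step of a diffusion loop stays cheap.

```python
import torch
from torch import nn

def maybe_move(module: nn.Module, device: torch.device) -> nn.Module:
    # Hypothetical helper mirroring the optimisation above: only move when needed.
    current = next(module.parameters(), torch.empty(0, device=device)).device
    if current != device:
        module.to(device)
    return module
```
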
42fdda1c1f Remove ParallelismConfig from PartialState (#3720)
* remove

* style

* fix

* valueerror instead

* add device_mesh
2025-08-06 19:00:26 +02:00
e23b004b30 TST Add test for FSDP ignored_modules as str (#3719)
Follow up to #3698.
2025-08-06 18:05:54 +02:00
898cad39e8 Fix: tp size wouldn't read from env (#3716) 2025-08-06 15:08:55 +02:00
24c8157bba Set parallelism_config in constructor due to Trainer reset of State (#3713) 2025-08-06 13:47:49 +02:00
6891c57072 Feat: context parallel v2.0 (#3700)
* Cleanup: context parallel

* Feat: cleanup

* Feat: concept guide

* Fix: rename + version check

* Style

* Fix: add to namespace in a test

* Fix: add skip_if on dataclass tests

* Fix: proper version for version check

* Feat: add tests and cleanup

* Fix: properly version check added tests

* Feat: address comments

* Fix: add both shift_labels and labels to make the model.forward calculate loss

* Fix: remove import, improve comment

* Fix: final checks

* Fix: style

* Fix: style
2025-08-05 16:17:13 +02:00
24e48f3d20 ENH: Allow FSDP ignored modules to be regex (#3698)
* ENH: Allow FSDP ignored modules to be regex

Description

For FSDP, there is an option to indicate ignored_modules, which should
be a list of modules that are ignored by FSDP. Even though this argument was
supported in accelerate, it was not very usable:

1. Listing all modules can be tricky, especially with something like PEFT,
where the whole model is wrapped and thus the module structure changes.
2. When configuring this argument, accelerate takes a detour via
environment variables. These can only be strings. Therefore, passing a
list of modules is not feasible.

Moreover, I noticed that the environment variable for ignored_modules
was not even set, so configuring this argument didn't even work.

Status

This PR is lacking tests. I would be happy for pointers on how to add
those.

Context

When using PEFT with LoRA and the target_parameters feature, I ran into
an issue training such a model with FSDP. The only working fix I found
was to ignore the layers targeted by LoRA. However, I could not
configure accelerate to do that. With this PR, it is possible. I could
successfully train such a PEFT model that targets q_proj and v_proj by
setting fsdp_ignored_modules: '.*\.(q_proj$|v_proj$)'.

* Fix type annotation

* Fix failing test
2025-08-05 14:23:14 +02:00
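
As a rough illustration of how such a regex could be resolved against a model (a hedged sketch, not the code added in #3698; `find_ignored_modules` is a hypothetical helper):

```python
import re
from torch import nn

def find_ignored_modules(model: nn.Module, pattern: str) -> list[nn.Module]:
    # Hypothetical helper: match fully qualified module names against the regex,
    # e.g. pattern = r".*\.(q_proj$|v_proj$)" to ignore LoRA-targeted projections.
    regex = re.compile(pattern)
    return [module for name, module in model.named_modules() if regex.match(name)]
```
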
6640ff415c Fix: Ensure environment variable values are case-insensitive in Accelerate (#3712)
* Add: lower

* apply ruff
2025-08-05 13:22:00 +02:00
c173b4fdd6 Fix: prepare works even if nothing except tp specified (rare) (#3707) 2025-08-05 13:07:37 +02:00
cb343c63d7 Add Parallelism getter property to Accelerator class (#3703)
* Add rank property to Accelerator class

Signed-off-by: WoosungMyung <dntjd517@naver.com>

* Raise errors when parallelism configuration is not enabled

Signed-off-by: WoosungMyung <dntjd517@naver.com>

* Fix: PR feedback

Signed-off-by: WoosungMyung <dntjd517@naver.com>

* Fix: style

---------

Signed-off-by: WoosungMyung <dntjd517@naver.com>
Co-authored-by: S1ro1 <matej.sirovatka@gmail.com>
2025-08-02 18:20:08 +02:00
354b0b5da3 WIP: very much wip but works (probably) 2025-08-01 01:28:49 +00:00
9359a0194f Parallelism config + TP + HSDP + BYODM (Bring Your Own Device Mesh) (#3682)
* Feat: init

* Feat: add validation + init from kwargs

* Fix: minor fixes

* Feat: more cleanup

* Minor refactor

* remove import

* adding support for pre-configured device mesh

* adding device mesh to fsdp2

* moving mesh dim defn to ParallelismConfig

* tests

* WIP device mesh/accelerator validation

* WIP more tests

* Test Driven Development (TDD)

* fixing build_device_mesh

* FSDP dim names

* adding example

* WIP

* fixing HSDP

* Feat: add back old options

* working example

* debugging

* adding parallelism config to partialstate

* Feat: revert ddp changes

* Revert DDP

* Feat: (untested) update mesh dims and some minor tweaks

* adding dp_cp dims

* updating comments

* WIP

* wip 2

* reverting

* storing state in accelerator rather than acceleratorstate

* Fix: minor tweaks

* wip example update

* Fixes for non-fsdp2 case

* Feat: ensure ddp/tp only works

* updating example

* updating example

* updating examples, fixing state

* fixed state

* comments

* fixing partial state check

* linting

* comments

* removing fn

* WIP: fix tp

* comments

* removing return

* reverting upcast

* add guards

* guards for empty self.parallelism_config

* use len on tuple to check if empty

* Feat: cleanup example

* Feat: some cleanup of example

* Feat: add trackio

* Fix: improve trackio

* Feat: TP works

* Feat: some fsdp2 improv

* Feat: working examples

* handle clipping for tensor parallel

* Implicit replicate

* Refactor: move to separate file + cleanup + basic comments

* Fix: add unadded files, fix circular import

* Feat: better readme

* Feat: add blog + ultrascale links

* Tmp: should_save_model now returns only true

* Fix: remove implicit_replication and style

* Fix: remove optional

* add guard on parallelism_config.tp_enabled

* fix import

* fixing empty parallelism_config

* fix import path for test patch

* fixing patch

---------

Co-authored-by: S1ro1 <matej.sirovatka@gmail.com>
Co-authored-by: Salman Mohammadi <“salman.mohammadi@outlook.com”>
Co-authored-by: Wing Lian <wing@axolotl.ai>
2025-07-30 21:03:13 +02:00
2f075c724c set default submesh_tp_size to prevent unset local variable error (#3687)
* set default submesh_tp_size to prevent unset local variable error

* Apply style fixes

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-07-22 12:31:03 +02:00
7ecc2d7f39 bump to v1.10.0-release 2025-07-16 16:26:03 +00:00
12f89bb754 do not call partial state if not initialized 2025-07-16 13:42:58 +00:00
348aabaaaf Update Gaudi runner image to latest SynapseAI and enable previously disabled tests (#3653)
* update synapse and add tp tests

* only skip regional compile speedup check

* pass sdp test on hpu
2025-07-16 14:33:36 +02:00
3b13453bbf “Stop Halving My Batch!” · Default back-off 0.5 → 0.9 (#3684)
* feat(memory): change default find_executable_batch_size to change by 10% instead of 50%

* Update test_memory_utils.py

* Apply style fixes

---------

Co-authored-by: Amit Moryossef <amitmoryossef@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-07-16 12:32:46 +02:00
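
In other words, the automatic batch-size finder now retries with a 10% reduction instead of halving. A minimal standalone sketch of that retry loop (assumes torch >= 2.5 for `torch.OutOfMemoryError`; the real logic lives in `accelerate.utils.find_executable_batch_size`):

```python
import gc
import torch

def run_with_backoff(train_step, starting_batch_size: int = 128, factor: float = 0.9):
    # Sketch only: retry the step on OOM, shrinking the batch by 10% per attempt
    # (the old default shrank it by 50%).
    batch_size = starting_batch_size
    while batch_size > 0:
        try:
            return train_step(batch_size)
        except torch.OutOfMemoryError:
            gc.collect()
            if torch.cuda.is_available():
                torch.cuda.empty_cache()
            batch_size = int(batch_size * factor)
    raise RuntimeError("No executable batch size found")
```
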
0408ab12d7 warn for invalid keys (#3613)
* warn for invalid keys

* add test for check_device_map invalid keys

* Apply style fixes

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-07-16 12:23:41 +02:00
55e518a762 accelerate/data_loader.py: do not yield if the base_dataloader is empty (#3659)
* accelerate/data_loader.py: do not yield if the base_dataloader is empty

in the code:
```
        dataloader_iter = self.base_dataloader.__iter__()
        # We iterate one batch ahead to check when we are at the end
        try:
            current_batch = next(dataloader_iter)
        except StopIteration:
            yield
```

If the base dataloader is empty then the exception is raised but `yield`
yields nothing.

This means that, at the time of:
```
if self.device is not None:
    current_batch = send_to_device(current_batch, self.device, non_blocking=self._non_blocking)
```

this would lead to an uncaught exception like:
 File "/root/rl-swarm/.venv/lib/python3.10/site-packages/accelerate/data_loader.py", line 575, in __iter__
    current_batch = send_to_device(current_batch, self.device, non_blocking=self._non_blocking)
UnboundLocalError: local variable 'current_batch' referenced before assignment

because `current_batch` was never assigned, since `next(dataloader_iter)` raised `StopIteration`.

Signed-off-by: 0xnightwind <nightwind1899@gmail.com>

* Update src/accelerate/data_loader.py

---------

Signed-off-by: 0xnightwind <nightwind1899@gmail.com>
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2025-07-16 12:04:25 +02:00
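
A simplified sketch of the resulting control flow (not the actual `DataLoaderShard.__iter__`, just the guard pattern): return instead of yielding when the base dataloader produces no batches.

```python
def iterate_one_ahead(dataloader):
    # Iterate one batch ahead so the last batch can be detected, but bail out
    # cleanly if the underlying dataloader is empty.
    it = iter(dataloader)
    try:
        current_batch = next(it)
    except StopIteration:
        return  # empty dataloader: yield nothing at all
    for next_batch in it:
        yield current_batch
        current_batch = next_batch
    yield current_batch
```
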
7e11ac43f0 fix: wandb config not saved in offline mode (#3648)
* fix: wandb config not saved in offline mode

* Apply style fixes

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-07-15 17:51:44 +02:00
e2cc537db8 trackio (#3669)
* trackio

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: Abubakar Abid <abubakar@huggingface.co>

* seven -> eight

* Add trackio as a real tracker instead

* Sort

* Style

* Style

* Remove step

* Disable trackio on Python < 3.10

* Update src/accelerate/tracking.py

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* More style

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: Abubakar Abid <abubakar@huggingface.co>
2025-07-15 17:17:49 +02:00
847ae58c74 Fix FP8 tests, enable FP8 to be used without direct Accelerator() configuring (#3677)
* single-gpu tests passing

* install deepspeed in fp8 container

* revert mixed_precision check
2025-07-15 15:20:57 +02:00
6e104f31de unpin datasets (#3681) 2025-07-15 15:00:35 +02:00
524e5f9828 Speedup model loading by 4-5x in Diffusers (#3674)
* update

* update

* make style

* update

* merge if statements
2025-07-11 16:58:35 +02:00
d6c986c3f2 Bunch of FSDP improvements (#3671)
* Feat: split tests

* Feat: finito

* Fix

* Final, tests pass
2025-07-09 16:05:22 +02:00
1ac8643df7 xpu enablement on left cases (#3654)
* 1. enable xpu for launcher 2. expand cuda only ds uts to xpu 3. expand profiler example to xpu

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

* fix style

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

* rename

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

* Update profiler.py

* Apply style fixes

---------

Signed-off-by: YAO Matrix <matrix.yao@intel.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-07-07 18:10:53 +02:00
07ce74868c Fix: properly error when DDP + Dtensor model (#3629)
* Feat: add check

* Refactor: nits
2025-06-27 01:33:45 +02:00
175fe91589 Added a check in the no_sync() function to avoid errors when using deepspeed zero2/3. (#3656) 2025-06-26 14:39:04 +02:00
fe16ce8bce Fix fsdp2 example (#3657) 2025-06-26 14:08:51 +02:00
5987d79a53 Update gradient_accumulation.md (#3649) 2025-06-23 11:58:31 +02:00
31af8d4e8e shards (#3645) 2025-06-20 11:24:20 +02:00
b7493a82b1 Add support for e5e2 and default to hybrid when launcher is used (#3640)
* add support for e5e2 and default to hybrid when launcher is used

* style
2025-06-20 11:11:32 +02:00
a16d2bb3c1 bump to v1.9.0dev 2025-06-19 15:13:41 +02:00
cac22ed980 fix grad acc deepspeed (#3638)
* fix grad acc deepspeed

* style
2025-06-19 12:06:21 +02:00
be826a6b7b Fix: correct labels (#3637) 2025-06-19 11:01:56 +02:00
5939640829 Feat: add cpu offload (#3636) 2025-06-18 18:13:45 +02:00
7f9c8cbe34 [DeepSpeed] sync gradient accum steps from deepspeed plugin (#3632)
* sync steps

* add a debug log when overriding

* make grad accum always consistent

* remove debug
2025-06-18 16:45:57 +02:00
9888c7ed23 feat: use datasets.IterableDataset shard if possible (#3635)
* feat: use datasets.IterableDataset shard if possible.

When `accelerator.prepare` is called on a
`datasets.IterableDataset`, use the `shard` method to
split the dataset across the available processes. This
allows for more efficient data loading and processing,
without the load and slice overhead of `IterableDatasetShard`.

* dataset

* remove unused import

* style

---------

Co-authored-by: wuwenxu.01 <wuwenxu.01@bytedance.com>
2025-06-18 16:45:17 +02:00
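
A hedged usage sketch of this sharding path (assumes a `datasets` release where `IterableDataset.shard` is available; the dataset name and process indices are placeholders):

```python
from datasets import load_dataset

num_processes, process_index = 8, 0  # in practice taken from the Accelerator/PartialState

# Placeholder dataset: each process streams only its own shard instead of
# loading and slicing batches owned by other processes.
stream = load_dataset("allenai/c4", "en", split="train", streaming=True)
shard = stream.shard(num_shards=num_processes, index=process_index)
```
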
42a68c30dc Fix Typos in Documentation and Comments (#3621)
* Update state.py

* Update tracking.py
2025-06-18 15:53:02 +02:00
6597dae780 Integrate SwanLab for offline/online experiment tracking for Accelerate (#3605)
* add support for SwanLabTracker and update related documentation

* add emoji in FRAMWORK

* apply the style corrections and quality control

* add support for SwanLabTracker in tests

* fix bug in test_tracking
2025-06-18 15:42:29 +02:00
8878d93745 remove hardcoded cuda from fsdpv2 (#3631) 2025-06-17 14:32:10 +02:00
2eaf5cdbbc remove ipex.optimize in accelerate (#3608)
* remove ipex.optimize in accelerate

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

* fix mis-style

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

* Update intel_cpu.md

* Update launch.py

* fix comments

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

* fix style

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

* add logging

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

* Update launch.py

* Apply style fixes

---------

Signed-off-by: YAO Matrix <matrix.yao@intel.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-06-17 11:08:19 +02:00
23c1d8db89 [Deepspeed] deepspeed auto grad accum (#3630)
* deepspeed auto grad accum

* add tests for grad accum

* use tiny-random-gpt2

* Update tests/deepspeed/test_deepspeed_gradient_accumulation.py

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* fix redundant code

* set_gradient_accumulation_boundary is always there

* remove unused helper

* no need for this

* full revert

* Apply style fixes

* get_global_grad_norm is always there

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-06-16 16:28:24 +02:00
0af621bbec add xpu support in TorchTensorParallelPlugin (#3627)
* add xpu support in TorchTensorParallelPlugin

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

* fix typo

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

---------

Signed-off-by: YAO Matrix <matrix.yao@intel.com>
2025-06-13 17:45:51 +02:00
bee04f1b01 Add fp8_e5m2 support in dtype_byte_size (#3625)
* float8_e5m2 device_map

* remove prints
2025-06-12 16:27:32 +02:00
8a953f08c6 fix xpu 8bit value loading (#3623)
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
2025-06-12 14:55:14 +02:00
3518c03584 small fix (#3619) 2025-06-11 14:02:45 +02:00
2f8fd72e51 Remove device_count (#3587) 2025-06-10 14:50:34 +02:00
d2e6b0313d [FSDP2] Refactor + FP8 (#3585)
* Fix double wrap

* Clocking off, ~equal to torch baseline

* works?

* Working version

* Partial rewrite

* FSDP2 path works

* Fix back prepare

* Almost done, proper AC left

* Feat: should work, cleanup + test more benchmarks left

* Style+quality

* Feat: fp8 example

* Feat: better example

* Feat: add readme

* Docs + should be done

* Fix: typos

* Fix: protect imports

* Feat: address comments

* Feat: add flops image
2025-06-10 14:26:48 +02:00
b9fee48c85 better handle FP8 with and without deepspeed (#3611)
* use the state mixed precision which has undergone all preprocessing

* Update src/accelerate/accelerator.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update src/accelerate/accelerator.py

* accelerator state sets the mixed precision for deepspeed and fp8_enabled

* fix

* fix

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2025-06-10 14:24:43 +02:00
3a82b056cf Fix bf16 training with TP (#3610)
* fix

* Apply style fixes

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-06-10 11:29:59 +02:00
6b61a373a2 fix deepspeed regional compilation (#3609) 2025-06-06 14:48:43 +02:00
682691deac Update Gaudi Runners (#3593)
* test

* fix

* push

* in the morning

* fix backend

* run first

* set habana modules

* dynamo backend

* trigger

* remove on pr

* remove on file change
2025-06-03 12:36:56 +02:00
791055b484 Fix: list object has no attribute keys (#3603) 2025-06-03 12:24:20 +02:00
16bf1d8901 enable torchao and pippy test cases on XPU (#3599)
* enable torchao and pippy test cases on XPU

Signed-off-by: Matrix YAO <matrix.yao@intel.com>

* fix style

Signed-off-by: Matrix YAO <matrix.yao@intel.com>

---------

Signed-off-by: Matrix YAO <matrix.yao@intel.com>
2025-05-30 17:36:34 +02:00
ab3c604e48 enable big_model_inference on xpu (#3595)
* enable big_model_inference on XPU

Signed-off-by: Matrix YAO <matrix.yao@intel.com>

* fix style

Signed-off-by: Matrix YAO <matrix.yao@intel.com>

* fix quality

Signed-off-by: Matrix YAO <matrix.yao@intel.com>

---------

Signed-off-by: Matrix YAO <matrix.yao@intel.com>
2025-05-30 17:23:26 +02:00
273799c85d enable fsdp2 benchmark on XPU (#3590)
* enable fsdp2 benchmark on XPU

Signed-off-by: Matrix YAO <matrix.yao@intel.com>

* add deterministic

Signed-off-by: Matrix YAO <matrix.yao@intel.com>

---------

Signed-off-by: Matrix YAO <matrix.yao@intel.com>
2025-05-27 14:08:59 +02:00
43526c5c08 add device-agnostic GradScaler (#3588)
* add device-agnostic GradScaler

Signed-off-by: Matrix YAO <matrix.yao@intel.com>

* fix bug

Signed-off-by: Matrix YAO <matrix.yao@intel.com>

* fix review comments

Signed-off-by: Matrix YAO <matrix.yao@intel.com>

* fix

Signed-off-by: Matrix YAO <matrix.yao@intel.com>

* format

Signed-off-by: Matrix YAO <matrix.yao@intel.com>

* Apply style fixes

---------

Signed-off-by: Matrix YAO <matrix.yao@intel.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-05-27 11:44:50 +02:00
07f2392f40 change to use torch.device (#3594)
Signed-off-by: Matrix YAO <matrix.yao@intel.com>
2025-05-27 11:17:18 +02:00
ee2f48c2c3 [docs] no hard-coded cuda in the ddp documentation (#3589)
* make device-agnostic

* refactor
2025-05-27 11:16:42 +02:00
4f3abb73a7 Set ccl and KMP param in simple launch (#3575)
* Even a 1-CPU machine can also run multi-process

Signed-off-by: jiqing-feng <jiqing.feng@intel.com>

* fix ccl and KMP param setting

Signed-off-by: jiqing-feng <jiqing.feng@intel.com>

* set master addr only when processes > 1

Signed-off-by: jiqing-feng <jiqing.feng@intel.com>

* fix num process check

Signed-off-by: jiqing-feng <jiqing.feng@intel.com>

* fix ccl args check

Signed-off-by: jiqing-feng <jiqing.feng@intel.com>

---------

Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
2025-05-26 15:55:10 +02:00
db536cbfeb Fix: Defer Tracker Initialization to Prevent Premature Distributed Setup (#3581)
* Fix tracker initialize distributed before InitProcessGroupKwargs

* Fix tracker initialize distributed before InitProcessGroupKwargs

* Add test for bug #3550

* Improve test for #3550

* Remove redundant code

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* fix style

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2025-05-26 15:08:13 +02:00
4e9d0deba6 enable regional_compilation benchmark on xpu (#3592)
* enable regional_compilation benchmark on xpu

Signed-off-by: Matrix YAO <matrix.yao@intel.com>

* Apply style fixes

---------

Signed-off-by: Matrix YAO <matrix.yao@intel.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-05-26 15:05:42 +02:00
8cb3ace894 Add kwargs to optimizer, scheduler and dataloader using function accelerator().load_state() (#3540)
* Added artifacts and figure tracking at MLFlow tracker

* Added `log_artifact` to the MLFlowTracker

* Remove changes

* Added kwargs when loading state.

* added doc string

* Adjusted correct default types of kwargs

* Changed the load kwargs to a single one

* removed None value from kwargs

* fix kwargs for loading the model

* removed load_kwargs from optimizer state dict

* make load_kwargs a dictionary

* revert last changes

* reverted load_kwargs

* fix docstring

* added dict initiation

* Fix quality error during PR
2025-05-22 17:21:54 +02:00
b6d97cb856 Resolve logger warnings (#3582)
Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
2025-05-22 16:26:31 +02:00
33967d4733 Add support for standalone mode when default port is occupied on single node (#3576)
* add standalone mode and replace ConnectionError with a warning when the main process port is in use, allowing for automatic port selection

* address review feedback: warn on port conflict only for single-node; raise error for multi-node

* Apply style fixes

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-05-20 12:29:53 +02:00
5b1fcda371 enable test_cli & test_example cases on XPU (#3578)
* enable test_cli & test_example cases on XPU

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

* fix style

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

* fix style

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

* remove print

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

* fix ci issue

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

---------

Signed-off-by: Matrix Yao <matrix.yao@intel.com>
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
2025-05-20 12:04:24 +02:00
f55f0533b5 goodbye torch_ccl (#3580)
Signed-off-by: Matrix Yao <matrix.yao@intel.com>
2025-05-20 12:02:14 +02:00
1ec99f0b58 enable test_load_checkpoint_and_dispatch_with_broadcast cases on XPU (#3579)
* enable test_load_checkpoint_and_dispatch_with_broadcast cases on XPU

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

* fix style

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

* Update test_load_checkpoint_and_dispatch_with_broadcast.py

---------

Signed-off-by: Matrix Yao <matrix.yao@intel.com>
2025-05-19 11:27:40 +02:00
417bc52965 bump to v1.8.0dev 2025-05-15 12:02:44 +02:00
97c93c4809 enable test_dispatch_model_tied_weights_memory_with_nested_offload_cpu on xpu (#3569)
* enable test_dispatch_model_tied_weights_memory_with_nested_offload_cpu
case on XPU

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

* replace hard-coded torch.cuda w/ device-dependent callings

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

* fix style

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

* use device agnostic clear_device_cache

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

* fix style

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

---------

Signed-off-by: Matrix Yao <matrix.yao@intel.com>
2025-05-15 11:40:55 +02:00
cd37bbb629 set backend correctly for CUDA+FSDP2+cpu-offload (#3574)
* set backend correctly for CUDA+FSDP2+cpu-offload

* offload

* format

---------

Co-authored-by: Wing Lian <wing@axolotl.ai>
2025-05-15 11:38:53 +02:00
7aa3b56c80 Fix prevent duplicate GPU usage in distributed processing (#3526)
* check if num_extrs>0 and test

* test pass

* test passes

* make quality fix

* Apply style fixes

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-05-15 11:31:20 +02:00
14f4306ca6 reenable FSDP2+qlora support (#3546) 2025-05-15 11:30:55 +02:00
e6e717589e Add regional compilation to cli tools and env vars (#3572)
* add regional compilation to cli tools and env vars

* added seq parallel to gaudi docs

* explain that lm_head is also compiled separately

* style

* docstring

* style
2025-05-15 11:30:27 +02:00
1f6efcea0b tune env command output (#3570)
Signed-off-by: Matrix Yao <matrix.yao@intel.com>
2025-05-15 10:51:43 +02:00
9fa97f9600 simplify model.to logic (#3562)
* simplify model.to logic

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

* revert device_type == "cuda" changes

Signed-off-by: Matrix Yao <matrix.yao@intel.com>

---------

Signed-off-by: Matrix Yao <matrix.yao@intel.com>
2025-05-15 10:31:08 +02:00
764eee4a48 add xpu synchronize (#3563) 2025-05-14 19:20:24 +02:00
202e6c178a Update dynamic env handling to preserve None when USE_DYNAMIC is unset (#3567)
* Update dynamic env handling to preserve None when USE_DYNAMIC is unset

* Apply suggestions from code review

---------

Co-authored-by: Ilyas Moutawwakil <57442720+IlyasMoutawwakil@users.noreply.github.com>
2025-05-14 16:34:08 +02:00
32874257f3 Add Gaudi doc (#3537)
* Add Gaudi doc

* Address comment from review

* Remove point about region compilation

---------

Co-authored-by: Ilyas Moutawwakil <57442720+IlyasMoutawwakil@users.noreply.github.com>
2025-05-13 18:27:33 +02:00
281314b479 preserve parameter keys when removing prefix (#3564)
* preserve parameter keys when removing prefix

* Apply style fixes

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-05-13 17:11:42 +02:00
3524a504c8 update path (#3561) 2025-05-13 13:57:29 +02:00
f48d95c493 canonicalize fsdp2 names when fixing optimizer (#3560) 2025-05-12 19:40:50 +02:00
f76208f5a8 make env var and dataclass flag consistent (#3307)
Signed-off-by: SumanthRH <sumanthrh@anyscale.com>
2025-05-12 17:57:58 +02:00
ae0499ea96 cast if dtype is not None (#3559)
Co-authored-by: dpappadopulo <dpappadopulo@bloomberg.net>
2025-05-12 15:27:11 +02:00
ddc49f1e9a Fix the issue where set_epoch does not take effect. (#3556)
* Fix the issue where `set_epoch` does not take effect.

* Apply style fixes

---------

Co-authored-by: root <root@hjx-dev-h20-3-0.hjx-dev-h20-3.bcloud.svc.cluster.local>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-05-12 14:30:19 +02:00
9b2d6eaf32 add support for port 0 auto-selection in multi-GPU environments (#3501)
* add support for port 0 auto-selection in multi-GPU environments

* address review feedback: [add implementation for DeepSpeed, simplify code logic]

---------

Co-authored-by: biondi <biondi_lee@htx.ht.gov.sg>
2025-05-12 13:36:45 +02:00
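
The underlying trick is standard: bind to port 0 and let the OS hand back a free port. A small sketch (not the accelerate implementation):

```python
import socket

def pick_free_port() -> int:
    # Bind to port 0 so the OS assigns an unused port, then reuse that number
    # as the main process port for the launcher.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("", 0))
        return s.getsockname()[1]
```
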
7b5774ac55 Dynamo regional compilation (#3529) 2025-05-12 09:49:29 +02:00
7013365791 fix typos (#3549) 2025-05-08 14:10:12 +02:00
8d8fd83672 fix notebook_launcher for Colab TPU compatibility. (#3541)
* fixes for notebook_launcher for google colab TPU compatibility.

* Apply style fixes

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-05-06 17:55:18 +02:00
3a941d4b4e Fix: param is not a parameter or buffer (#3545) 2025-05-06 14:28:48 +02:00
d02e51cc21 Update big_modeling.md for layerwise casting (#3548)
* Update big_modeling.md for layerwise casting

* doc fix
2025-05-06 09:50:53 +02:00
c5caa11e85 Fix CI due to missing package (#3535)
* fix test

* fix

* fix

* fix

* fix workflow

* check

* revert
2025-04-29 10:48:39 +02:00
39e2bebb12 Update Docker builds to align with CI requirements (#3532) 2025-04-28 10:50:50 +02:00
0af45bf1e8 Fix logic in accelerator.prepare + IPEX for 2+ nn.Models and/or optim.Optimizers (#3517)
* Fix logic in _prepare_ipex

* Add caution about prepare in IPEX docs

* Add suggested workaround to IPEX docs

* Revert unnecessary change

* Update docs/source/usage_guides/ipex.md

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Remove double space

* Simplify logical checks for IPEX availability

* Revert unnecessary change

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2025-04-25 17:31:36 +02:00
806ac848c9 [FSDP2] Issues in Wrap Policy and Mixed Precision (#3528)
* fix fsdp2 wrap policy

* nn.Module doesn't have the dtype attribute

* Revert "nn.Module doesn't have the dtype attribute"

This reverts commit 513c7892876f81ec76ce32bcdce83bfe8556491d.

* Fix dtype handling in fsdp2_prepare_model to accommodate nn.Module without dtype attribute

* fix format problem
2025-04-24 22:59:13 +02:00
23b092507a [FSDP2] Fix memory spike with cpu_ram_efficient_loading=True (#3482)
* Feat: shard on meta device

* Feat: support fqns in get_non_persistent_buffers

* Fix: retie weights after loading
2025-04-24 12:19:49 +02:00
8fb073536a [FSDP2] Enable FULL_STATE_DICT (#3527)
* Feat: enable FULL_STATE_DICT in config

* Feat: support FSDP2 FULL_STATE_DICT

* Refactor: remove deprecated save/load_state_dict

* Docs: add FULL_STATE_DICT as supported to docs

* Feat: update tests

* Feat: change Accelerator.get_state_dict() to use new api
2025-04-23 18:03:45 +02:00
4f35cf713c Solve link error in internal_mechanism documentation (#3506) (#3507)
* Solve link error in internal_mechanism (#3506)

* Link correctly to documentation (#3506)
2025-04-23 17:47:25 +02:00
ada21cfbbd fix cuda init (#3530) 2025-04-23 15:57:40 +02:00
b451956fd6 Add torchao to FP8 error message (#3514) 2025-04-22 14:06:47 +02:00
6a9a61520d [Feat] Layerwise casting hook (#3427)
* start

* method implementation.

* updates.

* updates

* remove print.

* aryan as one of the contributors

Co-authored-by: a-r-r-o-w <contact.aryanvs@gmail.com>

* change to attach_layerwise_casting_hooks

* enable skipping modules.

* tests

* revert style changes to other files.

* feedback

* remove comments

* add example

* fix test case for edges.

* reviewer feedback

---------

Co-authored-by: a-r-r-o-w <contact.aryanvs@gmail.com>
2025-04-22 13:49:43 +02:00
423fbbfdea fix cache (#3513) 2025-04-18 18:07:46 +02:00
34c1779828 Remove deprecated PyTorch/XLA APIs (#3484) 2025-04-15 11:44:14 +02:00
54496571fd Fix: require transformers version for tp tests (#3504) 2025-04-15 11:42:26 +02:00
4a3cbcb63c fix: apply torchfix to set weights_only=True (#3497)
* fix: apply torchfix

* fix: apply torchfix
2025-04-15 11:41:05 +02:00
583b26db3c Add FP8 runners + tweak building FP8 image (#3493)
* Initial test

* Try on push

* Only wf dispatch now

* keep trying

* Try again

* Try again

* source activate?

* Force bash

* Source activate accelerate to make it get the env properly

* try using nightly docker

* Try this?

* Try this?

* Try this, proper output

* Try this, proper output

* Try via full conda activate(?)

* rm conda

* te fp8 tests

* add ao

* ao in setup too

* actually include fp8 deps

* FP8 docker image, use newer version

* Update docker image to take in input

* Test

* prior month

* igpu?

* Use only last 2 digits of year

* Build rest

* Apply style fixes

---------

Co-authored-by: [[ -z $EMAIL ]] && read -e -p "Enter your email (for git configuration): " EMAIL <muellerzr@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-04-15 11:39:43 +02:00
7812d979c3 Fix deepspeed tests (#3503)
* Fix: check for tp size when creating accelerator in tests

* Fix: better error handling in TorchTensorParallelPlugin

* Fix: make tp related args optional in tests (cmt by @kmehant)
2025-04-14 16:16:01 +02:00
67adb473a4 (Part 1) fix: make TP training compatible with new transformers (#3457)
* feat: support new tp refactor for training

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

* fix: @S1ro1 review cmt

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

* fix: @S1ro1 review cmt - tp_plan flag docstr

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

* fix: @SunMarc review cmt on un used flag

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

* fix: pick approach 3 as discussed in the PR

see https://github.com/huggingface/accelerate/pull/3457#discussion_r2037909077 for more details

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

* fix: styling errors

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

* fix: bump up transformers for tp_size feature

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

---------

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>
2025-04-11 18:31:28 +02:00
ee4cab96ed nit: needed sanity checks for fsdp2 (#3499)
Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>
2025-04-11 17:04:34 +02:00
73c2378c55 Use torch.distributed.checkpoint.state_dict.set_model_state_dict in load_checkpoint_in_model (#3432)
* Use torch.distributed.checkpoint.state_dict.set_model_state_dict in load_checkpoint_in_model

load_checkpoint_in_model now supports loading into FSDP2-wrapped models when using device_map=None

For large models in a distributed setting, leveraging broadcast_from_rank0 reduces file system reads and results in much faster loading (loading a 70B model on a single node of 8 GPUs takes 60 seconds vs 90 seconds).

* Guard torch.distributed.checkpoint.state_dict with is_torch_version('>=', '2.2.0')

This should fix issues with slow import and also fixes versioning issues

https://github.com/huggingface/accelerate/pull/3432#discussion_r1989782680
https://github.com/huggingface/accelerate/pull/3432#discussion_r1989946020

* Add test for non-distributed, TP, and DDP for load_checkpoint_and_dispatch(device_map=None) using set_model_state_dict

https://github.com/huggingface/accelerate/pull/3432#discussion_r1989741480
https://github.com/huggingface/accelerate/pull/3432#discussion_r1989960317

* Verify minimum version for broadcast_from_rank0

* Mark transformers as required for broadcast_from_rank0 tests, mark min version of torch to test as 2.4.0

* Add model_devices guard to set_model_state_dict

set_model_state_dict will fail if the model state_dict is not on at most one device

* Move decorators to top of test class

* https://github.com/huggingface/accelerate/pull/3432/files#r1993272280
* https://github.com/huggingface/accelerate/pull/3432/files#r1993268932

* Unindent functions

https://github.com/huggingface/accelerate/pull/3432/files#r1993275663

* Add condition for w/ explanatory links for set_model_state_dict model device restrictions

* Fix distribution of 2.2.0 condition

* Remove tensor parallel test

* Fix model materialization example

* Fix materialization example

* Remove old tensor parallel test
2025-04-11 17:01:33 +02:00
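
A condensed sketch of the loading path described above (assumes torch >= 2.4, where `StateDictOptions(broadcast_from_rank0=True)` is available; not the exact accelerate code):

```python
import torch
from torch.distributed.checkpoint.state_dict import StateDictOptions, set_model_state_dict

def load_broadcast_from_rank0(model: torch.nn.Module, full_state_dict: dict) -> None:
    # Only rank 0 needs to have read the checkpoint from disk; the full state
    # dict is broadcast and re-sharded onto the wrapped model.
    options = StateDictOptions(full_state_dict=True, broadcast_from_rank0=True)
    set_model_state_dict(model, full_state_dict, options=options)
```
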
b2f937faec Add the HPU into accelerate config (#3495)
* Add the HPU into accelerate config

Signed-off-by: yuanwu <yuan.wu@intel.com>

* Fix the error of make style

Signed-off-by: yuanwu <yuan.wu@intel.com>

---------

Signed-off-by: yuanwu <yuan.wu@intel.com>
2025-04-10 17:41:47 +02:00
3b89987710 [bug] unsafe_serialization option doesn't work (#3496) 2025-04-09 15:16:28 +02:00
a43e4170fc fix warning error (#3491)
* fix warning error

* use logger.warning
2025-04-09 14:26:40 +02:00
334d6ab957 fix fp8 config (#3492) 2025-04-09 14:19:07 +02:00
650b6659c0 add support for custom function for reducing the batch size (#3071)
* add support for custom function for reducing the batch size

* fix scoping

* Apply style fixes

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-04-08 14:08:07 +02:00
fb90996365 Don't create new param for TorchAO sequential offloading due to weak BC guarantees (#3444)
* update

* make style

* use assignment to set device
2025-04-08 12:29:12 +02:00
32b2e1606f Fix check_tied_parameters_in_config for multimodal models (#3479)
* fix

* fix
2025-04-08 12:27:49 +02:00
8c0a29626d Update low_precision_training.md (#3488) 2025-04-08 11:39:58 +02:00
63168b151f Adds style bot (#3478)
* Style bot

* Use reusable style bot

---------

Co-authored-by: [[ -z $EMAIL ]] && read -e -p "Enter your email (for git configuration): " EMAIL <muellerzr@gmail.com>
2025-04-03 17:09:49 +02:00
3cf5e4c802 use device agnostic torch.OutOfMemoryError from pytorch 2.5.0 (#3475)
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
2025-04-02 15:08:22 +02:00
9642a1ac81 bump to v1.7.0dev 2025-04-01 13:55:11 +02:00
3169339f5b Bump ruff to 0.11.2 (#3471)
* ruff format

* Bump ruff to 0.11.2
2025-04-01 11:57:06 +02:00
67a768be07 remove use_xpu to fix ut issues, we don't need this since XPU is OOB … (#3460)
* remove use_xpu to fix UT issues; we don't need this since XPU is OOB supported now

Signed-off-by: Yao, Matrix <matrix.yao@intel.com>

* fix style

Signed-off-by: Yao, Matrix <matrix.yao@intel.com>

* add deprecate warnings

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

* fix

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

---------

Signed-off-by: Yao, Matrix <matrix.yao@intel.com>
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
2025-04-01 11:55:37 +02:00
531643436e [MLU] fix deepspeed dependency (#3472) 2025-04-01 11:55:23 +02:00
83e09a9331 Update ruff target-version to py39 and apply more fixes (#3470)
Signed-off-by: cyy <cyyever@outlook.com>
2025-03-31 15:00:25 -04:00
9c4eeb9ba8 xpu: enable xccl distributed backend (#3401)
The xccl distributed backend is available for the XPU device backend starting
from torch 2.7 (requires torch built with `USE_XCCL=1 USE_C10D_XCCL=1`).

This change is verified with the following Transformers tests:
* `tests/extended/test_trainer_ext.py`
* `tests/trainer/test_trainer_distributed.py`

This commit does not impact IPEX which currently remains using custom
distributed backend.

Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
2025-03-31 19:11:47 +02:00
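
A minimal sketch of selecting the backend at init time (assumes a torch build with xccl support as described above; in practice accelerate picks the backend for you):

```python
import torch
import torch.distributed as dist

# Launched via torchrun/accelerate launch so RANK, WORLD_SIZE and MASTER_ADDR are set.
backend = "xccl" if torch.xpu.is_available() else "gloo"
dist.init_process_group(backend=backend)
```
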
a0edc8dcf2 Apply ruff py39 fixes (#3461)
* Apply ruff py39 fixes

* Ruff format
2025-03-31 19:10:08 +02:00
11a3c0001d Update CometMLTracker to allow re-using experiment (#3328)
* Update CometMLTracker to allow re-using experiment

Update CometMLTracker to use the new `comet_ml.start` function to create
Experiments. This way end-users can create online or offline experiments and append
data to an existing experiment, and it also automatically re-uses a running
experiment if one is present rather than creating a new one.

* Add back calling Experiment.end in finish

As `accelerator.end_training` is supposed to be called at the very end of
training by the user, users will still be able to log data after the main
training loop and this is needed for Offline Experiment to create the offline
archive.

* Update CometTracker behavior based on the version of the package

Use new method only for recent version of comet_ml
2025-03-31 19:09:34 +02:00
8b31a2fe2c Fix get_balanced_memory for MPS (#3464)
This also fixes a failure in test_get_balanced_memory:

```
assert {0: 215, 1: 300} == {0: 300, 1: 300}
[...]
tests/test_modeling_utils.py:871: AssertionError
```

Signed-off-by: Ihar Hrachyshka <ihar.hrachyshka@gmail.com>
2025-03-31 17:33:33 +02:00
3f636d6260 Fix seeding of new generator for multi GPU (#3459)
* fix new generator seeding

* remaining arbitrary fixed seed

* test
2025-03-28 12:48:05 -04:00
803b6648b4 Update @ (#3466)
* Update @

* DS

* Add marc everywhere, he's always watching
2025-03-28 12:43:06 -04:00
17f9c19f48 Fix: clip grad norm in fsdp2 (#3465) 2025-03-28 15:55:49 +01:00
d7c741a6bc Initial FSDP2 support (#3394)
* Feat: initial conversion tool draft

* Feat: add value mapping to conversion tool

* Refactor: move from os to pathlib

* Feat: add first tests

* Feat: more tests

* Feat: minor fixes + dataclass conversions

* Feat: more remapping

* Fix: namespace has no attribute version + style

* Fix: offload params behavior

* Feat: add option to only rename keys in the config file to

* Fix: wrong attr name

* Fix: partially resolve comments

* Feat: work on config command + minor fixes to reflect changes

* Refactor: style + quality

* Feat: fsdp2 initial work

* Feat: some cleanups and first running fsdp2

* Fix: version checks + mixed precision policy

* Refactor: style + quality

* Remove obsolete todos

* Feat: grad norm clipping

* Fix: tests + rename attrs

* Refactor: style + quality

* Fix: None object is not iterable

* Fix: default cpu_offload for fsdp2

* Fix: cpu offload now behaves correctly

* Feat: apply_activation_checkpointing

* Fix: append to models

* Feat: start on concept guide

* wip: concept guide

* Fix: toctree

* cleanup of the concept guide

* Fix: minor fixes + mp

* Fix: quality + | to union

* Feat: backwards compatibility + args cleanup

* Fix: style + quality

* Feat: enable dropping refs when getting named params

* Fix: memory footprint with fsdp2

* Feat: cpu ram efficient loading

* Fix: mp

* Fix: not warn about sync_modules if fsdp version is 1

* Refactor: minor changes

* Small fixes + refactors

* Feat: docs + cleanup

* Feat: saving works (not sure about optim)

* More loading/saving work

* Feat: disable local_state_dict for fsdp2

* Fix: fsdp2 convergence

* Feat: working comparison script

* Feat: memory tracking fsdp2

* Feat: memory visualizer

* Feat: more work on benchmark

* Fix: raise error if model+optimizer arent prepared together

* Minor fixes

* Style

* More warnings

* Fix: reshard_after_forward vs sharding_strategy conflict

* Refactor: clean up accelerator

* Feat: more testing in fsdp2 benchmark

* Fix: memory visualizer

* Untested: support load/save_state

* Feat: concept guide improvements

* Refactor: concept guide

* Feat: benchmark works

* Feat: more work on fsdp2 benchmark

* Fix: note syntax

* Fix: small fixes + make original tests work

* Fix: grad scaling

* Feat: reshard after forward tests

* Feat: backward prefetch tests

* Feat: tests for fsdp2

* Refactor: minor fixes

* Feat: fsdp_utils docstrings

* Feat: autodoc fsdp.md

* Docs: get_module_children_bottom_up

* Fix: remove unused images

* Refactor: benchmark cleanup

* Fix: docs

* Feat: final doc changes

* Fix: torch.distributed has no attribute tensor

* Fix: style

* Feat: tests include version in failures

* Fix: benchmark force model to load in fp32

* Fix: rename runs

* Feat: last minor fixes

* Feat: new benchmark images
2025-03-27 15:01:18 -04:00
8ab01d32cf Fix device KeyError in tied_params_map (#3403)
Fixes: #3402

Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
2025-03-25 16:25:02 +01:00
140acb356e Fix AMD GPU support with should_reduce_batch_size() (#3405)
* Fix AMD GPU support with should_reduce_batch_size()

Even though both NVIDIA and AMD GPUs operate under torch's cuda namespace, the out-of-memory error for AMD GPUs is different. When trying to determine whether a model can fit on an AMD GPU, this function will evaluate to false for a `torch.OutOfMemoryError`. This PR adds another check for the error string.

Example error message:
```
'HIP out of memory. Tried to allocate 64.00 GiB. GPU 0 has a total capacity of 63.98 GiB of which 48.63 GiB is free. Of the allocated memory 15.02 GiB is allocated by PyTorch, and 129.49 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_HIP_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)'
```

* Missing comma

* Update memory.py

Consolidate OOM error check string
2025-03-25 10:32:29 -04:00
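
A hedged sketch of the broadened check (`is_out_of_memory` is an illustrative name, not the function in accelerate):

```python
import torch

def is_out_of_memory(exc: BaseException) -> bool:
    # Treat CUDA, HIP and generic "out of memory" errors alike so batch-size
    # back-off also triggers on AMD GPUs.
    markers = ("CUDA out of memory", "HIP out of memory", "out of memory")
    return isinstance(exc, torch.cuda.OutOfMemoryError) or any(m in str(exc) for m in markers)
```
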
8576112bc8 enable 2 UT cases on XPU (#3445)
* enable test_dispatch_model_tied_weights_memory_with_nested_offload_cpu test case on XPU

Signed-off-by: root <root@a4bf01945cfe.jf.intel.com>

* fix style

Signed-off-by: Yao, Matrix <matrix.yao@intel.com>

* enable test_dispatch_model_tied_weights_memory on XPU

Signed-off-by: N <matrix.yao@intel.com>

* fix bug

Signed-off-by: root <root@a4bf01945cfe.jf.intel.com>

* Update src/accelerate/test_utils/testing.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update src/accelerate/test_utils/testing.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update tests/test_big_modeling.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* fix style

Signed-off-by: Yao, Matrix <matrix.yao@intel.com>

---------

Signed-off-by: root <root@a4bf01945cfe.jf.intel.com>
Signed-off-by: Yao, Matrix <matrix.yao@intel.com>
Signed-off-by: N <matrix.yao@intel.com>
Co-authored-by: root <root@a4bf01945cfe.jf.intel.com>
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2025-03-25 14:19:26 +01:00
806f661cd3 remove device index workaround on xpu since xpu supports integer device index as cuda now (#3448)
* remove xpu device index WAs since pytorch xpu supports integer index now

Signed-off-by: root <root@a4bf01945cfe.jf.intel.com>

* remove print

Signed-off-by: Yao, Matrix <matrix.yao@intel.com>

* fix style

Signed-off-by: Yao, Matrix <matrix.yao@intel.com>

---------

Signed-off-by: root <root@a4bf01945cfe.jf.intel.com>
Signed-off-by: Yao, Matrix <matrix.yao@intel.com>
Co-authored-by: root <root@a4bf01945cfe.jf.intel.com>
2025-03-24 14:49:05 +01:00
9015a26f09 Fixup ao module filter func (#3450) 2025-03-21 10:21:54 -04:00
6de900e10a feat: Add no_ssh and slurm multinode launcher options for deepspeed (#3329)
* feat: Add no_ssh multinode launcher option for deepspeed

* fix: Add CLI hints and brief documentation, add slurm launcher, and ensure that deepspeed 0.14.5 version is used for nossh
2025-03-20 10:33:00 -04:00
ffb27138f7 Changed --config arg to --config_file in the slurm multinode fsdp example. (#3447) 2025-03-20 10:14:18 -04:00
4b6be89910 Update build_and_run_tests.yml 2025-03-15 11:33:32 +01:00
a702364256 Fix attribute issue with deepspeed tp (#3443) 2025-03-13 18:27:25 +01:00
a31bd767c1 Fix prod issues (#3441)
* Fix default device

* Use CPU
2025-03-13 11:21:11 -04:00
71036329f7 tensor parallel dataloder for deepspeed accelerator (#3390)
* ds tp change

* update

* format

* add version check

* Update src/accelerate/accelerator.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update src/accelerate/accelerator.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* put device_mesh logic to func + format

* fix comments

* format

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2025-03-13 12:40:34 +01:00
f648feba97 Add log_artifact, log_artifacts and log_figure capabilities to the MLflowTracker. (#3419)
* Added artifacts and figure tracking at MLFlow tracker

* Added `log_artifact` to the MLFlowTracker

* Remove changes

* Added artifact, artifacts and figure tracking at MLFlow tracker

* Improved the docstring

* added require_mlflow function at test_utils

* add test for MLflowTracker

* Bit of linting

* Refactor to a more robust test

* Revised the test asserts to something more robust.

* Removed incorrect import and some linting.

* removed commented code

* initiate tracker using Accelerator

* Added mlflow and matplotlib to setup.py. Guarded and decorated the functions that required them.

* Guarded mlflow import

* added matplotlib required warning.

* ran style and quality
2025-03-12 18:11:29 +01:00
14fc61eeac Bump to 1.6.0.dev0 2025-03-12 10:13:18 -04:00
d9e6af8773 HPU support (#3378)
* init

* style

* is_hpu_available

* fix

* import habana_frameworks.torch.distributed.hccl

* style

* test

* initialize dist proc group

* revert

* set backend to hccl only if hccl initialization sets a local rank

* force backend hccl and multi_hpu type when sure of distributed launch

* style

* pass accelerator tests

* pass big modeling tests with bigger atol/rtol for accelerators

* fix hpu device count and skip tests requiring hpu:x

* hpu autocast

* hpu rng_state

* hpu launch

* hpu special device placement

* hpu launch

* rng state

* distributed data loop tests

* enforce non contiguity after device memory allocation

* pass fsdp tests

* enforce pt_hpu_lazy_mode=0 when fsdp testing

* pass cli tests

* pass and document grad sync tests

* pass kwargs handler and autocast tests

* memory utils

* found source of int64 errors

* skip some modeling utils tests

* enable int64

* skip optimizer tests

* pass checkpointing tests

* pass accelerator tests with safetensors main

* more hpu stuff

* style

* remove PT_HPU_LAZY_MODE and PT_ENABLE_INT64_SUPPORT as they should be in the testing environment

* start testing on gaudi2

* support fp16 on gaudi2

* add testing order

* custom hpu fsdp env dict

* fix torch trace malloc

* test ddp half precision comm hooks

* fix

* fix

* remove lower bound for hpu

* use 0.72 as lower bound

* lower lower bound

* order deepspeed tests

* fix

* deepspeed_use_hpu

* assert non lazy mode with offloaded optimizer

* make patching torch with habana frameworks the default

* less of require_non_hpu

* skip test_multi_device_merge_fsdp_weights for now as it halts

* skip another flaky test

* format

* use habana_visible_modules

* patch torch hpu device count

* avoid setting HABANA_VISIBLE_MODULES

* don't play with habana visible devices/modules

* only with hpu

* fixes and skips

* skip

* fix device ids and add some todos

* skip offloading with generate()

* fix

* reduced atol/rtol for hpu

* fix

* tag deepspeed tests that should run first

* enable a test path that was skipped

* revert a test that was customized for gaudi1

* some patching to enable HABANA_VISIBLE_MODULES

* fix zero3 test

* misc

* test DTensor TP

* remove gaudi1

* test

* style

* comment

* pass pad_across_processes

* require_fp16

* pass memory utils test

* test_ddp_comm_hook

* skip half precision comm hooks on hpu

* fix

* is_fp16_available

* fp16

* tp as part of integration tests

* fix

* write_basic_config

* safetensors

* local sgd and masked_fill_fwd_i64

* fix num_processes in test_load_states_by_steps

* fp8 support

* test

* fix

* add a workflow

* Update src/accelerate/accelerator.py

* review comments

* ci

* style

* comments

* test

* habana_frameworks.torch

* patch device count

* fix

* fix

* require_fp8

* fix

* fix

* gaudi 1

* remove unnecessary

* fixed maskd fill error in transformers

* style

* balanced_memory pass on hpu

* remove for now

* run first

* Apply suggestions from code review

* style after merge

* Update src/accelerate/accelerator.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update src/accelerate/utils/transformer_engine.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* empty cache review comments

* test_script.py error messages

* AccelerateTestCase for accelerator state cleanup

* test

* add gaudi1 workflow

* fp8 availability

* fix

* reduce batch size

* concurrency

* check cuda as well

* nits and comments

* mark fsdp tests that require_fp16

* style

* mark deepspeed fp16 tests

* update image

* fix

* updated

* better msgs

* skip pippy

* test

* test on 2 device

* support up to 1% relative error in test_accelerate

* skip hpu fp16

* allow for 1 byte difference

* revert torch_device change

* style

* skip memory release since it's flaky

* add accelerator state cleanup to fixture

* fix

* atol

* fix

* more rtol

* equal grad test

* revert

* pass pippy on gaudi2 and skip on gaudi1

* enable sd 1.5 test with require fp16

* added warning on memory release

* don't log warning in memory release as it requires PartialState to be initialized

* Apply suggestions from code review

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2025-03-11 11:16:57 -04:00
b271eb1365 add distributed example for llava next video (#3417) 2025-03-11 11:07:46 -04:00
4677b8089f Fix quality (#3424)
* Run quality

* Update src/accelerate/test_utils/scripts/test_script.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2025-03-06 12:33:34 +01:00
e456796be8 fix typo : thier -> their (#3423) 2025-03-06 11:27:51 +01:00
ac3749dc11 Add Tecorigin SDAA accelerator support (#3330)
Co-authored-by: siqi <siqi@tecorigin.com>
2025-03-05 10:11:21 +01:00
6e8eea2e73 fix: Add device=torch.get_default_device() in torch.Generators (#3420) 2025-03-05 10:08:49 +01:00
c7b3625592 fix: ensure CLI args take precedence over config file. (#3409)
* fix: ensure CLI args take precedence over config file.

* add test case

* remove inappropriate comment

---------

Co-authored-by: 차영록 <jaycha@ncsoft.com>
2025-02-28 09:15:42 -05:00
90f81986b9 minor doc fixes (#3365) 2025-02-25 15:52:26 +01:00
fa26dc6156 add missing import (#3396) 2025-02-25 11:07:14 +01:00
6fcc8efd2e fix device bug (#3408) 2025-02-24 16:12:14 +01:00
8039158d71 Torchao float8 training (#3348)
* Bookmark

* bookmark

* Add torchao base example

* Currently broken

* Clean

* DDP variant working

* FSDP as well

* Works for all but zero3

* Bookmark: currently zero3 is underperforming

* Bookmark

* Another diff

* Fin

* Fin

* Add req huggingface suite

* update tests for fp8/torchao/ddp

* Log FP8 backend used and adjust typing

* add documentation for convert_to_float8_training

* Rename to convert_model_to_fp8_ao

* Call super init

* Add types

* Clean

* Use filter_first_and_last_linear_layers

* Update usage guide docs

* Actually loop through the zero stages

* Clean
2025-02-17 11:51:47 -05:00
e34db4d0d2 enable xpu (#3397) 2025-02-17 17:41:50 +01:00
526925b48c [memory leak] Replace GradientState -> DataLoader reference with weakrefs (#3391)
* Replace GradientState -> DataLoader reference with weakrefs

So they can be cleaned up. Otherwise, they will always stay in memory, leading to notable memory leaks. Note: even accelerator.free_memory() did not work!

* Add comments; initialize _dataloader_references_ref directly instead of indirectly
2025-02-11 12:47:40 -05:00
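To illustrate the weakref approach described in the commit above, here is a minimal sketch. It is not Accelerate's actual class; only the `_dataloader_references_ref` attribute name comes from the commit message, the rest is illustrative.

```python
import weakref


class GradientState:
    # Minimal sketch (not Accelerate's real class): keep DataLoaders behind weak
    # references so this object never keeps them alive on its own and freeing
    # memory can actually release them.
    def __init__(self):
        self._dataloader_references_ref = []  # list of weakref.ref objects

    def register_dataloader(self, dataloader):
        self._dataloader_references_ref.append(weakref.ref(dataloader))

    @property
    def active_dataloaders(self):
        # Dereference on access and drop entries whose DataLoader was collected.
        return [ref() for ref in self._dataloader_references_ref if ref() is not None]
```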
24f8d0276c [examples] upgrade code for seed setting (#3387)
* replace set_seed

* update import
2025-02-11 16:31:41 +01:00
5cc99e6e02 fix: typos in documentation files (#3388)
* Update test_scheduler.py

* Update test_big_modeling.py

* Update test_state_checkpointing.py

* Update test_script.py

* Update cli.md

* Update quicktour.md
2025-02-10 13:11:50 -05:00
ce63623421 works for fp8 with deepspeed (#3361)
* works for fp8 with deepspeed

* Add tests

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2025-02-10 09:31:15 -05:00
f19b95700f fix torch_dtype in estimate memory (#3383)
* fix torch_dtype

* style

* add comments

* style
2025-02-07 15:58:13 +01:00
81d8a0356c [tests] Fix bnb cpu error (#3351)
* enable bnb tests

* bug fix

* enable more bnb tests on xpu

* fix on xpu

* fix quality issue

* further fix quality

* fix style

* only use xpu check
2025-02-06 11:26:02 +01:00
f076495580 deepspeed github repo move (#3376) 2025-02-03 13:52:08 -05:00
03153658f4 feat: support tensor parallel & Data loader (#3173)
* feat: add dataloader for TP and n-dim parallel in non-dispatch mode

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

* feat: add support for CLI usage

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

* fix: test cases

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

* fix: when tp not in use fix num_procs

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>

---------

Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>
2025-01-29 09:44:18 -05:00
675e35bcd4 [tests] enable more bnb tests on XPU (#3350)
* enable bnb tests

* bug fix

* enable more bnb tests on xpu

* fix quality issue

* further fix quality

* fix style
2025-01-23 15:23:38 +01:00
8f2d31c5b9 Support more functionalities for MUSA backend (#3359)
* Support more functionalities for MUSA backend

* fix lint
2025-01-23 15:05:33 +01:00
4c2c89ea90 [tests] remove require_non_xpu test markers (#3301)
* remove non-xpu marker

* fix import
2025-01-22 16:10:17 +01:00
28c171b05a [tests] make cuda-only test work on other hardware accelerators (#3302)
* enable on xpu

* remove require_cuda
2025-01-22 16:09:50 +01:00
65356780d4 [Dev] Update release directions (#3352)
* Update release directions

* Update directions and makefile to account for testpypi fun
2025-01-21 08:59:43 -05:00
78b8126bff v1.4.0.dev0 2025-01-17 10:36:00 -05:00
7e324103c4 [tests] enable BNB test cases in tests/test_quantization.py on XPU (#3349)
* enable bnb tests

* bug fix

* fix quality issue

* further fix quality

* fix style
2025-01-17 10:22:27 -05:00
02d25612a5 fix triton version check (#3345)
* fix triton version check

* add xpu check
2025-01-17 10:21:52 -05:00
fbfa53bc5e dataloader: check that in_order is in kwargs before trying to drop it (#3346)
This fixes tests/test_data_loader.py::StatefulDataLoaderTester tests which
started to fail after 828aae4:
```
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_dataloader_dispatcher_state_dict_num_workers_0 - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_dataloader_dispatcher_state_dict_num_workers_2 - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_dataloader_inheritance - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_dataloader_state_dict_num_workers_0 - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_dataloader_state_dict_num_workers_2 - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_decoupled_stateful_dataloader_adapter_equivalent_to_torchdata_stateful_dataloader_num_workers_0 - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_decoupled_stateful_dataloader_adapter_equivalent_to_torchdata_stateful_dataloader_num_workers_2 - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_end_of_dataloader - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_end_of_dataloader_dispatcher - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_skip_data_loader - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_stateful_dataloader_adapter_equivalent_to_torchdata_stateful_dataloader_num_workers_0 - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_stateful_dataloader_adapter_equivalent_to_torchdata_stateful_dataloader_num_workers_2 - KeyError: 'in_order'
```

The reason for the failure is that "in_order" is only added when the data loader
is created via `prepare_data_loader()` or `skip_first_batches()`. The tests in
`tests/test_data_loader.py::StatefulDataLoaderTester`, however, construct the
data loader classes directly, so "in_order" was never added. Hence the issue.

Fixes: 828aae4 ("add torchdata version check to avoid in_order error (#3344)")

Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
2025-01-15 17:55:31 -05:00
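A minimal sketch of the guard described above; the helper name `pop_in_order` is hypothetical.

```python
def pop_in_order(kwargs: dict, default: bool = True) -> bool:
    # "in_order" is only injected by prepare_data_loader()/skip_first_batches(),
    # so only pop it when it is actually present to avoid a KeyError for
    # DataLoaders that were constructed directly.
    if "in_order" in kwargs:
        return kwargs.pop("in_order")
    return default
```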
d09040dfc9 [docs] fix typo, change "backoff_filter" to "backoff_factor" (#3296) 2025-01-15 11:55:38 -05:00
828aae4e32 add torchdata version check to avoid "in_order" error (#3344) 2025-01-15 09:04:03 -05:00
f0b030554c Fix for offloading when using TorchAO >= 0.7.0 (#3332)
* fix

* update

* fix

* apply suggestions from review

Co-Authored-By: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

Co-Authored-By: Xuehai Pan <XuehaiPan@pku.edu.cn>

* make style

---------

Co-authored-by: Xuehai Pan <XuehaiPan@pku.edu.cn>
2025-01-13 16:54:28 +01:00
80973430ee latest bnb no longer has optim_args attribute on optimizer (#3311)
* latest bnb no longer has optim_args attribute on optimizer

* update the other bnb based optimizer checks
2025-01-13 16:53:02 +01:00
c67d47ae79 [tests] make cuda-only test case device-agnostic (#3340)
* enable on xpu

* bug fix
2025-01-13 09:59:35 -05:00
8c423cff79 Fix offload generate tests (#3334)
* Fix tests

* format
2025-01-13 15:45:46 +01:00
95f34d6243 feat(tpu): remove nprocs from xla.spawn (#3324)
This parameter will cause issues on recent version of torch_xla.
2025-01-13 04:37:00 -05:00
ba90f85627 Fixup docker build err (#3333) 2025-01-10 04:54:05 -05:00
b13aadcb67 Bye bye torch <2 (#3331)
* Bye bye torch <1

* Add 2.6.0 dl args

* Rm require fsdp

* Adjust imports + 2.0 specific modeling code

* Bring back is_bf16
2025-01-09 12:11:08 -05:00
58f14364d5 Ensure that tied parameter is children of module (#3327)
Ensure that tied parameters are assigned to their parent module in
get_module_size_with_ties

Fixes: https://github.com/huggingface/accelerate/issues/3308
2025-01-09 12:03:51 -05:00
54370d4504 Adding keep_torch_compile argument to unwrap_model and extract_model_from_parallel. (#3282) 2025-01-08 12:45:22 -05:00
d6d3e03cd4 Use torch.xpu.mem_get_info for XPU (#3275)
The torch.xpu.mem_get_info API is available starting from PyTorch 2.6 (and
in nightly 2.6.0.dev20241206+xpu or later). To work properly, this method
requires PyTorch built with a SYCL runtime that supports the API to query
device memory stats; if it is not available, an exception is raised.

Requires: https://github.com/pytorch/pytorch/pull/141230
Fixes: #2929
Fixes: https://github.com/huggingface/transformers/issues/31922

Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
2024-12-24 16:48:00 +01:00
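A hedged sketch of what such a guarded query might look like; the function name and fallback message are illustrative, not the library's actual helper.

```python
import torch
from packaging.version import parse


def xpu_free_memory(device_index: int = 0) -> int:
    # torch.xpu.mem_get_info ships with PyTorch >= 2.6 and needs a SYCL runtime
    # that can report device memory stats; guard the call and fail loudly otherwise.
    if (
        parse(torch.__version__).release >= (2, 6)
        and hasattr(torch, "xpu")
        and hasattr(torch.xpu, "mem_get_info")
    ):
        free, _total = torch.xpu.mem_get_info(device_index)
        return free
    raise RuntimeError("Querying XPU memory requires PyTorch >= 2.6 with SYCL memory stats support")
```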
acfbf72a7f Give example on how to handle gradient accumulation with cross-entropy (#3193)
* Add cross-entropy example in the gradient accumulation docs

* add example of logs

* correct skeleton code

* replace gather_for_metrics with gather

* batch_size -> per_device_batch_size

* remove main_process_only=True

* add autoregressive example in examples/

* Update docs/source/usage_guides/gradient_accumulation.md

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* ruff format

* add grad accum test

* update docs

* Update examples/by_feature/gradient_accumulation_for_autoregressive_models.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* update tests

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-12-24 12:26:45 +01:00
200c9eb783 fix: add max_memory to _init_infer_auto_device_map's return statement (#3279) 2024-12-13 10:47:33 -05:00
7b2edc0bf2 Fix test_nested_hook (#3289) 2024-12-11 10:00:45 -05:00
b92fb4774f fix load_state_dict for npu (#3211)
* fix load_state_dict for npu

* update
2024-12-10 21:38:00 -05:00
3e62fbb09c [docs] no hard-coding cuda (#3270)
* no hard-coding cuda

* Update docs/source/usage_guides/big_modeling.md

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* update device_type

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-12-10 21:32:10 -05:00
cb8b7c637a Fixed typos for Tutorials and Guides docs (#3274) 2024-12-06 10:39:45 -05:00
aa16d69561 [docs] use real path for checkpoint (#3220)
* fix bug

* update
2024-12-06 10:39:29 -05:00
f9a2e7902f fix typo (#3221) 2024-12-06 10:39:15 -05:00
51fd482d6e [docs] update set-seed (#3228)
* update set-seed

* update comment
2024-12-06 10:38:59 -05:00
60461ff7c4 Fix: Resolve #3060, preload_module_classes is lost for nested modules (#3248)
* resolve 3060

* format

* add tests

* fix

* fix

* format
2024-12-03 13:44:59 +01:00
f8c77f0522 Revert default behavior of get_state_dict_from_offload (#3253)
* change default to None

Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>

* introduce move_to_device argument

Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>

* remove move_to_device

Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>

---------

Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
2024-12-02 13:47:02 -05:00
b626ef5f00 Select the DeepSpeedCPUOptimizer based on the original optimizer class. (#3255)
* Select the DeepSpeedCPUOptimizer based on the original optimizer class.

* abstract out optimizer selection to a deepspeed util

* add deepspeed cpu Adam & AdamW
2024-12-02 13:45:30 -05:00
dd68af886a Update troubleshooting.md (#3259)
I think the terminology of set_breakpoint and check_breakpoint has become set_trigger and check_trigger
2024-12-02 13:41:10 -05:00
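For reference, the renamed API pairs a per-process flag with a collective check. A small, self-contained usage sketch (the toy model and data are illustrative):

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()
model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
dataset = TensorDataset(torch.randn(32, 4), torch.randn(32, 1))
model, optimizer, dataloader = accelerator.prepare(model, optimizer, DataLoader(dataset, batch_size=8))

for x, y in dataloader:
    loss = nn.functional.mse_loss(model(x), y)
    if torch.isnan(loss):
        accelerator.set_trigger()        # flag a problem from this process only
    accelerator.backward(loss)
    optimizer.step()
    optimizer.zero_grad()
    if accelerator.check_trigger():      # all processes agree to stop together
        break
```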
11818e657b Fix: Resolve #3257 (#3261) 2024-12-02 13:41:00 -05:00
1f508a6df6 Update deferring_execution.md (#3262) 2024-12-02 13:40:33 -05:00
4a100eef43 support for wrapped schedulefree optimizer when using deepspeed (#3266)
* support for wrapped schedulefree optimizer when using deepspeed

* add comment and lint
2024-12-02 13:40:20 -05:00
c6f34a060f add xpu check (#3268) 2024-12-02 13:39:20 -05:00
29be478862 [WIP] FEAT Decorator to purge accelerate env vars (#3252)
* [WIP] FEAT Decorator to purge accelerate env vars

In some circumstances, calling certain classes or functions can result
in accelerate env vars being set and not being cleaned up afterwards. As
an example, when calling:

TrainingArguments(fp16=True, ...)

The following env var will be set:

ACCELERATE_MIXED_PRECISION=fp16

This can affect subsequent code, since the env var takes precedence over
TrainingArguments(fp16=False). This is especially relevant for unit
testing, where we want to avoid the individual tests to have side
effects on one another. Decorate the unit test function or whole class
with this decorator to ensure that after each test, the env vars are
cleaned up. This works for both unittest.TestCase and normal
classes (pytest); it also works when decorating the parent class.

In its current state, this PR adds the new decorator and tests it, but
the decorator is not yet applied to potentially problematic functions or
classes.

* Linter

* Refactor code to be more readable

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-11-25 12:04:56 -05:00
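A minimal sketch of the idea behind the decorator, assuming only that the variables to protect share the `ACCELERATE_` prefix; the decorator name is illustrative rather than Accelerate's actual helper.

```python
import os
from functools import wraps


def purge_accelerate_env_vars(test_func):
    # Snapshot ACCELERATE_* environment variables before the test and restore the
    # environment afterwards, so e.g. ACCELERATE_MIXED_PRECISION=fp16 set by one
    # test cannot leak into the next.
    @wraps(test_func)
    def wrapper(*args, **kwargs):
        snapshot = {k: v for k, v in os.environ.items() if k.startswith("ACCELERATE_")}
        try:
            return test_func(*args, **kwargs)
        finally:
            for key in [k for k in os.environ if k.startswith("ACCELERATE_")]:
                del os.environ[key]
            os.environ.update(snapshot)

    return wrapper
```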
e11d3ceff3 Allow for full dynamo config passed to Accelerator (#3251)
* Allow for full dynamo config

* Clean
2024-11-22 15:18:15 -05:00
08101b9dde Use numpy._core instead of numpy.core (#3247)
* Update other.py

* Update other.py

* add missing import

* use Version instead of version.parse

* Update np_core import in save function
2024-11-21 17:06:21 +01:00
5f96369161 v1.2.0.dev 2024-11-20 19:24:51 -05:00
069743775e [docs] add instruction to install bnb on non-cuda devices (#3227)
* add bnb installation link

* add period

* add xpu comment and fix some bugs

* style fix
2024-11-20 16:58:46 -05:00
77f2b6235e [data_loader] Optionally also propagate set_epoch to batch sampler (#3246)
* Optionally also propagate set_epoch to batch sampler

* Add simple batch sampler set_epoch test
2024-11-20 16:58:04 -05:00
d7b1b368e9 Add warnings and fallback for unassigned devices in infer_auto_device_map (#3066)
* feat: feat: Add warning for unassigned main devices

* refactor: Improve warning for unassigned main devices

* feat: impl fallback_allocate; fix output format

* fix: include last dot index in the iteration

* feat: incorporate fallback allocation into infer_auto_device_map

* Revert "feat: incorporate fallback allocation into infer_auto_device_map"

This reverts commit d607bfb530517478b90aa89c2a87a03c318a2e58.

* refactor: add helper functions and eliminate redundant variables

The fallback allocation will be reintroduced once the branching logic is fully refactored. This commit prepares the function infer_auto_device_map for further refactoring.

* refactor: simplify allocation logic by removing duplicates and reducing nesting

* feat: incorporate fallback allocation into infer_auto_device_map

Implemented fallback allocation to allow modules to be allocated to devices using BFS when regular allocation fails. This enhancement improves the allocation process by ensuring that at least one module is assigned to the device, even under tight memory constraints.

* fix: fix module splitting logic

* styles: fix styling errors

* test: add test coverage for no-warning cases

test_infer_auto_device_map and test_infer_auto_device_map_with_fallback_allocation now each have a no-warning test case.

Simplified and rewrote code sections that were made unreadable by the linter.

* refactor: simplify control flow in infer_auto_device_map

Added complete return type hinting for _init_infer_auto_device_map

* refactor: replace warnings.warn with logger.info for allocation failures

* fix: use assertLogs to capture no allocation warning messages correctly
2024-11-20 10:10:01 -05:00
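The tight-memory scenario that the fallback targets can be reproduced with the public `infer_auto_device_map` API; a sketch with an arbitrary toy model and a deliberately small per-device budget (values are illustrative):

```python
from torch import nn
from accelerate import infer_auto_device_map

# Toy model; the tiny budget for device 0 is deliberately too small for any whole
# block, which is the situation the fallback/warning logic is about.
model = nn.Sequential(*[nn.Linear(1024, 1024) for _ in range(8)])
device_map = infer_auto_device_map(model, max_memory={0: "2MiB", "cpu": "10GiB"})
print(device_map)  # most (or all) modules end up on "cpu"
```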
8ad2b3b8e7 [docs] update code in tracking documentation (#3235)
* update example code

* revert
2024-11-20 10:04:07 -05:00
e724c9a97f take care of case when "_tied_weights_keys" is not an attribute (#3226)
* take care of case when "_tied_weights_keys" is not an attribute

Signed-off-by: Yu Chin Fabian Lim <flim@sg.ibm.com>

* fix style

Signed-off-by: Yu Chin Fabian Lim <flim@sg.ibm.com>

---------

Signed-off-by: Yu Chin Fabian Lim <flim@sg.ibm.com>
2024-11-20 09:57:42 -05:00
cf169a1ae6 enable find_executable_batch_size on XPU (#3236)
* enable on XPU

* Update src/accelerate/utils/memory.py

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2024-11-19 12:29:05 -05:00
8ade23cc6a remove hook for bnb 4-bit (#3223)
* relax dispatch for bnb

* style
2024-11-15 17:29:41 +01:00
c0552c9012 Fix align_module_device, ensure only cpu tensors for get_state_dict_offloaded_model (#3217)
* only onload direct parameter descendants, move buffers to cpu, add tests

* remove no longer applicable comment
2024-11-05 13:39:53 +01:00
bf4572b6ce [Utils] align_module_device (#3204)
* implement align_module

* add docs

* move to modeling utils, integrate into existing source code

* update source, expose through utils

* Suggested docstring

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Rewrite for readability, add try finally

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Use try-finally when aligning with hook

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* apply style

* improve get_state_dict_from_offload readability

* Update docstring

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* rename to align_module_device, update docstring

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2024-11-01 09:05:50 -04:00
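The utility is used as a context manager; a minimal usage sketch (a plain `nn.Linear` stands in for a module whose weights may actually be offloaded):

```python
import torch
from torch import nn
from accelerate.utils import align_module_device

model = nn.Linear(4, 4)  # in practice: a module dispatched with offloaded weights
with align_module_device(model, execution_device=torch.device("cpu")):
    # Inside the block the module's parameters live on the execution device;
    # on exit they are restored to their previous placement (or offload).
    out = model(torch.randn(2, 4))
```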
a4a44aca1f Update big_modeling.py (#3207) 2024-11-01 08:41:01 -04:00
b0e5fd353c add xpu (#3163) 2024-10-31 10:50:51 -04:00
8159c98d43 Models With Tied Weights Need Re-Tieing After FSDP Param Init (#3154)
* add fsdp_tool to retie after param init

* make it handle generic param_init_fn

* fix quality

Signed-off-by: Yu Chin Fabian Lim <flim@sg.ibm.com>

---------

Signed-off-by: Yu Chin Fabian Lim <flim@sg.ibm.com>
2024-10-31 10:50:28 -04:00
497eb3cf86 fix bug (#3166) 2024-10-31 09:08:20 -04:00
87732a4c32 take torch.nn.Module model into account when moving to device (#3167)
* bug fix

* update code
2024-10-31 09:08:00 -04:00
ffbca15979 eliminate dead code (#3198)
* eliminate dead code

* make style
2024-10-31 09:01:07 -04:00
ba7ab93f5e Update transformers.deepspeed references from transformers 4.46.0 release (#3196)
* Update dataclasses.py

* Update test_deepspeed.py
2024-10-24 19:42:45 -04:00
85f35647db 🚨 🚨 🚨 Goodbye Python 3.8! 🚨 🚨 🚨 (#3194) 2024-10-24 10:16:47 -04:00
2f39575bbd update Megatron-LM plugin code to version 0.8.0 or higher. (#3174)
* I have adapted the Megatron-LM plugin code to version 0.8.0 or higher.

* update megatron import in set_tensorboard_logging_options
2024-10-24 10:03:53 -04:00
1ace241db4 MLU devices: Checks if mlu is available via a cndev-based check which won't trigger the drivers and leave mlu uninitialized (#3187)
* Add Cambricon MLU accelerator support

* up mlu support for test

* fix mlu device MULTI_MLU

* Update src/accelerate/utils/imports.py

it's beautiful !

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* up mlu for quality check

* fix mlu device longTensor error

* fix mlu device tensor dtype check

* fix mlu device send_to_device with torch dynamo error

* Refactor AcceleratorState

* Should be near complete now

* Last missing piece

* Make my way to the acceleratorstate

* Include update to global var

* Don't use global

* gpu -> cuda

* Don't use update for dict, easier to read

* Fix tests

* stash

* Getting closer...

* Needed to spawn at the very end after env was setup

* Explain set_device before deepspeed

* Make docstring more accurate

* Early return instead

* Delineate blocks

* Make prepare_backend return state + backend for clarity/less magic

* fix mlu longtensor.to() bugs.

* fix MLU devices rng state save and load.

* Cambricon MLU features, Checks if `mlu` is available via an `cndev-based` check which won't trigger the drivers and leave mlu uninitialized.

* MLU devices: Checks if mlu is available via a cndev-based check which won't trigger the drivers and leave mlu uninitialized

* fix code style and quality

* fix is_cuda_available error

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-10-24 09:30:59 -04:00
78e1bdd088 Fix typo (#3191) 2024-10-23 14:11:15 -04:00
4dda5797bd [docs] use nn.module instead of tensor as model (#3157)
* use nn.module instead of tensor

Signed-off-by: Lin, Fanli <fanli.lin@intel.com>

* fix neptune

---------

Signed-off-by: Lin, Fanli <fanli.lin@intel.com>
2024-10-23 12:23:16 -04:00
1f4fbb77a2 docs: fix a wrong word in comment in src/accelerate/accelerate.py:1255 (#3183) 2024-10-23 12:15:00 -04:00
c809f8e45c [docs] update neptune API (#3181) 2024-10-23 12:14:52 -04:00
39dc2b120f fix bnb (#3186)
* bnb_4bit_compute_dtype is str

* fix error message

* fix _replace_with_bnb_layers of bnb.py in case of meta device

* undo with meta device in bnb.py
2024-10-23 17:08:52 +02:00
735dfa3018 [Utils] has_offloaded_params (#3188)
* implement has_offloaded_params

* update docstring

* expose to utils

* add docs

* apply style, quality

* add tests
2024-10-23 16:44:02 +02:00
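A short usage sketch of the new helper; a plain module returns `False`, while a module dispatched with offloaded weights returns `True`:

```python
from torch import nn
from accelerate.utils import has_offloaded_params

module = nn.Linear(8, 8)
print(has_offloaded_params(module))  # False: no offloading hook is attached
```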
a84327e596 enable cpu bnb distributed lora finetune (#3159)
* enable cpu bnb distributed lora finetune

* check bnb multi-backend
2024-10-15 13:56:55 +02:00
292954b547 fix version check bug in get_xpu_available_memory (#3165) 2024-10-14 10:21:25 -04:00
0e61127b5a Remove broken dynamo test (#3155) 2024-10-11 06:55:18 -04:00
6f79b63b86 Trigger weights_only=True by default for all compatible objects (#3036)
* rebase

* Update torch v

* Rename

* Prop to docs

* Actually reverse states

* Rebase fully

* Restore old state

* Keep as load()

* No need for explicit anymore

* Check numpy version, dtypes was added in 1.25

* Clean up diff

* Fix hang
2024-10-10 14:08:24 -04:00
1d2ca747f1 Fixup Zero3 + save_model (#3146)
* Fixup + test

* Easier diff

* Move os.makedirs to under return statement
2024-10-10 12:54:14 -04:00
cba3f2d5e0 support torch dynamo for deepspeed>=0.14.4 (#3069)
* compile after deepspeed 0.14.4

* fix

* fmt

* add test
2024-10-10 18:53:07 +02:00
f1f2b4d1a8 Adding multi gpu speech generation (#3149)
* skeleton code

* fix some errors for downloading the model

* fix some tqdm error

* fix some error

* fix some gpu errors with torch

* fix some gpu errors with torch

* testing simple way

* testing simple way

* testing simple way

* testing simple way

* actual code

* actual code

* final testing with serialization

* add multi_gpu speech generation

* fix some comments

* fix some style and quality
2024-10-10 12:40:15 -04:00
fd9880da91 POC: Allow for a data_seed (#3150) 2024-10-09 12:12:04 -04:00
21c994c298 Merge branch 'main' of https://github.com/huggingface/accelerate 2024-10-09 10:50:19 -04:00
52581c3f01 Change version 2024-10-09 10:50:12 -04:00
f4ee5a2dc7 Florence2 distributed inference example (#3123)
* Florence2 distributed inference example

* optimized

* Documentation
2024-10-09 05:49:05 -04:00
55136b8dc4 DS fix, continued (#3145) 2024-10-08 14:31:14 -04:00
fb68cb9d0e Refactor scaler to util (#3142)
* Refactor scaler to util

* Document

* Use the distributed_type directly
2024-10-08 11:07:01 -04:00
506d732230 Fixup DS issue with weakref (#3143)
* Fixup DS issue with weakref

* Clean
2024-10-08 11:04:13 -04:00
ae9cb6e4db Handle negative values for dim input in pad_across_processes (#3114)
* Handle negative values for dim

* Add tests for negative dimension
2024-10-08 16:01:26 +02:00
127818fc27 MNT Permission for PRs for GH token in stale.yml (#3112)
Continuation of #3102.

The equivalent PR in
PEFT (https://github.com/huggingface/peft/pull/2064) was successful to
restore stale bot function to PRs as well. Hence also making the same
change for accelerate.
2024-10-07 09:35:36 -04:00
bcc13c00b5 typo of "scalar" instead of "scaler" (#3116) 2024-10-07 09:34:34 -04:00
d4d6b6e7f5 fix tip brackets typo (#3129) 2024-10-07 09:34:24 -04:00
1077611552 only move model to device when model is in cpu and target device is xpu (#3133) 2024-10-07 09:34:08 -04:00
YH
cd93e35e08 🐛 [HotFix] Handle Profiler Activities Based on PyTorch Version (#3136) 2024-10-07 09:33:23 -04:00
e93b056687 fix deprecated torch.cuda.amp.GradScaler FutureWarning for pytorch 2.4+ (#3132)
* fix deprecated FutureWarning for pytorch 2.4+

* perform `make style` and `make quality`

* try to fix `Quality Check` on `actions/workflows/quality.yml`

* undo changes for `src/accelerate/utils/memory.py`

* adapt scaler for pytorch.__version__

* fix scaler warning for npu device depending on the pytorch 2.4 version check

* fallback to default npu scaler

* fallback to default `GradScaler` doc
2024-10-07 09:26:59 -04:00
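A sketch of the version-guarded construction the fix converges on for the CUDA case (the NPU path falls back to its own scaler, as noted in the bullets above):

```python
import torch
from packaging.version import parse

# torch.cuda.amp.GradScaler emits a FutureWarning on PyTorch 2.4+, where the
# device-generic torch.amp.GradScaler("cuda") is the replacement.
if parse(torch.__version__).release >= (2, 4):
    scaler = torch.amp.GradScaler("cuda")
else:
    scaler = torch.cuda.amp.GradScaler()
```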
5060574827 remove cpu restriction for bnb training (#3062)
* rm cpu restriction for 8-bit training

* check bnb version

* def is bnb multi backend available

* fix log
2024-09-30 14:50:29 +02:00
018a99e5f6 Fixup multiple model DS tests (#3131)
* Multiple model multi GPU fixed, different issues than torch

* Fix multiple-model issues
2024-09-26 12:57:16 -04:00
4305033f80 add xpu skip (#3119) 2024-09-18 19:13:16 +02:00
4617be3760 Switch to XLA instead of TPU (#3118) 2024-09-18 04:13:32 +02:00
521eb5bee4 Fixup test_sync w/ deprecated stuff (#3109) 2024-09-13 10:16:52 -04:00
9f9951325c Patch: fix cpu flag never being set as true 2024-09-13 08:47:05 -04:00
e9e5a73fcc POC: multiple model/configuration DeepSpeed support (#3097)
* Bookmark

* Migratory

* Uncomment

* Rm name to model for now

* Rm container

* Left: test

* Allow only wrapping one model

* Add warning but only ref once

* Refine

* Update src/accelerate/accelerator.py

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

* Finish stas nits

* Clean

* Fixup test + test writing

* Fully working

* Fin

* Nit

* Quality

* Update src/accelerate/accelerator.py

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

* Actionable error

* Make note of when its enabled

* Apply suggestions from code review

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

* Merge tests

* Merge

* Add currently broken test script

* Push the working implementation

* Fin

* Add guards for user behavior

* Test nits

* TODO: finish knowledge distillation example

* Update tests/deepspeed/test_deepspeed_multiple_model.py

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Allow for dict-like interface

* Get rid of disable

* Uncomment

* Complete rewrite to force a dict to be used

* Working tests/fin

* Use name as stas suggestion

* Clean

* docnit

* toctree

* toctree

* Missing ref

* Put in break

* Smaller diff

* Make note on how to use zeroinit

* Make note about accelerator ds plugin

* More docnits

* Apply suggestions from code review

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Limit users to not pass in another ds plugin to another accelerator

* not implemented err + Make a note about why no params

* Apply suggestions from code review from Stas

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

* Add deepspeed_plugins arg + update doc

* Plugin -> plugins

* Change enable() -> select()

* Update ref properly + test

* Be consistent, model1,model2...

* first_, second_

* A few more auto values

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>

---------

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
2024-09-13 07:28:06 -04:00
79a8426416 🚨🚨🚨 The Great Deprecation 🚨🚨🚨 (#3098)
* The great purge

* Clean

* Some more fixings

* Some more deprecations Benjamin found

* Fix kwarghandler test
2024-09-12 21:12:32 -04:00
8a43837cc9 [docs] More docstrings (#3108) 2024-09-12 15:28:36 -04:00
a768b2b753 No more t5 (#3107) 2024-09-12 13:27:15 -04:00
85b1a03552 Update image ref for docs (#3105)
* Update image

* Fin
2024-09-11 15:44:39 -04:00
fc52fa969e [docs] Doc sprint (#3099)
* docs sprint

* youtube id

* feedback
2024-09-11 13:31:47 -04:00
3a670bd0da MAINT: Permission for GH token in stale.yml (#3102)
See https://github.com/huggingface/peft/pull/2061 in PEFT.

This restores the functionality of the stale bot after permissions for
the token have been limited. The action still shows errors for PEFT but
the bot appears to work fine.
2024-09-11 13:27:15 -04:00
b32d8bcb75 [docs] DataLoaderConfiguration docstring (#3103) 2024-09-11 13:26:56 -04:00
d5b7b70e06 MS-AMP support (w/o FSDP) (#3093)
* MS-AMP support sans FSDP

* Fix import

* Fixings

* Last Benjamin nit

* New ruff version cleaning

* Update src/accelerate/accelerator.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2024-09-10 12:25:45 -04:00
1ce2eb6385 Revert "Enable Unwrapping for Model State Dicts (FSDP) (#2959)" (#3096)
This reverts commit f35cbd1f023db2c7a4972388df3a34274cca7939.
2024-09-10 17:37:22 +02:00
3fd02e60dc MAINT: Upgrade ruff to v0.6.4 (#3095)
* MNT Upgrade ruff to 0.6.4

Currently used version, 0.2.1, is quite old at this point.

Not a lot needed to be changed:

- Change ruff version in setup.py
- Remove deprecated ignore-init-module-imports option for ruff
- Type comparison should use is and not ==
- Use f-string instead of % formatting
- Some line wrapping and empty lines

* Oops
2024-09-10 10:43:37 -04:00
ed9a574564 Update README.md to include distributed image generation gist (#3077)
* Update README.md to include distributed image generation gist

* add script
2024-09-10 10:42:35 -04:00
7d3bbe721b fix skip_keys usage in forward hooks (#3088)
* fix skip_keys

* fix linting
2024-09-10 14:12:17 +02:00
4b4c036933 use the correct available memory API for XPU (#3076)
* fix

* update

* remove blank line

* update

* add check

* add  imports

* warning for both

* reformat
2024-09-09 10:31:31 -04:00
e7e01812df fix bug in _get_named_modules (#3052)
* bug fix

* bug fix
2024-09-06 18:30:45 +02:00
5ad982ac51 Support sequential cpu offloading with torchao quantized tensors (#3085) 2024-09-06 08:49:23 +02:00
9d67867ad9 Re-enable setting state dict type (#3084) 2024-09-05 12:56:26 -04:00
52b3421d8f Fix three typos in src/accelerate/data_loader.py (#3082)
* Update data_loader.py

Fix a typo in line 678: "datalaoder" -> "dataloader"

* Fix typos in data_loader.py
2024-09-05 11:38:47 -04:00
f1ca8ac78f Allow DataLoaderAdapter subclasses to be pickled by implementing __reduce__ (#3074)
* initial fix for breaking accelerator pickling

* cleanup

* skip_first_batches should be used on raw dls

* multigpu sanity test

* bugs

* does this work with iterable dsets?

* fix typo

* ignore these commits, i'm just syncing the origin so i can test on my cloud workstation

* comment out failing tests, unsure if those are existing bugs or a recent regression

* torch 2.4.0?

* pickling generator issues

* test_pickle_accelerator

* test_pickle_accelerator should work now)

* base.__len__() -> len(base)

* undo reduce

* undo super().__reduce__() again

* pass args through superclass

* remove prints

* doc changes + make style && make quality
2024-09-05 11:25:37 -04:00
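A minimal, self-contained sketch of the pickling idea: record the constructor arguments and implement `__reduce__` so pickle rebuilds the wrapper through `__init__`. The class here is illustrative, not Accelerate's actual `DataLoaderAdapter`.

```python
import pickle


class DataLoaderAdapter:
    # Sketch: a wrapper that tells pickle how to rebuild it from its constructor
    # arguments instead of trying to serialize dynamically created wrapper state.
    def __init__(self, base_dataloader, device=None):
        self.base_dataloader = base_dataloader
        self.device = device

    def __iter__(self):
        return iter(self.base_dataloader)

    def __reduce__(self):
        return (self.__class__, (self.base_dataloader, self.device))


adapter = DataLoaderAdapter([1, 2, 3], device="cpu")
restored = pickle.loads(pickle.dumps(adapter))
assert list(restored) == [1, 2, 3]
```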
ab89fc7e1d Fix FSDP auto_wrap using characters instead of full str for layers (#3075) 2024-09-04 12:44:32 -04:00
b5235f21d8 0.35.0.dev 2024-09-02 18:18:42 -04:00
8931e5e48c Remove skip_first_batches support for StatefulDataloader and fix all the tests (#3068)
* Pippy tests - good

* Fix dataloader example tests

* SD issue

* Rm test

* Docs

* Rm from doc
2024-09-02 18:14:24 -04:00
a84859242d Speed up tests by shaving off subprocess when not needed (#3042)
* bookmark

* Continue making improvements

* Bookmark

* More

* Format
2024-09-02 12:12:55 -04:00
758d6243a7 add set_epoch for MpDeviceLoaderWrapper (#3053)
* add set_epoch for MpDeviceLoaderWrapper

* fix one over-indented space
2024-09-02 11:47:39 -04:00
b07ad2adf2 Fix typo in comment (#3045)
* Fix typo in comment

* Fix typo in comment: quality check
2024-09-02 11:47:04 -04:00
1d09a20fc1 use duck-typing to ensure underlying optimizer supports schedulefree hooks (#3055)
* use duck-typing to ensure underlying optimizer supports schedulefree hooks

* fixup
2024-09-02 11:43:18 -04:00
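A sketch of the duck-typing check; the helper name is illustrative, and the assumption is that schedule-free optimizers expose callable `train()`/`eval()` switches:

```python
def supports_schedulefree_hooks(optimizer) -> bool:
    # Rather than isinstance() against a specific schedulefree class (which breaks
    # for wrapped optimizers), just check that the expected hooks are callable.
    return callable(getattr(optimizer, "train", None)) and callable(getattr(optimizer, "eval", None))
```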
3fcc9461c4 Do not import transformer_engine on import (#3056)
* Do not import `transformer_engine` on import

* fix message

* add test

* Update test_imports.py

* resolve comment 1/2

* resolve comment 1.5/2

* lint

* more lint

* Update tests/test_imports.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* fmt

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-08-28 09:06:13 -04:00
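A sketch of the deferred-import pattern the commit title describes: probe for the package without importing it at module-import time, and only import it inside the code paths that need it (names are illustrative):

```python
import importlib
import importlib.util


def is_transformer_engine_available() -> bool:
    # find_spec only inspects metadata; it does not execute transformer_engine's
    # heavy, GPU-touching import machinery.
    return importlib.util.find_spec("transformer_engine") is not None


def get_te():
    if not is_transformer_engine_available():
        raise ImportError("transformer_engine is required for FP8 support")
    return importlib.import_module("transformer_engine.pytorch")
```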
939ce400cb Update torchpippy (#2938)
* rm warning

* Take 3

* Take 4

* Annotate

* Take 6

* Updated

* Spec

* Last fix

* Don't pad input

* Finished

* Continue refactor

* Rm comment

* Adjust the err

* Start adjustment

* GPT2 works, T5 does not

* llama too now I think

* Flag the t5 example
2024-08-26 14:21:13 -04:00
c2120927b0 Add FP8 docker images (#3048)
* Add fp8 docker images

* Add more docker images

* Rv

* bring back ds

* Less diffy

* No need for sep tag
2024-08-26 12:12:34 -04:00
654e1d9984 Add a SLURM example with minimal config (#2950)
* Add an example with minimal config

* Improve

* Even more minimal

* Rm slurm arg

* Update examples/slurm/submit_multinode_fsdp.sh

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2024-08-26 10:38:10 -04:00
8c3aded21a Update CONTRIBUTING.md Setup Instructions (#3046) 2024-08-26 10:22:29 -04:00
2789933938 Decouple prepare_data_loader() from Accelerator (#3047) 2024-08-26 10:19:59 -04:00
726140cad2 Fixup dataloader state dict bugs + incorporate load/save_state API (#3034)
* v1

* More testing, need to try on H100

* Bigger batch for h100 test

* test tweak

* Fixup all tests!

* Bookmark

* Fix issues, working now

* rm num samples

* Uncomment

* Give stateful dl end of dl

* Make skip DL stateful

* Migrate to update_state_dict

* try/finally

* Add comments to test

* rm comment

* Document

* refactor out for eventual override

* Doc nit

* Brute force it
2024-08-23 15:13:33 -04:00
2d4f1dda7e Fix batch_sampler maybe None error (#3025)
* Fix batch_sampler maybe None

For more details, see: https://github.com/huggingface/accelerate/issues/3011

* Update test_data_loader.py

Add unit test for dataloader with batch_size=None when using Iterabledataset

* Update tests/test_data_loader.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Fix inconsistent indentation

Fix inconsistent indentation

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-08-22 20:02:33 -04:00
c0cf860dc6 Fix fp8 benchmark on single GPU (#3032) 2024-08-22 16:54:32 -04:00
ad3f574a3b Add early support for torchdata.stateful_dataloader.StatefulDataLoader within the Accelerator (#2895)
* temporary commit

* checkout?

* dataloader wrapper

* tmp

* weird failing test

* trying multiple inheritance

* DataLoaderAdapter

* make style

* Some dark magic dynamic reflection (for backwards compat)

* typo

* some tests

* more mixin stuff

* maybe found broken test?

* this is a very invasive feature

* i think the feature is done?

* add xpu support (#2864)

* better tests

* discovered a bug

* maybe fixed bug?

* make style

* hopefully this is PR ready

* properly skip tests

* parameterize

* temporary commit

* checkout?

* dataloader wrapper

* tmp

* weird failing test

* trying multiple inheritance

* DataLoaderAdapter

* make style

* Some dark magic dynamic reflection (for backwards compat)

* typo

* some tests

* more mixin stuff

* maybe found broken test?

* this is a very invasive feature

* i think the feature is done?

* better tests

* discovered a bug

* maybe fixed bug?

* make style

* hopefully this is PR ready

* properly skip tests

* parameterize

* Update src/accelerate/utils/dataclasses.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update src/accelerate/data_loader.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* merge conflicts

* move imports

* make style

* merges are breaking tests

* fix test name

* Require safetensors>=0.4.3

* undo last commit

* minor style

* address pr comments

* Torchdata version 0.8.0 is stable now

* added docs and require torchdata>=0.8.0 for testing

* test base_dataloader attr doesn't cause infinite recursion

* address pr

* replace super().__iter__ with self.base_dataloader.__iter__

---------

Co-authored-by: Fanli Lin <fanli.lin@intel.com>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-08-22 08:43:45 -04:00
1a6af0bd6d Improve config handling and add a zoo (#3029)
* Improve config handling and add a zoo

* Docs

* rm comment

* Tweak doc
2024-08-20 10:40:21 -04:00
52fae0960c Add end_training/destroy_pg to everything and unpin numpy (#3030)
* Add end_training/destroy_pg to everything

* Carry over to AcceleratorState

* If forked, ignore

* More numpy fun

* Skip only init
2024-08-20 10:40:12 -04:00
7ffe7662ca Fix torch version check (#3024)
* Fix torch version check

* Adjust to simply change the FSDP pytorch v

* Forgot one, but keep consistent
2024-08-19 11:42:20 -04:00
5536a3a893 Set correct NPU backend and distributed_type when using transfer_to_npu (#3021)
* fix npu setting

* fix npu setting

* add code comments

---------

Co-authored-by: yangyuanhang7 <yangyuanhang7@jd.com>
2024-08-19 11:18:16 -04:00
7ec8eab955 Tweak defaults for quantized-typed FP8 TE weights (#3018)
* Tweak defaults

* Can't forget about CLI

* Update docs
2024-08-19 07:47:54 -04:00
589fddd317 destroy process group in end_training (#3012)
* destroy process group

* rephrase

* style

* fix on_main_process
2024-08-15 08:31:21 -04:00
99c69aaf73 Wrong import check for TE (#3016) 2024-08-15 07:06:38 -04:00
00785cd9fc fix default value for rank size in cpu threads_per_process assignment logic (#3009)
* fix default value for rank size

* fix style

* apply int in case ratio is decimal

* style quality fix
2024-08-14 21:49:38 -04:00
a452327e8e Enable FSDP & Deepspeed + FP8 (#2983)
* Working version rebased from main

* kwargs

* Clean

* Fix more nits

* Fin

* Delay autocast flag

* Enable FP8 autocast during eval only if specified

* Fin

* Rm comment

* All done

* Zero3 works!

* Let the wrapper come off during unwrap_model

* Add import check

* Migrate all to benchmarks folder and make TE import check work

* Add readme

* Add README to benchmarks folder

* Update CLI to now include fp8 args

* Add test config for 0_34

* Finish adding to config yaml

* Write docs

* Expound docs w/ FP8

* Add to toctree
2024-08-14 14:57:01 -04:00
851cf34351 Fix find_tied_params for models with shared layers (#2986)
* Add test case

* Fix find_tied_params

* Sort params in test

* Refactor variable naming, add comments

* Apply suggestions from code review

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Fix docstrings quality

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-08-13 08:27:26 -04:00
cd5698bb32 update version to 0.34.dev0 (#3007) 2024-08-12 12:13:37 -04:00
90d5023901 Add small util to enable FSDP offloading quickly (#3006)
* Wrap up util

* Add small util

* Update doc

* Don't req

* Clean
2024-08-12 11:53:02 -04:00
3bde615607 Make env variables optional for FSDP (#2998)
* Bookmark

* Tests pass!

* Fix imports

* Try with raw dict

* Make diff easier

* Add defaults to all relevent areas

* Rest of refactor

* Fix all of benjamin's nits

* Adjust logic based on Benjamin's feedback

* Adjust for new logic
2024-08-12 11:01:50 -04:00
dc3b5ad82e Fix deepspeed tests (#3003)
* Unpin deepspeed

* Include proper branch for docker image

* Properly working

* Revert all other changes
2024-08-09 15:35:25 -04:00
12a5befdd6 clear memory after offload (#2994) 2024-08-09 09:36:33 +02:00
79ca85c27d Support skip_first_batches for XLA (#2966)
* Fix skip_first_batches for XLA

* Use state to check XLA

* Change to PartialState
2024-08-08 08:55:44 -04:00
13d93c4f50 Fix typo on warning str: "meta device device" -> "meta device" (#2997) 2024-08-07 13:30:48 +02:00
d982751aec Explicit check for step when loading the state (#2992)
* Explicit check

* Nit
2024-08-06 12:26:51 -04:00
95edc68cb3 Fix gated test (#2993)
* Fix gated test

* Clean

* Finally, adjust test
2024-08-06 11:50:15 -04:00
288accc0ec Fix bug of clip_grad_norm_ for xla fsdp (#2941)
* fix bug of clip_grad_norm_ for xla

* modify
2024-08-01 16:58:21 -04:00
83b0610155 remove .md to allow proper linking (#2977) 2024-08-01 11:52:59 -04:00
386f7d2825 add MLU devices for rng state saving and loading. (#2940)
* Add Cambricon MLU accelerator support

* up mlu support for test

* fix mlu device MULTI_MLU

* Update src/accelerate/utils/imports.py

it's beautiful !

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* up mlu for quality check

* fix mlu device longTensor error

* fix mlu device tensor dtype check

* fix mlu device send_to_device with torch dynamo error

* Refactor AcceleratorState

* Should be near complete now

* Last missing piece

* Make my way to the acceleratorstate

* Include update to global var

* Don't use global

* gpu -> cuda

* Don't use update for dict, easier to read

* Fix tests

* stash

* Getting closer...

* Needed to spawn at the very end after env was setup

* Explain set_device before deepspeed

* Make docstring more accurate

* Early return instead

* Delineate blocks

* Make prepare_backend return state + backend for clarity/less magic

* fix mlu longtensor.to() bugs.

* fix MLU devices rng state save and load.

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-07-31 16:33:15 -04:00
308a8e9689 chore: Update runs-on configuration for CI workflows (#2981)
Signed-off-by: Adrien <adrien@huggingface.co>
2024-07-31 16:24:36 -04:00
f35cbd1f02 Enable Unwrapping for Model State Dicts (FSDP) (#2959)
Signed-off-by: Alex-Brooks <Alex.Brooks@ibm.com>
2024-07-31 16:03:23 -04:00
a14260c9da Fix torchvision to be compatible with torch version in CI (#2982)
* skip test due to torchvision issue

* Revert "skip test due to torchvision issue"

This reverts commit b12b6b4ffafea6ec6c65b9721a30b8a54bf7af1e.

* change min version

* test upgrade

* exact version

* update

* add back
2024-07-31 18:16:12 +02:00
32f368ec3f Require safetensors>=0.4.3 (#2957) 2024-07-29 07:35:34 -04:00
415eddf1be feat(ci): add pip caching in CI (#2952) 2024-07-22 16:55:08 -04:00
230857691a Properly handle Params4bit in set_module_tensor_to_device (#2934)
* Properly handle Params4bit in set_module_tensor_to_device

* Add comment to explain Params4bit skipping shape check for set_module_tensor_to_device
2024-07-22 08:42:49 -04:00
a5a3e57125 Add torch.float8_e4m3fn format dtype_byte_size (#2945)
* add new format

* check torch version

* style
2024-07-20 03:07:07 +02:00
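A sketch of the bit-width parsing this kind of helper relies on; since `str(torch.float8_e4m3fn)` contains an 8, the new format resolves to one byte. The regex and structure are illustrative, not the exact Accelerate helper.

```python
import re

import torch


def dtype_byte_size(dtype: torch.dtype) -> float:
    # Parse the bit width out of the dtype's string form, e.g.
    # "torch.float16" -> 2 bytes, "torch.float8_e4m3fn" -> 1 byte.
    if dtype == torch.bool:
        return 1 / 8
    bit_search = re.search(r"[^\d](\d+)_?", str(dtype))
    if bit_search is None:
        raise ValueError(f"`dtype` is not a valid dtype: {dtype}.")
    return int(bit_search.groups()[0]) / 8
```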
0af1d8b8de delete CCL env var setting (#2927)
* delete CCL env var setting

* fix format
2024-07-17 22:15:46 -04:00
d16d7371a1 Improve test reliability for Accelerator.free_memory() (#2935) 2024-07-16 08:40:51 -04:00
7a5c231b9e Consider pynvml available when installed through the nvidia-ml-py distribution (#2936) 2024-07-16 08:40:16 -04:00
4f02bb764a Fix import test (#2931)
* Fix import test

* Tweak threash
2024-07-15 11:13:23 -04:00
YH
709fd1e42b Hotfix PyTorch Version Installation in CI Workflow for Minimum Version Matrix (#2889)
* Fix ci torch version matrix

* Patch torch minor version
2024-07-15 10:31:12 -04:00
f4f1260a0e Correct loading of models with shared tensors when using accelerator.load_state() (#2875)
* Enabled correct loading of models with shared tensors when using accelerator.load_state()

* removed unused import

* added a test for a model with shared weights

* removed unnecessary bits

* fixed linting errors
2024-07-15 10:29:17 -04:00
c6da9f8693 Allow multiple process per device (#2916)
* Allow more processes than devices

* Accept suggestion

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-07-15 10:18:15 -04:00
3ebbe573ad Add huggingface_hub version to setup.py (#2932) 2024-07-15 10:11:41 -04:00
24bf5ec546 add xpu device check before moving tensor directly to xpu device (#2928)
* add ipex check

* fix type

* fix bug
2024-07-15 09:30:22 -04:00
e1247de01e Better error when a bad directory is given for weight merging (#2852) 2024-07-12 13:20:00 -04:00
12a007d559 Support MUSA (Moore Threads GPU) backend in accelerate (#2917) 2024-07-10 13:42:28 +02:00
5bdcd7e169 fix: bug where mulit_gpu was being set and warning being printed even with num_processes=1 (#2921)
Signed-off-by: Harikrishnan Balagopal <harikrishmenon@gmail.com>
2024-07-08 12:06:30 -04:00
2471eacdd6 Fix slowdown on init with device_map="auto" (#2914) 2024-07-04 09:10:21 -04:00
167cb5eb20 [tests] fix bug in torch_device (#2909) 2024-07-04 06:44:40 -04:00
947f64ee62 Version update 2024-07-03 13:27:34 -04:00
8330b375d4 Fix get_backend bug and add clear_device_cache function (#2857)
* added clear_device_cache

* set lambda: 0 for mps and cpu
2024-07-03 06:59:10 -04:00
92404fbf5f fix load_state_dict for xpu and refine xpu safetensor version check (#2879)
* add fix

* update warning

* no and
2024-07-03 06:36:36 -04:00
3a02754915 add require_triton and enable test_dynamo work on xpu (#2878) 2024-07-03 04:52:09 -04:00
fec1170e35 fix mlu device longTensor bugs (#2887)
* Add Cambricon MLU accelerator support

* up mlu support for test

* fix mlu device MULTI_MLU

* Update src/accelerate/utils/imports.py

it's beautiful !

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* up mlu for quality check

* fix mlu device longTensor error

* fix mlu device tensor dtype check

* fix mlu device send_to_device with torch dynamo error

* Refactor AcceleratorState

* Should be near complete now

* Last missing piece

* Make my way to the acceleratorstate

* Include update to global var

* Don't use global

* gpu -> cuda

* Don't use update for dict, easier to read

* Fix tests

* stash

* Getting closer...

* Needed to spawn at the very end after env was setup

* Explain set_device before deepspeed

* Make docstring more accurate

* Early return instead

* Delineate blocks

* Make prepare_backend return state + backend for clarity/less magic

* fix mlu longtensor.to() bugs.

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-07-03 04:50:11 -04:00
eac206f063 make more cuda-only tests device-agnostic (#2876)
* enable 3 cases

* add tests

* add 2 more

* revert 1 back

* revert 1 more

* enable on xpu
2024-07-03 04:49:53 -04:00
6882ff2bea Added a MultiCPU SLURM example using Accelerate Launch and MPIRun (#2902)
* initial commit for slurm multicpu script

* changed output path

* Added multicpu example using accelerate + mpirun + slurm

* removed file

* rename file

* deleted file

* refactored for cleanliness

* updated docs

* fixed variable names

* quality update

* test fix

* addressed review comments

* fix typo for activateEnvironment.sh

* added ACCELERATE path

* Edit wording

Co-authored-by: Dina Suehiro Jones <dina.s.jones@intel.com>

* added back mistakenly deleted line

---------

Co-authored-by: Dina Suehiro Jones <dina.s.jones@intel.com>
2024-07-03 04:14:02 -04:00
57a4c7465e Add XLA Dynamo backends for training and inference (#2892) 2024-07-03 04:10:13 -04:00
YH
404510a5ec Make log_line_prefix_template Optional in Elastic Launcher for Backward Compatibility (#2888)
* Fix unexpected keyword argument err for elastic launch config

* Update torch version flow

* Del log prefix template from env vars
2024-07-03 04:06:08 -04:00
3086e26db9 Speed up imports and add a CI (#2845)
* Working test

* Timing cleanup

* Add CI

* Fix nits

* Mixup imports

* Clean

* tuna -> tuna-interpreter

* Refactor pippy imports

* Accelerator

* Fin

* Fin

* Keep specific ones for docs
2024-07-01 18:50:18 -04:00
YH
5d5d07abfc Add Profiler Support for Performance Analysis (#2883)
* Add torch profiler

* Add example

* Fix rank 0 saving

* Add docstring

* Add profile readme

* Fix minor

* Fix example path

* Add exp test code

* Rename profile dir

* Change readme

* Change save format

* Minor

* Enhance docstring example

* Add user guide

* Add memory profile guide

* Enhance error msg

* Fix type hinting

* Minor refactor

* Fix hf tag

* Fix copyright year

* Mv toctree

* Fix image path

* Fix license year

* Change profiler pattern name

* Update package reference

* Add slow decorator

* Check output value
2024-07-01 18:01:09 -04:00
5a0b7dc597 Support saving and loading of step while saving and loading state (#2765)
* Add feature to save step when saving state

* Update docstring for `load_accelerate_state`
2024-07-01 14:57:19 -04:00
c799c198e9 add xpu support (#2864) 2024-06-26 14:56:13 +02:00
1f7a79b428 Potentially fix tests (#2862)
* Potentially fix tests

* Try again with numpy sub 2
2024-06-18 11:38:30 +02:00
4cc3530b64 [tests] skip bnb-related tests instead of failing on xpu (#2860)
* fix requirement

* add one more

* add one more case

* remove files

* remove more file

* bug fix

* revert
2024-06-18 11:22:03 +02:00
5d4a3beb01 [tests] use torch_device instead of 0 for device check (#2861)
* bug fix

* fix one more case

* add more cases

* refine
2024-06-18 10:01:52 +02:00
0284f9a9f6 [tests] fix bug in test_tracking.ClearMLTest (#2863) 2024-06-17 21:40:45 +02:00
573d22d48f Default FSDP weights merge to safetensors (#2853) 2024-06-17 11:23:17 +02:00
13ca7dccb6 Drop torch re-imports in npu and mlu paths (#2856)
Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
2024-06-14 07:13:59 -04:00
3b5a00e048 xpu: support xpu backend from stock pytorch (>=2.4) (#2825)
Fixes: https://github.com/huggingface/transformers/issues/31237

XPU backend is available in the stock PyTorch starting from
version 2.4, see [1]. This commit extends huggingface accelerate
to support XPU from both IPEX and the stock pytorch. IPEX is being
tried first.

See: https://github.com/pytorch/pytorch/issues/114842

Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
2024-06-13 11:20:30 -04:00
3c4eaedd46 Refactor logging to use logger in dispatch_model (#2855) 2024-06-13 11:18:48 -04:00
YH
c0faec766c Add DDP Communication Hooks (#2841)
* Add ddp comm hook

* Fix dataclass order

* Merge ddp grad hook to ddp kwargs handler

* Reset ddp kwargs key

* Add test

* Fix test case

* Split ddp grad test

* Fix test case

* Ehance docstring

* Minor

* Use naive baseenum for ddp comm hook type

* Add by feature example

* Add multi device deco

* Add user guide

* Update examples/by_feature/ddp_comm_hook.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update examples/by_feature/ddp_comm_hook.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Add wrapper and state option details

* Update toctree

* Update docs/source/usage_guides/ddp_comm_hook.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/usage_guides/ddp_comm_hook.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/usage_guides/ddp_comm_hook.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/usage_guides/ddp_comm_hook.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/usage_guides/ddp_comm_hook.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/usage_guides/ddp_comm_hook.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/usage_guides/ddp_comm_hook.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Mv ddp comm hook index

* Fix ddp comm hook user guide

* Del empty line

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2024-06-13 10:34:20 -04:00
91a2599f93 Auto create dir when merging FSDP weights (#2854) 2024-06-13 05:32:22 -04:00
5f9235a731 Remove underlines between badges (#2851) 2024-06-12 15:30:28 -04:00
7a36a75c7c remove warning hook addede during dispatch_model (#2843)
* remove-warning-hook

* add _accelerate_added_attributes

* add comments
2024-06-12 16:24:45 +02:00
f62854a281 Revert "Slight rename" (#2850)
This reverts commit a9869ea0dc49652e49607d5f111caed79ed5cb67.
2024-06-12 08:10:13 -04:00
a9869ea0dc Slight rename 2024-06-11 10:15:28 -04:00
6d59614603 doc: fix link (#2844) 2024-06-11 07:41:09 -04:00
2d74c0c077 fix(ci): remove unnecessary permissions (#2842) 2024-06-10 05:35:19 -04:00
40007b4e97 feat(ci): add trufflehog secrets detection (#2836) 2024-06-07 18:29:14 +02:00
7141881b1f Push new release version 2024-06-07 10:05:51 -04:00
f0049b2cfb Use shard saving from huggingface_hub (#2795)
* use shard saving from huggingface hub

* move import

* add shard_checkpoint back but with deprecation msg

* add shard_checkpoint back
2024-06-07 10:03:46 -04:00
83bad87559 fix fstr format (#2810)
* fix fstr format

* Quality pass
2024-06-07 08:46:21 -04:00
24d8b63fc3 Optimize the megatron plugin (#2822)
* Update megatron_lm.md

* Update accelerator.py

* Update dataclasses.py

* Update imports.py

* Update megatron_lm.py

* Update megatron_lm.py
2024-06-07 07:49:52 -04:00
4a83ee5382 monitor-interval, take 2 (#2833)
* monitor-interval

* Update defaults
2024-06-06 09:43:08 -04:00
05d240af95 Improve test speeds by up to 30% in multi-gpu settings (#2830) 2024-06-06 06:12:59 -04:00
bad2ce42ed Fix DeepSpeed config validation error by changing stage3_prefetch_bucket_size value to an integer (#2814) 2024-06-05 21:41:35 -04:00
30cb7ece76 Remove out-dated xpu device check code in get_balanced_memory (#2826)
* fix xpu device check

* simplify
2024-06-05 12:34:43 -04:00
b7fa2fa956 add cuda dep for a test (#2820)
* add cuda dep for a test

* hmmm
2024-06-03 08:37:44 -04:00
d5d378d64e State dictionary retrieval from offloaded modules (#2619)
* added get_state_dict_from_offloaded

* cleaned

* make style

* Update src/accelerate/utils/modeling.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* implemented suggestions, refactored, make style

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2024-06-03 14:16:07 +02:00
065e74d11a 4-bit quantization meta device bias loading bug (#2805)
* 4-bit quantization meta device bias loading bug: fixes #2742

* move condition

---------

Co-authored-by: mh <mh@mhs-Mac-mini.local>
2024-05-31 15:26:17 +02:00
86b6deaea1 Fix access error for torch.mps when using torch==1.13.1 on macOS (#2806)
* Fix access error for torch.mps when using torch==1.13.1

* Add missing parentheses

* add min_version

---------

Co-authored-by: Matthew Hoffman <matthew@protopia.ai>
2024-05-31 14:48:37 +02:00
b24a0ef5db New template (#2808) 2024-05-28 10:10:13 -04:00
e061edc6e7 fix comet test (#2804) 2024-05-28 13:45:24 +02:00
c3f422699a Fix type in accelerator.py (#2800)
* Fix type in accelerator.py

* Update accelerator.py
2024-05-24 19:38:43 -04:00
0553483638 Fix Wrong use of sync_gradients used to implement sync_each_batch (#2790)
* fix wrong use of sync_gradients to implement sync_each_batch as pointed out by @Nightmare-n

Signed-off-by: Yu Chin Fabian Lim <flim@sg.ibm.com>

* fix test

---------

Signed-off-by: Yu Chin Fabian Lim <flim@sg.ibm.com>
2024-05-23 10:55:52 -04:00
YH
415789d0e4 Add Elastic Launch Support to notebook_launcher (#2788)
* Support elastic launcher

* Update src/accelerate/launchers.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Typo

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-05-23 10:52:41 -04:00
hkz
ae472bac48 fix duplicate elements in split_between_processes (#2781)
* fix duplicate elements in split_between_processes

* add test

* use divmod

* fix apply_padding=True

* fix unused import
2024-05-23 10:51:49 -04:00
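The divmod-based split can be sketched as follows; the function shape is illustrative, and the real utility also handles `apply_padding` and nested structures:

```python
def split_between_processes(inputs, num_processes: int, process_index: int):
    # divmod gives each process either `base` or `base + 1` items with no overlap,
    # instead of an end index that could duplicate elements across processes.
    base, remainder = divmod(len(inputs), num_processes)
    start = process_index * base + min(process_index, remainder)
    end = start + base + (1 if process_index < remainder else 0)
    return inputs[start:end]


chunks = [split_between_processes(list(range(10)), 4, rank) for rank in range(4)]
assert sum(chunks, []) == list(range(10))  # covers everything exactly once
```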
4f2c2ba45c Fixup CLI test (#2796) 2024-05-23 09:06:14 -04:00
e26065a265 Upgrade huggingface's megatron to nvidia's megatron when use MegatronLMPlugin (#2501)
* nvidia-megatron

* Update megatron_lm.py

* Update megatron_lm.py

* ruff fix

* ruff format

* Update megatron_lm.py

* Update dataclasses.py

* Update megatron_lm.py

* Use the Megatron interface directly

---------

Co-authored-by: zhenwenqi <zhenwenqi_2022@qq.com>
2024-05-23 08:07:27 -04:00
1cb6fdcf7b FIX / FSDP : Guard fsdp utils for earlier PyTorch versions (#2794)
* guard fsdp utils

* Update src/accelerate/utils/fsdp_utils.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update src/accelerate/utils/fsdp_utils.py

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-05-21 19:29:30 -04:00
4ba436eccc Introduce shard-merging util for FSDP (#2772)
* Initial commit

* Now to test

* Store false

* Slight tweaks

* Fix naming

* Got it all working with tests

* Use not for safetensors arg

* rm change

* Add docs

* Adjust based on Marc's feedback

* Specify just weights

* Update tests to include CLI and swap namings

* Fin

* Rm unused

* Rm again
2024-05-16 13:49:50 -04:00
91e8a3ced4 Skip tied weights disk offload test (#2782)
* skip

* fix

* quality

* fix comment
2024-05-16 14:09:58 +02:00
4ad4d28c49 Add arg from CLI to fix failing test (#2783) 2024-05-15 12:49:54 -04:00
befd87f043 Enable config for fsdp activation checkpointing (#2779)
* Enable config for fsdp activation checkpointing

* Fix ruff errors
2024-05-14 20:17:49 -04:00
abce3604f0 Skip deepspeed test (#2776)
* skip test

* style
2024-05-14 18:28:10 +02:00
27a607ea90 Fix small edge case in get_module_leaves (#2774)
* fix edge case

* fix
2024-05-14 11:52:51 +02:00
aa21174de9 fix minor typo (#2767) 2024-05-13 08:24:01 -04:00
6cf1cc0a39 optimize get_module_leaves speed (#2756)
* optimize get_module_leaves

* fix format

* Update modeling.py
2024-05-13 08:23:38 -04:00
bb465a9cf0 Sets default to PyTorch defaults based on backend (#2758)
* Amd

* Add timeout defaults to match pytorch

* forward contrib credits from discussions

* oop

---------

Co-authored-by: Julian Buchel <jubueche@users.noreply.github.com>
2024-05-13 05:41:15 -04:00
67308ca6ef Enable sharded cpu resume (#2762) 2024-05-10 11:39:37 -04:00
63772f6ac2 Revert "Simplify CLI args validation and ensure CLI args take precedence over config file." (#2763)
This reverts commit 724824abbe0aed8606661bbce5e057c0d2447794.
2024-05-10 11:22:56 -04:00
8798cf06ab fix cpu omp num threads set (#2755)
* fix cpu omp num threads set

* fix OMP_NUM_THREADS

* consider no-cpu usage

* fix style
2024-05-10 11:16:06 -04:00
47bb2dd53e Fix sagemaker config (#2753)
* Fix sagemaker

* Default to False

* Include fixes

* Nit

* Ignore launching
2024-05-10 09:09:36 -04:00
724824abbe Simplify CLI args validation and ensure CLI args take precedence over config file. (#2757)
* Remove unnecessary args.debug statement

* Add expected test failure for config sub-sections

* Remove redundancy in config file args parsing

* Make config file --cpu logic more explicit
2024-05-09 09:30:13 -04:00
afc2c99e6a Fix duplicate environment variable check in multi-cpu condition (#2752)
* Del duplicted key

* Apply format
2024-05-07 14:27:29 -04:00
0fb95a2d3b Fix max_memory assignment (#2751) 2024-05-07 11:53:25 +02:00
7ac153f404 LOMO / FIX: Support multiple optimizers (#2745) 2024-05-06 08:28:14 -04:00
0f1b91bb74 Fix stacklevel in logging to log the actual user call site (instead of the call site inside the logger wrapper) of log functions (#2730)
* fix stacklevel in logging to log info about the actual user callsite

* Add two tests for stacklevel in logging

---------

Co-authored-by: luowyang <luowyang@github.com>
2024-05-06 08:21:19 -04:00
d1eb44c856 Fixed the problem of incorrect conditional judgment statement when configuring enable_cpu_affinity (#2748) 2024-05-06 08:20:22 -04:00
11a363287a Update modeling.py by adding try-catch section to skip the unavailable devices (#2681)
* Update modeling.py to ignore the unavailable devices

* Update src/accelerate/utils/modeling.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

Update src/accelerate/utils/modeling.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

Update src/accelerate/utils/modeling.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

Update src/accelerate/utils/modeling.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2024-05-06 12:44:35 +02:00
5cfe409443 Add feature to allow redirecting std streams into log files when using torchrun as the launcher. (#2740)
* Add --log-dir/--log_dir to `distributed_args` to allow redirecting std
streams into log files when using torchrun as the launcher. Used with
--tee this will achieve a similar effect to running with `torchrun --tee X
--log-dir=logs`.

* Deleted the unnecessary "--log-dir" argument following suggestion from
@muellerzr, since it will be automatically generated from "--log_dir".
2024-05-04 15:03:05 -04:00
5b3a7f3892 Update setup.py + test failures found during release 2024-05-03 10:40:25 -04:00
060361fca3 Fix tests on main (#2739)
* Start

* Fixings
2024-05-03 10:18:20 -04:00
6ac27e2383 FEAT: Add LOMO optimizer (#2695)
* add v1 lomo

* final fixes

* fix

* Update src/accelerate/accelerator.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* add comment

* more comments

* fix

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-05-03 10:55:44 +02:00
ba5f49219f Fix offload device type (#2717) 2024-05-02 17:07:24 +05:30
2c767338f2 Fix Documentation in FSDP and DeepSpeed Concept Guide (#2725)
* address part of stats comments

* automatically set sync_module_states if low_cpu_mem is set

* Apply suggestions from @stas00

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

* add links from fsdp and deepspeed docs. fix deepspeed imports

* replace raise in accelerate.launch

---------

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
2024-05-01 09:25:18 -04:00
234a85506d Docs: Fix build main documentation (#2729) 2024-05-01 08:18:52 -04:00
232ebd159a Fix sampler (#2728) 2024-05-01 12:20:26 +02:00
4d3d4bc88f fix sampler serialization (#2723)
* fix sampler serialization

* add getter and setter for sampler

* more maintainable
2024-04-30 11:19:05 +02:00
2b1e7bd462 Fixup free_memory to deal with garbage collection (#2716)
* Fixup cleanup

* Return

* Fixup test

* Fix test

* DeepSpeed

* More careful guard

* bring back as none

* passing

* bring forward
2024-04-30 03:28:57 -04:00
c7e5e41b8c Segment out a deepspeed docker image (#2707)
* Segment out a deepspeed docker image

* Update readme

* Keep pinned ds
2024-04-29 11:25:22 -04:00
9557598c45 Add Upcasting for FSDP in Mixed Precision. Add Concept Guide for FSPD and DeepSpeed. (#2674)
* draft fsdp vs ds

* reframe to migration doc

* updated functionality section

* cast to float32

* improvements to float32 casting

* some cleanup

* addressed @pacman100's comments

* Apply some of @muellerz suggestions

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* change to subsections

* changed the manner upcasting warnings are surfaced

* update document to discuss fsdp and ds plugins. minor fixes.

* @muellerzr's new suggestions

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* explain all-or-nothing

* add @pacman100's comments

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>

* minor fix

---------

Co-authored-by: Yu Chin Fabian Lim <flim@sg.ibm.com>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
2024-04-29 11:19:03 -04:00
156331aecd allow gather_for_metrics to be more flexible (#2710)
* allow gather_for_metrics to be more flexible

* style

* update doc

* fix

* style

* typo

* typo

* Update src/accelerate/accelerator.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* remove distributed

* clean

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-04-29 12:14:22 +02:00
cd7df4117d fix bnb multi gpu training (#2714)
* fix bnb multi gpu training

* style

* elif instead

* fix

* style

* fix
2024-04-26 15:52:15 +02:00
6af157ea93 Add diffusers to req (#2711) 2024-04-25 08:31:54 -04:00
83317b3081 add distributed examples (#2672)
* add distributed examples

* typo

* uncomment

* require multigpu

* add stable diffusion example

* style

* add copyright

* style

* remove tqdm

* Apply suggestions from code review

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* add comments

* remove print

* More comments

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-04-25 11:13:56 +02:00
e831bcb3b1 Change dataloader send_to_device calls to non-blocking (#2685)
* Change dataloader send_to_device calls to non-blocking

* add non_blocking to dataloader dataclass

* add dataloader non blocking option from dataclass

* add handling for non blocking to accelerator

* add notes on non-blocking transfers to quicktour

* link to dataloaderconfiguration in docs

* linting

* "requires" -> "recommended" on non-blocking setting

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

---------

Co-authored-by: drhead <a@a.a>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-04-24 15:45:57 -04:00
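A hedged usage sketch of the non-blocking option added here; the import paths and the pairing with `pin_memory=True` follow the PR description and may differ slightly between releases.
```
from torch.utils.data import DataLoader

from accelerate import Accelerator
from accelerate.utils import DataLoaderConfiguration

# Non-blocking host-to-device copies only overlap with compute when batches live
# in pinned memory, hence pin_memory=True on the DataLoader.
accelerator = Accelerator(dataloader_config=DataLoaderConfiguration(non_blocking=True))
loader = accelerator.prepare(DataLoader(list(range(1024)), batch_size=32, pin_memory=True))
```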
092c3af0c4 Add version checks for the import of DeepSpeed moe utils (#2705)
* fix import for moe utils

* Apply suggestions from code review

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-04-25 00:38:56 +05:30
3e944c5583 add cann version info to command accelerate env (#2689) 2024-04-24 09:17:09 -04:00
f67737363c Do a pip freeze during workflows (#2704)
* Do a pip freeze

* No need to do source activate on non-conda workflow
2024-04-24 08:46:13 -04:00
f7daaaa305 fix support (#2699) 2024-04-23 15:32:43 +02:00
3dc131cd8d Add source code for DataLoader Animation (#2696)
* dl animation

* oops

* Export
2024-04-23 04:28:28 -04:00
ef0f62c12a Simplify test logic (#2697)
* simplify test logic 😅

* 😅
2024-04-23 02:49:55 +05:30
baafaf4a6e Fix the rng states of sampler's generator to be synchronized for correct sharding of dataset across GPUs (#2694)
* Fix the rng states of sampler's generator to be synchronized for correct sharding of dataset across GPUs

* add tests
2024-04-22 13:50:04 -04:00
abc86c0e35 Enable BF16 autocast to everything during FP8 + some tweaks to enable FSDP (#2655)
* Basic autocasting stuff

* Delay fp8 autocast until after DDP wrapping

* More fixes

* Bookmark: without dtype change

* Bookmark: with dtype changes

* Different alternative, better results

* Didn't matter what order, same result

* Revert + maintain

* Fin

* Refactor based on feedback

* native_amp bool

* Final nits
2024-04-18 10:14:35 -04:00
4450cb3132 Deprecate tqdm args + slight logic tweaks (#2673)
* Deprecate + slight logic fix

* Maybe fix test?
2024-04-17 06:26:55 -04:00
fd0dcd1c45 fix backend check (#2670)
* fix backend check

* reformat backend check

* Update src/accelerate/state.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update src/accelerate/state.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* raise value error if backend mismatch

* Update src/accelerate/state.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-04-16 21:22:27 -04:00
f478201c28 Pin DS...again.. (#2679) 2024-04-16 12:07:59 -04:00
c7046845e7 Fix deepspeed moe test with version check (#2677) 2024-04-16 10:22:41 -04:00
701e24c539 Handle MoE models with DeepSpeed (#2662)
* Handle MoE models with DeepSpeed

* Update launch.py

* Update test_deepspeed.py

* Update test_deepspeed.py

* Update src/accelerate/utils/dataclasses.py

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* address comments

* Update deepspeed.md

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2024-04-16 16:11:49 +05:30
37da848e6c tqdm: *args should come ahead of main_process_only (#2654)
* Update tqdm.py

* add unit test

* add test to test_utils

* ruff changes
2024-04-15 12:30:28 -04:00
c470a1336a Revert "fix backend check (#2652)" (#2669)
This reverts commit 2fc48c7eeea67e747a39be2dec822b07a27bae71.
2024-04-15 04:30:33 -04:00
581a390e2f Megatron plugin can support NPU (#2667) 2024-04-15 03:02:13 -04:00
2fc48c7eee fix backend check (#2652)
* fix backend check

* fix ccl check
2024-04-15 02:59:29 -04:00
1024231133 Add MLU rng state setter (#2664) 2024-04-15 02:59:10 -04:00
5ca095a34f Fix test_from_pretrained_low_cpu_mem_usage_measured failure (#2644)
This test measures the change in memory usage caused by model loading when low_cpu_mem_usage is used,
so the default device is cpu. However, checking whether other devices are available imports
new packages, which changes memory usage and interferes with the test results.

Signed-off-by: yuanwu <yuan.wu@intel.com>
2024-04-12 18:23:28 +02:00
b77c65398c Don't use deprecated Repository anymore (#2658)
* Don't use deprecated Repository anymore

* oops

* Update requirements.txt
2024-04-12 09:05:54 -04:00
a91691463b Fix deepspeed plugin attr type (#2646) 2024-04-12 15:29:16 +05:30
5056d327f8 Allow "auto" for gradient clipping in YAML (#2649)
* Allow "auto" for gradient clipping in YAML

* Update src/accelerate/utils/dataclasses.py

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>

* Make style

---------

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
2024-04-12 13:44:39 +05:30
c0a37015e3 Typo fix in tracking.md (#2650) 2024-04-10 17:16:11 -04:00
e9b9c7d022 device agnostic testing for hooks&utils&big_modeling (#2602)
* device agnostic testing for hooks&utils&big_modeling

* fix failed test cased on cpu

* make style
2024-04-10 13:56:50 -04:00
6c09584f73 add strict arg to load_checkpoint_and_dispatch (#2641) 2024-04-10 11:20:07 +02:00
b8c8583953 add third-party device prefix to execution_device (#2612)
* add xpu device_map

* fix
2024-04-09 13:47:41 +02:00
df485ae1e3 Parenthesis on xpu_available (#2639) 2024-04-09 06:33:38 -04:00
6386f70103 Fix up state with xla + performance regression (#2634)
* Fix up state with xla

* use backend

* Change last time

* Comment

* Slight tweak to use dtype
2024-04-09 06:06:28 -04:00
6d92198ef4 Schedule free optimizer support (#2631)
* Schedule free optimizer support

* Fin

* Doc

* Add in eval

* Add to exclude

* Fix module issue
2024-04-08 11:28:27 -04:00
16488be9a4 Update version 2024-04-05 13:11:05 -04:00
685bd3a439 Clean 2024-04-05 13:05:05 -04:00
2e69948c1a Patchfix 2024-04-05 13:04:44 -04:00
7531e8c13e Unpin hub (#2625) 2024-04-04 10:33:49 -04:00
8e439de744 Link to bash in env reporting (#2623)
* link to bash in env reporting

* Not found

* Use check_output

* Support windows
2024-04-04 09:47:08 -04:00
d96a5aa730 Fix links in Quick Tour (#2617) 2024-04-03 12:47:31 -04:00
d7bcd85d4d fix llama example for pippy (#2616)
* fix llama example

* remove llama from tests
2024-04-03 08:22:16 -04:00
d927b8f3a2 Default false for trust_remote_code (#2607) 2024-04-02 10:58:24 -04:00
f579d9550d Pin hub for tests (#2608) 2024-04-02 10:58:17 -04:00
bbecad4e8e Allow for force unwrapping (#2595)
* Try new method

* Clean a bit more

* Use spmd

* reported typo

* Forward contrib credits

* Comment

* Comments

---------

Co-authored-by: Shubham Krishna <shubhamkrishna.ism@gmail.com>
2024-04-02 09:59:07 -04:00
b82999a84b Re-put in zero3 failure 2024-04-02 09:57:07 -04:00
11568e562c Refactor PartialState and AcceleratorState (#2576)
* Refactor AcceleratorState

* Should be near complete now

* Last missing piece

* Make my way to the acceleratorstate

* Include update to global var

* Don't use global

* gpu -> cuda

* Don't use update for dict, easier to read

* Fix tests

* stash

* Getting closer...

* Needed to spawn at the very end after env was setup

* Explain set_device before deepspeed

* Make docstring more accurate

* Early return instead

* Delineate blocks

* Make prepare_backend return state + backend for clarity/less magic

* Check if it's None and then return

* Use a dataclass

* Forgot one

* Clean

* Style

* Docstring fix?

* Fix deepspeed

* Move slightly

* Final fix

* Fix state for deepspeed

* rm comment
2024-04-02 09:55:34 -04:00
d9a1b8f975 Resolve ZeRO-3 Initialization Failure in Pre-Set Torch Distributed Environments (huggingface/transformers#28803) (#2578)
* Resolve ZeRO-3 Initialization Failure in Pre-Set Torch Distributed Environments (huggingface/transformers#28803)

* add unit test for deepspeed zero3 integration

* update test case then keep it accelerate spec
2024-04-01 10:46:08 +05:30
b634388ef1 Fix warning log for unused checkpoint keys (#2594)
As per title
2024-03-28 15:32:44 +01:00
4d415f2129 Allow notebook_launcher to launch to multiple GPUs from Colab (#2561)
* changed notebook_launcher to not ignore num_processes parameter on colab

* clarified documentation on notebook_launcher (that config file is ignored by notebook_launcher)

* simplified logic in launcher to retain prev elif, imported get_gpu_info from environment

* run quality and style fixes

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-03-26 22:49:14 -04:00
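A hedged sketch of the notebook_launcher usage this change enables; `num_processes=2` assumes a two-GPU notebook session, and the training function is a stand-in.
```
from accelerate import Accelerator, notebook_launcher

def training_function():
    accelerator = Accelerator()
    accelerator.print(f"process {accelerator.process_index} on {accelerator.device}")

# With this fix, num_processes is honoured on multi-GPU Colab instances
# (note: any accelerate config file is ignored by notebook_launcher).
notebook_launcher(training_function, args=(), num_processes=2)
```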
829171a9a4 [docs] Fix kwarg docstring (#2590)
* fix kwarg docstrings

* **
2024-03-26 13:24:23 -07:00
5a232de2fa Expound PartialState docstring (#2589)
* Expound docstring

* Reword

* Weird spacing

* Move example

* Move to solve formatting issues

* Link to the spec class

* Take 3

* Copy kwargs format to others

* Take 4...

* Special thingy
2024-03-26 13:41:23 -04:00
5f8048cd04 Guard stateful objects (#2572)
* Guard stateful objects

* Add test

* Add a test

* MOre tests

* Update AcceleratorState

* Decision: early return

* Test accelerator as well

* use right assert check

* Use getattr
2024-03-26 12:04:40 -04:00
4378b560e8 Fix load_checkpoint_in_model behavior when unexpected keys are in the checkpoint (#2588)
* fix load_checkpoint_in_model when unexpected keys are in the checkpoint

* fix test

* style
2024-03-26 23:36:00 +08:00
8644e23b71 Refactor and improve model estimator tool (#2581)
* Start

* Stash

* Mark

* Better mixed precision

* Can confirm transformerengine

* Finish refactor

* Update training usage

* Slight tweak

* Fin

* Fixup test

* Add comment about FP8
2024-03-26 10:33:14 -04:00
b2fc3a3b0e Refactor affinity and make it stateful (#2579)
* Move under initialized check

* One more

* Numa affinity

* Docs

* Import

* Add verbosity

* Apply suggestions from code review

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Improve import err

* Test + fix bug

* Update src/accelerate/utils/environment.py

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

* Clean

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
2024-03-26 09:51:37 -04:00
290446d446 Update data_loader.py to Ensure Reproducibility in Multi-Process Environments with Dataloader Shuffle (#2584)
* Update data_loader.py

* fix reformatting bug

* add unit test

* add Accelerator initialization in unit test

* move unit test of seedable sampler to test_script.py

* reformatted
2024-03-25 15:04:05 -04:00
85a75d4c3d [docs] Missing functions from API (#2580) 2024-03-22 13:40:21 -04:00
f94f0ff912 Allow for custom deepspeed env files (#2566)
* Allow for any .env file

* Messed up merge conflicts
2024-03-22 08:20:43 -04:00
1b2e634970 Rm uv install (#2577) 2024-03-22 07:59:18 -04:00
dd62fc90ce Unpin deepspeed (#2570) 2024-03-21 09:42:03 -04:00
10b418495e Allow for setting deterministic algorithms (#2569)
* Allow for setting deterministic algorithms

* Expound doc

* English fails me again
2024-03-21 09:12:02 -04:00
c2f193a25c Improve deepspeed env gen (#2565)
* Improve .deepspeed_env generation

Co-authored-by: Rick Lamers <ricklamers@gmail.com>

* Leave for a latter date

---------

Co-authored-by: Rick Lamers <ricklamers@gmail.com>
2024-03-20 14:29:27 -04:00
1812152392 Add log message for RTX 4000 series when performing multi-gpu inference with device_map (#2557)
* add log message for RTX 4000 series when using device_map multi-gpu

* style

* style

* switch to warning

* Update src/accelerate/big_modeling.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-03-20 12:30:41 -04:00
b8b353b7a7 Add NUMA affinity control for NVIDIA GPUs (#2535)
* Beta test, could break!

* Cleanup and get rid of unneeded files

* Work on integration

* Add numa affinity to config

* Add to config command

* Fix some of Stas' notes

* Use raw os to make things easier

* Update questionnaire

* Use CPU_AFFINITY instead

* Change doc

* Update test

* Fix numa, I submit

* include ref to original

* Fix

---------

Co-authored-by: zach.mueller@huggingface.co <muellerzr@ip-26-0-160-100.ec2.internal>
2024-03-20 11:12:30 -04:00
f2778d6502 Add Cambricon MLU accelerator support (#2552)
* Add Cambricon MLU accelerator support

* up mlu support for test

* fix mlu device MULTI_MLU

* Update src/accelerate/utils/imports.py

it's beautiful !

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* up mlu for quality check

* fix mlu device longTensor error

* fix mlu device tensor dtype check

* fix mlu device send_to_device with torch dynamo error

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-03-20 10:59:00 -04:00
2ad42e77c3 🚨🚨🚨Move to using tags rather than latest for docker images and consolidate image repos 🚨 🚨🚨 (#2554)
* Move to using tags

* Add readme

* Include hf repo description in auto-build

* Test

* Even with an a...

* Rm readme things

* Symlink README for docker repo

* Include readme

* Fin

* Try now?

* Finally got symlink working

* Let's try this

* Forgot runs-on

* Still perm issues, revert
2024-03-18 09:35:32 -04:00
e8aaee5d9b Include working driver check (#2558)
* Include working driver

* Style
2024-03-15 10:12:22 -04:00
910c1b6a8f split_between_processes for Dataset (#2433)
* split_between_processes for Dataset

* Update state.py

* remove param datasets.Dataset from split_between_processes, add note to function doc

* is_datasets_available is a function not a var

* reformat to make ruff happy

* isinstance(inputs, Dataset) only if is_datasets_available()

* add test_split_between_processes_dataset

* split_between_processes for Dataset: pad if apply_padding

* removed trailing whitespace

* complete test_split_between_processes_dataset

* fix test_split_between_processes_dataset for single GPU
2024-03-14 17:39:47 -04:00
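A hedged sketch of the split_between_processes context manager this PR extends to datasets.Dataset; it assumes the `datasets` library is installed and that the call signature matches the PR description.
```
from accelerate import Accelerator
from datasets import Dataset

accelerator = Accelerator()
ds = Dataset.from_dict({"idx": list(range(10))})
# Each process receives only its shard; apply_padding=True would instead pad the
# last shard(s) so every process sees the same number of rows.
with accelerator.split_between_processes(ds, apply_padding=False) as shard:
    print(accelerator.process_index, len(shard))
```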
92d3240bb5 Map main_process_ip to master_addr when not using the standard deepspeed launcher (#2495)
Co-authored-by: 정수현 <soohyun.jung@ten1010.io>
2024-03-14 16:43:55 +05:30
02a8a9a3a7 Fix test_script.py on TPU v2/v3 (#2542)
* fix replication

* Set generator on each thread. The test passed.

* remove comments

* fix up

* fix format

* fix comment

* not setting the dataloader.batch_sampler
2024-03-13 13:20:16 -04:00
ee163b66fb Update version 2024-03-12 11:55:22 -04:00
354db5b5f7 Use uv instead of pip install for github CI (#2546)
* Test uv

* Workflow dispatch

* Modify

* Setuptools...apparently?

* No need for -y

* Rm cache

* Rm workflow dispatch

* Trainer tests

* Might need to be -e

* Try keeping it at absolute home

* Undo integration
2024-03-12 08:06:27 -04:00
92b1ad01f3 Update FSDP mixed precision setter to enable fsdp+qlora (#2544)
* update FSDP mp setter to enable fsdp+qlora

* fixes

* Update test_fsdp.py
2024-03-12 16:17:29 +05:30
60bfdaa934 Allow Gradients to be Synced Each Data Batch While Performing Gradient Accumulation (#2531)
* add force flag in _do_sync class method and add sync_each_batch in GradientAccumulationPlugin

* modify test_sync to consider sync_each_batch. fix old tests involving optimizer

* run style checker

* minor refactoring based on @muellerzr's comments.

* update docs: gradient_synchronization.md

* Apply @muellerzr's documentation suggestions.

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Apply suggestions from @BenjaminBossan

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

---------

Co-authored-by: Yu Chin Fabian Lim <flim@sg.ibm.com>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2024-03-11 11:13:31 -04:00
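A hedged sketch of enabling the option added here; the parameter names (num_steps, sync_each_batch) follow the PR description and the rest of the training loop is omitted.
```
from accelerate import Accelerator
from accelerate.utils import GradientAccumulationPlugin

# sync_each_batch all-reduces gradients on every batch instead of only on the
# final accumulation step, trading some communication for a flat memory profile.
plugin = GradientAccumulationPlugin(num_steps=4, sync_each_batch=True)
accelerator = Accelerator(gradient_accumulation_plugin=plugin)
```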
16eb6d76bf Remove extra double-dash in error message (#2541)
Error messages should read `--main_process_port`, not `----main_process_port`.  Users who copy and paste the message as it was will get this error message:
```
Accelerate CLI tool: error: unrecognized arguments: ----main_process_port
```
2024-03-10 08:59:44 -04:00
c8acfa700b [docs] Troubleshoot (#2538)
* reorg and light edits

* fix hfoption

* move doc

* move
2024-03-08 13:28:42 -05:00
e70e3c87de Overdue email change... (#2534) 2024-03-08 12:55:42 -05:00
bc8dfe3caf init (#2438) 2024-03-08 11:36:10 -05:00
e3d324240f Check if the buffers fit GPU memory after device map auto inferred (#2412)
* Check if the buffers fit GPU memory after device map auto inferred

  * Some models, like TheBloke/WizardCoder-33B-V1.1-GPTQ, contain a
    huge buffer, which may cause OOM on GPU memory if offload_buffers
    is not used. This commit adds a check for that case.

* Minor refactors.

* Add missing assertions
2024-03-08 11:05:38 -05:00
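An illustrative sketch of the sizing concern above; `buffer_bytes` is a hypothetical helper, not the check added by this commit, while `offload_buffers` refers to the existing dispatch_model / load_checkpoint_and_dispatch argument.
```
import torch

def buffer_bytes(model: torch.nn.Module) -> int:
    # device_map inference that only budgets for parameters can OOM when a model
    # carries very large non-parameter buffers; count them explicitly.
    return sum(b.numel() * b.element_size() for b in model.buffers())

# If the buffers do not fit alongside the parameters, passing offload_buffers=True
# to dispatch_model / load_checkpoint_and_dispatch keeps them on the CPU.
```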
10882eeddd Update link to dynamo/compile doc (#2533) 2024-03-07 09:36:43 -05:00
145a98fc12 Update the default behavior of zero_grad(set_to_none=None) (#2472)
The wrapped optimizer now clears gradients (sets them to None) by default when `set_to_none=None` is passed. This aligns with `torch.optim.Optimizer` and saves memory.
2024-03-07 09:31:21 -05:00
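A minimal sketch of the new default, assuming a thin wrapper class (`WrappedOptimizer` is illustrative, not the actual accelerate class name).
```
import torch

class WrappedOptimizer:
    def __init__(self, optimizer: torch.optim.Optimizer):
        self.optimizer = optimizer

    def zero_grad(self, set_to_none=None):
        if set_to_none is None:
            set_to_none = True  # match torch.optim.Optimizer: clear grads to None
        self.optimizer.zero_grad(set_to_none=set_to_none)
```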
64ae9ea3fe Enable using dash or underscore for CLI args (#2527)
* New approach

* New version, good

* Complete rewrite, and works for testing

* More nits

* Simplify option_string filtering

* More suggestions from codereview

* Add test

* Fix broken tests
2024-03-07 07:22:34 -05:00
8aa72b9748 Launch mpirun from accelerate launch for multi-CPU training (#2493)
* Update accelerate config and launch to abstract out mpirun

* Fix var

* Documentation updates, updating the launch script to work with other MPI programs, and fixing the nlp example when using IPEX

* Style fixes

* Add a test

* Style fixes

* Formatting fix

* Updates based on review feedback.

* Remove model.train()

* Doc update

* Update doc regarding the accelerate config with the old method of mpirun and accelerate

* Fix typo in comment

* Quality and test updates

* Updates based on review feedback

* Quality fix

* Fix mock patch path

* Updates based on review feedback

* Quality fixes
2024-03-06 13:52:08 -05:00
97d115a266 Remove unnecessary env=os.environ.copy()s (#2449) 2024-03-06 06:36:56 -05:00
63cfd9efdc qbitstensor compatibility (#2526) 2024-03-04 17:55:28 -05:00
6cf8221a09 Don't manage PYTORCH_NVML_BASED_CUDA_CHECK when calling accelerate.utils.imports.is_cuda_available() (#2524)
* Don't manage PYTORCH_NVML_BASED_CUDA_CHECK

PYTORCH_NVML_BASED_CUDA_CHECK will use an NVML-based check when
determining how many devices are available. That's useful for preventing
CUDA initialization when doing that check (or calling
`torch.cuda.is_available()`). Instead of manipulating that env var, one
can call the torch utility `_device_count_nvml` directly preventing the
manipulation of the env var.

* Uses env var instead of private torch function

* Fixes flake8 check
2024-03-04 14:18:17 -05:00
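A sketch of the env-var route mentioned in the last bullet (the function name is illustrative): make the availability check NVML-based so it does not initialize a CUDA context, then restore whatever the caller had set.
```
import os
import torch

def cuda_available_without_init() -> bool:
    previous = os.environ.get("PYTORCH_NVML_BASED_CUDA_CHECK")
    os.environ["PYTORCH_NVML_BASED_CUDA_CHECK"] = "1"
    try:
        return torch.cuda.is_available()
    finally:
        # Leave the caller's environment exactly as it was.
        if previous is None:
            os.environ.pop("PYTORCH_NVML_BASED_CUDA_CHECK", None)
        else:
            os.environ["PYTORCH_NVML_BASED_CUDA_CHECK"] = previous
```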
7a2feecad4 Add copyright + some ruff lint things (#2523)
* Copyright and ruff stuff

* lol
2024-03-04 09:14:31 -05:00
ee004674b9 fix typo in launch.py (#2516) 2024-03-03 04:51:57 -05:00
65544d8fe9 [docs] Fix typos (#2490)
* fix typos

* fix typos

* fix typo

* fix typos

* fix typos

* fix typos

* fix typo

* fix typo

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-03-01 12:19:05 -05:00
5fce525f90 Fix edge case in infer_auto_device_map when dealing with buffers (#2511)
* fix buffer

* style
2024-03-01 10:32:31 -05:00
ca37b0e471 Fixed 0MiB bug in convert_file_size_to_int (#2507) 2024-02-29 09:32:59 -05:00
82a1258ffc Remove offline stuff (#2509)
* Better check

* Fully remove

* Trail
2024-02-29 09:17:37 -05:00
21b225e8d5 Check if hub down (#2506)
* Let's try it out

* Let's try this out

* Some more cases

* String

* Require hub online for estimator

* Add CI checker to alert on hub status

* Format

* Oops death by ctrl z

* Fix import
2024-02-28 18:56:37 -05:00
25ee6ab3b7 [docs] Quicktour (#2456)
* first draft

* fix callouts

* save, load, training features

* fix hfoption tag

* execution, tpu

* fix toctree

* move from accelerator api

* feedback
2024-02-28 15:45:41 -08:00
2d3e822d11 quanto compatibility for cpu/disk offload (#2481)
* quanto compatibility

* fix
2024-02-28 18:05:14 -05:00
811dc1e464 add custom dtype INT2 (#2505)
* add-custom-dtype

* style
2024-02-28 18:05:02 -05:00
c59c6c9bff [docs] Divide training and inference (#2466)
* divide training and inference

* nest
2024-02-28 09:01:25 -08:00
422bd23f3f Docstring fixup (#2504)
* Docstring fixup

* Tense
2024-02-28 11:56:52 -05:00
c0b16b684f [docs] Accelerator API (#2465)
* update

* make style

* align toctree title

* feedback
2024-02-28 08:55:36 -08:00
78b15561a1 fix link typo (#2503) 2024-02-28 10:48:34 -05:00
8f9673f509 hotfix test 2024-02-27 13:30:37 -05:00
9c071103f0 Remove all cases of torchrun in tests and centralize as accelerate launch (#2498)
* Migrate torchrun to a full helper for tests

* keep old namings

* Metrics too

* Fix examples

* Broken tests

* Refactor

* No need for setup
2024-02-27 13:09:05 -05:00
1127e670ca Fix CI tests due to pathlib issues (#2491)
* Fix tests

* Fixup tests

* Fix test

* Actually cast to string!

* Fixup deepspeed

* fsdp and deepspeed fix

* Since we're doing this, may as well get it all

* Stragglers

* Split only if we require config_file

* Make list

* Only convert if it's a path

* type

* Other func

* rm parenth
2024-02-27 10:39:31 -05:00
fa83efc33e [FIX] allow Accelerator to detect distributed type from the "LOCAL_RANK" env variable for XPU (#2473)
* add LOCAL_RANK

* style
2024-02-27 09:41:51 -05:00
4aa71049c3 Free mps memory (#2483) 2024-02-26 15:14:19 -05:00
c0b441f6be Fix TPU with new XLA device type (#2467)
* Fix TPU after new `XLA` device type

* use `torch_xla.runtime.device_type`

* format
2024-02-26 14:50:21 -05:00
34fdddd7df Context manager fixes (#2450)
* Ban use of `os.*env`

* Fix `clear_environment` to actually clear environment variables

Assigning to `os.environ` does not clear the environment (Ruff B003)

* Have environment context managers restore state even if the block raises

* Add tests for environment CMs
2024-02-26 14:35:06 -05:00
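A sketch of the corrected context-manager behaviour described above (the name mirrors accelerate's clear_environment, but this is not the library's exact code): clear os.environ in place rather than rebinding it, and restore the saved variables even if the block raises.
```
import os
from contextlib import contextmanager

@contextmanager
def clear_environment():
    saved = dict(os.environ)
    # Mutate in place: `os.environ = {}` only rebinds the name (Ruff B003) and
    # leaves the real process environment untouched.
    os.environ.clear()
    try:
        yield
    finally:
        os.environ.clear()
        os.environ.update(saved)
```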
3fb9a3a231 DOC: Fixes to Accelerator docstring (#2443)
* DOC Fixes to Accelerator docstring

- Add more links to accelerator classes where applicable
- Fix a typo: KwargHandler => KwargsHandler

* Fix syntax issues

Not sure how to add a link when the type is `list[SomeType]`, so just
removed it for now.

* Fixing link for KwargsHandler

* Add KwargsHandler to API docs

* Also add doc entry to kwargs.md
2024-02-26 14:11:36 -05:00
065d88729b Replace os.path.sep.join path manipulations with a helper (#2446)
* Replace `os.path.sep.join` path manipulations with a helper

* Fix `base_cmd` being modified in CLI tests
2024-02-26 14:10:23 -05:00
67e698cf4d Add pre-commit configuration (#2451) 2024-02-26 14:05:24 -05:00
46ac6c9bba Use grad-accum on TPU (#2453)
* Use grad-accum on TPU

* Better logic
2024-02-26 14:03:57 -05:00
9b24f56e42 Fix wrong is_namedtuple implementation (#2475)
* fix

* add test
2024-02-26 12:11:03 +01:00
f20445d4ac Fix the pytest version to be less than 8.0.1 (#2461)
* Fix the pytest version to be less than 8.0.0

We're getting errors such as:

> /opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/transformers/testing_utils.py:129: in <module>
>     from _pytest.doctest import (
> E   ImportError: cannot import name 'import_path' from '_pytest.doctest' (/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/_pytest/doctest.py)

* Update setup.py

Co-authored-by: fxmarty <9808326+fxmarty@users.noreply.github.com>

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
Co-authored-by: fxmarty <9808326+fxmarty@users.noreply.github.com>
2024-02-23 16:03:29 -05:00
97d2168e59 Check for None (#2452) 2024-02-15 10:38:54 -05:00
79016eb163 Fix test 2024-02-14 14:38:01 -05:00
164193fa7e [Big deprecation] Introduces a DataLoaderConfig (#2441)
* Deprecate and introduce dataloader_config

* Update docs

* Doc nits

* More tests, adjust based on PR review

* Fixup tests

* Nits

* Update docs/source/quicktour.md

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Clean

* Actually create one

* Forgot to change one

* Use pytest

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2024-02-14 13:26:02 -05:00
482a9f9fa4 Point to right file 2024-02-14 12:52:49 -05:00
d7de8d1794 Include pippy_file_path (#2444) 2024-02-14 11:24:07 -05:00
b443be70fb Make torch xla available on GPU (#2176)
* Make torch xla available on GPU

* format code

* fix documentation build error

* update according to the comments

* Replace DistributedType.TPU with DistributedType.XLA

* make all ut pass

* format code

* update comments

* skip test

* format code

* skip FSDPPluginIntegration for torchxla

* bring back custom_sampler_check

* fix ut

* format code

* format code

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-02-14 10:19:25 -05:00
613ad7089a Fix warning when dispatching model (#2442)
* Fix warning when moving the model

* oups
2024-02-14 09:06:14 -05:00
13e79ccfab Enable more Ruff lints & fix issues (#2419)
* Remove antiquated flake8 and isort configuration

* Bump to Ruff 0.2.1

* Explain ruff options

* Autofix Ruff B010 (static `setattr`)

* Autofix Ruff B009 (static `getattr`)

* Enable Ruff UP (not UP007); auto-fix

* Fix remaining Ruff UP complaints

* Fix a couple more format calls
2024-02-14 08:59:42 -05:00
aba3b8c72f Prefer is_torch_tensor over hasattr for torch.compile. (#2387)
* Prefer `is_torch_tensor` over `hasattr` for `torch.compile`.

`torch.compile` breaks when using `hasattr` but succeeds when using `isinstance(obj, torch.Tensor)`. This commit short-circuits the `hasattr` call for `torch.Tensor`s where possible.

Note: `is_npu_available` is also not torch.compile compatible due to (1) lru_cache and (2) importlib checks, so I've moved it into the try block, catching the AssertionError instead.

* Fix torch.device("npu").

This is not available in non-npu pytorch. Note that
torch.device automatically assigns an index when created as torch.device("npu"), so overwriting device with `"npu:0"` is only required if device is a string "npu".

* Remove unittest.main execution.

* Fix style broken by merge save.

* Import operations functions directly.

* fix style

* Fix imports attempt 2.

* Re-raise error if no NPU available.
2024-02-14 08:59:28 -05:00
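A minimal sketch of the short-circuit described in this entry; `can_move_to_device` is an illustrative helper, not accelerate's function name.
```
import torch

def can_move_to_device(obj) -> bool:
    # Checking the common tensor case with isinstance keeps torch.compile happy;
    # hasattr on a traced tensor can cause a graph break.
    if isinstance(obj, torch.Tensor):
        return True
    return hasattr(obj, "to")
```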
70cdf5fe52 Make test assertions more idiomatic (#2420)
* Codemod `unittest` assertions into native assertions

With https://github.com/akx/codemod-unittest-to-pytest-asserts

* Use plain asserts instead of `assertDict` and `assertList`

Done with

```
ast-grep run --pattern 'self.assertDictEqual($A, $B)' --rewrite 'assert $A == $B' -l python -i
ast-grep run --pattern 'self.assertListEqual($A, $B)' --rewrite 'assert $A == $B' -l python -i
```

* DRY some Deepspeed tests
2024-02-13 14:23:18 -05:00
b38590a28a fix tied_pointers_to_remove (#2439) 2024-02-13 16:07:06 +01:00
5318bc7733 Dev version 2024-02-13 10:04:34 -05:00
ef68b4655c Fix seedable sampler logic and expound docs (#2434)
* Fix and add more docs

* Add tests + ensure working

* Fixup all tests!
2024-02-13 09:19:42 -05:00
ecebfa19c9 3.9 image (#2436) 2024-02-12 15:02:32 -05:00
5a39359fb2 Fix test (#2435) 2024-02-12 14:23:36 -05:00
b3d2111708 Version 0.28.0.dev 2024-02-09 10:51:07 -05:00
f75c6245ba [Fix] make all tests pass on XPU (#2427)
* fix tests

* style
2024-02-09 10:11:41 -05:00
9c1d5bac15 bug fix (#2426) 2024-02-09 10:11:08 -05:00
b0b867da85 Fix fp8 things (#2403)
* Fix fp8 things

* if
2024-02-09 10:03:29 -05:00
433d693b70 [FIX] fix the wrong nproc_per_node in the multi gpu test (#2422)
* bug fix

* style fix
2024-02-09 10:02:28 -05:00
c3aec59b12 Migrate pippy examples over and run tests (#2424)
* Migrate examples over

* Finish updating doc

* torchpippy

* Readme review nits

* Mention gather op in examples
2024-02-09 10:01:56 -05:00
9467a62744 Make output end up on all GPUs at the end (#2423)
* Make output end up on the cpu at the end

* Rework a bit

* Remove the CPU part

* Update to include a new util to copy tensors across devices

* Update test

* Update doc

* Update docstring

* Make False by default and change if community feedback says yes

* Apply suggestions from code review

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update default to False in doc and make a tip

* Update typing

* Defaults

* Explain

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2024-02-09 10:01:00 -05:00
86228e321d Update FSDP docs (#2430)
* Update fsdp.md

* address comments
2024-02-09 20:29:02 +05:30
06b138d845 Try again 2024-02-06 13:10:43 -05:00
0867c09318 torch-native pipeline parallelism for big models (#2345)
* Broken version

* Timing I would expect

* Working version!

* Use MethodType

* working test

* Tests

* Use no split module classes explicitly

* Put split_points in pipeline

* Store split points in hf_split_points

* fix case num_process=1

* Allow for dynamic batch padding (#2352)

* Allow for dynamic batch padding

* Fix test

* Update src/accelerate/inference.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Break early after the first valid bs is found

* Less slicy-dicy

* Test cv model

* Start, need to test

* Use dataloader-like logic

* Refactor to utils

* With tests

* Update the source

* Clean

* bs=1 case

* Add test

* add some failing test

* Almost working version

* Much cleaner implementation

* Use pad_input_tensor

* All tests passing!

* Do it at tracing too

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
Co-authored-by: Marc Sun <marc@huggingface.co>

* Rm literal

* Allow users to pass in max_memory

* Note about recursion

* Document, document, document

* Right import check

* Fix bug, add tests to multigpu runners

* Change default to None

* Start of docs

* Try again?

* Try again x2

* Trailing comma

* Move import

* Clean

* typehint

* typo

* From code review

* Use num_chunks

* Update tests/test_utils.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Bad copy/paste

* hf_split_points

---------

Co-authored-by: Marc Sun <marc@huggingface.co>
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2024-02-06 13:00:40 -05:00
0e1ee4b92d Use Ruff for formatting too (#2400)
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-02-06 08:18:18 -05:00
d8a64cb79d Unpin (#2418) 2024-02-06 08:00:33 -05:00
b703efdcc3 Adding Local SGD support for NPU (#2415) 2024-02-05 10:26:48 -05:00
68f54720dc Fix the size of int and bool type when computing module size (#2411)
* According to the code in set_module_tensor_to_device, uint, int and bool types
  won't be converted, so keep their original size, or the module size will be
  underestimated.
2024-02-02 12:15:50 -05:00
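An illustrative sketch of the sizing rule in this entry (`estimate_module_bytes` is hypothetical): integer and bool tensors are never cast, so they are counted at their stored element size, while floating-point tensors are counted at the target dtype's size.
```
import torch

def estimate_module_bytes(module: torch.nn.Module, target_dtype: torch.dtype) -> int:
    total = 0
    for tensor in list(module.parameters()) + list(module.buffers()):
        if not tensor.is_floating_point():
            # uint/int/bool keep their original storage size.
            total += tensor.numel() * tensor.element_size()
        else:
            total += tensor.numel() * (torch.finfo(target_dtype).bits // 8)
    return total
```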
46f1391b79 Fix XPU inference (#2383)
It will complain that "Device xpu is not recognized, available devices are integers (for GPU/XPU),
'mps', 'cpu' and 'disk'", but you cannot simply pass 0 as the device: 0 is treated as a CUDA device,
which then fails with a complaint that torch is not compiled with CUDA enabled.

You will need safetensors >= 0.4.2 if using safetensors files.
2024-02-02 11:08:22 -05:00
cd7ff5e137 Added activateEnviroment.sh to readme (#2409)
Clarifies the activateEnviroment.sh script used in the examples when working on a cluster with Slurm & Environment Modules
2024-02-01 14:21:55 -05:00
f4b411f84b Fix CI due to pytest (#2408)
* New makefile

* Big modeling, oops
2024-02-01 12:28:10 -05:00
7ba64e632c Revert "[don't merge yet] unpin torch (#2406)" (#2407)
This reverts commit 8b770a7dabd957ae54f1abb028d1ce53db6cf4d4.
2024-02-01 10:13:15 -05:00
8b770a7dab [don't merge yet] unpin torch (#2406)
* unpin torch

* unpin torch

---------

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2024-02-01 09:56:16 -05:00
3d8b998fbb Address PIP-632 deprecation of distutils (#2388) 2024-01-31 05:54:23 -05:00
03365a3d17 Pin torch version (#2394) 2024-01-30 19:15:33 +00:00
7aafa25673 Fix batch_size sanity check logic for split_batches (#2344)
* fix

* lets raise an error

* Update error message

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

* fix error message style

---------

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
2024-01-27 19:33:48 +01:00
f88661b5d9 device agnostic cli/data_loader/grad_sync/kwargs_handlers/memory_utils testing (#2356)
* test_cli

* test_data_loader

* test_grad_sync

* test_kwargs_handlers

* test_memory_utils

* test_data_loader

* style check
2024-01-26 09:26:40 +01:00
581fabba48 Add adapter_only option to save_fsdp_model and load_fsdp_model to only save/load PEFT weights (#2321)
* Add adapter_only option to save_fsdp_model and load_fsdp_model

* Gate with adapter_only

* Black format

* Change unwrapping behavior

* Use extract_model_from_parallel for model unwrapping

* Fix quality

* Move functions to utils files

* Fix quality
2024-01-26 08:58:40 +01:00
e909eb34e2 modified big_modeling.py (#2376)
Co-authored-by: Andrei Panferov <blacksamorez@yandex-team.ru>
2024-01-25 14:16:52 +01:00
7644a02e6b add_hook_to_module and remove_hook_from_module compatibility with fx.GraphModule (#2369)
* fix add & remove hook with torch fx

* comment test
2024-01-25 10:53:53 +01:00
162a82164e device agnostic optimizer testing (#2363) 2024-01-23 10:12:22 +01:00
0d6a5fa8ee remove init_hook_kwargs (#2365) 2024-01-22 13:05:29 +01:00
53845d2596 Fix deepspeed issue (#2366) 2024-01-22 11:47:01 +01:00
5ec00da2be bugfix that doesnt let fp8recipekwarg use TE or MSAMP (#2355)
Signed-off-by: Sudhakar Singh <sudhakars@nvidia.com>
2024-01-19 09:24:51 -05:00
649e65b542 fix test (#2354)
Co-authored-by: Ubuntu <ubuntu@ip-172-31-18-207.ec2.internal>
2024-01-18 15:33:34 -05:00
14d7c3fca6 Fix block_size picking in megatron_lm_gpt_pretraining.py (#2342)
Only cap `block_size` to 1024 if `tokenizer.model_max_length` is actually greater than 1024.
2024-01-18 13:04:23 -05:00
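The corrected capping logic as a tiny, self-contained sketch (`pick_block_size` is an illustrative helper).
```
def pick_block_size(model_max_length: int, cap: int = 1024) -> int:
    # Only cap when the tokenizer limit actually exceeds the cap; a model with
    # model_max_length=512 keeps 512 rather than being bumped to 1024.
    return min(model_max_length, cap)

assert pick_block_size(512) == 512
assert pick_block_size(10**30) == 1024  # "very large" sentinel values get capped
```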
c7d11d7e40 Fix mpi4py/failing deepspeed test issues (#2353)
* Try deepspeed after installing mpi4py

* Try again

* Just GPU needed

* Run slow deepspeed

* Fin

* Uncomment

* Uncomment x2
2024-01-18 13:01:44 -05:00
ec4f01a099 device agnostic test_accelerator/test_multigpu (#2343) 2024-01-18 09:03:20 -05:00
f5c01eeb63 FIX: add oneCCL environment variable for non-MPI launcher (accelerate launch) (#2339)
* add ccl env

* add local world size

* set env vars for deepspeed path

* adapt style
2024-01-18 09:01:34 -05:00
20ff458d80 Show DeepSpeed option when multi-XPU is selected in accelerate config (#2346)
* add XPU

* adapt style
2024-01-18 06:32:03 -05:00
6719cb6db3 Avoid duplicating memory for tied weights in dispatch_model, and in forward with offloading (#2330)
* wip

* fix

* add test

* cleanup

* style

* style & tests pass

* fix offload, submodules

* cleanup

* Update tests/test_big_modeling.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update tests/test_big_modeling.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* disk offloading do not reload tied parameters in memory

* remove outdated comment

---------

Co-authored-by: Your Name <you@example.com>
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2024-01-17 10:58:05 +01:00
31fd2b1ad6 Just 40* (#2332) 2024-01-12 15:34:50 -05:00
fce61a99ec Fixed typos in readme files of docs folder. (#2329) 2024-01-12 05:44:28 -05:00
6ec92cf06b Fix model memory issue (#2327)
* Potential fix

* REmove config part?
2024-01-11 13:47:59 -05:00
2a4037322f convert it back to dict (#2326) 2024-01-11 13:29:21 -05:00
f823404f69 Raise error when using batches of different sizes with dispatch_batches=True (#2325)
* raise err

* typo

* Apply suggestions from code review

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* remove from e

* fix

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-01-11 10:13:07 -05:00
ef2fe912c5 Update versions to dev 2024-01-10 14:43:29 -05:00
e3e9b87592 Fix infer_auto_device_map when tied weights share the same prefix name (#2324)
* fix auto device map with tied weights sharing a prefix name

Co-authored-by: Giuseppe Franco <giuseppefranco4@gmail.com>
Co-authored-by: Nick Fraser <icanlosh@gmail.com>

* precise comment

---------

Co-authored-by: Giuseppe Franco <giuseppefranco4@gmail.com>
Co-authored-by: Nick Fraser <icanlosh@gmail.com>
2024-01-10 15:57:37 +01:00
456afd92ce Params4bit added to bnb classes in set_module_tensor_to_device() (#2315) 2024-01-10 09:25:01 -05:00
0d2280dadc fix sanity check (#2310) 2024-01-09 14:11:51 -05:00
55d4a496dd Bring old seed technique back (#2319)
* Redo stage 1

* Fix rest of tests

* Expand doc

* Expand x2

* Expand x2
2024-01-09 14:10:57 -05:00
2a8829d9a5 Update test_deepspeed.py (#2323) 2024-01-10 00:15:19 +05:30
3969731ce8 Fix DeepSpeed related regression (#2304)
* Update accelerator.py

* Update test_performance.py

* add test
2024-01-09 15:08:12 +05:30
411aa58a77 DeepSpeed refactoring (#2313)
* DeepSpeed refactoring

Co-Authored-By: Stas Bekman <stas00@users.noreply.github.com>

* add tests

* Update test_deepspeed.py

* Update test_deepspeed.py

---------

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
2024-01-09 15:07:27 +05:30
4420ec641d Update accelerator.py (#2295) 2024-01-09 10:23:03 +05:30
2241725ad6 Update docs: Add warning for device_map=None for load_checkpoint_and_dispatch (#2308)
* Update docs: Add warning for device_map=None for load_checkpoint_and_dispatch

* Fix style errors.
2024-01-08 19:24:11 -05:00
5cac878984 Add more missing items (#2309)
* Add more missing items

* Update docs/source/package_reference/utilities.md

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2024-01-08 14:58:23 -05:00
5d31423308 [deepspeed] documentation (#2296)
* Update dataclasses.py

* expand docs
2024-01-08 13:38:12 +05:30
2721387b98 make test_state_checkpointing device agnostic (#2290) 2024-01-05 12:47:58 -05:00
2cfa88bdf1 Fix breakpoint API in test_script.py on TPU. (#2263)
* Fix breakpoint API in test_script.py on TPU.

* only call set_trigger on the main process

* The test passed.

* add a comment

* Call mark_step after all_reduce to make torch_xla run the collective op like the torch.distributed call below, rather than waiting until the tensor is referenced again to run the pending operations.
2024-01-05 12:47:30 -05:00
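A sketch of the torch_xla pattern referenced in the last bullet, assuming torch_xla's xm.all_reduce / xm.mark_step API; mark_step flushes the lazily-recorded graph so the collective runs now instead of when the tensor is next used.
```
import torch_xla.core.xla_model as xm

def reduce_now(flag_tensor):
    reduced = xm.all_reduce(xm.REDUCE_SUM, flag_tensor)
    xm.mark_step()  # execute the pending collective eagerly, like torch.distributed would
    return reduced
```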
102caf4fab bugfix in swapping init module weights (#2305)
Signed-off-by: Sudhakar Singh <sudhakars@nvidia.com>
2024-01-05 12:45:21 -05:00
07df5d268f add back dvclive to tests (#2280)
* add back dvclive

* dvclive tracker: handle and test step increments

* fix python<3.9 compatibility
2024-01-05 12:22:22 -05:00
68b3dbf666 Bump tj-actions/changed-files from 22.2 to 41 in /.github/workflows (#2300)
Bumps [tj-actions/changed-files](https://github.com/tj-actions/changed-files) from 22.2 to 41.
- [Release notes](https://github.com/tj-actions/changed-files/releases)
- [Changelog](https://github.com/tj-actions/changed-files/blob/main/HISTORY.md)
- [Commits](https://github.com/tj-actions/changed-files/compare/v22.2...v41)

---
updated-dependencies:
- dependency-name: tj-actions/changed-files
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-05 09:52:22 -05:00
403c0714d1 Update dataclasses.py (#2292) 2023-12-28 23:59:26 +05:30
848ed800fa Improve FSDP config usability (#2288)
* Improve FSDP config usability

* quality 

* Update tests

* fix cmd arg

* fix

* update docs

* address comments
2023-12-27 20:41:29 +05:30
ad957ce556 Update deepspeed.md (#2286) 2023-12-27 15:05:42 +05:30
3db088f5d6 [doc] FSDP improvements (#2274)
* Update fsdp.md

* fix typo

* fix readability

* resolve the "static models" ambiguity

* rewrite section

* typo
2023-12-27 15:04:55 +05:30
d1abd59114 fix (#2218) 2023-12-26 14:21:08 +01:00
ceb7c699bc typo fix (#2276)
* typo

* style
2023-12-22 14:10:22 -05:00
c5baa055c0 Rm DVCLive as latest version causes failures (#2279) 2023-12-22 11:47:04 -05:00
349be97ccb Uninstall DVC in the Trainer tests (#2271)
* Test using my branch

* Uninstall DVCLive only
2023-12-22 08:04:16 -05:00
b60061dfd2 Solve CUDA issues (#2272)
* Solve CUDA issues

* import
2023-12-22 08:03:59 -05:00
b565a6c58a device agnostic deepspeed&fsdp testing (#2235)
* device agnostic deepspeed testing

* device agnostic fsdp testing

* fix failing deepspeed test

* make style

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-12-20 10:47:39 -05:00
a03c361ffb refactor deepspeed dataloader prepare logic (#2238)
* refactor deepspeed dataloader prepare logic

Co-Authored-By: Stas Bekman <stas00@users.noreply.github.com>

* address comments and fix issues

Co-Authored-By: Stas Bekman <stas00@users.noreply.github.com>

* further refactor

* add test

* rename test

---------

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
2023-12-19 12:45:14 +05:30
b0528392c8 Integrate MS-AMP Support for FP8 as a separate backend (#2232)
* Redo with new version

* Store

* Working version

* Separate for now

* Min diff

* check if available

* Better docstring

* Check for multiple models and optimizers

* Check for TE and MSAMP args separately

* String clarity

* Better docstring and types

* Quality

* Simplify a bunch for fp8

* Convert literals to type alias

* Better err

* Docs

* toc typo

* Apply suggestions from code review

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: Maria Khalusova <kafooster@gmail.com>

* Address doc nits

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
Co-authored-by: Maria Khalusova <kafooster@gmail.com>
2023-12-15 13:07:55 -05:00
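A hedged usage sketch based on this entry's description; the backend and opt_level strings follow the PR discussion, and the exact accepted values and defaults may differ between releases.
```
from accelerate import Accelerator
from accelerate.utils import FP8RecipeKwargs

# Select MS-AMP (rather than TransformerEngine) as the FP8 backend.
accelerator = Accelerator(
    mixed_precision="fp8",
    kwargs_handlers=[FP8RecipeKwargs(backend="MSAMP", opt_level="O2")],
)
```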
060678415a Support log_images for aim tracker (#2257)
* support `log_images` for aim tracker

* fix the potential kwargs issue for aim tracker's `log_images`

* remove ambiguous import statement

* use `aim` directly to avoid potential conflict
2023-12-15 11:25:53 -05:00
6b2d968897 [Big-Modeling] Harmonize device check to handle corner cases (#2254)
* harmonize device check

* make style

* oops

* oops again
2023-12-14 09:55:31 -05:00
ad3a5bc920 Fix MpDeviceLoaderWrapper not having attribute batch_sampler (#2242)
* Fix MpDeviceLoaderWrapper not having attribute batch_sampler

* fix style
2023-12-13 12:31:51 -05:00
eafcea07f6 fix BFloat16 is not supported on MPS (#2226) (#2227)
* fix BFloat16 is not supported on MPS (#2226)

* fix style

* add comments
2023-12-11 22:27:07 -05:00
eff30e2130 Fix nb tests (#2230)
* Fix nb tests

* Include bnb import

* pprint

* Try this time

* greater than zero

* Fix test

* bnb

* Clean
2023-12-11 09:58:12 -05:00
694f2e2c12 fix the failing test (#2237) 2023-12-11 16:15:23 +05:30
9964f90fd7 Add npu support to big model inference (#2222)
* Add npu support to big model inference

* make style

* add warning when using npu

* fix typo

* replace `.to(<num>)` with `.to("npu:<num>") when using `torch_npu`

* empty_cache

* fix
2023-12-08 11:58:32 -05:00
f86876d56d Make cleaning optional for device map (#2233)
* Make cleaning optional for device map

* Apply suggestions from code review

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Change order

* Nit

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2023-12-08 11:55:03 -05:00
0a37e2042e device agnostic testing (#2123)
* device agnostic testing

* initialize accelerate state before using the logging utility

* apply review suggestion

* apply review suggestion

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* use `hardware accelerator` to disambiguate

* remove redundant guard code

* rename variable name for consistency

* remove the overkill code

* fix ci-error

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-12-08 07:29:25 -05:00
54d670be41 [Docs] Add doc for cpu/disk offload (#2231)
* Add doc offload

* fix

* Update docs/source/concept_guides/big_model_inference.md

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-12-07 12:02:06 -05:00
339854a9a4 Update the 'Frameworks using Accelerate' section to include Amphion (#2225)
* Extend the frameworks using accelerate to include Amphion

* Update integration examples to include Amphion

* fix some typos
2023-12-07 11:28:41 -05:00
5296419df4 [data_loader] expand the error message (#2221)
* Update data_loader.py

* style
2023-12-07 10:38:39 -05:00
6a4857fec2 fix tqdm wrapper to print when job id ==0 (#2223) 2023-12-06 08:45:31 -05:00
9569150174 Fix dtype bug when offload_state_dict=True and dtype is specified (#2116)
* fix bug when using offload_state_dict

* fix wrong docstring & type hint

* fix & add test

* style

* fix device_map

* Update tests/test_modeling_utils.py

* fix style
2023-12-06 02:04:26 +09:00
8f871f41f1 Check notebook launcher for 3090+ (#2212)
* Include dist launch

* Better way

* Clean

* Just do it always

* Account for notebook launcher

* Use better gpu check

* Clean output

* Set logic
2023-12-05 11:21:44 -05:00
47e6c36155 Add allgather check for xpu (#2199)
* add  allgather check for xpu

* style fix

* fix test

* fix test and review
2023-12-05 11:21:07 -05:00
47c144570c Update docker images (#2213) 2023-12-05 11:07:18 -05:00
6a54d0781b MNT Delete the delete doc workflows (#2217)
They are failing because the corresponding GH action no longer exists.
Docs are now cleaned up automatically.

See discussion in #open-source-interal
2023-12-05 08:35:35 -05:00
0482548363 Update accelerator.py (#2206) 2023-12-02 00:09:59 -05:00
0e48b2358d allow deepspeed without distributed launcher (#2204) 2023-12-01 09:09:36 -05:00
3499cf25aa Assemble state dictionary for offloaded models (#2156)
* changed meta alignment device to cpu

* reverted alignment device and init weight map

* trace on values

* trace on values

* trace on values

* added offload model state dict save and test

* removed hook traces

* removed n

* Update src/accelerate/accelerator.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update src/accelerate/utils/modeling.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update src/accelerate/utils/modeling.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update src/accelerate/utils/modeling.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* suggestions and make style

* fixed circular import and make style

* debugged test

* Update src/accelerate/utils/modeling.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update src/accelerate/utils/modeling.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* function level import and make style

* Update src/accelerate/utils/modeling.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update tests/test_accelerator.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update tests/test_accelerator.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update src/accelerate/utils/modeling.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update src/accelerate/utils/modeling.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update src/accelerate/utils/modeling.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* make style

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-11-30 09:18:28 -05:00
68d63ee15f unpins dvc (#2200) 2023-11-29 13:45:02 -05:00
151637920d Better error when device mismatches when calling gather() on CUDA (#2180)
* Better err

* Update src/accelerate/utils/operations.py

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2023-11-29 12:11:52 -05:00
0ba3e9bb50 Explicitly disable P2P using launch, and pick up in state if a user will face issues. (#2195)
* Disable P2P automatically

* Clean

* Right check

* Set better

* Check if just cuda

* Spacing

* replace str int for int as str
2023-11-29 12:10:01 -05:00
b04d36c75f Apply DVC warning to Accelerate (#2197)
* Use logger warn instead

* Warn

* Right import

* Clean up logs

* Apply suggestions from code review

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2023-11-28 15:02:20 -05:00
5fc1b230d3 Pin DVC (#2196)
* Remove dvc

* Pin instead
2023-11-28 13:34:11 -05:00
244122c736 fsdp refactoring (#2177)
* remove the redundant code post the torch 2.1 release

* make `use_orig_params=True` by default.

* fix `save_state` optimizer saving for fsdp and update the fsdp example

* quality

* fixing the utils and tests. Updating the docs

* bump up the minimum version for FSDP support.

* address comment

* rename fsdp model checkpointing variables
2023-11-24 09:31:57 +05:30
d25efa71ce Don't install comet 2023-11-21 09:54:33 -05:00
1aeb1e8997 Don't make integration tests wait 2023-11-21 08:41:57 -05:00
0e51680994 Right URL 2023-11-20 14:03:49 -05:00
7d430cf8de skorch 2023-11-20 13:30:23 -05:00
b8ca803f98 Don't make it wait 2023-11-20 13:11:08 -05:00
1243191ecb [Working again] New CI (#2173)
* Try merge tests

* Fix

* Checkout branch

* Fix pip install

* rebase

* Colons

* right one

* use master

* Rm

* Add needs

* Better clean

* always

* Forgot other

* test on AWS

* update all labels

* fix multi-gpu working directory

* limit to 2 GPU

* force run on kube

* move build docker image to new ci

* test build on CPU instance

* move build docker image release to new ci

* move scheduled slow tests to new ci

* move integration test to new ci

* Comments

* Right CPU tags

* Right machines

* PR comments

* Fix issues

* Some trailers

---------

Co-authored-by: Guillaume LEGENDRE <glegendre01@gmail.com>
2023-11-20 13:01:12 -05:00
2b25b8b3c5 Revert "New CI Runners (#2087)" (#2172)
This reverts commit ca300c0a04f843da2c5c8559e7d728926f7e8bf2.
2023-11-20 12:06:33 -05:00
ca300c0a04 New CI Runners (#2087)
* Try merge tests

* Fix

* Checkout branch

* Fix pip install

* rebase

* Colons

* right one

* use master

* Rm

* Add needs

* Better clean

* always

* Forgot other

* test on AWS

* update all labels

* fix multi-gpu working directory

* limit to 2 GPU

* force run on kube

* move build docker image to new ci

* test build on CPU instance

* move build docker image release to new ci

* move scheduled slow tests to new ci

* move integration test to new ci

* Comments

* Right CPU tags

* Right machines

* PR comments

---------

Co-authored-by: Guillaume LEGENDRE <glegendre01@gmail.com>
2023-11-20 11:41:57 -05:00
427ef8bd00 Updated torchrun instructions (#2096)
* Updated torchrun instructions

* Update examples/README.md

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Update examples/README.md

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Update examples/README.md

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Update examples/README.md

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Update README.md for torchrun instructions

* Added SLURM scripts and updated README

* Update examples/Slurm/submit-multinode.sh

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update examples/Slurm/submit-multiGPU.sh

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update examples/README.md

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update examples/README.md

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* final details

* modified argument parser

* modified slurm multigpu script

* modified multinode slurm script

* Added accelerate multinode issue

* Update examples/README.md

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* fixed readme command

* added --main_process_port specification to readme

* Revert "modified argument parser"

This reverts commit c3bef5cdd11a8a120602b5b7ce158f7400881d7f.

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-11-20 10:42:49 -05:00
35b0206353 Fix non-persistent buffer dispatch (#1941)
* offload only persistent buffers

* add tests and fix naming

* remove_non_persistant=True by default

* style

* style again

* fix hooks

* fix logic
2023-11-20 09:49:50 -05:00
fbe00d7897 Update dataclasses.py (#2168)
Bug fix: recompute_activation -> recompute_activations
2023-11-20 07:53:10 -05:00
62af737219 Add ZeRO++ to DeepSpeed usage docs (#2166)
* added zeropp to deepspeed doc file

* minor edit to clarify hpz size
2023-11-20 17:54:30 +05:30
cd51581248 Add warning for problematic libraries (#2151)
* Test bnb and fix nb launcher skip

* Fin

* Rm comment

* PR Review comments

* Just star
2023-11-17 09:24:20 -05:00
a5a7c039a0 Do not attempt to pad nested tensors (#2041) 2023-11-17 09:01:35 -05:00
cf745c936d check port availability only in main deepspeed/torchrun launcher (#2078)
* check port availability only in main deepspeed launcher

* check port availability only in main launcher for deepspeed/torchrun

* Update launch.py

add comments

---------

Co-authored-by: 聂靖入 <niejingru@bytedance.com>
2023-11-17 09:00:55 -05:00
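For reference, a minimal sketch of a main-process port-availability check along these lines; the helper name and the hard-coded port are illustrative, not Accelerate's exact launcher code.

```python
# Hypothetical sketch: probe whether the main process port is already taken
# before spawning the distributed launcher.
import socket


def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        return sock.connect_ex((host, port)) == 0  # 0 means something is listening


if port_in_use(29500):  # 29500 is the usual default main process port
    raise ConnectionError("Port 29500 is already in use; pick another --main_process_port.")
```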
99877f56d6 Adds dvclive tracker (#2139)
* dvclive tracker

* add dvclive to test_trackers

* fix dvclive tests

* add dvclive example and respond to other feedback

* fix dvclive tests

* fix quality
2023-11-17 08:49:13 -05:00
0f2686c8d3 Disable pypi for merge workflows + fix trainer tests (#2153)
* Disable workflows for PR + merge

* skorch

* Fix transformers tests too
2023-11-15 11:29:39 -05:00
a912b2ee09 Add examples to tests (#2131)
* Add examples to tests

* Try now

* Right name

* Right path

* Fin

* Too slow, just test on runner
2023-11-14 15:03:41 -05:00
e9fd72a613 Deprecated stuff (#2152) 2023-11-14 14:42:01 -05:00
8dedb140ef Add note about GradientState being in-sync with the dataloader by default (#2134)
* NOte about sync

* PR review comments
2023-11-14 11:53:57 -05:00
b55855a3d4 fix initial typos (#2150) 2023-11-14 09:44:30 -05:00
2b53a9089c [docs] troubleshooting guide (#2133)
* first take at troubleshooting guide

* logging moved to the troubleshooting guide

* TOC updates and guide edits

* minor edits

* moved to tutorials

* feedback addressed

* batch size clarifications

* typo

* kernel, early stopping hanging, feedback
2023-11-13 17:58:56 -05:00
39d255b3d0 fixed a couple of broken links (#2147) 2023-11-13 12:26:10 -05:00
99dff1a167 Fix more tests (#2146)
* Fix some tests

* Contiguous

* Leave Marc alone ;)
2023-11-13 10:42:35 -05:00
a0a16e118a fix (#2145) 2023-11-13 10:32:15 -05:00
15458c5737 specify config file path on README (#2140)
* specify config file path

* set the path of generated config file for configuring and executing commands
2023-11-13 09:37:00 -05:00
fc0a43c3c1 Deal with shared memory scenarios (#2136)
* Deal with duplicates

* refactor

* Keep false for save

* Clean

* Better test for logs
2023-11-10 10:49:22 -05:00
8256a9c2d4 fix retie (#2137) 2023-11-10 10:12:23 -05:00
6727ac4394 Leave native save as False (#2138)
* Custom objects are not saved using safetensors

* Leave save as false
2023-11-09 13:39:11 -05:00
9674b40580 For testing transformers CI 2023-11-09 11:39:38 -05:00
0b0d9215a9 Raise error when saving with param on meta device (#2132)
* add error

* style

* Update src/accelerate/accelerator.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* style

* move before creating the directory

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-11-08 10:37:27 -05:00
e638b1e21a Make safetensors the default (#2120)
* Make safetensors default

* Rm location

* Actually flip flags

* Tests + update checkpointing

* Add to setup

* Start of tests with both safetensors and without

* Update tests to use both

* Remove from load state

* Explicit tip

* With suggestions

* Simplify, don't abstract. Need to bring back to deepspeed however

* Refactor to use consts

* Keep how it was

* Typo fix
2023-11-08 09:07:22 -05:00
76de60dbdc Fix import error when torch>=2.0.1 and torch.distributed is disabled (#2121) 2023-11-08 17:38:32 +05:30
217e1a248c Sync states for npu fsdp (#2113) 2023-11-08 14:13:54 +05:30
5e0eb0d750 add DeepSpeed support for NPU (#2054) 2023-11-08 13:01:30 +05:30
183c9dd3ce Allow for ACCELERATE_SEED env var (#2126)
* Manual seeds

* None

* Add to docs

* Document

* Use torch seed for simplicity

* Rm from doc

* Better version
2023-11-07 12:05:42 -05:00
4f100318f4 Add explicit error if empty batch received (#2115)
* Add explicit error if empty batch received

* Move error check to cover all empty iterables
2023-11-03 14:06:12 -04:00
fa6f43033c Update README.md (#2119) 2023-11-03 12:57:46 -04:00
820fc4ca7a Make SeedableRandomSampler the default always (#2117)
* Fix tests

* Simplify logic a ~lot~
2023-11-03 08:28:42 -04:00
bd72a5f1a8 Revert "Always use SeedableRandomSampler (#2110)"
This reverts commit d8e12854098988d2162948c9a853081fcf00b73f.
2023-11-01 15:20:25 -04:00
55088a2cf5 Revert "Fix issue with tests (#2111)"
This reverts commit c2d8e245e9fa603b29986cb3b677cb0d44b41f6a.
2023-11-01 15:20:21 -04:00
c2d8e245e9 Fix issue with tests (#2111) 2023-11-01 15:03:59 -04:00
d8e1285409 Always use SeedableRandomSampler (#2110)
* Fix tests fully

* Change comment

* Further comments

* Clean

* CPU specific

* Just use device

* Rewrite differently

* Rewrite
2023-11-01 13:39:53 -04:00
5b3f3b99d6 fix warning (#2105) 2023-10-31 15:10:06 -04:00
2935057606 Fix memory leak in fp8 causing OOM (and potentially 3x vRAM usage) (#2089)
* Fix memory leak

* Change when model is moved to cuda

* Add from PR

* Remove link

* Undo original forward link
2023-10-31 09:34:53 -04:00
bb6759d634 fixed ip address typo (#2099) 2023-10-31 09:10:11 -04:00
55747318a0 Fix batch sampler (#2097)
* Fix batch sampler

* Clean

* Fix tests

* Fix

* Better comment

* Base case
2023-10-30 09:57:28 -04:00
217faafe08 Fix flag typo (#2090) 2023-10-27 08:46:13 -04:00
5440387529 CRITICAL: fix failing ci (#2088) 2023-10-26 16:12:58 -04:00
e1fab05ce7 Add ClearML tracker (#2034)
* add clearml tracker

* fix style in tracking.py

* run ruff --fix

* run ruff fix on src/accelerate/utils/__init__.py as well

* properly run make style

* add tests

* modify code based on code review

* changes based on code review

* quote data_frame

* fix docs

* remove pandas req in log_table

* style changes

* add tracker to docs
2023-10-26 12:13:28 -04:00
c3ec7ff5a9 Add logs offloading (#2075)
* add logs

* fix comm

* rework comment
2023-10-24 16:05:23 -04:00
d8535921ad v0.25.0.dev 2023-10-24 13:12:40 -04:00
eb8c535c17 Fix (#2080) 2023-10-24 12:55:06 -04:00
b7686ccb44 Warn when kernel version is too low on Linux (#2077)
* Warn when kernel version is too low on Linux

See #1929

On Linux with kernel version < 5.5, issues with hanging processes have
been reported. It is not clear how to fix the issue, so instead we warn
the user that they may encounter problems.

Notes

As logging requires an initialized PartialState, the actual check
happens at the end of Accelerator.__init__.

In a similar vein, the docstring of get_logger has been adjusted to
first initialize the Accelerator, as it is not working as currently
shown.

* Reviewer comment: small change to docstring
2023-10-24 12:43:55 -04:00
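A rough sketch of what such a kernel-version check can look like (illustrative only; Accelerate emits the warning from the end of `Accelerator.__init__` once logging is available):

```python
# Hypothetical helper: warn on Linux kernels older than 5.5, where hanging
# processes have been reported.
import platform
import warnings


def warn_if_old_linux_kernel(min_version=(5, 5)):
    if platform.system() != "Linux":
        return
    release = platform.release()  # e.g. "5.4.0-152-generic"
    try:
        major, minor = (int(p) for p in release.split("-")[0].split(".")[:2])
    except ValueError:
        return  # unparsable kernel string; skip the check
    if (major, minor) < min_version:
        warnings.warn(
            f"Detected Linux kernel {release}, which is older than "
            f"{'.'.join(map(str, min_version))}; hanging processes have been reported "
            "on such kernels."
        )
```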
f3229872bc fix docstring typo (#2072) 2023-10-24 12:42:59 -04:00
7843286f2e Allow for samplers to be seedable and reproducible (#2057)
* bookmark

* Works!

* Working!

* Fully working now

* Cover dataset

* Needed for dispatch

* Check both

* Bring back pop, fix hang

* Fully working

* Change back to epoch

* Adjust for new methods

* Clean

* Fix tests

* Avoid circular import

* Clean

* Fix test

* Comment

* Add a comment

* Comment

* Use yield from instead
2023-10-24 06:41:06 -04:00
11e2e99cfc Let iterable dataset shard have a len (#2066) 2023-10-23 08:12:26 -04:00
07e745f1c4 DOC: Fix broken link to designing a device map (#2073)
There is a typo in the link.
2023-10-23 07:42:24 -04:00
c7c99a30ea fix: remove useless token (#2069) 2023-10-19 14:29:55 +02:00
8f45a2eae8 remove unused constants (#2045) 2023-10-18 14:24:01 -07:00
9fd64b7ea9 Fix the error when the "train_batch_size" is absent in DeepSpeed config (#2060)
* Update dataclasses.py

* Update src/accelerate/utils/dataclasses.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-10-16 15:13:20 -07:00
5be16ad90b Add space to docs (#2055)
* Add space to docs

* Phrasing
2023-10-16 06:33:12 -07:00
dab62832de Reset state to pass failing test 2023-10-13 13:13:41 -04:00
caa9f9bcbb Fix stalebot (#2052) 2023-10-13 12:20:37 -04:00
943efedb88 fix docstring (#2053) 2023-10-13 07:42:26 -04:00
50acb0c2ec Let drop_last modify gather_for_metrics (#2048)
* Drop last

* Test

* Uncomment out tests

* Update src/accelerate/test_utils/scripts/external_deps/test_metrics.py

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Document better

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2023-10-12 14:27:06 -04:00
e6d96e5f70 Make fsdp ram efficient loading optional (#2037)
* make fsdp ram efficient loading optional

* Add documentation

* address comments

* address comments

* address comments

* nit
2023-10-12 20:44:09 +05:30
1dfb6e9304 Fix integration CI (#2047)
* Different method

* Should fix version
2023-10-12 07:40:11 -04:00
4bef6bc511 Safely end training even if trackers weren't initialized (#1994)
* Update accelerator.py

* init trackers on class init

* dont need getattr because trackers exists
2023-10-11 08:24:04 -04:00
73640d0463 Reduce memory by using all_gather_into_tensor (#1968)
* all_gather_into_tensor

* Cleanup

* Reduce memory on non-gloo

* Fin

* Check for backend too on cpu

* CPU comment

* Change scope for performance

* Bring back zeros after remembering why

* Add comment

* Add comment

* Use empty

* Comment
2023-10-10 10:10:32 -04:00
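The gist of the memory saving, as a simplified sketch (Accelerate's real gather also handles nested containers, gloo fallbacks, and CPU tensors):

```python
# Gather every rank's tensor into one preallocated output tensor instead of
# building a Python list of per-rank copies with dist.all_gather.
import torch
import torch.distributed as dist


def gather_rows(tensor: torch.Tensor) -> torch.Tensor:
    world_size = dist.get_world_size()
    output = torch.empty(
        (world_size * tensor.shape[0], *tensor.shape[1:]),
        dtype=tensor.dtype,
        device=tensor.device,
    )
    dist.all_gather_into_tensor(output, tensor.contiguous())  # single collective call
    return output
```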
7a1159143e Unpin transformers (#2044) 2023-10-10 05:33:22 -04:00
cbb0b82fa2 Fix DeepSpeed version to <0.11 (#2043)
This is a temporary fix to prevent a DeepSpeed installation error that
was introduced in DeepSpeed 0.11.0.
2023-10-09 10:47:33 -04:00
5ae6111180 Allow FSDP to use with torch.autocast for bfloat16 mixed precision (#2033)
* Ignore native_amp when FSDP is used

* Rollback condition

* Fix mixed precision of bfloat16 for FSDP
2023-10-06 18:26:04 +05:30
230a5f541b Fix save on each node (#2036) 2023-10-06 05:18:02 -04:00
956114ac92 Enable shared file system with save and save_state via ProjectConfiguration (#1953)
* Support shared storage, start

* Pass use_local_node_storage

* Reverse and different namings

* Not global only

* Addres comments

* Clean

* Apply suggestions from code review

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>

* Save on each node as explicit arg

* More explicit

---------

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
2023-10-03 12:04:01 -04:00
76ee7f211d update fsdp docs (#2026) 2023-10-03 17:40:23 +05:30
420743af22 Sync states for xpu fsdp (#2005)
* sync states for xpu fsdp

* Update src/accelerate/utils/dataclasses.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-10-02 17:16:36 -04:00
206ab491ed update torch_dynamo backends (#1992)
* update torch_dynamo choice

* fix test
2023-10-02 14:31:44 -04:00
936d2f4f5c Add basic documentation for multi node training (#1988)
* initial commit for adding multinode training doc

* removed stray changes

* fix formatting issue and switch to bulleted list

* Update docs/source/basic_tutorials/launch.md

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update docs/source/basic_tutorials/launch.md

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* added link to new blog post

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-10-02 14:19:59 -04:00
da98d601b5 [docs] Quick tour refactor (#2008)
* quick tour refactor, moved internal mechanism into a conceptual guide

* Apply suggestions from code review

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Apply suggestions from code review

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-10-02 13:19:41 -04:00
658492fb41 fix resuming from checkpoint (#2001) 2023-09-29 13:12:41 +05:30
80da9cfb09 FIX Automatic checkpoint path inference issue (#1989)
Resolves #1983

Fixes an issue where the checkpoint directory would be incorrectly set while
loading when using relative paths.
2023-09-19 14:20:51 +02:00
03deec2a01 Fix model copy after dispatch_model (#1971)
* Fix model copy after dispatch_model

* Minor hook update to fix failing test

* address reviewer comments
2023-09-19 06:05:30 -04:00
629d02c844 Update big_modeling.md (#1976) 2023-09-18 10:11:57 -04:00
a87c95da9e Dev version 2023-09-14 15:24:15 -04:00
bbcdbbaffc Remove checkpoints only on main process (#1974)
* Remove checkpoints only on main process

shutil.rmtree might throw errors if called on multiple processes. Make the call only on the main process.

* Apply style
2023-09-14 08:31:55 -04:00
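In user code the same pattern looks roughly like this (the checkpoint path is a placeholder):

```python
# Only the main process deletes the directory; everyone else waits for it to finish.
import shutil

from accelerate import Accelerator

accelerator = Accelerator()
if accelerator.is_main_process:
    shutil.rmtree("checkpoints/old_checkpoint", ignore_errors=True)
accelerator.wait_for_everyone()
```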
ce53708e0e fix for xpu (#1972) 2023-09-14 08:18:20 -04:00
53209ce6d8 update FSDP and DeepSpeed docs (#1973) 2023-09-14 08:18:11 -04:00
bd083ae1bf Add force_hooks to dispatch_model (#1969)
* Add force_hooks to dispatch_model

* Minor documentation rephrasing
2023-09-14 07:57:19 -04:00
e5452a618d fix torch compile with FSDP (#1919)
* fix torch compile with FSDP

* Update accelerator.py

* fixes

* resolve comments

* fix bug

* address comments

* addressing comments

* address comments
2023-09-14 13:19:59 +05:30
40a73e0ae0 Introduce breakpoint API (#1940)
* early stopping

* Fix tests

* Works on multi-gpu, uncomment

* Rm reset

* Check for >=1

* equal

* Trigger

* Fix test

* Update docs/source/concept_guides/deferring_execution.md

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Explicit example loop

* Set to zero, not None

* rename test

* Check again to ensure it's been reset

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2023-09-13 12:42:38 -04:00
937e08ce75 add bf16 mixed precision support for NPU (#1949)
* add bf16 mixed precision support for NPU

* Explicitly register the NPU backend to PyTorch via `import torch_npu`

---------

Co-authored-by: statelesshz <jihuazhong1@huawei.com>
2023-09-13 09:56:24 -04:00
5d558f21e2 [WIP] Implementing gather_for_metrics with dedup for non tensor objects (#1937)
* [feat] implementing gather_for_metrics for objects

* [lint] make style result

* [docs] improve fn docs gather for metrics

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* [docs] update args description gather for metrics

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* [refactor] gather for metrics for non tensor obj

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* [fix] renaming tensor to data (was not defined and it is not just a tensor)

* [fix] else state

* [test] gather for metrics with non tensor objects

* [lint] make style result

* Update src/accelerate/accelerator.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update src/accelerate/accelerator.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* [test] removing useless assertion

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* [test] add running on main

* [lint] style autoformat

---------

Co-authored-by: Lorenzobattistela <lorenzobattistela@gmail.com>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-09-12 12:17:43 -04:00
d9b5ce60b3 Rm strtobool (#1964)
* Rm strtobool

* Reorganize

* c/p

* Signature
2023-09-12 11:21:09 -04:00
61a87ab946 finish all todos (#1957) 2023-09-12 17:13:00 +02:00
5dec654aae Better guards for slow imports (#1963)
* Start

* Deepspeed

* Clean
2023-09-12 10:54:19 -04:00
b2a950205e FIX: patch_environment restores pre-existing environment variables when finished (#1960)
Resolves #1832

This fixes a bug in patch_environment where pre-existing environment variables that
were temporarily modified by patch_environment were deleted completely once the
context finished. Now, the env vars are restored to their previous values.
2023-09-12 15:39:54 +02:00
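A minimal sketch of an environment-patching context manager that restores pre-existing values on exit (simplified relative to the actual `patch_environment` utility):

```python
import os
from contextlib import contextmanager


@contextmanager
def patch_environment_sketch(**kwargs):
    saved = {}
    for key, value in kwargs.items():
        key = key.upper()
        if key in os.environ:
            saved[key] = os.environ[key]  # remember the pre-existing value
        os.environ[key] = str(value)
    try:
        yield
    finally:
        for key in kwargs:
            key = key.upper()
            if key in saved:
                os.environ[key] = saved[key]  # restore instead of deleting
            else:
                os.environ.pop(key, None)
```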
ca7b853abc fix safetensor saving (#1954)
* fix safetensor saving

* fix test

* fix

* better save

* modify as keyword arg
2023-09-12 09:14:41 -04:00
6832aa51a6 move tensorflow dep (#1959) 2023-09-12 06:19:26 -04:00
4a1d5b1fb6 Fix docs (#1951)
Signed-off-by: Peng Gao <peng.gao.dut@gmail.com>
2023-09-11 10:40:14 -04:00
82369c8314 fix the fsdp docs (#1947) 2023-09-11 15:30:09 +05:30
cdb001ca5f Enhance multi-node notebook launching (#1913)
* Introduce new arguments: master_addr, node_rank, and num_nodes.
  Relocate these arguments to the end of the notebook_launcher
  function for compatibility.

* Set defaults for NPROC and NODE_RANK environment variables in the
  PrepareForLaunch function to ensure compatibility.

* Thoroughly document the process and usage guidelines for
  multi-node launching.
2023-09-08 07:53:21 -04:00
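A hedged usage sketch of the new arguments (the address is a placeholder, and the exact semantics of the per-node process count should be checked against the docs):

```python
# Run this cell on every node, changing node_rank per machine (0 on the master node).
from accelerate import notebook_launcher


def training_function():
    ...  # build the model and dataloaders, then train


notebook_launcher(
    training_function,
    args=(),
    num_processes=8,            # assumed: processes to launch on this node
    master_addr="192.168.0.1",  # placeholder address of the rank-0 node
    node_rank=0,
    num_nodes=2,
)
```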
c72e22419b Bring back pypi to runners (#1939)
* Bring back pypi

* Flipflop
2023-09-08 07:51:17 -04:00
c872c3086f clean num devices (#1936) 2023-09-07 10:14:52 -04:00
cec5ae8e4d Check for invalid keys (#1935)
* Check for invalid keys

* Revert else

* Better error

* Weird space
2023-09-06 12:22:22 -04:00
cd570b2e2a reduce gradient first for XLA when unscaling the gradients in mixed precision training with AMP. (#1926)
* reduce gradient first for XLA when unscaling the gradients in mixed
precision training with AMP.

* Apply suggestions from code review

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* update accelerator.reduce and accelerate.utils.operations.reduce

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-09-06 11:00:24 -04:00
727d624322 Add support for deepspeed optimizer and custom scheduler (#1909)
* support for deepspeed optimizer and custom scheduler

* don't throw the error

* Add tests

* fix the tests

* fix the code quality

* Update tests/deepspeed/test_deepspeed.py

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* fix the docstrings

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2023-09-05 22:30:46 +05:30
afed2f75f8 Expose auto in dataclass (#1914)
* Auto

* Update str
2023-09-05 09:23:10 -04:00
739b135f83 More CI fun - run all test parts always (#1916)
* Run always

* Populate
2023-08-31 12:32:28 -04:00
4a9dd1cd82 support logging with mlflow in case of mlflow-skinny installed (#1874)
* - support the case where mlflow-skinny is installed and log_with is set to mlflow.

* code beautification.
2023-08-31 12:11:02 -04:00
feab09908d improve help info when run accelerate config on npu (#1895) 2023-08-31 12:02:59 -04:00
e0baaa8df0 fix: add debug argument to sagemaker configuration (#1904)
* fix: add debug argument to sagemaker configuration #1903

* ignore:  address quality style

Signed-off-by: maximegmd <672982+maximegmd@users.noreply.github.com>

* tweak: ask if user wants debug information in SageMaker distributed operations

---------

Signed-off-by: maximegmd <672982+maximegmd@users.noreply.github.com>
2023-08-31 11:52:46 -04:00
1b998f1695 Use hosted CI runners for building docker images (#1915)
* New technique

* needs

* explicit all

* Volume prune not going

* Skip volume

* versions

* Avoid checkout perhaps?

* Working dir

* Don't include dot-slash?

* Accelerate prefix?

* Working directory?

* Context?

* other workingdir

* Faster iteration

* Right tag

* Full

* Release

* GPU
2023-08-31 11:28:48 -04:00
7befe580c2 Fix docker images (#1910)
* With driver

* Remove deps

* No bitsandbytes

* Try with raw push

* We can keep old docker images

* Also include release

* Skorch uses master

* Right tag
2023-08-31 07:14:38 -04:00
cd3d3a37f9 Skip pypi transformers until release (#1911)
* Skip release

* TODO comment
2023-08-31 07:14:06 -04:00
81fffe51fd deepspeed grad_acc_steps fixes (#1901)
* deepspeed grad_acc_steps fixes

* fix tests
2023-08-31 16:33:34 +05:30
0b5ac0253e Add PR template (#1906)
* Add PR template

* Sourab is not a fashion company
2023-08-30 03:19:15 -04:00
a16b843a1b deepspeed for ccl xpu (#1827) 2023-08-29 17:36:29 +05:30
bc86a9379f Solve at least one failing test (#1898) 2023-08-29 10:57:56 +05:30
87a096f95e Add FSDP activation checkpointing feature (#1891)
* add FSDP activation checkpointing feature

* fix formatting issue

* fix code formatting issue
2023-08-29 10:56:08 +05:30
44adf1e14f Fix nb launcher test (#1899)
* Try with raw subprocess

* Skip test for now

* Clean
2023-08-28 14:44:18 -04:00
ce870e1ce1 Final nits on model util (#1896)
* Nits

* Annoying markdown tables

* Try with one

* I give, try raw md

* Moot

* W/o code tick

* Markdown
2023-08-28 09:47:44 -04:00
1ace672d3e Update dataclasses.py (#1894) 2023-08-28 17:40:14 +05:30
e2ae254008 Add hub as core dep (#1885)
* Add hub as dep

* Missing refs
2023-08-25 10:05:58 -04:00
0fa291e707 Add doc on model memory usage (#1887)
* Doc

* Note on meta

* Phrase

* Apply suggestions from code review

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Clarity nit

* Nits

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2023-08-25 10:03:39 -04:00
ba6f11ec3e Enable a token to be used (#1886)
* Enable based on passing the token

* Doc more
2023-08-24 15:43:37 -04:00
430ee9df6b Update with new url (#1884) 2023-08-24 12:52:09 -04:00
409a9df0a4 Introduce model memory estimator (#1876)
* Estimator

* Right err

* Fixup tests

* trust remote code

* Print output for debugging purposes

* trust_remote_code

* Address some comments

* change doc to req arg

* Properly check for _no_split_modules in transformer models

* Note on transformer models

* Check/handle petabytes

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Tests are passing locally again, better handle for no_split

* Adjust setup?

* Let's see if the cleaner version works

* Refactor and clean up for testing

* Specify in comments

* Better error handling

* A million tests later

* More tests + err handling

* Require hub

* More with remote code

* Clean up

* Add a test for no_split

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Docstring

* Address some comments

* rm einops

* Let it err out

* Adjust errs

* Tests

* Reduce test repeats

* Clean up borders

* Tip on 20%

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-08-24 12:12:01 -04:00
acad5bae5c Enable power users to bypass device_map="auto" training block (#1881)
* Enable TP greedy env var

* Right env setting

* Use true, not false

* Design nit

* ACCELERATE_BYPASS_DEVICE_MAP
2023-08-24 10:27:59 -04:00
81b19c4094 fix detach_hook (#1880) 2023-08-23 15:15:27 -04:00
3e97a9172b Update release instructions (#1877)
* Update release instructions

* Update setup.py

Co-authored-by: Lysandre Debut <lysandre.debut@reseau.eseo.fr>

---------

Co-authored-by: Lysandre Debut <lysandre.debut@reseau.eseo.fr>
2023-08-23 16:04:09 +02:00
812719644d v0.23.0.dev0 2023-08-23 02:25:56 -04:00
16e5113f8a Improve big model inference docs (#1872)
* Start of rework

* Refactor doc

* Got too used to quarto

* They're top level

* md link

* phrasing

* Remove indent
2023-08-22 07:11:12 -04:00
3122a6164d Include a note to the forums in the bug report (#1871)
* gs

* New version
2023-08-21 11:48:39 -04:00
c8682ae74c support custom slice function in DataLoaderDispatcher (#1846)
* save progress

* work on suggestions

* work on some suggestions

* last suggestion

* oops, mini bug
2023-08-21 17:16:43 +02:00
0768905f77 remove casting to FP32 when saving state dict (#1868)
* remove casting to FP32 when saving state dict

* update docs.
2023-08-21 19:08:29 +05:30
d087be0156 add env variable for init_on_device (#1852) 2023-08-18 23:20:50 +02:00
41caaa56e1 Update fsdp_with_peak_mem_tracking.py (#1856) 2023-08-18 13:34:31 +05:30
21d127334e fix dispatch (#1855) 2023-08-17 12:23:50 -04:00
3cf7dee576 Loading logic safetensors (#1853)
* add logic in loading for safetensors

* fix style
2023-08-17 10:46:49 -04:00
64c586f5eb support for ram efficient loading of model with FSDP (#1777)
* support for ram efficient loading of model with FSDP

* with default behaviour of efficient loading when using FSDP, `sync_module_states` needs to be `True`

* fixes

* Update accelerator.py

* Update dataclasses.py
2023-08-16 15:23:20 +05:30
0e714f5ba4 Fix the noneffective parameter: gpu_ids (#1850)
Co-authored-by: Devymex <wangyumeng02@megvii.com>
2023-08-16 09:27:13 +02:00
92f23e123d Change CUDA check (#1833)
* Move into check-device

* Use proper solution and write test

* Move test

* Avoid circular import

* Remove patchenv altogether

* New version

* Better way, run a verification test

* Final working version

* Debug mode

* doc

* Just debug

* Doc

* print
2023-08-16 03:21:30 -04:00
f67e11afd7 Fix verify_device_map (#1842)
* make verify_device_map return True only if device map has more than 1 element

* Fix style and comment

* fix style
2023-08-14 11:44:41 -04:00
6458058559 FIX: Bug with unwrap_model and keep_fp32_wrapper=False (#1838)
Using accelerator.unwrap_model(model, keep_fp32_wrapper=False) results
in a defective forward method. This bug was (probably) introduced in
PR #872.

Wrapping the method in MethodType (as elsewhere in code) resolves the
issue.
2023-08-14 10:50:38 +02:00
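For illustration, this is the general `types.MethodType` pattern the fix relies on (a toy model, not the unwrap_model code itself):

```python
import types

import torch.nn as nn


class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 2)

    def forward(self, x):
        return self.linear(x)


model = TinyModel()
plain_function = model.forward.__func__  # underlying function, no bound `self`

# Rebinding with MethodType keeps `self` attached, so model.forward(x) still works
# after a wrapper around forward is removed.
model.forward = types.MethodType(plain_function, model)
```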
4d13e4e474 fix bug in dev properties for ipex (#1834) 2023-08-11 09:15:15 +02:00
058a3546ea use device as context manager for init_on_device (#1826) 2023-08-10 09:35:00 +02:00
98ecab2083 Minor idiomatic change. (#1829) 2023-08-10 09:06:26 +02:00
b30a349078 Better test (#1825)
* Better test

* Test

* Comment
2023-08-09 02:22:31 -04:00
7cb19ae613 Expose a bit of args/docstring fixup (#1824)
* Expose a bit

* docstring
2023-08-08 11:26:50 -04:00
39897a0662 Update docs and docstrings to match load_and_quantize_model arg (#1822)
* Update quantization.md with correct bnb_quantization_config args

* Update load_and_quantize_model docstring to match bnb_quantization_config arg
2023-08-08 10:20:03 -04:00
aa71bb815a Fix bnb import (#1813)
* Fix import

* Fix bnb

* Comment
2023-08-08 10:17:27 -04:00
f43a08a9c5 add warning when using to and cuda (#1790)
* add warning when using to and cuda

* more warning

* style

* change warning msg

* fix typo

* better check

* Update src/accelerate/big_modeling.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-08-08 10:08:50 -04:00
b42c65b729 Improve docs on grad accumulation (#1817)
* Improve docs on grad accumulation

* Update docs/source/usage_guides/gradient_accumulation.md

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* fix

* address feedback

* Update docs/source/usage_guides/gradient_accumulation.md

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-08-07 17:28:01 +02:00
7bad726935 Bibtex (#1820) 2023-08-07 11:21:40 -04:00
29ff7c3911 Expand device-map warning (#1819)
* Propagate to general prepare

* Move test to general tester

* Keep in model

* Keep in multi-gpu

* Apply suggestions from code review

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2023-08-07 11:04:29 -04:00
30eff605df Typo fix (#1812) 2023-08-04 11:18:14 -04:00
fc95663e03 Detect device map auto and raise a helpful error (#1810) 2023-08-04 10:02:27 -04:00
49cb83a423 More specific logging in gather_for_metrics (#1784)
* Start on testing behavior

* Add test to capture current behavior

* Cleanup test; add length to DummyIterableDataset

* Remove wip test from test_dataloader.py

* Only check on remainder state if we're at the end of a dataloader

* Cleanup

* Fix style

* Move test to test_metrics

* Remove 2 num_process assertion so that we test on single-GPU as well,
why not

* Use `isinstance()` instead of `type()` in test_metrics

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-08-03 12:38:58 -04:00
d2b159ea1a Fix pytest import (#1808)
* pytest

* Fully rm pytest

* Doc

* Works
2023-08-03 11:00:16 -04:00
40056c69d1 Add FSDP for NPU (#1806)
* Add FSDP for NPU

* enable fsdp's test case for npu&xpu
2023-08-03 11:35:29 +02:00
505b5be044 Add FSDP for XPU (#1803)
* fsdp for xpu

* add fsdp xpu
2023-08-02 15:34:55 -04:00
a6333f2e7c Changed allow_val_change param (#1796) 2023-08-02 13:42:11 -04:00
0dec477985 add support of float memory size in convert_file_size_to_int (#1799)
* support float memory size

* add unit test for
2023-07-31 15:43:19 -04:00
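Illustrative usage of the float support (the expected values assume the decimal interpretation of GB/MB used by the existing integer path; check the utility's docstring):

```python
from accelerate.utils import convert_file_size_to_int

print(convert_file_size_to_int("1.5GB"))  # expected: 1_500_000_000
print(convert_file_size_to_int("300MB"))  # expected: 300_000_000
```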
a24189db35 reserve 10% GPU in get_balanced_memory to avoid OOM (#1798)
* reserve 10% GPU to avoid OOM

* update warning message

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* use logger.info

* clean up comment

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-07-31 15:42:55 -04:00
a9aee447ee Fix import error when torch>=2.0.1 and torch.distributed is disabled (#1800) 2023-07-31 11:27:45 -04:00
d5894ab499 Set ipex default (#1776) 2023-07-26 12:20:13 -04:00
6f14928e28 simplify and correct the deepspeed example (#1775)
* simplify and correct the deepspeed example

* Update deepspeed_with_config_support.py

* 🐛 fix
2023-07-26 17:59:13 +05:30
777334a803 [FSDP] Fix load_fsdp_optimizer (#1755) 2023-07-26 14:23:01 +05:30
c3d82d24e2 Contiguous on gather (#1771)
* For testing

* Contiguous
2023-07-25 13:44:08 -04:00
6e70e79e4e Support wrapping multiple models in Accelerator.accumulate() (#1708)
* Support wrapping multiple models in Accelerator.accumulate()

* Fix style.

* Rename variable

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update doc.

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update variable name.

---------

Co-authored-by: YU Xinyuan <yuxinyuan02@corp.netease.com>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-07-25 12:22:36 -04:00
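A minimal sketch of accumulating with two prepared models at once, per this change (the toy models and optimizers are illustrative):

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator(gradient_accumulation_steps=2)

generator = torch.nn.Linear(4, 4)
discriminator = torch.nn.Linear(4, 1)
opt_g = torch.optim.Adam(generator.parameters())
opt_d = torch.optim.Adam(discriminator.parameters())
generator, discriminator, opt_g, opt_d = accelerator.prepare(
    generator, discriminator, opt_g, opt_d
)

x = torch.randn(8, 4, device=accelerator.device)
with accelerator.accumulate(generator, discriminator):  # several models in one call
    loss = discriminator(generator(x)).mean()
    accelerator.backward(loss)
    opt_g.step()
    opt_d.step()
    opt_g.zero_grad()
    opt_d.zero_grad()
```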
b3fc3c9067 Introduce an experimental distributed operations framework (#1756)
* First version

* As decorator

* Better err

* Limit

* Partial state

* More work

* Tests + config

* Debug mode

* Flag

* Rm references to debug mode, debug

* Tests

* Docs

* Nit

* Disable debug in config

* Support dict
2023-07-25 11:39:31 -04:00
a9d79163e5 Change is_aim_available() function to not match aim >= 4.0.0 (#1769)
* Change is_aim_available() function to not match aim >= 4.0.0

* Use compare_versions utility function in is_aim_available
2023-07-25 09:07:06 -04:00
0b36ca6e64 Fix offload on disk when executing on CPU (#1762)
* Fix offload on disk when executing on CPU

* Actually refine the error instead
2023-07-24 11:09:29 -04:00
f3b7f9cf25 Fix error when max_memory argument is in unexpected order (#1759)
* sort the user-provided max_memory keys in gpu-cpu-disk order

* fixed the bug by adding disk to main devices

* add checking for max_memory argument

* Update src/accelerate/utils/modeling.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/accelerate/utils/modeling.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/accelerate/utils/modeling.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* fix typo

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* fix typos

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-07-24 09:23:04 -04:00
b909bfacb9 Fix check failure in Accelerator.save_state using multi-gpu (#1760) 2023-07-24 09:03:45 -04:00
a2d8f540c3 FSDP enhancements and fixes (#1753)
* if the model is already an FSDP instance, remove the warning and prep overhead

* allow usage of `_no_split_modules` to simplify UX when using FSDP

* Update other.py

* fixes
2023-07-21 17:52:37 +05:30
e8ed10ae62 Fix FSDP related issues (#1745)
* Update fsdp_utils.py

* other FSDP fixes

* revert as this is resulting in more vram usage

* revert

* Update fsdp_utils.py
2023-07-21 12:16:45 +05:30
a6291e43b0 Expose autocast kwargs and simplify autocast wrapper (#1740)
* kwarg handler

* Proper default

* Enabled

* Rework

* Clean

* Ref autocast properly
2023-07-20 12:49:30 -04:00
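A hedged sketch of passing autocast options through a kwargs handler (the field mirrors `torch.autocast`'s arguments; verify against the release docs):

```python
from accelerate import Accelerator
from accelerate.utils import AutocastKwargs

autocast_kwargs = AutocastKwargs(cache_enabled=True)
accelerator = Accelerator(mixed_precision="bf16", kwargs_handlers=[autocast_kwargs])
```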
2a289f6108 Rework new constant for operations (#1748)
* Rework new constant

* Naming for clarity

* Rm _cpu

* clean
2023-07-20 11:26:35 -04:00
cafc7f785f Remove unused constant (#1749)
* Rm unused

* Clean
2023-07-19 17:12:00 -04:00
39889c7304 Check for misconfiguration of single node & single GPU (#1746)
* Check for misuse

* Right area

* Space
2023-07-19 17:11:53 -04:00
12d5a2d0da fix typo (#1747) 2023-07-19 13:25:35 -04:00
243288627d fix KwargsHandler.to_kwargs not working with os.environ initialization in __post_init__ (#1738)
* fix KwargsHandler.to_kwargs not working with os.environ initialization in __post_init__

* fix test_torch_dynamo_plugin such that it wouldn't change os.environ permanently

* move clear_os_environ func to utils/other and rename it

* reformat code in order to pass ci quality check

* modify the comment of utils.other.clear_environment
2023-07-19 12:00:53 -04:00
efc1fa8376 Let load_state automatically grab the latest save (#1741)
* Automatic load state

* docstring

* Quality
2023-07-18 14:56:20 -04:00
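A hedged usage sketch, assuming automatic checkpoint naming is enabled so that a bare `load_state()` can resolve the most recent save (the project directory is a placeholder):

```python
from accelerate import Accelerator
from accelerate.utils import ProjectConfiguration

config = ProjectConfiguration(project_dir="runs/example", automatic_checkpoint_naming=True)
accelerator = Accelerator(project_config=config)

accelerator.save_state()  # e.g. runs/example/checkpoints/checkpoint_0
accelerator.load_state()  # no path given: resume from the latest checkpoint
```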
18e3012489 Fixed the bug that split dict incorrectly (#1742)
* Fixed the bug that split dict incorrectly

* fix list index-out-of-range error and test script
2023-07-18 14:54:25 -04:00
daa1952f47 Update docs (#1736)
* Still in works

* Utils to check

* More references

* Fin

* add utils

* toctree
2023-07-18 07:28:01 -04:00
653ba110d3 Fixed typo in repr of AlignDevicesHook (#1735)
Changed class name in the repr from AlignDeviceHook to AlignDevicesHook
2023-07-17 10:50:22 -04:00
f518b0ab03 make balanced memory able to work with non continguous GPUs ids (#1734) 2023-07-17 10:49:08 -04:00
3a05e0cf70 Fix errors when optimizer is not a Pytorch optimizer. (#1733)
* Fix errors when optimizer is not a Pytorch optimizer.

* update

---------

Co-authored-by: YU Xinyuan <yuxinyuan02@corp.netease.com>
2023-07-17 07:11:02 -04:00
299f3ef8ab Adding a shape check for set_module_tensor_to_device. (#1731)
* Fixing set_module_tensor_to_device.

* Adding a shape check for `set_module_tensor_to_device`.

* Update src/accelerate/utils/modeling.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update error msg.

* Style.

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-07-14 17:46:52 +02:00
925a13eb04 fix the bug in npu (#1728)
* enable test_sync for npu

* fix the bug in get_cluster_input for npu

* fix the bug in broadcast for npu
2023-07-14 09:31:04 -04:00
4170f395d1 Get rid of calling get_scale() by patching the step method of optimizer. (#1720)
* Get rid of calling get_scale() by patching the step method of optimizer.

* Fix when step() is already patched by other parties.

* support pickle

* Minor updates.

* Change _accelerate_num_step_called to _accelerate_step_called

---------

Co-authored-by: YU Xinyuan <yuxinyuan02@corp.netease.com>
2023-07-14 07:56:45 -04:00
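A rough sketch of the patching idea: record whether `optimizer.step()` actually ran via a flag, instead of querying `scaler.get_scale()` (the flag name comes from the commit message; everything else is illustrative):

```python
import functools

import torch


def patch_step(optimizer: torch.optim.Optimizer) -> torch.optim.Optimizer:
    original_step = optimizer.step

    @functools.wraps(original_step)
    def patched_step(*args, **kwargs):
        optimizer._accelerate_step_called = True  # flag checked later by the wrapper
        return original_step(*args, **kwargs)

    optimizer._accelerate_step_called = False
    optimizer.step = patched_step
    return optimizer
```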
bb47344c77 Better control over DDP's no_sync (#1726)
* add `ddp_trigger_sync_in_bwd` to accelerator with test

* add example to `ddp_trigger_sync_in_bwd`

* support case of non-DDP model

* style

* make style

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* model_ddp -> model

* .

* .

* .

* Update src/accelerate/accelerator.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* add comment

* style

* style

* Update src/accelerate/accelerator.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-07-13 18:29:02 +02:00
243cd82409 fix failing test on 8GPU (#1724) 2023-07-13 11:45:45 -04:00
51f5e829a8 v0.22.0.dev0 2023-07-13 11:20:38 -04:00
5b9c5881b6 add compatibility with peft (#1725)
* add compatibility with peft

* update docs
2023-07-13 10:33:44 -04:00
0209606364 add Comfy-UI (#1723) 2023-07-13 19:02:50 +05:30
5909c1a514 Fix typo 2023-07-13 09:27:30 -04:00
e7150b0b15 New tactic (#1719) 2023-07-12 18:50:17 -04:00
e8c64f598b Remove duplicate code (#1717) 2023-07-12 14:22:07 -04:00
a14081ccc5 Optimize get_scale to reduce async calls (#1718)
* Optimize

* Comment
2023-07-12 14:00:28 -04:00
d895809613 Keep old behavior (#1716) 2023-07-12 13:24:31 -04:00
02015eb25c fix version (#1701) 2023-07-12 11:48:48 -04:00
19bcd43e14 Modify loading checkpoint behavior (#1715)
* Add check for the whole state dict

* fix style
2023-07-12 11:48:06 -04:00
59f2fff3cf add multi_gpu decorator (#1712) 2023-07-12 11:17:07 -04:00
c33adecc9f Add Ascend NPU accelerator support (#1676)
* add Ascend NPU accelerator support

* fix code  styles

* enable accelerate test on npu

* fix typo&code styles

---------

Co-authored-by: jihuazhong <jihuazhong1@huawei.com>
2023-07-12 08:43:02 -04:00
518c206a2a Fix the bug where DataLoaderDispatcher gets stuck in an infinite wait when the dataset is an IterDataPipe during multi-process training. (#1709)
Co-authored-by: YU Xinyuan <yuxinyuan02@corp.netease.com>
2023-07-12 07:44:36 -04:00
65b5c2cfad Fixes for issue #1683: failed to run accelerate config in colab (#1692)
* Fixes for issue #1683: failed to run accelerate config in colab

Fixes for issue #1683: failed to run accelerate config in colab

* Fixes for issue #1683: failed to run accelerate config in colab, change input2 to a formal variable name

change input2 to a formal variable name

* Fixes for issue #1683: failed to run accelerate config in colab

removed unnecessary spaces

* Fix for #1683 failed to run accelerate config in colab 

fixed reformatting issue, during the quality check

* Fixes for issue #1683: failed to run accelerate config in colab

refactor the code, passed black, ruff, doc-builder test; modified the prompt in colab.

* Fixes for issue #1683: failed to run accelerate config in colab

fixed black, ruff, doc-builder, modified prompt during choice input

* Fixes for issue #1683: failed to run accelerate config in colab

use utils.imports _is_package_available() method instead, to be consistent with the rest of the library code.

* Fixes for issue #1683: failed to run accelerate config in colab

add default choice, wrap up import check with try catch, passed quality check, style check and test cases
2023-07-12 07:15:02 -04:00
7954a28a71 Fix launcher validation (#1705)
* unstash

* fix validation of launcher args

* bug fix

* cond for tpu
2023-07-11 14:30:44 -04:00
3bdb35abfa Skip tests when bnb isn't available (#1706)
* bnb is available

* Some more
2023-07-11 14:29:17 -04:00
d58aac2e1e Update tracking.md (#1702) 2023-07-11 14:15:59 -04:00
a4c2654f50 Deepcopy on Accelerator to return self (#1694)
* Deepcopy

* Clean

* deepcopy
2023-07-11 14:14:15 -04:00
27d29087b2 Add offload for 8-bit model (#1699)
* Add offload for 8-bit model

* fix saved 8bit model offload and add tests

* Update src/accelerate/utils/modeling.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/accelerate/utils/modeling.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* add doc on how offload works

* remove enable_offload

* make style doc

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-07-11 13:46:15 -04:00
c7698834fc Move mixed precision wrapping ahead of DDP/FSDP wrapping (#1682)
* Update accelerator.py

* Update accelerator.py

* Update accelerator.py

* Update accelerator.py

* Update accelerator.py

* Update test_script.py

* Update test_script.py

* Update test_script.py

* Update test_script.py

* Update test_script.py
2023-07-11 10:35:13 -04:00
64d7b58c44 Improve quality errors (#1698)
* Purposefully fail

* Step summary

* Right bash

* Take 2

* Post to job summary

* Extra space
2023-07-11 09:09:02 -04:00
e3aae2ac65 Fixup docs (#1697) 2023-07-11 08:36:37 -04:00
d0a7991b65 Fix nightly tests (#1696)
* Debug start

* Fix

* Workflow
2023-07-11 08:36:23 -04:00
180ef7c415 update readme in examples (#1678) 2023-07-10 12:19:27 -04:00
95bffdec43 remove duplicate class (#1691) 2023-07-07 10:29:00 -04:00
c74c28c6d1 Fix workflow CI (#1690)
* Try again

* Accelerate only

* Try pushing again
2023-07-07 09:46:00 -04:00
e0f5e03009 fix bnb tests (#1679)
* fix tests

* Fix 8bit serialization tests
2023-07-05 10:13:20 -04:00
dfbfbdfea8 Add docs for saving Transformers models (#1671)
* add section to package_reference/accelerator.md explaining saving for Transformers models

* rename `model` to `unwrapped_model`

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-07-03 10:34:30 -04:00
24ae624d96 Doc big model inference (#1670)
* change example

* fix spaces

* add link to transformers

* Fix style
2023-06-30 18:00:52 -04:00
40f822a1e3 replace save funct in doc (#1672) 2023-06-30 17:03:19 -04:00
a0bfe2140c Bnb quantization (#1626)
* Add get_quantized_model func

* Add tests for 4bit and 8bit quantization

* Add tests

* Fix style

* Add offload tests

* Fix style

* Fix

* Fix conflict

* fix generate quality test

* fix style

* add check for bnb layers and fix .to(cpu)

* Fix 8bit serialization and memory issue

* add import

* Change quantize_model to load_and_quantize_model

* Add tests for saving 8bit model

* Fix bnb dataclass

* fix style

* fix tests

* fix style

* remove dependency on tie_weights

* remove dependency on base_model_prefix

* remove dependency on device

* fix style

* Add doc about quantization

* fix import

* Fix text

* fix func name

* fix arg in dataclass

* Update docs/source/usage_guides/quantization.md

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* fix funct name

* Add real model

* Fix doc

* put bash tag

* Update src/accelerate/utils/bnb.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-06-30 10:59:04 -04:00
c6443f8bd4 Update broken Runhouse link in examples/README.md (#1668) 2023-06-30 08:51:28 -05:00
3cd02e9340 change the import place to avoid import error (#1653) 2023-06-30 11:55:30 +05:30
17ec2ede11 remove safetensor dep on shard_checkpoint (#1664)
* remove safetensor dep on shard_checkpoint

* fix style

* group function
2023-06-29 11:23:13 -04:00
e30938700a 🚨🚨🚨 Spring cleaning: PyTorch 1.10 🚨🚨🚨 (#1662)
* Bookmark

* Bump torch v

* More stuff

* Remove never called else
2023-06-29 09:26:15 -04:00
b864946606 🚨🚨🚨 Spring cleaning: Python 3.8 🚨🚨🚨 (#1661)
* Py 3.8

* Rm typed dict

* Workflows
2023-06-29 08:46:19 -04:00
bc234c040c [BigModeling] Final fix for dispatch int8 and fp4 models (#1660)
* final fix for dispatch int8 and fp4 models

* Update src/accelerate/big_modeling.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2023-06-28 11:16:13 -04:00
662a7dd905 docker cpu py version (#1659) 2023-06-28 10:37:29 -04:00
d3db2d4fe5 TIL (#1657) 2023-06-28 10:36:49 -04:00
96f926a25e Bump integration (#1658) 2023-06-28 10:32:43 -04:00
a9d43cda80 [BigModeling] Add missing check for quantized models (#1652)
* add missing check

* better check

* better check

* much better check
2023-06-28 16:07:30 +02:00
effccbdc84 Check for port usage before launch (#1656)
* Check for port usage

* Just comm

* Right flag in err

* Better err, happy now
2023-06-28 09:10:01 -04:00
d141b4ce79 Fix device_map (#1651) 2023-06-27 21:36:00 -04:00
bc49d0f9b3 Doc save model (#1650)
* add doc for save_model func

* fix doc

* fix path issue

* add load_checkpoint_in_model doc in utilities

* oups

* Update docs/source/package_reference/utilities.md

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-06-27 16:08:56 -04:00
5ea7c81277 Change dispatch_model when we have only one device (#1648)
* Change dispatch_model when we have only one device

* Fix style

* add else statement

* fix style

* Fix error message

* Fix style
2023-06-27 14:58:11 -04:00
efe4481a28 add save model (#1641)
* add save model

* Fix duplicates function and remove args

* Fix style

* fix description

* add save_model to Accelerator object

* Revert "fix potential OOM when resuming with multi-GPU training (#1444)"

This reverts commit 3a381bfa48dfb082c1f8e892a9a07ca5717bf0df.

* Fix style

* Fix description

* Replace state_dict() by accelerator get_state_dict

* Fix state dict

* clean comment
2023-06-27 11:10:42 -04:00
df215cc243 Add skorch to runners (#1646)
* Skorch tests

* Take 2

* runs-on

* Take 2

* Rm needs

* Needs testing deps

* dep

* Only use all GPUs

* Add skorch tests

* rm

* nl
2023-06-27 10:08:22 -04:00
5791d949ff fix modeling low zero (#1634)
* fix modeling low zero

* low zero logic change
2023-06-26 13:19:48 -04:00
b76409ba05 fix autocasting bug (#1637)
* fix autocasting bug

* refactor and resolve comment
2023-06-26 20:18:36 +05:30
a25c4eacae Swap disable rich (#1640) 2023-06-26 09:59:10 -04:00
d8437ae096 Fix nightly 2023-06-26 09:20:01 -04:00
2fa22f3342 deepspeed z2/z1 state_dict bloating fix (#1638)
* deepspeed z2/z1 state_dict bloating fix

* fix
2023-06-26 17:44:36 +05:30
a2ecb58132 fix: Megatron is not installed. please build it from source. (#1636)
The megatron package name mismatches the dist directory name.

Signed-off-by: yuanwu <yuan.wu@intel.com>
2023-06-26 08:13:28 -04:00
73cc944067 fixes offload dtype (#1631)
* Fix offload dtype

* Set dtype on meta device

* fix style
2023-06-22 17:38:09 -04:00
b16916f447 Fix transformers sync bug with accumulate (#1624)
* Fix transformers sync

* Docs + expose

* Right arg

* bool
2023-06-22 04:42:54 -04:00
36f8e48747 Fix workflow (#1625)
* Fix steps

* Right runs-on

* Fix directory

* Just integration

* Fix check

* Disable wandb

* Fin

* Diff
2023-06-21 16:04:55 -04:00
790cb8b461 Fix tb issue (#1623) 2023-06-21 13:48:41 -04:00
7b4d12623a Doc to md (#1618)
* Convert doc files to MD

* Convert doc files to Markdown
2023-06-20 18:12:19 -04:00
956c6baf71 Fix failing multinode tests (#1616)
* Should fix multinode test

* For testing, remove after

* try this

* Try disabling

* Try again

* move more

* Fix multinode tests

* New check

* Fix err

* Fix test
2023-06-20 15:32:13 -04:00
485e8c8cb4 Ignore low_zero option when only device is available (#1617) 2023-06-20 12:28:56 -04:00
aaf38c2f35 fix for arc gpus (#1615) 2023-06-20 11:09:11 -04:00
f433457244 reset end_of_dataloader for dataloader_dispatcher (#1609)
* reset end_of_dataloader for dataloader_dispatcher

* add ruff fixes
2023-06-20 08:41:11 -04:00
535b52cef2 Remove GPU safetensors env variable (#1603) 2023-06-16 10:59:41 -04:00
e60a424398 Remove asking xpu plugin for non xpu devices (#1594)
* remove asking xpu plugin for non xpu devices

* style
2023-06-15 13:11:24 -04:00
32f85ce524 Add triggers for CI workflow (#1597)
* Trigger

* Space
2023-06-15 09:12:41 -04:00
0983a9b9b4 Integration tests (#1593)
* Integration tests

* Typofix

* Clean up python version

* Trainer typo

* Clean env

* rm cache
2023-06-15 02:42:34 -04:00
e5d0df44f0 Update modeling.py (#1595) 2023-06-14 17:59:28 -04:00
50eabe5b1d FSDP updates (#1576)
* FSDP updates

* quality and import fixes

* bug fix and adding contributors

Co-Authored-By: Vik Paruchuri <github@vikas.sh>
Co-Authored-By: raghavanone <115454562+raghavanone@users.noreply.github.com>

* fix 🐛

* update docs and example

* quality

* fixes and updates

* use logger

* fix circular dependency issue

* quality

* refactor

* quality

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* address comment

---------

Co-authored-by: Vik Paruchuri <github@vikas.sh>
Co-authored-by: raghavanone <115454562+raghavanone@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-06-13 20:36:32 +05:30
f2d1047059 Update checkpoint.mdx (#1587) 2023-06-13 09:57:52 -04:00
3e68f1da63 Fix test (#1586) 2023-06-13 09:03:47 -04:00
f8b0696076 fix logger level (#1579) 2023-06-13 08:55:10 -04:00
51a2ca5d88 Return false if CUDA available (#1581) 2023-06-13 08:44:31 -04:00
51de46e368 Update training_tpu.mdx (#1582) 2023-06-13 07:52:59 -04:00
e2b0224ec4 improve oob performance when using mpirun to start DDP finetuning without accelerate launch (#1575)
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2023-06-13 07:52:26 -04:00
db11bd5035 Get Torch version using importlib instead of pkg_resources (#1585)
This fixes the following warning:
> pkg_resources is deprecated as an API
2023-06-13 07:50:12 -04:00
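The replacement pattern, in short (simplified; the actual code also parses the string for comparisons):

```python
import importlib.metadata

# Look up the installed torch version without triggering the pkg_resources deprecation warning.
torch_version = importlib.metadata.version("torch")  # e.g. "2.0.1"
print(torch_version)
```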
543c59af22 Expand prepare() doc (#1580)
* Expand device_placement

* Expand doc

* Update src/accelerate/accelerator.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update accelerator.py

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-06-12 14:37:43 -04:00
81765e6e00 Make sure that we only set is_accelerator_prepared on items accelerate actually prepares (#1578)
* Other items

* Better test and check

* Align test

* Clean
2023-06-12 12:09:31 -04:00
a4ebc14fab fix the bug in xpu (#1508)
* fix bug in is_xpu_available

* fix device configure bug for DDP with ccl backend

* enable accelerate launch for DistributedType.MULTI_XPU

* fix the bug in wait_for_everyone for xpu

* fix the bug in rng_sync_check for xpu

* refactoring code according to muellerzr's suggestion

* define RegressionModel4XPU for xpu to avoid ccl bug

* make MULTI_XPU independent on env var 'CCL_WORKER_COUNT'
2023-06-12 11:34:21 -04:00
058f6f70f5 Permanent solution (#1577) 2023-06-12 11:29:36 -04:00
665d5180fc Check for bak and expand docs on directory structure (#1571)
* Check for bak and expand doc

* Better regex

* Update docstring

* Use exclusion at beginning and simplify check for digit
2023-06-09 13:10:53 -04:00
d1ea9ab40c Introduce listify, fix tensorboard silently failing (#1570)
* Introduce untensorify, fix logging with tensor

* Clean imports and make note

* untensorify -> listify
2023-06-09 12:50:28 -04:00
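The core of the `listify` idea, as a small sketch (hypothetical helper name):

```python
import torch


def listify_sketch(value):
    """Convert tensors to plain Python numbers/lists before handing them to a tracker."""
    if isinstance(value, torch.Tensor):
        return value.item() if value.ndim == 0 else value.tolist()
    return value


print(listify_sketch(torch.tensor(0.25)))        # 0.25
print(listify_sketch(torch.tensor([1.0, 2.0])))  # [1.0, 2.0]
```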
632dce67ab Raise error instead of warn (#1568) 2023-06-09 12:18:26 -04:00
e41864ce9d Update mixed precision integrations in README (#1569) 2023-06-09 11:26:33 -04:00
979991aa78 Update gradient sync docs to reflect importance of optimizer.step() (#1565)
Before this commit, this documentation suggested that model parameters
are updated when `accelerator.backward()` is called (which in turn calls
`loss.backward()`). This isn't the case - parameter updates happen when
`optimizer.step()` is called.

This commit:
1. Updates this documentation to reflect this within the discussion of
   gradient accumulation.
2. Adds calls to `optimizer.step()` as that's key to gradient
   accumulation.
3. Adds optimizer.zero_grad() for consistency with `accelerator.accumulate()`'s docs
4. Does some related word-smithing

To make sure I was thinking about gradient accumulation correctly, I'm
using `huggingface/transformer`'s performance guide for a working
definition of gradient accumulation, which this diff is consistent with:

> The idea behind gradient accumulation is to instead of calculating the
gradients for the whole batch at once to do it in smaller steps. The way
we do that is to calculate the gradients iteratively in smaller batches
by doing a forward and backward pass through the model and accumulating
the gradients in the process. *When enough gradients are accumulated we
run the model’s optimization step*. This way we can easily increase the
overall batch size to numbers that would never fit into the GPU’s
memory. In turn, however, the added forward and backward passes can slow
down the training a bit.

(https://huggingface.co/docs/transformers/perf_train_gpu_one#gradient-accumulation)

Another huggingface example of gradient accumulation that is consistent
with this change: [run_glue_no_trainer.py][0]

[0]: https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue_no_trainer.py#L518-L532
2023-06-09 09:30:43 -04:00
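A minimal, self-contained sketch of the pattern the commit above documents (the toy model and data are placeholders, not code from the PR): gradients accumulate across `accelerator.backward()` calls, and parameters only change when `optimizer.step()` runs.

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator(gradient_accumulation_steps=4)
model = torch.nn.Linear(8, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
dataloader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(torch.randn(32, 8), torch.randn(32, 1)),
    batch_size=4,
)
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for inputs, targets in dataloader:
    with accelerator.accumulate(model):
        loss = torch.nn.functional.mse_loss(model(inputs), targets)
        accelerator.backward(loss)  # gradients accumulate here
        optimizer.step()            # parameters are only updated here
        optimizer.zero_grad()       # matches accelerator.accumulate()'s docs
```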
7fc1e438d1 [bnb] Fix failing int8 tests (#1567)
* fix int8 tests

* replace with `replace_8bit_linear`
2023-06-09 14:53:07 +02:00
040f178569 Update big_modeling.mdx (#1564) 2023-06-08 15:52:05 -04:00
87c81315a1 Reset dataloader end_of_dataloader at each iter (#1562) 2023-06-08 12:08:17 -04:00
f1e84decc9 [core] Fix possibility to pass NoneType objects in prepare (#1561)
* add possibility to pass nonetype objects

* adds nice test
2023-06-08 14:56:22 +02:00
eafddf02e3 fix the typo when setting the "_accelerator_prepared" attribute (#1560)
* fix the typo when setting the "_accelerator_prepared" attribute

* use the name "_is_accelerate_prepared" instead
2023-06-07 18:18:08 -04:00
f0029d6f60 Fix tests not being ran on multi-GPU nightly (#1558)
* Fix tests not being ran

* More tests
2023-06-07 15:14:02 -04:00
3147de9010 Fix load_state_dict when there is one device and disk (#1557) 2023-06-07 14:57:20 -04:00
d448ebaf90 Update README.md (#1556) 2023-06-07 14:44:27 -04:00
65dd4f2039 Avoid double wrapping of all accelerate.prepare objects (#1555)
* Add step reset to free memory

* Check if not Accelerated Optimizer

* Continue

* Another try

* Check the rest

* Try with just check on init

* Change logic based on review

* Update

* Oops very big logic issue!
2023-06-07 13:37:19 -04:00
7ee2c79da9 Update launch.mdx (#1553) 2023-06-07 13:35:51 -04:00
bbe2e30901 [doc build] Use secrets (#1551) 2023-06-07 18:42:09 +02:00
0ab72613a7 v0.21.0.dev0 2023-06-07 10:12:36 -04:00
6f14e619b2 Update migration.mdx (#1549) 2023-06-07 09:50:09 -04:00
90e9703d99 Eval mode (#1540) 2023-06-07 09:27:05 -04:00
5f21cde3c7 [documentation] grammar fixes in gradient_synchronization.mdx (#1547)
* Update deferring_execution.mdx

* [documentation] grammar fixes in gradient_synchronization.mdx

These changes are grammatical and do not affect the ideas communicated in the file.
2023-06-06 17:06:03 -04:00
76ccfae682 Add mps support to big inference modeling (#1545)
* Add mps support

* make style

* Fix syntax

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Fix condition

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-06-06 16:31:02 -04:00
62357f218f Apply deprecations (#1537)
* MPS

* Update examples

* Fix env var

* device type

* Fix test
2023-06-06 13:04:45 -04:00
be1b76e97a Update deferring_execution.mdx (#1544) 2023-06-06 11:59:30 -04:00
3f2b5da094 Update performance.mdx (#1543) 2023-06-06 09:54:25 -04:00
3f1cb09e7b Update deepspeed.mdx (#1541) 2023-06-06 09:54:03 -04:00
7a39d928f5 Prevent using extra VRAM for static device_map (#1536) 2023-06-06 09:31:41 -04:00
961fe728d9 remove ipexplugin, let ACCELERATE_USE_IPEX/ACCELERATE_USE_XPU control the ipex and xpu (#1503)
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2023-06-06 09:27:31 -04:00
ef0c4bf277 Officially support naive PP for quantized models + PEFT (#1523)
* officially support naive PP

- relax check
- add test

* Apply suggestions from code review

* more tests

* Update src/accelerate/accelerator.py
2023-06-06 14:41:59 +02:00
de855b3247 Raise ValueError on iterable dataset if we've hit the end and attempting to go beyond it (#1531)
* Raise ValueError on iterable

* Clean
2023-06-06 07:51:22 -04:00
b9628f13c2 Check tied parameters (#1529)
* Check that parameters are tied correctly

* Fix style

* Fix condition

* Fix failing test

* Fix check_tied_parameters function

* Fix condition

* Fix arg

* Apply suggestions from code review

Fix log

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Fix tests and comments

Fix comments and tests

Fix description

* Remove dep

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-06-05 15:17:49 -04:00
16ca01feea Refactor mp into its own wrapper (#1527)
* Better, clean version

* Diff

* oops need return

* Make adjustments

* Docstring
2023-06-05 12:00:51 -04:00
4cbbde8945 Fixup deepspeed/cli tests (#1526) 2023-06-05 11:35:21 -04:00
eba6eb79dc Fix a bug when parameters tied belong to the same module (#1514)
* Fix a bug when parameters tied belong to the same module

* Address review comments

* Add tests
2023-06-02 17:07:39 -04:00
109f3272f5 Swap env vars for XPU and IPEX + CLI (#1513)
* Swap env vars

* Clean up CLI

* use_xpu

* Add CLI docs

* Ipex only

* Nit

* Check

* Capitalize

* Make changes from review
2023-06-02 13:30:16 -04:00
85901cdcf9 should set correct dtype to ipex optimize and use amp logic in native… (#1511)
* should set correct dtype to ipex optimize and use amp logic in native_amp logic in prepare_model

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

* remove mix precision set in ipex, directly use it from accelerate state

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

* raise import error if ipex is not valid in prepare ipex

* Update src/accelerate/accelerator.py

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

---------

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2023-06-02 10:45:17 -04:00
5e74d932b9 NVME path support for deepspeed (#1484)
* NVME path support for deepspeed

* modify stage 3 ds test

* review commit and fixes

* review commits
2023-06-02 09:55:17 -04:00
090c65cd9d Add assertion when call prepare with deepspeed config. (#1468) 2023-06-02 09:55:04 -04:00
b7d5d9072a adjust overriding of model's forward function (#1492)
* adjust overriding of model's forward function

* bug fix

* extend solution to all model.forward overrides

* leave fp8 section alone

* make style

---------

Co-authored-by: root <root@orttrainingdev8.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
Co-authored-by: Prathik Rao <prathikrao@microsoft.com@orttrainingdev8.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
2023-06-02 07:52:56 -04:00
d4262021d5 Fix 4bit model on multiple devices (#1506)
* Add 4bit case and fix device index

* Fix style
2023-06-01 15:10:51 -04:00
8ae56dc51d [bnb] Add fp4 support for dispatch (#1505)
* add fp4 support for dispatch

* add tests

* refactor
2023-06-01 20:41:03 +02:00
c9fbb71e37 fix crash when ipex is installed and torch has no xpu (#1502)
also, when the cpu flag is set, it should use CPU instead of XPU
2023-06-01 11:48:55 -04:00
4d583ad6a1 Allow key skipping in big model inference (#1491)
* Allow key skipping in big model inference

* Add a repr
2023-05-31 15:04:52 -04:00
70d999ee4a Use empty like when we only need to create buffers (#1497)
* Use empty like

* Make
2023-05-31 11:53:17 -04:00
3913fa4dd0 Let gather_for_metrics always run (#1496) 2023-05-31 10:59:31 -04:00
f9b2e6769b Update README.md (#1493) 2023-05-31 09:25:29 -04:00
d3f8c52f4c Only use IPEX if available (#1495)
* Only use IPEX if available

* Check first, then make plugin
2023-05-31 08:18:13 -04:00
af12e7b023 Add rdzv-backend (#1490)
* Add rdzv

* rm print

* Doc

* Better help
2023-05-31 08:06:55 -04:00
68376babd8 Fix gradient state bugs in multiple dataloader (#1483)
* Fix gradient state bugs in multiple dataloader

* Fix style issue

* Update src/accelerate/data_loader.py

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

* Add docstring

* Fix style

---------

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2023-05-30 10:56:42 -04:00
7d24bdefb5 Move to device (#1478) 2023-05-26 15:01:02 -04:00
bb296348e1 Split tensors as part of split_between_processes (#1477)
* Try with this

* Remove import to be late

* Apply padding properly for tensors

* Pad across tensors

* Check to see if this works

* Use -1

* Properly send the first item as what's to be padded

* Update docstring

* Add tests

* Fix test

* Update typehints and docstrings
2023-05-26 14:23:07 -04:00
0226f75025 Improve SageMaker (#1470)
* Should fix everything now:

* Simplify logic
2023-05-24 15:50:31 -04:00
419c9ce22a Update gradient accumulation docs, and remove redundant example (#1461) 2023-05-24 10:43:42 -04:00
2249fbde0d update register_empty_buffer to match torch args (#1465) 2023-05-24 08:32:38 -04:00
e0ffea5bc3 Check for xpu specifically (#1472) 2023-05-23 12:42:12 -04:00
9a86a49f72 update conversion of layers to retain original data type. (#1467)
* add dtype to retain original dtype of layers in convert_model

* updated params_dtype

* ran make style, quality
2023-05-23 05:19:57 -04:00
70920895e8 Fix skip first batch being permanent (#1466)
* Better version of fix

* Failing diff test

* Special str
2023-05-22 14:18:16 -04:00
bf3cd30a66 4-bit QLoRA via bitsandbytes (4-bit base model + LoRA) (#1458)
* Added change for FP4.

* fix suggestion

* better check

---------

Co-authored-by: younesbelkada <younesbelkada@gmail.com>
2023-05-22 11:35:14 -04:00
bfa74e51d2 Document how to use commands with python module instead of argparse (#1457)
* Include other commands

* Add another paragraph

* Reverse order

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

---------

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
2023-05-19 12:32:54 -04:00
e6699e6aba Refactor and simplify xpu device in state (#1456)
* Refactor and simplify xpu device in state

* review commit
2023-05-19 10:43:24 -04:00
0871e93a74 fix error for CPU DDP using trainer api. (#1455)
init_process_group() got multiple values for argument 'backend'

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2023-05-19 06:32:11 -04:00
86720fdb11 Adds in_order argument that defaults to False, to log in order. (#1262)
* Adds `in_order` argument that defaults to False, to log in order.

Adds an `in_order` argument that defaults to `False`, to log in order.
It really helps with readability. Defaults to `False` so as not to break backwards compatibility.

* fixed formatting

* Update src/accelerate/logging.py

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

* Fixed quality & suggestions

---------

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2023-05-18 15:01:26 -04:00
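A short, hedged usage sketch of the flag added here (the message text is a placeholder; `get_logger` and `main_process_only` are the pre-existing pieces of `accelerate.logging`):

```python
from accelerate import Accelerator
from accelerate.logging import get_logger

accelerator = Accelerator()
logger = get_logger(__name__, log_level="INFO")

# With in_order=True, each process logs in turn instead of interleaving,
# which is what the PR says helps readability; it defaults to False so
# existing behaviour is unchanged.
logger.info("hello from every process", main_process_only=False, in_order=True)
```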
1deab71e3c Update with cli instructions (#1453)
* Update with cli instructions

* Also update basic tut
2023-05-18 11:32:26 -04:00
5d1cee3d81 Auto multigpu logic (#1452) 2023-05-18 11:12:58 -04:00
5904f56c45 [docs] Replace state.rank -> process_index (#1450)
I couldn't find a rank property in `PartialState`.
2023-05-18 07:13:39 -04:00
99d790dc34 split_between_processes (#1449) 2023-05-17 15:35:36 -04:00
1760d2dc8c Add to (#1448) 2023-05-17 14:52:25 -04:00
b93bfac16d Distributed prompting/inference utility (#1410)
* Splitter

* Rename and fix

* Change value

* Add plus 1?

* mvp

* Nested processes

* Start of implementation

* Fin

* Introduce util

* Return non-nested for now

* Future annotation

* Fix

* Fix failing tests, make it fully nested

* Fin

* Start doc

* Fixup tests

* Add is_torch_version

* Should work now with padding

* Include padding

* Docstrings

* toctree

* Dash

* Note on when padding is needed

* Apply typo fixes from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Try quicklink

* Use dash

* URL

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-05-17 14:41:25 -04:00
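A brief usage sketch of the utility this PR introduces (the prompts are placeholders): each process receives its own slice of the inputs, and `apply_padding` can even out the slices when results need to be gathered afterwards.

```python
from accelerate import PartialState

state = PartialState()
prompts = ["a dog", "a cat", "a frog"]

# With 2 processes, process 0 would get ["a dog", "a cat"] and process 1 ["a frog"].
with state.split_between_processes(prompts, apply_padding=False) as subset:
    print(f"process {state.process_index} got: {subset}")
```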
981c6fb8d6 Fix ci (#1447) 2023-05-17 13:49:56 -04:00
6413f25ba9 Raise error when logging improperly (#1446)
* Raise error when logging

* Update src/accelerate/logging.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-05-17 11:16:35 -04:00
39e20d3e55 Fixes in infer_auto_device_map (#1441) 2023-05-17 10:54:42 -04:00
3a381bfa48 fix potential OOM when resuming with multi-GPU training (#1444)
* load `optimizers`, `schedulers`, `scalers` and `states` in different devices

* only apply to the optimizer state
2023-05-17 10:53:17 -04:00
bc82d18821 fixed: ZeroDivisionError: division by zero (#1436)
* Update modeling.py

fixed: ZeroDivisionError: division by zero

* fixed style

* code optimize

---------

Co-authored-by: xingwei <xingwei@i-click.com>
2023-05-17 08:59:12 -04:00
330d60b817 Make sure torch compiled model can also be unwrapped (#1437)
* Make sure torch compiled model can also be unwrapped

* Apply suggestions from code review

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

* add tests

* fix double import

---------

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2023-05-16 19:03:36 +01:00
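A hedged sketch of the behaviour this fix targets (assumes torch>=2.0 for `torch.compile`): `unwrap_model` should hand back the original module even when the prepared model was wrapped by `torch.compile`.

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()
model = accelerator.prepare(torch.nn.Linear(4, 4))
compiled = torch.compile(model)

unwrapped = accelerator.unwrap_model(compiled)
print(type(unwrapped).__name__)  # expected to be the plain Linear after this fix
```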
612ecef7b8 Fix XPU (#1440) 2023-05-16 13:03:22 -04:00
9493d7276b [core] Introducing CustomDtype enum for custom dtypes (#1434)
* working v1 - draft

* format

* more comments
2023-05-16 16:24:17 +02:00
40c6e0ca41 Ensure that it gets installed (#1439) 2023-05-16 09:50:53 -04:00
a28491bc24 Let quality yell at the user if it's a version difference (#1438)
* Let quality yell at the user if it's a version difference

* Also include in style
2023-05-16 09:30:08 -04:00
435079aafb Improve Slack Updater (#1433)
* Update log_reports to send to slack

* REVERT this change, just for testing!

* Add slack_sdk dep

* Second one

* Try now?

* Remove len

* Need secret

* Try with new version

* Right boldface

* Fix import

* New format, use tabulate

* Add tabulate to yml

* Quality

* Purposefully fail

* Working updater, now to test

* Int

* Print payload

* Append

* Change maxcolwidth

* Offset

* More offset

* Context

* No max width

* gh format

* max-col-width'

* Reduce max

* Non-working tables

* Rm md report

* Try now

* Try with just count

* Use table

* New version

* Use table

* Try with thread

* Should be working now

* Clean

* Fixup test reports fully

* Revert workflow

* Keep tabulate in workflow ci

* Update other workflows

* Use blocks for better formatting

* ONe more test

* Works as expected
2023-05-16 09:08:10 -04:00
dcde1e93d0 Fix bug on ipex for diffusers (#1426) 2023-05-12 23:32:01 +02:00
ab379793d4 Intel GPU support initialization (#1118)
* Intel GPU support initialization

* rng state for xpu ,accel backend

* add xpu variable and clean code

* checkpointing, hooks, colls & megatronlm porting

* fix runtime errors

* test utils and xpu runtime checks

* fix unknown import in constant

* Resolve amp and cuda/xpu tensor placement

* add ipex for state and hooks

* add mingxiao's ipex changes and source code rebase changes

* add ipex binding in cluster

* resolve megatron lm issues and modelling memory

* indent fix and syntax

* versioning and sanity checks

* use kwargs and add upstream

* revert megatron lm xpu changes

* cleanups and test npr

* fix merge conflict

* fix merge conflict

* Fix merge conflict

* review commits

* make style, ruff code styling

* hf doc builder code style

* Review commits and code style

* remove xpu plugin and use only ipex by default if cpu/xpu present

* review commits and fix tests on state

* fix test in state

* add xpu condition in optimizer and code style/testing

* fix test add warn for ipex

* fix test

* fix test

* fix test and condition

* fix  amp test prod,cli ,core

* fix minimum torch tests

* refine accelerator and modelling for tests

* refine modeling and merge

* Fix slow cuda tests

* doc and retrigger test
2023-05-11 09:03:24 -04:00
b50e75f85d Make mlflow logging dir optional (#1413) 2023-05-11 12:03:13 +02:00
f95067bfbf fix deepspeed failing tests (#1411)
* changes required for DS integration

* changing the default value of `zero_force_ds_cpu_optimizer` to True to fix the failing tests
2023-05-11 10:35:46 +05:30
d07fd959cc changes required for DS integration (#1406) 2023-05-11 00:47:32 +05:30
873b39b85b use existing mlflow experiment if exists (#1403)
Co-authored-by: Rustem Galiullin <rustem.galiullin@bayanat.ai>
2023-05-10 11:51:21 +02:00
da39665055 Adding support for local SGD. (#1378)
* Adding support for local SGD.

* Update src/accelerate/local_sgd.py

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

* Update src/accelerate/local_sgd.py

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

* Update src/accelerate/local_sgd.py

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

* fixing reduction + adding a test.

* style fix.

* Update docs/source/usage_guides/local_sgd.mdx

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/accelerate/local_sgd.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update examples/by_feature/local_sgd.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-05-09 10:52:03 -04:00
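A compact, hedged sketch of the feature added here; the import path and the `local_sgd_steps`/`enabled` parameters follow the example referenced in the PR as far as I recall them, and the toy model/data are placeholders.

```python
import torch
from accelerate import Accelerator
from accelerate.local_sgd import LocalSGD  # assumed import path (src/accelerate/local_sgd.py)

accelerator = Accelerator()
model = torch.nn.Linear(8, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
data = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(torch.randn(64, 8), torch.randn(64, 1)), batch_size=8
)
model, optimizer, data = accelerator.prepare(model, optimizer, data)

with LocalSGD(accelerator=accelerator, model=model, local_sgd_steps=8, enabled=True) as local_sgd:
    for inputs, targets in data:
        loss = torch.nn.functional.mse_loss(model(inputs), targets)
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
        local_sgd.step()  # counts local steps and triggers the periodic parameter sync
```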
d95d68ec46 Support TPU v2 and v3 on new PyTorch/XLA TPU runtime (#1385)
* Use numpy Generator instead of global seed

* Implement SharedDict descriptor

* Formatting and comments

* Remove `GlobalSharedDict`

* Formatting

* Formatting with `doc-builder` installed correctly
2023-05-09 09:12:43 -04:00
fafadc5323 Add in a section on papers using Accelerate (#1399)
* Start of papers

* Add back in PickScore

* Rm non-urld

* Test

* Remove space
2023-05-09 15:00:50 +02:00
145fca5a09 Support TPU v4 with new PyTorch/XLA TPU runtime (#1393)
* Fix `XLA_USE_BF16` when not using mixed precision

* Fix RNG sync during data loading

* Fix hanging during checkpointing

* Remove extra _mp_fn

* Use all_gather to implement _tpu_gather

* Use collective_broadcast for torch RNG state

* Formatting and comments.

* Fix formatting with `make style`
2023-05-08 13:53:43 -04:00
9fe690706d v0.20.0.dev0 2023-05-08 08:37:42 -04:00
6e81938282 Update training_zoo.mdx (#1397) 2023-05-07 19:00:46 -04:00
e965d590cd Fix gather_obj (#1391)
* Fix gather_obj

* Fix cpu test

* Requires torch 1.7

* Set torch version
2023-05-05 17:55:51 +02:00
6dfcf5b8ef Bump torch v (#1392) 2023-05-05 17:55:21 +02:00
e4ea4ed4de Log Images and other types to wandb (#962)
* add image logging

* add table logging

* add artifact logging capabilities

* fix black

* remove log_images on base class

* fix docstring

* quality

* remove the artifact code

* add main proc decorator

* add main process to log_images in tensorboard

* quality

---------

Co-authored-by: Thomas Capelle <thomas.capelle@steady-sun.com>
2023-05-05 16:11:16 +02:00
fa8e1cff91 fix config bug for 'mixed_precision' from 'yaml.safe_load()' (#1386)
* fix config bug for 'mixed_precision' from 'yaml.safe_load()'

* Update src/accelerate/commands/config/config_args.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-05-05 07:37:09 -04:00
60856787ac Fix flakey thread issue (#1387)
* Fix thread issue?

* Fix bool

* \<2

* Below 2.0 fully

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-05-04 14:41:53 -04:00
995563fec9 delete textfile after tests are done (#1381) 2023-05-02 09:58:06 -04:00
2d62bd1570 Separate out context manager generation (#1379)
* Separate out context manager generation

* Move over to modeling

* Switch import
2023-05-02 09:54:53 -04:00
f8169eaded Improve accelerate env reporting (#1376)
* Have env state GPU kind

* Include system RAM

* CLean
2023-05-01 11:08:26 -04:00
75ab711993 Special transformers case from args (#1364)
* Special transformers case

* Reduce to single line

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Revert

* Clean

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-05-01 09:44:44 -04:00
f489a86573 Fix default FSDP_MIN_NUM_PARAMS (#1367)
FSDP_MIN_NUM_PARAMS default changed from 1e8 to 100000000 (no floats allowed)
2023-04-28 12:35:07 -04:00
2708c1ae31 fix: typing issues, and replace deprecated python typing (Optional, Union) to | (#1363) 2023-04-27 10:50:53 -04:00
e30034ed07 Better check for packages availability (#1356)
* Better check for packages availability

* lint
2023-04-26 08:46:16 -04:00
78bf8bcb21 fix bnb slow test (#1355) 2023-04-25 13:30:37 +02:00
57f2cf5fa7 using deepspeed.comm for distributed init (#1352) 2023-04-25 09:37:16 +05:30
e06e7b35e7 Support FP8 mixed precision training for Ada Lovelace GPUs (#1348)
* Support FP8 mixed training for Ada Lovelace GPUs

* Black format

* Updating error message
2023-04-24 13:01:12 -04:00
5651521833 Pop more backend options (#1342)
* Fixup more args

* Consistency
2023-04-20 11:41:24 -04:00
ba0ee8a54d only update progress bar when done with tensor (#1341) 2023-04-20 08:57:44 -04:00
c2a162932a Fix nested context manager for main_process_first() (#1304)
* Fix nested context manager for main_process_first()

* Fix test for main_process_first()

* Improve test for main_process_first()

* Fix formatting

* Fix test with single process
2023-04-20 06:38:12 -04:00
c29c3c5e70 Rm unused amp check (#1340) 2023-04-19 14:33:37 -04:00
945085edb3 Temp skip test (#1339) 2023-04-19 14:25:58 -04:00
70388fa44e Verbosity, Progress Bar for Loading (#1329)
* added progress bar to tensor loader, and allocation info when verbose

* align coding style with norms
2023-04-19 09:21:02 -04:00
2fee0c15fd v0.19.0.dev0 2023-04-18 11:00:52 -04:00
c05ed13fc9 Fix clearing of memory (#1332) 2023-04-18 10:53:32 -04:00
5e6351502a Remove repetitive devices in load_state_dict() (#1321)
Previously devices() was a list containing duplicate entries. This
changes it into a set.

This significantly speeds safetensors loading when the device map is
long, as the safetensors loop loads each weight entry for each device
entry.

Co-authored-by: John Doe <john.doe@example.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-04-17 15:57:07 -04:00
ee0c587182 ensure module prefixes only match that module (#1319)
Co-authored-by: John Doe <john.doe@example.com>
2023-04-17 15:52:35 -04:00
43e7229a1a Add test flag and import check for dynamo (#1322)
* Add is_dynamo_available + marker

* Use min_torch_version instead
2023-04-17 13:58:53 -04:00
8b96515ed2 Upgrade torch version on main tests (#1323)
* Upgrade torch version on main tests'

* Also in docker
2023-04-17 13:52:20 -04:00
9d9ea62785 Ensure that dynamo is compatible with mixed precision (#1318)
* Fixed

* Use args kwargs
2023-04-17 13:10:39 -04:00
2106e87d58 offload the previous model hook before the current module is moved to the execution device (#1315) 2023-04-14 21:24:59 -04:00
40980e8fe8 Default to nccl (#1314) 2023-04-14 10:18:37 -04:00
f2f810c536 Allow xpu backend (#1313)
* Allow xpu set

* Use in dataclass
2023-04-13 15:23:48 -04:00
0a9403f308 Bug fix in setattr (#1312) 2023-04-13 07:09:27 -04:00
75a693c9b4 Simplify MPS implementation (#1308)
* Simplify MPS implementation

* Quality

* Update src/accelerate/state.py

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

---------

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2023-04-12 08:54:44 -04:00
55691b14c2 add usage guide for ipex plugin (#1270)
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2023-04-07 08:23:12 -04:00
b757b62325 Set the state device dependant to Accelerator on multigpu (#1220)
* Set the state device dependant to Accelerator on multigpu
2023-04-06 13:59:59 -04:00
15dbf9722b fix for load_checkpoint_and_dispatch(device_map=None) (#1297)
The `load_checkpoint_and_dispatch` method has `device_map: Optional[Union[str, Dict[str, Union[int, str, torch.device]]]] = None,`

But if you pass `device_map=None` you get an error:

```
accelerate/big_modeling.py", line 477, in load_checkpoint_and_dispatch
    if offload_state_dict is None and "disk" in device_map.values():
AttributeError: 'NoneType' object has no attribute 'values'
```
2023-04-06 12:55:37 -04:00
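A hedged sketch of the kind of guard the fix implies, written as a hypothetical helper rather than the actual diff in `big_modeling.py`: only inspect `device_map.values()` when a `device_map` was actually provided.

```python
from typing import Optional


def needs_offload_state_dict(device_map: Optional[dict], offload_state_dict: Optional[bool]) -> bool:
    # Hypothetical helper: mirrors the guarded check so device_map=None no
    # longer triggers AttributeError on .values().
    if offload_state_dict is not None:
        return offload_state_dict
    return device_map is not None and "disk" in device_map.values()


print(needs_offload_state_dict(None, None))                 # False, and no crash with device_map=None
print(needs_offload_state_dict({"lm_head": "disk"}, None))  # True
```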
419ecf38af Make note about grad accum and prec (#1296) 2023-04-06 11:55:19 -04:00
3cb9d5fd9c Raise better error on notebook_launcher (#1293)
* Raise better error

* Better err

* Move import
2023-04-04 14:42:29 -04:00
f1298b143e fix bnb slow test (#1292) 2023-04-04 20:02:03 +02:00
07ad358f2d Check for dtype attr (#1288) 2023-04-03 16:57:46 -04:00
211707857d Expound error on recursively_apply (#1286)
* Expound

* Adjust test
2023-04-03 14:07:32 -04:00
e57d5d0eae Raise more explicit error when transformer_engine isn't installed (#1287)
* Raise err for unsupported fp8

* Change hardware spec

* Rm hardware part since we don't check it

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Style

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-04-03 13:40:28 -04:00
92d072043e Fix TypeError bug in honor_type (#1285)
* Use is_namedtuple
2023-04-03 12:23:12 -04:00
3d1a0f7e98 fix attribute error in DataloaderShared (#1278)
When running on a single GPU, the `batch_sampler` of `DataLoaderShared` is a `torch.utils.data.sampler.BatchSampler` object instead of a `DataSamplerShared` object, which does not contain the attributes necessary to calculate `total_batch_size`.
2023-04-03 09:44:59 -04:00
8b3e30887a Minor fix whitespace colon (#1272)
More readability
2023-04-03 09:42:56 -04:00
3e304c4a1a Update quicktour.mdx (#1273) 2023-04-03 09:42:48 -04:00
1c102f23cc Missing fp8 (#1284) 2023-04-03 09:42:21 -04:00
4c0d5a46ba Raise import err (#1283) 2023-04-03 09:37:17 -04:00
d0c17d707f Fix reduce operation (#1268)
Co-authored-by: amax <amax@admin.cluster.local>
2023-03-31 09:24:36 -04:00
b41d8d8228 Change error raised to ValueError (#1267) 2023-03-30 10:37:08 -04:00
3a6db664c7 Update bug-report.yml (#1264) 2023-03-30 09:17:58 -04:00
166520feea ipex intel extension for pytorch integration (#1255)
* ipex intel extension for pytorch integration

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>

Co-authored-by: jianan-gu <jianan.gu@intel.com>

Co-authored-by: Wang, Yi A <yi.a.wang@intel.com>

* fix test error

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

* fix the review comment and add testcase

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

---------

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2023-03-30 09:08:17 -04:00
663f5120c2 Check attribute 'overflow' exists in optimizer. (#1259)
* Check attribute 'overflow' exists in optimizer.

* Fix code formatting. ;)
2023-03-28 09:26:17 -04:00
23ac55fcab [core] Add Quantization support for dispatch_model (#1237)
* add quantization support for `dispatch_model`

* fix multi-gpu

* more checks

* fix bias issue

* Update src/accelerate/utils/modeling.py

Co-authored-by: Andrei Panferov <andrei@BlackSamorez.ru>

* make style

* add tests

* left some todos

---------

Co-authored-by: Andrei Panferov <andrei@BlackSamorez.ru>
2023-03-27 15:33:52 -04:00
93951ce516 handle missing deepspeed config (#1251) 2023-03-24 16:10:12 -04:00
ae86a00be0 raise error when dataloader with None as batch_size when using DS (#1250) 2023-03-24 21:15:23 +05:30
532da3e342 Fix pypi image (#1249) 2023-03-24 11:34:36 -04:00
a826e4441d Handle multiple tied parameters (#1241)
* Handle multiple tied parameters

* Add tests

* Ensure backward compatibility with Transformers

* Update src/accelerate/utils/modeling.py

Co-authored-by: Lysandre Debut <lysandre.debut@reseau.eseo.fr>

* Gate test requiring Transformers

---------

Co-authored-by: Lysandre Debut <lysandre.debut@reseau.eseo.fr>
2023-03-24 09:53:29 -04:00
1fe27e7c95 Hardware Auto-Setup Example/Tutorial for Distributed Launch (#1227)
* add self hosted hardware example

add multi gpu launch script

add auto setup hardware docs

remove an example

tiny fixes

* add colab link

* style

* update readme, remove docs page
2023-03-24 09:46:29 -04:00
c1a6c209df Change multinode to multigpu (#1247) 2023-03-24 09:40:21 -04:00
8ebd6ab2ee backfill ds plugin attributes when using ds_config (#1235)
* backfill ds plugin attributes when using ds_config

* add test

* refactoring code
2023-03-23 21:28:02 +05:30
ea9b85477d remove empty dicts while saving accelerate config (#1236) 2023-03-23 19:14:21 +05:30
420ff21c3b extensions has been removed and replaced by customizations (#1075)
Co-authored-by: Dennis Bappert <bappert@outlook.com>
2023-03-23 09:15:23 -04:00
b1b3312749 Make grad accum steps mutable on the Accelerator object (#1233)
* Make grad accum steps mutable

* Reset state
2023-03-22 17:44:31 -04:00
6e4e870203 add additional check before deleting env variable (#1229) 2023-03-22 15:03:18 -04:00
a3065e1842 Silence dynamo_backend (#1226) 2023-03-22 11:34:08 -04:00
4eaf36e1c4 docs: add finetuner to ppl who use accelerate (#1224) 2023-03-22 09:08:21 -04:00
e7bb060c0e Fix get_logger kwarg documentation issue (#1222) 2023-03-22 08:05:00 -04:00
a15d307426 Fix bug in loading launch config (#1218)
* Fix bug in loading launch config
2023-03-20 10:20:09 -04:00
7e7f3445aa FIx TPU gradient state (#1219) 2023-03-20 09:56:07 -04:00
10c674633d ds offload optim fix to use CPUAdam (#1208)
* ds offload optim fix to use CPUAdam

* fix
2023-03-20 19:21:39 +05:30
82c2665cd6 Fix example in accumulate method (#1211) 2023-03-18 21:00:11 -04:00
2930cac698 Fix typo in TPU config (#1202) 2023-03-18 09:42:56 -04:00
901ab69a16 Better error message when using multi-GPU and Accelerate on torch <1.9.1 (#1203)
* Better err

* Split
2023-03-16 11:45:09 -04:00
780e4aa32a Fix tied weights load (#1204)
* Retie weight after loading checkpoint

* Adapt doc
2023-03-16 11:29:11 -04:00
e4620984f8 Make the Scheduler adjust the steps taken relative to the gradient accumulation steps (#1187)
* Make scheduler actually adjust the length
2023-03-15 12:16:12 -04:00
017a98c0e9 Fixup --fsdp (#1198) 2023-03-15 10:34:13 -04:00
d1aa558119 [Accelerator] We should not call to on modules that wraps accelerate loaded models (#1172)
* add v1

* fix docstring
2023-03-15 08:28:28 +01:00
41479fe483 Set drop last to ensure modulo16 restriction for fp8 (#1189)
* set drop last to ensure modulo16 restriction for fp8

* fix quality

* Use all eval samples for non-FP8 case
2023-03-14 14:35:02 -04:00
eac5d13c7b Only convert linear layers with weights multiple of 16 (#1188)
* Only convert linear layers with weights multiple of 16

* Simpler test
2023-03-13 17:03:29 -04:00
b228136cae add use_orig_params to FullyShardedDataParallelPlugin (#1184)
* add `use_orig_params` to FullyShardedDataParallelPlugin

* fix 🐛
2023-03-14 00:20:30 +05:30
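A hedged sketch of turning the new flag on via the plugin (other plugin arguments are left at their defaults; this is meant to run under `accelerate launch` with FSDP enabled, not as a standalone script):

```python
from accelerate import Accelerator
from accelerate.utils import FullyShardedDataParallelPlugin

# use_orig_params is the field this PR adds; it is forwarded to PyTorch FSDP.
fsdp_plugin = FullyShardedDataParallelPlugin(use_orig_params=True)
accelerator = Accelerator(fsdp_plugin=fsdp_plugin)
```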
90deb748c6 Add documentation about PyTorch FSDP state dict behavior (#1181) 2023-03-13 10:53:56 -04:00
d942708745 Support special mapping of dtypes when preparing device map (#1179) 2023-03-13 10:48:31 -04:00
3783180844 fixed typo in launch.py tpu_pod_launcher (#1180) 2023-03-10 18:36:52 -05:00
ea836f3057 Add repr to AlignHook for easier debugging. (#1177) 2023-03-10 14:35:11 -05:00
a4c9476204 Run accelerate_test in cli (#1176)
* Run accelerate_test in cli

* Make it run on more than one process for gather check
2023-03-10 10:28:42 -05:00
3ca8c9a997 Fix CPU error always being raised (#1175)
* Save state

* Revert to old behavior

* Fix failing test/update

* Remove duplicate test
2023-03-10 10:22:26 -05:00
2f83b1afef Fix accelerate test with new config_file errors (#1169) 2023-03-09 11:56:42 -05:00
b0591c665c Fix backward compatibility in configs wrt dynamo backend (#1168) 2023-03-09 11:39:22 -05:00
d9871c0f87 v0.18.0.dev0 2023-03-09 11:18:26 -05:00
abc2beb423 Remove outdated command directions and use in tests (#1166)
* Get rid of launch in docs

* Run instead of Launch

* Proper ddp prefix

* Include note about older torch versions
2023-03-08 14:37:46 -05:00
8749b4ece4 Fix what files get deleted through total_limit (#1165)
* Use lambda func to sort the keys

* Use inner instead

* With more explicit regex

* Regression check

* Better check that uses multiple numbers
2023-03-08 12:34:22 -05:00
4a3eaee6be Document skip_first_batches in the checkpoint usage guides (#1164)
* Include skip_first_batches

* Repeated statements

* Middle of an epoch
2023-03-08 12:17:30 -05:00
3533e2b0b1 [Accelerator] Fix issue with 8bit models (#1155)
* fix 8bit models on `accelerate`

* add bnb as dependency

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* fix

* skip a test

* make style

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-03-08 14:51:25 +01:00
3e0ceac79f Attempt to fix import error when PyTorch is build without torch.distributed module (#1108)
* Attempt to fix importing invalid `torch.distributed.ReduceOp` when torch is built without distributed support.

* Style.

* Move `torch.distributed` logic detection to `imports.py` according to @muellerzr comments

* Style.

* Update wording

* Remove raising exceptions in the case of a non-distributed setup, simply dont import the ReduceOp in this case.
2023-03-08 08:49:45 -05:00
03b617b674 Let GradientState know active dataloaders and reset the remainder (#1162) 2023-03-07 14:46:05 -05:00
840bb1aeda update support for torch dynamo compile (#1150)
* update support for torch dynamo compile

* fix tests and backward compatibility

* fix tests

* Update config_args.py

* Update config_args.py

* fix 🐛

* fix 🐛

* fix bug

* fix 🐛

* bug fix

* 😅

* Update config_utils.py

* 😅

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/accelerate/accelerator.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* resolving comments

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-03-07 22:05:14 +05:30
1bfde6b963 Fp8 integration (#1086)
* Draft of FP8 support

* Missing import

* Fix names

* Conversion is inplace

* Enable fp8 in examples

* Customization point for Recipe

* Auto-enable FP8 depending on compute capability

* Fix typo

* Put back mixed precision arg

* Add debug script

* Add more tests in debug

* Add more stuff to debug

* Don't forget train

* Put the train in the right place

* Add options for selective conversion

* Fix typo

* Properly recurse

* Add more debug utils

* Typo and init

* Last choice

* More fixes

* More options in example

* Remove debug scripts

* Clean up debug and new names

* Add torch.no_grad for conversion

* Optimizer is disconnected from model?

* Re-attach model parameters to optimizer

* Fix extract

* Style

* Cleanup post-rebase

* Deal with padding

* fix examples

* Update src/accelerate/accelerator.py

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>

* Address comments

---------

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
2023-03-07 09:10:10 -05:00
3482495bb5 📝 add a couple more trackers to the docs (#1158) 2023-03-06 19:06:56 -05:00
947b2a88a9 Load custom state to cpu (#1156)
The current implementation loads custom states to GPUs, leading to OOM. I add `map_location="cpu"` to the `torch.load` function, which is similar to the strategy in `load_accelerator_state`.
2023-03-06 13:15:21 -05:00
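The pattern the commit describes, as a tiny self-contained sketch (the file name and contents are placeholders): load custom states onto CPU first so they do not land on the GPU and risk OOM.

```python
import torch

torch.save({"step": 123, "best_metric": 0.87}, "custom_checkpoint_0.pkl")

# map_location="cpu" keeps the loaded objects off the GPU, mirroring the
# strategy mentioned for load_accelerator_state.
state = torch.load("custom_checkpoint_0.pkl", map_location="cpu")
print(state)
```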
cac1ed41eb Solve arrow keys being environment dependant for accelerate config 2023-03-06 10:09:24 -05:00
9dc5b349ea [Safetensors] Relax missing metadata constraint (#1151)
* [Safetensors] Relax missing metadata constraint

* correct

* char limit
2023-03-06 16:01:35 +01:00
0aae1e93f4 Include a note in the gradient synchronization docs on "what can go wrong" and show the timings (#1153)
* Include timing results

* Don't include tilda for accelerator

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-03-06 10:00:43 -05:00
78151f87a4 Fixed typos in notebook (#1146)
* Bad cut for the eval_split

* Fixed typo.
2023-03-03 14:30:53 -05:00
853823d0ae FSDP enhancements and fixes (#1145)
* fsdp version update

* fsdp fixes

* update accelerate config
2023-03-03 19:19:48 +05:30
77ae51a050 fix partial state (#1144)
* fix partial state

* fix failing tests
2023-03-03 19:03:24 +05:30
ad9cf788b1 Fix notebook_launcher (#1141)
* Fix initialization on decorator for the Accelerator
2023-03-02 12:08:32 -05:00
5f9cea4ce9 fsdp bf16 enable autocast (#1125) 2023-03-02 18:59:19 +05:30
96ffd349f3 fix lr scheduler issue (#1140)
* fix lr scheduler issue

* Update src/accelerate/accelerator.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-03-02 18:41:46 +05:30
d88bbbd0e2 fix ds dist init kwargs issue (#1138)
* fix ds dist init kwargs issue

* fix
2023-03-02 18:35:16 +05:30
075b5d615d deepspeed dataloader prepare fix (#1126) 2023-03-02 18:34:35 +05:30
9b5877d1b6 Fix multinode with GPU ids when each node has 1 (#1127)
* Fix multinode

* Assert

* Reverse logic

* Use <= and not "not"

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* All on a single statement

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-03-01 14:02:17 -05:00
586941d107 Expand warning and grab all GPUs available by default (#1134)
* Use all GPUs by default

* Warn and include multi_gpu pull by default
2023-03-01 13:50:27 -05:00
e1b84bf503 Add tee and role to launch (#1132) 2023-03-01 12:37:16 -05:00
b2ea1c7b4f [Big model loading] Correct GPU only loading (#1121)
* [Big model loading] Correct GPU only loading

* Update src/accelerate/utils/modeling.py

* make style

* Update src/accelerate/utils/modeling.py

* make style 2

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-03-01 16:22:06 +01:00
bdd93cd933 Refactor launch for greater extensibility (#1123)
* Refactor `launch` for greater extensibility

Signed-off-by: Antoni Baum <antoni.baum@protonmail.com>

* Fix

Signed-off-by: Antoni Baum <antoni.baum@protonmail.com>

* Fix import

Signed-off-by: Antoni Baum <antoni.baum@protonmail.com>

---------

Signed-off-by: Antoni Baum <antoni.baum@protonmail.com>
2023-03-01 05:43:32 -05:00
639c1da8df Move dynamo.optimize to the end of model preparation (#1128) 2023-02-28 14:11:38 -05:00
fdb1402c7d Deep merge SageMaker additional_args, allowing more flexible configuration and env variable support (#1113)
* deep merge additional args

* added trailing line

* `make style`
2023-02-28 09:55:03 -05:00
0b3f219881 Add test for ops and fix reduce (#1122)
* Add test for ops and fix reduce

* Adjust testers

* Try w/o shape checK

* Passthrough?

* Make into float

* Clean

* Undo all_gather for now
2023-02-28 09:18:09 -05:00
ade4f1db92 Actually raise if exception (#1124) 2023-02-28 07:54:32 -05:00
907a86d145 TensorBoardTracker: wrong arg def (#1111) 2023-02-25 00:57:49 -08:00
f054799e7f Attempt to unwrap tracker. (#1109) 2023-02-24 15:47:54 +01:00
d4f5fd694e Update performance.mdx (#1107)
Correct import location
2023-02-23 09:05:21 -05:00
38fd30e764 Tracker rewrite and lazy process checker (#1079)
* Refactor implementation to use PartialState and adjust deprecation tests

* Utilize multi-process in Accelerator

* Use state

* Lazy PartialState

* Name, plus keep on_main_process for accelerator

* Handle if the tracker was made on main-process-only properly

* Missing variable names, oops

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>

* Clean

* Logs

* Main process

* Clean

---------

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
2023-02-22 07:48:55 -05:00
03754c1e02 Update README.md (#1100) 2023-02-21 21:21:18 -05:00
ea36b7dceb add multi_cpu support to reduce (#1094) 2023-02-20 09:25:55 +01:00
bc9153e465 adds missing "lfs" in pull (#1091) 2023-02-17 17:40:20 +01:00
89b7e36bf6 Fix config (#1090)
* Fix config

* Proper fix
2023-02-17 10:42:24 -05:00
b34db0b987 Added SageMaker local mode config section (#1084) 2023-02-15 14:18:43 -05:00
9875714610 Update complete_cv_example.py (#1082)
minimal typo :)
2023-02-15 13:36:18 -05:00
4b47f190a9 Fix tpu_cluster arg (#1081) 2023-02-15 10:43:04 -05:00
17bc8a1103 Allow custom SageMaker Estimator arguments (#1080)
* Added additional_args to SageMaker Config

* temporary fix #1078

* temporary fix #1078 properly

* Extended SageMaker config

* Revert " temporary fix #1078 properly"

This reverts commit 81c683711d5a94ba9327686563bb55d3e8801555.

* Revert "temporary fix #1078"

This reverts commit c8a4b0973aee6ffd4612a69bb1ccd079b3dbb9ce.

* Extended documentation to reflect manual configuration changes.

* Fixed a small typo
2023-02-15 10:39:08 -05:00
279475307a SageMaker image_uri is now optional (#1077) 2023-02-15 09:31:47 -05:00
9c2e704791 Add error if passed --config_file does not exist (#1074) 2023-02-15 09:10:20 -05:00
4e1816d7ec Refactor state and make PartialState first class citizen (#1071)
* Refactor into State and expose

* Make PartialState mainstream!
2023-02-14 14:50:06 -05:00
5a2cb3b5e3 Fix/implement process-execution decorators on the Accelerator (#1070) 2023-02-14 13:36:33 -05:00
04103090cc update fsdp docs and removing deepspeed version pinning (#1059)
* update fsdp docs and removing deepspeed version pinning

* address comments
2023-02-14 16:39:47 +05:30
ca615f879f Swap utils over to use PartialState (#1065) 2023-02-13 16:08:56 -05:00
2694a6c63a Update integrations (#1063) 2023-02-13 13:28:55 -05:00
b4388b45dc Try with this (#1062) 2023-02-13 10:58:24 -05:00
69e4c3c54d Flag for deprecation (#1061) 2023-02-13 10:38:33 -05:00
68d809256c Introduce PartialState (#1055)
* Try again

* Try off multi-gpu

* This is a test

* Finished now

* PartialState

* Update logger to use new API

* backend

* Working tests

* Working again!

* Raise err instead

* Better error

* Update src/accelerate/state.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-02-13 10:23:39 -05:00
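A minimal sketch of what the new class gives you: process information and process-local branching without building a full `Accelerator` (the attribute names here are the ones I believe `PartialState` exposes).

```python
from accelerate import PartialState

state = PartialState()
print(f"process {state.process_index} of {state.num_processes} on {state.device}")

if state.is_main_process:
    print("only printed by the main process")
```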
bd091a605b deepspeed hidden_size auto value default fixes (#1060) 2023-02-13 20:23:40 +05:30
cb993d7d8c Fix args by adding in the defaults (#1053) 2023-02-09 15:00:57 -05:00
028b5816c8 Use create_task (#1052) 2023-02-09 14:44:09 -05:00
8951195a15 Introduce TPU Pod launching to accelerate launch (#1049)
* Working version -- run one more test

* commands

* Undo commands

* cli

* Undo config args

* cluster

* Command

* use_alpha

* Fully working now!

* Fix log

* Wrong alpha storing
2023-02-09 13:02:14 -05:00
60460ae1af Fix cpu_offload_with_hook code snippet (#1047)
* Fix cpu_offload_with_hook code snippet

* Make model explicit for clarity.
2023-02-08 09:23:13 -05:00
978dfc38ea Load tensors directly on device (#1028)
* Load tensors directly on device

* Update src/accelerate/utils/modeling.py

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

---------

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2023-02-07 13:48:28 -05:00
5002e56704 Update quality tools to 2023 (#1046)
* Setup 2023 tooling for quality

* Result of styling

* Simplify inits and remove isort and flake8 from doc

* Puts back isort skip flag
2023-02-07 13:34:05 -05:00
71e81bab00 Add cpu_offload_with_hook (#1045)
* Add cpu offload with hook

* Style

* add to init

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Add documentation

* Add tests

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-02-07 13:09:27 -05:00
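A hedged usage sketch of the new utility (the toy model is a placeholder and the keyword name `execution_device` is assumed from the related `cpu_offload` API): the weights live on CPU, are moved to the execution device for the forward pass, and `hook.offload()` sends them back.

```python
import torch
from accelerate import cpu_offload_with_hook

device = "cuda:0" if torch.cuda.is_available() else "cpu"

model = torch.nn.Linear(8, 8)
model, hook = cpu_offload_with_hook(model, execution_device=device)

inputs = torch.randn(2, 8).to(device)
out = model(inputs)  # the hook moves the weights to `device` for this call
hook.offload()       # and this sends them back to CPU when we're done
print(out.shape)
```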
76c41f0df7 Make sure direct parameters are properly set on device (#1043) 2023-02-06 13:36:18 -05:00
2b981c0942 Add daily slack notifier for nightlies (#1042)
* Update log_reports to send to slack
2023-02-06 10:44:58 -05:00
a60640d4fa Refactor process executors to be in AcceleratorState (#1039)
* Start of refactor

* Fix yield

* Print

* Add test
2023-02-06 10:44:33 -05:00
4be70838e7 Pass keywords arguments of backward function deeper to DeepSpeed (#1037) 2023-02-03 10:39:19 -05:00
e89131c92d do not scale gradient in bf16 mode (#1036) 2023-02-02 14:01:57 -05:00
4e5cc0c6b9 fix: links to gradient synchronization (#1035) 2023-02-02 11:12:30 -05:00
587eea9bb5 enabling mps device by default and removing related config (#1030)
* enabling `mps` device by default and removing related config

* address comments

* fix tests
2023-02-01 23:27:15 +05:30
57cbcab45b Deepspeed param check (#1015)
* Deepspeed param check

On line 146, in set_module_tensor_to_device(), adding a check for DeepSpeed parameters in the kwargs object and not passing them solved the error I was receiving about the DeepSpeed parameters not being recognized by torch.nn.Parameter.__new__(). With my admittedly limited knowledge, it seems the kwargs are not necessary to pass when using DeepSpeed + Accelerate; this bears out, since the model loaded fine with ZeRO-3 CPU parameter and buffer offload on a single-GPU machine and produced perfectly comprehensible inference outputs (slowly) using the GPU.

The error, in my case, was occurring here as called from accelerator's dispatch_model().

Please let me know if my thinking on this is in any way wrong! This fix worked for me.

 `transformers` version: 4.26.0
- Platform: Linux-5.15.83.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes and no (zero-3 on single machine)

* 146-150 check for Int8 arguments

146-150 check for Int8 arguments. If found, send the args as well as the value.

* Used make style on branch

* Used make style with correct versions of black and flake8 on branch
2023-02-01 11:19:01 -05:00
c0caa068ba v0.17.0.dev0 2023-01-31 12:15:08 -05:00
b51b78ffb7 It was 0.16.0.dev0 all along... 2023-01-31 11:07:26 -05:00
67dbae52be sagemaker launcher fixes (#1031)
* sagemaker launcher fixes

* fixes

* addressing comments
2023-01-31 21:17:16 +05:30
d0df263b09 With example (#1027) 2023-01-30 12:57:24 -05:00
a5026706a7 More improvements to docstrings + examples (#1010)
* Start of examples
2023-01-30 12:34:26 -05:00
20e4973903 Start of adding examples (#1001)
* Start of examples

* Missing >

* Fix docstring nit

* Add comment on main_process_first

* Make comment on randomness

* first

* Backprop issues with examples into here
2023-01-30 12:33:47 -05:00
1d9bcdd39d Efficiently skip batches in a dataloader (#1002)
* Efficiently skip batches in a dataloader

* Add method in Accelerator and example

* Apply suggestions from code review

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

* Rename point of access

* Add point of access to init

* Add tests

* Don't forget to include fixes silly!

* Adapt examples

* Fix quality

* Forgot one

* fix method name

* Fix DataLoaderShard reinstantiation

* Fix for epoch checkpointing

---------

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2023-01-30 11:56:59 -05:00
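A small, hedged sketch of the new point of access on the `Accelerator` (dataset and batch counts are placeholders): rebuild an iterator that resumes mid-epoch by skipping batches that were already consumed.

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()
dataset = torch.utils.data.TensorDataset(torch.arange(40).float())
dataloader = accelerator.prepare(torch.utils.data.DataLoader(dataset, batch_size=10))

# Skip the first 2 batches, e.g. when resuming from a mid-epoch checkpoint.
resumed = accelerator.skip_first_batches(dataloader, num_batches=2)
for (batch,) in resumed:
    print(batch.tolist())
```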
ba856524f6 Fix slow test by keeping tied weights on the same GPU (#1026) 2023-01-30 11:13:39 -05:00
332326c833 Change default for keep_fp32_wrapper (#1025)
* Change default

* Fix tests
2023-01-30 10:18:40 -05:00
e6d5776ad8 Light vs dark theme based on pick (#1023) 2023-01-30 09:35:37 -05:00
fe709a2490 Fix env var (#1024) 2023-01-30 09:33:19 -05:00
ac970148cd Include steppage in performance docs (#1013)
* Include steppage in performance docs

* New explanation
2023-01-27 12:02:47 -05:00
f0f348921d Don't force mixed precision as no in examples (#1018) 2023-01-27 10:12:27 -05:00
b37680bd66 Fix import of LrScheduler (#1017) 2023-01-27 08:50:33 -05:00
5286d843c8 Add in code exploration tool to docs (#1014)
* Add in code exploration tool to docs

* Update index to hotlink over to the explore

* With 100%

* Just do 750 for now

* Safe height

* Let's try with this

* Comment out original

* Revert

* Add in a note on the docs and remove a secondary code snippet

* Use 1550 for now so it fully fits

* 1600*
2023-01-27 07:32:34 -05:00
22bf677ceb Allow the torch device to be set with an env var (#1009)
* Allow the torch device to be set with an env var

Signed-off-by: Antoni Baum <antoni.baum@protonmail.com>

* Fix

Signed-off-by: Antoni Baum <antoni.baum@protonmail.com>

* Refactor

Signed-off-by: Antoni Baum <antoni.baum@protonmail.com>

* Use self.device

Signed-off-by: Antoni Baum <antoni.baum@protonmail.com>

* Refactor

Signed-off-by: Antoni Baum <antoni.baum@protonmail.com>

* Add test

* Add test

* Fix test

* Tweak comment

* Fix test

Signed-off-by: Antoni Baum <antoni.baum@protonmail.com>
2023-01-26 16:01:36 -05:00
bd82bec78e Fix test introduced in PR and introduce AcceleratorTestCase (#1016)
* Fix test, missing reset

* tearDown

* Refactor and inherit to avoid future errors
2023-01-26 15:35:21 -05:00
3825e478b2 Saving and loading state hooks (#991)
* [RFC] Possible design for loading and saving state hooks design

* fix bug

* add tests & docstring

* improve docs

* make style

* Update src/accelerate/accelerator.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-01-26 20:07:21 +01:00
6c3f6792e9 Maintain accumulation steps (#1011) 2023-01-26 06:33:50 -05:00
5858ac62b4 Add styleguide (#1007)
* Add styleguide

* Uniformity

* Accelerate specific
2023-01-25 14:28:24 -05:00
5b0a03d1fb Update toctree (#1008) 2023-01-25 13:52:25 -05:00
c3ea690d48 improve deepspeed notes (#1003)
* improve deepspeed notes

* style
2023-01-23 20:45:45 -08:00
ae8c4875dc Fix parameters tying in dispatch_model (#1000)
* Fix parameters tying in dispatch_model

* Add test
2023-01-23 13:10:30 -05:00
55a528487d Fix scheduler incorrect steps when gradient accumulation enabled (#999)
* add additional check for optimizer step

* rewrite scheduler w/ grad accumulation test
2023-01-23 13:06:45 -05:00
bd1d5fad2f adding support for kwargs in load_state (#989)
* adding support for kwargs in `load_state`

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* quality 

* addressing comments

1. renaming variable to make it explicit
2. adding kwargs to `save_state` for parity

Co-Authored-By: Zachary Mueller <7831895+muellerzr@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Zachary Mueller <7831895+muellerzr@users.noreply.github.com>
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
2023-01-23 20:27:35 +05:30
b22f088ff6 Add new release_memory util (#990)
* Add new release_memory util

* Req cuda
2023-01-19 13:01:24 -05:00
f3f2f9e4b5 in sync with trfs, removing style_doc utils and using doc-builder instead (#988) 2023-01-19 19:24:44 +05:30
7e4136164e Fix test for converting tensor to proper dtype (#983)
* Fix test for converting tensor to proper dtype

* Adds a test
2023-01-18 11:21:45 -05:00
5dd631e2cd Skip wandb test for now (#984) 2023-01-18 10:57:38 -05:00
0a16f37ba1 Ensure that last batch doesn't get dropped if perfectly even in gather_for_metrics (#982)
* Add test_last_batch

* Fix gather bug
2023-01-18 10:30:34 -05:00
aaa2637a5e Fix type error on line 36 (#981)
Fix to a type error on line 36
2023-01-18 09:38:05 -05:00
7573a8cd55 Fix tied parameters test in big model inference (#979) 2023-01-17 14:52:52 -05:00
126550126d Raise minimum version for distrib launch (#978) 2023-01-17 12:24:36 -05:00
733755c94c Update README.md (#968)
When using DeepSpeed, we must import from the accelerate package.
2023-01-12 03:18:56 +01:00
741d23301f Allowing encoded configuration for DeepSpeed (#895)
* allow-encoded-ds-config

* fix style
2023-01-11 14:32:03 +01:00
9b7ef9679f support master port when using ds multi-node launcher (#959)
* support master port when using ds multi-node launcher

* 😅
2023-01-09 23:52:00 +04:00
30a6a3435f Typo fix in src/accelerate/utils/modeling.py (#955)
Simple typo fix I happened to notice and figured I should just fix while I'm looking at it.
2023-01-07 09:58:05 +01:00
f7427c86ee Don't automatically offload buffers when loading checkpoints (#951)
* Don't automatically offload buffers when loading checkpoints

* Add test
2023-01-04 09:01:24 -05:00
d0bf459c7f Fix DeepSpeed tests (#950)
* Fix deepspeed tests

* Reset state

* With manual reset?
2023-01-03 12:49:51 -05:00
bf8fe0347b Add is_initialized method and refactor (#949)
* Add is_initialized method and refactor

* As module method
2023-01-03 10:13:44 -05:00
e60f3cab7a raise error for duplicate accelerate config values when using deepspeed_config_file (#941)
* ds config vs accelerate config checks

* add mp assertion checks and refactoring

* 😅

* minor fix

* address comments

* address comments and making doc and help clear

* 😅

* fixes

* error msg fix

* more details in error msg

* 

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* address comment

* address comment by changing cluster config

* 😅

* Update src/accelerate/utils/dataclasses.py

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

* use `accelerate launch` cmd args for `auto` filling

So far, `accelerate launch` cmd args were used for filling DeepSpeed plugin fields and not for setting `auto` values. This PR enables that too.

It also raises assertions when ambiguous values are passed in the accelerate config file when using `deepspeed_config_file`.

* fixes

* fixes and adding tests

* quality

* 😅

* refactor

* fix

* add documentation wrt improvements of DeepSpeed config

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* address comment

* address comment

* refactor

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
2022-12-31 13:42:57 +05:30
07e2e712ca Fix offload when weights are on the GPU (#945) 2022-12-28 02:43:29 -05:00
63f09f63b8 Fix tracker (#942) 2022-12-23 12:07:56 -05:00
50b8d8e8a8 fix mp related test fails (#943) 2022-12-23 22:17:13 +05:30
0ec1f24c17 fix batch size in prepare_dataloader for iterable datasets (#937)
* fix batch size

* black
2022-12-23 02:52:52 -05:00
3c5c0f9c99 add mixed_precision_type property to AcceleratorState (#935)
* add `mixed_precision_type` property to `AcceleratorState`

* address comments
2022-12-23 12:02:20 +05:30
53b8ed1e8e Fix silly typo (#939) 2022-12-22 23:14:03 +05:30
49bbf2390d ds zero-3 init context manager (#932)
* ds zero-3 init context manager

* address comment

* renaming `set_zero3_init` to `zero3_init_context_manager`
2022-12-21 10:49:35 +05:30
aa533277f6 Honor model dtype in load_checkpoint (#920)
* Honor model dtype in

* Move dtype logic to set_module_tensor_to_device
2022-12-20 02:48:18 -05:00
ca6505a6a8 ds-z3-init and prepending ds env variables with ACCELERATE_ (#928)
* ds-z3-init and prepending ds env variables with `ACCELERATE_`

* quality

* rerun checks
2022-12-17 00:48:21 +05:30
bb6ee0b7bc Support init_on_device (#926)
* Support init_on_device

* Support mps backend as well in testing
2022-12-16 13:07:39 +01:00
7889ba6b6d Specify inference (#921) 2022-12-14 09:02:13 -05:00
f002ce2ae9 Introduce project_dir and limit the number of saved checkpoints (#916)
* Working save limit

* Centralize to project_dir

* Update docs

* Fix up tests

* Maintain old version, should fix tests

* Revert logging behavior

* Fix failing test

* Automatic checkpoint naming flag

* Logging -> Logger

* Fix naming

* Remove args and make a SaveConfiguration

* logger -> logging

* save_configuration to save_config

* Good to go now, just need to update docs

* Update all the docs

* Deprecate logging_dir param

* ProjectConfiguration

* Project_config

* Fix test

* Finish renaming

* Docfix

* Clean

* Update docs/source/usage_guides/tracking.mdx

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
2022-12-13 08:29:58 -05:00
7fd0635d46 fix accelerate test failure with cpu config (#909)
* failure occurs when testing FP16
* autocast fails to work for CPU bf16 on some GPU+CPU platforms;
no need to use the is_bf16_available logic, because native_amp already contains it.
2022-12-13 08:29:15 -05:00
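
For reference, CPU autocast only supports bfloat16, which is the situation this fix handles; a minimal illustrative sketch (not the test itself):

```python
import torch

# CPU autocast only supports bfloat16; this is roughly the kind of region that
# Accelerate's native_amp path wraps when mixed_precision="bf16" is used on CPU.
linear = torch.nn.Linear(4, 4)
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = linear(torch.randn(2, 4))
print(out.dtype)  # torch.bfloat16
```
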
235fdf1096 🚨🚨🚨 Act on deprecations 🚨🚨🚨 (#917)
* Act on deprecations

* Act on deprecations

* Resume from checkpoint

* Finish deprecations
2022-12-12 16:09:52 -05:00
351f89758a Fix typos accelerate -> accelerator (#915) 2022-12-12 11:11:05 -05:00
7f5e94d33b fsdp enhancements (#911)
* fsdp enhancements

* fix

* fix
2022-12-09 22:23:45 +05:30
74a8ed9e48 fix issue that amp bf16 does not work for cpu in env with cuda. (#906)
and num_cpu_threads_per_process is not reset for better performance in cpu only case

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2022-12-08 09:05:34 -05:00
6bd28790c2 Fix conditional (#907)
* Fix conditional

* Into one if statement
2022-12-07 09:34:58 -05:00
2359af1870 Expand sanity checks (#905)
* Expand sanity checks

* multi_cpu to cpu
2022-12-06 15:46:47 -05:00
e6b61da7ca Add usage examples (#904) 2022-12-06 15:12:43 -05:00
344bfe2713 Flag to silence subprocess.CalledProcessError in launch (#902)
* add an option to silence subprocess.CalledProcessError when running accelerate launch

* for black

* for real this time

* Add suggestion

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

* Update cli.mdx

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2022-12-06 08:47:31 -05:00
e9d15e5973 Adds a utility function to install correct version of torch XLA (#896)
* Add utility to install torch xla wheels

* Fix formatting

* Update docs and fix lint issues
2022-12-01 15:11:41 -05:00
5315290b55 Support bfloat16 in load_offloaded_weight (#892)
* Support bfloat16 in load_offloaded_weight

* Quality
2022-11-29 13:32:31 -05:00
f4eee1cf86 Better description for improper kwargs (#894)
* Better flag

* an
2022-11-29 13:24:41 -05:00
b12f503f6d Fix windows cli selector (#893)
* Still need to test on windows

* Move imports

* Somewhat working

* More if

* undo

* Try with unicode

* All done
2022-11-29 11:36:22 -05:00
58be9901b6 fix prefix issues in tests (#891)
* fix prefix issues in tests

* fix
2022-11-29 18:57:58 +05:30
13ef1c83f9 Prefix all accelerate env vars with ACCELERATE (#890)
* Rename all env vars to prefix with accelerate

* Rich

* Undo fork launch

* Fork launched

* Fix patch env

* Finish rich
2022-11-28 14:45:14 -05:00
62e5cfcbbd fixing lr scheduler for pytorch nightly (#884) 2022-11-28 21:46:20 +05:30
762ce7cc80 Allow safetensors offload (#873)
* Allow safetensors offload

* Address review comments + auto-enable fast GPU load

* Quality
2022-11-28 10:03:50 -05:00
4a447d85be fix a bug (#887) 2022-11-28 17:48:31 +05:30
e4e5611e5d Update deprecated logging warn (#881)
Use `logging.warning()` instead of the deprecated `logging.warn()`.
2022-11-22 15:14:18 -05:00
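
The change amounts to the standard rename; a one-line example for context:

```python
import logging

logger = logging.getLogger(__name__)

# logger.warn() is a deprecated alias that emits a DeprecationWarning on newer
# Python versions; logger.warning() is the supported call.
logger.warning("use warning(), not warn()")
```
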
79b712559a fix fsdp state_dict_config because of PyTorch changes (#877)
* fix fsdp state_dict_config because of PyTorch changes

* fix fsdp test

* fixes and addressing comments

Co-Authored-By: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-11-21 21:22:03 +05:30
eaf7899850 fixing lr_scheduler prepare issue when using pytorch nightly (#878) 2022-11-21 21:20:31 +05:30
d2e804f69d Spring cleaning (#865)
* CLean cluster and big model

* Spring cleaning :)

* Undo much!

* Bring back the fstring!

* Parenthesis for readability
2022-11-21 09:40:59 -05:00
2df1a9328a Solve pickling issues (#872)
* Raise a pickling error if tried to save w/o unwrap
2022-11-21 09:24:41 -05:00
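
A minimal sketch of the pattern this error points users toward (single process here, so the wrapper is a no-op; names are illustrative):

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()
model = accelerator.prepare(torch.nn.Linear(2, 2))

# In distributed runs the prepared model is wrapped (e.g. in DDP); unwrapping
# returns the underlying nn.Module so its state dict pickles and saves cleanly.
unwrapped = accelerator.unwrap_model(model)
torch.save(unwrapped.state_dict(), "model.pt")
```
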
8bf40e5870 Even more log level refined, leave alone if not explicitly set (#871)
* Even more refined, leave alone if not explicitly set

* Leave as setLevel

* Even more explicit
2022-11-18 11:33:47 -05:00
b0165a0f77 fix failing deepspeed test (#868)
* update deepspeed error message wrt `batch_size`

Co-Authored-By: Stas Bekman <stas00@users.noreply.github.com>

* 

* fix failing deepspeed test

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
2022-11-18 19:41:04 +05:30
8a96b0bfb8 update deepspeed error message wrt batch_size (#861)
* update deepspeed error message wrt `batch_size`

Co-Authored-By: Stas Bekman <stas00@users.noreply.github.com>

* 

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
2022-11-17 20:53:19 +05:30
0efabe485e Remove mixed precision hook as part of the unwrap_model (#860)
* Mixed precision hook

* Rename

* Rm comment, need to move

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Fix doc

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-11-16 16:12:53 -05:00
75c7d935fd Switch default log to warn (#859)
* Switch default log to warn

* Fix deprecation
2022-11-16 14:17:10 -05:00
bea1e75182 Revert "Update pr docs actions (#827)" (#857)
This reverts commit 56308da519db06b830dafcda917c65a1a443c55a.
2022-11-16 12:06:01 +01:00
dd8f2054d8 Clean up, add update command (#853)
* Clean up, add update command

* Use args for all but default_config

* Call explicitly with args

* Update CLI docs
2022-11-15 17:04:49 -05:00
71660af123 Refactor Accelerate config and introduce a multi-argument CLI interface (#851)
* Improve CLI to have independent names
2022-11-15 09:33:09 -05:00
5f4ba04628 Fix complete_cv example (#848) 2022-11-15 08:56:43 -05:00
39e4a5a0f3 Fix if/else (#849) 2022-11-14 12:07:51 -05:00
0d0f2cd5a7 Fix log error and add log level to get_logger (#842)
* Fix log error and add log level

* Example in docs

* Docstring fix

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Fixes

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-11-14 09:01:29 -05:00
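
A short sketch of the resulting call, assuming a logging handler is configured elsewhere:

```python
from accelerate import Accelerator
from accelerate.logging import get_logger

accelerator = Accelerator()  # the logger relies on the accelerate state being initialized
logger = get_logger(__name__, log_level="INFO")
logger.info("visible at INFO level", main_process_only=True)
```
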
e8e3709765 Introduce default-config command (#840)
* Add new default config command

* Include docs

* Rm arg
2022-11-11 11:16:01 -05:00
074d8d5a5a Add join_uneven_inputs context manager to Accelerator (#820)
* Add test for join context manager

* Add join_uneven_inputs context manager

* Format

* add conditional import for join

* Replace bare yield with nullcontext

* Update accelerator to maintain references to dataloaders

* add override option to join context manager

* format

* Add minimal docstring

* updates based on initial feedback

* remove launcher used for local testing from test script

* fix quality issues

* DEBUG: try resetting accelerator state to fix test

* Revert "DEBUG: try resetting accelerator state to fix test"

This reverts commit a13a56ea8e084cad72317cd451a176a2d3fa5dff.

* Reset state after accelerator tests

* Update src/accelerate/accelerator.py

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

* Warn if at least one iterable dataset seen

* remove launcher used for local test running

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2022-11-10 13:09:07 -05:00
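
Roughly how the new context manager is meant to be used (a sketch only; `model` would normally come out of `accelerator.prepare` together with a dataloader):

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()
model = accelerator.prepare(torch.nn.Linear(2, 2))

# Wraps torch's Join so ranks whose dataloader runs out of batches first don't
# deadlock the collectives of ranks that still have data; outside multi-GPU DDP
# the context manager only emits a warning and is effectively a no-op.
with accelerator.join_uneven_inputs([model]):
    model(torch.randn(1, 2, device=accelerator.device))
```
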
b17fb69dd6 Highlight selection with pretty colors (#839)
* Highlight with pretty colors

* Rm comment
2022-11-10 10:35:18 -05:00
ccdc2252f7 Deepspeed example should use gather_for_metrics (#821)
* Deepspeed example should use gather_for_metrics

I believe this example should be using gather_for_metrics here instead of gather.

* Update deepspeed_with_config_support.py
2022-11-10 09:41:15 -05:00
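
The difference in a nutshell (illustrative tensors, single process):

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()
preds = torch.arange(4, device=accelerator.device)
labels = torch.arange(4, device=accelerator.device)

# gather_for_metrics drops the samples duplicated to even out the last batch
# across processes, so metrics see exactly len(dataset) items; plain
# accelerator.gather() would keep the duplicates.
preds, labels = accelerator.gather_for_metrics((preds, labels))
```
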
f9317f253c fix 🐛 (#836) 2022-11-10 19:38:32 +05:30
08f64896a0 Small questionnaire CLI (#830)
* Working CLI questionnaire

* Forgot space

* Finish the rest

* Rename and make all funcs/options public

* Include Brian Chao in copyright

* Working number inputs

* Fix num

* Linebreak to ease viewing

* Finish sagemaker

* Clean

* Fix mixed precision
2022-11-09 14:51:16 -05:00
74642aac95 Add support for torch dynamo (#829)
* Add torch dynamo optimizations

* More work

* Fix enum values

* Add to basic config

* fix more tests

* Apply suggestions from code review

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
2022-11-09 11:30:30 -05:00
ceffd47cdd v0.15.0.dev0 2022-11-08 14:26:26 -05:00
4ed46648e7 Isolate distrib_run (#828) 2022-11-08 11:00:08 -05:00
56308da519 Update pr docs actions (#827) 2022-11-08 10:49:25 -05:00
4855405041 adding support to return logits and generate for Megatron-LM GPT models (#819)
* adding support to return logits and generate for Megatron-LM GPT models

* addressing issue

* fix 🐛

* fixing many 🐛 and adding documentation

* remove warning

* address comments

* add docs and utilities for megatron-lm gpt generate and logits
2022-11-08 19:44:11 +05:30
cea6aaa116 Rename (#824) 2022-11-07 15:18:23 -05:00
91f8fb018b rename sklearn to proper dep (#825) 2022-11-07 15:17:26 -05:00
05d58c835f Update docs (#823) 2022-11-07 11:14:53 -05:00
874c4967d9 Rename pod-config to tpu-config + docs (#818)
* Refactor and docs

* Move file

* tests
2022-11-03 08:53:53 -04:00
dc9966df93 Update CLI docs and use mps rather than mps_device (#814)
* Update docs and use mps

* A few more deprecation warnings

* Clean

* Newlines
2022-11-02 15:34:33 -04:00
e2cd36b6cc Mlflow-tracker-v2 🔥 (#794)
* mlflow tracker class

* is_mlflow_available

* is_mlflow_available

* include mlflow dataclass

* Update src/accelerate/tracking.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/accelerate/tracking.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/accelerate/tracking.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/accelerate/tracking.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/accelerate/tracking.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/accelerate/tracking.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/accelerate/tracking.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/accelerate/tracking.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* eliminate confusing variables

* make style, quality

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-11-02 08:38:33 -07:00
6a0082de30 Act on deprecations (#813)
* Deprecations

* fp16 related warnings

* version num

* Last one

* Keep consistent with old
2022-11-02 10:38:17 -04:00
102cf00ded add recurse argument in remove_hook_from_module (#812)
* add `recurse` argument in `remove_hook_from_module`

* correct docstring

* Update src/accelerate/hooks.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/accelerate/hooks.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-11-02 10:32:28 -04:00
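
A small sketch of the new flag (the hook and model here are placeholders):

```python
import torch
from accelerate.hooks import ModelHook, add_hook_to_module, remove_hook_from_module

model = torch.nn.Sequential(torch.nn.Linear(2, 2), torch.nn.Linear(2, 2))
add_hook_to_module(model[0], ModelHook())

# With recurse=True the hooks attached to submodules are stripped as well,
# instead of only the one on the module that was passed in.
remove_hook_from_module(model, recurse=True)
```
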
359bd1bc5f adding support to pickle and unpickle AcceleratedOptimizer (#811)
* adding support to pickle and unpickle `AcceleratedOptimizer`

* address comment

Co-Authored-By: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* add test

* fixing test

* 😅

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2022-11-02 19:43:37 +05:30
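
What the added support looks like in practice (a minimal sketch):

```python
import pickle

import torch
from accelerate import Accelerator

accelerator = Accelerator()
model = torch.nn.Linear(2, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
model, optimizer = accelerator.prepare(model, optimizer)

# prepare() wraps the optimizer in AcceleratedOptimizer; with this change it
# round-trips through pickle like the underlying torch optimizer.
restored = pickle.loads(pickle.dumps(optimizer))
```
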
0de1644126 Refactor CLI to improve readability (#810)
* Rewrite CLI

* Comments

* remove rich

* Fix all issue

* Check better for accelerate launch and accelerate-launch

* rm aws

* Resource then paradigm

* Naming nits + make public
2022-11-02 10:04:19 -04:00
b816e258a9 Introduce a pod-config command (#802)
* Add in ability to configure pod and start CLI commands

* Further tests, add a help

* Added tests and cleaned up!

* Fix weird missing parts

* MOre tests + install accelerate with flag

* Unused pod_config_file

* Test with multiple commands

* Update src/accelerate/commands/config/cluster.py

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>

* Clarity during printing

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Make public names for readability

* Fix test expected outputs and refactor response

* Fix ref errors

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-11-01 10:00:48 -04:00
c4c444a158 Deal with optimizer.differentiable in PyTorch 1.13.0 (#803)
* Update accelerator.py

* Update src/accelerate/accelerator.py

* Update src/accelerate/accelerator.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-10-31 19:52:56 -04:00
f3129d1130 fix: add pdsh as default launcher (#800) 2022-10-31 16:02:23 -04:00
8c928057c6 Fix extraction of state dict in offload (#795) 2022-10-31 12:29:02 -04:00
8c0505d760 Fix device_map="auto" on CPU-only envs (#797) 2022-10-31 12:28:52 -04:00
16d548c358 Add even_batches keyword to Accelerator (#781)
* Add even_batches argument to prepare dataloader

* Add even_batches argument to accelerator

* Add e2e tests for even_batches

* Fix double import

* Fix variable name bug in test script

* Refactor test script to pytest format

* Apply documentation suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update BatchSampler warnings

* Fix typo

* Remove comment

* Add main driver method to even_batches tests

* Fix tests

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2022-10-31 12:16:03 -04:00
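
A sketch of the flag; note that on recent Accelerate releases it moved to the dataloader configuration, so the exact spelling depends on the installed version:

```python
from accelerate import Accelerator
from accelerate.utils import DataLoaderConfiguration

# Originally: accelerator = Accelerator(even_batches=False)
# On newer releases the equivalent is routed through the dataloader config:
accelerator = Accelerator(dataloader_config=DataLoaderConfiguration(even_batches=False))
```
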
415b73853a Consider top-level buffers when computing infer_auto_device_map (#792)
* add `buffers` support when computing `infer_auto_device_map`

* should fix broken test

* fix broken test

* simpler solution

- use `model.named_buffers(recurse=False)` instead
Co-authored-by: Sylvain Gugger <sgugger@users.noreply.github.com>

* forward contrib credits from suggestion

Co-authored-by: sgugger <sgugger@users.noreply.github.com>
2022-10-27 23:14:17 +02:00
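
A tiny illustration of what "top-level buffers" means here (BatchNorm keeps its running stats as buffers on the module itself):

```python
import torch
from accelerate import infer_auto_device_map

# running_mean / running_var are buffers on the root module, i.e. what
# model.named_buffers(recurse=False) returns; they are now counted when
# sizing each device.
model = torch.nn.BatchNorm1d(4)
device_map = infer_auto_device_map(model, max_memory={"cpu": "1MB"})
print(device_map)
```
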
a5525406fc separate dataloader generator from sampler generator (#789)
* separate dataloader and sampler generator

* resolving comments

Co-Authored-By: YouJiacheng <1503679330@qq.com>
Co-Authored-By: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* minor comment resolution

Co-authored-by: YouJiacheng <1503679330@qq.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-10-26 02:08:54 +05:30
37b2aa0173 Add Dev Container configuration (#782)
* Add devcontainer

* Add dev container info to CONTRIBUTING.md

* Make cpu image the dev container default

* Fix comment typo
2022-10-21 10:05:49 -04:00
4df576efe8 Work in kaggle! (#783) 2022-10-20 15:39:01 -04:00
87a7e0783f fix transformers tests (#777) 2022-10-19 21:32:11 +02:00
5c8f181ab0 Add same_network + docs (#780) 2022-10-19 13:26:08 -04:00
6f7fa4f48e Make rich toggleable and separate out a new environment utility file (#779)
* Toggleable rich

* Refactor into environment utils
2022-10-19 12:15:12 -04:00
15a854e2cd Allow BatchSamplerShard to not even out batches (#776)
* Allow BatchSamplerShard to not even out batches

* Update src/accelerate/data_loader.py

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

* Add early error

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2022-10-19 11:46:25 -04:00
63d0653647 Add defaults for launchers (#778)
* Add defaults

* DeepSpeed
2022-10-19 10:19:04 -04:00
21b7f15c96 Fix flakey wandb test (#775)
* Fix flakey wandb
2022-10-18 16:47:31 -04:00
49cd8d37e6 Fix all github actions issues + depreciations (#773)
* Fix all github actions issues + depreciations
2022-10-18 12:27:05 -04:00
1eafa55b80 Fix number of devices in get_balanced_memory (#774)
* Fix number of devices in get_balanced_memory

* Add test
2022-10-18 11:57:52 -04:00
9114fb09d5 Regression cli tests (#772)
* New cli tests

* Add CLI testing

* Makefile + tests

* Segment out CLI in makefile better
2022-10-18 11:07:36 -04:00
5e8ab12c3d Move io_same_device hook to before attach_align_device hook on cpu_offload and disk_offload. (#768)
* Move io_same_device hook to before attach_align_device hook on cpu_offload and disk_offload.

That way we can keep the changes on forward method for the whole module without deleting the hook we want to keep: the one with execution device and configurations on how to move the tensors between devices.

* add append flag to add hook to enable usage of sequential hooks

* add tests to append hooks

* add docstring to append flag

* address review comments

* move io_same_device hook to top on cpu_offload and disk_offload

* trigger ci
2022-10-18 10:13:52 -04:00
a63511107b updating docs to use fork of megatron-lm and minor example/docs fix (#766)
* updating docs to use fork of megatron-lm and minor example fix

* Update megatron_lm_gpt_pretraining.py

* minor example fixes to have logs in sync with config and args

* Update megatron_lm_gpt_pretraining.py
2022-10-17 21:58:59 +05:30
Sam
a7334df955 Only wrap modules in DDP if they require grad (#761) 2022-10-17 10:14:42 -04:00
4a7268df9c update docs (#759)
* addressing comments

* minor doc updates

* Update training_zoo.mdx

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-10-15 08:22:49 +05:30
148f6dcaaa refactor (#758) 2022-10-15 08:05:06 +05:30
Sam
693d46826e Return unclipped gradient from grad_clip_norm_ (#756) 2022-10-14 10:04:43 -04:00
dfba92adcd ensure megatron is 2.2.0+ (#755)
* ensure megatron is 2.2.0+

* address comment

* formatting
2022-10-14 09:49:12 +05:30
4dc5049927 Change num_cpu_threads_per_process default (#753)
* Change num_cpu_threads_per_process

* Adjust based on Sylvain's feedback

* Explicit checking for None

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-10-13 07:26:27 +10:00
e3ebf176b8 Megatron-LM integration (#667)
* Megatron-LM integration

* add code and resolve comment

Co-Authored-By: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* add code

* add code

* fix many 🐛

* add code

* add code and reverting tracker processes

* updating logging utilities, fixing Pipeline Parallelism and dataset/dataloader 🐛 s

1. Fixing bugs related to Pipeline Parallelism
2. Fixing bugs related to dataloaders/datasets.
3. Fixing logging utilities so that all logging and tracking happens on last process when using Megatron.

* addressing comments

* resolving comments

* update code

* refactoring and adding code to support custom implementation of`AbstractTrainStep` class

* minor change

* Many fixes for supporting custom TrainStep and Megatron Indexed Datasets

* Add code, 🐛 fixes and an initial doc file with headings

* fixing a big 🐛 related to loading checkpoints

* adding doc and an example

* example test CI

* docs

* more docs

* more doc changes

* more doc changes

* docs

* more docs

* doc fixing

* trying if we can directly import megatronlm utils

* doc fixing and throwing error if megatron isn't available.

* resolving comments

* fixes to bert and t5 and more docs

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-10-13 00:34:08 +05:30
2697bebeb4 Add gpu_ids to SageMakerConfig though it should never be set (#751) 2022-10-12 05:48:47 +10:00
1f25825211 Use HTML relative paths for tiles (#749) 2022-10-11 21:08:18 +02:00
b04776159e [Device map] nn.Parameter don't have children (#747)
* [Device map] nn.Parameter don't have children

* Update src/accelerate/utils/modeling.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-10-10 15:13:08 +02:00
9179e6bf85 Fix num_processes is not defined (#746)
* Fix num_processes is not defined

* Also reorganize questions

Co-authored-by: Sylvain Gugger <Sylvain.gugger@gmail.com>
2022-10-07 11:53:05 -04:00
ba88a710eb [ds launcher] un-hijack PYTHONPATH (#741)
* [ds launcher] un-hijack PYTHONPATH

* move to utils

* improve doc, arg names

* fix

* Update src/accelerate/commands/launch.py

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>

* style

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
2022-10-06 21:56:51 +05:30
66edfe103a Add non_blocking kwarg to send_to_device() (#607) 2022-10-05 20:51:59 +02:00
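
For context, the new flag in use (CPU target here, where it is simply ignored):

```python
import torch
from accelerate.utils import send_to_device

batch = {"input_ids": torch.ones(2, 4, dtype=torch.long), "mask": torch.ones(2, 4)}
# non_blocking=True lets host-to-device copies overlap with compute when the
# source tensors live in pinned memory; on a CPU-only run it has no effect.
batch = send_to_device(batch, torch.device("cpu"), non_blocking=True)
```
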
ec183666b6 v0.14.0.dev0 2022-10-05 14:28:39 -04:00
340 changed files with 54575 additions and 7255 deletions


@ -0,0 +1,29 @@
// File only needed for VSCode users to have proper Docker based interpreters
{
"name": "accelerate_dev_environment",
"build": {
// ACTION NEEDED: comment/uncomment the relevant line depending on whether you are in a CPU/GPU environment
"dockerfile": "../docker/accelerate-cpu/Dockerfile"
// "dockerfile": "../docker/accelerate-gpu/Dockerfile"
},
"runArgs": [
// ACTION NEEDED: uncomment the next line if your local machine has GPUs available
// "--gpus", "all",
// Enable the docker container to access system resources
"--ipc", "host"
],
"remoteEnv": {
"PYTHONPATH": "${containerEnv:PATH}:${containerWorkspaceFolder}"
},
"customizations": {
"vscode": {
"extensions": [
// Ensure we have IntelliSense in VSCode when running inside container
"ms-python.python"
]
}
},
"workspaceFolder": "/workspaces/accelerate",
// Need git for VSCode to color code modifications. Only runs when building environment.
"onCreateCommand": "apt-get update && apt-get install -y git && pip install -e '.[dev]'"
}


@ -1,6 +1,12 @@
name: "\U0001F41B Bug Report" name: "\U0001F41B Bug Report"
description: Submit a bug report to help us improve Accelerate description: Submit a bug report to help us improve Accelerate
body: body:
- type: markdown
attributes:
value: |
Thanks for taking the time to submit a bug report! 🐛
If this is not a bug related to the Accelerate library directly, but instead a general question about your code or the library specifically please use the [forums](https://discuss.huggingface.co/c/accelerate/18).
- type: textarea - type: textarea
id: system-info id: system-info
attributes: attributes:
@ -55,4 +61,3 @@ body:
attributes: attributes:
label: Expected behavior label: Expected behavior
description: "A clear and concise description of what you would expect to happen." description: "A clear and concise description of what you would expect to happen."
render: Shell

.github/PULL_REQUEST_TEMPLATE.md

@ -0,0 +1,47 @@
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/accelerate/blob/main/CONTRIBUTING.md#submitting-a-pull-request-pr),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/accelerate/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/accelerate/tree/main/docs#writing-documentation---specification).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
- Big modeling: @SunMarc
- Fully-Sharded Data Parallelism: @SunMarc @zach-huggingface
- DeepSpeed: @SunMarc @zach-huggingface
- Command Line Interface: @SunMarc @zach-huggingface
- Documentation: @SunMarc @zach-huggingface
- Core parts of the library: @BenjaminBossan @SunMarc @zach-huggingface
- Maintained examples: @SunMarc or @zach-huggingface
-->


@ -15,50 +15,90 @@ jobs:
outputs: outputs:
version: ${{ steps.step1.outputs.version }} version: ${{ steps.step1.outputs.version }}
steps: steps:
- uses: actions/checkout@v3 - uses: actions/checkout@v4
- id: step1 - id: step1
run: echo "::set-output name=version::$(python setup.py --version)" run: echo "version=$(python setup.py --version)" >> $GITHUB_OUTPUT
version-cpu: version-cpu:
name: "Latest Accelerate CPU [version]" name: "Latest Accelerate CPU [version]"
runs-on: ubuntu-latest runs-on:
group: aws-general-8-plus
needs: get-version needs: get-version
steps: steps:
- name: Set up Docker Buildx - name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1 uses: docker/setup-buildx-action@v2
- name: Check out code
uses: actions/checkout@v2
- name: Login to DockerHub - name: Login to DockerHub
uses: docker/login-action@v1 uses: docker/login-action@v2
with: with:
username: ${{ secrets.DOCKERHUB_USERNAME }} username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }} password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Build and Push CPU - name: Build and Push CPU
uses: docker/build-push-action@v2 uses: docker/build-push-action@v4
with: with:
context: ./docker/accelerate-cpu file: docker/accelerate-cpu/Dockerfile
push: true push: true
tags: huggingface/accelerate-cpu:${{needs.get-version.outputs.version}} tags: huggingface/accelerate:cpu-release-${{ needs.get-version.outputs.version }}
version-cuda: version-cuda:
name: "Latest Accelerate GPU [version]" name: "Latest Accelerate GPU [version]"
runs-on: ubuntu-latest runs-on:
group: aws-g6-4xlarge-plus
needs: get-version needs: get-version
steps: steps:
- name: Set up Docker Buildx - name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1 uses: docker/setup-buildx-action@v2
- name: Check out code
uses: actions/checkout@v2
- name: Login to DockerHub - name: Login to DockerHub
uses: docker/login-action@v1 uses: docker/login-action@v2
with: with:
username: ${{ secrets.DOCKERHUB_USERNAME }} username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }} password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Build and Push GPU - name: Build and Push GPU
uses: docker/build-push-action@v2 uses: docker/build-push-action@v4
with: with:
context: ./docker/accelerate-gpu file: docker/accelerate-gpu/Dockerfile
push: true push: true
tags: huggingface/accelerate-gpu:${{needs.get-version.outputs.version}} tags: huggingface/accelerate:gpu-release-${{needs.get-version.outputs.version}}
version-cuda-deepspeed:
name: "Latest Accelerate GPU DeepSpeed [version]"
runs-on:
group: aws-g6-4xlarge-plus
needs: get-version
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Build and Push GPU
uses: docker/build-push-action@v4
with:
file: docker/accelerate-gpu-deepspeed/Dockerfile
push: true
tags: huggingface/accelerate:gpu-deepspeed-release-${{needs.get-version.outputs.version}}
version-cuda-fp8-transformerengine:
name: "Latest Accelerate GPU FP8 TransformerEngine [version]"
runs-on:
group: aws-g6-4xlarge-plus
needs: get-version
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Build and Push GPU
uses: docker/build-push-action@v4
with:
file: docker/accelerate-gpu/Dockerfile
push: true
tags: huggingface/accelerate:gpu-fp8-transformerengine-release-${{needs.get-version.outputs.version}}


@ -16,20 +16,20 @@ jobs:
     outputs:
       changed: ${{ steps.was_changed.outputs.changed }}
     steps:
-    - uses: actions/checkout@v3
+    - uses: actions/checkout@v4
       with:
         fetch-depth: "2"
     - name: Get changed files
       id: changed-files
-      uses: tj-actions/changed-files@v22.2
+      uses: tj-actions/changed-files@3f54ebb830831fc121d3263c1857cfbdc310cdb9 #v42
     - name: Was setup changed
       id: was_changed
       run: |
         for file in ${{ steps.changed-files.outputs.all_changed_files }}; do
           if [ `basename "${file}"` == "setup.py" ]; then
-            echo ::set-output name=changed::"1"
+            echo "changed=1" >> $GITHUB_OUTPUT
           fi
         done
@ -42,4 +42,9 @@ jobs:
   run-merge-tests:
     needs: build-docker-containers
     if: always()
     uses: ./.github/workflows/run_merge_tests.yml
+  run-integration-tests:
+    needs: build-docker-containers
+    if: always()
+    uses: ./.github/workflows/self_hosted_integration_tests.yml


@ -13,42 +13,104 @@ concurrency:
jobs: jobs:
latest-cpu: latest-cpu:
name: "Latest Accelerate CPU [dev]" name: "Latest Accelerate CPU [dev]"
runs-on: ubuntu-latest runs-on:
group: aws-general-8-plus
steps: steps:
- name: Set up Docker Buildx - name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1 uses: docker/setup-buildx-action@v2
- name: Check out code
uses: actions/checkout@v2
- name: Login to DockerHub - name: Login to DockerHub
uses: docker/login-action@v1 uses: docker/login-action@v2
with: with:
username: ${{ secrets.DOCKERHUB_USERNAME }} username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }} password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Get current date
id: date
run: |
echo "date=$(date '+%Y-%m-%d')" >> $GITHUB_ENV
- name: Build and Push CPU - name: Build and Push CPU
uses: docker/build-push-action@v2 uses: docker/build-push-action@v4
with: with:
context: ./docker/accelerate-cpu file: docker/accelerate-cpu/Dockerfile
push: true push: true
tags: huggingface/accelerate-cpu tags: |
huggingface/accelerate:cpu-nightly
huggingface/accelerate:cpu-nightly-${{ env.date }}
latest-cuda: latest-cuda:
name: "Latest Accelerate GPU [dev]" name: "Latest Accelerate GPU [dev]"
runs-on: ubuntu-latest runs-on:
group: aws-g6-4xlarge-plus
steps: steps:
- name: Set up Docker Buildx - name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1 uses: docker/setup-buildx-action@v2
- name: Check out code
uses: actions/checkout@v2
- name: Login to DockerHub - name: Login to DockerHub
uses: docker/login-action@v1 uses: docker/login-action@v2
with: with:
username: ${{ secrets.DOCKERHUB_USERNAME }} username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }} password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Get current date
id: date
run: |
echo "date=$(date '+%Y-%m-%d')" >> $GITHUB_ENV
- name: Build and Push GPU - name: Build and Push GPU
uses: docker/build-push-action@v2 uses: docker/build-push-action@v4
with: with:
context: ./docker/accelerate-gpu file: docker/accelerate-gpu/Dockerfile
push: true push: true
tags: huggingface/accelerate-gpu tags: |
huggingface/accelerate:gpu-nightly
huggingface/accelerate:gpu-nightly-${{ env.date }}
latest-cuda-deepspeed:
name: "Latest Accelerate GPU DeepSpeed [dev]"
runs-on:
group: aws-g6-4xlarge-plus
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Get current date
id: date
run: |
echo "date=$(date '+%Y-%m-%d')" >> $GITHUB_ENV
- name: Build and Push GPU
uses: docker/build-push-action@v4
with:
file: docker/accelerate-gpu-deepspeed/Dockerfile
push: true
tags: |
huggingface/accelerate:gpu-deepspeed-nightly
huggingface/accelerate:gpu-deepspeed-nightly-${{ env.date }}
latest-cuda-fp8-transformerengine:
name: "Latest Accelerate GPU FP8 TransformerEngine [dev]"
runs-on:
group: aws-g6-4xlarge-plus
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Get current date
id: date
run: |
echo "date=$(date '+%Y-%m-%d')" >> $GITHUB_ENV
# Get the previous month
echo "base_year=$(date -d 'last month' '+%y')" >> $GITHUB_ENV
echo "base_month=$(date -d 'last month' '+%m')" >> $GITHUB_ENV
- name: Build and Push GPU
uses: docker/build-push-action@v4
with:
file: benchmarks/fp8/transformer_engine/Dockerfile
push: true
tags: huggingface/accelerate:gpu-fp8-transformerengine-nightly-${{ env.date }}
build-args: |
BASE_YEAR=${{ env.base_year }}
BASE_MONTH=${{ env.base_month }}


@ -13,5 +13,6 @@ jobs:
     with:
       commit_sha: ${{ github.sha }}
       package: accelerate
+      custom_container: huggingface/transformers-doc-builder
     secrets:
-      token: ${{ secrets.HUGGINGFACE_PUSH }}
+      hf_token: ${{ secrets.HF_DOC_BUILD_PUSH }}


@ -14,3 +14,4 @@ jobs:
       commit_sha: ${{ github.event.pull_request.head.sha }}
       pr_number: ${{ github.event.number }}
       package: accelerate
+      custom_container: huggingface/transformers-doc-builder


@ -1,13 +0,0 @@
name: Delete dev documentation
on:
pull_request:
types: [ closed ]
jobs:
delete:
uses: huggingface/doc-builder/.github/workflows/delete_doc_comment.yml@main
with:
pr_number: ${{ github.event.number }}
package: accelerate

.github/workflows/fp8_runner.yml

@ -0,0 +1,37 @@
name: Test FP8 Runner
on:
workflow_dispatch:
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
jobs:
set-prev-day:
runs-on: ubuntu-latest
outputs:
prev-day: ${{ steps.set-prev-day.outputs.prev-day }}
steps:
- name: Set PREV_DAY
id: set-prev-day
run: |
PREV_DAY=$(date -d "yesterday" '+%Y-%m-%d')
echo "prev-day=$PREV_DAY" >> $GITHUB_OUTPUT
run-fp8-tests:
needs: set-prev-day
runs-on:
group: aws-g6e-12xlarge
container:
image: huggingface/accelerate:gpu-fp8-transformerengine-nightly-${{ needs.set-prev-day.outputs.prev-day }}
options: --gpus all --shm-size "16gb"
steps:
- uses: actions/checkout@v3
- name: Install the library
run: |
pip install -e .[test_prod,test_fp8]
- name: Show installed libraries
run: |
pip freeze
- name: Run TE FP8 tests
run: |
python -m pytest -s -v ./tests/test_fp8.py

.github/workflows/gaudi3_scheduled.yml

@ -0,0 +1,87 @@
name: Gaudi3 tests (scheduled)
on:
workflow_dispatch:
schedule: # every day at 6 AM UTC
- cron: "0 6 * * *"
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
run-gaudi3-tests:
runs-on:
group: itac-bm-emr-gaudi3-dell-2gaudi
container:
image: docker://vault.habana.ai/gaudi-docker/1.21.1/ubuntu22.04/habanalabs/pytorch-installer-2.6.0:latest
options: --runtime=habana --shm-size=64G --cap-add=sys_nice --env HABANA_VISIBLE_DEVICES
env:
OMPI_MCA_btl_vader_single_copy_mechanism: none
PT_ENABLE_INT64_SUPPORT: 1
PT_HPU_LAZY_MODE: 0
RUN_SLOW: 1
steps:
- name: HL-SMI (1)
run: |
hl-smi
echo "HABANA_VISIBLE_DEVICES=${HABANA_VISIBLE_DEVICES}"
echo "HABANA_VISIBLE_MODULES=${HABANA_VISIBLE_MODULES}"
- name: Extract HPU visible modules
id: add-modules
run: |
export HABANA_VISIBLE_MODULES=$(hl-smi -Q module_id -f csv,noheader | tr '\n' ',' | sed 's/,$//')
echo "HABANA_VISIBLE_MODULES=${HABANA_VISIBLE_MODULES}" >> $GITHUB_ENV
- name: HL-SMI (2)
run: |
hl-smi
echo "HABANA_VISIBLE_DEVICES=${HABANA_VISIBLE_DEVICES}"
echo "HABANA_VISIBLE_MODULES=${HABANA_VISIBLE_MODULES}"
- name: Checkout to Accelerate
uses: actions/checkout@v4
- name: Install Accelerate with Transformers & DeepSpeed
run: |
pip install -e .[testing] \
git+https://github.com/HabanaAI/DeepSpeed.git@1.20.0 \
git+https://github.com/huggingface/transformers.git
- name: Run CLI tests
if: ${{ !cancelled() && (success() || failure()) }}
run: |
make test_cli
- name: Run Core tests
if: ${{ !cancelled() && (success() || failure()) }}
run: |
make test_core
- name: Run Big Modeling tests
if: ${{ !cancelled() && (success() || failure()) }}
run: |
make test_big_modeling
- name: Run DeepSpeed integration tests
if: ${{ !cancelled() && (success() || failure()) }}
run: |
make test_deepspeed
- name: Run FSDP integration tests
if: ${{ !cancelled() && (success() || failure()) }}
run: |
make test_fsdp
- name: Run TP integration tests
if: ${{ !cancelled() && (success() || failure()) }}
run: |
make test_tp
- name: Run Examples tests
if: ${{ !cancelled() && (success() || failure()) }}
run: |
make test_examples

.github/workflows/integration_tests.yml

@ -0,0 +1,58 @@
# CI for specifically ensuring integrations work fine (`transformers` mainly)
# Useful tips:
# - New integrations to test should have its own job, and follow a strategy method where we check both
# the pypi and github versions.
# - When checking the latest release of the integration, use
# git checkout $(git describe --tags `git rev-list --tags --max-count=1`) to get the latest release.
name: Integration Tests
on:
pull_request:
paths:
- "src/**"
- "tests/**"
- ".github/**"
- "examples/**"
- "setup.py"
types: [opened, synchronize, reopened]
env:
HF_HOME: ~/hf_cache
jobs:
run-trainer-tests:
runs-on: ubuntu-latest
strategy:
fail-fast: false
steps:
- uses: actions/checkout@v4
- name: Set up python 3.9
uses: actions/setup-python@v5
with:
python-version: 3.9
cache: 'pip'
cache-dependency-path: 'setup.py'
- name: Install Accelerate from source
run: |
pip install --upgrade pip
pip install -e .
- name: Clone and install transformers
run: |
cd ..
git clone https://github.com/huggingface/transformers
cd transformers
pip install .[torch,testing]
- name: Show installed libraries
run: |
pip freeze
- name: Run Trainer tests
env:
WANDB_DISABLED: true
run: |
cd ../transformers
pytest -sv tests/trainer


@ -8,81 +8,226 @@ on:
env: env:
RUN_SLOW: "yes" RUN_SLOW: "yes"
IS_GITHUB_CI: "1" IS_GITHUB_CI: "1"
SLACK_API_TOKEN: ${{ secrets.SLACK_API_TOKEN }}
jobs: jobs:
run_all_tests_single_gpu: run_core_tests_single_gpu:
runs-on: [self-hosted, docker-gpu, multi-gpu] runs-on:
group: aws-g6-4xlarge-plus
env: env:
CUDA_VISIBLE_DEVICES: "0" CUDA_VISIBLE_DEVICES: "0"
TEST_TYPE: "single_gpu"
container: container:
image: huggingface/accelerate-gpu:latest image: huggingface/accelerate:gpu-nightly
options: --gpus all --shm-size "16gb" options: --gpus all --shm-size "16gb"
defaults: defaults:
run: run:
working-directory: accelerate/
shell: bash shell: bash
steps: steps:
- name: Update clone & pip install - name: Update clone & pip install
run: | run: |
source activate accelerate source activate accelerate
git config --global --add safe.directory '*' git clone https://github.com/huggingface/accelerate;
git fetch && git checkout ${{ github.sha }} cd accelerate;
git checkout ${{ github.sha }};
pip install -e . --no-deps pip install -e . --no-deps
pip install pytest-reportlog pip install pytest-reportlog tabulate
- name: Show installed libraries
run: |
source activate accelerate;
pip freeze
- name: Run test on GPUs - name: Run test on GPUs
working-directory: accelerate
run: | run: |
source activate accelerate source activate accelerate
make test make test
- name: Run examples on GPUs - name: Run examples on GPUs
working-directory: accelerate
if: always()
run: | run: |
source activate accelerate source activate accelerate
pip uninstall comet_ml -y pip uninstall comet_ml -y
make test_examples make test_examples
- name: Generate Report - name: Generate Report
working-directory: accelerate
if: always() if: always()
run: | run: |
pip install slack_sdk tabulate
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
run_all_tests_multi_gpu: run_deepspeed_tests_single_gpu:
runs-on: [self-hosted, docker-gpu, multi-gpu] runs-on:
group: aws-g6-4xlarge-plus
env: env:
CUDA_VISIBLE_DEVICES: "0,1" CUDA_VISIBLE_DEVICES: "0"
TEST_TYPE: "single_gpu_deepspeed"
container: container:
image: huggingface/accelerate-gpu:latest image: huggingface/accelerate:gpu-deepspeed-nightly
options: --gpus all --shm-size "16gb" options: --gpus all --shm-size "16gb"
defaults: defaults:
run: run:
working-directory: accelerate/
shell: bash shell: bash
steps: steps:
- name: Update clone - name: Update clone & pip install
run: | run: |
source activate accelerate source activate accelerate
git config --global --add safe.directory '*' git clone https://github.com/huggingface/accelerate;
git fetch && git checkout ${{ github.sha }} cd accelerate;
git checkout ${{ github.sha }};
pip install -e . --no-deps pip install -e . --no-deps
pip install pytest-reportlog pip install pytest-reportlog tabulate
- name: Run core and big modeling tests on GPUs - name: Show installed libraries
run: |
source activate accelerate;
pip freeze
- name: Run test on GPUs
working-directory: accelerate
run: | run: |
source activate accelerate source activate accelerate
make test_big_modeling make test_deepspeed
make test_core
- name: Run Integration tests on GPUs - name: Run Integration tests on GPUs
working-directory: accelerate
if: always()
run: | run: |
source activate accelerate source activate accelerate
make test_integrations make test_integrations
- name: Run examples on GPUs - name: Run examples on GPUs
working-directory: accelerate
if: always()
run: | run: |
source activate accelerate source activate accelerate
pip uninstall comet_ml -y pip uninstall comet_ml -y
make test_examples make test_examples
- name: Generate Report - name: Generate Report
working-directory: accelerate
if: always() if: always()
run: | run: |
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY pip install slack_sdk tabulate
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
run_core_tests_multi_gpu:
runs-on:
group: aws-g6-12xlarge-plus
env:
CUDA_VISIBLE_DEVICES: "0,1"
TEST_TYPE: "multi_gpu"
container:
image: huggingface/accelerate:gpu-nightly
options: --gpus all --shm-size "16gb"
defaults:
run:
shell: bash
steps:
- name: Update clone
run: |
source activate accelerate
git clone https://github.com/huggingface/accelerate;
cd accelerate;
git checkout ${{ github.sha }};
pip install -e . --no-deps
pip install pytest-reportlog tabulate
- name: Show installed libraries
run: |
source activate accelerate;
pip freeze
- name: Run core and big modeling tests on GPUs
working-directory: accelerate
run: |
source activate accelerate
make test_core
make test_big_modeling
make test_cli
- name: Run Integration tests on GPUs
working-directory: accelerate
if: always()
run: |
source activate accelerate
make test_integrations
- name: Run examples on GPUs
working-directory: accelerate
if: always()
run: |
source activate accelerate
pip uninstall comet_ml -y
make test_examples
- name: Generate Report
working-directory: accelerate
if: always()
run: |
pip install slack_sdk tabulate
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
run_deepspeed_tests_multi_gpu:
runs-on:
group: aws-g6-12xlarge-plus
env:
CUDA_VISIBLE_DEVICES: "0,1"
TEST_TYPE: "multi_gpu_deepspeed"
container:
image: huggingface/accelerate:gpu-deepspeed-nightly
options: --gpus all --shm-size "16gb"
defaults:
run:
shell: bash
steps:
- name: Update clone
run: |
source activate accelerate
git clone https://github.com/huggingface/accelerate;
cd accelerate;
git checkout ${{ github.sha }};
pip install -e . --no-deps
pip install pytest-reportlog tabulate
- name: Show installed libraries
run: |
source activate accelerate;
pip freeze
- name: Run DeepSpeed tests
working-directory: accelerate
run: |
source activate accelerate
make test_deepspeed
- name: Run Integration tests on GPUs
working-directory: accelerate
if: always()
run: |
source activate accelerate
make test_integrations
- name: Run examples on GPUs
working-directory: accelerate
if: always()
run: |
source activate accelerate
pip uninstall comet_ml -y
make test_examples
- name: Generate Report
working-directory: accelerate
if: always()
run: |
pip install slack_sdk tabulate
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
run-integration-tests:
if: always()
uses: ./.github/workflows/self_hosted_integration_tests.yml

.github/workflows/pr_style_bot.yml

@ -0,0 +1,19 @@
# To run this bot, comment "@bot /style" on a PR
name: Style Bot
on:
issue_comment:
types: [created]
permissions:
contents: write
pull-requests: write
jobs:
style:
uses: huggingface/huggingface_hub/.github/workflows/style-bot-action.yml@main
with:
python_quality_dependencies: "[quality]"
style_command_type: "default"
secrets:
bot_token: ${{ secrets.GITHUB_TOKEN }}


@ -6,12 +6,19 @@ jobs:
   quality:
     runs-on: ubuntu-latest
     steps:
-    - uses: actions/checkout@v2
+    - uses: actions/checkout@v4
-    - name: Set up Python 3.7
+    - name: Set up Python 3.9
-      uses: actions/setup-python@v3
+      uses: actions/setup-python@v5
       with:
-        python-version: 3.7
+        python-version: 3.9
+        cache: 'pip'
+        cache-dependency-path: 'setup.py'
     - name: Install Python dependencies
       run: pip install -e .[quality]
     - name: Run Quality check
       run: make quality
+    - name: Check if failure
+      if: ${{ failure() }}
+      run: |
+        echo "Quality check failed. Please ensure the right dependency versions are installed with 'pip install -e .[quality]' and rerun 'make style; make quality;'" >> $GITHUB_STEP_SUMMARY


@ -9,71 +9,180 @@ env:
IS_GITHUB_CI: "1" IS_GITHUB_CI: "1"
jobs: jobs:
run_all_tests_single_gpu: run_core_tests_single_gpu:
runs-on: [self-hosted, docker-gpu, multi-gpu] runs-on:
group: aws-g6-4xlarge-plus
env: env:
CUDA_VISIBLE_DEVICES: "0" CUDA_VISIBLE_DEVICES: "0"
container: container:
image: huggingface/accelerate-gpu:latest image: huggingface/accelerate:gpu-nightly
options: --gpus all --shm-size "16gb" options: --gpus all --shm-size "16gb"
defaults: defaults:
run: run:
working-directory: accelerate/
shell: bash shell: bash
steps: steps:
- name: Update clone & pip install - name: Install accelerate
run: | run: |
source activate accelerate source activate accelerate;
git config --global --add safe.directory '*' git clone https://github.com/huggingface/accelerate;
git fetch && git checkout ${{ github.sha }} cd accelerate;
pip install -e .[testing,test_trackers] git checkout ${{ github.sha }};
pip install pytest-reportlog pip install -e .[testing,test_trackers] -U;
pip install pytest-reportlog tabulate ;
- name: Show installed libraries
run: |
source activate accelerate;
pip freeze
- name: Run CLI tests (use make cli)
working-directory: accelerate
run: |
source activate accelerate;
make test_cli
- name: Run test on GPUs - name: Run test on GPUs
working-directory: accelerate
if: always()
run: | run: |
source activate accelerate source activate accelerate;
make test make test
- name: Run examples on GPUs - name: Run examples on GPUs
working-directory: accelerate
if: always()
run: | run: |
source activate accelerate source activate accelerate;
pip uninstall comet_ml -y pip uninstall comet_ml -y;
make test_examples make test_examples
- name: Generate Report - name: Generate Report
working-directory: accelerate
if: always() if: always()
run: | run: |
pip install tabulate;
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
run_all_tests_multi_gpu: run_deepspeed_tests_single_gpu:
runs-on: [self-hosted, docker-gpu, multi-gpu] runs-on:
group: aws-g6-4xlarge-plus
env:
CUDA_VISIBLE_DEVICES: "0"
container: container:
image: huggingface/accelerate-gpu:latest image: huggingface/accelerate:gpu-deepspeed-nightly
options: --gpus all --shm-size "16gb"
defaults:
run:
shell: bash
steps:
- name: Install accelerate
run: |
source activate accelerate;
git clone https://github.com/huggingface/accelerate;
cd accelerate;
git checkout ${{ github.sha }};
pip install -e .[testing,test_trackers] -U;
pip install pytest-reportlog tabulate ;
- name: Show installed libraries
run: |
source activate accelerate;
pip freeze
- name: Run test on GPUs
working-directory: accelerate
if: always()
run: |
source activate accelerate;
make test_deepspeed
- name: Generate Report
working-directory: accelerate
if: always()
run: |
pip install tabulate;
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
run_core_tests_multi_gpu:
runs-on:
group: aws-g6-12xlarge-plus
env:
CUDA_VISIBLE_DEVICES: 0,1
container:
image: huggingface/accelerate:gpu-nightly
options: --gpus all --shm-size "16gb" options: --gpus all --shm-size "16gb"
defaults: defaults:
run: run:
working-directory: accelerate/
shell: bash shell: bash
steps: steps:
- name: Update clone - name: Update clone
run: | run: |
source activate accelerate source activate accelerate;
git config --global --add safe.directory '*' git clone https://github.com/huggingface/accelerate;
git fetch && git checkout ${{ github.sha }} cd accelerate;
pip install -e .[testing,test_trackers] git checkout ${{ github.sha }};
pip install pytest-reportlog pip install -e .[testing,test_trackers] -U;
pip install pytest-reportlog tabulate
- name: Show installed libraries
run: |
source activate accelerate;
pip freeze
- name: Run test on GPUs - name: Run test on GPUs
working-directory: accelerate
run: | run: |
source activate accelerate source activate accelerate;
make test make test
- name: Run examples on GPUs - name: Run examples on GPUs
working-directory: accelerate
if: always()
run: | run: |
source activate accelerate source activate accelerate;
pip uninstall comet_ml -y pip uninstall comet_ml -y;
make test_examples make test_examples
- name: Generate Report - name: Generate Report
working-directory: accelerate
if: always() if: always()
run: | run: |
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY source activate accelerate;
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
run_deepspeed_tests_multi_gpu:
runs-on:
group: aws-g6-12xlarge-plus
container:
image: huggingface/accelerate:gpu-deepspeed-nightly
options: --gpus all --shm-size "16gb"
defaults:
run:
shell: bash
steps:
- name: Install accelerate
run: |
source activate accelerate;
git clone https://github.com/huggingface/accelerate;
cd accelerate;
git checkout ${{ github.sha }};
pip install -e .[testing,test_trackers] -U;
pip install pytest-reportlog tabulate ;
- name: Show installed libraries
run: |
source activate accelerate;
pip freeze
- name: Run test on GPUs
working-directory: accelerate
if: always()
run: |
source activate accelerate;
make test_deepspeed
- name: Generate Report
working-directory: accelerate
if: always()
run: |
pip install tabulate;
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY


@ -0,0 +1,127 @@
# CI for specifically ensuring integrations work fine (`transformers` mainly) on GPUs
# Useful tips:
# - `working-directory` should be set to the root of the repo, which is cloned on the actual CI runner.
# It follows the directory structure of `actions-runner/_work/{repo_name}/{repo_name}/{cloned_repo} on
# prem, but in Actions setting `working-directory` looks just in the `{repo_name}` level.
# - New integrations to test should have its own job, and follow a strategy method where we check both
# the pypi and github versions.
# - Workflow call lets this be called from `build_and_run_tests.yml`
# - When using a docker container, it's recommended to set `--shm-size`, we use 16gb.
name: Integration Tests (push to "main")
on:
workflow_call:
workflow_dispatch:
env:
HF_HOME: ~/hf_cache
defaults:
run:
shell: bash
jobs:
run-trainer-tests:
container:
image: huggingface/accelerate:gpu-deepspeed-nightly
options: --gpus all --shm-size "16gb"
runs-on:
group: aws-g6-12xlarge-plus
strategy:
fail-fast: false
matrix:
cuda_visible_devices: [
"0",
"0,1"
]
steps:
- name: Install transformers
run: |
source activate accelerate;
git clone https://github.com/huggingface/transformers --depth 1;
cd transformers;
pip install .[torch,deepspeed-testing];
cd ..;
- name: Install accelerate
run: |
source activate accelerate;
git clone https://github.com/huggingface/accelerate;
cd accelerate;
git checkout ${{ github.sha }} ;
pip install -e .[testing];
pip uninstall comet_ml wandb dvclive -y
cd ..;
- name: Show installed libraries
run: |
source activate accelerate;
pip freeze
- name: Run trainer tests
working-directory: transformers/
env:
CUDA_VISIBLE_DEVICES: ${{ matrix.cuda_visible_devices }}
WANDB_DISABLED: true
run: |
source activate accelerate;
pytest -sv tests/trainer
- name: Run deepspeed tests
working-directory: transformers/
env:
CUDA_VISIBLE_DEVICES: ${{ matrix.cuda_visible_devices }}
WANDB_DISABLED: true
if: always()
run: |
source activate accelerate;
pytest -sv tests/deepspeed
- name: Run transformers examples tests
working-directory: transformers/
env:
CUDA_VISIBLE_DEVICES: ${{ matrix.cuda_visible_devices }}
WANDB_DISABLED: true
run: |
source activate accelerate
pip install -r examples/pytorch/_tests_requirements.txt
pytest -sv examples/pytorch/test_accelerate_examples.py examples/pytorch/test_pytorch_examples.py
run-skorch-tests:
container:
image: huggingface/accelerate:gpu-nightly
options: --gpus all --shm-size "16gb"
runs-on:
group: aws-g6-12xlarge-plus
strategy:
fail-fast: false
steps:
- name: Install accelerate
run:
source activate accelerate;
git clone https://github.com/huggingface/accelerate;
cd accelerate;
git checkout ${{ github.sha }};
pip install -e .[testing];
cd ..
- name: Install skorch
run: |
source activate accelerate
git clone https://github.com/skorch-dev/skorch;
cd skorch;
git config --global --add safe.directory '*'
git checkout master && git pull
pip install .[test]
pip install flaky
- name: Show installed libraries
run: |
source activate accelerate;
pip freeze
- name: Run skorch tests
working-directory: skorch/
run: |
source activate accelerate;
pytest -sv -k TestAccelerate


@ -10,19 +10,24 @@ jobs:
     name: Close Stale Issues
     if: github.repository == 'huggingface/accelerate'
     runs-on: ubuntu-latest
+    permissions:
+      issues: write
+      pull-requests: write
     env:
       GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
     steps:
-    - uses: actions/checkout@v2
+    - uses: actions/checkout@v4
     - name: Setup Python
-      uses: actions/setup-python@v1
+      uses: actions/setup-python@v5
       with:
-        python-version: 3.7
+        python-version: 3.9
+        cache: 'pip'
+        cache-dependency-path: 'setup.py'
     - name: Install requirements
       run: |
         pip install PyGithub
     - name: Close stale issues
       run: |
         python utils/stale.py


@@ -23,11 +23,12 @@ jobs:
 matrix:
 pytorch-version: [
 latest,
-minimum
+minimum,
 ]
 test-kind: [
 test_prod,
 test_core,
+test_cli,
 test_big_modeling,
 test_deepspeed,
 test_fsdp,
@@ -37,34 +38,33 @@ jobs:
 test_rest
 ]
 steps:
-- uses: actions/checkout@v3
+- uses: actions/checkout@v4
-- name: Set up python 3.7
+- name: Set up python 3.9
-uses: actions/setup-python@v3
+uses: actions/setup-python@v5
 with:
-python-version: 3.7
+python-version: 3.9
+cache: 'pip'
+cache-dependency-path: 'setup.py'
-- name: Activate python cache
-uses: actions/cache@v3
-with:
-path: |
-${{ env.pythonLocation }}
-${{ env.HF_HOME }}
-key: ${{ env.pythonLocation }}-${{ matrix.test-kind }}-${{ hashFiles('setup.py') }}
 - name: Install the library
 run: |
+pip install --upgrade pip
 if [[ ${{ matrix.test-kind }} = test_prod ]]; then pip install -e .[test_prod]; fi
 if [[ ${{ matrix.test-kind }} != test_prod ]]; then pip install -e .[testing,test_trackers]; fi
 if [[ ${{ matrix.test-kind }} = test_rest ]]; then pip uninstall comet_ml -y; fi
-if [[ ${{ matrix.pytorch-version }} = minimum ]]; then pip install torch==1.6.0; fi
+if [[ ${{ matrix.pytorch-version }} = minimum ]]; then pip install torchvision==0.18.1 torch==2.3.1; fi
-pip install pytest-reportlog
+pip install pytest-reportlog tabulate setuptools importlib_metadata
+- name: Show installed libraries
+run: |
+pip freeze
 - name: Run Tests
+env:
+PYTORCH_VERSION: ${{ matrix.pytorch-version }}
 run: |
 make ${{ matrix.test-kind }}
 - name: Generate Report
 if: always()
 run: |
 python utils/log_reports.py >> $GITHUB_STEP_SUMMARY

.github/workflows/test_imports.yml (new file)

@@ -0,0 +1,55 @@
name: Run Import Tests
on:
pull_request:
paths:
- "src/**"
- "tests/**"
- ".github/**"
- "examples/**"
- "setup.py"
types: [opened, synchronize, reopened]
env:
HF_HOME: ~/hf_cache
TESTING_MOCKED_DATALOADERS: "1"
IS_GITHUB_CI: "1"
jobs:
run-tests:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
pytorch-version: [
latest,
minimum,
]
steps:
- uses: actions/checkout@v4
- name: Set up python 3.9
uses: actions/setup-python@v5
with:
python-version: 3.9
cache: 'pip'
cache-dependency-path: 'setup.py'
- name: Install the library
run: |
pip install -e .
pip install pytest-reportlog tabulate setuptools git+https://github.com/muellerzr/import-timer
- name: Show installed libraries
run: |
pip freeze
- name: Run Import Tests
env:
PYTORCH_VERSION: ${{ matrix.pytorch-version }}
run: |
pytest -sv tests/test_imports.py
- name: Generate Report
if: always()
run: |
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
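The job above drives `tests/test_imports.py` together with the `import-timer` package. As a hedged illustration of the underlying idea only (this is not the repository's test file and does not use the import-timer API; it relies solely on CPython's standard `-X importtime` flag, and the threshold is an arbitrary placeholder):

```python
import subprocess
import sys


def total_import_time_us(module: str) -> int:
    """Sum the per-module self times reported by `python -X importtime -c "import <module>"`."""
    result = subprocess.run(
        [sys.executable, "-X", "importtime", "-c", f"import {module}"],
        capture_output=True,
        text=True,
        check=True,
    )
    total = 0
    for line in result.stderr.splitlines():
        # Data lines look like: "import time:       123 |       456 | some.module"
        if line.startswith("import time:") and "self [us]" not in line:
            total += int(line.split("import time:")[1].split("|")[0].strip())
    return total


def test_accelerate_import_is_reasonably_fast():
    # The 5-second budget is an arbitrary placeholder; a real CI check would tune it.
    assert total_import_time_us("accelerate") < 5_000_000
```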

.github/workflows/trufflehog.yml (new file)

@@ -0,0 +1,15 @@
on:
push:
name: Secret Leaks
jobs:
trufflehog:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Secret Scanning
uses: trufflesecurity/trufflehog@main


@@ -0,0 +1,16 @@
name: Upload PR Documentation
on:
workflow_run:
workflows: ["Build PR Documentation"]
types:
- completed
jobs:
build:
uses: huggingface/doc-builder/.github/workflows/upload_pr_documentation.yml@main
with:
package_name: accelerate
secrets:
hf_token: ${{ secrets.HF_DOC_BUILD_PUSH }}
comment_bot_token: ${{ secrets.COMMENT_BOT_TOKEN }}

.gitignore

@@ -138,4 +138,7 @@ dmypy.json
 .DS_Store
 
 # More test things
 wandb
+
+# ruff
+.ruff_cache

.pre-commit-config.yaml (new file)

@@ -0,0 +1,13 @@
repos:
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.2.1
hooks:
- id: ruff
args:
- --fix
- id: ruff-format
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.5.0
hooks:
- id: check-merge-conflict
- id: check-yaml


@@ -123,12 +123,18 @@ Follow these steps to start contributing:
 4. Set up a development environment by running the following command in a conda or a virtual environment you've created for working on this library:
 ```bash
-$ pip install -e ".[quality]"
+$ pip install -e ".[dev]"
 ```
+This will install all testing and linting/code quality dependencies for the library (see `quality`, `test_dev`,
+`test_prod` targets in [`setup.py`](./setup.py)).
 (If accelerate was already installed in the virtual environment, remove
 it with `pip uninstall accelerate` before reinstalling it in editable
-mode with the `-e` flag.)
+mode with the `-e` flag).
+Alternatively, if you are using [Visual Studio Code](https://code.visualstudio.com/Download), the fastest way to get set up is by using
+the provided Dev Container. Documentation on how to get started with dev containers is available [here](https://code.visualstudio.com/docs/remote/containers).
 5. Develop the features on your branch.
@@ -149,7 +155,7 @@ Follow these steps to start contributing:
 $ make test
 ```
-`accelerate` relies on `black` and `isort` to format its source code
+`accelerate` relies on `ruff` to format its source code
 consistently. After you make changes, apply automatic style corrections and code verifications
 that can't be automated in one go with:
@@ -162,13 +168,21 @@ Follow these steps to start contributing:
 $ make style
 ```
-`accelerate` also uses `flake8` and a few custom scripts to check for coding mistakes. Quality
+`accelerate` also uses a few custom scripts to check for coding mistakes. Quality
 control runs in CI, however you can also run the same checks with:
 ```bash
 $ make quality
 ```
+You can also set up [`pre-commit`](https://pre-commit.com/) to run these checks
+automatically as Git commit hooks.
+```bash
+$ pip install pre-commit
+$ pre-commit install
+```
 Once you're happy with your changes, add changed files using `git add` and
 make a commit with `git commit` to record your changes locally:
@@ -232,4 +246,4 @@ $ python -m pytest -sv ./tests
 In fact, that's how `make test` is implemented (sans the `pip install` line)!
 You can specify a smaller set of tests in order to test only the feature
 you're working on.


@@ -1,6 +1,6 @@
-.PHONY: quality style test docs
+.PHONY: quality style test docs utils
-check_dirs := tests src examples benchmarks
+check_dirs := .
 # Check that source code meets quality standards
@@ -8,57 +8,94 @@ extra_quality_checks:
 python utils/check_copies.py
 python utils/check_dummies.py
 python utils/check_repo.py
-python utils/style_doc.py src/accelerate docs/source --max_len 119
+doc-builder style src/accelerate docs/source --max_len 119
 # this target runs checks on all files
 quality:
-black --check $(check_dirs)
+ruff check $(check_dirs)
-isort --check-only $(check_dirs)
+ruff format --check $(check_dirs)
-flake8 $(check_dirs)
+doc-builder style src/accelerate docs/source --max_len 119 --check_only
-python utils/style_doc.py src/accelerate docs/source --max_len 119 --check_only
 # Format source code automatically and check is there are any problems left that need manual fixing
 style:
-black $(check_dirs)
+ruff check $(check_dirs) --fix
-isort $(check_dirs)
+ruff format $(check_dirs)
-python utils/style_doc.py src/accelerate docs/source --max_len 119
+doc-builder style src/accelerate docs/source --max_len 119
 # Run tests for the library
-test:
-python -m pytest -s -v ./tests/ --ignore=./tests/test_examples.py $(if $(IS_GITHUB_CI),--report-log 'all.log',)
+test_core:
+python -m pytest -s -v ./tests/ \
+--ignore=./tests/test_big_modeling.py \
+--ignore=./tests/test_modeling_utils.py \
+--ignore=./tests/test_examples.py \
+--ignore=./tests/test_cli.py \
+--ignore=./tests/deepspeed \
+--ignore=./tests/fsdp \
+--ignore=./tests/tp \
+$(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_core.log",)
+test_cli:
+python -m pytest -s -v ./tests/test_cli.py $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_cli.log",)
 test_big_modeling:
-python -m pytest -s -v ./tests/test_big_modeling.py $(if $(IS_GITHUB_CI),--report-log 'big_modeling.log',)
+python -m pytest -s -v ./tests/test_big_modeling.py ./tests/test_modeling_utils.py $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_big_modeling.log",)
-test_core:
-python -m pytest -s -v ./tests/ --ignore=./tests/test_examples.py --ignore=./tests/deepspeed --ignore=./tests/test_big_modeling.py \
---ignore=./tests/fsdp $(if $(IS_GITHUB_CI),--report-log 'core.log',)
 test_deepspeed:
-python -m pytest -s -v ./tests/deepspeed $(if $(IS_GITHUB_CI),--report-log 'deepspeed.log',)
+python -m pytest -s -v ./tests/deepspeed $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_deepspeed.log",)
 test_fsdp:
-python -m pytest -s -v ./tests/fsdp $(if $(IS_GITHUB_CI),--report-log 'fsdp.log',)
+python -m pytest -s -v ./tests/fsdp $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_fsdp.log",)
+test_tp:
+python -m pytest -s -v ./tests/tp $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_tp.log",)
+# Since the new version of pytest will *change* how things are collected, we need `deepspeed` to
+# run after test_core and test_cli
+test:
+$(MAKE) test_core
+$(MAKE) test_cli
+$(MAKE) test_big_modeling
+$(MAKE) test_deepspeed
+$(MAKE) test_fsdp
+$(MAKE) test_tp
 test_examples:
-python -m pytest -s -v ./tests/test_examples.py $(if $(IS_GITHUB_CI),--report-log 'examples.log',)
+python -m pytest -s -v ./tests/test_examples.py $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_examples.log",)
 # Broken down example tests for the CI runners
 test_integrations:
-python -m pytest -s -v ./tests/deepspeed ./tests/fsdp $(if $(IS_GITHUB_CI),--report-log 'integrations.log',)
+python -m pytest -s -v ./tests/fsdp ./tests/tp ./tests/deepspeed $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_integrations.log",)
 test_example_differences:
-python -m pytest -s -v ./tests/test_examples.py::ExampleDifferenceTests $(if $(IS_GITHUB_CI),--report-log 'example_diff.log',)
+python -m pytest -s -v ./tests/test_examples.py::ExampleDifferenceTests $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_example_diff.log",)
 test_checkpoint_epoch:
-python -m pytest -s -v ./tests/test_examples.py::FeatureExamplesTests -k "by_epoch" $(if $(IS_GITHUB_CI),--report-log 'checkpoint_epoch.log',)
+python -m pytest -s -v ./tests/test_examples.py::FeatureExamplesTests -k "by_epoch" $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_checkpoint_epoch.log",)
 test_checkpoint_step:
-python -m pytest -s -v ./tests/test_examples.py::FeatureExamplesTests -k "by_step" $(if $(IS_GITHUB_CI),--report-log 'checkpoint_step.log',)
+python -m pytest -s -v ./tests/test_examples.py::FeatureExamplesTests -k "by_step" $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_checkpoint_step.log",)
 # Same as test but used to install only the base dependencies
 test_prod:
 $(MAKE) test_core
 test_rest:
-python -m pytest -s -v ./tests/test_examples.py::FeatureExamplesTests -k "not by_step and not by_epoch" $(if $(IS_GITHUB_CI),--report-log 'rest.log',)
+python -m pytest -s -v ./tests/test_examples.py::FeatureExamplesTests -k "not by_step and not by_epoch" $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_rest.log",)
+# For developers to prepare a release
+prepare_release:
+rm -rf dist build
+python setup.py bdist_wheel sdist
+# Make sure this is ran in a fresh venv of some form
+install_test_release:
+pip uninstall accelerate -y
+pip install -i https://testpypi.python.org/pypi --extra-index-url https://pypi.org/simple accelerate$(if $(version),==$(version),)
+# Run as `make target=testpypi upload_release`
+upload_release:
+@if [ "$(target)" != "testpypi" ] && [ "$(target)" != "pypi" ]; then \
+echo "Error: target must be either 'testpypi' or 'pypi'"; \
+exit 1; \
+fi
+twine upload dist/* -r $(target)


@@ -16,28 +16,18 @@ limitations under the License.
 <p align="center">
 <br>
-<img src="docs/source/imgs/accelerate_logo.png" width="400"/>
+<img src="https://raw.githubusercontent.com/huggingface/accelerate/main/docs/source/imgs/accelerate_logo.png" width="400"/>
 <br>
 <p>
 <p align="center">
-<!-- Uncomment when CircleCI is setup
-<a href="https://circleci.com/gh/huggingface/accelerate">
-<img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/master">
-</a>
+<!-- Uncomment when CircleCI is set up
+<a href="https://circleci.com/gh/huggingface/accelerate"><img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/master"></a>
 -->
-<a href="https://github.com/huggingface/accelerate/blob/main/LICENSE">
-<img alt="License" src="https://img.shields.io/github/license/huggingface/accelerate.svg?color=blue">
-</a>
-<a href="https://huggingface.co/docs/accelerate/index.html">
-<img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/accelerate/index.html.svg?down_color=red&down_message=offline&up_message=online">
-</a>
-<a href="https://github.com/huggingface/accelerate/releases">
-<img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/accelerate.svg">
-</a>
-<a href="https://github.com/huggingface/accelerate/blob/main/CODE_OF_CONDUCT.md">
-<img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
-</a>
+<a href="https://github.com/huggingface/accelerate/blob/main/LICENSE"><img alt="License" src="https://img.shields.io/github/license/huggingface/accelerate.svg?color=blue"></a>
+<a href="https://huggingface.co/docs/accelerate/index.html"><img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/accelerate/index.html.svg?down_color=red&down_message=offline&up_message=online"></a>
+<a href="https://github.com/huggingface/accelerate/releases"><img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/accelerate.svg"></a>
+<a href="https://github.com/huggingface/accelerate/blob/main/CODE_OF_CONDUCT.md"><img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"></a>
 </p>
 <h3 align="center">
@@ -91,7 +81,7 @@ Here is an example:
 optimizer.step()
 ```
-As you can see in this example, by adding 5-lines to any standard PyTorch training script you can now run on any kind of single or distributed node setting (single CPU, single GPU, multi-GPUs and TPUs) as well as with or without mixed precision (fp16).
+As you can see in this example, by adding 5-lines to any standard PyTorch training script you can now run on any kind of single or distributed node setting (single CPU, single GPU, multi-GPUs and TPUs) as well as with or without mixed precision (fp8, fp16, bf16).
 In particular, the same code can then be run without modification on your local machine for debugging or your training environment.
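For reference, the handful of added lines the README describes boil down to creating an `Accelerator`, calling `prepare`, and routing the backward pass through it. A minimal, self-contained sketch with a toy model and dataset (illustrative only, not the repository's `nlp_example.py`):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

from accelerate import Accelerator

accelerator = Accelerator()  # 1. create the Accelerator

model = torch.nn.Linear(16, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
dataset = TensorDataset(torch.randn(64, 16), torch.randint(0, 2, (64,)))
dataloader = DataLoader(dataset, batch_size=8, shuffle=True)

# 2. let Accelerate place everything on the right device(s)
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for inputs, targets in dataloader:
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(inputs), targets)
    accelerator.backward(loss)  # 3. backward goes through the accelerator
    optimizer.step()
```

The same file runs unchanged with `python` on a single machine or with `accelerate launch` in a distributed setting.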
@@ -132,11 +122,11 @@ In particular, the same code can then be run without modification on your local
 optimizer.step()
 ```
-Want to learn more? Check out the [documentation](https://huggingface.co/docs/accelerate) or have look at our [examples](https://github.com/huggingface/accelerate/tree/main/examples).
+Want to learn more? Check out the [documentation](https://huggingface.co/docs/accelerate) or have a look at our [examples](https://github.com/huggingface/accelerate/tree/main/examples).
 ## Launching script
-🤗 Accelerate also provides an optional CLI tool that allows you to quickly configure and test your training environment before launching the scripts. No need to remember how to use `torch.distributed.launch` or to write a specific launcher for TPU training!
+🤗 Accelerate also provides an optional CLI tool that allows you to quickly configure and test your training environment before launching the scripts. No need to remember how to use `torch.distributed.run` or to write a specific launcher for TPU training!
 On your machine(s) just run:
 ```bash
@@ -155,28 +145,48 @@ For instance, here is how you would run the GLUE example on the MRPC task (from
 accelerate launch examples/nlp_example.py
 ```
-This CLI tool is **optional**, and you can still use `python my_script.py` or `python -m torch.distributed.launch my_script.py` at your convenance.
+This CLI tool is **optional**, and you can still use `python my_script.py` or `python -m torchrun my_script.py` at your convenience.
+You can also directly pass in the arguments you would to `torchrun` as arguments to `accelerate launch` if you wish to not run` accelerate config`.
+For example, here is how to launch on two GPUs:
+```bash
+accelerate launch --multi_gpu --num_processes 2 examples/nlp_example.py
+```
+To learn more, check the CLI documentation available [here](https://huggingface.co/docs/accelerate/package_reference/cli).
+Or view the configuration zoo [here](https://github.com/huggingface/accelerate/blob/main/examples/config_yaml_templates/)
 ## Launching multi-CPU run using MPI
 🤗 Here is another way to launch multi-CPU run using MPI. You can learn how to install Open MPI on [this page](https://www.open-mpi.org/faq/?category=building#easy-build). You can use Intel MPI or MVAPICH as well.
 Once you have MPI setup on your cluster, just run:
+```bash
+accelerate config
+```
+Answer the questions that are asked, selecting to run using multi-CPU, and answer "yes" when asked if you want accelerate to launch mpirun.
+Then, use `accelerate launch` with your script like:
+```bash
+accelerate launch examples/nlp_example.py
+```
+Alternatively, you can use mpirun directly, without using the CLI like:
 ```bash
 mpirun -np 2 python examples/nlp_example.py
 ```
 ## Launching training using DeepSpeed
-🤗 Accelerate supports training on single/multiple GPUs using DeepSpeed. To use it, you don't need to change anything in your training code; you can set everything using just `accelerate config`. However, if you desire to tweak your DeepSpeed related args from your python script, we provide you the `DeepSpeedPlugin`.
+🤗 Accelerate supports training on single/multiple GPUs using DeepSpeed. To use it, you don't need to change anything in your training code; you can set everything using just `accelerate config`. However, if you desire to tweak your DeepSpeed related args from your Python script, we provide you the `DeepSpeedPlugin`.
 ```python
-from accelerator import Accelerator, DeepSpeedPlugin
+from accelerate import Accelerator, DeepSpeedPlugin
-# deepspeed needs to know your gradient accumulation steps before hand, so don't forget to pass it
+# deepspeed needs to know your gradient accumulation steps beforehand, so don't forget to pass it
 # Remember you still need to do gradient accumulation by yourself, just like you would have done without deepspeed
 deepspeed_plugin = DeepSpeedPlugin(zero_stage=2, gradient_accumulation_steps=2)
-accelerator = Accelerator(fp16=True, deepspeed_plugin=deepspeed_plugin)
+accelerator = Accelerator(mixed_precision='fp16', deepspeed_plugin=deepspeed_plugin)
 # How to save your 🤗 Transformer?
 accelerator.wait_for_everyone()
@@ -200,7 +210,7 @@ An example can be found in [this notebook](https://github.com/huggingface/notebo
 ## Why should I use 🤗 Accelerate?
-You should use 🤗 Accelerate when you want to easily run your training scripts in a distributed environment without having to renounce full control over your training loop. This is not a high-level framework above PyTorch, just a thin wrapper so you don't have to learn a new library, In fact the whole API of 🤗 Accelerate is in one class, the `Accelerator` object.
+You should use 🤗 Accelerate when you want to easily run your training scripts in a distributed environment without having to renounce full control over your training loop. This is not a high-level framework above PyTorch, just a thin wrapper so you don't have to learn a new library. In fact, the whole API of 🤗 Accelerate is in one class, the `Accelerator` object.
 ## Why shouldn't I use 🤗 Accelerate?
@@ -208,18 +218,25 @@ You shouldn't use 🤗 Accelerate if you don't want to write a training loop you
 ## Frameworks using 🤗 Accelerate
-If you like the simplicity of 🤗 Accelerate but would prefer a higher-level abstraction around your training loop, some frameworks that are built on top of 🤗 Accelerate are listed below:
+If you like the simplicity of 🤗 Accelerate but would prefer a higher-level abstraction around its capabilities, some frameworks and libraries that are built on top of 🤗 Accelerate are listed below:
+* [Amphion](https://github.com/open-mmlab/Amphion) is a toolkit for Audio, Music, and Speech Generation. Its purpose is to support reproducible research and help junior researchers and engineers get started in the field of audio, music, and speech generation research and development.
 * [Animus](https://github.com/Scitator/animus) is a minimalistic framework to run machine learning experiments. Animus highlights common "breakpoints" in ML experiments and provides a unified interface for them within [IExperiment](https://github.com/Scitator/animus/blob/main/animus/core.py#L76).
-* [Catalyst](https://github.com/catalyst-team/catalyst#getting-started) is a PyTorch framework for Deep Learning Research and Development. It focuses on reproducibility, rapid experimentation, and codebase reuse so you can create something new rather than write yet another train loop. Catalyst provides a [Runner](https://catalyst-team.github.io/catalyst/api/core.html#runner) to connect all parts of the experiment: hardware backend, data transformations, model train, and inference logic.
+* [Catalyst](https://github.com/catalyst-team/catalyst#getting-started) is a PyTorch framework for Deep Learning Research and Development. It focuses on reproducibility, rapid experimentation, and codebase reuse so you can create something new rather than write yet another train loop. Catalyst provides a [Runner](https://catalyst-team.github.io/catalyst/api/core.html#runner) to connect all parts of the experiment: hardware backend, data transformations, model training, and inference logic.
 * [fastai](https://github.com/fastai/fastai#installing) is a PyTorch framework for Deep Learning that simplifies training fast and accurate neural nets using modern best practices. fastai provides a [Learner](https://docs.fast.ai/learner.html#Learner) to handle the training, fine-tuning, and inference of deep learning algorithms.
+* [Finetuner](https://github.com/jina-ai/finetuner) is a service that enables models to create higher-quality embeddings for semantic search, visual similarity search, cross-modal text<->image search, recommendation systems, clustering, duplication detection, anomaly detection, or other uses.
+* [InvokeAI](https://github.com/invoke-ai/InvokeAI) is a creative engine for Stable Diffusion models, offering industry-leading WebUI, terminal usage support, and serves as the foundation for many commercial products.
 * [Kornia](https://kornia.readthedocs.io/en/latest/get-started/introduction.html) is a differentiable library that allows classical computer vision to be integrated into deep learning models. Kornia provides a [Trainer](https://kornia.readthedocs.io/en/latest/x.html#kornia.x.Trainer) with the specific purpose to train and fine-tune the supported deep learning algorithms within the library.
-* [pytorch-accelerated](https://github.com/Chris-hughes10/pytorch-accelerated) is a lightweight training library, with a streamlined feature set centred around a general-purpose [Trainer](https://pytorch-accelerated.readthedocs.io/en/latest/trainer.html), that places a huge emphasis on simplicity and transparency; enabling users to understand exactly what is going on under the hood, but without having to write and maintain the boilerplate themselves!
+* [Open Assistant](https://projects.laion.ai/Open-Assistant/) is a chat-based assistant that understands tasks, can interact with their party systems, and retrieve information dynamically to do so.
+* [pytorch-accelerated](https://github.com/Chris-hughes10/pytorch-accelerated) is a lightweight training library, with a streamlined feature set centered around a general-purpose [Trainer](https://pytorch-accelerated.readthedocs.io/en/latest/trainer.html), that places a huge emphasis on simplicity and transparency; enabling users to understand exactly what is going on under the hood, but without having to write and maintain the boilerplate themselves!
+* [Stable Diffusion web UI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) is an open-source browser-based easy-to-use interface based on the Gradio library for Stable Diffusion.
+* [torchkeras](https://github.com/lyhue1991/torchkeras) is a simple tool for training pytorch model just in a keras style, a dynamic and beautiful plot is provided in notebook to monitor your loss or metric.
+* [transformers](https://github.com/huggingface/transformers) as a tool for helping train state-of-the-art machine learning models in PyTorch, Tensorflow, and JAX. (Accelerate is the backend for the PyTorch side).
 ## Installation
-This repository is tested on Python 3.6+ and PyTorch 1.4.0+
+This repository is tested on Python 3.8+ and PyTorch 1.10.0+
 You should install 🤗 Accelerate in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're unfamiliar with Python virtual environments, check out the [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
@@ -240,9 +257,11 @@ pip install accelerate
 - multi-GPU on one node (machine)
 - multi-GPU on several nodes (machines)
 - TPU
-- FP16 with native AMP (apex on the roadmap)
+- FP16/BFloat16 mixed precision
+- FP8 mixed precision with [Transformer Engine](https://github.com/NVIDIA/TransformerEngine) or [MS-AMP](https://github.com/Azure/MS-AMP/)
 - DeepSpeed support (Experimental)
 - PyTorch Fully Sharded Data Parallel (FSDP) support (Experimental)
+- Megatron-LM support (Experimental)
 ## Citing 🤗 Accelerate
@@ -251,7 +270,7 @@ If you use 🤗 Accelerate in your publication, please cite it by using the foll
 ```bibtex
 @Misc{accelerate,
 title = {Accelerate: Training and inference at scale made simple, efficient and adaptable.},
-author = {Sylvain Gugger, Lysandre Debut, Thomas Wolf, Philipp Schmid, Zachary Mueller, Sourab Mangrulkar},
+author = {Sylvain Gugger and Lysandre Debut and Thomas Wolf and Philipp Schmid and Zachary Mueller and Sourab Mangrulkar and Marc Sun and Benjamin Bossan},
 howpublished = {\url{https://github.com/huggingface/accelerate}},
 year = {2022}
 }


@@ -1,46 +1,5 @@
-# Big model inference benchmarks
-Running inference with Accelerate on big models.
-## Setup
-These benchmarks use the `transformers` library:
-```bash
-pip install transformers
-```
-To reproduce or test a new setup, run
-```py
-python inference_acc.py model_name
-```
-This script supports `gpt-j-6b`, `gpt-neox`, `opt` (30B version) and `T0pp` out of the box, but you can specify any valid checkpoint for `model_name`.
-To force a different `torch_dtype` than the one in the config: `--torch_dtype xxx`.
-If you get an error linked to disk offload, you need to add the option `--disk-offload`
-## Results
-On a setup with two Titan RTXs (24GB of RAM) and 32GB of RAM, we get the following benchmarks (T0pp does not run in float16, which is why it's not included).
-| Model | Model load time | Generation time | dtype | GPU 0 use | GPU 1 use | CPU use | Disk offload |
-|:-----:|:---------------:|:---------------:|:-----:|:---------:|:---------:|:-------:|:------------:|
-| GPT-J-6B | 8.7s | 0.05s per token | float16 | 11.7GB | 0GB | 0GB | no |
-| GPT-J-6B | 12.4s | 0.06s per token | float32 | 21.9GB | 1.5GB | 0GB | no |
-| GPT-Neo-X-20B | 30.9s | 0.08s per token | float16 | 21.5GB | 18GB | 0GB | no |
-| GPT-Neo-X-20B | 78.2s | 10.72s per token | float32 | 20.3GB | 22.7 GB | 24.4GB | yes |
-| T0pp (11B) | 29.4s | 0.05s per token | float32 | 21.1GB | 21.3GB | 0GB | no |
-| OPT-30B | 34.5s | 2.37s per token | float16 | 20.7GB | 22.3GB | 14.1GB | no |
-| OPT-30B | 112.3s | 33.9s per token | float32 | 20.2GB | 21.2GB | 23.5GB | yes |
-Note on the results:
-- using two GPUs instead of one does not slow down generation
-- using CPU offload slows down a bit (see OPT-30b)
-- using disk offload slows down a lot (need to implement prefetching)
-You will also note that Accelerate does not use anymore GPU and CPU RAM than necessary:
-- peak GPU memory is exactly the size of the model put on a given GPU
-- peak CPU memory is either the size of the biggest checkpoint shard or the part of the model offloaded on CPU, whichever is bigger.
+# Benchmarks
+The folders below contain suites to test various functionalities in Accelerate.
+See their relevant README.md's for more information.


@@ -0,0 +1,46 @@
# Big model inference benchmarks
Running inference with Accelerate on big models.
## Setup
These benchmarks use the `transformers` library:
```bash
pip install transformers
```
To reproduce or test a new setup, run
```py
python big_model_inference.py model_name
```
This script supports `gpt-j-6b`, `gpt-neox`, `opt` (30B version) and `T0pp` out of the box, but you can specify any valid checkpoint for `model_name`.
To force a different `torch_dtype` than the one in the config: `--torch_dtype xxx`.
If you get an error linked to disk offload, you need to add the option `--disk-offload`
## Results
On a setup with two Titan RTXs (24GB of RAM) and 32GB of RAM, we get the following benchmarks (T0pp does not run in float16, which is why it's not included).
| Model | Model load time | Generation time | dtype | GPU 0 use | GPU 1 use | CPU use | Disk offload |
|:-----:|:---------------:|:---------------:|:-----:|:---------:|:---------:|:-------:|:------------:|
| GPT-J-6B | 8.7s | 0.05s per token | float16 | 11.7GB | 0GB | 0GB | no |
| GPT-J-6B | 12.4s | 0.06s per token | float32 | 21.9GB | 1.5GB | 0GB | no |
| GPT-Neo-X-20B | 30.9s | 0.08s per token | float16 | 21.5GB | 18GB | 0GB | no |
| GPT-Neo-X-20B | 78.2s | 10.72s per token | float32 | 20.3GB | 22.7 GB | 24.4GB | yes |
| T0pp (11B) | 29.4s | 0.05s per token | float32 | 21.1GB | 21.3GB | 0GB | no |
| OPT-30B | 34.5s | 2.37s per token | float16 | 20.7GB | 22.3GB | 14.1GB | no |
| OPT-30B | 112.3s | 33.9s per token | float32 | 20.2GB | 21.2GB | 23.5GB | yes |
Note on the results:
- using two GPUs instead of one does not slow down generation
- using CPU offload slows down a bit (see OPT-30b)
- using disk offload slows down a lot (need to implement prefetching)
You will also note that Accelerate does not use anymore GPU and CPU RAM than necessary:
- peak GPU memory is exactly the size of the model put on a given GPU
- peak CPU memory is either the size of the biggest checkpoint shard or the part of the model offloaded on CPU, whichever is bigger.
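The peak-GPU-memory numbers in the table come from PyTorch's CUDA memory statistics. A minimal sketch of how such a measurement is taken (the linear layer is a stand-in for the sharded model, and a CUDA device is assumed):

```python
import torch

if torch.cuda.is_available():
    torch.cuda.empty_cache()
    torch.cuda.reset_peak_memory_stats()

    # Stand-in workload; the benchmark uses the dispatched model placed on each GPU instead.
    model = torch.nn.Linear(4096, 4096).to("cuda", dtype=torch.float16)
    with torch.no_grad():
        model(torch.randn(8, 4096, device="cuda", dtype=torch.float16))

    print(f"Peak GPU memory: {torch.cuda.max_memory_allocated() / 2**20:.2f}MiB")
```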


@@ -16,12 +16,12 @@ import argparse
 import time
 import torch
 import transformers
-from accelerate.utils import compute_module_sizes
 from measures_util import end_measure, log_measures, start_measure
 from transformers import AutoConfig, AutoModelForCausalLM, AutoModelForSeq2SeqLM, AutoTokenizer
+from accelerate.utils import compute_module_sizes
 DEFAULT_MODELS = {
 "gpt-j-6b": {"is_causal": True, "model": "sgugger/sharded-gpt-j-6B", "tokenizer": "EleutherAI/gpt-j-6B"},


@@ -1,10 +1,28 @@
+# Copyright 2023 The HuggingFace Team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
 import gc
 import threading
 import time
+import psutil
 import torch
-import psutil
+from accelerate.test_utils.testing import get_backend
+torch_device_type, _, _ = get_backend()
+torch_accelerator_module = getattr(torch, torch_device_type, torch.cuda)
 class PeakCPUMemory:
@@ -42,16 +60,16 @@ def start_measure():
 measures = {"time": time.time()}
 gc.collect()
-torch.cuda.empty_cache()
+torch_accelerator_module.empty_cache()
 # CPU mem
 measures["cpu"] = psutil.Process().memory_info().rss
 cpu_peak_tracker.start()
 # GPU mem
-for i in range(torch.cuda.device_count()):
+for i in range(torch_accelerator_module.device_count()):
-measures[str(i)] = torch.cuda.memory_allocated(i)
+measures[str(i)] = torch_accelerator_module.memory_allocated(i)
-torch.cuda.reset_peak_memory_stats()
+torch_accelerator_module.reset_peak_memory_stats()
 return measures
@@ -61,16 +79,16 @@ def end_measure(start_measures):
 measures = {"time": time.time() - start_measures["time"]}
 gc.collect()
-torch.cuda.empty_cache()
+torch_accelerator_module.empty_cache()
 # CPU mem
 measures["cpu"] = (psutil.Process().memory_info().rss - start_measures["cpu"]) / 2**20
 measures["cpu-peak"] = (cpu_peak_tracker.stop() - start_measures["cpu"]) / 2**20
 # GPU mem
-for i in range(torch.cuda.device_count()):
+for i in range(torch_accelerator_module.device_count()):
-measures[str(i)] = (torch.cuda.memory_allocated(i) - start_measures[str(i)]) / 2**20
+measures[str(i)] = (torch_accelerator_module.memory_allocated(i) - start_measures[str(i)]) / 2**20
-measures[f"{i}-peak"] = (torch.cuda.max_memory_allocated(i) - start_measures[str(i)]) / 2**20
+measures[f"{i}-peak"] = (torch_accelerator_module.max_memory_allocated(i) - start_measures[str(i)]) / 2**20
 return measures
@@ -78,9 +96,9 @@ def end_measure(start_measures):
 def log_measures(measures, description):
 print(f"{description}:")
 print(f"- Time: {measures['time']:.2f}s")
-for i in range(torch.cuda.device_count()):
+for i in range(torch_accelerator_module.device_count()):
-print(f"- GPU {i} allocated: {measures[str(i)]:.2f}MiB")
+print(f"- {torch_device_type} {i} allocated: {measures[str(i)]:.2f}MiB")
 peak = measures[f"{i}-peak"]
-print(f"- GPU {i} peak: {peak:.2f}MiB")
+print(f"- {torch_device_type} {i} peak: {peak:.2f}MiB")
 print(f"- CPU RAM allocated: {measures['cpu']:.2f}MiB")
 print(f"- CPU RAM peak: {measures['cpu-peak']:.2f}MiB")


@@ -0,0 +1,12 @@
FROM ghcr.io/azure/msamp
RUN pip install transformers evaluate datasets
RUN git clone https://github.com/huggingface/accelerate
RUN cd accelerate && \
pip install -e . && \
cd benchmarks/fp8
CMD ["bash"]


@@ -0,0 +1,123 @@
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This script tests to ensure that `accelerate` performs at the same level as raw `MS-AMP`.
This particular script verifies this for DDP training.
"""
import evaluate
import msamp
import torch
from fp8_utils import evaluate_model, get_training_utilities
from torch.nn.parallel import DistributedDataParallel as DDP
from accelerate import Accelerator
from accelerate.state import AcceleratorState
from accelerate.utils import FP8RecipeKwargs, get_grad_scaler, set_seed
MODEL_NAME = "bert-base-cased"
METRIC = evaluate.load("glue", "mrpc")
def train_baseline(opt_level="O2"):
set_seed(42)
scaler = get_grad_scaler()
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(MODEL_NAME)
accelerator = Accelerator()
device = accelerator.device
model, optimizer = msamp.initialize(model, optimizer, opt_level=opt_level)
model.to(device)
# Convert the model to DDP
device_ids, output_device = [accelerator.local_process_index], accelerator.local_process_index
model = DDP(model, device_ids=device_ids, output_device=output_device)
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.train()
for i, batch in enumerate(train_dataloader):
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
outputs = model(**batch)
loss = outputs.loss
scaler.scale(loss).backward()
optimizer.step()
optimizer.zero_grad()
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
def train_integration(opt_level="O2"):
kwargs_handlers = [FP8RecipeKwargs(backend="msamp", opt_level=opt_level)]
AcceleratorState()._reset_state(True)
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=kwargs_handlers)
set_seed(42)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
MODEL_NAME, accelerator=accelerator
)
model, optimizer = accelerator.prepare(model, optimizer)
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.train()
for i, batch in enumerate(train_dataloader):
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
outputs = model(**batch)
loss = outputs.loss
accelerator.backward(loss)
optimizer.step()
optimizer.zero_grad()
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
if __name__ == "__main__":
for opt_level in ["O1", "O2"]:
baseline_not_trained, baseline_trained = train_baseline(opt_level)
accelerator_not_trained, accelerator_trained = train_integration(opt_level)
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
f"Accuracy not the same for untrained baseline and accelerator using opt_level={opt_level}: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
)
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
f"F1 not the same for untrained baseline and accelerator using opt_level={opt_level}: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
)
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
f"Accuracy not the same for trained baseline and accelerator using opt_level={opt_level}: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
)
assert baseline_trained["f1"] == accelerator_trained["f1"], (
f"F1 not the same for trained baseline and accelerator using opt_level={opt_level}: {baseline_trained['f1']} == {accelerator_trained['f1']}"
)


@@ -0,0 +1,161 @@
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This script tests to ensure that `accelerate` performs at the same level as raw `MS-AMP`.
This particular script verifies this for DeepSpeed training.
NOTE: MS-AMP does *not* support ZeRO-3.
"""
# import msamp.deepspeed as msamp_deepspeed
import evaluate
import torch
from fp8_utils import evaluate_model, get_training_utilities
from msamp import deepspeed as msamp_deepspeed
from accelerate import Accelerator, DeepSpeedPlugin
from accelerate.state import AcceleratorState
from accelerate.utils import set_seed
MODEL_NAME = "bert-base-cased"
METRIC = evaluate.load("glue", "mrpc")
def train_baseline(zero_stage: int = 1, opt_level: str = "O1"):
set_seed(42)
accelerator = Accelerator()
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
MODEL_NAME, accelerator=accelerator
)
import numpy as np
config = {
"train_batch_size": 32,
"train_micro_batch_size_per_gpu": 16,
"gradient_accumulation_steps": 1,
"zero_optimization": {
"stage": zero_stage,
"offload_optimizer": {"device": "none", "nvme_path": None},
"offload_param": {"device": "none", "nvme_path": None},
},
"gradient_clipping": 1.0,
"steps_per_print": np.inf,
"bf16": {"enabled": True},
"fp16": {"enabled": False},
"zero_allow_untested_optimizer": True,
"msamp": {
"enabled": True,
"opt_level": opt_level,
},
}
(
model,
optimizer,
_,
_,
) = msamp_deepspeed.initialize(
model=model,
optimizer=optimizer,
config_params=config,
)
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.train()
for _ in range(2):
for batch in train_dataloader:
outputs = model(**batch)
loss = outputs.loss
model.backward(loss)
model.step()
for _ in range(accelerator.num_processes):
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.destroy()
torch.cuda.empty_cache()
AcceleratorState()._reset_state(True)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
def train_integration(zero_stage: int = 1, opt_level: str = "O1"):
set_seed(42)
deepspeed_plugin = DeepSpeedPlugin(
zero_stage=zero_stage,
enable_msamp=True,
msamp_opt_level=opt_level,
)
accelerator = Accelerator(mixed_precision="fp8", deepspeed_plugin=deepspeed_plugin)
accelerator.state.deepspeed_plugin.deepspeed_config["train_micro_batch_size_per_gpu"] = 16
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
MODEL_NAME, accelerator=accelerator
)
model, optimizer, lr_scheduler = accelerator.prepare(model, optimizer, lr_scheduler)
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.train()
for _ in range(2):
for batch in train_dataloader:
outputs = model(**batch)
loss = outputs.loss
accelerator.backward(loss)
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.destroy()
torch.cuda.empty_cache()
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
AcceleratorState()._reset_state(True)
return base_model_results, trained_model_results
if __name__ == "__main__":
for zero_stage in [1, 2]:
for opt_level in ["O1", "O2", "O3"]:
baseline_not_trained, baseline_trained = train_baseline(zero_stage, opt_level)
accelerator_not_trained, accelerator_trained = train_integration(zero_stage, opt_level)
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
f"ZERO stage {zero_stage}, opt_level={opt_level}:\nAccuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
)
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
f"ZERO stage {zero_stage}, opt_level={opt_level}:\nF1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
)
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
f"ZERO stage {zero_stage}, opt_level={opt_level}:\nAccuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
)
assert baseline_trained["f1"] == accelerator_trained["f1"], (
f"ZERO stage {zero_stage}, opt_level={opt_level}:\nF1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
)
torch.distributed.destroy_process_group()


@@ -0,0 +1,118 @@
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import torch
def get_dataloaders(model_name: str, batch_size: int = 16):
from datasets import load_dataset
from torch.utils.data import DataLoader
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)
datasets = load_dataset("glue", "mrpc")
def tokenize_function(examples):
# max_length=None => use the model max length (it's actually the default)
outputs = tokenizer(examples["sentence1"], examples["sentence2"], truncation=True, max_length=None)
return outputs
# Apply the method we just defined to all the examples in all the splits of the dataset
# starting with the main process first:
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
remove_columns=["idx", "sentence1", "sentence2"],
)
# We also rename the 'label' column to 'labels' which is the expected name for labels by the models of the
# transformers library
tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
def collate_fn(examples):
return tokenizer.pad(
examples,
padding="longest",
pad_to_multiple_of=16, # Specific for FP8
return_tensors="pt",
)
# Instantiate dataloaders.
train_dataloader = DataLoader(
tokenized_datasets["train"], shuffle=True, collate_fn=collate_fn, batch_size=batch_size, drop_last=True
)
eval_dataloader = DataLoader(
tokenized_datasets["validation"],
shuffle=False,
collate_fn=collate_fn,
batch_size=16,
drop_last=True,
)
return train_dataloader, eval_dataloader
def get_training_utilities(model_name: str, batch_size: int = 16, accelerator=None):
"""
Returns a tuple of:
- Model
- Optimizer
- Train dataloader (prepared)
- Eval dataloader (prepared)
- LR Scheduler
Suitable for training on the MRPC dataset
"""
from torch.optim import AdamW
from transformers import AutoModelForSequenceClassification, get_linear_schedule_with_warmup
from accelerate import Accelerator
if accelerator is None:
accelerator = Accelerator()
model = AutoModelForSequenceClassification.from_pretrained(model_name)
train_dataloader, eval_dataloader = get_dataloaders(model_name, batch_size)
optimizer = AdamW(model.parameters(), lr=0.0001)
lr_scheduler = get_linear_schedule_with_warmup(
optimizer=optimizer,
num_warmup_steps=100,
num_training_steps=len(train_dataloader) * 2,
)
train_dataloader, eval_dataloader = accelerator.prepare(train_dataloader, eval_dataloader)
return model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
def get_named_parameters(model):
"""
Same thing as `Accelerator.get_named_parameters` Returns a list of the named parameters of the model (extracted
from parallel)
"""
from accelerate.utils import extract_model_from_parallel
model = extract_model_from_parallel(model)
return {n: p for n, p in model.named_parameters()}
def evaluate_model(model, dataloader, metric, accelerator=None):
"Turns model to .eval(), runs dataloader, calculates metric, then turns eval back on"
model.eval()
for step, batch in enumerate(dataloader):
with torch.no_grad():
# W/ MS-AMP, we need to cast while evaluating
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
outputs = model(**batch)
predictions = outputs.logits.argmax(dim=-1)
references = batch["labels"]
if accelerator is not None and accelerator.num_processes > 1:
predictions, references = accelerator.gather_for_metrics((predictions, references))
metric.add_batch(predictions=predictions, references=references)
return metric.compute()


@@ -0,0 +1,118 @@
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This script tests to ensure that `accelerate` performs at the same level as raw `MS-AMP`.
This particular script verifies this for single GPU training.
"""
import evaluate
import msamp
import torch
from fp8_utils import evaluate_model, get_training_utilities
from accelerate import Accelerator
from accelerate.state import AcceleratorState
from accelerate.utils import FP8RecipeKwargs, get_grad_scaler, set_seed
MODEL_NAME = "bert-base-cased"
METRIC = evaluate.load("glue", "mrpc")
def train_baseline(opt_level="O2"):
set_seed(42)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(MODEL_NAME)
model, optimizer = msamp.initialize(model, optimizer, opt_level=opt_level)
model.to("cuda")
base_model_results = evaluate_model(model, eval_dataloader, METRIC)
model.train()
scaler = get_grad_scaler()
for batch in train_dataloader:
batch = batch.to("cuda")
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
outputs = model(**batch)
loss = outputs.loss
loss = scaler.scale(loss)
loss.backward()
optimizer.step()
optimizer.zero_grad()
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
def train_integration(opt_level="O2"):
kwargs_handlers = [FP8RecipeKwargs(backend="msamp", opt_level=opt_level)]
AcceleratorState()._reset_state(True)
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=kwargs_handlers)
set_seed(42)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
MODEL_NAME, accelerator=accelerator
)
model, optimizer, lr_scheduler = accelerator.prepare(model, optimizer, lr_scheduler)
base_model_results = evaluate_model(model, eval_dataloader, METRIC)
model.train()
for batch in train_dataloader:
outputs = model(**batch)
loss = outputs.loss
accelerator.backward(loss)
optimizer.step()
optimizer.zero_grad()
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
if __name__ == "__main__":
for opt_level in ["O1", "O2"]:
baseline_not_trained, baseline_trained = train_baseline(opt_level)
accelerator_not_trained, accelerator_trained = train_integration(opt_level)
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
)
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
)
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
)
assert baseline_trained["f1"] == accelerator_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
)

View File

@ -0,0 +1,12 @@
FROM nvcr.io/nvidia/pytorch:24.07-py3
RUN pip install transformers evaluate datasets
RUN git clone https://github.com/huggingface/accelerate.git
RUN cd accelerate && \
pip install -e . && \
cd benchmarks/fp8
CMD ["/bin/bash"]
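For reference, a minimal sketch of building and entering this image locally (the `accelerate-fp8-torchao` tag is just an illustrative name):
```bash
# Build the benchmark image from this Dockerfile
docker build -t accelerate-fp8-torchao .
# Open an interactive shell in the container with GPU access
docker run --gpus all -it accelerate-fp8-torchao /bin/bash
```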

View File

@ -0,0 +1,32 @@
# FP8 Benchmarks
Comparing and running [torchao](https://github.com/pytorch/ao/tree/main/torchao/float8) FP8 with accelerate
## Overview
This repo provides scripts that compare native `torchao` FP8 model training against `accelerate`'s own integration. Each training setup has its own script, covering the following:
* Single GPU training (`non_distributed.py`)
* Multi-GPU training via `DistributedDataParallel` (`ddp.py`)
* Fully Sharded Data Parallelism (`fsdp.py`)
* DeepSpeed ZeRO 1-3 (`deepspeed.py`)
To run them, it is recommended to use the provided Docker image (see the attached `Dockerfile`) rather than installing `torchao` manually.
## Running:
An official Docker image is available at `huggingface/accelerate:gpu-fp8-torchao-nightly`.
All scripts can be run with the core `accelerate launch` command; no prior `accelerate config` is needed.
For single GPU, run it via `python`:
```bash
python non_distributed.py
```
For the rest, run it via `accelerate launch`:
```bash
accelerate launch ddp.py # or fsdp.py, deepspeed.py
```
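When more than one GPU is used without a saved config, the process count can be passed explicitly; a minimal example (8 GPUs assumed):
```bash
accelerate launch --num_processes 8 fsdp.py
```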

View File

@ -0,0 +1,158 @@
# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This script tests to ensure that `accelerate` performs at the same level as raw `torchao`.
This particular script verifies this for DDP training.
"""
from functools import partial
import evaluate
import torch
from fp8_utils import get_training_utilities
from torch.nn.parallel import DistributedDataParallel as DDP
from torchao.float8 import convert_to_float8_training
from accelerate import Accelerator
from accelerate.state import AcceleratorState
from accelerate.utils import AORecipeKwargs, set_seed
MODEL_NAME = "bert-base-cased"
METRIC = evaluate.load("glue", "mrpc")
def evaluate_model(model, dataloader, metric, accelerator=None):
"Turns model to .eval(), runs dataloader, calculates metric, then turns eval back on"
model.eval()
for step, batch in enumerate(dataloader):
with torch.no_grad():
outputs = model(**batch)
predictions = outputs.logits.argmax(dim=-1)
references = batch["labels"]
if accelerator is not None and accelerator.num_processes > 1:
predictions, references = accelerator.gather_for_metrics((predictions, references))
metric.add_batch(predictions=predictions, references=references)
return metric.compute()
def filter_linear_layers(module, fqn, first_layer_name=None, last_layer_name=None):
if isinstance(module, torch.nn.Linear):
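# FP8 matmuls require both dimensions to be multiples of 16; layers that don't satisfy this keep their original precision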
if module.in_features % 16 != 0 or module.out_features % 16 != 0:
return False
# For stability reasons, we skip the first and last linear layers
# Otherwise can lead to the model not training or converging properly
if fqn in (first_layer_name, last_layer_name):
return False
return True
def train_baseline():
set_seed(42)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(MODEL_NAME)
first_linear = None
last_linear = None
for name, module in model.named_modules():
if isinstance(module, torch.nn.Linear):
if first_linear is None:
first_linear = name
last_linear = name
func = partial(filter_linear_layers, first_layer_name=first_linear, last_layer_name=last_linear)
accelerator = Accelerator()
device = accelerator.device
model.to(device)
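# Swap the Linear layers accepted by the filter for torchao float8 training layers; rejected layers keep their original dtype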
convert_to_float8_training(model, module_filter_fn=func)
# Convert the model to DDP
device_ids, output_device = [accelerator.local_process_index], accelerator.local_process_index
model = DDP(model, device_ids=device_ids, output_device=output_device)
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.train()
for batch in train_dataloader:
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
batch = batch.to(device)
outputs = model(**batch)
loss = outputs.loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
def train_integration():
AcceleratorState()._reset_state(True)
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=[AORecipeKwargs()])
set_seed(42)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
MODEL_NAME, accelerator=accelerator
)
model, optimizer = accelerator.prepare(model, optimizer)
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.train()
for batch in train_dataloader:
outputs = model(**batch)
loss = outputs.loss
accelerator.backward(loss)
optimizer.step()
optimizer.zero_grad()
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
if __name__ == "__main__":
baseline_not_trained, baseline_trained = train_baseline()
accelerator_not_trained, accelerator_trained = train_integration()
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
)
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
)
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
)
assert baseline_trained["f1"] == accelerator_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
)
torch.distributed.destroy_process_group()

View File

@ -0,0 +1,213 @@
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This script tests to ensure that `accelerate` performs at the same level as raw `torchao`.
This particular script verifies this for DeepSpeed training.
"""
from functools import partial
from unittest.mock import patch
import deepspeed
import evaluate
import torch
from fp8_utils import evaluate_model, get_training_utilities
from torchao.float8 import convert_to_float8_training
from transformers.integrations import HfDeepSpeedConfig
from accelerate import Accelerator, DeepSpeedPlugin
from accelerate.state import AcceleratorState
from accelerate.utils import AORecipeKwargs, set_seed
MODEL_NAME = "bert-base-cased"
METRIC = evaluate.load("glue", "mrpc")
def filter_linear_layers(module, fqn, first_layer_name=None, last_layer_name=None):
if isinstance(module, torch.nn.Linear):
if module.in_features % 16 != 0 or module.out_features % 16 != 0:
return False
# For stability reasons, we skip the first and last linear layers
# Otherwise can lead to the model not training or converging properly
if fqn in (first_layer_name, last_layer_name):
return False
return True
def train_baseline(zero_stage: int = 1):
set_seed(42)
# This forces transformers to think Zero-3 Init should be used
with patch("transformers.integrations.deepspeed.is_deepspeed_zero3_enabled") as mock:
mock.return_value = zero_stage == 3
config = HfDeepSpeedConfig(
{
"train_micro_batch_size_per_gpu": 16,
"gradient_accumulation_steps": 1,
"zero_optimization": {"stage": zero_stage},
}
)
plugin = DeepSpeedPlugin(hf_ds_config=config)
accelerator = Accelerator(deepspeed_plugin=plugin)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
MODEL_NAME, accelerator=accelerator
)
first_linear = None
last_linear = None
for name, module in model.named_modules():
if isinstance(module, torch.nn.Linear):
if first_linear is None:
first_linear = name
last_linear = name
func = partial(filter_linear_layers, first_layer_name=first_linear, last_layer_name=last_linear)
convert_to_float8_training(model, module_filter_fn=func)
import numpy as np
config = {
"train_batch_size": 32,
"train_micro_batch_size_per_gpu": 16,
"gradient_accumulation_steps": 1,
"zero_optimization": {
"stage": zero_stage,
"offload_optimizer": {"device": "none", "nvme_path": None},
"offload_param": {"device": "none", "nvme_path": None},
"stage3_gather_16bit_weights_on_model_save": False,
},
"gradient_clipping": 1.0,
"steps_per_print": np.inf,
"bf16": {"enabled": True},
"fp16": {"enabled": False},
"zero_allow_untested_optimizer": True,
}
(
model,
optimizer,
_,
lr_scheduler,
) = deepspeed.initialize(
model=model,
optimizer=optimizer,
lr_scheduler=lr_scheduler,
config_params=config,
)
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.train()
model_outputs = []
data = []
for batch in train_dataloader:
outputs = model(**batch)
data.append(batch.to("cpu"))
model_outputs.append(outputs.logits.to("cpu"))
loss = outputs.loss
model.backward(loss)
model.step()
for _ in range(accelerator.num_processes):
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.destroy()
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
del config
return base_model_results, trained_model_results, model_outputs, data
def train_integration(zero_stage: int = 1):
set_seed(42)
AcceleratorState()._reset_state(True)
config = HfDeepSpeedConfig(
{
"train_micro_batch_size_per_gpu": 16,
"gradient_accumulation_steps": 1,
"zero_optimization": {"stage": zero_stage},
}
)
deepspeed_plugin = DeepSpeedPlugin(
hf_ds_config=config,
)
# This forces transformers to think Zero-3 Init should be used
with patch("transformers.integrations.deepspeed.is_deepspeed_zero3_enabled") as mock:
mock.return_value = zero_stage == 3
accelerator = Accelerator(
mixed_precision="fp8", kwargs_handlers=[AORecipeKwargs()], deepspeed_plugin=deepspeed_plugin
)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
MODEL_NAME, accelerator=accelerator
)
model, optimizer, lr_scheduler, train_dataloader, eval_dataloader = accelerator.prepare(
model, optimizer, lr_scheduler, train_dataloader, eval_dataloader
)
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.train()
model_outputs = []
data = []
for batch in train_dataloader:
outputs = model(**batch)
data.append(batch.to("cpu"))
model_outputs.append(outputs.logits.to("cpu"))
loss = outputs.loss
accelerator.backward(loss)
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.destroy()
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
del config
return base_model_results, trained_model_results, model_outputs, data
if __name__ == "__main__":
for zero_stage in [1, 2, 3]:
baseline_not_trained, baseline_trained, baseline_outputs, baseline_data = train_baseline(zero_stage)
accelerator_not_trained, accelerator_trained, accelerator_outputs, accelerator_data = train_integration(
zero_stage
)
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
f"ZERO stage {zero_stage}: Accuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
)
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
f"ZERO stage {zero_stage}: F1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
)
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
f"ZERO stage {zero_stage}: Accuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
)
assert baseline_trained["f1"] == accelerator_trained["f1"], (
f"ZERO stage {zero_stage}: F1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
)
AcceleratorState()._reset_state(True)
torch.distributed.destroy_process_group()

View File

@ -0,0 +1,116 @@
# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import torch
def get_dataloaders(model_name: str, batch_size: int = 16):
from datasets import load_dataset
from torch.utils.data import DataLoader
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)
datasets = load_dataset("glue", "mrpc")
def tokenize_function(examples):
# max_length=None => use the model max length (it's actually the default)
outputs = tokenizer(examples["sentence1"], examples["sentence2"], truncation=True, max_length=None)
return outputs
# Apply the method we just defined to all the examples in all the splits of the dataset
# starting with the main process first:
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
remove_columns=["idx", "sentence1", "sentence2"],
)
# We also rename the 'label' column to 'labels' which is the expected name for labels by the models of the
# transformers library
tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
def collate_fn(examples):
return tokenizer.pad(
examples,
padding="longest",
pad_to_multiple_of=16, # Specific for FP8
return_tensors="pt",
)
# Instantiate dataloaders.
train_dataloader = DataLoader(
tokenized_datasets["train"], shuffle=True, collate_fn=collate_fn, batch_size=batch_size, drop_last=True
)
eval_dataloader = DataLoader(
tokenized_datasets["validation"],
shuffle=False,
collate_fn=collate_fn,
batch_size=16,
drop_last=True,
)
return train_dataloader, eval_dataloader
def get_training_utilities(model_name: str, batch_size: int = 16, accelerator=None, prepare=True):
"""
Returns a tuple of:
- Model
- Optimizer
- Train dataloader (prepared)
- Eval dataloader (prepared)
- LR Scheduler
Suitable for training on the MRPC dataset
"""
from torch.optim import AdamW
from transformers import AutoModelForSequenceClassification, get_linear_schedule_with_warmup
from accelerate import Accelerator
if accelerator is None:
accelerator = Accelerator()
model = AutoModelForSequenceClassification.from_pretrained(model_name)
train_dataloader, eval_dataloader = get_dataloaders(model_name, batch_size)
optimizer = AdamW(model.parameters(), lr=0.0001)
lr_scheduler = get_linear_schedule_with_warmup(
optimizer=optimizer,
num_warmup_steps=100,
num_training_steps=len(train_dataloader) * 2,
)
train_dataloader, eval_dataloader = accelerator.prepare(train_dataloader, eval_dataloader)
return model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
def get_named_parameters(model):
"""
Same as `Accelerator.get_named_parameters`. Returns a dict of the model's named parameters (after extracting
it from any parallel wrapper)
"""
from accelerate.utils import extract_model_from_parallel
model = extract_model_from_parallel(model)
return {n: p for n, p in model.named_parameters()}
def evaluate_model(model, dataloader, metric, accelerator=None):
"Turns model to .eval(), runs dataloader, calculates metric, then turns eval back on"
model.eval()
for step, batch in enumerate(dataloader):
with torch.no_grad():
outputs = model(**batch)
predictions = outputs.logits.argmax(dim=-1)
references = batch["labels"]
if accelerator is not None and accelerator.num_processes > 1:
predictions, references = accelerator.gather_for_metrics((predictions, references))
metric.add_batch(predictions=predictions, references=references)
return metric.compute()

View File

@ -0,0 +1,173 @@
# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This script tests to ensure that `accelerate` performs at the same level as raw `torchao`.
This particular script verifies this for FSDP training.
"""
from functools import partial
import evaluate
import torch
from fp8_utils import get_training_utilities
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import MixedPrecision
from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy
from torchao.float8 import convert_to_float8_training
from transformers.models.bert import BertLayer
from accelerate import Accelerator
from accelerate import FullyShardedDataParallelPlugin as FSDPPlugin
from accelerate.state import AcceleratorState
from accelerate.utils import AORecipeKwargs, set_seed
MODEL_NAME = "bert-base-cased"
METRIC = evaluate.load("glue", "mrpc")
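# Wrap each BertLayer in its own FSDP unit so sharding happens per transformer block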
FSDP_WRAP_POLICY = partial(transformer_auto_wrap_policy, transformer_layer_cls={BertLayer})
def filter_linear_layers(module, fqn, first_layer_name=None, last_layer_name=None):
if isinstance(module, torch.nn.Linear):
if module.in_features % 16 != 0 or module.out_features % 16 != 0:
return False
# For stability reasons, we skip the first and last linear layers
# Otherwise can lead to the model not training or converging properly
if fqn in (first_layer_name, last_layer_name):
return False
return True
def evaluate_model(model, dataloader, metric, accelerator=None):
"Turns model to .eval(), runs dataloader, calculates metric, then turns eval back on"
model.eval()
for step, batch in enumerate(dataloader):
with torch.no_grad():
outputs = model(**batch)
predictions = outputs.logits.argmax(dim=-1)
references = batch["labels"]
if accelerator is not None and accelerator.num_processes > 1:
predictions, references = accelerator.gather_for_metrics((predictions, references))
metric.add_batch(predictions=predictions, references=references)
return metric.compute()
def train_baseline():
set_seed(42)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(MODEL_NAME)
first_linear = None
last_linear = None
for name, module in model.named_modules():
if isinstance(module, torch.nn.Linear):
if first_linear is None:
first_linear = name
last_linear = name
func = partial(filter_linear_layers, first_layer_name=first_linear, last_layer_name=last_linear)
accelerator = Accelerator()
device = accelerator.device
model.to(device)
convert_to_float8_training(model, module_filter_fn=func)
# Convert the model to FSDP
model = FSDP(
model,
use_orig_params=True,
mixed_precision=MixedPrecision(param_dtype=torch.bfloat16, reduce_dtype=torch.float32),
auto_wrap_policy=FSDP_WRAP_POLICY,
)
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.train()
for batch in train_dataloader:
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
batch = batch.to(device)
outputs = model(**batch)
loss = outputs.loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
def train_integration():
AcceleratorState()._reset_state(True)
fsdp_plugin = FSDPPlugin(
auto_wrap_policy=FSDP_WRAP_POLICY,
use_orig_params=True,
mixed_precision_policy=MixedPrecision(param_dtype=torch.bfloat16, reduce_dtype=torch.float32),
)
accelerator = Accelerator(mixed_precision="fp8", fsdp_plugin=fsdp_plugin, kwargs_handlers=[AORecipeKwargs()])
set_seed(42)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
MODEL_NAME, accelerator=accelerator
)
model, optimizer = accelerator.prepare(model, optimizer)
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.train()
for batch in train_dataloader:
outputs = model(**batch)
loss = outputs.loss
accelerator.backward(loss)
optimizer.step()
optimizer.zero_grad()
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
if __name__ == "__main__":
baseline_not_trained, baseline_trained = train_baseline()
accelerator_not_trained, accelerator_trained = train_integration()
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
)
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
)
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
)
assert baseline_trained["f1"] == accelerator_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
)
torch.distributed.destroy_process_group()

View File

@ -0,0 +1,145 @@
# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This script tests to ensure that `accelerate` performs at the same level as raw `torchao`.
This particular script verifies this for single GPU training.
"""
from functools import partial
import evaluate
import torch
from fp8_utils import get_training_utilities
from torchao.float8 import convert_to_float8_training
from accelerate import Accelerator
from accelerate.state import AcceleratorState
from accelerate.utils import AORecipeKwargs, set_seed
MODEL_NAME = "bert-base-cased"
METRIC = evaluate.load("glue", "mrpc")
def evaluate_model(model, dataloader, metric, accelerator=None):
"Turns model to .eval(), runs dataloader, calculates metric, then turns eval back on"
model.eval()
for step, batch in enumerate(dataloader):
with torch.no_grad():
outputs = model(**batch)
predictions = outputs.logits.argmax(dim=-1)
references = batch["labels"]
if accelerator is not None and accelerator.num_processes > 1:
predictions, references = accelerator.gather_for_metrics((predictions, references))
metric.add_batch(predictions=predictions, references=references)
return metric.compute()
def filter_linear_layers(module, fqn, first_layer_name=None, last_layer_name=None):
if isinstance(module, torch.nn.Linear):
if module.in_features % 16 != 0 or module.out_features % 16 != 0:
return False
# For stability reasons, we skip the first and last linear layers
# Otherwise can lead to the model not training or converging properly
if fqn in (first_layer_name, last_layer_name):
return False
return True
def train_baseline():
set_seed(42)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(MODEL_NAME)
first_linear = None
last_linear = None
for name, module in model.named_modules():
if isinstance(module, torch.nn.Linear):
if first_linear is None:
first_linear = name
last_linear = name
func = partial(filter_linear_layers, first_layer_name=first_linear, last_layer_name=last_linear)
model.to("cuda")
convert_to_float8_training(model, module_filter_fn=func)
base_model_results = evaluate_model(model, eval_dataloader, METRIC)
model.train()
for batch in train_dataloader:
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
outputs = model(**batch)
loss = outputs.loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
def train_integration():
set_seed(42)
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=[AORecipeKwargs()])
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
MODEL_NAME, accelerator=accelerator
)
model = accelerator.prepare(model)
base_model_results = evaluate_model(model, eval_dataloader, METRIC)
model.train()
for batch in train_dataloader:
outputs = model(**batch)
loss = outputs.loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
if __name__ == "__main__":
baseline_not_trained, baseline_trained = train_baseline()
AcceleratorState._reset_state(True)
accelerator_not_trained, accelerator_trained = train_integration()
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
)
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
)
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
)
assert baseline_trained["f1"] == accelerator_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
)

View File

@ -0,0 +1,15 @@
ARG BASE_YEAR=25
ARG BASE_MONTH=03
FROM nvcr.io/nvidia/pytorch:${BASE_YEAR}.${BASE_MONTH}-py3
RUN pip install transformers evaluate datasets
RUN git clone https://github.com/huggingface/accelerate.git
RUN cd accelerate && \
pip install -e .[deepspeed] && \
cd benchmarks/fp8
CMD ["/bin/bash"]
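A sketch of building this image with the base-image arguments made explicit (values mirror the defaults above; the tag is illustrative):
```bash
docker build --build-arg BASE_YEAR=25 --build-arg BASE_MONTH=03 -t accelerate-fp8-te .
```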

View File

@ -0,0 +1,32 @@
# FP8 Benchmarks
Comparing and running [TransformerEngine](https://github.com/NVIDIA/TransformerEngine) FP8 with accelerate
## Overview
This repo provides scripts that compare native TransformerEngine model training against `accelerate`'s own integration. Each training setup has its own script, covering the following:
* Single GPU training (`non_distributed.py`)
* Multi-GPU training via `DistributedDataParallel` (`ddp.py`)
* Fully Sharded Data Parallelism (`fsdp.py`)
* DeepSpeed ZeRO 1-3 (`deepspeed.py`)
To run them, it is recommended to use the provided Docker image (see the attached `Dockerfile`) rather than installing `TransformerEngine` manually.
## Running:
An official Docker image is available at `huggingface/accelerate:gpu-fp8-transformerengine-nightly`.
All scripts can be run with the core `accelerate launch` command; no prior `accelerate config` is needed.
For single GPU, run it via `python`:
```bash
python non_distributed.py
```
For the rest, run it via `accelerate launch`:
```bash
accelerate launch ddp.py # or fsdp.py, deepspeed.py
```
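Alternatively, the prebuilt image mentioned above can be pulled and used directly; a minimal sketch (GPU flags are illustrative):
```bash
docker pull huggingface/accelerate:gpu-fp8-transformerengine-nightly
docker run --gpus all -it huggingface/accelerate:gpu-fp8-transformerengine-nightly /bin/bash
```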

View File

@ -0,0 +1,144 @@
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This script tests to ensure that `accelerate` performs at the same level as raw `TransformerEngine`.
This particular script verifies this for DDP training.
"""
import evaluate
import torch
import transformer_engine.common.recipe as te_recipe
import transformer_engine.pytorch as te
from fp8_utils import evaluate_model, get_named_parameters, get_training_utilities
from torch.nn.parallel import DistributedDataParallel as DDP
from transformer_engine.common.recipe import DelayedScaling
from accelerate import Accelerator
from accelerate.state import AcceleratorState
from accelerate.utils import FP8RecipeKwargs, set_seed
from accelerate.utils.transformer_engine import convert_model
MODEL_NAME = "bert-base-cased"
METRIC = evaluate.load("glue", "mrpc")
def train_baseline():
set_seed(42)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(MODEL_NAME)
accelerator = Accelerator()
device = accelerator.device
model.to(device)
# Convert the model to TE
old_named_params = get_named_parameters(model)
with torch.no_grad():
convert_model(model)
FP8_RECIPE_KWARGS = {"fp8_format": te_recipe.Format.HYBRID, "amax_history_len": 32, "amax_compute_algo": "max"}
fp8_recipe = DelayedScaling(**FP8_RECIPE_KWARGS)
new_named_params = get_named_parameters(model)
# Convert the model to DDP
device_ids, output_device = [accelerator.local_process_index], accelerator.local_process_index
model = DDP(model, device_ids=device_ids, output_device=output_device)
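# convert_model replaced the Linear/LayerNorm modules with TE equivalents, creating new parameter objects,
# so remap the optimizer's param_groups to point at the TE parameters instead of the stale originals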
mapping = {p: new_named_params[n] for n, p in old_named_params.items()}
for param_group in optimizer.param_groups:
param_group["params"] = [mapping[p] for p in param_group["params"]]
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.train()
for _ in range(2):
for batch in train_dataloader:
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
batch = batch.to(device)
outputs = model(**batch)
loss = outputs.loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
def train_integration():
FP8_RECIPE_KWARGS = {"fp8_format": "HYBRID", "amax_history_len": 32, "amax_compute_algo": "max"}
kwargs_handlers = [FP8RecipeKwargs(backend="TE", **FP8_RECIPE_KWARGS)]
AcceleratorState()._reset_state(True)
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=kwargs_handlers)
set_seed(42)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
MODEL_NAME, accelerator=accelerator
)
model, optimizer = accelerator.prepare(model, optimizer)
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.train()
for _ in range(2):
for batch in train_dataloader:
outputs = model(**batch)
loss = outputs.loss
accelerator.backward(loss)
optimizer.step()
optimizer.zero_grad()
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
if __name__ == "__main__":
baseline_not_trained, baseline_trained = train_baseline()
accelerator_not_trained, accelerator_trained = train_integration()
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
)
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
)
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
)
assert baseline_trained["f1"] == accelerator_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
)
torch.distributed.destroy_process_group()

View File

@ -0,0 +1,191 @@
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This script tests to ensure that `accelerate` performs at the same level as raw `TransformerEngine`.
This particular script verifies this for DeepSpeed training.
"""
from unittest.mock import patch
import deepspeed
import evaluate
import torch
import transformer_engine.common.recipe as te_recipe
import transformer_engine.pytorch as te
from fp8_utils import evaluate_model, get_named_parameters, get_training_utilities
from transformer_engine.common.recipe import DelayedScaling
from accelerate import Accelerator, DeepSpeedPlugin
from accelerate.state import AcceleratorState
from accelerate.utils import FP8RecipeKwargs, set_seed
from accelerate.utils.transformer_engine import convert_model
MODEL_NAME = "bert-base-cased"
METRIC = evaluate.load("glue", "mrpc")
def train_baseline(zero_stage: int = 1):
# This forces transformers to think Zero-3 Init should be used
with patch("transformers.integrations.deepspeed.is_deepspeed_zero3_enabled") as mock:
mock.return_value = zero_stage == 3
set_seed(42)
accelerator = Accelerator()
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
MODEL_NAME, accelerator=accelerator
)
# Convert the model to TE
old_named_params = get_named_parameters(model)
with torch.no_grad():
convert_model(model)
new_named_params = get_named_parameters(model)
mapping = {p: new_named_params[n] for n, p in old_named_params.items()}
for param_group in optimizer.param_groups:
param_group["params"] = [mapping[p] for p in param_group["params"]]
FP8_RECIPE_KWARGS = {"fp8_format": te_recipe.Format.HYBRID, "amax_history_len": 32, "amax_compute_algo": "max"}
fp8_recipe = DelayedScaling(**FP8_RECIPE_KWARGS)
import numpy as np
config = {
"train_batch_size": 16,
"train_micro_batch_size_per_gpu": 16,
"gradient_accumulation_steps": 1,
"zero_optimization": {
"stage": zero_stage,
"offload_optimizer": {"device": "none", "nvme_path": None},
"offload_param": {"device": "none", "nvme_path": None},
"stage3_gather_16bit_weights_on_model_save": False,
},
"gradient_clipping": 1.0,
"steps_per_print": np.inf,
"bf16": {"enabled": True},
"fp16": {"enabled": False},
"zero_allow_untested_optimizer": True,
}
(
model,
optimizer,
_,
_,
) = deepspeed.initialize(
model=model,
optimizer=optimizer,
config_params=config,
)
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.train()
model_outputs = []
data = []
for _ in range(2):
for batch in train_dataloader:
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
outputs = model(**batch)
data.append(batch.to("cpu"))
model_outputs.append(outputs.logits.to("cpu"))
loss = outputs.loss
model.backward(loss)
model.step()
for _ in range(accelerator.num_processes):
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.destroy()
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results, model_outputs, data
def train_integration(zero_stage: int = 1):
set_seed(42)
FP8_RECIPE_KWARGS = {"fp8_format": "HYBRID", "amax_history_len": 32, "amax_compute_algo": "max"}
kwargs_handlers = [FP8RecipeKwargs(backend="TE", **FP8_RECIPE_KWARGS)]
AcceleratorState()._reset_state(True)
deepspeed_plugin = DeepSpeedPlugin(
zero_stage=zero_stage,
zero3_init_flag=zero_stage == 3,
)
accelerator = Accelerator(
mixed_precision="fp8", kwargs_handlers=kwargs_handlers, deepspeed_plugin=deepspeed_plugin
)
accelerator.state.deepspeed_plugin.deepspeed_config["train_micro_batch_size_per_gpu"] = 16
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
MODEL_NAME, accelerator=accelerator
)
model, optimizer, lr_scheduler = accelerator.prepare(model, optimizer, lr_scheduler)
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.train()
model_outputs = []
data = []
for _ in range(2):
for batch in train_dataloader:
outputs = model(**batch)
data.append(batch.to("cpu"))
model_outputs.append(outputs.logits.to("cpu"))
loss = outputs.loss
accelerator.backward(loss)
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.destroy()
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results, model_outputs, data
if __name__ == "__main__":
for zero_stage in [1, 2, 3]:
baseline_not_trained, baseline_trained, baseline_outputs, baseline_data = train_baseline(zero_stage)
accelerator_not_trained, accelerator_trained, accelerator_outputs, accelerator_data = train_integration(
zero_stage
)
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
f"ZERO stage {zero_stage}: Accuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
)
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
f"ZERO stage {zero_stage}: F1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
)
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
f"ZERO stage {zero_stage}: Accuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
)
assert baseline_trained["f1"] == accelerator_trained["f1"], (
f"ZERO stage {zero_stage}: F1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
)
torch.distributed.destroy_process_group()

View File

@ -0,0 +1,116 @@
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import torch
def get_dataloaders(model_name: str, batch_size: int = 16):
from datasets import load_dataset
from torch.utils.data import DataLoader
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)
datasets = load_dataset("glue", "mrpc")
def tokenize_function(examples):
# max_length=None => use the model max length (it's actually the default)
outputs = tokenizer(examples["sentence1"], examples["sentence2"], truncation=True, max_length=None)
return outputs
# Apply the method we just defined to all the examples in all the splits of the dataset
# starting with the main process first:
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
remove_columns=["idx", "sentence1", "sentence2"],
)
# We also rename the 'label' column to 'labels' which is the expected name for labels by the models of the
# transformers library
tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
def collate_fn(examples):
return tokenizer.pad(
examples,
padding="longest",
pad_to_multiple_of=16, # Specific for FP8
return_tensors="pt",
)
# Instantiate dataloaders.
train_dataloader = DataLoader(
tokenized_datasets["train"], shuffle=True, collate_fn=collate_fn, batch_size=batch_size, drop_last=True
)
eval_dataloader = DataLoader(
tokenized_datasets["validation"],
shuffle=False,
collate_fn=collate_fn,
batch_size=16,
drop_last=True,
)
return train_dataloader, eval_dataloader
def get_training_utilities(model_name: str, batch_size: int = 16, accelerator=None):
"""
Returns a tuple of:
- Model
- Optimizer
- Train dataloader (prepared)
- Eval dataloader (prepared)
- LR Scheduler
Suitable for training on the MRPC dataset
"""
from torch.optim import AdamW
from transformers import AutoModelForSequenceClassification, get_linear_schedule_with_warmup
from accelerate import Accelerator
if accelerator is None:
accelerator = Accelerator()
model = AutoModelForSequenceClassification.from_pretrained(model_name)
train_dataloader, eval_dataloader = get_dataloaders(model_name, batch_size)
optimizer = AdamW(model.parameters(), lr=0.0001)
lr_scheduler = get_linear_schedule_with_warmup(
optimizer=optimizer,
num_warmup_steps=100,
num_training_steps=len(train_dataloader) * 2,
)
train_dataloader, eval_dataloader = accelerator.prepare(train_dataloader, eval_dataloader)
return model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
def get_named_parameters(model):
"""
Same as `Accelerator.get_named_parameters`. Returns a dict of the model's named parameters (after extracting
it from any parallel wrapper)
"""
from accelerate.utils import extract_model_from_parallel
model = extract_model_from_parallel(model)
return {n: p for n, p in model.named_parameters()}
def evaluate_model(model, dataloader, metric, accelerator=None):
"Turns model to .eval(), runs dataloader, calculates metric, then turns eval back on"
model.eval()
for step, batch in enumerate(dataloader):
with torch.no_grad():
outputs = model(**batch)
predictions = outputs.logits.argmax(dim=-1)
references = batch["labels"]
if accelerator is not None and accelerator.num_processes > 1:
predictions, references = accelerator.gather_for_metrics((predictions, references))
metric.add_batch(predictions=predictions, references=references)
return metric.compute()

View File

@ -0,0 +1,161 @@
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This script tests to ensure that `accelerate` performs at the same level as raw `TransformerEngine`.
This particular script verifies this for FSDP training.
"""
from functools import partial
import evaluate
import torch
import transformer_engine.common.recipe as te_recipe
import transformer_engine.pytorch as te
from fp8_utils import evaluate_model, get_named_parameters, get_training_utilities
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import MixedPrecision
from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy
from transformer_engine.common.recipe import DelayedScaling
from transformers.models.bert import BertLayer
from accelerate import Accelerator
from accelerate import FullyShardedDataParallelPlugin as FSDPPlugin
from accelerate.state import AcceleratorState
from accelerate.utils import FP8RecipeKwargs, set_seed
from accelerate.utils.transformer_engine import convert_model
MODEL_NAME = "bert-base-cased"
METRIC = evaluate.load("glue", "mrpc")
FSDP_WRAP_POLICY = partial(transformer_auto_wrap_policy, transformer_layer_cls={BertLayer})
def train_baseline():
set_seed(42)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(MODEL_NAME)
accelerator = Accelerator()
device = accelerator.device
model.to(device)
# Convert the model to TE
old_named_params = get_named_parameters(model)
with torch.no_grad():
convert_model(model)
FP8_RECIPE_KWARGS = {"fp8_format": te_recipe.Format.HYBRID, "amax_history_len": 32, "amax_compute_algo": "max"}
fp8_recipe = DelayedScaling(**FP8_RECIPE_KWARGS)
new_named_params = get_named_parameters(model)
# Convert the model to FSDP
model = FSDP(
model,
use_orig_params=True,
mixed_precision=MixedPrecision(param_dtype=torch.bfloat16, reduce_dtype=torch.float32),
auto_wrap_policy=FSDP_WRAP_POLICY,
)
mapping = {p: new_named_params[n] for n, p in old_named_params.items()}
for param_group in optimizer.param_groups:
param_group["params"] = [mapping[p] for p in param_group["params"]]
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.train()
for _ in range(2):
for batch in train_dataloader:
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
batch = batch.to(device)
outputs = model(**batch)
loss = outputs.loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
def train_integration():
FP8_RECIPE_KWARGS = {"fp8_format": "HYBRID", "amax_history_len": 32, "amax_compute_algo": "max"}
kwargs_handlers = [FP8RecipeKwargs(backend="TE", **FP8_RECIPE_KWARGS)]
AcceleratorState()._reset_state(True)
fsdp_plugin = FSDPPlugin(
auto_wrap_policy=FSDP_WRAP_POLICY,
use_orig_params=True,
mixed_precision_policy=MixedPrecision(param_dtype=torch.bfloat16, reduce_dtype=torch.float32),
)
accelerator = Accelerator(mixed_precision="fp8", fsdp_plugin=fsdp_plugin, kwargs_handlers=kwargs_handlers)
set_seed(42)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
MODEL_NAME, accelerator=accelerator
)
model, optimizer = accelerator.prepare(model, optimizer)
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
model.train()
for _ in range(2):
for batch in train_dataloader:
outputs = model(**batch)
loss = outputs.loss
accelerator.backward(loss)
optimizer.step()
optimizer.zero_grad()
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
if __name__ == "__main__":
baseline_not_trained, baseline_trained = train_baseline()
accelerator_not_trained, accelerator_trained = train_integration()
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
)
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
)
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
)
assert baseline_trained["f1"] == accelerator_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
)
torch.distributed.destroy_process_group()

View File

@ -0,0 +1,132 @@
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This script tests to ensure that `accelerate` performs at the same level as raw `TransformerEngine`.
This particular script verifies this for single GPU training.
"""
import evaluate
import torch
import transformer_engine.common.recipe as te_recipe
import transformer_engine.pytorch as te
from fp8_utils import evaluate_model, get_named_parameters, get_training_utilities
from transformer_engine.common.recipe import DelayedScaling
from accelerate import Accelerator
from accelerate.state import AcceleratorState
from accelerate.utils import FP8RecipeKwargs, set_seed
from accelerate.utils.transformer_engine import convert_model
MODEL_NAME = "bert-base-cased"
METRIC = evaluate.load("glue", "mrpc")
def train_baseline():
set_seed(42)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(MODEL_NAME)
# Convert the model to TE
old_named_params = get_named_parameters(model)
with torch.no_grad():
convert_model(model)
new_named_params = get_named_parameters(model)
mapping = {p: new_named_params[n] for n, p in old_named_params.items()}
for param_group in optimizer.param_groups:
param_group["params"] = [mapping[p] for p in param_group["params"]]
FP8_RECIPE_KWARGS = {"fp8_format": te_recipe.Format.HYBRID, "amax_history_len": 32, "amax_compute_algo": "max"}
fp8_recipe = DelayedScaling(**FP8_RECIPE_KWARGS)
model.to("cuda")
base_model_results = evaluate_model(model, eval_dataloader, METRIC)
model.train()
for batch in train_dataloader:
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
batch = batch.to("cuda")
outputs = model(**batch)
loss = outputs.loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
def train_integration():
FP8_RECIPE_KWARGS = {"fp8_format": "HYBRID", "amax_history_len": 32, "amax_compute_algo": "max"}
kwargs_handlers = [FP8RecipeKwargs(backend="TE", **FP8_RECIPE_KWARGS)]
AcceleratorState()._reset_state(True)
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=kwargs_handlers)
set_seed(42)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
MODEL_NAME, accelerator=accelerator
)
model, optimizer, lr_scheduler = accelerator.prepare(model, optimizer, lr_scheduler)
base_model_results = evaluate_model(model, eval_dataloader, METRIC)
model.train()
for batch in train_dataloader:
outputs = model(**batch)
loss = outputs.loss
accelerator.backward(loss)
optimizer.step()
optimizer.zero_grad()
lr_scheduler.step()
trained_model_results = evaluate_model(model, eval_dataloader, METRIC)
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
)
assert trained_model_results["f1"] > base_model_results["f1"], (
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
)
return base_model_results, trained_model_results
if __name__ == "__main__":
baseline_not_trained, baseline_trained = train_baseline()
accelerator_not_trained, accelerator_trained = train_integration()
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
)
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
)
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
f"Accuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
)
assert baseline_trained["f1"] == accelerator_trained["f1"], (
f"F1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
)

@@ -0,0 +1,74 @@
# FSDP2 Benchmarks
This benchmark showcases `FSDP2` in 🤗 `accelerate` and compares it to a raw `torch` baseline.
## Overview
This benchmark consists of two parts:
- `main.py` is the main script that runs the benchmark
- `visualize.py` is the script that visualizes the results (if `--output_dir` was specified for the previous command)
## Motivation
We want to showcase that 🤗 `accelerate`'s integration of `FSDP2` is on par with raw PyTorch, and highlight a "broken" part of PyTorch: creating an optimizer before applying `FSDP2` **doesn't result in a working training loop** (more on this later).
This script showcases **matching memory usage and convergence between `accelerate` and `torch`'s baseline.**
To deal with this breaking change (and maintain backward compatibility with FSDP1 in terms of the API), `accelerate` had to come up with a workaround, since `accelerate` assumes that the user will nearly always create the model, optimizer, scheduler, etc. beforehand and bring them themselves. This led to a stark increase in memory, as well as the model not training at all, if the user created the optimizer beforehand.
To work around this, we replace the parameters inside the optimizer with the newly created FSDP2 sharded ones. More about this can be found in this [blog post (TBD)](TODO).
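In practice the workaround boils down to the sketch below. This is a condensed, illustrative version of `replace_optimizer_params` and `swap_back_optimizer_params` from `benchmarks/fsdp2/utils.py` shown further down; the helper name `fix_and_shard` is made up for this example.

```python
import torch
from torch.distributed.fsdp import fully_shard


def fix_and_shard(model: torch.nn.Module, optimizer: torch.optim.Optimizer):
    # Remember each parameter's name by its storage pointer before sharding
    old_ptrs = {name: p.data_ptr() for name, p in model.named_parameters()}

    # Drop the optimizer's references to the original storage so `fully_shard`
    # allocates fresh sharded tensors, stashing the old pointer on the placeholder
    for group in optimizer.param_groups:
        for i, p in enumerate(group["params"]):
            placeholder = torch.empty_like(p)
            placeholder.data_ptr = p.data_ptr()
            group["params"][i] = placeholder

    fully_shard(model)

    # Swap the freshly sharded parameters back into the optimizer via the saved pointers
    new_params = dict(model.named_parameters())
    ptr_to_new = {ptr: new_params[name] for name, ptr in old_ptrs.items()}
    for group in optimizer.param_groups:
        group["params"] = [ptr_to_new[p.data_ptr] for p in group["params"]]
```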
> [!WARNING]
> This script is intended to fit on 2x 24GB GPUs, though on so few GPUs it's not possible to see the memory difference (discrepancies in grad allocation result in lower memory usage in the non-fixed case), only the difference in convergence. Below are attached results from 8x H100 GPUs where the difference is visible.
> TLDR: more GPUs = bigger memory difference between fixed and non-fixed cases.
## Results
Here are the results from running the benchmark on 8x H100 GPUs:
<p align="center">
<img src="imgs/allocated_memory.png" width="80%" alt="Allocated Memory Usage">
</p>
<p align="center">
<img src="imgs/reserved_memory.png" width="80%" alt="Reserved Memory Usage">
</p>
As you can see, the memory usage of `accelerate` and `torch_post_shard` (the **intended** way) is very similar, while `torch_pre_shard_not_fixed` uses significantly more memory. Our fix in `torch_pre_shard_fixed` brings the memory usage back in line with the **intended** approach.
> [!WARNING]
> Timing discrepancies are due to all of the benchmarks being run in a single script.
## Running
To run the benchmark, you can either use `accelerate launch` or `torchrun`:
```bash
accelerate launch main.py
```
```bash
# For two GPUs
torchrun --nproc_per_node 2 main.py
```
The script supports multiple configurable options; you can learn about them by running:
```bash
python3 main.py --help
```
This script will run 4 different benchmarks:
- `torch_optimizer_after_fsdp`: `torch` baseline where optimizer is created after applying `FSDP2`, this is the **intended** way to do it
- `torch_optimizer_before_fsdp_not_fixed`: `torch` baseline where optimizer is created before applying `FSDP2` without fixing the optimizer parameters
- `torch_optimizer_before_fsdp_fixed`: `torch` baseline where optimizer is created before applying `FSDP2` with our fix to the optimizer
- `accelerate`: `accelerate`'s own integration of `FSDP2` where optimizer is created before applying `FSDP2`, but we apply our fix to the optimizer
Memory results are saved in the folder specified by the `--output_dir` argument.
Optionally, you can pass `--save_memory_snapshot` to also save the torch memory snapshot, which can then be viewed with [`torch memory viz`](https://pytorch.org/memory_viz).
## Visualizing results
To visualize the results, you can run:
```bash
python3 visualize.py --dir <path_to_output_dir>
```
This will then create two plots, showcasing allocated and reserved memory usage between all the different benchmarks discussed above.

Two binary image files added (not shown): 124 KiB and 56 KiB.

benchmarks/fsdp2/main.py (new file, 122 lines)

@@ -0,0 +1,122 @@
# Copyright 2025 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import functools
from typing import Callable
import torch
from accelerate import Accelerator
from utils import parse_args, prepare_accelerate, prepare_torch
MODEL_NAME = "Qwen/Qwen2.5-1.5B-Instruct"
LEARNING_RATE = 3e-5
CONFIG = {
"model_name": MODEL_NAME,
"learning_rate": LEARNING_RATE,
}
def train(
model: torch.nn.Module,
optimizer: torch.optim.Optimizer,
train_dataloader: torch.utils.data.DataLoader,
accelerator: Accelerator,
) -> torch.Tensor:
losses = []
for batch in train_dataloader:
optimizer.zero_grad()
outputs = model(**batch, use_cache=False)
loss = outputs.loss
losses.append(loss.item())
accelerator.backward(loss)
optimizer.step()
return torch.tensor(losses)
def evaluate(args, config: dict, init_fn: Callable, run_name: str) -> torch.Tensor:
model, optimizer, dataloader, accelerator, memory_tracker = init_fn(args, config)
loss = train(model, optimizer, dataloader, accelerator)
memory_tracker.stop()
msg = f"""Results for {run_name} (rank 0):
Loss: {loss[-1].item()}
Peak Allocated Memory: {float(memory_tracker.peak_allocated_memory):.2f} MB
Peak Reserved Memory: {float(memory_tracker.peak_reserved_memory):.2f} MB
{"-" * 34}"""
accelerator.print(msg)
return loss
def main():
args = parse_args()
evaluations = [
functools.partial(
evaluate,
init_fn=functools.partial(prepare_torch, post_shard_optimizer=False, apply_optimizer_fix=True),
run_name="Optimizer Before FSDP (w/ fix)",
),
functools.partial(
evaluate,
init_fn=functools.partial(prepare_torch, post_shard_optimizer=False, apply_optimizer_fix=False),
run_name="Optimizer Before FSDP (w/o fix)",
),
functools.partial(
evaluate,
init_fn=functools.partial(prepare_torch, post_shard_optimizer=True),
run_name="Optimizer After FSDP",
),
functools.partial(evaluate, init_fn=prepare_accelerate, run_name="Accelerate"),
]
labels = [
"Optimizer Before FSDP (w/ fix)",
"Optimizer Before FSDP (w/o fix)",
"Optimizer After FSDP",
"Accelerate",
]
results = {}
torch.use_deterministic_algorithms(True)
for evaluation, label in zip(evaluations, labels):
results[label] = evaluation(args, CONFIG)
torch.testing.assert_close(
results["Optimizer After FSDP"],
results["Optimizer Before FSDP (w/ fix)"],
msg="Optimizer After FSDP and Optimizer Before FSDP (w/ fix) should be the same",
)
torch.testing.assert_close(
results["Optimizer After FSDP"],
results["Accelerate"],
msg="Optimizer After FSDP and Accelerate should be the same",
)
torch.testing.assert_close(
results["Accelerate"],
results["Optimizer Before FSDP (w/ fix)"],
msg="Accelerate and Optimizer Before FSDP (w/ fix) should be the same",
)
torch.distributed.destroy_process_group()
if __name__ == "__main__":
main()

@@ -0,0 +1,130 @@
# Copyright 2025 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import gc
import json
import os
import threading
import time
import psutil
import torch
from accelerate import PartialState
class MemoryTracker:
def __init__(
self,
device: torch.device,
output_directory: str,
run_name: str,
save_memory_snapshot: bool,
log_interval: float = 0.01,
):
"""Class for tracking gpu and cpu memory usage of the process.
Args:
device (`torch.device`):
PyTorch device to monitor.
output_directory (`str`):
Directory to save the memory usage data to, will be created if it doesn't exist.
run_name (`str`):
Name of the run, will be used to name the output files.
save_memory_snapshot (`bool`):
Whether to also save `torch.cuda.memory._dump_snapshot` to the output directory.
log_interval (`float`, *optional*):
Interval in seconds between memory measurements. Defaults to 0.01.
"""
self.log_interval = log_interval
self.save_memory_snapshot = save_memory_snapshot
self.output_directory = output_directory
self.run_name = run_name
self.timestamps = []
self.allocated_memory = []
self.reserved_memory = []
self.virtual_memory = []
self.start_time = None
self.running = False
self._thread = None
self._state = PartialState()
self._process = psutil.Process()
self._device = device
self.torch_accelerator_module = getattr(torch, device.type, torch.cuda)
def _monitor(self):
self.start_time = time.time()
while self.running:
allocated = self.torch_accelerator_module.memory_allocated(self._device) / (1024 * 1024)
reserved = self.torch_accelerator_module.memory_reserved(self._device) / (1024 * 1024)
virtual_memory = self._process.memory_info().rss / (1024 * 1024)
self.allocated_memory.append(allocated)
self.reserved_memory.append(reserved)
self.virtual_memory.append(virtual_memory)
self.timestamps.append(time.time() - self.start_time)
time.sleep(self.log_interval)
def start(self):
gc.collect()
self.torch_accelerator_module.empty_cache()
if self.output_directory:
os.makedirs(self.output_directory, exist_ok=True)
if self.save_memory_snapshot:
self.torch_accelerator_module.memory._record_memory_history()
self.running = True
self._thread = threading.Thread(target=self._monitor)
self._thread.daemon = True
self._thread.start()
def stop(self):
self.running = False
if self._thread:
self._thread.join()
if self.save_memory_snapshot and self._state.is_main_process and self.output_directory:
output_file = os.path.join(self.output_directory, f"{self.run_name}_memory_snapshot.pkl")
self.torch_accelerator_module.memory._dump_snapshot(output_file)
if self._state.is_main_process and self.output_directory:
path = os.path.join(self.output_directory, f"{self.run_name}_memory_usage.json")
with open(path, "w") as f:
json.dump(
{
"timestamps": self.timestamps,
"allocated_memory": self.allocated_memory,
"reserved_memory": self.reserved_memory,
"virtual_memory": self.virtual_memory,
},
f,
)
if self.save_memory_snapshot:
self.torch_accelerator_module.memory._record_memory_history(False)
self.torch_accelerator_module.empty_cache()
@property
def peak_allocated_memory(self):
return max(self.allocated_memory)
@property
def peak_reserved_memory(self):
return max(self.reserved_memory)
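For context, a minimal usage sketch of `MemoryTracker`; the device, directory, and run name below are illustrative rather than taken from the benchmark.

```python
import torch
from measure_utils import MemoryTracker

tracker = MemoryTracker(
    device=torch.device("cuda:0"),
    output_directory="benchmark_results",
    run_name="example_run",
    save_memory_snapshot=False,
)
tracker.start()
# ... run the workload to be measured here ...
tracker.stop()
print(f"Peak allocated: {tracker.peak_allocated_memory:.2f} MB")
print(f"Peak reserved: {tracker.peak_reserved_memory:.2f} MB")
```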

benchmarks/fsdp2/utils.py (new file, 290 lines)

@@ -0,0 +1,290 @@
# Copyright 2025 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
from types import MethodType
from typing import Union
import torch
from datasets import load_dataset
from measure_utils import MemoryTracker
from torch.distributed.fsdp import MixedPrecisionPolicy, fully_shard
from torch.optim import AdamW
from torch.utils.data import DataLoader
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer, DataCollatorForLanguageModeling
from transformers.models.qwen2.modeling_qwen2 import Qwen2DecoderLayer
from accelerate import Accelerator, FullyShardedDataParallelPlugin
from accelerate.state import AcceleratorState, is_initialized
from accelerate.utils import convert_outputs_to_fp32, set_seed
SEED = 421
def get_named_parameters(model: torch.nn.Module, drop_refs: bool = False) -> dict[str, Union[torch.Tensor, int]]:
"""
This function returns a dictionary mapping the parameter names to their data pointers or
the original parameters if `drop_refs` is `False`.
It is used to get the original parameter names before `fully_shard` is applied.
We only return the data pointers, so we drop the references to the original parameters
and `fully_shard` will then trigger a new allocation for the sharded ones.
Args:
model (`torch.nn.Module`): Model instance to get the named parameters from
drop_refs (`bool`, *optional*, defaults to `False`): Whether to drop the references to the original parameters
Returns:
`dict[str, Union[torch.Tensor, int]]`: Dictionary mapping the parameter names to their data pointers or the original parameters if `drop_refs` is `False`
"""
named_parameters = {}
for n, p in model.named_parameters():
# We only preserve the data pointers to have the unique 1:1 mapping between the original and the sharded parameters
named_parameters[n] = p.data_ptr() if drop_refs else p
return named_parameters
def replace_optimizer_params(optimizer: torch.optim.Optimizer):
"""
This function is called before using `fully_shard` on the model. It replaces the parameters of the optimizer with
empty tensors, so `fully_shard` can trigger a new allocation for the sharded ones. After this, we swap the parameters
`data_ptr` to the original one, so we can reuse that later to map the sharded parameters to the original ones.
This function modifies the optimizer in-place.
Args:
optimizer (torch.optim.Optimizer): Optimizer instance which contains the original model parameters
"""
for param_group in optimizer.param_groups:
for i, p in enumerate(param_group["params"]):
# We drop a reference to the original param here, so that _move_states_to_device triggers a reallocation
# This is required or else the `fully_shard` -> `_move_states_to_device` uses the original memory address
# for the sharded parameters, and we get a weird/undefined behavior.
param_group["params"][i] = torch.empty_like(p)
# We save the original data_ptr, so we can swap back the parameters later
param_group["params"][i].data_ptr = p.data_ptr()
def swap_back_optimizer_params(
model: torch.nn.Module, optimizer: torch.optim.Optimizer, old_named_parameter_pointers: dict[str, int]
):
"""
This function is the counterpart of `replace_optimizer_params`. It is called after `fully_shard` being applied to
the model. It swaps the parameters of the optimizer to their sharded counterparts.
It is done using the `data_ptr` mapping prepared in `replace_optimizer_params` and `get_named_parameters`.
Args:
model (`torch.nn.Module`): Model instance to get the new named parameters from
optimizer (`torch.optim.Optimizer`): Optimizer instance to swap the parameters of
old_named_parameter_pointers (`dict[str, int]`): Dictionary mapping the original parameter names: data_ptrs to the new ones
"""
# We get the new named parameters after `fully_shard` being applied
# We don't drop the references as we need the sharded parameters now
new_named_parameters = get_named_parameters(model, drop_refs=False)
# We create a mapping from the original data_ptr to the new sharded param corresponding to it
mapping = {p: new_named_parameters[n] for n, p in old_named_parameter_pointers.items()}
for param_group in optimizer.param_groups:
# We swap the parameters of the optimizer to the new sharded ones
param_group["params"] = [mapping[p.data_ptr] for p in param_group["params"]]
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument(
"--output_dir",
type=str,
help="Directory to save the benchmarking results.",
)
parser.add_argument(
"--save_memory_snapshot",
action="store_true",
default=False,
help="If True, `torch.cuda.memory._dump_snapshot` will be used to additionaly save the memory trace.",
)
######################
# Training arguments #
######################
parser.add_argument(
"--batch_size",
type=int,
default=2,
help="Batch size for the training loop.",
)
parser.add_argument(
"--block_size",
type=int,
default=128,
help="The maximum sequence length to use with the model.",
)
parser.add_argument(
"--dataset_fraction",
type=float,
default=1.0,
help="Fraction of the dataset to use.",
)
return parser.parse_args()
def prepare_dataloader(tokenizer, args, accelerator: Accelerator) -> DataLoader:
dataset = load_dataset("tiny_shakespeare", split="train", trust_remote_code=True)
def tokenize_function(example):
return tokenizer(
example["text"],
)
dataset = dataset.map(
tokenize_function,
batched=True,
remove_columns=["text"],
)
block_size = min(tokenizer.model_max_length, args.block_size)
def group_texts(examples):
concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
total_length = len(concatenated_examples[list(examples.keys())[0]])
total_length = (total_length // block_size) * block_size
result = {
k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
for k, t in concatenated_examples.items()
}
result["labels"] = result["input_ids"].copy()
return result
dataset = dataset.map(group_texts, batched=True)
dataset = dataset.select(range(int(len(dataset) * args.dataset_fraction)))
def collate_fn(examples):
return DataCollatorForLanguageModeling(
tokenizer=tokenizer,
mlm=False,
)(examples)
dataloader = DataLoader(
dataset,
batch_size=args.batch_size,
collate_fn=collate_fn,
)
dataloader = accelerator.prepare(dataloader)
return dataloader
def get_model(model_name: str):
# We require the model to be loaded in fp32, otherwise benchmarks don't match, as accelerate upcasts the parameters to fp32
config = AutoConfig.from_pretrained(model_name, trust_remote_code=True, torch_dtype=torch.float32)
model = AutoModelForCausalLM.from_config(config)
return model
def get_tokenizer(model_name: str):
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
return tokenizer
def prepare_torch(
args, config: dict, post_shard_optimizer: bool = False, apply_optimizer_fix: bool = False
) -> tuple[torch.nn.Module, torch.optim.Optimizer, torch.utils.data.DataLoader, Accelerator, MemoryTracker]:
mp_policy = MixedPrecisionPolicy(
param_dtype=torch.bfloat16,
reduce_dtype=torch.bfloat16,
output_dtype=torch.bfloat16,
)
accelerator = Accelerator(mixed_precision="bf16")
set_seed(SEED)
is_fixed = "fixed" if apply_optimizer_fix else "not_fixed"
is_post_shard = "optimizer_after_fsdp" if post_shard_optimizer else "optimizer_before_fsdp"
run_name = f"torch_{is_post_shard}" if post_shard_optimizer else f"torch_{is_post_shard}_{is_fixed}"
tokenizer = get_tokenizer(config["model_name"])
train_dataloader = prepare_dataloader(tokenizer, args, accelerator)
memory_tracker = MemoryTracker(accelerator.device, args.output_dir, run_name, args.save_memory_snapshot)
memory_tracker.start()
model = get_model(config["model_name"])
optimizer = None
if not post_shard_optimizer:
optimizer = AdamW(model.parameters(), lr=config["learning_rate"])
if apply_optimizer_fix:
# We drop the references to the original parameters, so that `fully_shard` can trigger a new allocation
# Then we get the `module_name: data_ptr` mapping, so we can swap back the parameters later
old_named_parameters = get_named_parameters(model, drop_refs=True)
# We replace the parameters of the optimizer with empty tensors, so that `fully_shard` can trigger a new allocation
# We also change the `data_ptr` of the parameters to the original ones, so we can swap back the parameters later
replace_optimizer_params(optimizer)
for module in model.modules():
if isinstance(module, Qwen2DecoderLayer):
fully_shard(module, mp_policy=mp_policy)
fully_shard(model, mp_policy=mp_policy)
# We do this to imitate how accelerate forces outputs to be in fp32 via `convert_outputs_to_fp32`
autocast_context = torch.autocast(device_type=accelerator.state.device.type, dtype=torch.bfloat16)
model_forward_func = model.forward.__func__
new_forward = autocast_context(model_forward_func)
model.forward = MethodType(new_forward, model)
model.forward = MethodType(convert_outputs_to_fp32(model.forward.__func__), model)
if post_shard_optimizer:
optimizer = AdamW(model.parameters(), lr=config["learning_rate"])
if not post_shard_optimizer and apply_optimizer_fix:
# We swap back the parameters of the optimizer to the original ones
swap_back_optimizer_params(model, optimizer, old_named_parameters)
return model, optimizer, train_dataloader, accelerator, memory_tracker
def prepare_accelerate(
args, config: dict
) -> tuple[torch.nn.Module, torch.optim.Optimizer, torch.utils.data.DataLoader, Accelerator, MemoryTracker]:
if is_initialized():
AcceleratorState()._reset_state(True)
fsdp_plugin = FullyShardedDataParallelPlugin(
fsdp_version=2,
auto_wrap_policy="transformer_based_wrap",
transformer_cls_names_to_wrap=["Qwen2DecoderLayer"],
)
accelerator = Accelerator(
fsdp_plugin=fsdp_plugin,
mixed_precision="bf16",
)
set_seed(SEED)
tokenizer = get_tokenizer(config["model_name"])
train_dataloader = prepare_dataloader(tokenizer, args, accelerator)
memory_tracker = MemoryTracker(accelerator.device, args.output_dir, "accelerate", args.save_memory_snapshot)
memory_tracker.start()
model = get_model(config["model_name"])
optimizer = AdamW(model.parameters(), lr=config["learning_rate"])
model, optimizer = accelerator.prepare(model, optimizer)
return model, optimizer, train_dataloader, accelerator, memory_tracker

@@ -0,0 +1,114 @@
# Copyright 2025 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import json
import matplotlib.pyplot as plt
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument("--dir", type=str, help="Directory containing the memory usage data")
parser.add_argument(
"--memory_threshold",
type=int,
default=0,
help="Memory threshold to filter data that is below this value (only filters 1st `--filter_partition` of the points which should roughtly correspond to the model loading)",
)
parser.add_argument(
"--filter_partition",
type=float,
default=1 / 3,
help="Partition to drop data from that are below the memory threshold",
)
return parser.parse_args()
def filter_data(data, memory_threshold, filter_partition, key):
timestamps = data["timestamps"]
memory = data[key]
mid_point = int(len(timestamps) * filter_partition)
filtered_times = []
filtered_memory = []
for i, (t, m) in enumerate(zip(timestamps, memory)):
if i < mid_point and m < memory_threshold:
continue
filtered_times.append(t)
filtered_memory.append(m)
return filtered_times, filtered_memory
def compare_memory_usage(data, labels, memory_threshold, filter_partition):
plt.style.use("seaborn-v0_8")
colors = ["#2ecc71", "#e74c3c", "#3498db", "#f1c40f"]
fig1, ax1 = plt.subplots(figsize=(15, 5))
for data_item, label, color in zip(data, labels, colors):
timestamps, allocated = filter_data(data_item, memory_threshold, filter_partition, "allocated_memory")
ax1.plot(timestamps, allocated, label=label, color=color, linewidth=2)
ax1.set_xlabel("Time (s)", fontsize=12)
ax1.set_ylabel("Allocated Memory (GB)", fontsize=12)
ax1.set_title("Allocated Memory Usage Over Time", fontsize=14, pad=15)
ax1.grid(True, linestyle="--", alpha=0.7)
ax1.legend(frameon=True, fancybox=True, shadow=True, fontsize=10)
ax1.spines["top"].set_visible(False)
ax1.spines["right"].set_visible(False)
plt.tight_layout()
fig2, ax2 = plt.subplots(figsize=(15, 5))
for data_item, label, color in zip(data, labels, colors):
timestamps, reserved = filter_data(data_item, memory_threshold, filter_partition, "reserved_memory")
ax2.plot(timestamps, reserved, label=label, color=color, linewidth=2)
ax2.set_xlabel("Time (s)", fontsize=12)
ax2.set_ylabel("Reserved Memory (GB)", fontsize=12)
ax2.set_title("Reserved Memory Usage Over Time", fontsize=14, pad=15)
ax2.grid(True, linestyle="--", alpha=0.7)
ax2.legend(frameon=True, fancybox=True, shadow=True, fontsize=10)
ax2.spines["top"].set_visible(False)
ax2.spines["right"].set_visible(False)
plt.tight_layout()
return fig1, fig2
if __name__ == "__main__":
args = parse_args()
DIR = args.dir
with open(f"{DIR}/torch_optimizer_before_fsdp_not_fixed_memory_usage.json") as f:
optimizer_before_fsdp_not_fixed = json.load(f)
with open(f"{DIR}/torch_optimizer_after_fsdp_memory_usage.json") as f:
optimizer_after_fsdp = json.load(f)
with open(f"{DIR}/torch_optimizer_before_fsdp_fixed_memory_usage.json") as f:
optimizer_before_fsdp_fixed = json.load(f)
with open(f"{DIR}/accelerate_memory_usage.json") as f:
accelerate = json.load(f)
data = [optimizer_before_fsdp_not_fixed, optimizer_before_fsdp_fixed, optimizer_after_fsdp, accelerate]
labels = [
"Optimizer Before FSDP (w/o fix)",
"Optimizer Before FSDP (w/ fix)",
"Optimizer After FSDP",
"Accelerate",
]
fig1, fig2 = compare_memory_usage(data, labels, args.memory_threshold, args.filter_partition)
fig1.savefig(f"{DIR}/allocated_memory.png")
fig2.savefig(f"{DIR}/reserved_memory.png")

@@ -0,0 +1,111 @@
# Regional Compilation Benchmark
This benchmark compares different compilation strategies using PyTorch's `torch.compile` and Accelerate's `compile_regions` utility, which is based on the recipe in [PyTorch documentation](https://pytorch.org/tutorials/recipes/regional_compilation.html).
## Overview
The benchmark evaluates three approaches:
- **Baseline**: No compilation, standard PyTorch eager execution.
- **Full compilation**: Using PyTorch's `torch.compile()` on the entire model.
- **Regional compilation**: Using `accelerate.utils.compile_regions()` which targets specific blocks of the model to optimize compilation time.
Each approach is tested with different batch sizes (1 and 4) at a sequence length of 128 on various LLaMA-based models ranging from 1B to 13B parameters. We purposefully run the forward pass outside of the `torch.no_grad()` context to simulate performance in a training environment, where gradients are needed.
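For reference, the difference between the two compiled variants comes down to a couple of lines. The sketch below assumes a CUDA device and uses one of the benchmarked model IDs; the full benchmark script is shown further down.

```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM

from accelerate.utils import compile_regions

config = AutoConfig.from_pretrained("NousResearch/Llama-3.2-1B")
model = AutoModelForCausalLM.from_config(config).to("cuda").eval()

full_model = torch.compile(model)        # compiles the whole model on first call
regional_model = compile_regions(model)  # compiles repeated blocks (e.g. decoder layers) individually

input_ids = torch.randint(0, 1000, (1, 128), dtype=torch.int64, device="cuda")
_ = full_model(input_ids, use_cache=False)      # first call pays the full compile cost
_ = regional_model(input_ids, use_cache=False)  # first call compiles one region, reused for the rest
```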
## Usage
To run this benchmark:
```bash
python regional_compilation.py
```
The script will automatically download the model configurations, create models, and benchmark both compilation and inference times across different scenarios.
## Requirements
- Suitable GPU memory for the models being tested.
- PyTorch with CUDA support.
- Transformers library.
- Accelerate library.
## Results
The benchmark results are summarized in the following figures:
- Compilation time is how long it takes to run the first forward pass.
- Speedup factor is the ratio of non-compiled baseline inference time to the fully/regionally compiled inference time.
<p align="center">
<img src="imgs/compilation_time.png" width="80%" alt="Compilation Time">
</p>
<p align="center">
<img src="imgs/speedup_factor.png" width="80%" alt="Speedup Factor">
</p>
Full results are available in the tables below:
```markdown
[-------------------------------------------------- NousResearch/Llama-3.2-1B ---------------------------------------------------]
| Inference time (1x128) | Inference time (4x128) | Compile time (1x128) | Compile time (4x128)
1 threads: -----------------------------------------------------------------------------------------------------------------------
Baseline | 18.3 | 18.4 | |
Full compilation | 6.3 | 10.0 | 10696.4 | 10248.0
Regional compilation | 9.7 | 10.0 | 1952.7 | 2903.9
Times are in milliseconds (ms).
[---------------------------------------------- NousResearch/Hermes-3-Llama-3.2-3B ----------------------------------------------]
| Inference time (1x128) | Inference time (4x128) | Compile time (1x128) | Compile time (4x128)
1 threads: -----------------------------------------------------------------------------------------------------------------------
Baseline | 33.4 | 33.6 | |
Full compilation | 11.2 | 23.9 | 17857.5 | 17736.5
Regional compilation | 17.3 | 23.7 | 2993.2 | 2478.8
Times are in milliseconds (ms).
[---------------------------------------------- NousResearch/Hermes-3-Llama-3.1-8B ----------------------------------------------]
| Inference time (1x128) | Inference time (4x128) | Compile time (1x128) | Compile time (4x128)
1 threads: -----------------------------------------------------------------------------------------------------------------------
Baseline | 40.3 | 59.5 | |
Full compilation | 18.9 | 54.4 | 20437.8 | 20152.3
Regional compilation | 19.7 | 54.0 | 2903.1 | 2438.0
Times are in milliseconds (ms).
[--------------------------------------------- NousResearch/Nous-Hermes-Llama2-13b ----------------------------------------------]
| Inference time (1x128) | Inference time (4x128) | Compile time (1x128) | Compile time (4x128)
1 threads: -----------------------------------------------------------------------------------------------------------------------
Baseline | 45.5 | 100.4 | |
Full compilation | 29.4 | 89.7 | 23099.4 | 22885.9
Regional compilation | 29.4 | 87.5 | 2945.5 | 2526.2
Times are in milliseconds (ms).
```
## Results Summary
### Compilation Time
Regional compilation provides significantly faster compilation times compared to full model compilation:
- **Full compilation**: Takes ~10-23 seconds depending on model size.
- **Regional compilation**: Takes only ~2-3 seconds across all model sizes.
- **Speed improvement**: Regional compilation is **5-9x faster** to compile.
### Inference Time
Regional compilation delivers inference performance close to full compilation:
- For batch size 1:
- For smaller models (1B-3B): Full compilation has a slight edge over regional compilation.
- For larger models (8B-13B): Regional compilation performs similarly to full compilation.
- For batch size 4: Regional compilation performs similarly to full compilation across all models.
## Key Takeaways
1. **Comparable Performance**: Regional compilation delivers performance speedups similar to full compilation, especially for larger models.
2. **Faster Compilation**: Regional compilation significantly reduces the time taken to compile models, making it a more efficient choice for deployment.
3. **Batch Size Impact**: At batch size 4, full compilation and regional compilation perform nearly identically.
4. **Model Size Impact**: Even with a small batch size, full compilation and regional compilation perform similarly for larger models (8B-13B).
5. **Practical Application**: For real-world applications, regional compilation is a practical choice for optimizing training cold start times, especially when working with large models.

Two binary image files added (not shown): 242 KiB and 218 KiB.

@@ -0,0 +1,77 @@
# Copyright 2025 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import torch
from torch.utils.benchmark import Compare, Timer
from transformers import AutoConfig, AutoModelForCausalLM
from accelerate.test_utils.testing import get_backend
from accelerate.utils import compile_regions
torch.set_float32_matmul_precision("high")
COMPILE_ITERS = 2
INFERENCE_ITERS = 100
BASELINE = "Baseline"
COMPILE_TIME = "Compile time"
INFRENCE_TIME = "Inference time"
FULL_COMPILATION = "Full compilation"
REGIONAL_COMPILATION = "Regional compilation"
INFRENCE_STMT = "model(input_ids, use_cache=False)"
COMPILE_STMT = f"torch._dynamo.reset(); torch._inductor.utils.clear_inductor_caches(); {INFRENCE_STMT}"
torch_device_type, _, _ = get_backend()
results = []
for model_id in [
# non-gated llama models
"NousResearch/Llama-3.2-1B",
"NousResearch/Hermes-3-Llama-3.2-3B",
"NousResearch/Hermes-3-Llama-3.1-8B",
"NousResearch/Nous-Hermes-Llama2-13b",
]:
with torch.device(torch_device_type):
config = AutoConfig.from_pretrained(model_id)
model = AutoModelForCausalLM.from_config(config).to(dtype=torch.float16).eval()
full_compilation_model = torch.compile(model)
regional_compilation_model = compile_regions(model)
for model, sub_label, description, stmt, iters in [
(model, BASELINE, INFRENCE_TIME, INFRENCE_STMT, INFERENCE_ITERS),
(full_compilation_model, FULL_COMPILATION, COMPILE_TIME, COMPILE_STMT, COMPILE_ITERS),
(full_compilation_model, FULL_COMPILATION, INFRENCE_TIME, INFRENCE_STMT, INFERENCE_ITERS),
(regional_compilation_model, REGIONAL_COMPILATION, COMPILE_TIME, COMPILE_STMT, COMPILE_ITERS),
(regional_compilation_model, REGIONAL_COMPILATION, INFRENCE_TIME, INFRENCE_STMT, INFERENCE_ITERS),
]:
for batch_size, sequence_length in [(1, 128), (4, 128)]:
input_ids = torch.randint(
0, 1000, size=(batch_size, sequence_length), dtype=torch.int64, device=torch_device_type
)
results.append(
Timer(
label=model_id,
sub_label=sub_label,
description=f"{description} ({batch_size}x{sequence_length})",
globals={"model": model, "input_ids": input_ids},
stmt=stmt,
).timeit(number=iters)
)
compare = Compare(results)
compare.colorize()
compare.print()

docker/README.md (new file, 74 lines)

@@ -0,0 +1,74 @@
<!---
Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
# Official Hugging Face Accelerate Docker Images
Accelerate publishes a variety of docker images as part of our CI that users can also use. These are stable images that Accelerate can run off of, each coming with a different setup configuration, all of which are officially hosted on [Docker Hub](https://hub.docker.com/r/huggingface/accelerate).
A breakdown of each is given below.
## Naming Conventions
Accelerate docker images follow a tagging convention of:
```bash
huggingface/accelerate:{accelerator}-{nightly,release}
```
`accelerator` in this instance is one of many applicable pre-configured backends:
* `gpu`: Comes compiled off of the `nvidia/cuda` image and includes core parts like `bitsandbytes`. Runs off python 3.9.
* `cpu`: Comes compiled off of `python:3.9-slim` and is designed for non-CUDA based workloads.
* More to come soon
* `gpu-deepspeed`: Comes compiled off of the `nvidia/cuda` image and includes core parts like `bitsandbytes` as well as the latest `deepspeed` version. Runs off python 3.10.
* `gpu-fp8-transformerengine`: Comes compiled off of `nvcr.io/nvidia/pytorch` and is specifically for running the `benchmarks/fp8` scripts on devices which support FP8 operations using the `TransformerEngine` library (RTX 4090, H100, etc)
## Nightlies vs Releases
With each release, a new build is pushed with the version number included in the name. For a GPU-supported image of version 0.28.0, for instance, it would look like the following:
```bash
huggingface/accelerate:gpu-release-0.28.0
```
Nightlies contain two different image tags. There is a general `nightly` tag which is built each night, and a `nightly-YYYY-MM-DD` which corresponds to a build from a particular date.
For instance, here is an example nightly CPU image from 3/14/2024
```bash
huggingface/accelerate:cpu-nightly-2024-03-14
```
## Running the images
Each image comes with `conda` and an `accelerate` environment that contains all of the installed dependencies.
To pull down the latest nightly run:
```bash
docker pull huggingface/accelerate:gpu-nightly
```
To then run it in interactive mode with access to the GPUs, run:
```bash
docker container run --gpus all -it huggingface/accelerate:gpu-nightly
```
## DEPRECATED IMAGES
CPU and GPU docker images were hosted at `huggingface/accelerate-gpu` and `huggingface/accelerate-cpu`. These builds are now outdated and will not receive updates.
The builds at the corresponding `huggingface/accelerate:{gpu,cpu}` contain the same `Dockerfile`, so it's as simple as changing the docker image to the desired ones from above. We will not be deleting these images for posterity, but they will not be receiving updates going forward.

@@ -1,7 +1,7 @@
 # Builds CPU-only Docker image of PyTorch
 # Uses multi-staged approach to reduce size
 # Stage 1
-FROM python:3.7-slim as compile-image
+FROM python:3.9-slim as compile-image
 ARG DEBIAN_FRONTEND=noninteractive
@@ -25,7 +25,7 @@ RUN python3 -m pip install --no-cache-dir \
     --extra-index-url https://download.pytorch.org/whl/cpu
 # Stage 2
-FROM python:3.7-slim AS build-image
+FROM python:3.9-slim AS build-image
 COPY --from=compile-image /opt/venv /opt/venv
 RUN useradd -ms /bin/bash user
 USER user

@@ -0,0 +1,46 @@
# Builds GPU docker image of PyTorch specifically
# Uses multi-staged approach to reduce size
# Stage 1
# Use base conda image to reduce time
FROM continuumio/miniconda3:latest AS compile-image
# Specify py version
# Note: DeepSpeed beyond v0.12.6 requires py 3.10
ENV PYTHON_VERSION=3.10
# Install apt libs
RUN apt-get update && \
apt-get install -y curl git wget && \
apt-get clean && \
rm -rf /var/lib/apt/lists*
# Create our conda env
RUN conda create --name accelerate python=${PYTHON_VERSION} ipython jupyter pip
# We don't install pytorch here yet since CUDA isn't available
# instead we use the direct torch wheel
ENV PATH /opt/conda/envs/accelerate/bin:$PATH
# Activate our bash shell
RUN chsh -s /bin/bash
SHELL ["/bin/bash", "-c"]
# Activate the conda env, install mpi4py, and install torch + accelerate
RUN source activate accelerate && conda install -c conda-forge mpi4py
RUN source activate accelerate && \
python3 -m pip install --no-cache-dir \
git+https://github.com/huggingface/accelerate#egg=accelerate[testing,test_trackers,deepspeed] \
--extra-index-url https://download.pytorch.org/whl/cu126
RUN python3 -m pip install --no-cache-dir bitsandbytes
# Stage 2
FROM nvidia/cuda:12.6.3-cudnn-devel-ubuntu22.04 AS build-image
COPY --from=compile-image /opt/conda /opt/conda
ENV PATH /opt/conda/bin:$PATH
# Install apt libs
RUN apt-get update && \
apt-get install -y curl git wget && \
apt-get clean && \
rm -rf /var/lib/apt/lists*
RUN echo "source activate accelerate" >> ~/.profile
# Activate the virtualenv
CMD ["/bin/bash"]

@@ -1,10 +1,10 @@
-# Builds GPU docker image of PyTorch
+# Builds GPU docker image of PyTorch specifically
 # Uses multi-staged approach to reduce size
 # Stage 1
 # Use base conda image to reduce time
 FROM continuumio/miniconda3:latest AS compile-image
 # Specify py version
-ENV PYTHON_VERSION=3.7.3
+ENV PYTHON_VERSION=3.9
 # Install apt libs
 RUN apt-get update && \
     apt-get install -y curl git wget && \
@@ -19,14 +19,17 @@ ENV PATH /opt/conda/envs/accelerate/bin:$PATH
 # Activate our bash shell
 RUN chsh -s /bin/bash
 SHELL ["/bin/bash", "-c"]
-# Activate the conda env and install torch + accelerate
+# Activate the conda env, install mpi4py, and install torch + accelerate
+RUN source activate accelerate && conda install -c conda-forge mpi4py
 RUN source activate accelerate && \
     python3 -m pip install --no-cache-dir \
     git+https://github.com/huggingface/accelerate#egg=accelerate[testing,test_trackers] \
-    --extra-index-url https://download.pytorch.org/whl/cu113
+    --extra-index-url https://download.pytorch.org/whl/cu126
+RUN python3 -m pip install --no-cache-dir bitsandbytes
 # Stage 2
-FROM nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04 AS build-image
+FROM nvidia/cuda:12.6.3-cudnn-devel-ubuntu22.04 AS build-image
 COPY --from=compile-image /opt/conda /opt/conda
 ENV PATH /opt/conda/bin:$PATH

docs/README.md (new file, 267 lines)

@@ -0,0 +1,267 @@
<!---
Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
# Generating the documentation
To generate the documentation, you first have to build it. Several packages are necessary to build the doc,
you can install them with the following command, at the root of the code repository:
```bash
pip install -e ".[docs]"
```
Then you need to install our special tool that builds the documentation:
```bash
pip install git+https://github.com/huggingface/doc-builder
```
---
**NOTE**
You only need to generate the documentation to inspect it locally (if you're planning changes and want to
check how they look before committing for instance). You don't have to commit the built documentation.
---
## Building the documentation
Once you have setup the `doc-builder` and additional packages, you can generate the documentation by
typing the following command:
```bash
doc-builder build accelerate docs/source/ --build_dir ~/tmp/test-build
```
You can adapt the `--build_dir` to set any temporary folder that you prefer. This command will create it and generate
the MDX files that will be rendered as the documentation on the main website. You can inspect them in your favorite
Markdown editor.
## Previewing the documentation
To preview the docs, first install the `watchdog` module with:
```bash
pip install watchdog
```
Then run the following command:
```bash
doc-builder preview {package_name} {path_to_docs}
```
For example:
```bash
doc-builder preview accelerate docs/source/
```
The docs will be viewable at [http://localhost:3000](http://localhost:3000). You can also preview the docs once you have opened a PR. You will see a bot add a comment to a link where the documentation with your changes lives.
---
**NOTE**
The `preview` command only works with existing doc files. When you add a completely new file, you need to update `_toctree.yml` & restart `preview` command (`ctrl-c` to stop it & call `doc-builder preview ...` again).
---
## Adding a new element to the navigation bar
Accepted files are Markdown (.md).
Create a file with its extension and put it in the source directory. You can then link it to the toc-tree by putting
the filename without the extension in the [`_toctree.yml`](https://github.com/huggingface/accelerate/blob/main/docs/source/_toctree.yml) file.
## Renaming section headers and moving sections
It helps to keep the old links working when renaming the section header and/or moving sections from one document to another. This is because the old links are likely to be used in Issues, Forums, and Social media, and it makes for a much better user experience if users reading those months later can still easily navigate to the originally intended information.
Therefore, we simply keep a little map of moved sections at the end of the document where the original section was. The key is to preserve the original anchor.
So if you renamed a section from: "Section A" to "Section B", then you can add at the end of the file:
```
Sections that were moved:
[ <a href="#section-b">Section A</a><a id="section-a"></a> ]
```
and of course, if you moved it to another file, then:
```
Sections that were moved:
[ <a href="../new-file#section-b">Section A</a><a id="section-a"></a> ]
```
Use the relative style to link to the new file so that the versioned docs continue to work.
## Writing Documentation - Specification
The `huggingface/accelerate` documentation follows the
[Google documentation](https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html) style for docstrings,
although we can write them directly in Markdown.
### Adding a new tutorial
Adding a new tutorial or section is done in two steps:
- Add a new file under `./source`. This file can either be ReStructuredText (.rst) or Markdown (.md).
- Link that file in `./source/_toctree.yml` on the correct toc-tree.
Make sure to put your new file under the proper section. It's unlikely to go in the first section (*Get Started*), so
depending on the intended targets (beginners, more advanced users, or researchers) it should go in sections two, three, or
four.
### Writing source documentation
Values that should be put in `code` should be surrounded by backticks: \`like so\`. Note that argument names
and objects like True, None, or any strings should usually be put in `code`.
When mentioning a class, function, or method, it is recommended to use our syntax for internal links so that our tool
adds a link to its documentation with this syntax: \[\`XXXClass\`\] or \[\`function\`\]. This requires the class or
function to be in the main package.
If you want to create a link to some internal class or function, you need to
provide its path. For instance: \[\`utils.gather\`\]. This will be converted into a link with
`utils.gather` in the description. To get rid of the path and only keep the name of the object you are
linking to in the description, add a ~: \[\`~utils.gather\`\] will generate a link with `gather` in the description.
The same works for methods, so you can use either \[\`XXXClass.method\`\] or \[\`~XXXClass.method\`\].
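As a quick illustration, a docstring using these link styles could look like the sketch below; `gather_metrics` is a hypothetical function, while [`Accelerator`], [`Accelerator.gather`], and [`~utils.gather`] are real objects referenced only to show the syntax.

```python
def gather_metrics(metrics, accelerator):
    """
    Gathers the metric tensors from all processes.

    Relies on [`~utils.gather`] under the hood and assumes the [`Accelerator`] has already been
    created; see [`Accelerator.gather`] for the user-facing method.
    """
    return accelerator.gather(metrics)
```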
#### Defining arguments in a method
Arguments should be defined with the `Args:` (or `Arguments:` or `Parameters:`) prefix, followed by a line return and
an indentation. The argument should be followed by its type, with its shape if it is a tensor, a colon, and its
description:
```
Args:
n_layers (`int`): The number of layers of the model.
```
If the description is too long to fit in one line (more than 119 characters in total), another indentation is necessary
before writing the description after the argument.
Finally, to maintain uniformity, if any *one* description is too long to fit on one line, the
rest of the parameters should follow suit and have an indentation before their description.
Here's an example showcasing everything so far:
```
Args:
gradient_accumulation_steps (`int`, *optional*, default to 1):
The number of steps that should pass before gradients are accumulated. A number > 1 should be combined with `Accelerator.accumulate`.
cpu (`bool`, *optional*):
Whether or not to force the script to execute on CPU. Will ignore GPU available if set to `True` and force the execution on one process only.
```
For optional arguments or arguments with defaults, we use the following syntax: imagine we have a function with the
following signature:
```
def my_function(x: str = None, a: float = 1):
```
then its documentation should look like this:
```
Args:
x (`str`, *optional*):
This argument controls ... and has a description longer than 119 chars.
a (`float`, *optional*, defaults to 1):
This argument is used to ... and has a description longer than 119 chars.
```
Note that we always omit the "defaults to \`None\`" when None is the default for any argument. Also note that even
if the first line describing your argument type and its default gets long, you can't break it on several lines. You can
however write as many lines as you want in the indented description (see the examples above).
#### Writing a multi-line code block
Multi-line code blocks can be useful for displaying examples. They are done between two lines of three backticks as usual in Markdown:
````
```python
# first line of code
# second line
# etc
```
````
#### Writing a return block
The return block should be introduced with the `Returns:` prefix, followed by a line return and an indentation.
The first line should be the type of the return, followed by a line return. No need to indent further for the elements
building the return.
Here's an example of a single value return:
```
Returns:
`List[int]`: A list of integers in the range [0, 1] --- 1 for a special token, 0 for a sequence token.
```
Here's an example of a tuple return, comprising several objects:
```
Returns:
`tuple(torch.FloatTensor)` comprising various elements depending on the configuration ([`BertConfig`]) and inputs:
- **loss** (*optional*, returned when `masked_lm_labels` is provided) `torch.FloatTensor` of shape `(1,)` --
Total loss is the sum of the masked language modeling loss and the next sequence prediction (classification) loss.
- **prediction_scores** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`) --
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
```
## Styling the docstring
We have an automatic script running with the `make style` command that will make sure that:
- the docstrings fully take advantage of the line width
- all code examples are formatted using black, like the code of the Transformers library
This script may have some weird failures if you made a syntax mistake or if you uncover a bug. Therefore, it's
recommended to commit your changes before running `make style`, so you can revert the changes done by that script
easily.
## Writing documentation examples
The syntax for Example docstrings can look as follows:
```
Example:
```python
>>> import time
>>> from accelerate import Accelerator
>>> accelerator = Accelerator()
>>> if accelerator.is_main_process:
... time.sleep(2)
>>> else:
... print("I'm waiting for the main process to finish its sleep...")
>>> accelerator.wait_for_everyone()
>>> # Should print on every process at the same time
>>> print("Everyone is here")
```
```
The docstring should give a minimal, clear example of how the respective function
is to be used in inference and also include the expected (ideally sensible)
output.
Often, readers will try out the example before even going through the function
or class definitions. Therefore, it is of utmost importance that the example
works as expected.

@@ -10,65 +10,124 @@
- local: basic_tutorials/overview - local: basic_tutorials/overview
title: Overview title: Overview
- local: basic_tutorials/migration - local: basic_tutorials/migration
title: Migrating to 🤗 Accelerate title: Add Accelerate to your code
- local: basic_tutorials/execution
title: Execution process
- local: basic_tutorials/tpu
title: TPU training
- local: basic_tutorials/launch - local: basic_tutorials/launch
title: Launching distributed code title: Launching Accelerate scripts
- local: basic_tutorials/notebook - local: basic_tutorials/notebook
title: Launching distributed training from Jupyter Notebooks title: Launching distributed training from Jupyter Notebooks
title: Tutorials title: Tutorials
- sections: - sections:
- local: usage_guides/gradient_accumulation - isExpanded: true
title: Performing gradient accumulation sections:
- local: usage_guides/fsdp - local: usage_guides/explore
title: Fully Sharded Data Parallelism title: Start Here!
- local: usage_guides/checkpoint - local: usage_guides/model_size_estimator
title: Saving and loading training states title: Model memory estimator
- local: usage_guides/deepspeed - local: usage_guides/quantization
title: How to use DeepSpeed title: Model quantization
- local: usage_guides/tracking - local: usage_guides/tracking
title: Using experiment trackers title: Experiment trackers
- local: usage_guides/big_modeling - local: usage_guides/profiler
title: How to use large models with small resources title: Profiler
- local: usage_guides/memory - local: usage_guides/checkpoint
title: How to avoid CUDA Out-of-Memory title: Checkpointing
- local: usage_guides/sagemaker - local: basic_tutorials/troubleshooting
title: Using 🤗 Accelerate on SageMaker title: Troubleshoot
- local: usage_guides/mps - local: usage_guides/training_zoo
title: How to use Apple Silicon M1 GPUs title: Example Zoo
- local: usage_guides/training_zoo title: Accelerate
title: 🤗 Accelerate Example Zoo - isExpanded: true
title: How-To Guides sections:
- local: usage_guides/gradient_accumulation
title: Gradient accumulation
- local: usage_guides/local_sgd
title: Local SGD
- local: usage_guides/low_precision_training
title: Low precision (FP8) training
- local: usage_guides/deepspeed
title: DeepSpeed
- local: usage_guides/deepspeed_multiple_model
title: Using multiple models with DeepSpeed
- local: usage_guides/ddp_comm_hook
title: DDP Communication Hooks
- local: usage_guides/fsdp
title: Fully Sharded Data Parallel
- local: usage_guides/megatron_lm
title: Megatron-LM
- local: usage_guides/sagemaker
title: Amazon SageMaker
- local: usage_guides/mps
title: Apple M1 GPUs
- local: usage_guides/intel_cpu
title: Intel CPU
- local: usage_guides/gaudi
title: Intel Gaudi
- local: usage_guides/compilation
title: Compilation
title: Training
- isExpanded: true
sections:
- local: usage_guides/big_modeling
title: Big Model Inference
- local: usage_guides/distributed_inference
title: Distributed inference
title: Inference
title: How to guides
- sections: - sections:
- local: concept_guides/internal_mechanism
title: Accelerate's internal mechanism
- local: concept_guides/big_model_inference
title: Loading big models into memory
- local: concept_guides/performance - local: concept_guides/performance
title: Comparing performance across distributed setups title: Comparing performance across distributed setups
- local: concept_guides/gradient_synchronization
title: Gradient synchronization
- local: concept_guides/deferring_execution - local: concept_guides/deferring_execution
title: Executing and deferring jobs title: Executing and deferring jobs
- local: concept_guides/gradient_synchronization
title: Gradient synchronization
- local: concept_guides/fsdp_and_deepspeed
title: FSDP vs DeepSpeed
- local: concept_guides/fsdp1_vs_fsdp2
title: FSDP1 vs FSDP2
- local: concept_guides/context_parallelism
title: Context parallelism
- local: concept_guides/low_precision_training
title: Low precision training methods
- local: concept_guides/training_tpu - local: concept_guides/training_tpu
title: TPU best practices title: Training on TPUs
title: Concepts and fundamentals title: Concepts and fundamentals
- sections: - sections:
- local: package_reference/accelerator - local: package_reference/accelerator
title: Main Accelerator class title: Accelerator
- local: package_reference/state - local: package_reference/state
title: Stateful configuration classes title: Stateful classes
- local: package_reference/cli - local: package_reference/cli
title: The Command Line title: The Command Line
- local: package_reference/torch_wrappers - local: package_reference/torch_wrappers
title: Torch wrapper classes title: DataLoaders, Optimizers, Schedulers
- local: package_reference/tracking - local: package_reference/tracking
title: Experiment trackers title: Experiment trackers
- local: package_reference/launchers - local: package_reference/launchers
title: Distributed launchers title: Launchers
- local: package_reference/deepspeed - local: package_reference/deepspeed
title: DeepSpeed utilities title: DeepSpeed utilities
- local: package_reference/logging - local: package_reference/logging
title: Logging title: Logging
- local: package_reference/big_modeling - local: package_reference/big_modeling
title: Working with large models title: Working with large models
- local: package_reference/inference
title: Pipeline parallelism
- local: package_reference/kwargs - local: package_reference/kwargs
title: Kwargs handlers title: Kwargs handlers
- local: package_reference/fp8
title: FP8
- local: package_reference/utilities - local: package_reference/utilities
title: Utility functions and classes title: Utility functions and classes
title: "Reference" - local: package_reference/megatron_lm
title: Megatron-LM utilities
- local: package_reference/fsdp
title: Fully Sharded Data Parallel utilities
title: "Reference"


@ -0,0 +1,128 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Execution process
When working with distributed training systems, it is important to manage how and when processes are executed across GPUs. Some processes are completed faster than others, and some processes shouldn't begin if others haven't finished yet. Accelerate provides tools for orchestrating when processes are executed to ensure everything remains synchronized across all devices.
This tutorial will teach you how to execute a process on only one machine and how to delay execution until all processes have reached a certain point.
## Execute on one process
Certain code only needs to be run once on a given machine, such as printing a log statement or only displaying one progress bar on the local main process.
<hfoptions id="local-execution">
<hfoption id="statements">
You should use `accelerator.is_local_main_process` to indicate code that should only be executed once.
```py
from tqdm.auto import tqdm
progress_bar = tqdm(range(args.max_train_steps), disable=not accelerator.is_local_main_process)
```
You could also wrap a statement with `accelerator.is_local_main_process`.
> [!TIP]
> For standalone `print` statements that aren't wrapped in `accelerator.is_local_main_process`, replace `print` with Accelerate's [`~Accelerator.print`] method to print only once per machine.
```py
if accelerator.is_local_main_process:
print("Accelerate is the best")
```
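As the tip above mentions, the same one-off log can also be written with [`~Accelerator.print`], which only prints once per machine; a minimal sketch:
```py
accelerator.print("Accelerate is the best")
```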
</hfoption>
<hfoption id="function">
For a function that should only be executed once, use [`~Accelerator.on_local_main_process`].
```py
@accelerator.on_local_main_process
def do_my_thing():
"Something done once per server"
do_thing_once_per_server()
```
</hfoption>
</hfoptions>
You could also direct Accelerate to execute code once across *all processes* regardless of the number of machines. This is useful if you're uploading a final model to the Hub.
<hfoptions id="main-execution">
<hfoption id="statement">
You should use `accelerator.is_main_process` to indicate code that should only be executed once across all processes.
```py
if accelerator.is_main_process:
repo.push_to_hub()
```
</hfoption>
<hfoption id="function">
For a function that should only be executed once across all processes, use [`~Accelerator.on_main_process`].
```py
@accelerator.on_main_process
def do_my_thing():
"Something done once per server"
do_thing_once()
```
</hfoption>
</hfoptions>
## Execute on a specific process
Accelerate can also help you execute functions that should only be executed on a specific process or a local process index.
<hfoptions id="specific-execution">
<hfoption id="specific process">
Use the [`~Accelerator.on_process`] method and specify the process index to execute a function on.
```py
@accelerator.on_process(process_index=0)
def do_my_thing():
"Something done on process index 0"
do_thing_on_index_zero()
```
</hfoption>
<hfoption id="local process">
Use the [`~Accelerator.on_local_process`] method and specify the local process index to execute a function on.
```py
@accelerator.on_local_process(local_process_index=0)
def do_my_thing():
"Something done on process index 0 on each server"
do_thing_on_index_zero_on_each_server()
```
</hfoption>
</hfoptions>
## Defer execution
When you run your script on several GPUs at the same time, some code may be executed faster than others. You might need to wait for all processes to reach a certain point before executing the next set of instructions. For instance, you shouldn't save a model before making sure every process is done with training.
To do this, add [`~Accelerator.wait_for_everyone`] in your code. This blocks all processes that have finished first from continuing until all remaining processes have reached the same point (this has no effect if you're running on a single GPU or CPU).
```py
accelerator.wait_for_everyone()
```
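For instance, a minimal sketch of saving only after every process has finished training (assuming `model` was prepared earlier in the script; the filename is illustrative):
```py
# Block until every process has finished its training steps...
accelerator.wait_for_everyone()
# ...then save the unwrapped weights; Accelerator.save writes the file only once (from the main process by default)
unwrapped_model = accelerator.unwrap_model(model)
accelerator.save(unwrapped_model.state_dict(), "model_weights.pth")
```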


@ -8,33 +8,34 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License. specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> -->
# Installation and Configuration # Installation
Before you start, you will need to setup your environment, install the appropriate packages, and configure 🤗 Accelerate. 🤗 Accelerate is tested on **Python 3.7+**. Before you start, you will need to setup your environment, install the appropriate packages, and configure Accelerate. Accelerate is tested on **Python 3.8+**.
## Installing 🤗 Accelerate Accelerate is available on pypi and conda, as well as on GitHub. Details to install from each are below:
🤗 Accelerate is available on pypi and conda, as well as on GitHub. Details to install from each are below: ## pip
### pip To install Accelerate from pypi, perform:
To install 🤗 Accelerate from pypi, perform:
```bash ```bash
pip install accelerate pip install accelerate
``` ```
### conda ## conda
🤗 Accelerate can also be installed with conda with: Accelerate can also be installed with conda with:
```bash ```bash
conda install -c conda-forge accelerate conda install -c conda-forge accelerate
``` ```
### Source ## Source
New features are added every day that haven't been released yet. To try them out yourself, install New features are added every day that haven't been released yet. To try them out yourself, install
from the GitHub repository: from the GitHub repository:
@ -53,9 +54,9 @@ cd accelerate
pip install -e . pip install -e .
``` ```
## Configuring 🤗 Accelerate ## Configuration
After installing, you need to configure 🤗 Accelerate for how the current system is setup for training. After installing, you need to configure Accelerate for how the current system is setup for training.
To do so run the following and answer the questions prompted to you: To do so run the following and answer the questions prompted to you:
```bash ```bash
@ -67,7 +68,8 @@ To write a barebones configuration that doesn't include options such as DeepSpee
```bash ```bash
python -c "from accelerate.utils import write_basic_config; write_basic_config(mixed_precision='fp16')" python -c "from accelerate.utils import write_basic_config; write_basic_config(mixed_precision='fp16')"
``` ```
🤗 Accelerate will automatically utilize the maximum number of GPUs available and set the mixed precision mode.
Accelerate will automatically utilize the maximum number of GPUs available and set the mixed precision mode.
To check that your configuration looks fine, run: To check that your configuration looks fine, run:
@ -77,23 +79,36 @@ accelerate env
An example output is shown below, which describes two GPUs on a single machine with no mixed precision being used: An example output is shown below, which describes two GPUs on a single machine with no mixed precision being used:
```bash ```bash
- `Accelerate` version: 0.11.0.dev0 - `Accelerate` version: 1.2.0.dev0
- Platform: Linux-5.10.0-15-cloud-amd64-x86_64-with-debian-11.3 - Platform: Linux-6.8.0-47-generic-x86_64-with-glibc2.35
- Python version: 3.7.12 - `accelerate` bash location: /home/zach/miniconda3/envs/accelerate/bin/accelerate
- Numpy version: 1.19.5 - Python version: 3.10.13
- PyTorch version (GPU?): 1.12.0+cu102 (True) - Numpy version: 1.26.4
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- PyTorch XPU available: False
- PyTorch NPU available: False
- PyTorch MLU available: False
- PyTorch MUSA available: False
- System RAM: 187.91 GB
- GPU type: NVIDIA GeForce RTX 4090
- `Accelerate` default config: - `Accelerate` default config:
- compute_environment: LOCAL_MACHINE - compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_GPU - distributed_type: MULTI_GPU
- mixed_precision: no - mixed_precision: no
- use_cpu: False - use_cpu: False
- debug: False
- num_processes: 2 - num_processes: 2
- machine_rank: 0 - machine_rank: 0
- num_machines: 1 - num_machines: 1
- main_process_ip: None - gpu_ids: all
- main_process_port: None - rdzv_backend: static
- same_network: True
- main_training_function: main - main_training_function: main
- deepspeed_config: {} - enable_cpu_affinity: False
- fsdp_config: {} - downcast_bf16: no
``` - tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
```


@ -8,11 +8,14 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License. specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> -->
# Launching your 🤗 Accelerate scripts # Launching Accelerate scripts
In the previous tutorial, you were introduced to how to modify your current training script to use 🤗 Accelerate. In the previous tutorial, you were introduced to how to modify your current training script to use Accelerate.
The final version of that code is shown below: The final version of that code is shown below:
```python ```python
@ -36,7 +39,7 @@ for batch in training_dataloader:
But how do you run this code and have it utilize the special hardware available to it? But how do you run this code and have it utilize the special hardware available to it?
First you should rewrite the above code into a function, and make it callable as a script. For example: First, you should rewrite the above code into a function, and make it callable as a script. For example:
```diff ```diff
from accelerate import Accelerator from accelerate import Accelerator
@ -61,20 +64,20 @@ First you should rewrite the above code into a function, and make it callable as
+ main() + main()
``` ```
Next you need to launch it with `accelerate launch`. Next, you need to launch it with `accelerate launch`.
<Tip warning={true}> <Tip warning={true}>
It's recommended you run `accelerate config` before using `accelerate launch` to configure your environment to your liking. It's recommended you run `accelerate config` before using `accelerate launch` to configure your environment to your liking.
Otherwise 🤗 Accelerate will use very basic defaults depending on your system setup. Otherwise Accelerate will use very basic defaults depending on your system setup.
</Tip> </Tip>
## Using accelerate launch ## Using accelerate launch
🤗 Accelerate has a special CLI command to help you launch your code in your system through `accelerate launch`. Accelerate has a special CLI command to help you launch your code in your system through `accelerate launch`.
This command wraps around all of the different commands needed to launch your script on various platforms, without you having to remember what each of them are. This command wraps around all of the different commands needed to launch your script on various platforms, without you having to remember what each of them is.
<Tip> <Tip>
@ -88,23 +91,32 @@ You can launch your script quickly by using:
accelerate launch {script_name.py} --arg1 --arg2 ... accelerate launch {script_name.py} --arg1 --arg2 ...
``` ```
Just put `accelerate launch` at the start of your command, and pass in additional arguments and parameters to your script afterwards like normal! Just put `accelerate launch` at the start of your command, and pass in additional arguments and parameters to your script afterward like normal!
Since this runs the various torch spawn methods, all of the expected environment variables can be modified here as well. Since this runs the various torch spawn methods, all of the expected environment variables can be modified here as well.
For example, here is how to use `accelerate launch` with a single GPU: For example, here is how to use `accelerate launch` with a single GPU:
```bash ```bash
# for cuda device:
CUDA_VISIBLE_DEVICES="0" accelerate launch {script_name.py} --arg1 --arg2 ... CUDA_VISIBLE_DEVICES="0" accelerate launch {script_name.py} --arg1 --arg2 ...
# for xpu device:
ZE_AFFINITY_MASK="0" accelerate launch {script_name.py} --arg1 --arg2 ...
``` ```
You can also use `accelerate launch` without performing `accelerate config` first, but you may need to manually pass in the right configuration parameters. You can also use `accelerate launch` without performing `accelerate config` first, but you may need to manually pass in the right configuration parameters.
In this case, 🤗 Accelerate will make some hyperparameter decisions for you, e.g., if GPUs are available, it will use all of them by default without the mixed precision. In this case, Accelerate will make some hyperparameter decisions for you, e.g., if GPUs are available, it will use all of them by default without the mixed precision.
Here is how you would use all GPUs and train with mixed precision disabled: Here is how you would use all GPUs and train with mixed precision disabled:
```bash ```bash
accelerate launch --multi_gpu {script_name.py} {--arg1} {--arg2} ... accelerate launch --multi_gpu {script_name.py} {--arg1} {--arg2} ...
``` ```
Or by specifying a number of GPUs to use:
```bash
accelerate launch --num_processes=2 {script_name.py} {--arg1} {--arg2} ...
```
To get more specific you should pass in the needed parameters yourself. For instance, here is how you To get more specific you should pass in the needed parameters yourself. For instance, here is how you
would also launch that same script on two GPUs using mixed precision while avoiding all of the warnings: would also launch that same script on two GPUs using mixed precision while avoiding all of the warnings:
@ -120,16 +132,40 @@ accelerate launch -h
<Tip> <Tip>
Even if you are not using 🤗 Accelerate in your code, you can still use the launcher for starting your scripts! Even if you are not using Accelerate in your code, you can still use the launcher for starting your scripts!
</Tip> </Tip>
For a visualization of this difference, that earlier `accelerate launch` on multi-gpu would look something like so with `torchrun`: For a visualization of this difference, that earlier `accelerate launch` on multi-gpu would look something like so with `torchrun`:
```bash ```bash
MIXED_PRECISION="fp16" torchrun --nproc_per_node=2 --num_machines=1 {script_name.py} {--arg1} {--arg2} ... MIXED_PRECISION="fp16" torchrun --nproc_per_node=2 --nnodes=1 {script_name.py} {--arg1} {--arg2} ...
``` ```
You can also launch your script utilizing the launch CLI as a Python module itself, which lets you pass in other Python-specific
launching behaviors. To do so, use `accelerate.commands.launch` instead of `accelerate launch`:
```bash
python -m accelerate.commands.launch --num_processes=2 {script_name.py} {--arg1} {--arg2}
```
If you want to execute the script with any other Python flags, you can pass them in as well, similar to `-m`. For example,
the command below enables unbuffered stdout and stderr:
```bash
python -u -m accelerate.commands.launch --num_processes=2 {script_name.py} {--arg1} {--arg2}
```
<Tip>
You can run your code on CPU as well! This is helpful for debugging and testing purposes on toy models and datasets.
```bash
accelerate launch --cpu {script_name.py} {--arg1} {--arg2}
```
</Tip>
## Why you should always use `accelerate config` ## Why you should always use `accelerate config`
Why is it useful to the point you should **always** run `accelerate config`? Why is it useful to the point you should **always** run `accelerate config`?
@ -145,7 +181,7 @@ accelerate launch {script_name.py} {--arg1} {--arg2} ...
## Custom Configurations ## Custom Configurations
As briefly mentioned earlier, `accelerate launch` should be mostly used through combining set configurations As briefly mentioned earlier, `accelerate launch` should be mostly used through combining set configurations
made with the `accelerate config` command. These configs are saved to a `default_config.yaml` file in your cache folder for 🤗 Accelerate. made with the `accelerate config` command. These configs are saved to a `default_config.yaml` file in your cache folder for Accelerate.
This cache folder is located at (with decreasing order of priority): This cache folder is located at (with decreasing order of priority):
- The content of your environment variable `HF_HOME` suffixed with `accelerate`. - The content of your environment variable `HF_HOME` suffixed with `accelerate`.
@ -175,4 +211,25 @@ use_cpu: false
Launching a script from the location of that custom yaml file looks like the following: Launching a script from the location of that custom yaml file looks like the following:
```bash ```bash
accelerate launch --config_file {path/to/config/my_config_file.yaml} {script_name.py} {--arg1} {--arg2} ... accelerate launch --config_file {path/to/config/my_config_file.yaml} {script_name.py} {--arg1} {--arg2} ...
``` ```
## Multi-node training
Multi-node training with Accelerate is similar to [multi-node training with torchrun](https://pytorch.org/tutorials/intermediate/ddp_series_multinode.html). The simplest way to launch a multi-node training run is to do the following:
- Copy your codebase and data to all nodes (or place them on a shared filesystem).
- Set up your Python packages on all nodes.
- Run `accelerate config` on the main node first. After specifying the number of nodes, you will be asked to specify the rank of each node (this will be 0 for the main/master node), along with the IP address and port for the main process. This is required for the worker nodes to communicate with the main process. Afterwards, you can copy or send this config file to all of your nodes, changing the `machine_rank` to 1, 2, 3, etc. so you don't have to rerun the command on each node (or just follow the directions for launching with `torchrun` directly).
Once you have done this, you can start your multi-node training run by running `accelerate launch` (or `torchrun`) on all nodes.
<Tip>
It is required that the command be run on all nodes for everything to start, not just from the main node. You can use something like SLURM or a different process executor to wrap around this requirement and call everything from a single command.
</Tip>
<Tip>
It is recommended to use the intranet IP of your main node over the public IP for better latency. This is the `192.168.x.x` or the `172.x.x.x` address you see when you run `hostname -I` on the main node.
</Tip>
To get a better idea about multi-node training, check out our example for [multi-node training with FSDP](https://huggingface.co/blog/ram-efficient-pytorch-fsdp).


@ -0,0 +1,224 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Add Accelerate to your code
Each distributed training framework has its own way of doing things, which can require writing a lot of custom code to adapt it to your PyTorch training code and training environment. Accelerate offers a friendly way to interface with these distributed training frameworks without having to learn the specific details of each one. Accelerate takes care of those details for you, so you can focus on the training code and scale it to any distributed training environment.
In this tutorial, you'll learn how to adapt your existing PyTorch code with Accelerate and get on your way toward training on distributed systems with ease! You'll start with a basic PyTorch training loop (it assumes all the training objects like `model` and `optimizer` have been set up already) and progressively integrate Accelerate into it.
```python
device = "cuda"
model.to(device)
for batch in training_dataloader:
optimizer.zero_grad()
inputs, targets = batch
inputs = inputs.to(device)
targets = targets.to(device)
outputs = model(inputs)
loss = loss_function(outputs, targets)
loss.backward()
optimizer.step()
scheduler.step()
```
## Accelerator
The [`Accelerator`] is the main class for adapting your code to work with Accelerate. It knows about the distributed setup you're using such as the number of different processes and your hardware type. This class also provides access to many of the necessary methods for enabling your PyTorch code to work in any distributed training environment and for managing and executing processes across devices.
That's why you should always start by importing and creating an [`Accelerator`] instance in your script.
```python
from accelerate import Accelerator
accelerator = Accelerator()
```
The [`Accelerator`] also knows which device to move your PyTorch objects to, so it is recommended to let Accelerate handle this for you.
```diff
- device = "cuda"
+ device = accelerator.device
model.to(device)
```
## Prepare PyTorch objects
Next, you need to prepare your PyTorch objects (model, optimizer, scheduler, etc.) for distributed training. The [`~Accelerator.prepare`] method takes care of placing your model in the appropriate container (like single GPU or multi-GPU) for your training setup, adapting the optimizer and scheduler to use Accelerate's [`~optimizer.AcceleratedOptimizer`] and [`~scheduler.AcceleratedScheduler`], and creating a new dataloader that can be sharded across processes.
> [!TIP]
> Accelerate only prepares objects that inherit from their respective PyTorch classes such as `torch.optim.Optimizer`.
The PyTorch objects are returned in the same order they're sent.
```py
model, optimizer, training_dataloader, scheduler = accelerator.prepare(
model, optimizer, training_dataloader, scheduler
)
```
## Training loop
Finally, remove the `to(device)` calls to the inputs and targets in the training loop because Accelerate's DataLoader classes automatically place them on the right device. You should also replace the usual `backward()` pass with Accelerate's [`~Accelerator.backward`] method, which scales the gradients for you and uses the appropriate `backward()` method depending on your distributed setup (for example, DeepSpeed or Megatron).
```diff
- inputs = inputs.to(device)
- targets = targets.to(device)
outputs = model(inputs)
loss = loss_function(outputs, targets)
- loss.backward()
+ accelerator.backward(loss)
```
Put everything together and your new Accelerate training loop should now look like this!
```python
from accelerate import Accelerator
accelerator = Accelerator()
device = accelerator.device
model, optimizer, training_dataloader, scheduler = accelerator.prepare(
model, optimizer, training_dataloader, scheduler
)
for batch in training_dataloader:
optimizer.zero_grad()
inputs, targets = batch
outputs = model(inputs)
loss = loss_function(outputs, targets)
accelerator.backward(loss)
optimizer.step()
scheduler.step()
```
## Training features
Accelerate offers additional features, like gradient accumulation, gradient clipping, and mixed precision training, that you can add to your script to improve your training run. Let's explore these three features.
### Gradient accumulation
Gradient accumulation enables you to train on larger batch sizes by accumulating the gradients over multiple batches before updating the weights. This can be useful for getting around memory limitations. To enable this feature in Accelerate, specify the `gradient_accumulation_steps` parameter in the [`Accelerator`] class and add the [`~Accelerator.accumulate`] context manager to your script.
```diff
+ accelerator = Accelerator(gradient_accumulation_steps=2)
model, optimizer, training_dataloader = accelerator.prepare(model, optimizer, training_dataloader)
for input, label in training_dataloader:
+ with accelerator.accumulate(model):
predictions = model(input)
loss = loss_function(predictions, label)
accelerator.backward(loss)
optimizer.step()
scheduler.step()
optimizer.zero_grad()
```
### Gradient clipping
Gradient clipping is a technique to prevent "exploding gradients", and Accelerate offers:
* [`~Accelerator.clip_grad_value_`] to clip gradients to a minimum and maximum value
* [`~Accelerator.clip_grad_norm_`] for normalizing gradients to a certain value
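For example, a minimal sketch of clipping to a maximum gradient norm inside the training loop (the threshold of `1.0` is an arbitrary illustrative choice):
```py
accelerator.backward(loss)
# Only clip once gradients are actually synchronized (matters when gradient accumulation is enabled)
if accelerator.sync_gradients:
    accelerator.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
```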
### Mixed precision
Mixed precision accelerates training by using a lower precision data type like fp16 (half-precision) to calculate the gradients. For the best performance with Accelerate, the loss should be computed inside your model (like in Transformers models) because computations outside of the model are computed in full precision.
Set the mixed precision type to use in the [`Accelerator`], and then use the [`~Accelerator.autocast`] context manager to automatically cast the values to the specified data type.
> [!WARNING]
> Accelerate enables automatic mixed precision, so [`~Accelerator.autocast`] is only needed if there are other mixed precision operations besides those performed on loss by [`~Accelerator.backward`] which already handles the scaling.
```diff
+ accelerator = Accelerator(mixed_precision="fp16")
+ with accelerator.autocast():
loss = complex_loss_function(outputs, target)
```
## Save and load
Accelerate can also save and load a *model* once training is complete, or you can save the model and optimizer *state*, which can be useful for resuming training.
### Model
Once all processes are complete, unwrap the model with the [`~Accelerator.unwrap_model`] method before saving it because the [`~Accelerator.prepare`] method wrapped your model into the proper interface for distributed training. If you don't unwrap the model, saving the model state dictionary also saves any potential extra layers from the larger model and you won't be able to load the weights back into your base model.
You should use the [`~Accelerator.save_model`] method to unwrap and save the model state dictionary. This method can also save a model into sharded checkpoints or into the [safetensors](https://hf.co/docs/safetensors/index) format.
<hfoptions id="save">
<hfoption id="single checkpoint">
```py
accelerator.wait_for_everyone()
accelerator.save_model(model, save_directory)
```
<Tip>
For models from the [Transformers](https://hf.co/docs/transformers/index) library, save the model with the [`~transformers.PreTrainedModel.save_pretrained`] method so that it can be reloaded with the [`~transformers.PreTrainedModel.from_pretrained`] method.
```py
from transformers import AutoModel
unwrapped_model = accelerator.unwrap_model(model)
unwrapped_model.save_pretrained(
"path/to/my_model_directory",
is_main_process=accelerator.is_main_process,
save_function=accelerator.save,
)
model = AutoModel.from_pretrained("path/to/my_model_directory")
```
</Tip>
To load your weights, use the [`~Accelerator.unwrap_model`] method to unwrap the model first before loading the weights. All model parameters are references to tensors, so this loads your weights inside `model`.
```py
unwrapped_model = accelerator.unwrap_model(model)
path_to_checkpoint = os.path.join(save_directory,"pytorch_model.bin")
unwrapped_model.load_state_dict(torch.load(path_to_checkpoint))
```
</hfoption>
<hfoption id="sharded checkpoint">
Set `safe_serialization=True` to save the model in the safetensors format.
```py
accelerator.wait_for_everyone()
accelerator.save_model(model, save_directory, max_shard_size="1GB", safe_serialization=True)
```
To load a sharded checkpoint or a safetensor formatted checkpoint, use the [`~accelerate.load_checkpoint_in_model`] method. This method allows you to load a checkpoint onto a specific device.
```py
load_checkpoint_in_model(unwrapped_model, save_directory, device_map={"":device})
```
</hfoption>
</hfoptions>
### State
During training, you may want to save the current state of the model, optimizer, random generators, and potentially learning rate schedulers so they can be restored in the *same script*. You should add the [`~Accelerator.save_state`] and [`~Accelerator.load_state`] methods to your script to save and load states.
To further customize where and how states are saved through [`~Accelerator.save_state`], use the [`~utils.ProjectConfiguration`] class. For example, if `automatic_checkpoint_naming` is enabled, each saved checkpoint is stored at `Accelerator.project_dir/checkpoints/checkpoint_{checkpoint_number}`.
Any other stateful items to be stored should be registered with the [`~Accelerator.register_for_checkpointing`] method so they can be saved and loaded. Every object passed to this method to be stored must have a `load_state_dict` and `state_dict` function.
> [!TIP]
> If you have [`torchdata>=0.8.0`](https://github.com/pytorch/data/tree/main) installed, you can additionally pass `use_stateful_dataloader=True` into your [`~utils.DataLoaderConfiguration`]. This extends Accelerate's DataLoader classes with a `load_state_dict` and `state_dict` function, and makes it so `Accelerator.save_state` and `Accelerator.load_state` also track how far into the training dataset it has read when persisting the model.
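A minimal sketch tying these pieces together (the project directory name and the small custom object are assumptions for illustration, not part of the library):
```py
from accelerate import Accelerator
from accelerate.utils import ProjectConfiguration


class StepCounter:
    """A tiny custom object with the state_dict/load_state_dict pair Accelerate expects."""

    def __init__(self):
        self.steps = 0

    def state_dict(self):
        return {"steps": self.steps}

    def load_state_dict(self, state):
        self.steps = state["steps"]


# With automatic naming, checkpoints are written to my_project/checkpoints/checkpoint_{i}
config = ProjectConfiguration(project_dir="my_project", automatic_checkpoint_naming=True)
accelerator = Accelerator(project_config=config)

counter = StepCounter()
accelerator.register_for_checkpointing(counter)

accelerator.save_state()  # saves model/optimizer/RNG states plus registered objects
accelerator.load_state("my_project/checkpoints/checkpoint_0")  # restores them in the same script
```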


@ -1,123 +0,0 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Migrating your code to 🤗 Accelerate
This tutorial will detail how to easily convert existing PyTorch code to use 🤗 Accelerate!
You'll see that by just changing a few lines of code, 🤗 Accelerate can perform its magic and get you on
your way towards running your code on distributed systems with ease!
## The base training loop
To begin, write out a very basic PyTorch training loop.
<Tip>
We are under the presumption that `training_dataloader`, `model`, `optimizer`, `scheduler`, and `loss_function` have been defined beforehand.
</Tip>
```python
device = "cuda"
model.to(device)
for batch in training_dataloader:
optimizer.zero_grad()
inputs, targets = batch
inputs = inputs.to(device)
targets = targets.to(device)
outputs = model(inputs)
loss = loss_function(outputs, targets)
loss.backward()
optimizer.step()
scheduler.step()
```
## Add in 🤗 Accelerate
To start using 🤗 Accelerate, first import and create an [`Accelerator`] instance:
```python
from accelerate import Accelerator
accelerator = Accelerator()
```
[`Accelerator`] is the main force behind utilizing all the possible options for distributed training!
### Setting the right device
The [`Accelerator`] class knows the right device to move any PyTorch object to at any time, so you should
change the definition of `device` to come from [`Accelerator`]:
```diff
- device = 'cuda'
+ device = accelerator.device
model.to(device)
```
### Preparing your objects
Next you need to pass all of the important objects related to training into [`~Accelerator.prepare`]. 🤗 Accelerate will
make sure everything is setup in the current environment for you to start training:
```
model, optimizer, training_dataloader, scheduler = accelerator.prepare(
model, optimizer, training_dataloader, scheduler
)
```
These objects are returned in the same order they were sent in with. By default when using `device_placement=True`, all of the objects that can be sent to the right device will be.
If you need to work with data that isn't passed to [~Accelerator.prepare] but should be on the active device, you should pass in the `device` you made earlier.
<Tip warning={true}>
Accelerate will only prepare objects that inherit from their respective PyTorch classes (such as `torch.optim.Optimizer`).
</Tip>
### Modifying the training loop
Finally, three lines of code need to be changed in the training loop. 🤗 Accelerate's DataLoader classes will automatically handle the device placement by default,
and [`~Accelerator.backward`] should be used for performing the backward pass:
```diff
- inputs = inputs.to(device)
- targets = targets.to(device)
outputs = model(inputs)
loss = loss_function(outputs, targets)
- loss.backward()
+ accelerator.backward(loss)
```
With that, your training loop is now ready to use 🤗 Accelerate!
## The finished code
Below is the final version of the converted code:
```python
from accelerate import Accelerator
accelerator = Accelerator()
model, optimizer, training_dataloader, scheduler = accelerator.prepare(
model, optimizer, training_dataloader, scheduler
)
for batch in training_dataloader:
optimizer.zero_grad()
inputs, targets = batch
outputs = model(inputs)
loss = loss_function(outputs, targets)
accelerator.backward(loss)
optimizer.step()
scheduler.step()
```


@ -8,9 +8,12 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License. specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> -->
# Launching Multi-Node Training from a Jupyter Environment # Launching distributed training from Jupyter Notebooks
This tutorial teaches you how to fine tune a computer vision model with 🤗 Accelerate from a Jupyter Notebook on a distributed system. This tutorial teaches you how to fine tune a computer vision model with 🤗 Accelerate from a Jupyter Notebook on a distributed system.
You will also learn how to setup a few requirements needed for ensuring your environment is configured properly, your data has been prepared properly, and finally how to launch training. You will also learn how to setup a few requirements needed for ensuring your environment is configured properly, your data has been prepared properly, and finally how to launch training.
@ -23,19 +26,19 @@ You will also learn how to setup a few requirements needed for ensuring your env
## Configuring the Environment ## Configuring the Environment
Before any training can be performed, a 🤗 Accelerate config file must exist in the system. Usually this can be done by running the following in a terminal and answering the prompts: Before any training can be performed, an Accelerate config file must exist in the system. Usually this can be done by running the following in a terminal and answering the prompts:
```bash ```bash
accelerate config accelerate config
``` ```
However, if general defaults are fine and you are *not* running on a TPU, 🤗Accelerate has a utility to quickly write your GPU configuration into a config file via [`utils.write_basic_config`]. However, if general defaults are fine and you are *not* running on a TPU, Accelerate has a utility to quickly write your GPU configuration into a config file via [`utils.write_basic_config`].
The following code will restart Jupyter after writing the configuration, as CUDA code was called to perform this. The following code will restart Jupyter after writing the configuration, as CUDA code was called to perform this.
<Tip warning={true}> <Tip warning={true}>
CUDA can't be initialized more than once on a multi-node system. It's fine to debug in the notebook and have calls to CUDA, but in order to finally train a full cleanup and restart will need to be performed. CUDA can't be initialized more than once on a multi-GPU system. It's fine to debug in the notebook and have calls to CUDA, but in order to finally train a full cleanup and restart will need to be performed.
</Tip> </Tip>
@ -49,7 +52,7 @@ os._exit(00) # Restart the notebook
## Preparing the Dataset and Model ## Preparing the Dataset and Model
Next you should prepare your dataset. As mentioned at earlier, great care should be taken when preparing the `DataLoaders` and model to make sure that **nothing** is put on *any* GPU. Next you should prepare your dataset. As mentioned earlier, great care should be taken when preparing the `DataLoaders` and model to make sure that **nothing** is put on *any* GPU.
If you do, it is recommended to put that specific code into a function and call that from within the notebook launcher interface, which will be shown later. If you do, it is recommended to put that specific code into a function and call that from within the notebook launcher interface, which will be shown later.
@ -153,7 +156,7 @@ def get_dataloaders(batch_size: int = 64):
random_perm = np.random.permutation(len(fnames)) random_perm = np.random.permutation(len(fnames))
cut = int(0.8 * len(fnames)) cut = int(0.8 * len(fnames))
train_split = random_perm[:cut] train_split = random_perm[:cut]
eval_split = random_perm[:cut] eval_split = random_perm[cut:]
# For training a simple RandomResizedCrop will be used # For training a simple RandomResizedCrop will be used
train_tfm = Compose([RandomResizedCrop((224, 224), scale=(0.5, 1.0)), ToTensor()]) train_tfm = Compose([RandomResizedCrop((224, 224), scale=(0.5, 1.0)), ToTensor()])
@ -183,7 +186,7 @@ Here is a basic training loop for the animal classification problem:
<Tip> <Tip>
The code has been split up to allow for explainations on each section. A full version that can be copy and pasted will be available at the end The code has been split up to allow for explanations on each section. A full version that can be copy and pasted will be available at the end
</Tip> </Tip>
@ -324,7 +327,7 @@ def training_loop(mixed_precision="fp16", seed: int = 42, batch_size: int = 64):
# Build dataloaders # Build dataloaders
train_dataloader, eval_dataloader = get_dataloaders(batch_size) train_dataloader, eval_dataloader = get_dataloaders(batch_size)
# Instantiate the model (you build the model here so that the seed also controls new weight initaliziations) # Instantiate the model (you build the model here so that the seed also controls new weight initializations)
model = create_model("resnet50d", pretrained=True, num_classes=len(label_to_id)) model = create_model("resnet50d", pretrained=True, num_classes=len(label_to_id))
# Freeze the base model # Freeze the base model
@ -337,11 +340,11 @@ def training_loop(mixed_precision="fp16", seed: int = 42, batch_size: int = 64):
mean = torch.tensor(model.default_cfg["mean"])[None, :, None, None] mean = torch.tensor(model.default_cfg["mean"])[None, :, None, None]
std = torch.tensor(model.default_cfg["std"])[None, :, None, None] std = torch.tensor(model.default_cfg["std"])[None, :, None, None]
# To make this constant available on the active device, set it to the accelerator device # To make these constants available on the active device, set it to the accelerator device
mean = mean.to(accelerator.device) mean = mean.to(accelerator.device)
std = std.to(accelerator.device) std = std.to(accelerator.device)
# Intantiate the optimizer # Instantiate the optimizer
optimizer = torch.optim.Adam(params=model.parameters(), lr=3e-2 / 25) optimizer = torch.optim.Adam(params=model.parameters(), lr=3e-2 / 25)
# Instantiate the learning rate scheduler # Instantiate the learning rate scheduler
@ -398,6 +401,26 @@ args = ("fp16", 42, 64)
notebook_launcher(training_loop, args, num_processes=2) notebook_launcher(training_loop, args, num_processes=2)
``` ```
In the case of running on multiple nodes, you need to set up a Jupyter session at each node and run the launching cell at the same time.
For an environment containing 2 nodes (computers) with 8 GPUs each and the main computer with an IP address of "172.31.43.8", it would look like so:
```python
notebook_launcher(training_loop, args, master_addr="172.31.43.8", node_rank=0, num_nodes=2, num_processes=8)
```
And in the second Jupyter session on the other machine:
<Tip>
Notice how the `node_rank` has changed
</Tip>
```python
notebook_launcher(training_loop, args, master_addr="172.31.43.8", node_rank=1, num_nodes=2, num_processes=8)
```
In the case of running on the TPU, it would look like so: In the case of running on the TPU, it would look like so:
```python ```python
@ -407,6 +430,17 @@ args = (model, "fp16", 42, 64)
notebook_launcher(training_loop, args, num_processes=8) notebook_launcher(training_loop, args, num_processes=8)
``` ```
To launch the training process with elasticity, enabling fault tolerance, you can use the `elastic_launch` feature provided by PyTorch. This requires setting additional parameters such as `rdzv_backend` and `max_restarts`. Here is an example of how to use `notebook_launcher` with elastic capabilities:
```python
notebook_launcher(
training_loop,
args,
num_processes=2,
max_restarts=3
)
```
As it's running it will print the progress as well as state how many devices you ran on. This tutorial was ran with two GPUs: As it's running it will print the progress as well as state how many devices you ran on. This tutorial was ran with two GPUs:
```python out ```python out
@ -420,10 +454,23 @@ epoch 4: 94.71
And that's it! And that's it!
Please note that [`notebook_launcher`] ignores the Accelerate config file. To launch based on the config, use:
```bash
accelerate launch
```
## Debugging
A common issue when running the `notebook_launcher` is receiving a "CUDA has already been initialized" error. This usually stems
from an import or prior code in the notebook that makes a call to the PyTorch `torch.cuda` sublibrary. To help narrow down what went wrong,
you can launch the `notebook_launcher` with `ACCELERATE_DEBUG_MODE=yes` in your environment, and an additional check
will be made when spawning that a regular process can be created and utilize CUDA without issue. (Your CUDA code can still be run afterwards.)
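A minimal sketch of setting that variable from inside the notebook before spawning the training processes (exporting it in the shell that starts Jupyter works as well):
```python
import os

# Set the variable before spawning so the extra CUDA check runs when processes are created
os.environ["ACCELERATE_DEBUG_MODE"] = "yes"

notebook_launcher(training_loop, args, num_processes=2)
```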
## Conclusion ## Conclusion
This notebook showed how to perform distributed training from inside of a Jupyter Notebook. Some key notes to remember: This notebook showed how to perform distributed training from inside of a Jupyter Notebook. Some key notes to remember:
- Make sure to save any code that use CUDA (or CUDA imports) for the function passed to [`notebook_launcher`] - Make sure to save any code that use CUDA (or CUDA imports) for the function passed to [`notebook_launcher`]
- Set the `num_processes` to be the number of devices used for training (such as number of GPUs, CPUs, TPUs, etc) - Set the `num_processes` to be the number of devices used for training (such as number of GPUs, CPUs, TPUs, etc)
- If using the TPU, declare your model outside the training loop function - If using the TPU, declare your model outside the training loop function


@ -8,14 +8,17 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License. specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> -->
# Overview # Overview
Welcome to the 🤗 Accelerate tutorials! These introductory guides will help catch you up to speed on working with 🤗 Accelerate. Welcome to the Accelerate tutorials! These introductory guides will help catch you up to speed on working with Accelerate.
You'll learn how to modify your code to have it work with the API seamlessly, how to launch your script properly, You'll learn how to modify your code to have it work with the API seamlessly, how to launch your script properly,
and more! and more!
These tutorials assume some basic knowledge of Python and familiarity with the PyTorch framework. These tutorials assume some basic knowledge of Python and familiarity with the PyTorch framework.
If you have any questions about 🤗 Accelerate, feel free to join and ask the community on our [forum](https://discuss.huggingface.co/c/accelerate/18). If you have any questions about Accelerate, feel free to join and ask the community on our [forum](https://discuss.huggingface.co/c/accelerate/18).


@ -0,0 +1,38 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# TPU training
A [TPU (Tensor Processing Unit)](https://cloud.google.com/tpu/docs/intro-to-tpu) is a type of hardware specifically designed for training models efficiently. Accelerate supports TPU training, but there are a few things you should be aware of, namely graph compilation. This tutorial briefly discusses compilation, and for more details, take a look at the [Training on TPUs with Accelerate](../concept_guides/training_tpu) guide.
## Compilation
A TPU creates a graph of all the operations in the training step such as the forward pass, backward pass and optimizer step. This is why the first training step always takes a while because building and compiling this graph takes time. But once compilation is complete, it is cached and all subsequent steps are much faster.
The key is to avoid compiling your code again, otherwise training will be very slow. This means all your operations must be exactly the same:
* all tensors in your batches must have the same length (for example, no dynamic padding for NLP tasks)
* your code must be static (for example, no layers with for loops that have different lengths depending on the input, such as an LSTM)
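For example, a sketch of enforcing a fixed sequence length with the Transformers tokenizer API (the checkpoint name and `max_length` here are illustrative assumptions) so every batch keeps the same shape and the compiled graph is reused:
```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# Pad (and truncate) every example to the same fixed length instead of padding dynamically per batch
batch = tokenizer(
    ["a short sentence", "a slightly longer sentence"],
    padding="max_length",
    truncation=True,
    max_length=128,
    return_tensors="pt",
)
```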
## Weight tying
A common language model design is to tie the weights of the embedding and softmax layers. However, moving the model to a TPU (either yourself or passing it to the [`~Accelerator.prepare`] method) breaks the weight tying and you'll need to retie the weights.
To add special behavior (like weight tying) in your script for TPUs, set [`~Accelerator.distributed_type`] to `DistributedType.TPU` first. Then you can use the [`~transformers.PreTrainedModel.tie_weights`] method to tie the weights.
```py
if accelerator.distributed_type == DistributedType.TPU:
model.tie_weights()
```


@ -0,0 +1,211 @@
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Troubleshoot
This guide provides solutions to some issues you might encounter when using Accelerate. Not all errors are covered because Accelerate is an active library that is continuously evolving and there are many different use cases and distributed training setups. If the solutions described here don't help with your specific error, please take a look at the [Ask for help](#ask-for-help) section to learn where and how to get help.
## Logging
Logging can help you identify where an error is coming from. In a distributed setup with multiple processes, logging can be a challenge, but Accelerate provides the [`~accelerate.logging`] utility to ensure logs are synchronized.
To troubleshoot an issue, use [`~accelerate.logging`] instead of the standard Python [`logging`](https://docs.python.org/3/library/logging.html#module-logging) module. Set the verbosity level (`INFO`, `DEBUG`, `WARNING`, `ERROR`, `CRITICAL`) with the `log_level` parameter, and then you can either:
1. Export the `log_level` as the `ACCELERATE_LOG_LEVEL` environment variable.
2. Pass the `log_level` directly to `get_logger`.
For example, to set `log_level="DEBUG"`:
```py
from accelerate.logging import get_logger
logger = get_logger(__name__, log_level="DEBUG")
```
By default, the log is called on main processes only. To call it on all processes, pass `main_process_only=False`.
If a log should be called on all processes and in order, also pass `in_order=True`.
```py
from accelerate.logging import get_logger
logger = get_logger(__name__, log_level="DEBUG")
# log all processes
logger.debug("thing_to_log", main_process_only=False)
# log all processes in order
logger.debug("thing_to_log", main_process_only=False, in_order=True)
```
## Hanging code and timeout errors
There can be many reasons why your code is hanging. Let's take a look at how to solve some of the most common issues that can cause your code to hang.
### Mismatched tensor shapes
Mismatched tensor shapes is a common issue that can cause your code to hang for a significant amount of time on a distributed setup.
When running scripts in a distributed setup, functions such as [`Accelerator.gather`] and [`Accelerator.reduce`] are necessary to grab tensors across devices to collectively perform operations on them. These (and other) functions rely on `torch.distributed` to perform a `gather` operation, which requires tensors to have the **exact same shape** across all processes. When the tensor shapes don't match, your code hangs and you'll eventually hit a timeout exception.
You can use Accelerate's operational debug mode to immediately catch this issue. We recommend enabling this mode during the `accelerate config` setup, but you can also enable it from the CLI, as an environment variable, or by manually editing the `config.yaml` file.
<hfoptions id="mismatch">
<hfoption id="CLI">
```bash
accelerate launch --debug {my_script.py} --arg1 --arg2
```
</hfoption>
<hfoption id="environment variable">
If enabling debug mode as an environment variable, you don't need to call `accelerate launch`.
```bash
ACCELERATE_DEBUG_MODE="1" torchrun {my_script.py} --arg1 --arg2
```
</hfoption>
<hfoption id="config.yaml">
Add `debug: true` to your `config.yaml` file.
```yaml
compute_environment: LOCAL_MACHINE
debug: true
```
</hfoption>
</hfoptions>
Once you enable debug mode, you should get a traceback that points to the tensor shape mismatch issue.
```py
Traceback (most recent call last):
File "/home/zach_mueller_huggingface_co/test.py", line 18, in <module>
main()
File "/home/zach_mueller_huggingface_co/test.py", line 15, in main
broadcast_tensor = broadcast(tensor)
File "/home/zach_mueller_huggingface_co/accelerate/src/accelerate/utils/operations.py", line 303, in wrapper
accelerate.utils.operations.DistributedOperationException:
Cannot apply desired operation due to shape mismatches. All shapes across devices must be valid.
Operation: `accelerate.utils.operations.broadcast`
Input shapes:
- Process 0: [1, 5]
- Process 1: [1, 2, 5]
```
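If the mismatch comes from genuinely variable-length tensors (for example, generation outputs or the last, smaller batch of an evaluation set), one possible fix, sketched below under the assumption that an `accelerator` already exists and the tensors only differ along a single dimension, is to pad them to a common shape before gathering:
```py
# pad this process' tensor along dim 0 so every process ends up with the same shape
padded = accelerator.pad_across_processes(tensor, dim=0, pad_index=0)
gathered = accelerator.gather(padded)
```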
### Early stopping
For early stopping in distributed training, if each process has a specific stopping condition (e.g. validation loss), it may not be synchronized across all processes. As a result, a break can happen on process 0 but not on process 1 which will cause your code to hang indefinitely until a timeout occurs.
If you have early stopping conditionals, use the `set_trigger` and `check_trigger` methods to make sure all the processes
are ended correctly.
```py
# Assume `should_do_breakpoint` is a custom defined function that returns a conditional,
# and that conditional might be true only on process 1
if should_do_breakpoint(loss):
accelerator.set_trigger()
# Later in the training script when we need to check for the breakpoint
if accelerator.check_trigger():
break
```
### Low kernel versions on Linux
On Linux with kernel version < 5.5, hanging processes have been reported. To avoid this problem, upgrade your system to a later kernel version.
### MPI
If your distributed CPU training job using MPI is hanging, ensure that you have
[passwordless SSH](https://www.open-mpi.org/faq/?category=rsh#ssh-keys) setup (using keys) between the nodes. This means
that for all nodes in your hostfile, you should be able to SSH from one node to another without being prompted for a password.
Next, try to run the `mpirun` command as a sanity check. For example, the command below should print out the
hostnames for each of the nodes.
```bash
mpirun -f hostfile -n {number of nodes} -ppn 1 hostname
```
## Out-of-Memory
One of the most frustrating errors when it comes to running training scripts is hitting "Out-of-Memory" on devices like CUDA, XPU or CPU. The entire script needs to be restarted and any progress is lost.
To address this problem, Accelerate provides the [`find_executable_batch_size`] utility that is heavily based on [toma](https://github.com/BlackHC/toma).
This utility retries code that fails due to OOM (out-of-memory) conditions and automatically lowers batch sizes. For each OOM condition, the algorithm decreases the batch size by half and retries the code until it succeeds.
To use [`find_executable_batch_size`], restructure your training function to include an inner function with `find_executable_batch_size` and build your dataloaders inside it. At a minimum, this only takes 4 new lines of code.
<Tip warning={true}>
The inner function **must** take batch size as the first parameter, but we do not pass one to it when called. The wrapper will handle this for you. Any object (models, optimizers) that consumes device memory and is passed to the [`Accelerator`] also **must** be declared inside the inner function.
</Tip>
```diff
def training_function(args):
accelerator = Accelerator()
+ @find_executable_batch_size(starting_batch_size=args.batch_size)
+ def inner_training_loop(batch_size):
+ nonlocal accelerator # Ensure they can be used in our context
+ accelerator.free_memory() # Free all lingering references
model = get_model()
model.to(accelerator.device)
optimizer = get_optimizer()
train_dataloader, eval_dataloader = get_dataloaders(accelerator, batch_size)
lr_scheduler = get_scheduler(
optimizer,
num_training_steps=len(train_dataloader)*num_epochs
)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
)
train(model, optimizer, train_dataloader, lr_scheduler)
validate(model, eval_dataloader)
+ inner_training_loop()
```
## Non-reproducible results between device setups
If you changed the device setup and observe different model performance, it is likely you didn't update your script when moving from one setup to another. Even if you're using the same script with the same batch size, the results will still be different on a TPU, multi-GPU, and single GPU.
For example, if you were training on a single GPU with a batch size of 16 and you move to a dual GPU setup, you need to change the batch size to 8 to have the same effective batch size. This is because when training with Accelerate, the batch size passed to the dataloader is the **batch size per GPU**.
To make sure you can reproduce the results between the setups, make sure to use the same seed, adjust the batch size accordingly, and consider scaling the learning rate.
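For example, here is a small sketch of keeping the effective batch size constant when you change the number of processes (the numbers, `train_dataset`, and the already-created `accelerator` are placeholders):
```py
from torch.utils.data import DataLoader

effective_batch_size = 64
# the batch size passed to the dataloader is per device, so divide by the number of processes
per_device_batch_size = effective_batch_size // accelerator.num_processes
train_dataloader = accelerator.prepare(
    DataLoader(train_dataset, batch_size=per_device_batch_size, shuffle=True)
)
```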
For more details and a quick reference for batch sizes, check out the [Comparing performance between different device setups](../concept_guides/performance) guide.
## Performance issues on different GPUs
If your multi-GPU setup consists of different GPUs, you may encounter some performance issues:
- There may be an imbalance in GPU memory between the GPUs. In this case, the GPU with the smaller memory will limit the batch size or the size of the model that can be loaded onto the GPUs.
- If you are using GPUs with different performance profiles, the performance will be driven by the slowest GPU you are using because the other GPUs will have to wait for it to complete its workload.
Vastly different GPUs within the same setup can lead to performance bottlenecks.
## Ask for help
If none of the solutions and advice here helped resolve your issue, you can always reach out to the community and Accelerate team for help.
- Ask for help on the Hugging Face forums by posting your question in the [Accelerate category](https://discuss.huggingface.co/c/accelerate/18). Make sure to write a descriptive post with relevant context about your setup and reproducible code to maximize the likelihood that your problem is solved!
- Post a question on [Discord](http://hf.co/join/discord), and let the team and the community help you.
- Create an Issue on the Accelerate [GitHub repository](https://github.com/huggingface/accelerate/issues) if you think you've found a bug related to the library. Include context regarding the bug and details about your distributed setup to help us better figure out what's wrong and how we can fix it.

View File

@ -8,11 +8,14 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Loading big models into memory
When loading a pre-trained model in PyTorch, the usual workflow looks like this:
```py
import torch
@ -27,11 +30,11 @@ In plain English, those steps are:
2. Load the model weights (in a dictionary usually called a state dict) from the disk
3. Load those weights inside the model
While this works very well for regularly sized models, this workflow has some clear limitations when we deal with a huge model: in step 1, we load a full version of the model in RAM, and spend some time randomly initializing the weights (which will be discarded in step 3). In step 2, we load another full version of the model in RAM, with the pre-trained weights. If you're loading a model with 6 billion parameters, this means you will need 24GB of RAM for each copy of the model, so 48GB in total (half of it to load the model in FP16).
<Tip warning={true}>
This API is quite new and still in its experimental stage. While we strive to provide a stable API, it's possible some small parts of the public API will change in the future.
</Tip>
@ -43,7 +46,7 @@ While this works very well for regularly sized models, this workflow has some cl
### Instantiating an empty model
The first tool Accelerate introduces to help with big models is a context manager [`init_empty_weights`] that helps you initialize a model without using any RAM so that step 1 can be done on models of any size. Here is how it works:
```py
from accelerate import init_empty_weights
@ -59,7 +62,7 @@ with init_empty_weights():
model = nn.Sequential(*[nn.Linear(10000, 10000) for _ in range(1000)])
```
initializes an empty model with a bit more than 100B parameters. Behind the scenes, this relies on the meta device introduced in PyTorch 1.9. During the initialization under the context manager, each time a parameter is created, it is instantly moved to that device.
<Tip warning={true}>
@ -69,9 +72,9 @@ initializes an empty model with a bit more than 100B parameters. Behind the scen
### Sharded checkpoints
It's possible your model is so big that even a single copy won't fit in RAM. That doesn't mean it can't be loaded: if you have one or several GPUs, this is more memory available to store your model. In this case, it's better if your checkpoint is split into several smaller files that we call checkpoint shards.
Accelerate will handle sharded checkpoints as long as you follow the following format: your checkpoint should be in a folder, with several files containing the partial state dicts, and there should be an index in the JSON format that contains a dictionary mapping parameter names to the file containing their weights. You can easily shard your model with [`~Accelerator.save_model`]. For instance, we could have a folder containing:
```bash
first_state_dict.bin
@ -94,48 +97,69 @@ and `first_state_dict.bin` containing the weights for `"linear1.weight"` and `"l
### Loading weights
The second tool Accelerate introduces is a function [`load_checkpoint_and_dispatch`], that will allow you to load a checkpoint inside your empty model. This supports full checkpoints (a single file containing the whole state dict) as well as sharded checkpoints. It will also automatically dispatch those weights across the devices you have available (GPUs, CPU RAM), so if you are loading a sharded checkpoint, the maximum RAM usage will be the size of the biggest shard.
If you want to use big model inference with Transformers models, check out this [documentation](https://huggingface.co/docs/transformers/main/en/main_classes/model#large-model-loading).
Here is how we can use this to load the [GPT2-1.5B](https://huggingface.co/marcsun13/gpt2-xl-linear-sharded) model.
Let's download the sharded version of this model.
```bash
pip install huggingface_hub
```
```py
from huggingface_hub import snapshot_download
checkpoint = "marcsun13/gpt2-xl-linear-sharded"
weights_location = snapshot_download(repo_id=checkpoint)
```
In order to initialize the model, we will use the library minGPT.
```bash
git clone https://github.com/karpathy/minGPT.git
pip install minGPT/
```
```py
from accelerate import init_empty_weights
from mingpt.model import GPT

model_config = GPT.get_default_config()
model_config.model_type = 'gpt2-xl'
model_config.vocab_size = 50257
model_config.block_size = 1024

with init_empty_weights():
    model = GPT(model_config)
```
Then, load the checkpoint we just downloaded with:
```py
from accelerate import load_checkpoint_and_dispatch

model = load_checkpoint_and_dispatch(
    model, checkpoint=weights_location, device_map="auto", no_split_module_classes=['Block']
)
```
By passing `device_map="auto"`, we tell Accelerate to determine automatically where to put each layer of the model depending on the available resources:
- first, we use the maximum space available on the GPU(s)
- if we still need space, we store the remaining weights on the CPU
- if there is not enough RAM, we store the remaining weights on the hard drive as memory-mapped tensors
#### `no_split_module_classes`
This parameter will indicate that some of the modules with the name `"Block"` should not be split across different devices. You should set here all blocks that include a residual connection of some kind.
#### The `device_map`
You can see the `device_map` that Accelerate picked by accessing the `hf_device_map` attribute of your model:
```py
model.hf_device_map
@ -143,43 +167,34 @@ model.hf_device_map
```python out
{'transformer.wte': 0,
 'transformer.wpe': 0,
 'transformer.drop': 0,
 'transformer.h.0': 0,
 ...
 'transformer.h.21': 0,
 'transformer.h.22': 1,
 'transformer.h.23': 1,
 'transformer.h.24': 1,
 ...
 'transformer.h.47': 1,
 'transformer.ln_f': 1,
 'lm_head': 1}
```
It's fully possible to create your own device map for the layers to use as well, specifying the GPU device to use (a number), `"cpu"`, or `"disk"` and pass this in:
```python
device_map = {
    "transformer.wte": "cpu",
    "transformer.wpe": 0,
    "transformer.drop": "cpu",
    "transformer.h.0": "disk"
}
model = load_checkpoint_and_dispatch(
    model, checkpoint=weights_location, device_map=device_map
)
```
### Run the model
@ -187,31 +202,30 @@ model = load_checkpoint_and_dispatch(model, "sharded-gpt-j-6B", device_map=my_de
Now that we have done this, our model lies across several devices, and maybe the hard drive. But it can still be used as a regular PyTorch model:
```py
from mingpt.bpe import BPETokenizer

tokenizer = BPETokenizer()
inputs = tokenizer("Hello, my name is").to(0)
outputs = model.generate(inputs, max_new_tokens=10, do_sample=False)[0]
tokenizer.decode(outputs.cpu().squeeze())
```
Behind the scenes, Accelerate added hooks to the model, so that:
- at each layer, the inputs are put on the right device (so even if your model is spread across several GPUs, it works)
- for the weights offloaded on the CPU, they are put on a GPU just before the forward pass and cleaned up just after
- for the weights offloaded on the hard drive, they are loaded in RAM then put on a GPU just before the forward pass and cleaned up just after
This way, your model can run for inference even if it doesn't fit on one of the GPUs or the CPU RAM!
<Tip warning={true}>
This only supports the inference of your model, not training. Most of the computation happens behind `torch.no_grad()` context managers to avoid spending some GPU memory with intermediate activations.
</Tip>
### Designing a device map
You can let Accelerate handle the device map computation by setting `device_map` to one of the supported options (`"auto"`, `"balanced"`, `"balanced_low_0"`, `"sequential"`) or create one yourself if you want more control over where each layer should go.
<Tip>
@ -221,7 +235,7 @@ You can let 🤗 Accelerate handle the device map computation by setting `device
All the options will produce the same result when you don't have enough GPU memory to accommodate the whole model (which is to fit everything that can on the GPU, then offload weights on the CPU or even on the disk if there is not enough RAM).
When you have more GPU memory available than the model size, here is the difference between each option:
- `"auto"` and `"balanced"` evenly split the model on all available GPUs, making it possible for you to use a batch size greater than 1.
- `"balanced_low_0"` evenly splits the model on all GPUs except the first one, and only puts on GPU 0 what does not fit on the others. This option is great when you need to use GPU 0 for some processing of the outputs, like when using the `generate` function for Transformers models
- `"sequential"` will fit what it can on GPU 0, then move on GPU 1 and so forth (so won't use the last GPUs if it doesn't need to).
@ -232,9 +246,9 @@ When you have more GPU memory available than the model size, here the difference
</Tip>
First note that you can limit the memory used on each GPU by using the `max_memory` argument (available in [`infer_auto_device_map`] and in all functions using it). When setting `max_memory`, you should pass along a dictionary containing the GPU identifiers (for instance `0`, `1` etc.) and the `"cpu"` key for the maximum RAM you want to use for CPU offload. The values can either be an integer (in bytes) or a string representing a number with its unit, such as `"10GiB"` or `"10GB"`.
Here is an example where we don't want to use more than 10GiB on each of the two GPUs and no more than 30GiB of CPU RAM for the model weights:
```python
from accelerate import infer_auto_device_map
@ -246,18 +260,18 @@ device_map = infer_auto_device_map(my_model, max_memory={0: "10GiB", 1: "10GiB",
When a first allocation happens in PyTorch, it loads CUDA kernels which take about 1-2GB of memory depending on the GPU. Therefore you always have less usable memory than the actual size of the GPU. To see how much memory is actually used do `torch.ones(1).cuda()` and look at the memory usage.
Therefore when you create memory maps with `max_memory` make sure to adjust the available memory accordingly to avoid out-of-memory errors.
</Tip>
Additionally, if you do some additional operations with your outputs without placing them back on the CPU (for instance inside the `generate` method of Transformers) and if you placed your inputs on a GPU, that GPU will consume more memory than the others (Accelerate always places the output back on the device of the input). Therefore if you would like to optimize the maximum batch size and you have many GPUs, give the first GPU less memory. For example, with BLOOM-176B on an 8x80 A100 setup, the close-to-ideal map is:
```python
max_memory = {0: "30GIB", 1: "46GIB", 2: "46GIB", 3: "46GIB", 4: "46GIB", 5: "46GIB", 6: "46GIB", 7: "46GIB"}
```
as you can see we gave the remaining 7 GPUs ~50% more memory than GPU 0.
If you opt to fully design the `device_map` yourself, it should be a dictionary with keys being module names of your model and values being a valid device identifier (for instance an integer for the GPUs) or `"cpu"` for CPU offload, `"disk"` for disk offload. The keys need to cover the whole model, you can then define your device map as you wish: for instance, if your model has two blocks (let's say `block1` and `block2`) which each contain three linear layers (let's say `linear1`, `linear2` and `linear3`), a valid device map can be:
```python
device_map = {"block1": 0, "block2": 1}
@ -281,14 +295,47 @@ device_map = {"block1": 0, "block2.linear1": 1, "block2.linear2": 1}
</Tip>
## CPU offload only
If you want to offload your model on CPU, you can use [`cpu_offload`]. As a result, all parameters of the model will be offloaded and only one copy of the state dict of the model will be kept. During the forward pass, parameters will be extracted from that state dict and put on the execution device and passed as they are needed, then offloaded again.
```python
cpu_offload(model, execution_device)
```
You can also use [`cpu_offload_with_hook`]. This function offloads a model to the CPU and puts it back on the execution device when executed. The difference from [`cpu_offload`] is that the model stays on the execution device after the forward pass and is only offloaded again when the `offload` method of the returned `hook` is called. Furthermore, [`cpu_offload_with_hook`] is more performant but saves less memory. It is useful for pipelines running a model in a loop:
```python
model_1, hook_1 = cpu_offload_with_hook(model_1, execution_device)
model_2, hook_2 = cpu_offload_with_hook(model_2, execution_device, prev_module_hook=hook_1)
model_3, hook_3 = cpu_offload_with_hook(model_3, execution_device, prev_module_hook=hook_2)
hid_1 = model_1(input)
for i in range(50):
# model1 is offloaded on the CPU at the first iteration, model 2 stays on the GPU for this whole loop.
hid_2 = model_2(hid_1)
# model2 is offloaded to the CPU just before this forward.
hid_3 = model_3(hid_2)
# For model3, you need to manually call the hook offload method.
hook_3.offload()
```
## Disk offload only
To perform disk offload, you can use [`disk_offload`]. As a result, all parameters of the model will be offloaded as memory-mapped arrays in a given folder. During the forward pass, parameters will be accessed from that folder and put on the execution device as they are needed, then offloaded again.
```python
disk_offload(model, offload_dir, execution_device)
```
## Limits and further development
We are aware of the current limitations in the API:
- [`infer_auto_device_map`] (or `device_map="auto"` in [`load_checkpoint_and_dispatch`]) tries to maximize GPU and CPU RAM it sees available when you execute it. While PyTorch is very good at managing GPU RAM efficiently (and giving it back when not needed), it's not entirely true with Python and CPU RAM. Therefore, an automatically computed device map might be too intense on the CPU. Move a few modules to the disk device if you get crashes due to a lack of RAM.
- [`infer_auto_device_map`] (or `device_map="auto"` in [`load_checkpoint_and_dispatch`]) attributes devices sequentially (to avoid moving things back and forth) so if your first layer is bigger than the size of the GPU you have, it will end up with everything on the CPU/Disk.
- [`load_checkpoint_and_dispatch`] and [`load_checkpoint_in_model`] do not perform any check on the correctness of your state dict compared to your model at the moment (this will be fixed in a future version), so you may get some weird errors if trying to load a checkpoint with mismatched or missing keys.
- The model parallelism used when your model is split on several GPUs is naive and not optimized, meaning that only one GPU works at a given time and the other sits idle.
- When weights are offloaded on the CPU/hard drive, there is no pre-fetching (yet, we will work on this for future versions) which means the weights are put on the GPU when they are needed and not before.
- Hard-drive offloading might be very slow if the hardware you run on does not have fast communication between disk and CPU (like NVMes).

View File

@ -0,0 +1,204 @@
<!--Copyright 2025 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Context Parallel in 🤗`accelerate`
This guide will cover the basics of using context parallelism in 🤗`accelerate`. For the more curious readers, we will also cover some technicalities in the later sections.
## Why context parallelism?
With the advent of large language models, and recently reasoning models, the sequence length has been growing rapidly. This, combined with quadratic memory complexity of attention, has led to a need for more efficient ways to train models with long sequences.
With a sequence length of 128k, the memory requirement of the attention matrix is `128k * 128k * 2 bytes * num_heads = ~32 GB * num_heads` in `bf16` precision, given a vanilla attention implementation. Granted, with `flash attention` or `SDPA`, which do not materialize these attention weights, this decreases drastically, but the growth in memory requirements is still considerable.
Context parallelism allows us to shard the inputs to the attention computation along the sequence dimension and compute the attention in parallel on multiple GPUs. With this, we can train models with long sequences, scaling potentially to 1M+ sequence length.
## How to use context parallelism?
```diff
from accelerate.utils import ParallelismConfig, TorchContextParallelConfig
+ cp_config = TorchContextParallelConfig(
+ cp_comm_strategy="alltoall", # no need to use cp_config at all, if you want to use the default "allgather"
+ )
+ parallelism_config = ParallelismConfig(
+ cp_size=8,
+ cp_handler=cp_config, # or just cp_size=8, if you want to use the default "allgather"
+ )
accelerator = Accelerator(
...,
parallelism_config=parallelism_config,
)
```
As with any other feature in 🤗`accelerate`, you can enable context parallelism also by passing the corresponding flags to `accelerate launch`.
In this case, it's no different:
```bash
accelerate launch --parallelism-config-cp-size 8 --parallelism-config-cp-comm-strategy [allgather|alltoall] ...
```
> [!Tip]
> You can also set the `cp_size` and `cp_comm_strategy` in the `accelerate config` command, which will save them in your `accelerate` configuration file, so you don't have to pass them every time you launch your script.
> [!Tip]
> Context parallelism is compatible with other parallelism strategies, such as data parallelism, tensor parallelism and FSDP2.
> You can simply combine them by setting your parallelism sizes to the desired values, e.g. `--parallelism-config-dp-size 8 --parallelism-config-tp-size 2 --parallelism-config-cp-size 8`. Or you can use the `ParallelismConfig` class to set them programmatically.
> [!Warning]
> Context parallelism is tightly coupled with `FSDP2`, which you can learn more about in the [FSDP2 introduction](fsdp1_vs_fsdp2.md). This means context parallelism only works if you use `FullyShardedDataParallelPlugin` or `--use-fsdp` with the version set to 2 in your
> program. If `FSDP2` is not used, an error will be raised.
> [!Warning]
> Context parallelism works only with [SDPA](https://docs.pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) and only with no mask or causal mask. We can't properly detect this for you, so it's your responsibility to ensure that you are using `SDPA` with no mask or causal mask. If you use any other attention implementation, it will raise an error.
After enabling context parallelism with the methods mentioned above, you can apply it to your training loop. We provide a thin wrapper around [`torch.distributed.tensor.experimental.context_parallel`](https://docs.pytorch.org/docs/stable/distributed.tensor.html#torch.distributed.tensor.experimental.context_parallel) that you can use in your training loop and that abstracts away some of the complexity of using it (more on this later). To minimize the changes you have to make in your training loop, we provide a context manager that is a no-op if context parallelism is not enabled and applies it if it is enabled. This way, you can use it in your training loop without changing any code based on your parallelism configuration.
You can use it as follows:
```python
for batch in dataloader:
with accelerator.maybe_context_parallel(
buffers=[batch["input_ids"], batch["attention_mask"]],
buffer_seq_dims=[1, 1],
no_restore_buffers={batch["input_ids"], batch["attention_mask"]},
):
outputs = model(**batch)
...
```
> [!Warning]
> This context manager has to be recreated with each training step, as shown in the example above. It's crucial to do so.
This can potentially scale your context size to 1M+ sequence length. Below, we showcase the speed and memory usage of context parallelism for up to 256k context size. We can see that when we double the context size and the number of GPUs, we can achieve consistent memory usage, potentially enabling endless context length scaling.
<p align="center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/accelerate/examples/fsdp2/cp_perf.png" alt="context parallelism memory usage" />
<br>
<em>Memory usage and speed of context parallelism for up to 256k context size.</em>
</p>
> [!Tip]
> These examples were created with a script you can find [in the examples folder](https://github.com/huggingface/accelerate/blob/main/examples/fsdp2/nd_parallel.py). To run the example on 8 H100 GPUs (128k sequence length), you can use the following command:
> ```bash
> accelerate launch --use-fsdp --fsdp-activation-checkpointing=TRUE examples/fsdp2/nd_parallel.py --cp-size=8 --sequence-length=128000
> ```
## Accelerate's interface
The context manager takes a few arguments that are used to configure the context parallelism.
- `buffers`: This is a list of tensors that are to be sharded across the sequence dimension. These tensors are usually input ids, labels and attention mask.
- `buffer_seq_dims`: This is a list of integers that specify the sequence dimension of the buffers, in the order of the `buffers` list. If you pass `buffers=[input_ids, shift_labels]` with both having shape `[batch_size, sequence_length]`, you would pass `buffer_seq_dims=[1, 1]`, as the sequence dimension is the second dimension of the tensors. This is required for correct computation of the model outputs.
- `no_restore_buffers`: The implementation of context parallelism modifies the buffers in-place, converting them to `torch.distributed.tensor.DTensor`s. After the context manager exits, a communication kernel would need to be launched to restore the buffers to their original state (usually an all-gather). This takes some time, so it is recommended to pass the same tensors as in the `buffers` argument to avoid unnecessary communication, unless you are sure that you need to use the buffers after the context manager exits.
> [!Warning]
> Context parallelism is not compatible with `labels` that are a copy of `input_ids`, which models from 🤗 transformers can shift to enable causal language modeling themselves.
> Imagine this case:
> labels = [l1, l2, l3, l4, ... li]
> if we apply context parallelism, each rank would end up with a part of labels, such as this:
> labels_rank_0 = [l1, l2], labels_rank_1 = [l3, l4], ...
> after transformers modelling code shifts the labels, it would end up with:
> labels_rank_0 = [l2, PAD], labels_rank_1 = [l3, PAD], ...
> where `PAD` is a padding token. This would result in incorrect loss computation, as the labels are not aligned with the inputs anymore.
> Because of this, you need to manually shift the labels before passing them to the model (a sketch of this is shown right after this warning).
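A minimal sketch of this manual shift (the `shift_labels` key and the `-100` ignore index are conventions assumed by this example, not requirements of the API): shift once on the full-length batch, before the context manager shards the buffers, so every rank's labels stay aligned with its inputs.
```python
import torch

def add_shifted_labels(batch, ignore_index=-100):
    # position i should predict token i + 1; the last position gets the ignore index
    labels = batch["input_ids"].clone()
    shift_labels = torch.full_like(labels, ignore_index)
    shift_labels[:, :-1] = labels[:, 1:]
    batch["shift_labels"] = shift_labels
    return batch
```
You then compute the loss against `batch["shift_labels"]` yourself (or pass it to a model that accepts pre-shifted labels), instead of passing `labels=input_ids` and letting the modeling code shift them after sharding.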
## Configurable options
Accelerate provides only a single option to configure context parallelism (besides `cp_size`):
- `cp_comm_strategy`: The rotation method to use for the shards. We strongly recommend keeping this as `"allgather"`, as it's very likely to outperform `"alltoall"` in most cases.
The context parallel size is rather self-explanatory: it's the number of ranks across which the inputs are sharded.
The context parallel shard rotation defines how the shards of the inputs are rotated across ranks. We'll cover the two options in more detail in the next section.
You can see an end-to-end example in the [ND parallel example](https://github.com/huggingface/accelerate/blob/main/examples/fsdp2/nd_parallel.py) file, where you can train an 8B model with up to 128k context length on a single 8xH100 node. Using multi-node training, you can scale this to 1M+ sequence length on multiple GPUs. You can also seamlessly combine it with other parallelism strategies to fit your needs.
## Technical details
> [!Tip]
> This section is fairly technical, so if you don't need to learn the internals of context parallelism, you can skip it and start building 🚀
We're going to be using the word `shard` extensively in the following sections, so let's define it first. If we say a tensor is `sharded` across the `D`-th dimension over `N` ranks, we mean that the tensor is split into `N` parts, where each part has shape `[..., D//N, ...]`.
## So how does it work?
Context parallelism works by sharding the `Q`, `K` and `V` matrices across the sequence dimension. Each rank has its assigned shard of `Q`, let's call it `Q_i`. This matrix stays on this rank during the whole computation. Similarly, each rank has its own shard of `K` and `V`, let's call them `K_i` and `V_i`. Then, each rank calculates attention with its own shard of `Q_i`, `K_i` and `V_i`, let's call it `attn_i`. During this computation, a communication kernel is launched to gather the `Ks` and `Vs` from all other ranks. Which communication primitive is used depends on the `context_parallel_shard_rotation` option.
This way, each rank gets to calculate local attention, first with `Q_i`, `K_i` and `V_i`, then with `K_j` and `V_j` from all other ranks. As each rank holds `Q, K and V` matrices that are sharded across the sequence dimension, the resulting matrices are smaller and can fit on a single GPU.
We can formalize this in the following pseudocode:
```python
# pseudocode, from the perspective of rank i
comm_kernel = {"allgather": allgather, "alltoall": alltoall}[context_parallel_shard_rotation]
Qi, Ki, Vi = shard(Q, K, V, seq_dim)  # each rank keeps only its own shard
attn_out = {}
attn_out[i] = attn(Qi, Ki, Vi)  # local attention first
for j in all_other_ranks():
    Kj, Vj = comm_kernel()  # all-gather the other shards, or rotate the next one in (all-to-all)
    attn_out[j] = attn(Qi, Kj, Vj)  # [batch, num_heads, seq_len // context_parallel_size, head_dim]
final_attn = combine(attn_out)
```
## all-to-all vs all-gather
### all-gather
So what's the difference between all-to-all and all-gather? With all-gather, the communication is very simple. We launch an all-gather to collect the `Ks` and `Vs` from all other ranks around the time we compute the local attention `attn_i` (in practice the all-gather is launched before, as it usually takes longer than the computation). Once this communication is done, each rank has all the `Ks` and `Vs` from all other ranks and can compute the attention with them sequentially.
In the ideal scenario, the all-gather finishes at the exact moment the calculation of `attn_i` is done. However, this never happens in practice, so the realistic best case is that the full computation of `attn_i` is overlapped with part of the communication; we then wait for the all-gather to finish before starting the computation with `K_j` and `V_j`.
### all-to-all
All-to-all, sometimes called `ring-rotation`, utilizes a ring-like communication pattern. After concluding the `attn_i` computation, an all-to-all is launched to send `K_i` and `V_i` to the neighbouring ranks. We then repeat this `context_parallel_size-1` times, so that each rank sees all the shards of `K` and `V` from all other ranks once. In the ideal scenario, we prefetch shards `K_i+1` and `V_i+1` from the neighbouring rank, and this communication is exactly overlapped with the computation of our current `attn_i`. Again, realistically, this perfect overlap doesn't happen. Given the nature of this approach, if we don't achieve perfect overlap, the penalty is much larger than with all-gather.
## How to choose the right rotation method?
In theory, all-to-all should be the better choice. Though in practice, it rarely is. Therefore, we default to all-gather, as it's more likely to achieve better performance. Extensive [benchmarks](https://discuss.pytorch.org/t/distributed-w-torchtitan-breaking-barriers-training-long-context-llms-with-1m-sequence-length-in-pytorch-using-context-parallel/215082) from the `torchtitan` team also show that all-to-all rarely outperforms all-gather. Though, we still provide both options, as you might find one to be better for your use case.
You can directly see this issue in the profiler output in the image below:
<p align="center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/accelerate/examples/fsdp2/cp_all_to_all.png" alt="all-to-all profiler output" />
<br>
<em>Figure 1: In red you can see the idle time, while we wait for the all-to-all kernel to finish. Highlighted in the first blue bar, you can see that it takes ~250us to finish, which is repeated N-1 times for each attention call, where N is the context parallel size.</em>
</p>
## Why only FSDP2?
We only support context parallelism with `FSDP2`, as we create a joint mesh of `context_parallel_size` and `dp_shard_size` to
utilize its full potential.
It works as follows: we shard the model across the joint mesh of size `cp_size*dp_shard_size`, which maximizes the memory savings.
This is a "free lunch" of sorts, as `FSDP` communication is fully overlapped with the computation of attention, as shown in the images below.
<p align="center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/accelerate/examples/fsdp2/cp_why_fsdp2.png" alt="why FSDP2+CP" />
<br>
<em>Figure 2: In blue rectangles (Stream 23), you can see that the pre-fetch of `FSDP` shard is fully overlapped with the computation of attention (Stream 7), while in red rectangles (Stream 24), you can see that the all-gather kernel results in a bubble of idle time, in which our compute stream (7) is idle.</em>
</p>
In the figure above, you can also note the difference between all-to-all and all-gather. While in all-to-all (Figure 1), we launch a communication kernel N-1 times for each attention call, in all-gather (Figure 2), we launch a communication kernel only once. This results in a bigger bubble, but it only happens once per attention call, while in all-to-all, it happens N-1 times.
## Data dispatching in joint mesh
We make sure to dispatch the same batch of data to the whole `cp` subgroup, so that the results are correct (meaning each rank in the `cp` subgroup gets the same batch of data), while we dispatch different batches to each rank of the `dp_shard` group.
Imagine it like this:
```
# 8 GPUS, --dp_shard_size 4, --cp_size 2
# mesh = [[0, 1], [2, 3], [4, 5], [6, 7]]
# model is sharded across the whole mesh (each GPU holds 1/8 of the model)
# GPUs 0,1 = batch 0
# GPUs 2,3 = batch 1
... and so on.
```
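For illustration only (this is not Accelerate's internal code), the same layout can be reproduced with PyTorch's device mesh API, assuming the 8 processes above have already been launched:
```python
from torch.distributed.device_mesh import init_device_mesh

# dp_shard is the outer dimension and cp the inner one: ranks [0, 1] form one cp group,
# ranks [2, 3] the next, and so on, matching the mesh sketched above
mesh = init_device_mesh("cuda", (4, 2), mesh_dim_names=("dp_shard", "cp"))
cp_group = mesh["cp"].get_group()        # for rank 0 this contains ranks {0, 1}
dp_group = mesh["dp_shard"].get_group()  # for rank 0 this contains ranks {0, 2, 4, 6}
```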

View File

@ -8,11 +8,14 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Executing and deferring jobs
When you run your usual script, instructions are executed in order. Using Accelerate to deploy your script on several
GPUs at the same time introduces a complication: while each process executes all instructions in order, some may be
faster than others.
@ -27,7 +30,7 @@ accelerator.wait_for_everyone()
This instruction will block all the processes that arrive first until all the other processes have reached that
point (if you run your script on just one GPU or CPU, this won't do anything).
A few example cases of when to use this utility are listed below:
<Tip>
@ -38,7 +41,7 @@ A few example cases for when to use this utility are listed below:
## Downloading a Dataset
When downloading a dataset, you should download it first on the main process and then load the cached dataset afterward
<Tip>
@ -104,4 +107,24 @@ with accelerator.main_process_first():
    batched=True,
    remove_columns=["idx", "sentence1", "sentence2"],
)
```
## Applying checks such as Early Stopping
To have a check that works with a flag set by a particular process, the `set_trigger` and `check_trigger` API should be used. Useful examples
for doing so can include situations such as using early stopping and monitoring the loss (as each loss slightly differs on each process).
Call [`Accelerator.set_trigger`] when your condition has been met, and [`Accelerator.check_trigger`] when checking if that condition has been met in any process:
```python
for (x,y) in data_loader:
logits = model(x)
loss = loss_func(logits, y)
# Assume `should_do_early_stopping` is a custom defined function that returns a conditional
if should_do_early_stopping(loss):
accelerator.set_trigger()
# Later in the training script when we need to check for the breakpoint
if accelerator.check_trigger():
break
```

View File

@ -0,0 +1,105 @@
<!--Copyright 2025 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# FSDP1 vs FSDP2
This guide explains the key differences between `FSDP1` and `FSDP2` and helps you migrate your existing code to use `FSDP2` with minimal changes.
## How is FSDP2 better than FSDP1?
First, let's look at how `FSDP1` and `FSDP2` work internally to understand the differences between them. This also helps us see the limitations of `FSDP1` and how `FSDP2` solves them.
We'll be discussing a scenario where we have a single `Layer` that contains 3 `Linear` layers and is wrapped using `FSDP` to be sharded across 2 GPUs.
<div align="center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/accelerate/layer.png" alt="Layer">
</div>
### FSDP1
First, we have to understand the original `FSDP1` and the limitations it brings. It represents each `FSDP` module as a single `FlatParameter`, which is a single 1D tensor that contains all of the module parameters and then gets sharded across ranks. That is, if you wrap the `Layer` with `FSDP1`, you'd get something like this:
<div align="center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/accelerate/fsdp1.png" alt="FSDP1">
</div>
You might notice a problem. The whole `Layer` gets flattened into a single `FlatParameter`, which then gets sharded across ranks. But if it's a single `FlatParameter` object, how do we store metadata? That is one of the limitations. Properly storing per-parameter metadata such as `dtype`, `requires_grad`, etc. is not possible without some ugly hacks.
### FSDP2
This is why `FSDP2` was introduced. It doesn't use `FlatParameter`; instead it uses `DTensor`, short for "Distributed Tensor". Each `DTensor` basically represents a vanilla `torch.Tensor` that has been sharded across ranks. It contains metadata about the original `torch.Tensor`, how it's sharded, what the [placement type](https://pytorch.org/docs/stable/distributed.tensor.html#module-torch.distributed.tensor.placement_types) is, and so on. This is why it's called `per-parameter sharding`. The following figure shows the difference:
<div align="center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/accelerate/fsdp2.png" alt="FSDP2">
</div>
Each Parameter of the original `Layer` is sharded across the 0th dimension, and split between 2 GPUs. Now, each `Linear` layer is a separate `DTensor` and storing metadata per-parameter is possible and straightforward.
> [!TIP]
> In the image above, the tensors were sharded across the 1st dimension only so the image fits on the screen; in reality, they are sharded across the 0th dimension, as stated above.
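To make per-parameter sharding a bit more concrete, below is a minimal sketch (not part of the original guide) using PyTorch's public `torch.distributed.tensor` API. It assumes a recent PyTorch (2.4+) and that the script is launched with `torchrun --nproc-per-node 2`, so a 2-rank process group already exists.
```python
# Minimal DTensor illustration: shard a "Linear"-sized weight across 2 ranks on dim 0.
import torch
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor import Shard, distribute_tensor

mesh = init_device_mesh("cuda", (2,))                   # 1D mesh over 2 GPUs
weight = torch.randn(1024, 1024)                        # full weight, materialized on each rank here
dweight = distribute_tensor(weight, mesh, [Shard(0)])   # DTensor sharded across the 0th dimension

print(dweight.placements)        # (Shard(dim=0),)
print(dweight.to_local().shape)  # torch.Size([512, 1024]): each rank keeps half the rows
```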
## What does FSDP2 offer?
`FSDP2` is a new and improved version of PyTorch's fully-sharded data parallel training API. Its main advantage is using `DTensor` to represent sharded parameters. Compared to `FSDP1`, it offers:
- Simpler internal implementation, where each `Parameter` is a separate `DTensor`
- Enables simple partial parameter freezing because of the above, which makes methods such as [`LoRA`](https://arxiv.org/abs/2106.09685) work out of the box
- With `DTensor`, `FSDP2` supports mixing `fp8` and other parameter types in the same model out of the box
- Faster and simpler checkpointing without extra communication across ranks using `SHARDED_STATE_DICT` and [`torch.distributed.checkpoint`](https://pytorch.org/docs/stable/distributed.checkpoint.html); this way, each rank only saves its own shard and the corresponding metadata
- For loading, it uses a `state_dict` of the sharded model to directly load the sharded parameters
- Support for asynchronous checkpointing, where parameters are first copied to CPU memory, after which the main thread continues training while another thread stores the parameters on disk
- Memory efficiency and deterministic memory usage: `FSDP2` no longer uses `recordStream` and instead uses stream-to-stream synchronization (for more technical details see [this forum post](https://dev-discuss.pytorch.org/t/fsdp-cudacachingallocator-an-outsider-newb-perspective/1486) and [this issue](https://github.com/pytorch/pytorch/issues/114299))
- In the future, optimizations of the communication patterns via `torch.compile` are planned, further improving the performance and memory efficiency
## API Differences
We have already discussed the internal differences; now let's discuss the differences that you, as a user, will need to know.
Here are the main changes in configuration options when using `FSDP2` through the `accelerate` CLI:
Previous (`FSDP1`) | New (`FSDP2`) | What Changed
-- | -- | --
`--fsdp_sharding_strategy` | `--fsdp_reshard_after_forward` | replaces `--fsdp_sharding_strategy`, changed to `true` (previously `FULL_SHARD`) or `false` (previously `SHARD_GRAD_OP`)
`--fsdp_backward_prefetch` | \*\***REMOVED**\*\* | `FSDP2` uses the previous `BACKWARD_PRE` option by default, as only this allows communication and computation overlap
`--fsdp_forward_prefetch` | \*\***NOT YET IMPLEMENTED**\*\* | How to implement this is under active discussion; for now it is not supported in `FSDP2`
`--fsdp_sync_module_states` | \*\***REMOVED**\*\* | with `FSDP2`, this parameter becomes redundant
`--fsdp_cpu_ram_efficient_loading` | `--fsdp_cpu_ram_efficient_loading` | if `true`, `FSDP2` will similarly load the model only on rank 0 and then sync the parameters to the other ranks. This is the same behavior as `FSDP1`; however, setting `--fsdp_sync_module_states` isn't required anymore
`--fsdp_state_dict_type` | `--fsdp_state_dict_type` | `LOCAL_STATE_DICT` becomes obsolete, and with `FSDP2` `SHARDED_STATE_DICT` is the default option, which results in no extra communication and each rank saving its own shard. The other possible option is `FULL_STATE_DICT`, which results in extra communication and a spike in memory usage but saves the full model from rank 0
`--fsdp_use_orig_params` | \*\***REMOVED**\*\* | `FSDP2` uses a `DTensor` class under the hood, which means it *always* uses the original parameters by default
\*\***NEW**\*\* | `--fsdp_version` | `1` is the default option, to not break existing code, set to `2` to use `FSDP2`
For all other options that remain unchanged, see the [`FSDP` documentation](../usage_guides/fsdp.md).
## How to Switch to FSDP2
### If using Python code:
Simply set `fsdp_version=2` when creating your plugin and replace options according to the table above.
```python
from accelerate import FullyShardedDataParallelPlugin, Accelerator
fsdp_plugin = FullyShardedDataParallelPlugin(
    fsdp_version=2,
    # other options...
)
accelerator = Accelerator(fsdp_plugin=fsdp_plugin)
```
### If using YAML config:
Use our conversion tool:
```bash
accelerate to-fsdp2 --config_file config.yaml --output_file new_config.yaml
```
This will automatically convert all FSDP1 settings to their FSDP2 equivalents. Use `--overwrite` to update the existing file instead of creating a new one.

View File

@ -0,0 +1,192 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# FSDP vs DeepSpeed
Accelerate offers flexibility in training frameworks by integrating two extremely powerful tools for distributed training, namely [Pytorch FSDP](../usage_guides/fsdp) and [Microsoft DeepSpeed](../usage_guides/deepspeed). The aim of this tutorial is to draw parallels, as well as to outline potential differences, to empower the user to switch seamlessly between these two frameworks.
<Tip>
To switch between the frameworks, we recommend launching your code with `accelerate launch` and passing in the correct config file with `--config_file`, or passing in the respective arguments directly for [FSDP and DeepSpeed](../package_reference/cli#accelerate-launch).
Example Accelerate configurations can be found here for [DeepSpeed](../usage_guides/deepspeed#accelerate-deepspeed-plugin) and [FSDP](../usage_guides/fsdp#how-it-works-out-of-the-box), or in the [example zoo under "Launch Configurations"](../usage_guides/explore)
</Tip>
<Tip warning={true}>
This tutorial is for single-node, multi-GPU scenarios only.
</Tip>
## Configuring Functionalities
Model tensors are split into different GPUs in an attempt to scale up model sizes; this is termed *sharding* in FSDP, and *partitioning* in DeepSpeed. FSDP sharding and DeepSpeed ZeRO (partitioning) stages are configured by `--fsdp_sharding_strategy`, and `--zero_stage`, respectively. In particular, FSDP `FULL_SHARD` maps to DeepSpeed ZeRO stage `3`; see this [comprehensive mapping between FSDP sharding and DeepSpeed ZeRO settings](../usage_guides/fsdp#mapping-between-fsdp-sharding-strategies-and-deepspeed-zero-stages). The below table summarizes and groups similar settings:
Group | Framework | Configuration | Example | Restrictions (if any)
--|--|--|--|--
sharding / partitioning | FSDP<br>DeepSpeed | `--fsdp_sharding_strategy`<br>`--zero_stage` | `1` (`FULL_SHARD`) <br>`3` |
offload | FSDP<br>DeepSpeed | `--fsdp_offload_params`<br>`--offload_param_device`<br>`--offload_optimizer_device` | `true`<br>`cpu`<br>`cpu` | all or nothing <br><br>
model loading | FSDP<br>DeepSpeed | <span style="white-space:nowrap;">`--fsdp_cpu_ram_efficient_loading`</span><br>`--zero3_init_flag` | `true`<br>`true` | <br>only ZeRO 3
efficient checkpointing | FSDP<br>DeepSpeed | `--fsdp_state_dict_type`<br>`--zero3_save_16bit_model` | `SHARDED_STATE_DICT`<br>`true` | <br>only ZeRO 3
weights prefetching | FSDP<br><br>DeepSpeed | `--fsdp_forward_prefetch`<br>`--fsdp_backward_prefetch`<br>None | `true`<br>`BACKWARD_PRE` | <br><br>
model | FSDP<br><br>DeepSpeed | `--fsdp_auto_wrap_policy`<br><span style="white-space:nowrap;">`--fsdp_transformer_layer_cls_to_wrap`</span><br>None | `TRANSFORMER_BASED_WRAP`<br><Layer Class> |<br>Usually not needed <br>Transparent to user.
parameters summoning | FSDP<br>DeepSpeed | `--fsdp_use_orig_params`<br>None | `true` | required for `torch.compile`<br>Transparent to user
parameters syncing | FSDP<br>DeepSpeed | `--fsdp_sync_module_states`<br>None | `true` |
training | FSDP<br>DeepSpeed | None<br>`--gradient_accumulation_steps`<br>`--gradient_clipping` | <br>`auto`<br>`auto` | Transparent to user
For detailed descriptions of the above, refer to [`Accelerate` launch documentation](../package_reference/cli#accelerate-launch).
<Tip>
To access other DeepSpeed configurations, such as mixed precision settings,
you need to pass in a `--deepspeed_config_file`, see the [documentation](../usage_guides/deepspeed#deepspeed-config-file).
DeepSpeed can also be configured via [`DeepSpeedPlugin`], e.g., `DeepSpeedPlugin.zero_stage` is the equivalent of `--zero_stage`, and `DeepSpeedPlugin.hf_ds_config` can be used to pass `--deepspeed_config_file`.
</Tip>
<Tip>
FSDP can also be configured via [`FullyShardedDataParallelPlugin`], e.g., `FullyShardedDataParallelPlugin.sharding_strategy` is the equivalent of `--fsdp_sharding_strategy`.
</Tip>
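As a rough sketch of the plugin-based configuration mentioned in the two tips above (the field values are illustrative placeholders, not recommendations, and only one plugin should be used per script):
```python
from accelerate import Accelerator, DeepSpeedPlugin, FullyShardedDataParallelPlugin

# Option A: DeepSpeed ZeRO stage 3 (roughly the counterpart of FSDP FULL_SHARD)
deepspeed_plugin = DeepSpeedPlugin(zero_stage=3, gradient_accumulation_steps=1)
accelerator = Accelerator(deepspeed_plugin=deepspeed_plugin)

# Option B: FSDP with full sharding (use instead of Option A, not alongside it)
# fsdp_plugin = FullyShardedDataParallelPlugin(sharding_strategy="FULL_SHARD")
# accelerator = Accelerator(fsdp_plugin=fsdp_plugin)
```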
### Checkpointing
Do note that FSDP can be configured via `--fsdp_state_dict_type` to save either full or sharded checkpoints.
<Tip>
For DeepSpeed Zero3, one could pass a `--zero3_save_16bit_model true`, which conveniently consolidates the model to a single rank and saves; this is the FSDP equivalent of `fsdp_state_dict_type: FULL_STATE_DICT`.
</Tip>
<Tip warning={true}>
For large models, consolidating the model to a single rank can be very slow.
</Tip>
<Tip>
For quicker checkpointing, for FSDP use `fsdp_state_dict_type: SHARDED_STATE_DICT`, and for DeepSpeed Zero3 [use the `zero_to_fp32.py` script to post-convert sharded checkpoints](https://www.deepspeed.ai/tutorials/zero/#extracting-weights).
</Tip>
### Offloading
FSDP only allows *all-or-nothing* offload (i.e., either offload parameters, gradients, and optimizer, or keep them all in GPU), but DeepSpeed can offload parameters and optimizer differently. Furthermore, DeepSpeed also supports [offloading to NVME](https://www.deepspeed.ai/docs/config-json/#parameter-offloading).
### Prefetching
FSDP allows two prefetching configurations `--fsdp_forward_prefetch` and `--fsdp_backward_prefetch` to improve overlap of comms / computation at a cost of extra memory, see [FSDP documentation](https://pytorch.org/docs/stable/fsdp.html).
For DeepSpeed, the prefetching will be turned on when needed, and it turns on depending on certain hyper-params like `stage3_param_persistence_threshold`, `stage3_max_reuse_distance`, etc, [that can be configured for Zero3](https://www.deepspeed.ai/docs/config-json/#parameter-offloading); `accelerate` may set these hyper-params automatically if you don't set those explicitly in the deepspeed config file.
<Tip>
For FSDP set `fsdp_backward_prefetch: BACKWARD_PRE` for improved throughputs if memory allows.
</Tip>
### Model Loading
While FSDP requires an explicit `--fsdp_cpu_ram_efficient_loading true` to activate efficient model loading, `transformers` will activate a similar feature whenever DeepSpeed Zero3 is used.
<Tip>
For FSDP, whenever setting `--fsdp_cpu_ram_efficient_loading true`, `accelerate` will automatically set `sync_module_states` to true.
With RAM-efficient loading, the weights are loaded only on a single rank, and `sync_module_states` is therefore required to broadcast the weights to the other ranks.
</Tip>
### Model
FSDP requires an explicit `--fsdp_auto_wrap_policy` for the algorithm to decide how to schedule the all-gather and reduce-scatter operations. But for DeepSpeed this is transparent to the user.
<Tip>
For FSDP, simply set `fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP`. With the latest [`transformers`] versions, we try our best to figure out the suitable `fsdp_transformer_layer_cls_to_wrap` for HF transformers models. However, if you get an error regarding it, please specify this.
</Tip>
### Parameters Summoning
FSDP requires an explicit `--fsdp_use_orig_params` flag if using `torch.compile`, see [the pytorch documentation](https://pytorch.org/docs/stable/fsdp.html#module-torch.distributed.fsdp). For DeepSpeed this is transparent to the user.
<Tip>
For FSDP, when using `torch.compile` please set `fsdp_use_orig_params: True`.
</Tip>
## Training
Deepspeed requires explicit `--gradient_accumulation_steps` and `--gradient_clipping` flags. For FSDP this is transparent to the user.
<Tip>
When using DeepSpeed, set `gradient_accumulation_steps: "auto"` and `gradient_clipping: "auto"` to automatically pick up values set in the [`Accelerator`] or [`TrainingArguments`] (if using `transformers`).
</Tip>
## On Differences in Data Precision Handling
To discuss how data precision is handled in both FSDP and DeepSpeed, it is instructive to first give an overview of how model parameters are handled in these frameworks. Before the model / optimizer parameters are distributed across GPUs, parameter preparation is involved to first "flatten" them to one-dimensional [`torch.Tensor`](https://pytorch.org/docs/stable/tensors.html#torch-tensor)s. The implementations of FSDP and DeepSpeed differ with respect to the `dtype` in which these "flattened" parameters are stored, and there are ramifications with regard to how [`torch.Optimizer`](https://pytorch.org/docs/stable/optim.html#module-torch.optim) allocates its `dtype`s. The table below outlines the processes for both frameworks; the "Local" column indicates the process occurring at a per-GPU level, therefore any memory overhead from upcasting should be understood to be amortized by the number of GPUs used.
<Tip>
As a rule of thumb, for stable training with automatic mixed precision, all the trainable parameters have to be in `torch.float32`.
</Tip>
Process | Local | Framework | Details
--|--|--|--
Loading, i.e., [`AutoModel.from_pretrained(..., torch_dtype=torch_dtype)`] |
Preparation, i.e., creation of "flat params" | ✅ | FSDP<br>DeepSpeed | created in `torch_dtype`.<br> disregards `torch_dtype`, created in `float32`.
Optimizer initialization | ✅ | FSDP<br>DeepSpeed | creates parameters in `torch_dtype`<br> creates parameters in `float32`
Training Step, i.e, forward, backward, reduction | | FSDP<br>DeepSpeed | follows [`MixedPrecision`](https://pytorch.org/docs/stable/fsdp.html#torch.distributed.fsdp.MixedPrecision)<br> follows `deepspeed_config_file` mixed precision settings.
Optimizer (Pre-Step) | ✅ | FSDP<br>DeepSpeed | upcasting (if any) to `torch_dtype`<br>upcasted to `float32`
Optimizer (Actual Step) | ✅ | FSDP<br>DeepSpeed | occurs in `torch_dtype` <br> occurs in `float32`.
<Tip warning={true}>
Therefore, when using DeepSpeed with a small number of GPUs, be aware of potentially significant memory overheads due to the upcasting during preparation.
</Tip>
<Tip>
With FSDP, in the absence of mixed precision, it is possible to operate the [`torch.Optimizer`](https://pytorch.org/docs/stable/optim.html#module-torch.optim) in the low-precision `torch_dtype`, which may be helpful when using a small number of GPUs.
</Tip>
<Tip warning={true}>
With mixed precision, FSDP and DeepSpeed will upcast in the model preparation step (c.f. table above). But do note that FSDP will then save checkpoints in the upcasted precision; Deepspeed may still save low precision checkpoints if `--zero3_save_16bit_model` is specified.
</Tip>
To clarify the above table, consider the concrete examples below; the optimizer pre-step and actual step are combined for brevity. With FSDP it is possible to operate in the two modes shown below, but DeepSpeed can only operate in one.
Framework | Model Loading (`torch_dtype`) | Mixed Precision | Preparation (Local) | Training | Optimizer (Local)
--|--|--|--|--|--
FSDP | bf16 | default (none) | bf16 | bf16 | bf16
FSDP | bf16 | bf16 | fp32 | bf16 | fp32
DeepSpeed | bf16 | bf16 | fp32 | bf16 | fp32
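As a minimal, hedged sketch of how the two FSDP rows above are selected in practice (the checkpoint name is a placeholder, and an FSDP configuration via `accelerate launch` or a plugin is assumed to already be in place):
```python
import torch
from accelerate import Accelerator
from transformers import AutoModelForCausalLM

# Row 1: load in bf16, no mixed precision -> flat params, training, and optimizer all stay in bf16
accelerator = Accelerator()
# Row 2 (and the DeepSpeed row): load in bf16, enable bf16 mixed precision -> local params are
# upcast to fp32 during preparation, while compute runs in bf16
# accelerator = Accelerator(mixed_precision="bf16")

model = AutoModelForCausalLM.from_pretrained("your-model-name", torch_dtype=torch.bfloat16)
model = accelerator.prepare(model)
```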

View File

@ -0,0 +1,184 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Gradient synchronization
PyTorch's distributed module operates by communicating back and forth between all of the GPUs in your system.
This communication takes time, and ensuring all processes know the states of each other happens at particular trigger points
when using the `ddp` module.
These trigger points are added to the PyTorch model, specifically its `forward()` and `backward()` methods.
This happens when the model is wrapped with `DistributedDataParallel`:
```python
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel
model = nn.Linear(10, 10)
ddp_model = DistributedDataParallel(model)
```
In Accelerate this conversion happens automatically when calling [`~Accelerator.prepare`] and passing in your model.
```diff
+ from accelerate import Accelerator
+ accelerator = Accelerator()
import torch.nn as nn
- from torch.nn.parallel import DistributedDataParallel
model = nn.Linear(10,10)
+ model = accelerator.prepare(model)
```
## The slowdown in gradient accumulation
You now understand that PyTorch adds hooks to the `forward` and `backward` method of your PyTorch model when
training in a distributed setup. But how does this risk slowing down your code?
In DDP (distributed data parallel), processes are expected to perform certain operations in a specific order at specific points,
and these must also occur at roughly the same time before moving on.
The most direct example is when you update model parameters through
`optimizer.step()`.
Without gradient accumulation, all instances of the model need to have
their gradients computed, collated, and applied before moving on to the next
batch of data.
When performing gradient accumulation, you accumulate `n` loss gradients and
skip `optimizer.step()` until `n` batches have been reached. Since the training
processes only need to synchronize by the time `optimizer.step()` is called,
synchronizing gradients on every backward pass without any modification to your
training step adds needless inter-process communication that can cause a significant slowdown.
How can you avoid this overhead?
## Solving the slowdown problem
Since you are skipping model parameter updates when training on these batches, their gradients do not need to be synchronized until the point where `optimizer.step()` is actually called.
PyTorch cannot automagically tell when you need to do this, but they do provide a tool to help through the [`no_sync`](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel.no_sync) context manager
that is added to your model after converting it to DDP.
Under this context manager, PyTorch will skip synchronizing the gradients when
`.backward()` is called, and the first call to `.backward()` outside this
context manager will trigger the synchronization. See an example below:
```python
ddp_model, dataloader, optimizer = accelerator.prepare(model, dataloader, optimizer)

for index, batch in enumerate(dataloader):
    inputs, targets = batch
    # Trigger gradient synchronization on the last batch
    if index != (len(dataloader) - 1):
        with ddp_model.no_sync():
            # Gradients only accumulate
            outputs = ddp_model(inputs)
            loss = loss_func(outputs, targets)
            accelerator.backward(loss)
    else:
        # Gradients finally sync
        outputs = ddp_model(inputs)
        loss = loss_func(outputs, targets)
        accelerator.backward(loss)
    optimizer.step()
```
In Accelerate, to make this an API that can be called no matter the training device (though it may not do anything if you are not in a distributed system!),
`ddp_model.no_sync` gets replaced with [`~Accelerator.no_sync`], which operates the same way:
```diff
ddp_model, dataloader, optimizer = accelerator.prepare(model, dataloader, optimizer)

for index, batch in enumerate(dataloader):
    inputs, targets = batch
    # Trigger gradient synchronization on the last batch
    if index != (len(dataloader) - 1):
-       with ddp_model.no_sync():
+       with accelerator.no_sync(model):
            # Gradients only accumulate
            outputs = ddp_model(inputs)
            loss = loss_func(outputs, targets)
            accelerator.backward(loss)
    else:
        # Gradients finally sync
        outputs = ddp_model(inputs)
        loss = loss_func(outputs, targets)
        accelerator.backward(loss)
    optimizer.step()
    optimizer.zero_grad()
```
As you may expect, the [`~Accelerator.accumulate`] function wraps around this conditional check by keeping track of the current batch number, leaving you with the final
gradient accumulation API:
```python
ddp_model, dataloader, optimizer = accelerator.prepare(model, dataloader, optimizer)

for batch in dataloader:
    with accelerator.accumulate(model):
        optimizer.zero_grad()
        inputs, targets = batch
        outputs = model(inputs)
        loss = loss_function(outputs, targets)
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
```
As a result, when it comes to API choice, you should use either `accelerator.accumulate` or `accelerator.no_sync`.
## Just how much of a slowdown is there, and easy mistakes you can make
To set up a realistic example, consider the following setup:
* Two single-GPU T4 nodes and one node with two GPUs
* Each GPU is a T4 and is hosted on GCP
* The script used is a modification of the [NLP Example](https://github.com/muellerzr/timing_experiments/blob/main/baseline.py) script
* Batch size per GPU is 16, and gradients are accumulated every 4 steps
All scripts are available in [this repository](https://github.com/muellerzr/timing_experiments).
If you are not careful about gradient synchronization and GPU communication, a *large* amount of time can be wasted
on these GPUs communicating with each other during unnecessary periods.
By how much?
Reference:
- Baseline: uses no synchronization practices discussed here
- `no_sync` improperly: `no_sync` only around the `backward` call, not the `forward`
- `no_sync`: using the `no_sync` pattern properly
- `accumulate`: using [`~Accelerator.accumulate`] properly
Below are the average seconds per batch iterating over 29 batches of data for each setup on both a single node and on the dual-node setup:
| | Baseline | `no_sync` improperly | `no_sync` | `accumulate`|
| :---------: | :-------: | :------------------: | :-------: | :---------: |
| Multi-Node | 2±0.01s | 2.13±0.08s | **0.91±0.11s** | **0.91±0.11s** |
| Single Node | 0.50±0.01s | 0.50±0.01s | **0.41±0.015s** | **0.41±0.015s** |
As you can see, if you are not careful about how you set up your gradient synchronization, you can incur more than a 2x slowdown during training!
If you are worried about making sure everything is done properly, we highly recommend utilizing the [`~Accelerator.accumulate`] function and passing in
`gradient_accumulation_steps` or `gradient_accumulation_plugin` to the [`Accelerator`] object so Accelerate can handle this for you.
### `no_sync` requires additional GPU memory when using FSDP
Be aware that not syncing gradients can have adverse effects while performing FSDP training. As it has been warned in `torch`, the [`no_sync` context manager for FSDP](https://pytorch.org/docs/stable/fsdp.html#torch.distributed.fsdp.FullyShardedDataParallel.no_sync) will require additional memory.
Therefore, in memory-intensive situations while using FSDP, we recommend setting `sync_each_batch` to `True` in the [`~utils.GradientAccumulationPlugin`] to disable `no_sync`.
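A minimal sketch of that recommendation, assuming the `GradientAccumulationPlugin` exposed in `accelerate.utils`:
```python
from accelerate import Accelerator
from accelerate.utils import GradientAccumulationPlugin

# Accumulate over 16 batches, but sync gradients every batch so FSDP never enters `no_sync`
plugin = GradientAccumulationPlugin(num_steps=16, sync_each_batch=True)
accelerator = Accelerator(gradient_accumulation_plugin=plugin)
```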
See the example below where we fine-tune Mixtral (47B parameters) on 8 A100-80GB GPUs. We see that even for a modest `gradient_accumulation_steps=2` we quickly go out-of-memory (OOM) if `no_sync` is enabled. Again, this is due to the additional memory overhead of FSDP's `no_sync`. However, if `no_sync` is disabled via `sync_each_batch=True`, then the memory consumption for `gradient_accumulation_steps=16` reverts to that of `gradient_accumulation_steps=1`.
| Model | `no_sync` (accum=1) | `no_sync` (accum=2) | `no_sync` disabled (accum=16) |
| :-------------: | :-----------------: | :-----------------: | :-----------------: |
| mixtral 8x7B | 69G | OOM | 69G |
> [!WARNING]
> Disabling `no_sync` means there _will be a slowdown_ due to the extra data syncs, as explained in the earlier sections of this guide.

View File

@ -1,119 +0,0 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Gradient Synchronization
PyTorch's distributed module operates by communicating back and forth between all of the GPUs in your system.
This communication takes time, and ensuring all processes know the states of each other happens at particular triggerpoints
when using the `ddp` module.
These triggerpoints are added to the PyTorch model, specifically their `forward()` and `backward()` methods.
This happens when the model is wrapped with `DistributedDataParallel`:
```python
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel
model = nn.Linear(10, 10)
ddp_model = DistributedDataParallel(model)
```
In 🤗 Accelerate this conversion happens automatically when calling [`~Accelerator.prepare`] and passing in your model.
```diff
+ from accelerate import Accelerator
+ accelerator = Accelerator()
import torch.nn as nn
- from torch.nn.parallel import DistributedDataParallel
model = nn.Linear(10,10)
+ model = accelerator.prepare(model)
```
## The slowdown in gradient accumulation
You now understand that PyTorch adds hooks to the `forward` and `backward` method of your PyTorch model when
training in a distributed setup. But how does this risk slowing down your code?
In DDP (distributed data parallel), the specific order in which processes are performed and ran are expected
at specific points and these must also occur at roughly the same time before moving on.
The most direct example is when you update all of the parameters in a model through `.backward()`. All instances of the model
need to have updated their gradients, collated, and updated again before moving onto the next batch of data. But when performing
gradient accumulation, you accumulate `n` losses and skip `.backward()` until `n` batches have been reached. This
can cause a significant slowdown since all the processes need to communicate with them more times than needed. How
can you avoid this overhead?
## Solving the slowdown problem
Since you are skipping these batches, their gradients do not need to be synchronized until the point where `.backward()` is actually called.
PyTorch cannot automagically tell when you need to do this, but they do provide a tool to help through the [`no_sync`](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel.no_sync) context manager
that is added to your model after converting it to DDP.
Under this context manager, PyTorch will skip synchronizing the gradients when `.backward()` is called, and the first call to `.backward()` outside this
context manager will trigger the synchronization. See an example below:
```python
ddp_model, dataloader = accelerator.prepare(model, dataloader)
for index, batch in enumerate(dataloader):
inputs, targets = batch
# Trigger gradient synchronization on the last batch
if index != (len(dataloader) - 1):
with ddp_model.no_sync():
# Gradients only accumulate
outputs = ddp_model(inputs)
loss = loss_func(outputs)
accelerator.backward(loss)
else:
# Gradients finally sync
outputs = ddp_model(inputs)
loss = loss_func(outputs)
accelerator.backward(loss)
```
In 🤗 Accelerate to make this an API that can be called no matter the training device (though it may not do anything if you are not in a distributed system!),
`ddp_model.no_sync` gets replaced with [`~Accelerator.no_sync`] and operates the same way:
```diff
ddp_model, dataloader = accelerator.prepare(model, dataloader)
for index, batch in enumerate(dataloader):
inputs, targets = batch
# Trigger gradient synchronization on the last batch
if index != (len(dataloader)-1):
- with ddp_model.no_sync():
+ with accelerator.no_sync(model):
# Gradients only accumulate
outputs = ddp_model(inputs)
loss = loss_func(outputs, targets)
accelerator.backward(loss)
else:
# Gradients finally sync
outputs = ddp_model(inputs)
loss = loss_func(outputs)
accelerator.backward(loss)
```
As you may expect, the [`~Accelerator.accumulate`] function wraps around this conditional check by keeping track of the current batch number, leaving you with the final
gradient accumulation API:
```python
ddp_model, dataloader = accelerator.prepare(model, dataloader)
for batch in dataloader:
with accelerator.accumulate(model):
optimizer.zero_grad()
inputs, targets = batch
outputs = model(inputs)
loss = loss_function(outputs, targets)
accelerator.backward(loss)
```
As a result, you should either use *`accelerator.accumulate` or `accelerator.no_sync`* when it comes to API choice.

View File

@ -0,0 +1,74 @@
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Accelerate's internal mechanisms
Internally, Accelerate works by first analyzing the environment in which the script is launched to determine which
kind of distributed setup is used, how many different processes there are and which one the current script is in. All
that information is stored in the [`~AcceleratorState`].
This class is initialized the first time you instantiate an [`~Accelerator`] and performs any
specific initialization your distributed setup needs. Its state is then uniquely shared through all instances of
[`~state.AcceleratorState`]. (The same can also be done with the [`PartialState`], a more barebones version that it inherits from.)
Then, when calling [`~Accelerator.prepare`], the library:
- wraps your model(s) in the container adapted for the distributed setup,
- wraps your optimizer(s) in an [`~optimizer.AcceleratedOptimizer`],
- wraps your scheduler(s) in an [`~scheduler.AcceleratedScheduler`]
- creates a new version of your dataloader(s) in a [`~data_loader.DataLoaderShard`] or [`~data_loader.DataLoaderDispatcher`]
While the model(s), optimizer(s), and scheduler(s) are just put in simple wrappers, the dataloader(s) are re-created. This is mostly
because PyTorch does not let the user change the `batch_sampler` of a dataloader once it's been created and the
library handles the sharding of your data between processes by changing that `batch_sampler` to yield every other
`num_processes` batches (if enabled).
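As a quick illustration (a minimal sketch, not taken from this page), the wrapper classes listed above can be observed directly after calling `prepare`:
```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()

model = torch.nn.Linear(10, 10)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1)
dataloader = DataLoader(TensorDataset(torch.randn(64, 10), torch.randn(64, 10)), batch_size=8)

model, optimizer, scheduler, dataloader = accelerator.prepare(model, optimizer, scheduler, dataloader)

print(type(optimizer))   # accelerate.optimizer.AcceleratedOptimizer
print(type(scheduler))   # accelerate.scheduler.AcceleratedScheduler
print(type(dataloader))  # accelerate.data_loader.DataLoaderShard (or DataLoaderDispatcher)
```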
The [`~data_loader.DataLoaderShard`] subclasses `DataLoader` to add the following functionality:
- it synchronizes the appropriate random number generator of all processes at each new iteration, to ensure any
randomization (like shuffling) is done the exact same way across processes.
- it puts the batches on the proper device before yielding them (unless you have opted out of
`device_placement=True`).
The [`~data_loader.DataLoaderDispatcher`] subclass differs from the [`~data_loader.DataLoaderShard`] in that when iterating through the `DataLoader`, the data all starts from process 0 and is *then* split and sent off to each process, rather than the split happening at the dataset level.
The random number generator synchronization will by default synchronize:
- the `generator` attribute of a given sampler (like the PyTorch `RandomSampler`) for PyTorch >= 1.6
- the main random number generator in PyTorch <=1.5.1
You can choose which random number generator(s) to synchronize with the `rng_types` argument of the main
[`Accelerator`]. In PyTorch >= 1.6, it is recommended to rely on a local `generator` to avoid
setting the same seed in the main random number generator in all processes.
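For example, here is a minimal sketch of relying on a local `generator` and only synchronizing it (using the `rng_types` argument described above):
```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

# A local generator controls the shuffling instead of the global torch seed
generator = torch.Generator().manual_seed(42)
dataloader = DataLoader(
    TensorDataset(torch.randn(64, 10)), batch_size=8, shuffle=True, generator=generator
)

# Only synchronize the sampler's `generator` across processes
accelerator = Accelerator(rng_types=["generator"])
dataloader = accelerator.prepare(dataloader)
```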
<Tip warning={true}>
Synchronization of the main torch (or CUDA or XLA) random number generator will affect any other potential random
artifacts you could have in your dataset (like random data augmentation) in the sense that all processes will get
the same random numbers from the torch random modules (so will apply the same random data augmentation if it's
controlled by torch).
</Tip>
<Tip>
The randomization part of your custom sampler, batch sampler or iterable dataset should be done using a local
`torch.Generator` object (in PyTorch >= 1.6), see the traditional `RandomSampler`, as an example.
</Tip>
If you have [`torchdata>=0.8.0`](https://github.com/pytorch/data/tree/main) installed, and you have passed `use_stateful_dataloader=True` into your [`~utils.DataLoaderConfiguration`], these classes will directly inherit from `StatefulDataLoader` instead, and maintain a `state_dict`.
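A minimal sketch of opting in (assuming `torchdata>=0.8.0` is installed):
```python
from accelerate import Accelerator
from accelerate.utils import DataLoaderConfiguration

# Prepared dataloaders will then inherit from StatefulDataLoader and expose state_dict()/load_state_dict()
dataloader_config = DataLoaderConfiguration(use_stateful_dataloader=True)
accelerator = Accelerator(dataloader_config=dataloader_config)
```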
For more details about the internals, see the [Internals page](../package_reference/torch_wrappers).

View File

@ -0,0 +1,74 @@
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Low precision training methods
The release of new kinds of hardware led to the emergence of new training paradigms that better utilize them. Currently, this is in the form of training
in 8-bit precision using packages such as [TransformersEngine](https://github.com/NVIDIA/TransformerEngine) (TE) or [MS-AMP](https://github.com/Azure/MS-AMP/tree/main).
For an introduction to the topics discussed today, we recommend reviewing the [low-precision usage guide](../usage_guides/low_precision_training) as this documentation will reference it regularly.
## A Quick Chart
Below is a quick chart from the MS-AMP documentation showing the different bit-precisions for each solution during training:
Optimization Level | Computation(GEMM) | Comm | Weight | Master Weight | Weight Gradient | Optimizer States
-- | -- | -- | -- | -- | -- | --
FP16 AMP | FP16 | FP32 | FP32 | N/A | FP32 | FP32+FP32
Nvidia TE | FP8 | FP32 | FP32 | N/A | FP32 | FP32+FP32
MS-AMP O1 | FP8 | FP8 | FP16 | N/A | FP8 | FP32+FP32
MS-AMP O2 | FP8 | FP8 | FP16 | N/A | FP8 | FP8+FP16
MS-AMP O3 | FP8 | FP8 | FP8 | FP16 | FP8 | FP8+FP16
## `TransformersEngine`
`TransformersEngine` is the first solution for trying to train in 8-bit floating point. It works by using drop-in replacement layers for certain ones in a model that utilize its FP8 engine to reduce the number of bits (such as 32 to 8) without degrading the final accuracy of the model.
Specifically, Accelerate will find and replace the following layers with `TransformersEngine` versions:
* `nn.LayerNorm` for `te.LayerNorm`
* `nn.Linear` for `te.Linear`
As a result, we wind up with a model that has most of its layers in BF16, while some layers are in FP8, reducing some of the memory.
Anecdotally, we have noticed that performance gains don't really start showing when using `TransformerEngine` until a large majority of the layers
in the model are made up of those two replaceable layers. As a result, performance improvements have only been observed on larger models, with parameter counts of around a few billion and upwards.
The `TransformerEngine` can receive many different arguments that customize how it performs FP8 calculations and what they do. A full list of the arguments is available below:
* `margin`: The margin to use for the gradient scaling.
* `interval`: The interval to use for how often the scaling factor is recomputed.
* `fp8_format`: The format to use for the FP8 recipe. Must be one of `HYBRID` or `E4M3`. (Generally `HYBRID` for training, `E4M3` for evaluation)
* `amax_history_len`: The length of the history to use for the scaling factor computation
* `amax_compute_algo`: The algorithm to use for the scaling factor computation. Must be one of `max` or `most_recent`.
* `override_linear_precision`: Whether or not to execute `fprop`, `dgrad`, and `wgrad` GEMMS in higher precision.
You can customize each of these as part of [`utils.FP8RecipeKwargs`] to help optimize performance of your models.
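For instance, a minimal sketch of customizing the TE recipe through [`utils.FP8RecipeKwargs`] (the specific values here are illustrative, not tuned recommendations):
```python
from accelerate import Accelerator
from accelerate.utils import FP8RecipeKwargs

fp8_kwargs = FP8RecipeKwargs(
    backend="te",
    fp8_format="HYBRID",      # HYBRID for training, E4M3 for evaluation
    amax_history_len=32,
    amax_compute_algo="max",
)
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=[fp8_kwargs])
```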
As we can see in the chart mentioned earlier, TE simply casts the computation layers into FP8, while everything else is in FP32. As a result, this winds up utilizing the most memory, but does so with the benefit of guaranteeing the least amount of loss in end accuracy during training.
## `MS-AMP`
MS-AMP takes a different approach from `TransformersEngine` by providing three different optimization levels to convert more operations to FP8 or FP16 (a configuration sketch follows the list below).
* The base optimization level (`O1`) passes communications of the weights (such as in DDP) in FP8, stores the weights of the model in FP16, and leaves the optimizer states in FP32. The main benefit of this optimization level is that we can reduce the communication bandwidth by essentially half. Additionally, more GPU memory is saved due to 1/2 of everything being cast in FP8, and the weights being cast to FP16. Notably, both of the optimizer states remain in FP32.
* The second optimization level (`O2`) improves upon this by also reducing the precision of the optimizer states. One is in FP8 while the other is in FP16. Generally, it's been shown that this provides a net gain of no degraded end accuracy, increased training speed, and reduced memory, as now every state is either in FP16 or FP8.
* Finally, MS-AMP has a third optimization level (`O3`) which helps during DDP scenarios such as DeepSpeed. The weights of the model in memory are fully cast to FP8, and the master weights are now stored in FP16. This reduces memory by the highest factor, as now almost everything is in FP8 and only two states are left in FP16. Currently, only DeepSpeed versions up through 0.9.2 are supported, so this capability is not included in the Accelerate integration.
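The sketch referenced above, selecting an MS-AMP optimization level via the same [`utils.FP8RecipeKwargs`] handler (hedged: `opt_level` accepts `"O1"` or `"O2"`; `O3` is not part of the Accelerate integration, as noted):
```python
from accelerate import Accelerator
from accelerate.utils import FP8RecipeKwargs

fp8_kwargs = FP8RecipeKwargs(backend="msamp", opt_level="O2")  # or "O1"
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=[fp8_kwargs])
```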
## Combining the two
More experiments need to be performed but it's been noted that combining both MS-AMP and TransformersEngine can lead to the highest throughput by relying on NVIDIA's optimized FP8 operators and utilizing how MS-AMP reduces the memory overhead.

View File

@ -8,9 +8,12 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License. specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> -->
# Comparing performance between different device setups # Comparing performance across distributed setups
Evaluating and comparing the performance from different setups can be quite tricky if you don't know what to look for. Evaluating and comparing the performance from different setups can be quite tricky if you don't know what to look for.
For example, you cannot run the same script with the same batch size across TPU, multi-GPU, and single-GPU with Accelerate For example, you cannot run the same script with the same batch size across TPU, multi-GPU, and single-GPU with Accelerate
@ -18,7 +21,7 @@ and expect your results to line up.
But why? But why?
There's three reasons for this that this tutorial will cover: There are three reasons for this that this tutorial will cover:
1. **Setting the right seeds** 1. **Setting the right seeds**
2. **Observed Batch Sizes** 2. **Observed Batch Sizes**
@ -26,10 +29,10 @@ There's three reasons for this that this tutorial will cover:
## Setting the Seed ## Setting the Seed
While this issue has not come up as much, make sure to use [`utils.set_seed`] to fully set the seed in all distributed cases so training will be reproducable: While this issue has not come up as much, make sure to use [`utils.set_seed`] to fully set the seed in all distributed cases so training will be reproducible:
```python ```python
from accelerate import set_seed from accelerate.utils import set_seed
set_seed(42) set_seed(42)
``` ```
@ -40,13 +43,13 @@ Why is this important? Under the hood this will set **5** different seed setting
random.seed(seed) random.seed(seed)
np.random.seed(seed) np.random.seed(seed)
torch.manual_seed(seed) torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed) torch.cuda.manual_seed_all(seed) # or torch.xpu.manual_seed_all, etc
# ^^ safe to call this function even if cuda is not available # ^^ safe to call this function even if cuda is not available
if is_tpu_available(): if is_torch_xla_available():
xm.set_rng_state(seed) xm.set_rng_state(seed)
``` ```
The random state, numpy's state, torch, torch's cuda state, and if TPUs are available torch_xla's cuda state. The random state, numpy's state, torch, torch's device state, and if TPUs are available torch_xla's cuda state.
## Observed Batch Sizes ## Observed Batch Sizes
@ -58,7 +61,7 @@ The below table can be used as a quick reference to try out different batch size
<Tip> <Tip>
In this example there are two GPUs for "Multi-GPU" and a TPU pod with 8 workers In this example, there are two GPUs for "Multi-GPU" and a TPU pod with 8 workers
</Tip> </Tip>
@ -71,7 +74,7 @@ In this example there are two GPUs for "Multi-GPU" and a TPU pod with 8 workers
## Learning Rates ## Learning Rates
As noted in multiple sources[[1](https://aws.amazon.com/blogs/machine-learning/scalable-multi-node-deep-learning-training-using-gpus-in-the-aws-cloud/)][[2](https://docs.nvidia.com/clara/tlt-mi_archive/clara-train-sdk-v2.0/nvmidl/appendix/training_with_multiple_gpus.html)], the learning rate should be scaled *linearly* based on the number of devices present. The below As noted in multiple sources[[1](https://aws.amazon.com/blogs/machine-learning/scalable-multi-node-deep-learning-training-using-gpus-in-the-aws-cloud/)][[2](https://docs.nvidia.com/clara/clara-train-sdk/pt/model.html#classification-models-multi-gpu-training)], the learning rate should be scaled *linearly* based on the number of devices present. The below
snippet shows doing so with Accelerate: snippet shows doing so with Accelerate:
<Tip> <Tip>
@ -89,3 +92,12 @@ learning_rate *= accelerator.num_processes
optimizer = AdamW(params=model.parameters(), lr=learning_rate) optimizer = AdamW(params=model.parameters(), lr=learning_rate)
``` ```
You will also find that `accelerate` will step the learning rate based on the number of processes being trained on. This is because
of the observed batch size noted earlier. So in the case of 2 GPUs, the learning rate will be stepped twice as often as a single GPU
to account for the batch size being twice as large (if no changes to the batch size on the single GPU instance are made).
## Gradient Accumulation and Mixed Precision
When using gradient accumulation and mixed precision, due to how gradient averaging works (accumulation) and the precision loss (mixed precision),
some degradation in performance is expected. This will be explicitly seen when comparing the batch-wise loss between different compute
setups. However, the overall loss, metric, and general performance at the end of training should be _roughly_ the same.

View File

@ -8,11 +8,14 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License. specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> -->
# Training on TPUs with 🤗 Accelerate # Training on TPUs
Training on TPUs can be slightly different than training on multi-gpu, even with 🤗 Accelerate. This guide aims to show you Training on TPUs can be slightly different from training on multi-gpu, even with Accelerate. This guide aims to show you
where you should be careful and why, as well as the best practices in general. where you should be careful and why, as well as the best practices in general.
## Training in a Notebook ## Training in a Notebook
@ -24,8 +27,8 @@ While on a TPU that last part is not as important, a critical part to understand
When launching from the command-line, you perform **spawning**, where a python process is not currently running and you *spawn* a new process in. Since your Jupyter notebook is already When launching from the command-line, you perform **spawning**, where a python process is not currently running and you *spawn* a new process in. Since your Jupyter notebook is already
utilizing a python process, you need to *fork* a new process from it to launch your code. utilizing a python process, you need to *fork* a new process from it to launch your code.
Where this becomes important is in regards to declaring your model. On forked TPU processes, it is recommended that you instantiate your model *once* and pass this into your Where this becomes important is in regard to declaring your model. On forked TPU processes, it is recommended that you instantiate your model *once* and pass this into your
training function. This is different than training on GPUs where you create `n` models that have their gradients synced and back-propagated at certain moments. Instead one training function. This is different than training on GPUs where you create `n` models that have their gradients synced and back-propagated at certain moments. Instead, one
model instance is shared between all the nodes and it is passed back and forth. This is important especially when training on low-resource TPUs such as those provided in Kaggle kernels or model instance is shared between all the nodes and it is passed back and forth. This is important especially when training on low-resource TPUs such as those provided in Kaggle kernels or
on Google Colaboratory. on Google Colaboratory.
@ -33,7 +36,7 @@ Below is an example of a training function passed to the [`notebook_launcher`] i
<Tip> <Tip>
This code snippet is based off the one from the `simple_nlp_example` notebook found [here](https://github.com/huggingface/notebooks/blob/main/examples/accelerate/simple_nlp_example.ipynb) with slight This code snippet is based off the one from the `simple_nlp_example` notebook found [here](https://github.com/huggingface/notebooks/blob/main/examples/accelerate_examples/simple_nlp_example.ipynb) with slight
modifications for the sake of simplicity modifications for the sake of simplicity
</Tip> </Tip>
@ -78,7 +81,7 @@ notebook_launcher(training_function)
<Tip> <Tip>
The `notebook_launcher` will default to 8 processes if 🤗 Accelerate has been configured for a TPU The `notebook_launcher` will default to 8 processes if Accelerate has been configured for a TPU
</Tip> </Tip>
@ -125,16 +128,16 @@ And finally calling the training function with:
## Mixed Precision and Global Variables ## Mixed Precision and Global Variables
As mentioned in the [mixed precision tutorial](../usage_guides/mixed_precision), 🤗 Accelerate supports fp16 and bf16, both of which can be used on TPUs. As mentioned in the [mixed precision tutorial](../usage_guides/mixed_precision), Accelerate supports fp16 and bf16, both of which can be used on TPUs.
That being said, ideally `bf16` should be utilized as it is extremely efficient to use. That being said, ideally `bf16` should be utilized as it is extremely efficient to use.
There are two "layers" when using `bf16` and 🤗 Accelerate on TPUs, at the base level and at the operation level. There are two "layers" when using `bf16` and Accelerate on TPUs, at the base level and at the operation level.
At the base level, this is enabled when passing `mixed_precision="bf16"` to `Accelerator`, such as: At the base level, this is enabled when passing `mixed_precision="bf16"` to `Accelerator`, such as:
```python ```python
accelerator = Accelerator(mixed_precision="bf16") accelerator = Accelerator(mixed_precision="bf16")
``` ```
By default this will cast `torch.float` and `torch.double` to `bfloat16` on TPUs. By default, this will cast `torch.float` and `torch.double` to `bfloat16` on TPUs.
The specific configuration being set is an environmental variable of `XLA_USE_BF16` is set to `1`. The specific configuration being set is an environmental variable of `XLA_USE_BF16` is set to `1`.
There is a further configuration you can perform which is setting the `XLA_DOWNCAST_BF16` environmental variable. If set to `1`, then There is a further configuration you can perform which is setting the `XLA_DOWNCAST_BF16` environmental variable. If set to `1`, then
@ -161,4 +164,4 @@ new batch size after the first few iterations.
Just because the memory is allocated does not mean it will be used or that the batch size will increase when going back to your training dataloader. Just because the memory is allocated does not mean it will be used or that the batch size will increase when going back to your training dataloader.
</Tip> </Tip>

Binary file not shown (new image, 105 KiB)

View File

@ -8,11 +8,14 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License. specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> -->
# Accelerate # Accelerate
🤗 Accelerate is a library that enables the same PyTorch code to be run across any distributed configuration by adding just four lines of code! In short, training and inference at scale made simple, efficient and adaptable. Accelerate is a library that enables the same PyTorch code to be run across any distributed configuration by adding just four lines of code! In short, training and inference at scale made simple, efficient and adaptable.
```diff ```diff
+ from accelerate import Accelerator + from accelerate import Accelerator
@ -34,7 +37,7 @@ specific language governing permissions and limitations under the License.
scheduler.step() scheduler.step()
``` ```
Built on `torch_xla` and `torch.distributed`, 🤗 Accelerate takes care of the heavy lifting, so you don't have to write any custom code to adapt to these platforms. Built on `torch_xla` and `torch.distributed`, Accelerate takes care of the heavy lifting, so you don't have to write any custom code to adapt to these platforms.
Convert existing codebases to utilize [DeepSpeed](usage_guides/deepspeed), perform [fully sharded data parallelism](usage_guides/fsdp), and have automatic support for mixed-precision training! Convert existing codebases to utilize [DeepSpeed](usage_guides/deepspeed), perform [fully sharded data parallelism](usage_guides/fsdp), and have automatic support for mixed-precision training!
<Tip>
@ -51,21 +54,21 @@ accelerate launch {my_script.py}
<div class="mt-10">
<div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5">
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./basic_tutorials/overview"
><div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Tutorials</div>
<p class="text-gray-700">Learn the basics and become familiar with using Accelerate. Start here if you are using Accelerate for the first time!</p>
</a>
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./usage_guides/explore"
><div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">How-to guides</div>
<p class="text-gray-700">Practical guides to help you achieve a specific goal. Take a look at these guides to learn how to use Accelerate to solve real-world problems.</p>
</a>
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./concept_guides/gradient_synchronization"
><div class="w-full text-center bg-gradient-to-br from-pink-400 to-pink-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Conceptual guides</div>
<p class="text-gray-700">High-level explanations for building a better understanding of important topics such as avoiding subtle nuances and pitfalls in distributed training and DeepSpeed.</p>
</a>
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./package_reference/accelerator"
><div class="w-full text-center bg-gradient-to-br from-purple-400 to-purple-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Reference</div>
<p class="text-gray-700">Technical descriptions of how Accelerate classes and methods work.</p>
</a>
</div>
</div>


@ -8,17 +8,19 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
+ rendered properly in your Markdown viewer.
-->
- # Logging with Accelerate
- Accelerate has its own logging utility to handle logging while in a distributed system.
- To utilize this replace cases of `logging` with `accelerate.logging`:
- ```diff
- - import logging
- + from accelerate.logging import get_logger
- - logger = logging.getLogger(__name__)
- + logger = get_logger(__name__)
- ```
- [[autodoc]] logging.get_logger
+ # Accelerator
+ The [`Accelerator`] is the main class for enabling distributed training on any type of training setup. Read the [Add Accelerator to your code](../basic_tutorials/migration) tutorial to learn more about how to add the [`Accelerator`] to your script.
+ ## Accelerator[[api]]
+ [[autodoc]] Accelerator
+ ## Utilities
+ [[autodoc]] accelerate.utils.gather_object
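As a quick illustration of the `gather_object` utility referenced above, here is a minimal hedged sketch (the gathered strings are just placeholders); it is meant to be run with `accelerate launch` on more than one process, but it also works on a single process:

```python
from accelerate import Accelerator
from accelerate.utils import gather_object

accelerator = Accelerator()

# Each process contributes its own (picklable) Python objects...
local_results = [f"result from process {accelerator.process_index}"]

# ...and gather_object returns the combined list on every process.
all_results = gather_object(local_results)
accelerator.print(all_results)
```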


@ -1,163 +0,0 @@
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Accelerator
The [`Accelerator`] is the main class provided by 🤗 Accelerate.
It serves as the main entry point for the API.
## Quick adaptation of your code
To quickly adapt your script to work on any kind of setup with 🤗 Accelerate just:
1. Initialize an [`Accelerator`] object (that we will call `accelerator` throughout this page) as early as possible in your script.
2. Pass your dataloader(s), model(s), optimizer(s), and scheduler(s) to the [`~Accelerator.prepare`] method.
3. Remove all the `.cuda()` or `.to(device)` calls from your code and let the `accelerator` handle the device placement for you.
<Tip>
Step three is optional, but considered a best practice.
</Tip>
4. Replace `loss.backward()` in your code with `accelerator.backward(loss)`
5. Use [`~Accelerator.gather`] to gather your predictions and labels before storing them or using them for metric computation
<Tip warning={true}>
Step five is mandatory when using distributed evaluation
</Tip>
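Putting the five steps together, a minimal adaptation might look like the following sketch (the toy model, data, and loss function are illustrative placeholders, not part of the original example):

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()  # step 1

# Toy objects standing in for your real model, optimizer, and data
model = nn.Linear(10, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
dataloader = DataLoader(TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,))), batch_size=8)
loss_fn = nn.CrossEntropyLoss()

# step 2: let the accelerator wrap everything (device placement is handled for you, step 3)
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for inputs, labels in dataloader:
    optimizer.zero_grad()
    logits = model(inputs)
    loss = loss_fn(logits, labels)
    accelerator.backward(loss)  # step 4: replaces loss.backward()
    optimizer.step()

    # step 5: gather predictions and labels across processes before computing metrics
    all_predictions = accelerator.gather(logits.argmax(dim=-1))
    all_labels = accelerator.gather(labels)
```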
In most cases this is all that is needed. The next sections list a few more advanced use cases and nice features;
search your code for the corresponding patterns and replace them with the matching methods of your `accelerator`:
## Advanced recommendations
### Printing
`print` statements should be replaced by [`~Accelerator.print`] to be printed once per process
```diff
- print("My thing I want to print!")
+ accelerator.print("My thing I want to print!")
```
### Executing processes
#### Once on a single server
For statements that should be executed once per server, use [`~Accelerator.is_local_main_process`]:
```python
if accelerator.is_local_main_process:
    do_thing_once_per_server()
```
A function can be wrapped using the [`~Accelerator.on_local_main_process`] function to achieve the same
behavior on a function's execution:
```python
@accelerator.on_local_main_process
def do_my_thing():
    "Something done once per server"
    do_thing_once_per_server()
```
#### Only ever once across all servers
For statements that should only ever be executed once, use [`~Accelerator.is_main_process`]:
```python
if accelerator.is_main_process:
    do_thing_once()
```
A function can be wrapped using the [`~Accelerator.on_main_process`] function to achieve the same
behavior on a function's execution:
```python
@accelerator.on_main_process
def do_my_thing():
    "Something done once across all servers"
    do_thing_once()
```
#### On specific processes
If a function should be run on a specific overall or local process index, there are similar decorators
to achieve this:
```python
@accelerator.on_local_process(local_process_index=0)
def do_my_thing():
    "Something done on process index 0 on each server"
    do_thing_on_index_zero_on_each_server()
```
```python
@accelerator.on_process(process_index=0)
def do_my_thing():
    "Something done on process index 0"
    do_thing_on_index_zero()
```
### Synchronicity control
Use [`~Accelerator.wait_for_everyone`] to make sure all processes have reached that point before continuing (useful before saving a model, for instance).
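A minimal hedged sketch of the pattern (the print is just a placeholder for whatever should happen after the barrier):

```python
from accelerate import Accelerator

accelerator = Accelerator()

# ... each process finishes its own share of the work here ...

accelerator.wait_for_everyone()  # block until every process reaches this point
if accelerator.is_main_process:
    print("All processes are synchronized; it is now safe to save.")
```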
### Saving and loading
Use [`~Accelerator.unwrap_model`] before saving to remove all special model wrappers added during the distributed process.
```python
model = MyModel()
model = accelerator.prepare(model)
# Unwrap
model = accelerator.unwrap_model(model)
```
Use [`~Accelerator.save`] instead of `torch.save`:
```diff
state_dict = model.state_dict()
- torch.save(state_dict, "my_state.pkl")
+ accelerator.save(state_dict, "my_state.pkl")
```
### Operations
Use [`~Accelerator.clip_grad_norm_`] instead of `torch.nn.utils.clip_grad_norm_` and [`~Accelerator.clip_grad_value_`] instead of `torch.nn.utils.clip_grad_value_`.
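For example, inside the training loop, after `accelerator.backward(loss)` and before `optimizer.step()` (the `max_norm` and `clip_value` values are illustrative, and `accelerator` and `model` are assumed to be set up as above):

```python
# Instead of torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0):
accelerator.clip_grad_norm_(model.parameters(), max_norm=1.0)

# Instead of torch.nn.utils.clip_grad_value_(model.parameters(), clip_value=1.0):
accelerator.clip_grad_value_(model.parameters(), clip_value=1.0)
```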
### Gradient Accumulation
To perform gradient accumulation, use [`~Accelerator.accumulate`] and specify `gradient_accumulation_steps`.
When training on multiple devices, this also automatically ensures the gradients are synced or unsynced as appropriate,
checks whether the optimizer step should actually be performed, and auto-scales the loss:
```diff
- accelerator = Accelerator()
+ accelerator = Accelerator(gradient_accumulation_steps=2)
  for (input, label) in training_dataloader:
+     with accelerator.accumulate(model):
          predictions = model(input)
          loss = loss_function(predictions, label)
          accelerator.backward(loss)
          optimizer.step()
          scheduler.step()
          optimizer.zero_grad()
```
## Overall API documentation:
[[autodoc]] Accelerator


@ -0,0 +1,110 @@
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Working with large models
## Dispatch and offload
### init_empty_weights
[[autodoc]] big_modeling.init_empty_weights
### cpu_offload
[[autodoc]] big_modeling.cpu_offload
### cpu_offload_with_hook
[[autodoc]] big_modeling.cpu_offload_with_hook
### disk_offload
[[autodoc]] big_modeling.disk_offload
### dispatch_model
[[autodoc]] big_modeling.dispatch_model
### load_checkpoint_and_dispatch
[[autodoc]] big_modeling.load_checkpoint_and_dispatch
### load_checkpoint_in_model
[[autodoc]] big_modeling.load_checkpoint_in_model
### infer_auto_device_map
[[autodoc]] utils.infer_auto_device_map
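To give a sense of how these pieces fit together, here is a hedged sketch of loading a large checkpoint directly onto the available devices; the model class, config, and checkpoint path are hypothetical placeholders:

```python
from accelerate import init_empty_weights, load_checkpoint_and_dispatch

# Instantiate the architecture on the "meta" device, without allocating real weights
with init_empty_weights():
    model = MyLargeModel(my_config)  # hypothetical model class and config

# Load the checkpoint and spread the layers across the devices inferred from device_map
model = load_checkpoint_and_dispatch(
    model,
    checkpoint="path/to/checkpoint",  # hypothetical path to a weights file or sharded folder
    device_map="auto",
)
```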
## Hooks
### ModelHook
[[autodoc]] hooks.ModelHook
### AlignDevicesHook
[[autodoc]] hooks.AlignDevicesHook
### SequentialHook
[[autodoc]] hooks.SequentialHook
### LayerwiseCastingHook
[[autodoc]] hooks.LayerwiseCastingHook
## Adding Hooks
### add_hook_to_module
[[autodoc]] hooks.add_hook_to_module
### attach_execution_device_hook
[[autodoc]] hooks.attach_execution_device_hook
### attach_align_device_hook
[[autodoc]] hooks.attach_align_device_hook
### attach_align_device_hook_on_blocks
[[autodoc]] hooks.attach_align_device_hook_on_blocks
### attach_layerwise_casting_hooks
[[autodoc]] big_modeling.attach_layerwise_casting_hooks
## Removing Hooks
### remove_hook_from_module
[[autodoc]] hooks.remove_hook_from_module
### remove_hook_from_submodules
[[autodoc]] hooks.remove_hook_from_submodules
## Utilities
### has_offloaded_params
[[autodoc]] utils.has_offloaded_params
### align_module_device
[[autodoc]] utils.align_module_device


@ -1,41 +0,0 @@
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Working with large models
## Dispatching and Offloading Models
[[autodoc]] big_modeling.init_empty_weights
[[autodoc]] big_modeling.cpu_offload
[[autodoc]] big_modeling.disk_offload
[[autodoc]] big_modeling.dispatch_model
[[autodoc]] big_modeling.load_checkpoint_and_dispatch
## Model Hooks
### Hook Classes
[[autodoc]] hooks.ModelHook
[[autodoc]] hooks.AlignDevicesHook
[[autodoc]] hooks.SequentialHook
### Adding Hooks
[[autodoc]] hooks.add_hook_to_module
[[autodoc]] hooks.attach_execution_device_hook
[[autodoc]] hooks.attach_align_device_hook
[[autodoc]] hooks.attach_align_device_hook_on_blocks
### Removing Hooks
[[autodoc]] hooks.remove_hook_from_module
[[autodoc]] hooks.remove_hook_from_submodules


@ -0,0 +1,335 @@
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# The Command Line
Below is a list of all the available 🤗 Accelerate commands and their parameters
## accelerate config
**Command**:
`accelerate config` or `accelerate-config`
Launches a series of prompts to create and save a `default_config.yml` configuration file for your training system. Should
always be run first on your machine.
**Usage**:
```bash
accelerate config [arguments]
```
**Optional Arguments**:
* `--config_file CONFIG_FILE` (`str`) -- The path to use to store the config file. Will default to a file named default_config.yaml in the cache location, which is the content
of the environment `HF_HOME` suffixed with 'accelerate', or if you don't have such an environment variable, your cache directory
(`~/.cache` or the content of `XDG_CACHE_HOME`) suffixed with `huggingface`.
* `-h`, `--help` (`bool`) -- Show a help message and exit
## accelerate config default
**Command**:
`accelerate config default` or `accelerate-config default`
Create a default config file for Accelerate with only a few flags set.
**Usage**:
```bash
accelerate config default [arguments]
```
**Optional Arguments**:
* `--config_file CONFIG_FILE` (`str`) -- The path to use to store the config file. Will default to a file named default_config.yaml in the cache location, which is the content
of the environment `HF_HOME` suffixed with 'accelerate', or if you don't have such an environment variable, your cache directory
(`~/.cache` or the content of `XDG_CACHE_HOME`) suffixed with `huggingface`.
* `-h`, `--help` (`bool`) -- Show a help message and exit
* `--mixed_precision {no,fp16,bf16}` (`str`) -- Whether or not to use mixed precision training. Choose between FP16 and BF16 (bfloat16) training. BF16 training is only supported on Nvidia Ampere GPUs and PyTorch 1.10 or later.
## accelerate config update
**Command**:
`accelerate config update` or `accelerate-config update`
Update an existing config file with the latest defaults while maintaining the old configuration.
**Usage**:
```bash
accelerate config update [arguments]
```
**Optional Arguments**:
* `--config_file CONFIG_FILE` (`str`) -- The path to the config file to update. Will default to a file named default_config.yaml in the cache location, which is the content
of the environment `HF_HOME` suffixed with 'accelerate', or if you don't have such an environment variable, your cache directory
(`~/.cache` or the content of `XDG_CACHE_HOME`) suffixed with `huggingface`.
* `-h`, `--help` (`bool`) -- Show a help message and exit
## accelerate env
**Command**:
`accelerate env` or `accelerate-env` or `python -m accelerate.commands.env`
Lists the contents of the passed 🤗 Accelerate configuration file. Should always be used when opening an issue on the [GitHub repository](https://github.com/huggingface/accelerate).
**Usage**:
```bash
accelerate env [arguments]
```
**Optional Arguments**:
* `--config_file CONFIG_FILE` (`str`) -- The path to use to store the config file. Will default to a file named default_config.yaml in the cache location, which is the content
of the environment `HF_HOME` suffixed with 'accelerate', or if you don't have such an environment variable, your cache directory
(`~/.cache` or the content of `XDG_CACHE_HOME`) suffixed with `huggingface`.
* `-h`, `--help` (`bool`) -- Show a help message and exit
## accelerate launch
**Command**:
`accelerate launch` or `accelerate-launch` or `python -m accelerate.commands.launch`
Launches a specified script on a distributed system with the right parameters.
**Usage**:
```bash
accelerate launch [arguments] {training_script} --{training_script-argument-1} --{training_script-argument-2} ...
```
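For example, a hypothetical two-process launch of a script named `train.py` could look like this (the script name and its `--batch_size` flag are illustrative):

```bash
accelerate launch --num_processes 2 --mixed_precision bf16 train.py --batch_size 16
```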
**Positional Arguments**:
- `{training_script}` -- The full path to the script to be launched in parallel
- `--{training_script-argument-1}` -- Arguments of the training script
**Optional Arguments**:
* `-h`, `--help` (`bool`) -- Show a help message and exit
* `--config_file CONFIG_FILE` (`str`)-- The config file to use for the default values in the launching script.
* `-m`, `--module` (`bool`) -- Change each process to interpret the launch script as a Python module, executing with the same behavior as 'python -m'.
* `--no_python` (`bool`) -- Skip prepending the training script with 'python' - just execute it directly. Useful when the script is not a Python script.
* `--debug` (`bool`) -- Whether to print out the torch.distributed stack trace when something fails.
* `-q`, `--quiet` (`bool`) -- Silence subprocess errors from the launch stack trace to only show the relevant tracebacks. (Only applicable to DeepSpeed and single-process configurations).
The rest of these arguments are configured through `accelerate config` and are read in from the specified `--config_file` (or default configuration) for their
values. They can also be passed in manually.
**Hardware Selection Arguments**:
* `--cpu` (`bool`) -- Whether or not to force the training on the CPU.
* `--multi_gpu` (`bool`) -- Whether or not this should launch a distributed GPU training.
* `--tpu` (`bool`) -- Whether or not this should launch a TPU training.
* `--ipex` (`bool`) -- Whether or not this should launch an Intel Pytorch Extension (IPEX) training. **This argument is deprecated, will be removed in Accelerate v1.10**
**Resource Selection Arguments**:
The following arguments are useful for fine-tuning how available hardware should be used
* `--mixed_precision {no,fp16,bf16,fp8}` (`str`) -- Whether or not to use mixed precision training. Choose between FP16 and BF16 (bfloat16) training. BF16 training is only supported on Nvidia Ampere GPUs and PyTorch 1.10 or later.
* `--num_processes NUM_PROCESSES` (`int`) -- The total number of processes to be launched in parallel.
* `--num_machines NUM_MACHINES` (`int`) -- The total number of machines used in this training.
* `--num_cpu_threads_per_process NUM_CPU_THREADS_PER_PROCESS` (`int`) -- The number of CPU threads per process. Can be tuned for optimal performance.
* `--enable_cpu_affinity` (`bool`) -- Whether or not CPU affinity and balancing should be enabled. Currently only supported on NVIDIA hardware.
**Training Paradigm Arguments**:
The following arguments are useful for selecting which training paradigm to use.
* `--use_deepspeed` (`bool`) -- Whether or not to use DeepSpeed for training.
* `--use_fsdp` (`bool`) -- Whether or not to use FullyShardedDataParallel for training.
* `--use_megatron_lm` (`bool`) -- Whether or not to use Megatron-LM for training.
* `--use_xpu` (`bool`) -- Whether to use IPEX plugin to speed up training on XPU specifically. **This argument is deprecated and ignored, will be removed in Accelerate v1.10**
**Distributed GPU Arguments**:
The following arguments are only useful when `multi_gpu` is passed or multi-gpu training is configured through `accelerate config`:
* `--gpu_ids` (`str`) -- What GPUs (by id) should be used for training on this machine as a comma-separated list
* `--same_network` (`bool`) -- Whether all machines used for multinode training exist on the same local network.
* `--machine_rank` (`int`) -- The rank of the machine on which this script is launched.
* `--main_process_ip` (`str`) -- The IP address of the machine of rank 0.
* `--main_process_port` (`int`) -- The port to use to communicate with the machine of rank 0.
* `-t`, `--tee` (`str`) -- Tee std streams into a log file and also to console.
* `--log_dir` (`str`) -- Base directory to use for log files when using torchrun/torch.distributed.run as launcher. Use with --tee to redirect std streams info log files.
* `--role` (`str`) -- User-defined role for the workers.
* `--rdzv_backend` (`str`) -- The rendezvous method to use, such as 'static' (the default) or 'c10d'
* `--rdzv_conf` (`str`) -- Additional rendezvous configuration (<key1>=<value1>,<key2>=<value2>,...).
* `--max_restarts` (`int`) -- Maximum number of worker group restarts before failing.
* `--monitor_interval` (`int`) -- Interval, in seconds, to monitor the state of workers.
**TPU Arguments**:
The following arguments are only useful when `tpu` is passed or TPU training is configured through `accelerate config`:
* `--tpu_cluster` (`bool`) -- Whether to use a GCP TPU pod for training.
* `--tpu_use_sudo` (`bool`) -- Whether to use `sudo` when running the TPU training script in each pod.
* `--vm` (`str`) -- List of single Compute VM instance names. If not provided we assume usage of instance groups. For TPU pods.
* `--env` (`str`) -- List of environment variables to set on the Compute VM instances. For TPU pods.
* `--main_training_function` (`str`) -- The name of the main function to be executed in your script (only for TPU training).
* `--downcast_bf16` (`bool`) -- Whether when using bf16 precision on TPUs if both float and double tensors are cast to bfloat16 or if double tensors remain as float32.
**DeepSpeed Arguments**:
The following arguments are only useful when `use_deepspeed` is passed or `deepspeed` is configured through `accelerate config`:
* `--deepspeed_config_file` (`str`) -- DeepSpeed config file.
* `--zero_stage` (`int`) -- DeepSpeed's ZeRO optimization stage.
* `--offload_optimizer_device` (`str`) -- Decides where (none|cpu|nvme) to offload optimizer states.
* `--offload_param_device` (`str`) -- Decides where (none|cpu|nvme) to offload parameters.
* `--offload_optimizer_nvme_path` (`str`) -- Decides Nvme Path to offload optimizer states.
* `--gradient_accumulation_steps` (`int`) -- Number of gradient accumulation steps used in your training script.
* `--gradient_clipping` (`float`) -- Gradient clipping value used in your training script.
* `--zero3_init_flag` (`str`) -- Decides Whether (true|false) to enable `deepspeed.zero.Init` for constructing massive models. Only applicable with DeepSpeed ZeRO Stage-3.
* `--zero3_save_16bit_model` (`str`) -- Decides Whether (true|false) to save 16-bit model weights when using ZeRO Stage-3. Only applicable with DeepSpeed ZeRO Stage-3.
* `--deepspeed_hostfile` (`str`) -- DeepSpeed hostfile for configuring multi-node compute resources.
* `--deepspeed_exclusion_filter` (`str`) -- DeepSpeed exclusion filter string when using multi-node setup.
* `--deepspeed_inclusion_filter` (`str`) -- DeepSpeed inclusion filter string when using multi-node setup.
* `--deepspeed_multinode_launcher` (`str`) -- DeepSpeed multi-node launcher to use.
* `--deepspeed_moe_layer_cls_names` (`str`) -- Comma-separated list of transformer MoE layer class names (case-sensitive) to wrap, e.g., `MixtralSparseMoeBlock`, `Qwen2MoeSparseMoeBlock`, `JetMoEAttention,JetMoEBlock`
**Fully Sharded Data Parallelism Arguments**:
The following arguments are only useful when `use_fsdp` is passed or Fully Sharded Data Parallelism is configured through `accelerate config`:
* `--fsdp_offload_params` (`str`) -- Decides Whether (true|false) to offload parameters and gradients to CPU.
* `--fsdp_min_num_params` (`int`) -- FSDP's minimum number of parameters for Default Auto Wrapping.
* `--fsdp_sharding_strategy` (`int`) -- FSDP's Sharding Strategy.
* `--fsdp_auto_wrap_policy` (`str`) -- FSDP's auto wrap policy.
* `--fsdp_transformer_layer_cls_to_wrap` (`str`) -- Transformer layer class name (case-sensitive) to wrap, e.g., `BertLayer`, `GPTJBlock`, `T5Block` ...
* `--fsdp_backward_prefetch_policy` (`str`) -- FSDP's backward prefetch policy.
* `--fsdp_state_dict_type` (`str`) -- FSDP's state dict type.
* `--fsdp_forward_prefetch` (`str`) -- FSDP forward prefetch.
* `--fsdp_use_orig_params` (`str`) -- If True, allows non-uniform `requires_grad` mixed in a FSDP unit.
* `--fsdp_cpu_ram_efficient_loading` (`str`) -- If true, only the first process loads the pretrained model checkpoint while all other processes have empty weights. When using this, `--fsdp_sync_module_states` needs to be True.
* `--fsdp_sync_module_states` (`str`) -- If true, each individually wrapped FSDP unit will broadcast module parameters from rank 0.
* `--fsdp_activation_checkpointing` (`bool`) -- Decides Whether intermediate activations are freed during the forward pass, and a checkpoint is left as a placeholder
**Megatron-LM Arguments**:
The following arguments are only useful when `use_megatron_lm` is passed or Megatron-LM is configured through `accelerate config`:
* `--megatron_lm_tp_degree` (``) -- Megatron-LM's Tensor Parallelism (TP) degree.
* `--megatron_lm_pp_degree` (``) -- Megatron-LM's Pipeline Parallelism (PP) degree.
* `--megatron_lm_num_micro_batches` (``) -- Megatron-LM's number of micro batches when PP degree > 1.
* `--megatron_lm_sequence_parallelism` (``) -- Decides Whether (true|false) to enable Sequence Parallelism when TP degree > 1.
* `--megatron_lm_recompute_activations` (``) -- Decides Whether (true|false) to enable Selective Activation Recomputation.
* `--megatron_lm_use_distributed_optimizer` (``) -- Decides Whether (true|false) to use distributed optimizer which shards optimizer state and gradients across Data Parallel (DP) ranks.
* `--megatron_lm_gradient_clipping` (``) -- Megatron-LM's gradient clipping value based on global L2 Norm (0 to disable).
**FP8 Arguments**:
* `--fp8_backend` (`str`) -- Choose a backend to train with FP8 (`te` or `msamp`)
* `--fp8_use_autocast_during_eval` (`bool`) -- Whether to use FP8 autocast during eval mode (useful only when `--fp8_backend=te` is passed). Generally better metrics are found when this is not passed.
* `--fp8_margin` (`int`) -- The margin to use for the gradient scaling (useful only when `--fp8_backend=te` is passed).
* `--fp8_interval` (`int`) -- The interval to use for how often the scaling factor is recomputed (useful only when `--fp8_backend=te` is passed).
* `--fp8_format` (`str`) -- The format to use for the FP8 recipe (useful only when `--fp8_backend=te` is passed).
* `--fp8_amax_history_len` (`int`) -- The length of the history to use for the scaling factor computation (useful only when `--fp8_backend=te` is passed).
* `--fp8_amax_compute_algo` (`str`) -- The algorithm to use for the scaling factor computation. (useful only when `--fp8_backend=te` is passed).
* `--fp8_override_linear_precision` (`Tuple[bool, bool, bool]`) -- Whether or not to execute `fprop`, `dgrad`, and `wgrad` GEMMS in higher precision.
* `--fp8_opt_level` (`str`) -- What level of 8-bit collective communication should be used with MS-AMP (useful only when `--fp8_backend=msamp` is passed)
**AWS SageMaker Arguments**:
The following arguments are only useful when training in SageMaker
* `--aws_access_key_id AWS_ACCESS_KEY_ID` (`str`) -- The AWS_ACCESS_KEY_ID used to launch the Amazon SageMaker training job
* `--aws_secret_access_key AWS_SECRET_ACCESS_KEY` (`str`) -- The AWS_SECRET_ACCESS_KEY used to launch the Amazon SageMaker training job
## accelerate estimate-memory
**Command**:
`accelerate estimate-memory` or `accelerate-estimate-memory` or `python -m accelerate.commands.estimate`
Estimates the total vRAM needed to load a particular model hosted on the Hub, along with an estimate for training. Requires that `huggingface_hub` be installed.
<Tip>
When performing inference, typically add ≤20% to the result as overall allocation [as referenced here](https://blog.eleuther.ai/transformer-math/). We will have more extensive estimations in the future that will automatically be included in the calculation.
</Tip>
**Usage**:
```bash
accelerate estimate-memory {MODEL_NAME} --library_name {LIBRARY_NAME} --dtypes {dtype_1} {dtype_2} ...
```
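For instance, to get a rough estimate for a Hub model in both full and half precision (the model name is just an example):

```bash
accelerate estimate-memory bert-base-cased --library_name transformers --dtypes float32 float16
```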
**Required Arguments**:
* `MODEL_NAME` (`str`)-- The model name on the Hugging Face Hub
**Optional Arguments**:
* `--library_name {timm,transformers}` (`str`) -- The library the model has an integration with, such as `transformers`, needed only if this information is not stored on the Hub
* `--dtypes {float32,float16,int8,int4}` (`[{float32,float16,int8,int4} ...]`) -- The dtypes to use for the model, must be one (or many) of `float32`, `float16`, `int8`, and `int4`
* `--trust_remote_code` (`bool`) -- Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be passed for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
## accelerate tpu-config
`accelerate tpu-config`
**Usage**:
```bash
accelerate tpu-config [arguments]
```
**Optional Arguments**:
* `-h`, `--help` (`bool`) -- Show a help message and exit
**Config Arguments**:
Arguments that can be configured through `accelerate config`.
* `--config_file` (`str`) -- Path to the config file to use for accelerate.
* `--tpu_name` (`str`) -- The name of the TPU to use. If not specified, will use the TPU specified in the config file.
* `--tpu_zone` (`str`) -- The zone of the TPU to use. If not specified, will use the zone specified in the config file.
**TPU Arguments**:
Arguments for options run inside the TPU.
* `--command_file` (`str`) -- The path to the file containing the commands to run on the pod on startup.
* `--command` (`str`) -- A command to run on the pod. Can be passed multiple times.
* `--install_accelerate` (`bool`) -- Whether to install accelerate on the pod. Defaults to False.
* `--accelerate_version` (`str`) -- The version of accelerate to install on the pod. If not specified, will use the latest pypi version. Specify 'dev' to install from GitHub.
* `--debug` (`bool`) -- If set, will print the command that would be run instead of running it.
## accelerate test
`accelerate test` or `accelerate-test`
Runs `accelerate/test_utils/test_script.py` to verify that 🤗 Accelerate has been properly configured on your system and runs.
**Usage**:
```bash
accelerate test [arguments]
```
**Optional Arguments**:
* `--config_file CONFIG_FILE` (`str`) -- The path to use to store the config file. Will default to a file named default_config.yaml in the cache location, which is the content
of the environment `HF_HOME` suffixed with 'accelerate', or if you don't have such an environment variable, your cache directory
(`~/.cache` or the content of `XDG_CACHE_HOME`) suffixed with `huggingface`.
* `-h`, `--help` (`bool`) -- Show a help message and exit


@ -1,153 +0,0 @@
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# The Command Line
Below is a list of all the available 🤗 Accelerate commands and their parameters
## accelerate config
**Command**:
`accelerate config` or `accelerate-config`
Launches a series of prompts to create and save a `default_config.yml` configuration file for your training system. Should
always be run first on your machine.
**Usage**:
```bash
accelerate config [arguments]
```
**Optional Arguments**:
* `--config_file CONFIG_FILE` (`str`) -- The path to use to store the config file. Will default to a file named default_config.yaml in the cache location, which is the content
of the environment `HF_HOME` suffixed with 'accelerate', or if you don't have such an environment variable, your cache directory
(`~/.cache` or the content of `XDG_CACHE_HOME`) suffixed with `huggingface`.
* `-h`, `--help` (`bool`) -- Show a help message and exit
## accelerate env
**Command**:
`accelerate env` or `accelerate-env`
Lists the contents of the passed 🤗 Accelerate configuration file. Should always be used when opening an issue on the [GitHub repository](https://github.com/huggingface/accelerate).
**Usage**:
```bash
accelerate env [arguments]
```
**Optional Arguments**:
* `--config_file CONFIG_FILE` (`str`) -- The path to use to store the config file. Will default to a file named default_config.yaml in the cache location, which is the content
of the environment `HF_HOME` suffixed with 'accelerate', or if you don't have such an environment variable, your cache directory
(`~/.cache` or the content of `XDG_CACHE_HOME`) suffixed with `huggingface`.
* `-h`, `--help` (`bool`) -- Show a help message and exit
## accelerate launch
**Command**:
`accelerate launch` or `accelerate-launch`
Launches a specified script on a distributed system with the right parameters.
**Usage**:
```bash
accelerate launch [arguments] {training_script} --{training_script-argument-1} --{training_script-argument-2} ...
```
**Positional Arguments**:
- `{training_script}` -- The full path to the script to be launched in parallel
- `--{training_script-argument-1}` -- Arguments of the training script
**Optional Arguments**:
* `-h`, `--help` (`bool`) -- Show a help message and exit
* `--config_file CONFIG_FILE` (`str`)-- The config file to use for the default values in the launching script.
* `--cpu` (`bool`) -- Whether or not to force the training on the CPU.
* `--mixed_precision {no,fp16,bf16}` (`str`) -- Whether or not to use mixed precision training. Choose between FP16 and BF16 (bfloat16) training. BF16 training is only supported on
Nvidia Ampere GPUs and PyTorch 1.10 or later.
* `--multi_gpu` (`bool`, defaults to `False`) -- Whether or not this should launch a distributed GPU training.
* `-m`, `--module` (`bool`) -- Change each process to interpret the launch script as a Python module, executing with the same behavior as 'python -m'.
* `--no_python` (`bool`) -- Skip prepending the training script with 'python' - just execute it directly. Useful when the script is not a Python script.
The rest of these arguments are configured through `accelerate config` and are read in from the specified `--config_file` (or default configuration) for their
values. They can also be passed in manually.
**Machine Configuration Arguments**:
The following arguments are useful for customization of worker machines
* `--machine_rank MACHINE_RANK` (`int`) -- The rank of the machine on which this script is launched.
* `--num_machines NUM_MACHINES` (`int`) -- The total number of machines used in this training.
* `--num_processes NUM_PROCESSES` (`int`) -- The total number of processes to be launched in parallel.
* `--gpu_ids` (`str`) -- What GPUs (by id) should be used for training on this machine as a comma-separated list
* `--main_process_ip MAIN_PROCESS_IP` (`str`) -- The IP address of the machine of rank 0.
* `--main_process_port MAIN_PROCESS_PORT` (`int`) -- The port to use to communicate with the machine of rank 0.
* `--num_cpu_threads_per_process NUM_CPU_THREADS_PER_PROCESS` (`int`) -- The number of CPU threads per process. Can be tuned for optimal performance.
**DeepSpeed Arguments**:
The following arguments are only useful when `use_deepspeed` is passed:
* `--use_deepspeed` (`bool`) -- Whether to use deepspeed.
* `--deepspeed_config_file DEEPSPEED_CONFIG_FILE` (`str`) -- DeepSpeed config file.
* `--zero_stage ZERO_STAGE` (`str`) -- DeepSpeed's ZeRO optimization stage
* `--offload_optimizer_device OFFLOAD_OPTIMIZER_DEVICE` (`str`) -- Decides where (none|cpu|nvme) to offload optimizer states
* `--offload_param_device OFFLOAD_PARAM_DEVICE` (`str`) -- Decides where (none|cpu|nvme) to offload parameters
* `--gradient_accumulation_steps GRADIENT_ACCUMULATION_STEPS` (`int`) -- Number of gradient_accumulation_steps used in your training script
* `--gradient_clipping GRADIENT_CLIPPING` (`float`) -- gradient clipping value used in your training script
The following arguments are related to using ZeRO Stage-3
* `--zero3_init_flag ZERO3_INIT_FLAG` (`bool`) -- Decides Whether (true|false) to enable `deepspeed.zero.Init` for constructing massive models
* `--zero3_save_16bit_model ZERO3_SAVE_16BIT_MODEL` (`bool`) -- Decides Whether (true|false) to save 16-bit model weights when using ZeRO Stage-3
**Fully Sharded Data Parallelism Arguments**:
The following arguments are only useful when `use_fsdp` is passed:
* `--use_fsdp` (`bool`) -- Whether to use fsdp.
* `--offload_params OFFLOAD_PARAMS` (`bool`) -- Decides Whether (true|false) to offload parameters and gradients to CPU.
* `--min_num_params MIN_NUM_PARAMS` (`int`) -- FSDP's minimum number of parameters for Default Auto Wrapping.
* `--sharding_strategy SHARDING_STRATEGY` (`str`) -- FSDP's Sharding Strategy.
**TPU Arguments**:
The following arguments are only useful when `tpu` is passed:
* `--tpu` (`bool`) - Whether or not this should launch a TPU training.
* `--main_training_function MAIN_TRAINING_FUNCTION` (`str`) -- The name of the main function to be executed in your script.
**AWS SageMaker Arguments**:
The following arguments are only useful when training in SageMaker
* `--aws_access_key_id AWS_ACCESS_KEY_ID` (`str`) -- The AWS_ACCESS_KEY_ID used to launch the Amazon SageMaker training job
* `--aws_secret_access_key AWS_SECRET_ACCESS_KEY` (`str`) -- The AWS_SECRET_ACCESS_KEY used to launch the Amazon SageMaker training job
## accelerate test
`accelerate test` or `accelerate-test`
Runs `accelerate/test_utils/test_script.py` to verify that 🤗 Accelerate has been properly configured on your system and runs.
**Usage**:
```bash
accelerate test [arguments]
```
**Optional Arguments**:
* `--config_file CONFIG_FILE` (`str`) -- The path to use to store the config file. Will default to a file named default_config.yaml in the cache location, which is the content
of the environment `HF_HOME` suffixed with 'accelerate', or if you don't have such an environment variable, your cache directory
(`~/.cache` or the content of `XDG_CACHE_HOME`) suffixed with `huggingface`.
* `-h`, `--help` (`bool`) -- Show a help message and exit


@ -0,0 +1,44 @@
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# DeepSpeed utilities
## DeepSpeedPlugin
## get_active_deepspeed_plugin
[[autodoc]] utils.get_active_deepspeed_plugin
[[autodoc]] utils.DeepSpeedPlugin
[[autodoc]] utils.deepspeed.DummyScheduler
## DeepSpeedEngineWrapper
[[autodoc]] utils.deepspeed.DeepSpeedEngineWrapper
## DeepSpeedOptimizerWrapper
[[autodoc]] utils.deepspeed.DeepSpeedOptimizerWrapper
## DeepSpeedSchedulerWrapper
[[autodoc]] utils.deepspeed.DeepSpeedSchedulerWrapper
## DummyOptim
[[autodoc]] utils.deepspeed.DummyOptim
## DummyScheduler


@ -0,0 +1,38 @@
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# FP8
Below are the functions and classes related to the underlying FP8 implementation
## FP8RecipeKwargs
[[autodoc]] utils.FP8RecipeKwargs
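As a hedged sketch of how this kwargs handler is typically passed to the [`Accelerator`] (the recipe arguments are illustrative and require supported hardware with TransformerEngine installed):

```python
from accelerate import Accelerator
from accelerate.utils import FP8RecipeKwargs

# Ask for the TransformerEngine backend with illustrative recipe settings
fp8_kwargs = FP8RecipeKwargs(backend="te", fp8_format="HYBRID", amax_history_len=16, amax_compute_algo="max")
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=[fp8_kwargs])
```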
## convert_model
[[autodoc]] utils.convert_model
## has_transformer_engine_layers
[[autodoc]] utils.has_transformer_engine_layers
## contextual_fp8_autocast
[[autodoc]] utils.contextual_fp8_autocast
## apply_fp8_autowrap
[[autodoc]] utils.apply_fp8_autowrap


@ -0,0 +1,46 @@
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Fully Sharded Data Parallel utilities
## enable_fsdp_ram_efficient_loading
[[autodoc]] utils.enable_fsdp_ram_efficient_loading
## disable_fsdp_ram_efficient_loading
[[autodoc]] utils.disable_fsdp_ram_efficient_loading
## merge_fsdp_weights
[[autodoc]] utils.merge_fsdp_weights
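For example, a hedged sketch of merging a sharded FSDP checkpoint into a single set of weights (both paths are hypothetical):

```python
from accelerate.utils import merge_fsdp_weights

# Merge the shards produced by a distributed save into one safetensors checkpoint
merge_fsdp_weights("checkpoints/pytorch_model_fsdp_0", "merged_model", safe_serialization=True)
```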
## FullyShardedDataParallelPlugin
[[autodoc]] utils.FullyShardedDataParallelPlugin
## fsdp2_load_full_state_dict
[[autodoc]] utils.fsdp2_load_full_state_dict
## fsdp2_switch_optimizer_parameters
[[autodoc]] utils.fsdp2_switch_optimizer_parameters
## fsdp2_prepare_model
[[autodoc]] utils.fsdp2_prepare_model
## fsdp2_prepare_auto_wrap_policy


@ -0,0 +1,22 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Pipeline parallelism
Accelerate supports pipeline parallelism for large-scale training with the PyTorch [torch.distributed.pipelining](https://pytorch.org/docs/stable/distributed.pipelining.html) API.
## prepare_pippy
[[autodoc]] inference.prepare_pippy
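A hedged sketch of the API (the toy model is a placeholder for a real large model, and the script is expected to be run with `accelerate launch` across multiple processes):

```python
import torch
from torch import nn
from accelerate.inference import prepare_pippy

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))
example_args = (torch.randn(4, 512),)

# Split the model into pipeline stages and schedule micro-batches across processes
model = prepare_pippy(model, split_points="auto", example_args=example_args)

with torch.no_grad():
    output = model(*example_args)
```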


@ -8,18 +8,32 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Kwargs handlers
The following objects can be passed to the main [`Accelerator`] to customize how some PyTorch objects
related to distributed training or mixed precision are created.
## AutocastKwargs
[[autodoc]] AutocastKwargs
## DistributedDataParallelKwargs
[[autodoc]] DistributedDataParallelKwargs
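For example, a common hedged sketch of passing a kwargs handler when creating the [`Accelerator`] (the `find_unused_parameters` flag is just one illustrative option):

```python
from accelerate import Accelerator, DistributedDataParallelKwargs

# Tolerate parameters that do not receive gradients on every forward pass
ddp_kwargs = DistributedDataParallelKwargs(find_unused_parameters=True)
accelerator = Accelerator(kwargs_handlers=[ddp_kwargs])
```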
## FP8RecipeKwargs
[[autodoc]] utils.FP8RecipeKwargs
## ProfileKwargs
[[autodoc]] utils.ProfileKwargs
## GradScalerKwargs
[[autodoc]] GradScalerKwargs
@ -27,3 +41,7 @@ related to distributed training or mixed precision are created.
## InitProcessGroupKwargs
[[autodoc]] InitProcessGroupKwargs
## KwargsHandler
[[autodoc]] utils.KwargsHandler
