Compare commits

...

145 Commits

Author SHA1 Message Date
00301b27b7 Release: v0.24.0 2023-10-24 12:59:04 -04:00
eb8c535c17 Fix (#2080) 2023-10-24 12:55:06 -04:00
b7686ccb44 Warn when kernel version is too low on Linux (#2077)
* Warn when kernel version is too low on Linux

See #1929

On Linux with kernel version < 5.5, issues with hanging processes have
been reported. It is not clear how to fix the issue, so instead we warn
the user that they may encounter problems.

Notes

As logging requires an initialized PartialState, the actual check
happens at the end of Accelerator.__init__.

In a similar vein, the docstring of get_logger has been adjusted to
first initialize the Accelerator, as it is not working as currently
shown.

* Reviewer comment: small change to docstring
2023-10-24 12:43:55 -04:00
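(For reference, the usage pattern the adjusted `get_logger` docstring points to looks roughly like this; a sketch, not code taken from the PR:)

```py
from accelerate import Accelerator
from accelerate.logging import get_logger

accelerator = Accelerator()  # initializes PartialState, which logging relies on
logger = get_logger(__name__)
logger.info("Accelerator ready", main_process_only=True)
```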
f3229872bc fix docstring typo (#2072) 2023-10-24 12:42:59 -04:00
7843286f2e Allow for samplers to be seedable and reproducible (#2057)
* bookmark

* Works!

* Working!

* Fully working now

* Cover dataset

* Needed for dispatch

* Check both

* Bring back pop, fix hang

* Fully working

* Change back to epoch

* Adjust for new methods

* Clean

* Fix tests

* Avoid circular import

* Clean

* Fix test

* Comment

* Add a comment

* Comment

* Use yield from instead
2023-10-24 06:41:06 -04:00
11e2e99cfc Let iterable dataset shard have a len (#2066) 2023-10-23 08:12:26 -04:00
07e745f1c4 DOC: Fix broken link to designing a device map (#2073)
There is a typo in the link.
2023-10-23 07:42:24 -04:00
c7c99a30ea fix: remove useless token (#2069) 2023-10-19 14:29:55 +02:00
8f45a2eae8 remove unused constants (#2045) 2023-10-18 14:24:01 -07:00
9fd64b7ea9 Fix the error when the "train_batch_size" is absent in DeepSpeed config (#2060)
* Update dataclasses.py

* Update src/accelerate/utils/dataclasses.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-10-16 15:13:20 -07:00
5be16ad90b Add space to docs (#2055)
* Add space to docs

* Phrasing
2023-10-16 06:33:12 -07:00
dab62832de Reset state to pass failing test 2023-10-13 13:13:41 -04:00
caa9f9bcbb Fix stalebot (#2052) 2023-10-13 12:20:37 -04:00
943efedb88 fix docstring (#2053) 2023-10-13 07:42:26 -04:00
50acb0c2ec Let drop_last modify gather_for_metrics (#2048)
* Drop last

* Test

* Uncomment out tests

* Update src/accelerate/test_utils/scripts/external_deps/test_metrics.py

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Document better

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2023-10-12 14:27:06 -04:00
e6d96e5f70 Make fsdp ram efficient loading optional (#2037)
* make fsdp ram efficient loading optional

* Add documentation

* address comments

* address comments

* address comments

* nit
2023-10-12 20:44:09 +05:30
1dfb6e9304 Fix integration CI (#2047)
* Different method

* Should fix version
2023-10-12 07:40:11 -04:00
4bef6bc511 Safely end training even if trackers weren't initialized (#1994)
* Update accelerator.py

* init trackers on class init

* dont need getattr because trackers exists
2023-10-11 08:24:04 -04:00
73640d0463 Reduce memory by using all_gather_into_tensor (#1968)
* all_gather_into_tensor

* Cleanup

* Reduce memory on non-gloo

* Fin

* Check for backend too on cpu

* CPU comment

* Change scope for performance

* Bring back zeros after remembering why

* Add comment

* Add comment

* Use empty

* Comment
2023-10-10 10:10:32 -04:00
7a1159143e Unpin transformers (#2044) 2023-10-10 05:33:22 -04:00
cbb0b82fa2 Fix DeepSpeed version to <0.11 (#2043)
This is a temporary fix to prevent a DeepSpeed installation error that
was introduced in DeepSpeed 0.11.0.
2023-10-09 10:47:33 -04:00
5ae6111180 Allow FSDP to use with torch.autocast for bfloat16 mixed precision (#2033)
* Ignore native_amp when FSDP is used

* Rollback condition

* Fix mixed precision of bfloat16 for FSDP
2023-10-06 18:26:04 +05:30
230a5f541b Fix save on each node (#2036) 2023-10-06 05:18:02 -04:00
956114ac92 Enable shared file system with save and save_state via ProjectConfiguration (#1953)
* Support shared storage, start

* Pass use_local_node_storage

* Reverse and different namings

* Not global only

* Address comments

* Clean

* Apply suggestions from code review

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>

* Save on each node as explicit arg

* More explicit

---------

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
2023-10-03 12:04:01 -04:00
76ee7f211d update fsdp docs (#2026) 2023-10-03 17:40:23 +05:30
420743af22 Sync states for xpu fsdp (#2005)
* sync states for xpu fsdp

* Update src/accelerate/utils/dataclasses.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-10-02 17:16:36 -04:00
206ab491ed update torch_dynamo backends (#1992)
* update torch_dynamo choice

* fix test
2023-10-02 14:31:44 -04:00
936d2f4f5c Add basic documentation for multi node training (#1988)
* initial commit for adding multinode training doc

* removed stray changes

* fix formatting issue and switch to bulleted list

* Update docs/source/basic_tutorials/launch.md

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update docs/source/basic_tutorials/launch.md

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* added link to new blog post

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-10-02 14:19:59 -04:00
da98d601b5 [docs] Quick tour refactor (#2008)
* quick tour refactor, moved internal mechanism into a conceptual guide

* Apply suggestions from code review

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Apply suggestions from code review

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-10-02 13:19:41 -04:00
658492fb41 fix resuming from checkpoint (#2001) 2023-09-29 13:12:41 +05:30
80da9cfb09 FIX Automatic checkpoint path inference issue (#1989)
Resolves #1983

Fixes an issue where the checkpoint directory would be incorrectly set while
loading when using relative paths.
2023-09-19 14:20:51 +02:00
03deec2a01 Fix model copy after dispatch_model (#1971)
* Fix model copy after dispatch_model

* Minor hook update to fix failing test

* address reviewer comments
2023-09-19 06:05:30 -04:00
629d02c844 Update big_modeling.md (#1976) 2023-09-18 10:11:57 -04:00
a87c95da9e Dev version 2023-09-14 15:24:15 -04:00
bbcdbbaffc Remove checkpoints only on main process (#1974)
* Remove checkpoints only on main process

shutil.rmtree might throw errors if called on multiple processes. Make the call only on the main process

* Apply style
2023-09-14 08:31:55 -04:00
ce53708e0e fix for xpu (#1972) 2023-09-14 08:18:20 -04:00
53209ce6d8 update FSDP and DeepSpeed docs (#1973) 2023-09-14 08:18:11 -04:00
bd083ae1bf Add force_hooks to dispatch_model (#1969)
* Add force_hooks to dispatch_model

* Minor documentation rephrasing
2023-09-14 07:57:19 -04:00
e5452a618d fix torch compile with FSDP (#1919)
* fix torch compile with FSDP

* Update accelerator.py

* fixes

* resolve comments

* fix bug

* address comments

* addressing comments

* address comments
2023-09-14 13:19:59 +05:30
40a73e0ae0 Introduce breakpoint API (#1940)
* early stopping

* Fix tests

* Works on multi-gpu, uncomment

* Rm reset

* Check for >=1

* equal

* Trigger

* Fix test

* Update docs/source/concept_guides/deferring_execution.md

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Explicit example loop

* Set to zero, not None

* rename test

* Check again to ensure it's been reset

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2023-09-13 12:42:38 -04:00
937e08ce75 add bf16 mixed precision support for NPU (#1949)
* add bf16 mixed precision support for NPU

* Explicitly register the NPU backend to PyTorch via `import torch_npu`

---------

Co-authored-by: statelesshz <jihuazhong1@huawei.com>
2023-09-13 09:56:24 -04:00
5d558f21e2 [WIP] Implementing gather_for_metrics with dedup for non tensor objects (#1937)
* [feat] implementing gather_for_metrics for objects

* [lint] make style result

* [docs] improve fn docs gather for metrics

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* [docs] update args description gather for metrics

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* [refactor] gather for metrics for non tensor obj

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* [fix] renaming tensor to data (was not defined and it is not just a tensor)

* [fix] else state

* [test] gather for metrics with non tensor objects

* [lint] make style result

* Update src/accelerate/accelerator.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update src/accelerate/accelerator.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* [test] removing useless assertion

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* [test] add running on main

* [lint] style autoformat

---------

Co-authored-by: Lorenzobattistela <lorenzobattistela@gmail.com>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-09-12 12:17:43 -04:00
d9b5ce60b3 Rm strtobool (#1964)
* Rm strtobool

* Reorganize

* c/p

* Signature
2023-09-12 11:21:09 -04:00
61a87ab946 finish all todos (#1957) 2023-09-12 17:13:00 +02:00
5dec654aae Better guards for slow imports (#1963)
* Start

* Deepspeed

* Clean
2023-09-12 10:54:19 -04:00
b2a950205e FIX: patch_environment restores pre-existing environment variables when finished (#1960)
Resolves #1832

This fixes a bug in patch_environment that currently leads to
pre-existing items being deleted completely from the environment
variables, when they were temporarily modified by patch_environment,
once the context has finished. Now, the env vars are restored to their
previous values.
2023-09-12 15:39:54 +02:00
ca7b853abc fix safetensor saving (#1954)
* fix safetensor saving

* fix test

* fix

* better save

* modify as keyword arg
2023-09-12 09:14:41 -04:00
6832aa51a6 move tensorflow dep (#1959) 2023-09-12 06:19:26 -04:00
4a1d5b1fb6 Fix docs (#1951)
Signed-off-by: Peng Gao <peng.gao.dut@gmail.com>
2023-09-11 10:40:14 -04:00
82369c8314 fix the fsdp docs (#1947) 2023-09-11 15:30:09 +05:30
cdb001ca5f Enhance multi-node notebook launching (#1913)
* Introduce new arguments: master_addr, node_rank, and num_nodes.
  Relocate these arguments to the end of the notebook_launcher
  function for compatibility.

* Set defaults for NPROC and NODE_RANK environment variables in the
  PrepareForLaunch function to ensure compatibility.

* Thoroughly document the process and usage guidelines for
  multi-node launching.
2023-09-08 07:53:21 -04:00
c72e22419b Bring back pypi to runners (#1939)
* Bring back pypi

* Flipflop
2023-09-08 07:51:17 -04:00
c872c3086f clean num devices (#1936) 2023-09-07 10:14:52 -04:00
cec5ae8e4d Check for invalid keys (#1935)
* Check for invalid keys

* Revert else

* Better error

* Weird space
2023-09-06 12:22:22 -04:00
cd570b2e2a reduce gradient first for XLA when unscaling the gradients in mixed precision training with AMP. (#1926)
* reduce gradient first for XLA when unscaling the gradients in mixed
precision training with AMP.

* Apply suggestions from code review

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* update accelerator.reduce and accelerate.utils.operations.reduce

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-09-06 11:00:24 -04:00
727d624322 Add support for deepspeed optimizer and custom scheduler (#1909)
* support for deepspeed optimizer and custom scheduler

* don't throw the error

* Add tests

* fix the tests

* fix the code quality

* Update tests/deepspeed/test_deepspeed.py

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* fix the docstrings

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2023-09-05 22:30:46 +05:30
afed2f75f8 Expose auto in dataclass (#1914)
* Auto

* Update str
2023-09-05 09:23:10 -04:00
739b135f83 More CI fun - run all test parts always (#1916)
* Run always

* Populate
2023-08-31 12:32:28 -04:00
4a9dd1cd82 support logging with mlflow in case of mlflow-skinny installed (#1874)
* - support a case of mlflow-skinny installed when log_with is set to mlflow.

* code beautification.
2023-08-31 12:11:02 -04:00
feab09908d improve help info when run accelerate config on npu (#1895) 2023-08-31 12:02:59 -04:00
e0baaa8df0 fix: add debug argument to sagemaker configuration (#1904)
* fix: add debug argument to sagemaker configuration #1903

* ignore:  address quality style

Signed-off-by: maximegmd <672982+maximegmd@users.noreply.github.com>

* tweak: ask if user wants debug information in SageMaker distributed operations

---------

Signed-off-by: maximegmd <672982+maximegmd@users.noreply.github.com>
2023-08-31 11:52:46 -04:00
1b998f1695 Use hosted CI runners for building docker images (#1915)
* New technique

* needs

* explicit all

* Volume prune not going

* Skip volume

* versions

* Avoid checkout perhaps?

* Working dir

* Don't include dot-slash?

* Accelerate prefix?

* Working directory?

* Context?

* other workingdir

* Faster iteration

* Right tag

* Full

* Release

* GPU
2023-08-31 11:28:48 -04:00
7befe580c2 Fix docker images (#1910)
* With driver

* Remove deps

* No bitsandbytes

* Try with raw push

* We can keep old docker images

* Also include release

* Skorch uses master

* Right tag
2023-08-31 07:14:38 -04:00
cd3d3a37f9 Skip pypi transformers until release (#1911)
* Skip release

* TODO comment
2023-08-31 07:14:06 -04:00
81fffe51fd deepspeed grad_acc_steps fixes (#1901)
* deepspeed grad_acc_steps fixes

* fix tests
2023-08-31 16:33:34 +05:30
0b5ac0253e Add PR template (#1906)
* Add PR template

* Sourab is not a fashion company
2023-08-30 03:19:15 -04:00
a16b843a1b deepspeed for ccl xpu (#1827) 2023-08-29 17:36:29 +05:30
bc86a9379f Solve at least one failing test (#1898) 2023-08-29 10:57:56 +05:30
87a096f95e Add FSDP activation checkpointing feature (#1891)
* add FSDP activation checkpointing feature

* fix formatting issue

* fix code formatting issue
2023-08-29 10:56:08 +05:30
44adf1e14f Fix nb launcher test (#1899)
* Try with raw subprocess

* Skip test for now

* Clean
2023-08-28 14:44:18 -04:00
ce870e1ce1 Final nits on model util (#1896)
* Nits

* Annoying markdown tables

* Try with one

* I give, try raw md

* Moot

* W/o code tick

* Markdown
2023-08-28 09:47:44 -04:00
1ace672d3e Update dataclasses.py (#1894) 2023-08-28 17:40:14 +05:30
e2ae254008 Add hub as core dep (#1885)
* Add hub as dep

* Missing refs
2023-08-25 10:05:58 -04:00
0fa291e707 Add doc on model memory usage (#1887)
* Doc

* Note on meta

* Phrase

* Apply suggestions from code review

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Clarity nit

* Nits

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2023-08-25 10:03:39 -04:00
ba6f11ec3e Enable a token to be used (#1886)
* Enable based on passing the token

* Doc more
2023-08-24 15:43:37 -04:00
430ee9df6b Update with new url (#1884) 2023-08-24 12:52:09 -04:00
409a9df0a4 Introduce model memory estimator (#1876)
* Estimator

* Right err

* Fixup tests

* trust remote code

* Print output for debugging purposes

* trust_remote_code

* Address some comments

* change doc to req arg

* Properly check for _no_split_modules in transformer models

* Note on transformer models

* Check/handle pentabytes

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Tests are passing locally again, better handle for no_split

* Adjust setup?

* Let's see if the cleaner version works

* Refactor and clean up for testing

* Specify in comments

* Better error handling

* A million tests later

* More tests + err handling

* Require hub

* More with remote code

* Clean up

* Add a test for no_split

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Docstring

* Address some comments

* rm einops

* Let it err out

* Adjust errs

* Tests

* Reduce test repeats

* Clean up borders

* Tip on 20%

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-08-24 12:12:01 -04:00
acad5bae5c Enable power users to bypass device_map="auto" training block (#1881)
* Enable TP greedy env var

* Right env setting

* Use true, not false

* Design nit

* ACCELERATE_BYPASS_DEVICE_MAP
2023-08-24 10:27:59 -04:00
81b19c4094 fix detach_hook (#1880) 2023-08-23 15:15:27 -04:00
3e97a9172b Update release instructions (#1877)
* Update release instructions

* Update setup.py

Co-authored-by: Lysandre Debut <lysandre.debut@reseau.eseo.fr>

---------

Co-authored-by: Lysandre Debut <lysandre.debut@reseau.eseo.fr>
2023-08-23 16:04:09 +02:00
812719644d v0.23.0.dev0 2023-08-23 02:25:56 -04:00
16e5113f8a Improve big model inference docs (#1872)
* Start of rework

* Refactor doc

* Got too used to quarto

* They're top level

* md link

* phrasing

* Remove indent
2023-08-22 07:11:12 -04:00
3122a6164d Include a note to the forums in the bug report (#1871)
* gs

* New version
2023-08-21 11:48:39 -04:00
c8682ae74c support custom slice function in DataLoaderDispatcher (#1846)
* save progress

* work on suggestions

* work on some suggestions

* last suggestion

* oops, mini bug
2023-08-21 17:16:43 +02:00
0768905f77 remove casting to FP32 when saving state dict (#1868)
* remove casting to FP32 when saving state dict

* update docs.
2023-08-21 19:08:29 +05:30
d087be0156 add env variable for init_on_device (#1852) 2023-08-18 23:20:50 +02:00
41caaa56e1 Update fsdp_with_peak_mem_tracking.py (#1856) 2023-08-18 13:34:31 +05:30
21d127334e fix dispatch (#1855) 2023-08-17 12:23:50 -04:00
3cf7dee576 Loading logic safetensors (#1853)
* add logic in loading for safetensors

* fix style
2023-08-17 10:46:49 -04:00
64c586f5eb support for ram efficient loading of model with FSDP (#1777)
* support for ram efficient loading of model with FSDP

* with default behaviour of efficient loading when using FSDP, `sync_module_states` needs to be `True`

* fixes

* Update accelerator.py

* Update dataclasses.py
2023-08-16 15:23:20 +05:30
0e714f5ba4 Fix the noneffective parameter: gpu_ids (#1850)
Co-authored-by: Devymex <wangyumeng02@megvii.com>
2023-08-16 09:27:13 +02:00
92f23e123d Change CUDA check (#1833)
* Move into check-device

* Use proper solution and write test

* Move test

* Avoid circular import

* Remove patchenv altogether

* New version

* Better way, run a verification test

* Final working version

* Debug mode

* doc

* Just debug

* Doc

* print
2023-08-16 03:21:30 -04:00
f67e11afd7 Fix verify_device_map (#1842)
* make verify_device_map return True only if device map has more than 1 element

* Fix style and comment

* fix style
2023-08-14 11:44:41 -04:00
6458058559 FIX: Bug with unwrap_model and keep_fp32_wrapper=False (#1838)
Using accelerator.unwrap_model(model, keep_fp32_wrapper=False) results
in a defective forward method. This bug was (probably) introduced in
PR #872.

Wrapping the method in MethodType (as elsewhere in code) resolves the
issue.
2023-08-14 10:50:38 +02:00
4d13e4e474 fix bug in dev properties for ipex (#1834) 2023-08-11 09:15:15 +02:00
058a3546ea use device as context manager for init_on_device (#1826) 2023-08-10 09:35:00 +02:00
98ecab2083 Minor idiomatic change. (#1829) 2023-08-10 09:06:26 +02:00
b30a349078 Better test (#1825)
* Better test

* Test

* Comment
2023-08-09 02:22:31 -04:00
7cb19ae613 Expose a bit of args/docstring fixup (#1824)
* Expose a bit

* docstring
2023-08-08 11:26:50 -04:00
39897a0662 Update docs and docstrings to match load_and_quantize_model arg (#1822)
* Update quantization.md with correct bnb_quantization_config args

* Update load_and_quantize_model docstring to match bnb_quantization_config arg
2023-08-08 10:20:03 -04:00
aa71bb815a Fix bnb import (#1813)
* Fix import

* Fix bnb

* Comment
2023-08-08 10:17:27 -04:00
f43a08a9c5 add warning when using to and cuda (#1790)
* add warning when using to and cuda

* more warning

* style

* change warning msg

* fix typo

* better check

* Update src/accelerate/big_modeling.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-08-08 10:08:50 -04:00
b42c65b729 Improve docs on grad accumulation (#1817)
* Improve docs on grad accumulation

* Update docs/source/usage_guides/gradient_accumulation.md

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* fix

* address feedback

* Update docs/source/usage_guides/gradient_accumulation.md

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-08-07 17:28:01 +02:00
7bad726935 Bibtex (#1820) 2023-08-07 11:21:40 -04:00
29ff7c3911 Expand device-map warning (#1819)
* Propagate to general prepare

* Move test to general tester

* Keep in model

* Keep in multi-gpu

* Apply suggestions from code review

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2023-08-07 11:04:29 -04:00
30eff605df Typo fix (#1812) 2023-08-04 11:18:14 -04:00
fc95663e03 Detect device map auto and raise a helpful error (#1810) 2023-08-04 10:02:27 -04:00
49cb83a423 More specific logging in gather_for_metrics (#1784)
* Start on testing behavior

* Add test to capture current behavior

* Cleanup test; add length to DummyIterableDataset

* Remove wip test from test_dataloader.py

* Only check on remainder state if we're at the end of a dataloader

* Cleanup

* Fix style

* Move test to test_metrics

* Remove 2 num_process assertion so that we test on single-GPU as well,
why not

* Use `isinstance()` instead of `type()` in test_metrics

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-08-03 12:38:58 -04:00
d2b159ea1a Fix pytest import (#1808)
* pytest

* Fully rm pytest

* Doc

* Works
2023-08-03 11:00:16 -04:00
40056c69d1 Add FSDP for NPU (#1806)
* Add FSDP for NPU

* enable fsdp's test case for npu&xpu
2023-08-03 11:35:29 +02:00
505b5be044 Add FSDP for XPU (#1803)
* fsdp for xpu

* add fsdp xpu
2023-08-02 15:34:55 -04:00
a6333f2e7c Changed allow_val_change param (#1796) 2023-08-02 13:42:11 -04:00
YQ
0dec477985 add support of float memory size in convert_file_size_to_int (#1799)
* support float memory size

* add unit test for
2023-07-31 15:43:19 -04:00
YQ
a24189db35 reserve 10% GPU in get_balanced_memory to avoid OOM (#1798)
* reserve 10% GPU to avoid OOM

* update warning message

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* use logger.info

* clean up comment

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-07-31 15:42:55 -04:00
a9aee447ee Fix import error when torch>=2.0.1 and torch.distributed is disabled (#1800) 2023-07-31 11:27:45 -04:00
d5894ab499 Set ipex default (#1776) 2023-07-26 12:20:13 -04:00
6f14928e28 simplify and correct the deepspeed example (#1775)
* simplify and correct the deepspeed example

* Update deepspeed_with_config_support.py

* 🐛 fix
2023-07-26 17:59:13 +05:30
777334a803 [FSDP] Fix load_fsdp_optimizer (#1755) 2023-07-26 14:23:01 +05:30
c3d82d24e2 Contiguous on gather (#1771)
* For testing

* Contiguous
2023-07-25 13:44:08 -04:00
6e70e79e4e Support wrapping multiple models in Accelerator.accumulate() (#1708)
* Support wrapping multiple models in Accelerator.accumulate()

* Fix style.

* Rename variable

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update doc.

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update variable name.

---------

Co-authored-by: YU Xinyuan <yuxinyuan02@corp.netease.com>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-07-25 12:22:36 -04:00
b3fc3c9067 Introduce an experimental distributed operations framework (#1756)
* First version

* As decorator

* Better err

* Limit

* Partial state

* More work

* Tests + config

* Debug mode

* Flag

* Rm references to debug mode, debug

* Tests

* Docs

* Nit

* Disable debug in config

* Support dict
2023-07-25 11:39:31 -04:00
a9d79163e5 Change is_aim_available() function to not match aim >= 4.0.0 (#1769)
* Change is_aim_available() function to not match aim >= 4.0.0

* Use compare_versions utility function in is_aim_available
2023-07-25 09:07:06 -04:00
0b36ca6e64 Fix offload on disk when executing on CPU (#1762)
* Fix offload on disk when executing on CPU

* Actually refine the error instead
2023-07-24 11:09:29 -04:00
f3b7f9cf25 Fix error when max_memory argument is in unexpected order (#1759)
* sort the user-provided max_memory keys in gpu-cpu-disk order

* fixed the bug by adding disk to main devices

* add checking for max_memory argument

* Update src/accelerate/utils/modeling.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/accelerate/utils/modeling.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/accelerate/utils/modeling.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* fix typo

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* fix typos

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-07-24 09:23:04 -04:00
b909bfacb9 Fix check failure in Accelerator.save_state using multi-gpu (#1760) 2023-07-24 09:03:45 -04:00
a2d8f540c3 FSDP enhancements and fixes (#1753)
* if the model is already an FSDP instance, remove the warning and prep overhead

* allow usage of `_no_split_modules` to simplify UX when using FSDP

* Update other.py

* fixes
2023-07-21 17:52:37 +05:30
e8ed10ae62 Fix FSDP related issues (#1745)
* Update fsdp_utils.py

* other FSDP fixes

* revert as this is resulting in more vram usage

* revert

* Update fsdp_utils.py
2023-07-21 12:16:45 +05:30
a6291e43b0 Expose autocast kwargs and simplify autocast wrapper (#1740)
* kwarg handler

* Proper default

* Enabled

* Rework

* Clean

* Ref autocast properly
2023-07-20 12:49:30 -04:00
2a289f6108 Rework new constant for operations (#1748)
* Rework new constant

* Naming for clarity

* Rm _cpu

* clean
2023-07-20 11:26:35 -04:00
cafc7f785f Remove unused constant (#1749)
* Rm unused

* Clean
2023-07-19 17:12:00 -04:00
39889c7304 Check for misconfiguration of single node & single GPU (#1746)
* Check for misuse

* Right area

* Space
2023-07-19 17:11:53 -04:00
12d5a2d0da fix typo (#1747) 2023-07-19 13:25:35 -04:00
243288627d fix KwargsHandler.to_kwargs not working with os.environ initialization in __post_init__ (#1738)
* fix KwargsHandler.to_kwargs not working with os.environ initialization in __post_init__

* fix test_torch_dynamo_plugin such that it wouldn't change os.environ permanently

* move clear_os_environ func to utils/other and rename it

* reformat code in order to pass ci quality check

* modify the comment of utils.other.clear_environment
2023-07-19 12:00:53 -04:00
efc1fa8376 Let load_state automatically grab the latest save (#1741)
* Automatic load state

* docstring

* Quality
2023-07-18 14:56:20 -04:00
18e3012489 Fixed the bug that split dict incorrectly (#1742)
* Fixed the bug that split dict incorrectly

* fix list out of index and test script
2023-07-18 14:54:25 -04:00
daa1952f47 Update docs (#1736)
* Still in works

* Utils to check

* More references

* Fin

* add utils

* toctree
2023-07-18 07:28:01 -04:00
653ba110d3 Fixed typo in repr of AlignDevicesHook (#1735)
Changed class name in the repr from AlignDeviceHook to AlignDevicesHook
2023-07-17 10:50:22 -04:00
f518b0ab03 make balanced memory able to work with non-contiguous GPU ids (#1734) 2023-07-17 10:49:08 -04:00
3a05e0cf70 Fix errors when optimizer is not a Pytorch optimizer. (#1733)
* Fix errors when optimizer is not a Pytorch optimizer.

* update

---------

Co-authored-by: YU Xinyuan <yuxinyuan02@corp.netease.com>
2023-07-17 07:11:02 -04:00
299f3ef8ab Adding a shape check for set_module_tensor_to_device. (#1731)
* Fixing set_module_tensor_to_device.

* Adding a shape check for `set_module_tensor_to_device`.

* Update src/accelerate/utils/modeling.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update error msg.

* Style.

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-07-14 17:46:52 +02:00
925a13eb04 fix the bug in npu (#1728)
* enable test_sync for npu

* fix the bug in get_cluster_input for npu

* fix the bug in broadcast for npu
2023-07-14 09:31:04 -04:00
4170f395d1 Get rid of calling get_scale() by patching the step method of optimizer. (#1720)
* Get rid of calling get_scale() by patching the step method of optimizer.

* Fix when step() is already patched by other parties.

* support pickle

* Minor updates.

* Change _accelerate_num_step_called to _accelerate_step_called

---------

Co-authored-by: YU Xinyuan <yuxinyuan02@corp.netease.com>
2023-07-14 07:56:45 -04:00
bb47344c77 Better control over DDP's no_sync (#1726)
* add `ddp_trigger_sync_in_bwd` to accelerator with test

* add example to `ddp_trigger_sync_in_bwd`

* support case of non-DDP model

* style

* make style

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* model_ddp -> model

* .

* .

* .

* Update src/accelerate/accelerator.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* add comment

* style

* style

* Update src/accelerate/accelerator.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-07-13 18:29:02 +02:00
243cd82409 fix failing test on 8GPU (#1724) 2023-07-13 11:45:45 -04:00
51f5e829a8 v0.22.0.dev0 2023-07-13 11:20:38 -04:00
90 changed files with 4365 additions and 1077 deletions

View File

@ -1,6 +1,12 @@
name: "\U0001F41B Bug Report"
description: Submit a bug report to help us improve Accelerate
body:
- type: markdown
attributes:
value: |
Thanks for taking the time to submit a bug report! 🐛
If this is not a bug related to the Accelerate library directly, but instead a general question about your code or the library specifically please use the [forums](https://discuss.huggingface.co/c/accelerate/18).
- type: textarea
id: system-info
attributes:

.github/PULL_REQUEST_TEMPLATE.md (new file, 47 lines)
View File

@ -0,0 +1,47 @@
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/accelerate/blob/main/CONTRIBUTING.md#submitting-a-pull-request-pr),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/accelerate/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/accelerate/tree/main/docs#writing-documentation---specification).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
- Big modeling: @SunMarc
- Fully-Sharded Data Parallelism: @pacman100
- DeepSpeed: @pacman100
- Command Line Interface: @muellerzr
- Documentation: @muellerzr
- Core parts of the library: @muellerzr @BenjaminBossan
- Maintained examples: @muellerzr or @pacman100
-->

View File

@ -21,44 +21,40 @@ jobs:
version-cpu:
name: "Latest Accelerate CPU [version]"
runs-on: ubuntu-latest
runs-on: [self-hosted, docker-gpu, multi-gpu]
needs: get-version
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Check out code
uses: actions/checkout@v2
uses: docker/setup-buildx-action@v2
- name: Login to DockerHub
uses: docker/login-action@v1
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Build and Push CPU
uses: docker/build-push-action@v2
uses: docker/build-push-action@v4
with:
context: ./docker/accelerate-cpu
file: docker/accelerate-cpu/Dockerfile
push: true
tags: huggingface/accelerate-cpu:${{needs.get-version.outputs.version}}
version-cuda:
name: "Latest Accelerate GPU [version]"
runs-on: ubuntu-latest
runs-on: [self-hosted, docker-gpu, multi-gpu]
needs: get-version
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Check out code
uses: actions/checkout@v2
uses: docker/setup-buildx-action@v2
- name: Login to DockerHub
uses: docker/login-action@v1
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Build and Push GPU
uses: docker/build-push-action@v2
uses: docker/build-push-action@v4
with:
context: ./docker/accelerate-gpu
file: docker/accelerate-gpu/Dockerfile
push: true
tags: huggingface/accelerate-gpu:${{needs.get-version.outputs.version}}
tags: huggingface/accelerate-gpu:${{needs.get-version.outputs.version}}

View File

@ -11,44 +11,50 @@ concurrency:
cancel-in-progress: false
jobs:
clean-storage:
name: "Clean docker image storage"
runs-on: [self-hosted, docker-gpu, multi-gpu]
steps:
- name: Clean storage
run: |
docker image prune --all -f --filter "until=48h"
docker system prune --all -f --filter "until=48h"
latest-cpu:
name: "Latest Accelerate CPU [dev]"
runs-on: ubuntu-latest
runs-on: [self-hosted, docker-gpu, multi-gpu]
needs: clean-storage
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Check out code
uses: actions/checkout@v2
uses: docker/setup-buildx-action@v2
- name: Login to DockerHub
uses: docker/login-action@v1
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Build and Push CPU
uses: docker/build-push-action@v2
uses: docker/build-push-action@v4
with:
context: ./docker/accelerate-cpu
file: docker/accelerate-cpu/Dockerfile
push: true
tags: huggingface/accelerate-cpu
latest-cuda:
name: "Latest Accelerate GPU [dev]"
runs-on: ubuntu-latest
runs-on: [self-hosted, docker-gpu, multi-gpu]
needs: clean-storage
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Check out code
uses: actions/checkout@v2
uses: docker/setup-buildx-action@v2
- name: Login to DockerHub
uses: docker/login-action@v1
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Build and Push GPU
uses: docker/build-push-action@v2
uses: docker/build-push-action@v4
with:
context: ./docker/accelerate-gpu
file: docker/accelerate-gpu/Dockerfile
push: true
tags: huggingface/accelerate-gpu
tags: huggingface/accelerate-gpu

View File

@ -14,5 +14,4 @@ jobs:
commit_sha: ${{ github.sha }}
package: accelerate
secrets:
token: ${{ secrets.HUGGINGFACE_PUSH }}
hf_token: ${{ secrets.HF_DOC_BUILD_PUSH }}

View File

@ -39,6 +39,7 @@ jobs:
make test
- name: Run examples on GPUs
if: always()
run: |
source activate accelerate
pip uninstall comet_ml -y
@ -79,11 +80,13 @@ jobs:
make test_cli
- name: Run Integration tests on GPUs
if: always()
run: |
source activate accelerate
make test_integrations
- name: Run examples on GPUs
if: always()
run: |
source activate accelerate
pip uninstall comet_ml -y

View File

@ -35,10 +35,12 @@ jobs:
make test_cli
- name: Run test on GPUs
if: always()
run: |
source activate accelerate
make test
- name: Run examples on GPUs
if: always()
run: |
source activate accelerate
pip uninstall comet_ml -y
@ -74,6 +76,7 @@ jobs:
make test
- name: Run examples on GPUs
if: always()
run: |
source activate accelerate
pip uninstall comet_ml -y

View File

@ -51,9 +51,9 @@ jobs:
run: |
source activate accelerate
git config --global --add safe.directory '*'
git checkout main && git pull
git checkout main && git pull && git fetch --tags
if [[ ${{ matrix.transformers-version }} = pypi ]]; then
git checkout $(git describe --tags `git rev-list --tags --max-count=1`)
git checkout $(git tag --sort=taggerdate | tail -1)
fi
pip install .[torch,deepspeed-testing]
@ -76,6 +76,7 @@ jobs:
env:
CUDA_VISIBLE_DEVICES: ${{ matrix.cuda_visible_devices }}
WANDB_DISABLED: true
if: always()
run: |
source activate accelerate;
pytest -sv tests/deepspeed
@ -106,7 +107,7 @@ jobs:
run: |
source activate accelerate
git config --global --add safe.directory '*'
git checkout main && git pull
git checkout master && git pull
if [[ ${{ matrix.skorch-version }} = pypi ]]; then
git checkout $(git describe --tags `git rev-list --tags --max-count=1`)
fi

View File

@ -269,7 +269,7 @@ If you use 🤗 Accelerate in your publication, please cite it by using the foll
```bibtex
@Misc{accelerate,
title = {Accelerate: Training and inference at scale made simple, efficient and adaptable.},
author = {Sylvain Gugger, Lysandre Debut, Thomas Wolf, Philipp Schmid, Zachary Mueller, Sourab Mangrulkar},
author = {Sylvain Gugger, Lysandre Debut, Thomas Wolf, Philipp Schmid, Zachary Mueller, Sourab Mangrulkar, Marc Sun, Benjamin Bossan},
howpublished = {\url{https://github.com/huggingface/accelerate}},
year = {2022}
}

View File

@ -23,6 +23,8 @@
title: Example Zoo
- local: usage_guides/big_modeling
title: How to perform inference on large models with small resources
- local: usage_guides/model_size_estimator
title: Knowing how big of a model you can fit into memory
- local: usage_guides/quantization
title: How to quantize model
- local: usage_guides/distributed_inference
@ -35,6 +37,8 @@
title: Saving and loading training states
- local: usage_guides/tracking
title: Using experiment trackers
- local: usage_guides/debug
title: Debugging timeout errors
- local: usage_guides/memory
title: How to avoid CUDA Out-of-Memory
- local: usage_guides/mps
@ -51,6 +55,10 @@
title: How to use 🤗 Accelerate with Intel® Extension for PyTorch for cpu
title: How-To Guides
- sections:
- local: concept_guides/internal_mechanism
title: 🤗 Accelerate's internal mechanism
- local: concept_guides/big_model_inference
title: Loading big models into memory
- local: concept_guides/performance
title: Comparing performance across distributed setups
- local: concept_guides/deferring_execution
@ -85,4 +93,6 @@
title: Utility functions and classes
- local: package_reference/megatron_lm
title: Megatron-LM Utilities
- local: package_reference/fsdp
title: Fully Sharded Data Parallelism Utilities
title: "Reference"

View File

@ -153,6 +153,15 @@ the below example enabling unbuffered stdout and stderr:
python -u -m accelerate.commands.launch --num_processes=2 {script_name.py} {--arg1} {--arg2}
```
<Tip>
You can run your code on CPU as well! This is helpful for debugging and testing purposes on toy models and datasets.
```bash
accelerate launch --cpu {script_name.py} {--arg1} {--arg2}
```
</Tip>
## Why you should always use `accelerate config`
@ -200,3 +209,24 @@ Launching a script from the location of that custom yaml file looks like the fol
```bash
accelerate launch --config_file {path/to/config/my_config_file.yaml} {script_name.py} {--arg1} {--arg2} ...
```
## Multi-node training
Multi-node training with 🤗Accelerate is similar to [multi-node training with torchrun](https://pytorch.org/tutorials/intermediate/ddp_series_multinode.html). The simplest way to launch a multi-node training run is to do the following:
- Copy your codebase and data to all nodes (or place them on a shared filesystem).
- Set up your python packages on all nodes.
- Run `accelerate config` on the main single node first. After specifying the number of nodes, you will be asked to specify the rank of each node (this will be 0 for the main/master node), along with the IP address and port for the main process. This is required for the worker nodes to communicate with the main process. Afterwards, you can copy or send this config file to all of your nodes, changing the `machine_rank` to 1, 2, 3, etc. so you don't have to rerun `accelerate config` on every node (or you can follow the directions for launching with `torchrun` directly instead).
Once you have done this, you can start your multi-node training run by running `accelerate launch` (or `torchrun`) on all nodes.
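For example, on a setup with 2 nodes of 8 GPUs each, every node would run a command along these lines (a sketch; the IP address and script name are placeholders, and `--machine_rank` is 0 on the main node and 1 on the second node):

```bash
accelerate launch \
    --num_machines 2 \
    --machine_rank 0 \
    --main_process_ip 192.168.1.2 \
    --main_process_port 29500 \
    --num_processes 16 \
    train.py --arg1 --arg2
```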
<Tip>
The command must be run on all nodes for everything to start, not just from the main node. You can use something like SLURM or a different process executor to wrap this requirement and call everything from a single command.
</Tip>
<Tip>
It is recommended to use the intranet IP of your main node over the public IP for better latency. This is the `192.168.x.x` or the `172.x.x.x` address you see when you run `hostname -I` on the main node.
</Tip>
To get a better idea about multi-node training, check out our example for [multi-node training with FSDP](https://huggingface.co/blog/ram-efficient-pytorch-fsdp).

View File

@ -401,6 +401,26 @@ args = ("fp16", 42, 64)
notebook_launcher(training_loop, args, num_processes=2)
```
In the case of running on multiple nodes, you need to set up a Jupyter session at each node and run the launching cell at the same time.
For an environment containing 2 nodes (computers) with 8 GPUs each and the main computer with an IP address of "172.31.43.8", it would look like so:
```python
notebook_launcher(training_loop, args, master_addr="172.31.43.8", node_rank=0, num_nodes=2, num_processes=8)
```
And in the second Jupyter session on the other machine:
<Tip>
Notice how the `node_rank` has changed
</Tip>
```python
notebook_launcher(training_loop, args, master_addr="172.31.43.8", node_rank=1, num_nodes=2, num_processes=8)
```
In the case of running on the TPU, it would look like so:
```python
@ -423,6 +443,13 @@ epoch 4: 94.71
And that's it!
## Debugging
A common issue when running the `notebook_launcher` is a "CUDA has already been initialized" error. This usually stems
from an import or prior code in the notebook that calls into PyTorch's `torch.cuda` sublibrary. To help narrow down what went wrong,
you can launch the `notebook_launcher` with `ACCELERATE_DEBUG_MODE=yes` set in your environment: an additional check
is then made when spawning to verify that a regular process can be created and use CUDA without issue. (Your CUDA code can still be run afterwards.)
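For example, setting the flag from inside the notebook before launching might look like this (a sketch, reusing the `training_loop` and `args` defined earlier):

```python
import os

# ask Accelerate to verify, before spawning workers, that a fresh process can create and use CUDA
os.environ["ACCELERATE_DEBUG_MODE"] = "yes"

notebook_launcher(training_loop, args, num_processes=2)
```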
## Conclusion
This notebook showed how to perform distributed training from inside of a Jupyter Notebook. Some key notes to remember:

View File

@ -0,0 +1,308 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Handling big models for inference
When loading a pre-trained model in PyTorch, the usual workflow looks like this:
```py
import torch
my_model = ModelClass(...)
state_dict = torch.load(checkpoint_file)
my_model.load_state_dict(state_dict)
```
In plain English, those steps are:
1. Create the model with randomly initialized weights
2. Load the model weights (in a dictionary usually called a state dict) from the disk
3. Load those weights inside the model
While this works very well for regularly sized models, this workflow has some clear limitations when we deal with a huge model: in step 1, we load a full version of the model in RAM, and spend some time randomly initializing the weights (which will be discarded in step 3). In step 2, we load another full version of the model in RAM, with the pre-trained weights. If you're loading a model with 6 billion parameters, this means you will need 24GB of RAM for each copy of the model, so 48GB in total (half of it to load the model in FP16).
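As a rough sanity check of those numbers (4 bytes per parameter in FP32, 2 in FP16):

```py
params = 6_000_000_000
fp32_gb = params * 4 / 1e9   # one full-precision copy: ~24 GB
fp16_gb = params * 2 / 1e9   # one half-precision copy: ~12 GB
print(fp32_gb * 2, fp16_gb * 2)  # ~48 GB vs ~24 GB for the two copies held during loading
```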
<Tip warning={true}>
This API is quite new and still in its experimental stage. While we strive to provide a stable API, it's possible some small parts of the public API will change in the future.
</Tip>
## How the Process Works: A Quick Overview
<Youtube id="MWCSGj9jEAo" />
## How the Process Works: Working with Code
### Instantiating an empty model
The first tool 🤗 Accelerate introduces to help with big models is a context manager [`init_empty_weights`] that helps you initialize a model without using any RAM so that step 1 can be done on models of any size. Here is how it works:
```py
from accelerate import init_empty_weights
with init_empty_weights():
    my_model = ModelClass(...)
```
For instance:
```py
from torch import nn

with init_empty_weights():
    model = nn.Sequential(*[nn.Linear(10000, 10000) for _ in range(1000)])
```
initializes an empty model with a bit more than 100B parameters. Behind the scenes, this relies on the meta device introduced in PyTorch 1.9. During the initialization under the context manager, each time a parameter is created, it is instantly moved to that device.
<Tip warning={true}>
You can't move a model initialized like this on CPU or another device directly, since it doesn't have any data. It's also very likely that a forward pass with that empty model will fail, as not all operations are supported on the meta device.
</Tip>
### Sharded checkpoints
It's possible your model is so big that even a single copy won't fit in RAM. That doesn't mean it can't be loaded: if you have one or several GPUs, that means more memory is available to store your model. In this case, it's better if your checkpoint is split into several smaller files that we call checkpoint shards.
🤗 Accelerate will handle sharded checkpoints as long as you use the following format: your checkpoint should be in a folder, with several files containing the partial state dicts, and there should be an index in the JSON format that contains a dictionary mapping parameter names to the file containing their weights. You can easily shard your model with [`~Accelerator.save_model`]. For instance, we could have a folder containing:
```bash
first_state_dict.bin
index.json
second_state_dict.bin
```
with index.json being the following file:
```
{
  "linear1.weight": "first_state_dict.bin",
  "linear1.bias": "first_state_dict.bin",
  "linear2.weight": "second_state_dict.bin",
  "linear2.bias": "second_state_dict.bin"
}
```
and `first_state_dict.bin` containing the weights for `"linear1.weight"` and `"linear1.bias"`, and `second_state_dict.bin` the ones for `"linear2.weight"` and `"linear2.bias"`.
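If you already have the model in memory, one way to produce a checkpoint in this format is [`~Accelerator.save_model`] (a sketch; the shard size and output folder are arbitrary choices):

```py
from accelerate import Accelerator

accelerator = Accelerator()
# writes shards of at most ~1GB plus a JSON index mapping parameter names to shard files
accelerator.save_model(model, "sharded_checkpoint", max_shard_size="1GB")
```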
### Loading weights
The second tool 🤗 Accelerate introduces is a function [`load_checkpoint_and_dispatch`], that will allow you to load a checkpoint inside your empty model. This supports full checkpoints (a single file containing the whole state dict) as well as sharded checkpoints. It will also automatically dispatch those weights across the devices you have available (GPUs, CPU RAM), so if you are loading a sharded checkpoint, the maximum RAM usage will be the size of the biggest shard.
If you want to use big model inference with 🤗 Transformers models, check out this [documentation](https://huggingface.co/docs/transformers/main/en/main_classes/model#large-model-loading).
Here is how we can use this to load the [GPT2-1.5B](https://huggingface.co/marcsun13/gpt2-xl-linear-sharded) model.
Let's download the sharded version of this model.
```bash
pip install huggingface_hub
```
```py
from huggingface_hub import snapshot_download
checkpoint = "marcsun13/gpt2-xl-linear-sharded"
weights_location = snapshot_download(repo_id=checkpoint)
```
In order to initialize the model, we will use the library minGPT.
```bash
git clone https://github.com/karpathy/minGPT.git
pip install minGPT/
```
```py
from accelerate import init_empty_weights
from mingpt.model import GPT
model_config = GPT.get_default_config()
model_config.model_type = 'gpt2-xl'
model_config.vocab_size = 50257
model_config.block_size = 1024
with init_empty_weights():
    model = GPT(model_config)
```
Then, load the checkpoint we just downloaded with:
```py
from accelerate import load_checkpoint_and_dispatch
model = load_checkpoint_and_dispatch(
    model, checkpoint=weights_location, device_map="auto", no_split_module_classes=['Block']
)
```
By passing `device_map="auto"`, we tell 🤗 Accelerate to determine automatically where to put each layer of the model depending on the available resources:
- first, we use the maximum space available on the GPU(s)
- if we still need space, we store the remaining weights on the CPU
- if there is not enough RAM, we store the remaining weights on the hard drive as memory-mapped tensors
#### `no_split_module_classes`
This parameter indicates that some of the modules with the name `"Block"` should not be split across different devices. You should set here all blocks that
include a residual connection of some kind.
#### The `device_map`
You can see the `device_map` that 🤗 Accelerate picked by accessing the `hf_device_map` attribute of your model:
```py
model.hf_device_map
```
```python out
{'transformer.wte': 0,
'transformer.wpe': 0,
'transformer.drop': 0,
'transformer.h.0': 0,
...
'transformer.h.21': 0,
'transformer.h.22': 1,
'transformer.h.23': 1,
'transformer.h.24': 1,
...
'transformer.h.47': 1,
'transformer.ln_f': 1,
'lm_head': 1}
```
It's fully possible to create your own device map for the layers to use as well, specifying the GPU device to use (a number), `"cpu"`, or `"disk"` and pass this in:
```python
device_map = {
    "transformer.wte": "cpu",
    "transformer.wpe": 0,
    "transformer.drop": "cpu",
    "transformer.h.0": "disk"
}
model = load_checkpoint_and_dispatch(
    model, checkpoint=weights_location, device_map=device_map
)
```
### Run the model
Now that we have done this, our model lies across several devices, and maybe the hard drive. But it can still be used as a regular PyTorch model:
```py
from mingpt.bpe import BPETokenizer
tokenizer = BPETokenizer()
inputs = tokenizer("Hello, my name is").to(0)
outputs = model.generate(inputs, max_new_tokens=10, do_sample=False)[0]
tokenizer.decode(outputs.cpu().squeeze())
```
Behind the scenes, 🤗 Accelerate added hooks to the model, so that:
- at each layer, the inputs are put on the right device (so even if your model is spread across several GPUs, it works)
- for the weights offloaded on the CPU, they are put on a GPU just before the forward pass and cleaned up just after
- for the weights offloaded on the hard drive, they are loaded in RAM then put on a GPU just before the forward pass and cleaned up just after
This way, your model can run for inference even if it doesn't fit on one of the GPUs or the CPU RAM!
<Tip warning={true}>
This only supports the inference of your model, not training. Most of the computation happens behind `torch.no_grad()` context managers to avoid spending some GPU memory with intermediate activations.
</Tip>
### Designing a device map
You can let 🤗 Accelerate handle the device map computation by setting `device_map` to one of the supported options (`"auto"`, `"balanced"`, `"balanced_low_0"`, `"sequential"`) or create one yourself if you want more control over where each layer should go.
<Tip>
You can derive all sizes of the model (and thus compute a `device_map`) on a model that is on the meta device.
</Tip>
All the options will produce the same result when you don't have enough GPU memory to accommodate the whole model (which is to fit everything that can on the GPU, then offload weights on the CPU or even on the disk if there is not enough RAM).
When you have more GPU memory available than the model size, here is the difference between each option:
- `"auto"` and `"balanced"` evenly split the model on all available GPUs, making it possible for you to use a batch size greater than 1.
- `"balanced_low_0"` evenly splits the model on all GPUs except the first one, and only puts on GPU 0 what does not fit on the others. This option is great when you need to use GPU 0 for some processing of the outputs, like when using the `generate` function for Transformers models
- `"sequential"` will fit what it can on GPU 0, then move on GPU 1 and so forth (so won't use the last GPUs if it doesn't need to).
<Tip>
The options `"auto"` and `"balanced"` produce the same results for now, but the behavior of `"auto"` might change in the future if we find a strategy that makes more sense, while `"balanced"` will stay stable.
</Tip>
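For instance, reusing the minGPT model from the example above, you would select one of these strategies like so (a sketch):

```py
model = load_checkpoint_and_dispatch(
    model, checkpoint=weights_location, device_map="balanced_low_0", no_split_module_classes=['Block']
)
```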
First note that you can limit the memory used on each GPU by using the `max_memory` argument (available in [`infer_auto_device_map`] and in all functions using it). When setting `max_memory`, you should pass along a dictionary containing the GPU identifiers (for instance `0`, `1` etc.) and the `"cpu"` key for the maximum RAM you want to use for CPU offload. The values can either be an integer (in bytes) or a string representing a number with its unit, such as `"10GiB"` or `"10GB"`.
Here is an example where we don't want to use more than 10GiB on each of the two GPUs and no more than 30GiB of CPU RAM for the model weights:
```python
from accelerate import infer_auto_device_map
device_map = infer_auto_device_map(my_model, max_memory={0: "10GiB", 1: "10GiB", "cpu": "30GiB"})
```
<Tip warning={true}>
When a first allocation happens in PyTorch, it loads CUDA kernels which take about 1-2GB of memory depending on the GPU. Therefore you always have less usable memory than the actual size of the GPU. To see how much memory is actually used do `torch.ones(1).cuda()` and look at the memory usage.
Therefore when you create memory maps with `max_memory` make sure to adjust the available memory accordingly to avoid out-of-memory errors.
</Tip>
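One way to gauge that overhead from Python (a sketch; it assumes nothing else is using the GPU):

```py
import torch

torch.ones(1).cuda()  # the first allocation loads the CUDA context and kernels
free, total = torch.cuda.mem_get_info(0)
print(f"{(total - free) / 1024**3:.2f} GiB already in use on GPU 0")
```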
Additionally, if you do some additional operations with your outputs without placing them back on the CPU (for instance inside the `generate` method of Transformers) and if you placed your inputs on a GPU, that GPU will consume more memory than the others (Accelerate always places the output back on the device of the input). Therefore, if you would like to optimize the maximum batch size and you have many GPUs, give the first GPU less memory. For example, with BLOOM-176B on an 8x80GB A100 setup, the close-to-ideal map is:
```python
max_memory = {0: "30GIB", 1: "46GIB", 2: "46GIB", 3: "46GIB", 4: "46GIB", 5: "46GIB", 6: "46GIB", 7: "46GIB"}
```
as you can see we gave the remaining 7 GPUs ~50% more memory than GPU 0.
If you opt to fully design the `device_map` yourself, it should be a dictionary with keys being module names of your model and values being a valid device identifier (for instance an integer for the GPUs) or `"cpu"` for CPU offload, `"disk"` for disk offload. The keys need to cover the whole model, you can then define your device map as you wish: for instance, if your model has two blocks (let's say `block1` and `block2`) which each contain three linear layers (let's say `linear1`, `linear2` and `linear3`), a valid device map can be:
```python
device_map = {"block1": 0, "block2": 1}
```
another one that is valid could be:
```python
device_map = {"block1": 0, "block2.linear1": 0, "block2.linear2": 1, "block2.linear3": 1}
```
On the other hand, this one is not valid as it does not cover every parameter of the model:
```python
device_map = {"block1": 0, "block2.linear1": 1, "block2.linear2": 1}
```
<Tip>
To be the most efficient, make sure your device map puts the parameters on the GPUs in a sequential manner (e.g. don't put one of the first weights on GPU 0, then weights on GPU 1 and the last weight back to GPU 0) to avoid making many transfers of data between the GPUs.
</Tip>
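As a hedged sketch (assuming `my_model` is the two-block model described above and `weights_location` points to its checkpoint), such a hand-written map can be passed straight to [`load_checkpoint_and_dispatch`], together with an `offload_folder` for the `"disk"` entries:
```python
from accelerate import load_checkpoint_and_dispatch

# block1 stays on GPU 0, block2 is split between the CPU and the disk
my_device_map = {"block1": 0, "block2.linear1": "cpu", "block2.linear2": "disk", "block2.linear3": "disk"}

model = load_checkpoint_and_dispatch(
    my_model, checkpoint=weights_location, device_map=my_device_map, offload_folder="offload"
)
```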
## Limits and further development
We are aware of the current limitations in the API:
- While this could theoretically work on just one CPU with potential disk offload, you need at least one GPU to run this API. This will be fixed in further development.
- [`infer_auto_device_map`] (or `device_map="auto"` in [`load_checkpoint_and_dispatch`]) tries to maximize GPU and CPU RAM it sees available when you execute it. While PyTorch is very good at managing GPU RAM efficiently (and giving it back when not needed), it's not entirely true with Python and CPU RAM. Therefore, an automatically computed device map might be too intense on the CPU. Move a few modules to the disk device if you get crashes due to a lack of RAM.
- [`infer_auto_device_map`] (or `device_map="auto"` in [`load_checkpoint_and_dispatch`]) attributes devices sequentially (to avoid moving things back and forth) so if your first layer is bigger than the size of the GPU you have, it will end up with everything on the CPU/Disk.
- [`load_checkpoint_and_dispatch`] and [`load_checkpoint_in_model`] do not perform any check on the correctness of your state dict compared to your model at the moment (this will be fixed in a future version), so you may get some weird errors if trying to load a checkpoint with mismatched or missing keys.
- The model parallelism used when your model is split on several GPUs is naive and not optimized, meaning that only one GPU works at a given time while the others sit idle.
- When weights are offloaded on the CPU/hard drive, there is no pre-fetching (yet, we will work on this for future versions) which means the weights are put on the GPU when they are needed and not before.
- Hard-drive offloading might be very slow if the hardware you run on does not have fast communication between disk and CPU (such as NVMe drives).

View File

@ -108,3 +108,23 @@ with accelerator.main_process_first():
remove_columns=["idx", "sentence1", "sentence2"],
)
```
## Applying checks such as Early Stopping
To have a check that works with a flag set by a particular process, use the `set_trigger` and `check_trigger` API. Useful examples
include early stopping based on monitoring the loss (since the loss differs slightly on each process).
Call [`Accelerator.set_trigger`] when your condition has been met, and [`Accelerator.check_trigger`] when checking if that condition has been met in any process:
```python
for (x, y) in data_loader:
    logits = model(x)
    loss = loss_func(logits, y)
    # Assume `should_do_early_stopping` is a custom defined function that returns a conditional
    if should_do_early_stopping(loss):
        accelerator.set_trigger()

    # Later in the training script when we need to check for the breakpoint
    if accelerator.check_trigger():
        break
```

View File

@ -0,0 +1,72 @@
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# 🤗 Accelerate's internal mechanisms
Internally, 🤗 Accelerate works by first analyzing the environment in which the script is launched to determine which
kind of distributed setup is used, how many different processes there are and which one the current script is in. All
that information is stored in the [`~AcceleratorState`].
This class is initialized the first time you instantiate an [`~Accelerator`] and performs any
specific initialization your distributed setup needs. Its state is then uniquely shared through all instances of
[`~state.AcceleratorState`]. (The same can also be done with the [`PartialState`], a more barebones version from which it inherits.)
Then, when calling [`~Accelerator.prepare`], the library:
- wraps your model(s) in the container adapted for the distributed setup,
- wraps your optimizer(s) in an [`~optimizer.AcceleratedOptimizer`],
- wraps your scheduler(s) in an [`~scheduler.AcceleratedScheduler`]
- creates a new version of your dataloader(s) in a [`~data_loader.DataLoaderShard`] or [`~data_loader.DataLoaderDispatcher`]
While the model(s), optimizer(s), and scheduler(s) are just put in simple wrappers, the dataloader(s) are re-created. This is mostly
because PyTorch does not let the user change the `batch_sampler` of a dataloader once it's been created and the
library handles the sharding of your data between processes by changing that `batch_sampler` to yield every other
`num_processes` batches (if enabled).
The [`~data_loader.DataLoaderShard`] subclasses `DataLoader` to add the following functionality:
- it synchronizes the appropriate random number generator of all processes at each new iteration, to ensure any
randomization (like shuffling) is done the exact same way across processes.
- it puts the batches on the proper device before yielding them (unless you have opted out of
`device_placement=True`).
The [`~data_loader.DataLoaderDispatcher`] subclass differs from the [`~data_loader.DataLoaderShard`] in that when iterating through the `DataLoader`, the data all starts on process 0 and is *then* split and sent off to each process, rather than the split happening at the dataset level.
The random number generator synchronization will by default synchronize:
- the `generator` attribute of a given sampler (like the PyTorch `RandomSampler`) for PyTorch >= 1.6
- the main random number generator in PyTorch <=1.5.1
You can choose which random number generator(s) to synchronize with the `rng_types` argument of the main
[`Accelerator`]. In PyTorch >= 1.6, it is recommended to rely on a local `generator` to avoid
setting the same seed in the main random number generator in all processes.
<Tip warning={true}>
Synchronization of the main torch (or CUDA or XLA) random number generator will affect any other potential random
artifacts you could have in your dataset (like random data augmentation) in the sense that all processes will get
the same random numbers from the torch random modules (so will apply the same random data augmentation if it's
controlled by torch).
</Tip>
<Tip>
The randomization part of your custom sampler, batch sampler or iterable dataset should be done using a local
`torch.Generator` object (in PyTorch >= 1.6); see the traditional `RandomSampler` as an example.
</Tip>
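As an illustrative sketch (not an excerpt from the library, and assuming a simple map-style dataset), keeping a sampler's randomness on a local `torch.Generator` looks like this:
```python
import torch
from torch.utils.data import DataLoader, RandomSampler

dataset = list(range(100))  # any map-style dataset works here

# Keep the randomness on a local generator instead of the global torch RNG,
# so Accelerate can synchronize just this generator across processes.
generator = torch.Generator()
generator.manual_seed(42)

sampler = RandomSampler(dataset, generator=generator)
dataloader = DataLoader(dataset, sampler=sampler, batch_size=8)
```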
For more details about the internals, see the [Internals page](package_reference/torch_wrappers).

View File

@ -125,7 +125,7 @@ model = MyModel()
model = accelerator.prepare(model)
```
Use [`~Accelerator.save_model`] instead of `torch.save` to save a model. It will remove all model wrappers added during the distributed process, get the state_dict of the model and save it.
Use [`~Accelerator.save_model`] instead of `torch.save` to save a model. It will remove all model wrappers added during the distributed process, get the state_dict of the model and save it. The state_dict will be in the same precision as the model being trained.
```diff
- torch.save(state_dict, "my_state.pkl")

View File

@ -22,6 +22,8 @@ rendered properly in your Markdown viewer.
[[autodoc]] big_modeling.disk_offload
[[autodoc]] big_modeling.dispatch_model
[[autodoc]] big_modeling.load_checkpoint_and_dispatch
[[autodoc]] big_modeling.load_checkpoint_in_model
[[autodoc]] utils.infer_auto_device_map
## Model Hooks

View File

@ -228,6 +228,36 @@ The following arguments are only useful when training in SageMaker
* `--aws_access_key_id AWS_ACCESS_KEY_ID` (`str`) -- The AWS_ACCESS_KEY_ID used to launch the Amazon SageMaker training job
* `--aws_secret_access_key AWS_SECRET_ACCESS_KEY` (`str`) -- The AWS_SECRET_ACCESS_KEY used to launch the Amazon SageMaker training job
## accelerate estimate-memory
**Command**:
`accelerate estimate-memory` or `accelerate-estimate-memory` or `python -m accelerate.commands.estimate`
Estimates the total vRAM needed to load a particular model hosted on the Hub, along with an estimate for training. Requires that `huggingface_hub` be installed.
<Tip>
When performing inference, typically add ≤20% to the result as overall allocation [as referenced here](https://blog.eleuther.ai/transformer-math/). We will have more extensive estimations in the future that will automatically be included in the calculation.
</Tip>
**Usage**:
```bash
accelerate estimate-memory {MODEL_NAME} --library_name {LIBRARY_NAME} --dtypes {dtype_1} {dtype_2} ...
```
**Required Arguments**:
* `MODEL_NAME` (`str`) -- The model name on the Hugging Face Hub
**Optional Arguments**:
* `--library_name {timm,transformers}` (`str`) -- The library the model has an integration with, such as `transformers`, needed only if this information is not stored on the Hub
* `--dtypes {float32,float16,int8,int4}` (`[{float32,float16,int8,int4} ...]`) -- The dtypes to use for the model, must be one (or many) of `float32`, `float16`, `int8`, and `int4`
* `--trust_remote_code` (`bool`) -- Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be passed for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
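For example, to estimate the footprint of a model in both full and half precision (the model name below is purely illustrative):
```bash
accelerate estimate-memory bert-base-cased --library_name transformers --dtypes float32 float16
```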
## accelerate tpu-config
`accelerate tpu-config`

View File

@ -0,0 +1,18 @@
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Utilities for Fully Sharded Data Parallelism
[[autodoc]] utils.FullyShardedDataParallelPlugin

View File

@ -18,11 +18,18 @@ rendered properly in your Markdown viewer.
The following objects can be passed to the main [`Accelerator`] to customize how some PyTorch objects
related to distributed training or mixed precision are created.
## AutocastKwargs
[[autodoc]] AutocastKwargs
## DistributedDataParallelKwargs
[[autodoc]] DistributedDataParallelKwargs
## FP8RecipeKwargs
[[autodoc]] utils.FP8RecipeKwargs
## GradScalerKwargs
[[autodoc]] GradScalerKwargs

View File

@ -21,6 +21,7 @@ when calling [`~Accelerator.prepare`].
## Datasets and DataLoaders
[[autodoc]] data_loader.prepare_data_loader
[[autodoc]] data_loader.skip_first_batches
[[autodoc]] data_loader.BatchSamplerShard
[[autodoc]] data_loader.IterableDatasetShard

View File

@ -17,18 +17,55 @@ rendered properly in your Markdown viewer.
Below are a variety of utility functions that 🤗 Accelerate provides, broken down by use-case.
## Constants
Constants used throughout 🤗 Accelerate for reference
The following are constants used when utilizing [`Accelerator.save_state`]
`utils.MODEL_NAME`: `"pytorch_model"`
`utils.OPTIMIZER_NAME`: `"optimizer"`
`utils.RNG_STATE_NAME`: `"random_states"`
`utils.SCALER_NAME`: `"scaler.pt`
`utils.SCHEDULER_NAME`: `"scheduler`
The following are constants used when utilizing [`Accelerator.save_model`]
`utils.WEIGHTS_NAME`: `"pytorch_model.bin"`
`utils.SAFE_WEIGHTS_NAME`: `"model.safetensors"`
`utils.WEIGHTS_INDEX_NAME`: `"pytorch_model.bin.index.json"`
`utils.SAFE_WEIGHTS_INDEX_NAME`: `"model.safetensors.index.json"`
## Data Classes
These are basic dataclasses used throughout 🤗 Accelerate and they can be passed in as parameters.
[[autodoc]] utils.DistributedType
[[autodoc]] utils.DynamoBackend
[[autodoc]] utils.LoggerType
[[autodoc]] utils.PrecisionType
[[autodoc]] utils.ProjectConfiguration
## Plugins
These are plugins that can be passed to the [`Accelerator`] object. While they are defined elsewhere in the documentation,
for convenience all of them are available to see here:
[[autodoc]] utils.DeepSpeedPlugin
[[autodoc]] utils.FullyShardedDataParallelPlugin
[[autodoc]] utils.GradientAccumulationPlugin
[[autodoc]] utils.MegatronLMPlugin
[[autodoc]] utils.TorchDynamoPlugin
## Data Manipulation and Operations
These include data operations that mimic the same `torch` ops but can be used on distributed processes.
@ -51,11 +88,23 @@ These functionalities check the state of the current working environment includi
[[autodoc]] utils.is_bf16_available
[[autodoc]] utils.is_ipex_available
[[autodoc]] utils.is_mps_available
[[autodoc]] utils.is_npu_available
[[autodoc]] utils.is_torch_version
[[autodoc]] utils.is_tpu_available
## Environment Configuration
[[autodoc]] utils.is_xpu_available
## Environment Manipulation
[[autodoc]] utils.patch_environment
[[autodoc]] utils.clear_environment
[[autodoc]] utils.write_basic_config

View File

@ -15,13 +15,20 @@ rendered properly in your Markdown viewer.
# Quick tour
Let's have a look at the 🤗 Accelerate main features and traps to avoid.
This guide aims to help you get started with 🤗 Accelerate quickly. It covers the essential steps you need to take to
enable distributed training, as well as the adjustments that you need to make in some common scenarios.
## Main use
To help you navigate, the guide is split into two sections:
* [Getting Started with 🤗 Accelerate](#getting-started-with--accelerate): start here to learn how to modify your script to enable distributed training with 🤗 Accelerate
* [Common adaptations to the base case](#common-adaptations-to-the-base-case): check out this section for common deviations from the baseline scenario and what adjustments may need to be made to support them.
To use 🤗 Accelerate in your own script, you have to change four things:
## Getting started with 🤗 Accelerate
1. Import the [`Accelerator`] main class and instantiate one in an `accelerator` object:
### Enable distributed training in your script
To use 🤗 Accelerate in your own training script, you have to modify four things:
1. Import the [`Accelerator`] main class and instantiate one in an `accelerator` object.
```python
from accelerate import Accelerator
@ -29,27 +36,27 @@ from accelerate import Accelerator
accelerator = Accelerator()
```
This should happen as early as possible in your training script as it will initialize everything necessary for
distributed training. You don't need to indicate the kind of environment you are in (just one machine with a GPU, one
machines with several GPUs, several machines with multiple GPUs or a TPU), the library will detect this automatically.
Add this at the beginning of your training script as it will initialize everything necessary for distributed training.
You don't need to indicate the kind of environment you are in (a single machine with a GPU, a machine with several GPUs,
or several machines with multiple GPUs or a TPU), the library will detect this automatically.
2. Remove the call `.to(device)` or `.cuda()` for your model and input data. The `accelerator` object
will handle this for you and place all those objects on the right device for you. If you know what you're doing, you
can leave those `.to(device)` calls but you should use the device provided by the `accelerator` object:
`accelerator.device`.
2. Remove the `.to(device)` or `.cuda()` calls for your model and input data.
To fully deactivate the automatic device placement, pass along `device_placement=False` when initializing your
[`Accelerator`].
The `accelerator` object will handle placing these objects on the right device for you.
If you choose to leave those `.to(device)` calls, make sure to use the device provided by the `accelerator` object: `accelerator.device`.
<Tip warning={true}>
If you place your objects manually on the proper device, be careful to create your optimizer after putting your
You can fully deactivate the automatic device placement by passing along `device_placement=False` when
initializing the [`Accelerator`].
However, if you place your objects manually on the proper device, be careful to create your optimizer after putting your
model on `accelerator.device` or your training will fail on TPU.
</Tip>
3. Pass all objects relevant to training (optimizer, model, training dataloader, learning rate scheduler) to the
[`~Accelerator.prepare`] method. This will make sure everything is ready for training.
3. Pass all PyTorch objects relevant to training (optimizer, model, dataloader(s), learning rate scheduler) to the
[`~Accelerator.prepare`] method as soon as these objects are created, before starting your actual
training loop:
```python
model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
@ -57,60 +64,42 @@ model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
)
```
In particular, your training dataloader will be sharded across all GPUs/TPU cores available so that each one sees a
different portion of the training dataset. Also, the random states of all processes will be synchronized at the
beginning of each iteration through your dataloader, to make sure the data is shuffled the same way (if you decided to
use `shuffle=True` or any kind of random sampler).
**Important notes**:
* You should always pass the learning rate scheduler to [`~Accelerator.prepare`], however if the scheduler should *not* be stepped at each optimization step, pass `step_with_optimizer=False` to the [`Accelerator`] init.
* While you can send your dataloader to [`~Accelerator.prepare`] on its own (and there are cases for doing so, such as distributed inference), it's best to send it to [`~Accelerator.prepare`] together with the model and optimizer.
* If you wish to run distributed evaluation, send your validation dataloader to [`~Accelerator.prepare`] as well. There are some nuances to distributed validation, check the [Distributed evaluation](#add-distributed-evaluation) section of the guide.
* Any instruction using your training dataloader length (for instance if you want to log the number of total training
steps) should go after the call to [`~Accelerator.prepare`].
Passing `DataLoader` objects to the [`~Accelerator.prepare`] method ensures that your dataloader will be sharded across
all GPUs/TPU cores available so that each one sees a different portion of the training dataset. In other words, if there are 8 processes and a dataset of 64 items, each process will see 8 of these items per iteration. Also, the random states
of all processes will be synchronized at the beginning of each iteration through your dataloader, to make sure the data
is shuffled the same way (if you decided to use `shuffle=True` or any kind of random sampler).
<Tip>
The actual batch size for your training will be the number of devices used multiplied by the batch size you set in
your script: for instance training on 4 GPUs with a batch size of 16 set when creating the training dataloader will
train at an actual batch size of 64.
</Tip>
Alternatively, you can use the option `split_batches=True` when creating and initializing your
[`Accelerator`], in which case the batch size will always stay the same, whether you run your
script on 1, 2, 4, or 64 GPUs.
You should execute this instruction as soon as all objects for training are created, before starting your actual
training loop.
<Tip warning={true}>
You should only pass the learning rate scheduler to [`~Accelerator.prepare`] when the scheduler needs to be stepped
at each optimizer step.
</Tip>
<Tip warning={true}>
your script. For instance, training on 4 GPUs with a batch size of 16 set when creating the training dataloader will
train at an actual batch size of 64 (4 * 16).
If you want the batch size to remain the same regardless of how many GPUs the script is run on, you can use the
option `split_batches=True` when creating and initializing [`Accelerator`].
Your training dataloader may change length when going through this method: if you run on X GPUs, it will have its
length divided by X (since your actual batch size will be multiplied by X), unless you set
`split_batches=True`.
</Tip>
Any instruction using your training dataloader length (for instance if you want to log the number of total training
steps) should go after the call to [`~Accelerator.prepare`].
You can perfectly send your dataloader to [`~Accelerator.prepare`] on its own, but it's best to send the
model and optimizer to [`~Accelerator.prepare`] together.
You may or may not want to send your validation dataloader to [`~Accelerator.prepare`], depending on
whether you want to run distributed evaluation or not (see below).
4. Replace the line `loss.backward()` by `accelerator.backward(loss)`.
4. Replace the `loss.backward()` line with `accelerator.backward(loss)`.
And you're all set! With all these changes, your script will run on your local machine as well as on multiple GPUs or a
TPU! You can either use your favorite tool to launch the distributed training, or you can use the 🤗 Accelerate
launcher.
### Add distributed evaluation
## Distributed evaluation
You can perform regular evaluation in your training script, if you leave your validation dataloader out of the
You can perform regular evaluation in your training script if you leave your validation dataloader out of the
[`~Accelerator.prepare`] method. In this case, you will need to put the input data on the
`accelerator.device` manually.
@ -121,9 +110,9 @@ method:
validation_dataloader = accelerator.prepare(validation_dataloader)
```
As for your training dataloader, it will mean that (should you run your script on multiple devices) each device will
only see part of the evaluation data. This means you will need to group your predictions together. This is very easy to
do with the [`~Accelerator.gather_for_metrics`] method.
Same as with your training dataloader, each device will only see part of the evaluation data should you run your script
on multiple devices. This means you will need to group your predictions together which you can do with
the [`~Accelerator.gather_for_metrics`] method.
```python
for inputs, targets in validation_dataloader:
@ -142,11 +131,9 @@ for inputs, targets in validation_dataloader:
</Tip>
Any instruction using your training dataloader length (for instance if you need the number of total training steps
to create a learning rate scheduler) should go after the call to [`~Accelerator.prepare`].
Some data at the end of the dataset may be duplicated so the batch can be divided equally among all workers. As a result, metrics
should be calculated through the [`~Accelerator.gather_for_metrics`] method to automatically remove the duplicated data while gathering.
Some data at the end of the dataset may be duplicated so the batch can be divided equally among all workers. As a result,
metrics should be calculated through the [`~Accelerator.gather_for_metrics`] method to automatically remove the duplicated
data while gathering and provide a more accurate metric.
<Tip>
@ -165,36 +152,35 @@ should be calculated through the [`~Accelerator.gather_for_metrics`] method to a
</Tip>
## Launching your distributed script
### Launch your distributed script
You can use the regular commands to launch your distributed training (like `torch.distributed.run` for
PyTorch), they are fully compatible with 🤗 Accelerate.
PyTorch) - they are fully compatible with 🤗 Accelerate.
🤗 Accelerate also provides a CLI tool that unifies all launchers, so you only have to remember one command. To use it,
just run:
Alternatively, 🤗 Accelerate provides a CLI tool that unifies all launchers, so you only have to remember one command. \
To use it, run a quick configuration setup first on your machine and answer the questions:
```bash
accelerate config
```
on your machine and reply to the questions asked. This will save a *default_config.yaml* file in your cache folder for
🤗 Accelerate. That cache folder is (with decreasing order of priority):
At the end of the setup, a *default_config.yaml* file will be saved in your cache folder for 🤗 Accelerate. That cache
folder is (with decreasing order of priority):
- The content of your environment variable `HF_HOME` suffixed with *accelerate*.
- If it does not exist, the content of your environment variable `XDG_CACHE_HOME` suffixed with
*huggingface/accelerate*.
- If this does not exist either, the folder *~/.cache/huggingface/accelerate*
- If this does not exist either, the folder *~/.cache/huggingface/accelerate*.
You can also specify with the flag `--config_file` the location of the file you want to save.
Once this is done, you can test everything is going well on your setup by running:
By specifying the `--config_file` flag you can specify an alternative location of the configuration file.
Once the configuration setup is complete, you can test your setup by running:
```bash
accelerate test
```
This will launch a short script that will test the distributed environment. If it runs fine, you are ready for the next
step!
This will launch a short script that will test the distributed environment. If it runs without issues, you are ready for
the next step!
Note that if you specified a location for the config file in the previous step, you need to pass it here as well:
@ -214,19 +200,23 @@ If you stored the config file in a non-default location, you can indicate it to
accelerate launch --config_file path_to_config.yaml path_to_script.py --args_for_the_script
```
You can also override any of the arguments determined by your config file.
To see the complete list of parameters that you can pass in, run `accelerate launch -h`.
You can override any of the arguments determined by your config file. To see the complete list of parameters that you
can pass in, run `accelerate launch -h`. (You can also get help for more niche arguments by passing in partial commands, such as `accelerate launch --multi_gpu -h` for all `multi_gpu` args.)
Check out the [Launch tutorial](basic_tutorials/launch) for more information about launching your scripts.
Check out the [Launch tutorial](basic_tutorials/launch) for more information about launching your scripts.
## Common modifications of the base case
## Launching training from a notebook
The previous section covers the minimal essential steps to move a training script into a distributed setup with 🤗 Accelerate.
Here we describe common modifications/deviations from the base case scenario and the adjustments you need to make to accommodate for them.
In Accelerate 0.3.0, a new [`notebook_launcher`] has been introduced to help you launch your training
function from a notebook. This launcher supports launching a training with TPUs on Colab or Kaggle, as well as training
on several GPUs (if the machine on which you are running your notebook has them).
### Launch distributed training from a notebook
Just define a function responsible for your whole training and/or evaluation in a cell of the notebook, then execute a
Accelerate has a [`notebook_launcher`] to help you launch your training function from a
notebook. This launcher supports launching a training with TPUs on Colab or Kaggle, as well as training on several GPUs and machines
(if the machine on which you are running your notebook has them).
Define a function responsible for your whole training and/or evaluation in a cell of the notebook, then execute a
cell with the following code:
```python
@ -242,10 +232,9 @@ notebook_launcher(training_function)
</Tip>
Check out the [Notebook Launcher tutorial](basic_tutorials/notebook) for more information about training on TPUs.
Check out the [Notebook Launcher tutorial](basic_tutorials/notebook) for more information about training on TPUs.
## Training on TPU
### Specifics of training on TPU
If you want to launch your script on TPUs, there are a few caveats you should be aware of. Behind the scenes, the TPUs
will create a graph of all the operations happening in your training step (forward pass, backward pass and optimizer
@ -284,12 +273,7 @@ passed your model to [`~Accelerator.prepare`]) will break the tying. You will ne
after. You can find an example of this in the [run_clm_no_trainer](https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_clm.py) script in
the Transformers repository.
Check out the [TPU tutorial](concept_guides/training_tpu) for more information about training on TPUs.
## Other caveats
We list here all smaller issues you could have in your script conversion and how to resolve them.
Check out the [TPU tutorial](concept_guides/training_tpu) for more information about training on TPUs.
### Execute a statement only on one process
@ -323,14 +307,14 @@ For printing statements you only want executed once per machine, you can just re
`accelerator.print`.
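As a hedged sketch of both patterns (assuming `accelerator` is the [`Accelerator`] created earlier, and `save_results` stands in for your own function):
```python
if accelerator.is_main_process:
    # Runs on a single process only (the main process across all machines)
    save_results()

# Prints once per machine instead of once per process
accelerator.print("Finished evaluation")
```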
### Defer execution
### Defer execution on multiple GPUs
When you run your usual script, instructions are executed in order. Using 🤗 Accelerate to deploy your script on several
GPUs at the same time introduces a complication: while each process executes all instructions in order, some may be
faster than others.
You might need to wait for all processes to have reached a certain point before executing a given instruction. For
instance, you shouldn't save a model before being sure every process is done with training. To do this, just write the
instance, you shouldn't save a model before making sure every process is done with training. To do this, add the
following line in your code:
```
accelerator.wait_for_everyone()
```
@ -341,7 +325,7 @@ This instruction will block all the processes that arrive first until all the ot
point (if you run your script on just one GPU or CPU, this won't do anything).
### Saving/loading a model
### Save/load a model in a distributed setup
Saving the model you trained might need a bit of adjustment: first you should wait for all processes to reach that
point in the script as shown above, and then, you should unwrap your model before saving it. This is because when going
@ -349,15 +333,16 @@ through the [`~Accelerator.prepare`] method, your model may have been placed ins
which deals with the distributed training. This in turn means that saving your model state dictionary without taking
any precaution will take that potential extra layer into account, and you will end up with weights you can't load back
in your base model. The [`~Accelerator.save_model`] method will help you to achieve that. It will unwrap your model and save
the model state dictionnary.
the model state dictionary.
Here is an example:
```
accelerator.wait_for_everyone()
accelerator.save_model(model, save_directory)
```
The [`~Accelerator.save_model`] method can also save a model into sharded checkpoints or with safetensors format.
Here is an example:
The [`~Accelerator.save_model`] method can also save a model into sharded checkpoints or with safetensors format:
```python
accelerator.wait_for_everyone()
@ -376,15 +361,18 @@ unwrapped_model.load_state_dict(torch.load(path_to_checkpoint))
Note that since all the model parameters are references to tensors, this will load your weights inside `model`.
If you want to load a sharded checkpoint or a checkpoint with safetensors format into the model with a specific `device`, we recommend you to load it with [`~utils.load_checkpoint_in_model`] function. Here's an example:
If you want to load a sharded checkpoint or a checkpoint with safetensors format into the model with a specific `device`,
we recommend loading it with the [`~utils.load_checkpoint_in_model`] function. Here's an example:
```python
load_checkpoint_in_model(unwrapped_model, save_directory, device_map={"":device})
```
## Saving/loading entire states
When training your model, you may want to save the current state of the model, optimizer, random generators, and potentially LR schedulers to be restored in the _same script_.
### Save/load entire states
When training your model, you may want to save the current state of the model, optimizer, random generators, and potentially
learning rate schedulers to be restored in the _same script_.
You can use [`~Accelerator.save_state`] and [`~Accelerator.load_state`] respectively to do so.
To further customize where and how states are saved through [`~Accelerator.save_state`], the [`~utils.ProjectConfiguration`] class can be used. For example
@ -399,19 +387,19 @@ If you have registered any other stateful items to be stored through [`~Accelera
</Tip>
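As a hedged sketch of that pattern (the directory name and checkpoint path are illustrative, and assume `automatic_checkpoint_naming` is enabled):
```python
from accelerate import Accelerator
from accelerate.utils import ProjectConfiguration

config = ProjectConfiguration(project_dir="my_run", automatic_checkpoint_naming=True)
accelerator = Accelerator(project_config=config)

# ... prepare objects and train ...
accelerator.save_state()  # saves under e.g. my_run/checkpoints/checkpoint_0
accelerator.load_state("my_run/checkpoints/checkpoint_0")  # restores model, optimizer and RNG states
```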
### Gradient clipping
### Use gradient clipping
If you are using gradient clipping in your script, you should replace the calls to
`torch.nn.utils.clip_grad_norm_` or `torch.nn.utils.clip_grad_value_` with [`~Accelerator.clip_grad_norm_`]
and [`~Accelerator.clip_grad_value_`] respectively.
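A minimal sketch of that substitution inside a training loop (the clipping value of `1.0` is just an example):
```python
accelerator.backward(loss)

# Instead of torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
accelerator.clip_grad_norm_(model.parameters(), max_norm=1.0)

optimizer.step()
optimizer.zero_grad()
```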
### Mixed Precision training
### Train with mixed precision
If you are running your training in Mixed Precision with 🤗 Accelerate, you will get the best result with your loss being
computed inside your model (like in Transformer models for instance). Every computation outside of the model will be
executed in full precision (which is generally what you want for loss computation, especially if it involves a
softmax). However you might want to put your loss computation inside the *accelerator.autocast* context manager:
softmax). However, you might want to put your loss computation inside the [`~Accelerator.autocast`] context manager:
```
with accelerator.autocast():
@ -432,7 +420,7 @@ if not accelerator.optimizer_step_was_skipped:
lr_scheduler.step()
```
### Gradient Accumulation
### Use gradient accumulation
To perform gradient accumulation use [`~Accelerator.accumulate`] and specify a `gradient_accumulation_steps`.
This will also automatically ensure the gradients are synced or unsynced when on multi-device training, check if the step should
@ -451,70 +439,3 @@ for input, label in training_dataloader:
scheduler.step()
optimizer.zero_grad()
```
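As a hedged, more complete sketch of the `accumulate` pattern (variable and function names are illustrative):
```python
accelerator = Accelerator(gradient_accumulation_steps=2)
model, optimizer, training_dataloader, scheduler = accelerator.prepare(
    model, optimizer, training_dataloader, scheduler
)

for input, label in training_dataloader:
    # Gradients are only synchronized and applied every `gradient_accumulation_steps` batches
    with accelerator.accumulate(model):
        output = model(input)
        loss = loss_function(output, label)
        accelerator.backward(loss)
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()
```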
### DeepSpeed
DeepSpeed support is experimental, so the underlying API will evolve in the near future and may have some slight
breaking changes. In particular, 🤗 Accelerate does not yet support a DeepSpeed config you have written yourself; this
will be added in a future version.
<Tip warning={true}>
The [`notebook_launcher`] does not support the DeepSpeed integration yet.
</Tip>
## Internal mechanism
Internally, the library works by first analyzing the environment in which the script is launched to determine which
kind of distributed setup is used, how many different processes there are and which one the current script is in. All
that information is stored in the [`~AcceleratorState`].
This class is initialized the first time you instantiate an [`~Accelerator`] as well as performing any
specific initialization your distributed setup needs. Its state is then uniquely shared through all instances of
[`~state.AcceleratorState`].
Then, when calling [`~Accelerator.prepare`], the library:
- wraps your model(s) in the container adapted for the distributed setup,
- wraps your optimizer(s) in a [`~optimizer.AcceleratedOptimizer`],
- creates a new version of your dataloader(s) in a [`~data_loader.DataLoaderShard`].
While the model(s) and optimizer(s) are just put in simple wrappers, the dataloader(s) are re-created. This is mostly
because PyTorch does not let the user change the `batch_sampler` of a dataloader once it's been created and the
library handles the sharding of your data between processes by changing that `batch_sampler` to yield every other
`num_processes` batches.
The [`~data_loader.DataLoaderShard`] subclasses `DataLoader` to add the following functionality:
- it synchronizes the appropriate random number generator of all processes at each new iteration, to ensure any
randomization (like shuffling) is done the exact same way across processes.
- it puts the batches on the proper device before yielding them (unless you have opted out of
`device_placement=True`).
The random number generator synchronization will by default synchronize:
- the `generator` attribute of a given sampler (like the PyTorch `RandomSampler`) for PyTorch >= 1.6
- the main random number generator in PyTorch <=1.5.1
You can choose which random number generator(s) to synchronize with the `rng_types` argument of the main
[`Accelerator`]. In PyTorch >= 1.6, it is recommended to rely on a local `generator` to avoid
setting the same seed in the main random number generator in all processes.
<Tip warning={true}>
Synchronization of the main torch (or CUDA or XLA) random number generator will affect any other potential random
artifacts you could have in your dataset (like random data augmentation) in the sense that all processes will get
the same random numbers from the torch random modules (so will apply the same random data augmentation if it's
controlled by torch).
</Tip>
<Tip>
The randomization part of your custom sampler, batch sampler or iterable dataset should be done using a local
`torch.Generator` object (in PyTorch >= 1.6), see the traditional `RandomSampler`, as an example.
</Tip>
For more details about the internals, see the [Internals page](package_reference/torch_wrappers).

View File

@ -15,7 +15,13 @@ rendered properly in your Markdown viewer.
# Handling big models for inference
When loading a pre-trained model in PyTorch, the usual workflow looks like this:
One of the biggest advancements 🤗 Accelerate provides is the concept of [large model inference](../concept_guides/big_model_inference) wherein you can perform *inference* on models that cannot fully fit on your graphics card.
This tutorial will be broken down into two parts showcasing how to use both 🤗 Accelerate and 🤗 Transformers (a higher-level API) to make use of this idea.
## Using 🤗 Accelerate
For these tutorials, we'll assume a typical workflow for loading your model such that:
```py
import torch
@ -25,307 +31,120 @@ state_dict = torch.load(checkpoint_file)
my_model.load_state_dict(state_dict)
```
In plain English, those steps are:
1. Create the model with randomly initialized weights
2. Load the model weights (in a dictionary usually called a state dict) from the disk
3. Load those weights inside the model
Note that here we assume that `ModelClass` is a model that takes up more video-card memory than what can fit on your device (be it `mps` or `cuda`).
While this works very well for regularly sized models, this workflow has some clear limitations when we deal with a huge model: in step 1, we load a full version of the model in RAM, and spend some time randomly initializing the weights (which will be discarded in step 3). In step 2, we load another full version of the model in RAM, with the pre-trained weights. If you're loading a model with 6 billion parameters, this means you will need 24GB of RAM for each copy of the model, so 48GB in total (half of it to load the model in FP16).
<Tip warning={true}>
This API is quite new and still in its experimental stage. While we strive to provide a stable API, it's possible some small parts of the public API will change in the future.
</Tip>
## How the Process Works: A Quick Overview
<Youtube id="MWCSGj9jEAo" />
## How the Process Works: Working with Code
### Instantiating an empty model
The first tool 🤗 Accelerate introduces to help with big models is a context manager [`init_empty_weights`] that helps you initialize a model without using any RAM so that step 1 can be done on models of any size. Here is how it works:
The first step is to initialize an empty skeleton of the model, which won't take up any RAM, using the [`init_empty_weights`] context manager:
```py
from accelerate import init_empty_weights
with init_empty_weights():
my_model = ModelClass(...)
```
For instance:
With this, `my_model` is currently "parameterless", hence leaving a smaller footprint than what one would normally get by loading it directly onto the CPU.
```py
with init_empty_weights():
model = nn.Sequential(*[nn.Linear(10000, 10000) for _ in range(1000)])
```
Next, we need to load the weights into our model so we can perform inference.
initializes an empty model with a bit more than 100B parameters. Behind the scenes, this relies on the meta device introduced in PyTorch 1.9. During the initialization under the context manager, each time a parameter is created, it is instantly moved to that device.
For this we will use [`load_checkpoint_and_dispatch`], which as the name implies will load a checkpoint inside your empty model and dispatch the weights for each layer across all the devices you have available (GPU/MPS and CPU RAM).
<Tip warning={true}>
To determine how this `dispatch` can be performed, generally specifying `device_map="auto"` will be good enough as 🤗 Accelerate
will attempt to fill all the space in your GPU(s) first, then offload the rest to the CPU, and finally, if there is not enough RAM, load what remains onto the disk (the absolute slowest option).
You can't move a model initialized like this on CPU or another device directly, since it doesn't have any data. It's also very likely that a forward pass with that empty model will fail, as not all operations are supported on the meta device.
<Tip>
For more details on designing your own device map, see this section of the [concept guide](../concept_guides/big_model_inference#designing-a-device-map)
</Tip>
### Sharded checkpoints
It's possible your model is so big that even a single copy won't fit in RAM. That doesn't mean it can't be loaded: if you have one or several GPUs, that is more memory available to store your model. In this case, it's better if your checkpoint is split into several smaller files that we call checkpoint shards.
🤗 Accelerate will handle sharded checkpoints as long as you follow the following format: your checkpoint should be in a folder, with several files containing the partial state dicts, and there should be an index in JSON format that contains a dictionary mapping parameter names to the file containing their weights. You can easily shard your model with [`~Accelerator.save_model`]. For instance, we could have a folder containing:
```bash
first_state_dict.bin
index.json
second_state_dict.bin
```
with index.json being the following file:
```
{
"linear1.weight": "first_state_dict.bin",
"linear1.bias": "first_state_dict.bin",
"linear2.weight": "second_state_dict.bin",
"linear2.bias": "second_state_dict.bin"
}
```
and `first_state_dict.bin` containing the weights for `"linear1.weight"` and `"linear1.bias"`, and `second_state_dict.bin` containing the ones for `"linear2.weight"` and `"linear2.bias"`.
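As a hedged sketch, sharding a checkpoint like this with [`~Accelerator.save_model`] could look as follows (the shard size is arbitrary):
```python
accelerator.save_model(model, save_directory, max_shard_size="1GB", safe_serialization=True)
```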
### Loading weights
The second tool 🤗 Accelerate introduces is a function [`load_checkpoint_and_dispatch`], that will allow you to load a checkpoint inside your empty model. This supports full checkpoints (a single file containing the whole state dict) as well as sharded checkpoints. It will also automatically dispatch those weights across the devices you have available (GPUs, CPU RAM), so if you are loading a sharded checkpoint, the maximum RAM usage will be the size of the biggest shard.
If you want to use big model inference with 🤗 Transformers models, check out this [documentation](https://huggingface.co/docs/transformers/main/en/main_classes/model#large-model-loading).
Here is how we can use this to load the [GPT2-1.5B](https://huggingface.co/marcsun13/gpt2-xl-linear-sharded) model.
Let's download the sharded version of this model.
```bash
pip install huggingface_hub
```
```py
from huggingface_hub import snapshot_download
checkpoint = "marcsun13/gpt2-xl-linear-sharded"
weights_location = snapshot_download(repo_id=checkpoint)
```
In order to initialize the model, we will use the library minGPT.
```bash
git clone https://github.com/karpathy/minGPT.git
pip install minGPT/
```
```py
from accelerate import init_empty_weights
from mingpt.model import GPT
model_config = GPT.get_default_config()
model_config.model_type = 'gpt2-xl'
model_config.vocab_size = 50257
model_config.block_size = 1024
with init_empty_weights():
model = GPT(model_config)
```
Then, load the checkpoint we just downloaded with:
See an example below:
```py
from accelerate import load_checkpoint_and_dispatch
model = load_checkpoint_and_dispatch(
model, checkpoint=weights_location, device_map="auto", no_split_module_classes=['Block']
model, checkpoint=checkpoint_file, device_map="auto"
)
```
By passing `device_map="auto"`, we tell 🤗 Accelerate to determine automatically where to put each layer of the model depending on the available resources:
- first, we use the maximum space available on the GPU(s)
- if we still need space, we store the remaining weights on the CPU
- if there is not enough RAM, we store the remaining weights on the hard drive as memory-mapped tensors
<Tip>
`no_split_module_classes=["Block"]` indicates that the modules that are `Block` should not be split on different devices. You should set here all blocks that include a residual connection of some kind.
You can see the `device_map` that 🤗 Accelerate picked by accessing the `hf_device_map` attribute of your model:
```py
model.hf_device_map
```
```python out
{'transformer.wte': 0,
'transformer.wpe': 0,
'transformer.drop': 0,
'transformer.h.0': 0,
'transformer.h.1': 0,
'transformer.h.2': 0,
'transformer.h.3': 0,
'transformer.h.4': 0,
'transformer.h.5': 0,
'transformer.h.6': 0,
'transformer.h.7': 0,
'transformer.h.8': 0,
'transformer.h.9': 0,
'transformer.h.10': 0,
'transformer.h.11': 0,
'transformer.h.12': 0,
'transformer.h.13': 0,
'transformer.h.14': 0,
'transformer.h.15': 0,
'transformer.h.16': 0,
'transformer.h.17': 0,
'transformer.h.18': 0,
'transformer.h.19': 0,
'transformer.h.20': 0,
'transformer.h.21': 0,
'transformer.h.22': 1,
'transformer.h.23': 1,
'transformer.h.24': 1,
'transformer.h.25': 1,
'transformer.h.26': 1,
'transformer.h.27': 1,
'transformer.h.28': 1,
'transformer.h.29': 1,
'transformer.h.30': 1,
'transformer.h.31': 1,
'transformer.h.32': 1,
'transformer.h.33': 1,
'transformer.h.34': 1,
'transformer.h.35': 1,
'transformer.h.36': 1,
'transformer.h.37': 1,
'transformer.h.38': 1,
'transformer.h.39': 1,
'transformer.h.40': 1,
'transformer.h.41': 1,
'transformer.h.42': 1,
'transformer.h.43': 1,
'transformer.h.44': 1,
'transformer.h.45': 1,
'transformer.h.46': 1,
'transformer.h.47': 1,
'transformer.ln_f': 1,
'lm_head': 1}
```
You can also design your `device_map` yourself if you prefer to explicitly decide where each layer should be. In this case, the command above becomes:
```py
model = load_checkpoint_and_dispatch(model, checkpoint=weights_location, device_map=my_device_map)
```
### Run the model
Now that we have done this, our model lies across several devices, and maybe the hard drive. But it can still be used as a regular PyTorch model:
```py
from mingpt.bpe import BPETokenizer
tokenizer = BPETokenizer()
inputs = tokenizer("Hello, my name is").to(0)
outputs = model.generate(inputs, max_new_tokens=10, do_sample=False)[0]
tokenizer.decode(outputs.cpu().squeeze())
```
Behind the scenes, 🤗 Accelerate added hooks to the model, so that:
- at each layer, the inputs are put on the right device (so even if your model is spread across several GPUs, it works)
- for the weights offloaded on the CPU, they are put on a GPU just before the forward pass and cleaned up just after
- for the weights offloaded on the hard drive, they are loaded in RAM then put on a GPU just before the forward pass and cleaned up just after
This way, your model can run for inference even if it doesn't fit on one of the GPUs or the CPU RAM!
<Tip warning={true}>
This only supports the inference of your model, not training. Most of the computation happens behind `torch.no_grad()` context managers to avoid spending some GPU memory with intermediate activations.
If there are certain "chunks" of layers that shouldn't be split, you can pass them in as `no_split_module_classes`. Read more about it [here](../concept_guides/big_model_inference#loading-weights)
</Tip>
### Designing a device map
You can let 🤗 Accelerate handle the device map computation by setting `device_map` to one of the supported options (`"auto"`, `"balanced"`, `"balanced_low_0"`, `"sequential"`) or create one yourself if you want more control over where each layer should go.
<Tip>
You can derive all sizes of the model (and thus compute a `device_map`) on a model that is on the meta device.
Also to save on memory (such as if the `state_dict` will not fit in RAM), a model's weights can be divided and split into multiple checkpoint files. Read more about it [here](../concept_guides/big_model_inference#sharded-checkpoints)
</Tip>
All the options will produce the same result when you don't have enough GPU memory to accommodate the whole model (which is to fit everything it can on the GPU, then offload weights to the CPU or even to the disk if there is not enough RAM).
Now that the model is dispatched fully, you can perform inference as normal with the model:
When you have more GPU memory available than the model size, here is the difference between each option:
- `"auto"` and `"balanced"` evenly split the model on all available GPUs, making it possible for you to use a batch size greater than 1.
- `"balanced_low_0"` evenly splits the model on all GPUs except the first one, and only puts on GPU 0 what does not fit on the others. This option is great when you need to use GPU 0 for some processing of the outputs, like when using the `generate` function for Transformers models
- `"sequential"` will fit what it can on GPU 0, then move on GPU 1 and so forth (so won't use the last GPUs if it doesn't need to).
```py
input = torch.randn(2,3)
input = input.to("cuda")
output = model(input)
```
What will happen now is each time the input gets passed through a layer, it will be sent from the CPU to the GPU (or disk to CPU to GPU), the output is calculated, and then the layer is pulled back off the GPU going back down the line. While this adds some overhead to the inference being performed, through this method it is possible to run **any size model** on your system, as long as the largest layer is capable of fitting on your GPU.
<Tip>
The options `"auto"` and `"balanced"` produce the same results for now, but the behavior of `"auto"` might change in the future if we find a strategy that makes more sense, while `"balanced"` will stay stable.
Multiple GPUs can be utilized, however this is considered "model parallelism" and as a result only one GPU will be active at a given moment, waiting for the prior one to send it the output. You should launch your script normally with `python`
and do not need `torchrun`, `accelerate launch`, etc.
</Tip>
First note that you can limit the memory used on each GPU by using the `max_memory` argument (available in [`infer_auto_device_map`] and in all functions using it). When setting `max_memory`, you should pass along a dictionary containing the GPU identifiers (for instance `0`, `1` etc.) and the `"cpu"` key for the maximum RAM you want to use for CPU offload. The values can either be an integer (in bytes) or a string representing a number with its unit, such as `"10GiB"` or `"10GB"`.
For a visual representation of this, check out the animation below:
Here is an example where we don't want to use more than 10GiB on each of the two GPUs and no more than 30GiB of CPU RAM for the model weights:
<Youtube id="MWCSGj9jEAo" />
```python
from accelerate import infer_auto_device_map
### Complete Example
device_map = infer_auto_device_map(my_model, max_memory={0: "10GiB", 1: "10GiB", "cpu": "30GiB"})
Below is the full example showcasing what we performed above:
```py
import torch
from accelerate import init_empty_weights, load_checkpoint_and_dispatch
with init_empty_weights():
model = MyModel(...)
model = load_checkpoint_and_dispatch(
model, checkpoint=checkpoint_file, device_map="auto"
)
input = torch.randn(2,3)
input = input.to("cuda")
output = model(input)
```
<Tip warning={true}>
## Using 🤗 Transformers, 🤗 Diffusers, and other 🤗 Open Source Libraries
When a first allocation happens in PyTorch, it loads CUDA kernels which take about 1-2GB of memory depending on the GPU. Therefore you always have less usable memory than the actual size of the GPU. To see how much memory is actually used do `torch.ones(1).cuda()` and look at the memory usage.
Libraries that support 🤗 Accelerate big model inference include all of the earlier logic in their `from_pretrained` constructors.
Therefore when you create memory maps with `max_memory` make sure to adjust the available memory accordingly to avoid out-of-memory errors.
These operate by specifying a string representing the model to download from the [🤗 Hub](https://hf.co/models) and then denoting `device_map="auto"` along with a few extra parameters.
</Tip>
As a brief example, we will look at using `transformers` and loading in Big Science's T0pp model.
Additionally, if you do some additional operations with your outputs without placing them back on the CPU (for instance inside the `generate` method of Transformers) and if you placed your inputs on a GPU, that GPU will consume more memory than the others (Accelerate always place the output back to the device of the input). Therefore if you would like to optimize the maximum batch size and you have many GPUs, give the first GPU less memory. For example, with BLOOM-176B on 8x80 A100 setup, the close-to-ideal map is:
```py
from transformers import AutoModelForSeq2SeqLM
```python
max_memory = {0: "30GIB", 1: "46GIB", 2: "46GIB", 3: "46GIB", 4: "46GIB", 5: "46GIB", 6: "46GIB", 7: "46GIB"}
```
as you can see we gave the remaining 7 GPUs ~50% more memory than GPU 0.
If you opt to fully design the `device_map` yourself, it should be a dictionary with keys being module names of your model and values being a valid device identifier (for instance an integer for the GPUs) or `"cpu"` for CPU offload, `"disk"` for disk offload. The keys need to cover the whole model, you can then define your device map as you wish: for instance, if your model has two blocks (let's say `block1` and `block2`) which each contain three linear layers (let's say `linear1`, `linear2` and `linear3`), a valid device map can be:
```python
device_map = {"block1": 0, "block2": 1}
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", device_map="auto")
```
another one that is valid could be:
After loading the model in, the initial steps from before to prepare a model have all been done and the model is fully
ready to make use of all the resources in your machine. Through these constructors, you can also save *more* memory by
specifying the precision the model is loaded into as well, through the `torch_dtype` parameter, such as:
```python
device_map = {"block1": 0, "block2.linear1": 0, "block2.linear2": 1, "block2.linear3": 1}
```py
from transformers import AutoModelForSeq2SeqLM
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", device_map="auto", torch_dtype=torch.float16)
```
On the other hand, this one is not valid as it does not cover every parameter of the model:
To learn more about this, check out the 🤗 Transformers documentation available [here](https://huggingface.co/docs/transformers/main/en/main_classes/model#large-model-loading).
```python
device_map = {"block1": 0, "block2.linear1": 1, "block2.linear2": 1}
```
## Where to go from here
<Tip>
To be the most efficient, make sure your device map puts the parameters on the GPUs in a sequential manner (e.g. don't put one of the first weights on GPU 0, then weights on GPU 1 and the last weight back to GPU 0) to avoid making many transfers of data between the GPUs.
</Tip>
## Limits and further development
We are aware of the current limitations in the API:
- While this could theoretically work on just one CPU with potential disk offload, you need at least one GPU to run this API. This will be fixed in further development.
- [`infer_auto_device_map`] (or `device_map="auto"` in [`load_checkpoint_and_dispatch`]) tries to maximize GPU and CPU RAM it sees available when you execute it. While PyTorch is very good at managing GPU RAM efficiently (and giving it back when not needed), it's not entirely true with Python and CPU RAM. Therefore, an automatically computed device map might be too intense on the CPU. Move a few modules to the disk device if you get crashes due to a lack of RAM.
- [`infer_auto_device_map`] (or `device_map="auto"` in [`load_checkpoint_and_dispatch`]) attributes devices sequentially (to avoid moving things back and forth) so if your first layer is bigger than the size of the GPU you have, it will end up with everything on the CPU/Disk.
- [`load_checkpoint_and_dispatch`] and [`load_checkpoint_in_model`] do not perform any check on the correctness of your state dict compared to your model at the moment (this will be fixed in a future version), so you may get some weird errors if trying to load a checkpoint with mismatched or missing keys.
- The model parallelism used when your model is split on several GPUs is naive and not optimized, meaning that only one GPU works at a given time while the others sit idle.
- When weights are offloaded on the CPU/hard drive, there is no pre-fetching (yet, we will work on this for future versions) which means the weights are put on the GPU when they are needed and not before.
- Hard-drive offloading might be very slow if the hardware you run on does not have fast communication between disk and CPU (like NVMes).
## Where to go from here
For a much more detailed look at big model inference, be sure to check out the [Conceptual Guide on it](../concept_guides/big_model_inference).

View File

@ -0,0 +1,93 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Debugging Distributed Operations
When running scripts in a distributed fashion, functions such as [`Accelerator.gather`] and [`Accelerator.reduce`] (and others) are often necessary to grab tensors across devices and perform certain operations on them. However, if the tensors being grabbed do not have the proper shapes, your code will hang forever. The only sign that this is happening is hitting a timeout exception from `torch.distributed`, which can get quite costly as the timeout is usually 10 minutes.
Accelerate now has a `debug` mode which adds a negligible amount of time to each operation, but lets it verify that the inputs you are bringing in can *actually* perform the operation you want **without** hitting this timeout problem!
## Visualizing the problem
To have a tangible example of this issue, let's take the following setup (on 2 GPUs):
```python
import torch

from accelerate import PartialState
from accelerate.utils import broadcast

state = PartialState()
if state.process_index == 0:
    tensor = torch.tensor([[0.0, 1, 2, 3, 4]]).to(state.device)
else:
    tensor = torch.tensor([[[0.0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]]).to(state.device)

broadcast_tensor = broadcast(tensor)
print(broadcast_tensor)
```
We've created a single tensor on each device, with two radically different shapes. With this setup, if we want to perform an operation such as [`utils.broadcast`], we would forever hit a timeout because `torch.distributed` requires that these operations have the **exact same shape** across all processes for them to work.
If you run this yourself, you will find that `broadcast_tensor` gets printed on the main process, but its results won't quite be right, and the other processes will simply hang, never printing anything:
```
>>> tensor([[0, 1, 2, 3, 4]], device='cuda:0')
```
## The solution
By enabling Accelerate's operational debug mode, Accelerate will properly find and catch errors such as this and provide a very clear traceback immediately:
```
Traceback (most recent call last):
  File "/home/zach_mueller_huggingface_co/test.py", line 18, in <module>
    main()
  File "/home/zach_mueller_huggingface_co/test.py", line 15, in main
    broadcast_tensor = broadcast(tensor)
  File "/home/zach_mueller_huggingface_co/accelerate/src/accelerate/utils/operations.py", line 303, in wrapper
accelerate.utils.operations.DistributedOperationException: Cannot apply desired operation due to shape mismatches. All shapes across devices must be valid.

Operation: `accelerate.utils.operations.broadcast`
Input shapes:
  - Process 0: [1, 5]
  - Process 1: [1, 2, 5]
```
This explains that the shapes across our devices were *not* the same, and that we should ensure that they match properly to be compatible. Typically this means that there is either an extra dimension, or certain dimensions are incompatible with the operation.
To enable this, do one of the following:
Enable it through the questionnaire during `accelerate config` (recommended)
From the CLI:
```
accelerate launch --debug {my_script.py} --arg1 --arg2
```
As an environment variable (which avoids the need to modify your `accelerate config`):
```
ACCELERATE_DEBUG_MODE="1" accelerate launch {my_script.py} --arg1 --arg2
```
Manually changing the `config.yaml` file:
```diff
compute_environment: LOCAL_MACHINE
+debug: true
```
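For completeness, once every process builds a tensor of the same shape, the hang in the earlier snippet disappears. The minimal variant below (same setup as above, shapes made to match) runs cleanly with or without debug mode enabled.
```python
import torch

from accelerate import PartialState
from accelerate.utils import broadcast

state = PartialState()
# Every process now creates a tensor of the same shape, so `broadcast` succeeds.
tensor = torch.tensor([[0.0, 1, 2, 3, 4]]).to(state.device) + state.process_index
broadcast_tensor = broadcast(tensor)
print(broadcast_tensor)  # every process prints the values coming from process 0
```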

View File

@ -585,8 +585,10 @@ Mixed precision type: fp16
ds_config: {'bf16': {'enabled': False}, 'zero_optimization': {'stage': 3, 'stage3_gather_16bit_weights_on_model_save': True, 'offload_optimizer': {'device': 'nvme'}, 'offload_param': {'device': 'cpu'}}, 'gradient_clipping': 1.0, 'train_batch_size': 'auto', 'train_micro_batch_size_per_gpu': 'auto', 'gradient_accumulation_steps': 5, 'steps_per_print': inf, 'fp16': {'enabled': True, 'auto_cast': True}}
```
**Note**: Remaining `"auto"` values are handled in `accelerator.prepare()` call as explained in point 2 of
**Note**:
1. Remaining `"auto"` values are handled in `accelerator.prepare()` call as explained in point 2 of
`Important code changes when using DeepSpeed Config File`.
2. Only when `gradient_accumulation_steps` is `auto`, the value passed while creating `Accelerator` object via `Accelerator(gradient_accumulation_steps=k)` will be used. When using DeepSpeed Plugin, the value from it will be used and it will overwrite the value passed while creating Accelerator object.
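To make this precedence concrete, here is a small illustrative sketch (the config path and step count are placeholders, not taken from this guide): assuming the DeepSpeed config file sets `gradient_accumulation_steps` to `"auto"`, the value passed to `Accelerator` below is the one that takes effect; a concrete number in the config file would win instead.
```python
from accelerate import Accelerator, DeepSpeedPlugin

# Placeholder config path; assume it contains `"gradient_accumulation_steps": "auto"`.
deepspeed_plugin = DeepSpeedPlugin(hf_ds_config="ds_config.json")

# Because the config value is "auto", these 4 accumulation steps are the ones used.
# If the config file specified a concrete number, it would override this argument.
accelerator = Accelerator(deepspeed_plugin=deepspeed_plugin, gradient_accumulation_steps=4)
```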
## Saving and loading

View File

@ -120,7 +120,7 @@ needs to be the same length. Basic inference does not require this.
For instance:
```python
from accelerate import PartialState # Can also be Accelerator or AcceleratorStaet
from accelerate import PartialState # Can also be Accelerator or AcceleratorState
from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)

View File

@ -37,14 +37,14 @@ for batch in dataloader:
<div class="block dark:hidden">
<iframe
src="https://muellerzr-accelerate-examples.hf.space?__theme=light"
src="https://hf-accelerate-accelerate-examples.hf.space?__theme=light"
width="850"
height="1600"
></iframe>
</div>
<div class="hidden dark:block">
<iframe
src="https://muellerzr-accelerate-examples.hf.space?__theme=dark"
src="https://hf-accelerate-accelerate-examples.hf.space?__theme=dark"
width="850"
height="1600"
></iframe>

View File

@ -49,7 +49,7 @@ fsdp_config:
fsdp_offload_params: false
fsdp_sharding_strategy: 1
fsdp_state_dict_type: FULL_STATE_DICT
fsdp_transformer_layer_cls_to_wrap: GPT2Block
fsdp_transformer_layer_cls_to_wrap: BertLayer
machine_rank: 0
main_process_ip: null
main_process_port: null
@ -67,19 +67,38 @@ accelerate launch examples/nlp_example.py
Currently, `Accelerate` supports the following config through the CLI:
```bash
`Sharding Strategy`: [1] FULL_SHARD (shards optimizer states, gradients and parameters), [2] SHARD_GRAD_OP (shards optimizer states and gradients), [3] NO_SHARD
`Sharding Strategy`: [1] FULL_SHARD (shards optimizer states, gradients and parameters), [2] SHARD_GRAD_OP (shards optimizer states and gradients), [3] NO_SHARD (DDP), [4] HYBRID_SHARD (shards optimizer states, gradients and parameters within each node while each node has full copy), [5] HYBRID_SHARD_ZERO2 (shards optimizer states and gradients within each node while each node has full copy)
`Offload Params`: Decides Whether to offload parameters and gradients to CPU
`Auto Wrap Policy`: [1] TRANSFORMER_BASED_WRAP, [2] SIZE_BASED_WRAP, [3] NO_WRAP [4] "HYBRID_SHARD" [5] "HYBRID_SHARD_ZERO2"
`Auto Wrap Policy`: [1] TRANSFORMER_BASED_WRAP, [2] SIZE_BASED_WRAP, [3] NO_WRAP
`Transformer Layer Class to Wrap`: When using `TRANSFORMER_BASED_WRAP`, user specifies comma-separated string of transformer layer class names (case-sensitive) to wrap, e.g.,
`BertLayer`, `GPTJBlock`, `T5Block`, `BertLayer,BertEmbeddings,BertSelfOutput`...
This is important because submodules that share weights (e.g., embedding layer) should not end up in different FSDP wrapped units.
Using this policy, wrapping happens for each block containing Multi-Head Attention followed by a couple of MLP layers.
Remaining layers, including the shared embeddings, are conveniently wrapped in the same outermost FSDP unit.
Therefore, use this for transformer based models.
You can use the `model._no_split_modules` for 🤗 Transformer models by answering `yes` to
`Do you want to use the model's `_no_split_modules` to wrap. Only applicable for 🤗 Transformers`.
It will try to use `model._no_split_modules` when available.
`Min Num Params`: minimum number of parameters when using `SIZE_BASED_WRAP`
`Backward Prefetch`: [1] BACKWARD_PRE, [2] BACKWARD_POST, [3] NO_PREFETCH
`State Dict Type`: [1] FULL_STATE_DICT, [2] LOCAL_STATE_DICT, [3] SHARDED_STATE_DICT
`State Dict Type`: [1] FULL_STATE_DICT, [2] LOCAL_STATE_DICT, [3] SHARDED_STATE_DICT
`Forward Prefetch`: if True, then FSDP explicitly prefetches the next upcoming
all-gather while executing in the forward pass. only use with Static graphs.
`Use Orig Params`: If True, allows non-uniform `requires_grad` during init, which means support for interspersed frozen and trainable parameters.
Useful in cases such as parameter-efficient fine-tuning.
Please refer this [blog](https://dev-discuss.pytorch.org/t/rethinking-pytorch-fully-sharded-data-parallel-fsdp-from-first-principles/1019)
`CPU RAM Efficient Model loading`: If True, only the first process loads the pretrained model checkpoint while all other processes have empty weights. Only applicable for 🤗 Transformers models. This should be set to False if you experience errors when loading the pretrained 🤗 Transformers model via the `from_pretrained` method. When using this, `Sync Module States` needs to be True, else all the processes except the main process would have random empty weights leading to unexpected behaviour during training.
`Sync Module States`: If True, each individually wrapped FSDP unit will broadcast module parameters from rank 0
`Forward Prefetch`: If True, then FSDP explicitly prefetches the next upcoming all-gather while executing in the forward pass
```
For additional and more nuanced control, you can specify other FSDP parameters via `FullyShardedDataParallelPlugin`.
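As a reference point, here is a minimal sketch of that pattern (illustrative values, not a recommendation): it builds the plugin with the standard PyTorch FSDP state-dict config classes and hands it to the `Accelerator`.
```python
from torch.distributed.fsdp.fully_sharded_data_parallel import FullOptimStateDictConfig, FullStateDictConfig

from accelerate import Accelerator, FullyShardedDataParallelPlugin

# Illustrative choice: gather full (unsharded) model and optimizer state dicts
# on every rank, without offloading them to CPU first.
fsdp_plugin = FullyShardedDataParallelPlugin(
    state_dict_config=FullStateDictConfig(offload_to_cpu=False, rank0_only=False),
    optim_state_dict_config=FullOptimStateDictConfig(offload_to_cpu=False, rank0_only=False),
)

accelerator = Accelerator(fsdp_plugin=fsdp_plugin)
```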
@ -137,7 +156,7 @@ When using transformers `save_pretrained`, pass `state_dict=accelerator.get_stat
args.output_dir,
is_main_process=accelerator.is_main_process,
save_function=accelerator.save,
+ state_dict=accelerator.get_state_dict(model),
+ state_dict=accelerator.get_state_dict(model, unwrap=False),
)
```

View File

@ -127,6 +127,11 @@ training on. 🤗 Accelerate automagically does this for you by default. Behind
Below is the finished implementation for performing gradient accumulation with 🤗 Accelerate
```python
from accelerate import Accelerator
accelerator = Accelerator(gradient_accumulation_steps=2)
model, optimizer, training_dataloader, scheduler = accelerator.prepare(
model, optimizer, training_dataloader, scheduler
)
for batch in training_dataloader:
with accelerator.accumulate(model):
inputs, targets = batch
@ -138,4 +143,74 @@ for batch in training_dataloader:
optimizer.zero_grad()
```
<Tip warning={true}>
It's important that **only one forward/backward** should be done inside the context manager `with accelerator.accumulate(model)`.
</Tip>
To learn more about what magic this wraps around, read the [Gradient Synchronization concept guide](../concept_guides/gradient_synchronization)
## Self-contained example
Here is a self-contained example that you can run to see gradient accumulation in action with 🤗 Accelerate:
```python
import torch
import copy
from accelerate import Accelerator
from accelerate.utils import set_seed
from torch.utils.data import TensorDataset, DataLoader
# seed
set_seed(0)
# define toy inputs and labels
x = torch.tensor([1., 2., 3., 4., 5., 6., 7., 8.])
y = torch.tensor([2., 4., 6., 8., 10., 12., 14., 16.])
gradient_accumulation_steps = 4
batch_size = len(x) // gradient_accumulation_steps
# define dataset and dataloader
dataset = TensorDataset(x, y)
dataloader = DataLoader(dataset, batch_size=batch_size)
# define model, optimizer and loss function
model = torch.zeros((1, 1), requires_grad=True)
model_clone = copy.deepcopy(model)
criterion = torch.nn.MSELoss()
model_optimizer = torch.optim.SGD([model], lr=0.02)
accelerator = Accelerator(gradient_accumulation_steps=gradient_accumulation_steps)
model, model_optimizer, dataloader = accelerator.prepare(model, model_optimizer, dataloader)
model_clone_optimizer = torch.optim.SGD([model_clone], lr=0.02)
print(f"initial model weight is {model.mean().item():.5f}")
print(f"initial model weight is {model_clone.mean().item():.5f}")
for i, (inputs, labels) in enumerate(dataloader):
with accelerator.accumulate(model):
inputs = inputs.view(-1, 1)
print(i, inputs.flatten())
labels = labels.view(-1, 1)
outputs = inputs @ model
loss = criterion(outputs, labels)
accelerator.backward(loss)
model_optimizer.step()
model_optimizer.zero_grad()
loss = criterion(x.view(-1, 1) @ model_clone, y.view(-1, 1))
model_clone_optimizer.zero_grad()
loss.backward()
model_clone_optimizer.step()
print(f"w/ accumulation, the final model weight is {model.mean().item():.5f}")
print(f"w/o accumulation, the final model weight is {model_clone.mean().item():.5f}")
```
```
initial model weight is 0.00000
initial model weight is 0.00000
0 tensor([1., 2.])
1 tensor([3., 4.])
2 tensor([5., 6.])
3 tensor([7., 8.])
w/ accumulation, the final model weight is 2.04000
w/o accumulation, the final model weight is 2.04000
```

View File

@ -0,0 +1,137 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Understanding how big of a model can fit on your machine
One very difficult aspect when exploring potential models to use on your machine is knowing just how big of a model will *fit* into memory with your current graphics card (such as loading the model onto CUDA).
To help alleviate this, 🤗 Accelerate has a CLI interface through `accelerate estimate-memory`. This tutorial will
help walk you through using it, what to expect, and at the end link to the interactive demo hosted on the 🤗 Hub which will
even let you post those results directly on the model repo!
Currently we support searching for models that can be used in `timm` and `transformers`.
<Tip>
This API will load the model into memory on the `meta` device, so we are not actually downloading
and loading the full weights of the model into memory, nor do we need to. As a result it's
perfectly fine to measure 8 billion parameter models (or more), without having to worry about
if your CPU can handle it!
</Tip>
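If you want to see what this meta-device trick looks like in code, the hypothetical sketch below mirrors the idea for a `transformers` model: the weights are never materialized, yet the parameter count (and hence a rough, dtype-dependent size) can still be computed. The model name is just an example, and buffers are ignored for simplicity.
```python
import torch

from accelerate import init_empty_weights
from transformers import AutoConfig, AutoModel

# Only the config is downloaded; all weights stay on the `meta` device.
config = AutoConfig.from_pretrained("bert-base-cased")
with init_empty_weights():
    model = AutoModel.from_config(config)

num_params = sum(p.numel() for p in model.parameters())
for dtype, bytes_per_param in [(torch.float32, 4), (torch.float16, 2)]:
    size_mb = num_params * bytes_per_param / 1024**2
    print(f"{dtype}: ~{size_mb:.0f} MB to load the weights")
```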
## Gradio Demos
Below are a few gradio demos related to what was described above. The first is the official Hugging Face memory estimation space, utilizing Accelerate directly:
<div class="block dark:hidden">
<iframe
src="https://hf-accelerate-model-memory-usage.hf.space?__theme=light"
width="850"
height="1600"
></iframe>
</div>
<div class="hidden dark:block">
<iframe
src="https://hf-accelerate-model-memory-usage.hf.space?__theme=dark"
width="850"
height="1600"
></iframe>
</div>
A community member has taken the idea and expanded it further, allowing you to filter models directly and see if you can run a particular LLM given GPU constraints and LoRA configurations. To play with it, see [here](https://huggingface.co/spaces/Vokturz/can-it-run-llm) for more details.
## The Command
When using `accelerate estimate-memory`, you need to pass in the name of the model you want to use, potentially the framework
that model utilizes (if it can't be found automatically), and the data types you want the model to be loaded in with.
For example, here is how we can calculate the memory footprint for `bert-base-cased`:
```bash
accelerate estimate-memory bert-base-cased
```
This will download the `config.json` for `bert-base-cased`, load the model on the `meta` device, and report back how much space
it will use:
Memory Usage for loading `bert-base-cased`:
| dtype | Largest Layer | Total Size | Training using Adam |
|---------|---------------|------------|---------------------|
| float32 | 84.95 MB | 418.18 MB | 1.61 GB |
| float16 | 42.47 MB | 206.59 MB | 826.36 MB |
| int8 | 21.24 MB | 103.29 MB | 413.18 MB |
| int4 | 10.62 MB | 51.65 MB | 206.59 MB |
By default it will return all the supported dtypes (`int4` through `float32`), but if you are interested in specific ones these can be filtered.
### Specific libraries
If the source library cannot be determined automatically (like it could in the case of `bert-base-cased`), a library name can
be passed in.
```bash
accelerate estimate-memory HuggingFaceM4/idefics-80b-instruct --library_name transformers
```
Memory Usage for loading `HuggingFaceM4/idefics-80b-instruct`:
| dtype | Largest Layer | Total Size | Training using Adam |
|---------|---------------|------------|---------------------|
| float32 | 3.02 GB | 297.12 GB | 1.16 TB |
| float16 | 1.51 GB | 148.56 GB | 594.24 GB |
| int8 | 772.52 MB | 74.28 GB | 297.12 GB |
| int4 | 386.26 MB | 37.14 GB | 148.56 GB |
```bash
accelerate estimate-memory timm/resnet50.a1_in1k --library_name timm
```
Memory Usage for loading `timm/resnet50.a1_in1k`:
| dtype | Largest Layer | Total Size | Training using Adam |
|---------|---------------|------------|---------------------|
| float32 | 9.0 MB | 97.7 MB | 390.78 MB |
| float16 | 4.5 MB | 48.85 MB | 195.39 MB |
| int8 | 2.25 MB | 24.42 MB | 97.7 MB |
| int4 | 1.12 MB | 12.21 MB | 48.85 MB |
### Specific dtypes
As mentioned earlier, while we return `int4` through `float32` by default, any dtype can be used from `float32`, `float16`, `int8`, and `int4`.
To do so, pass them in after specifying `--dtypes`:
```bash
accelerate estimate-memory bert-base-cased --dtypes float32 float16
```
Memory Usage for loading `bert-base-cased`:
| dtype | Largest Layer | Total Size | Training using Adam |
|---------|---------------|------------|---------------------|
| float32 | 84.95 MB | 413.18 MB | 1.61 GB |
| float16 | 42.47 MB | 206.59 MB | 826.36 MB |
## Caveats with this calculator
This calculator will tell you how much memory is needed to purely load the model in, *not* to perform inference.
This calculation is accurate within a few % of the actual value, so it is a very good view of just how much memory it will take. For instance loading `bert-base-cased` actually takes `413.68 MB` when loaded on CUDA in full precision, and the calculator estimates `413.18 MB`.
When performing inference you can expect to add up to an additional 20% as found by [EleutherAI](https://blog.eleuther.ai/transformer-math/). We'll be conducting research into finding a more accurate estimate to these values, and will update
this calculator once done.

View File

@ -71,20 +71,20 @@ Finally, you need to set your quantization configuration with [`~utils.BnbQuanti
Here's an example for 8-bit quantization:
```py
from accelerate.utils import BnbQuantizationConfig
quantization_config = BnbQuantizationConfig(load_in_8bit=True, llm_int8_threshold = 6)
bnb_quantization_config = BnbQuantizationConfig(load_in_8bit=True, llm_int8_threshold = 6)
```
Here's an example for 4-bit quantization:
```py
from accelerate.utils import BnbQuantizationConfig
quantization_config = BnbQuantizationConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4")
bnb_quantization_config = BnbQuantizationConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4")
```
To quantize your empty model with the selected configuration, you need to use [`~utils.load_and_quantize_model`].
```py
from accelerate.utils import load_and_quantize_model
quantized_model = load_and_quantize_model(empty_model, weights_location=weights_location, quantization_config=quantization_config, device_map = "auto")
quantized_model = load_and_quantize_model(empty_model, weights_location=weights_location, bnb_quantization_config=bnb_quantization_config, device_map = "auto")
```
### Saving and loading 8-bit model
@ -97,7 +97,7 @@ accelerate = Accelerator()
new_weights_location = "path/to/save_directory"
accelerate.save_model(quantized_model, new_weights_location)
quantized_model_from_saved = load_and_quantize_model(empty_model, weights_location=new_weights_location, quantization_config=quantization_config, device_map = "auto")
quantized_model_from_saved = load_and_quantize_model(empty_model, weights_location=new_weights_location, bnb_quantization_config=bnb_quantization_config, device_map = "auto")
```
Note that 4-bit model serialization is currently not supported.
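Putting the pieces above together, a hypothetical end-to-end sketch for 8-bit loading might look like the following (the checkpoint name and weights location are placeholders, and `bitsandbytes` is assumed to be installed):
```python
from accelerate import init_empty_weights
from accelerate.utils import BnbQuantizationConfig, load_and_quantize_model
from transformers import AutoConfig, AutoModelForCausalLM

# Build an empty (meta-device) model from its config; placeholder checkpoint name.
config = AutoConfig.from_pretrained("gpt2")
with init_empty_weights():
    empty_model = AutoModelForCausalLM.from_config(config)

# Placeholder path to the saved weights on disk.
weights_location = "path/to/weights"

bnb_quantization_config = BnbQuantizationConfig(load_in_8bit=True, llm_int8_threshold=6.0)
quantized_model = load_and_quantize_model(
    empty_model,
    weights_location=weights_location,
    bnb_quantization_config=bnb_quantization_config,
    device_map="auto",
)
```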
@ -133,4 +133,4 @@ Note that you dont need to pass `device_map` when loading the model for train
### Example demo - running GPT2 1.5b on a Google Colab
Check out the Google Colab [demo](https://colab.research.google.com/drive/1T1pOgewAWVpR9gKpaEWw4orOrzPFb3yM?usp=sharing) for running quantized models on a GPT2 model. The GPT2-1.5B model checkpoint is in FP32 which uses 6GB of memory. After quantization, it uses 1.6GB with 8-bit modules and 1.2GB with 4-bit modules.
Check out the Google Colab [demo](https://colab.research.google.com/drive/1T1pOgewAWVpR9gKpaEWw4orOrzPFb3yM?usp=sharing) for running quantized models on a GPT2 model. The GPT2-1.5B model checkpoint is in FP32 which uses 6GB of memory. After quantization, it uses 1.6GB with 8-bit modules and 1.2GB with 4-bit modules.

View File

@ -243,39 +243,6 @@ def parse_args():
return args
# New Code #
def checkpoint_model(checkpoint_folder, ckpt_id, model, epoch, last_global_step, **kwargs):
"""Utility function for checkpointing model + optimizer dictionaries
The main purpose for this is to be able to resume training from that instant again
"""
checkpoint_state_dict = {
"epoch": epoch,
"last_global_step": last_global_step,
}
# Add extra kwargs too
checkpoint_state_dict.update(kwargs)
success = model.save_checkpoint(checkpoint_folder, ckpt_id, checkpoint_state_dict)
status_msg = f"checkpointing: checkpoint_folder={checkpoint_folder}, ckpt_id={ckpt_id}"
if success:
logging.info(f"Success {status_msg}")
else:
logging.warning(f"Failure {status_msg}")
return
# New Code #
def load_training_checkpoint(model, load_dir, tag=None, **kwargs):
"""Utility function for checkpointing model + optimizer dictionaries
The main purpose for this is to be able to resume training from that instant again
"""
_, checkpoint_state_dict = model.load_checkpoint(load_dir, tag=tag, **kwargs)
epoch = checkpoint_state_dict["epoch"]
last_global_step = checkpoint_state_dict["last_global_step"]
del checkpoint_state_dict
return (epoch, last_global_step)
# New Code #
def evaluate(args, model, eval_dataloader, accelerator, eval_dataset):
model.eval()
@ -302,9 +269,20 @@ def main():
# Initialize the accelerator. We will let the accelerator handle device placement for us in this example.
# If we're using tracking, we also need to initialize it here and it will by default pick up all supported trackers
# in the environment
# when using DeepSpeed, the `gradient_accumulation_steps` is properly set from the DeepSpeed plugin/config
# or from `accelerate launch` via `--gradient_accumulation_steps` else
# defaulting to the passed `args.gradient_accumulation_steps`
accelerator = (
Accelerator(log_with=args.report_to, logging_dir=args.output_dir) if args.with_tracking else Accelerator()
Accelerator(
log_with=args.report_to,
project_dir=args.output_dir,
gradient_accumulation_steps=args.gradient_accumulation_steps,
)
if args.with_tracking
else Accelerator(gradient_accumulation_steps=args.gradient_accumulation_steps)
)
# Make one log on every process with the configuration for debugging.
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
@ -538,17 +516,11 @@ def main():
model.tie_weights()
# Scheduler and math around the number of training steps.
# New Code
# Get gradient accumulation steps from deepspeed config if available
if accelerator.state.deepspeed_plugin is not None:
args.gradient_accumulation_steps = accelerator.state.deepspeed_plugin.deepspeed_config[
"gradient_accumulation_steps"
]
num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
num_update_steps_per_epoch = math.ceil(len(train_dataloader) / accelerator.gradient_accumulation_steps)
overrode_max_train_steps = False
if args.max_train_steps is None:
args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
overrode_max_train_steps = True
else:
args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch)
@ -575,16 +547,16 @@ def main():
)
# We need to recalculate our total training steps as the size of the training dataloader may have changed.
num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
num_update_steps_per_epoch = math.ceil(len(train_dataloader) / accelerator.gradient_accumulation_steps)
if overrode_max_train_steps:
args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
# Afterwards we recalculate our number of training epochs
args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch)
# Figure out how many steps we should save the Accelerator states
if hasattr(args.checkpointing_steps, "isdigit"):
checkpointing_steps = args.checkpointing_steps
if args.checkpointing_steps.isdigit():
checkpointing_steps = int(args.checkpointing_steps)
else:
checkpointing_steps = None
checkpointing_steps = args.checkpointing_steps
if checkpointing_steps is not None and checkpointing_steps.isdigit():
checkpointing_steps = int(checkpointing_steps)
# We need to initialize the trackers we use, and also store our configuration.
# The trackers initializes automatically on the main process.
@ -595,14 +567,16 @@ def main():
accelerator.init_trackers("clm_no_trainer", experiment_config)
# Train!
total_batch_size = args.per_device_train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps
total_batch_size = (
args.per_device_train_batch_size * accelerator.num_processes * accelerator.gradient_accumulation_steps
)
logger.info("***** Running training *****")
logger.info(f" Num examples = {len(train_dataset)}")
logger.info(f" Num Epochs = {args.num_train_epochs}")
logger.info(f" Instantaneous batch size per device = {args.per_device_train_batch_size}")
logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}")
logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}")
logger.info(f" Gradient Accumulation steps = {accelerator.gradient_accumulation_steps}")
logger.info(f" Total optimization steps = {args.max_train_steps}")
# Only show the progress bar once on each machine.
progress_bar = tqdm(range(args.max_train_steps), disable=not accelerator.is_local_main_process)
@ -613,45 +587,61 @@ def main():
# Potentially load in the weights and states from a previous save
if args.resume_from_checkpoint:
# New Code #
# Loads the DeepSpeed checkpoint from the specified path
_, last_global_step = load_training_checkpoint(
model,
args.resume_from_checkpoint,
**{"load_optimizer_states": True, "load_lr_scheduler_states": True},
)
accelerator.load_state(args.resume_from_checkpoint)
accelerator.print(f"Resumed from checkpoint: {args.resume_from_checkpoint}")
resume_step = last_global_step
starting_epoch = resume_step // len(train_dataloader)
resume_step -= starting_epoch * len(train_dataloader)
path = os.path.basename(args.resume_from_checkpoint)
training_difference = os.path.splitext(path)[0]
if "epoch" in training_difference:
starting_epoch = int(training_difference.replace("epoch_", "")) + 1
resume_step = None
completed_steps = starting_epoch * num_update_steps_per_epoch
else:
resume_step = int(training_difference.replace("step_", ""))
starting_epoch = resume_step // num_update_steps_per_epoch
resume_step -= starting_epoch * num_update_steps_per_epoch
completed_steps = resume_step
# update progress bar if resumed from checkpoint
progress_bar.update(completed_steps)
for epoch in range(starting_epoch, args.num_train_epochs):
model.train()
if args.with_tracking:
total_loss = 0
for step, batch in enumerate(train_dataloader):
# skip new `skip_first_batches` to skip the batches when resuming from ckpt
if args.resume_from_checkpoint and epoch == starting_epoch and resume_step is not None:
# We need to skip steps until we reach the resumed step
if args.resume_from_checkpoint and epoch == starting_epoch:
if resume_step is not None and step < resume_step:
completed_steps += 1
continue
outputs = model(**batch)
loss = outputs.loss
# We keep track of the loss at each epoch
if args.with_tracking:
total_loss += loss.detach().float()
loss = loss / args.gradient_accumulation_steps
accelerator.backward(loss)
if (step + 1) % args.gradient_accumulation_steps == 0 or step == len(train_dataloader) - 1:
active_dataloader = accelerator.skip_first_batches(train_dataloader, resume_step)
else:
# After the first iteration though, we need to go back to the original dataloader
active_dataloader = train_dataloader
for step, batch in enumerate(active_dataloader):
# In particular, DeepSpeed handles `gradient_accumulation` via `DeepSpeedEngine`.
# Below, we use `accelerator.accumulate` if the user
# wants to switch to other approaches such as plain DDP, PyTorch FSDP ...
# This avoids having to change any code as things are all handled across different distributed setups.
with accelerator.accumulate(model):
outputs = model(**batch)
loss = outputs.loss
accelerator.backward(loss)
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
progress_bar.update(1)
completed_steps += 1
if accelerator.sync_gradients:
progress_bar.update(1)
completed_steps += 1
# We keep track of the loss at each epoch
if args.with_tracking:
step_loss = accelerator.reduce(loss.detach().clone()).item()
total_loss += step_loss
if isinstance(checkpointing_steps, int):
if completed_steps % checkpointing_steps == 0:
output_dir = f"step_{completed_steps }"
output_dir = f"step_{completed_steps}"
if args.output_dir is not None:
output_dir = os.path.join(args.output_dir, output_dir)
accelerator.save_state(output_dir)
@ -666,34 +656,29 @@ def main():
{
"perplexity": perplexity,
"eval_loss": eval_loss,
"train_loss": total_loss.item() / len(train_dataloader),
"train_loss": total_loss / len(train_dataloader),
"epoch": epoch,
"step": completed_steps,
},
step=completed_steps,
)
# New Code #
# Save the DeepSpeed checkpoint to the specified path
checkpoint_model(args.output_dir, epoch, model, epoch, completed_steps)
if isinstance(checkpointing_steps, str) and checkpointing_steps == "epoch":
accelerator.save_state(os.path.join(args.output_dir, f"epoch_{epoch}"))
# New Code #
# Tracks the best checkpoint and best metric
if best_metric is None or best_metric > perplexity:
best_metric = perplexity
best_metric_checkpoint = os.path.join(args.output_dir, str(epoch))
best_metric_checkpoint = os.path.join(args.output_dir, "best_checkpoint")
accelerator.save_state(best_metric_checkpoint)
accelerator.print(f"New best metric: {best_metric} at epoch {epoch}")
accelerator.print(f"best_metric_checkpoint: {best_metric_checkpoint}")
# New Code #
# Loads the best checkpoint after the training is finished
if args.load_best_model:
_, last_global_step = load_training_checkpoint(
model,
"/".join(best_metric_checkpoint.split("/")[:-1]),
tag=best_metric_checkpoint.split("/")[-1],
**{"load_optimizer_states": True, "load_lr_scheduler_states": True},
)
accelerator.load_state(best_metric_checkpoint)
# New Code #
# Evaluates using the best checkpoint

View File

@ -0,0 +1,246 @@
# coding=utf-8
# Copyright 2021 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import evaluate
import torch
from datasets import load_dataset
from torch.optim import AdamW
from torch.utils.data import DataLoader
from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_linear_schedule_with_warmup, set_seed
from accelerate import Accelerator, DistributedType
########################################################################
# This is a fully working simple example to use Accelerate
# specifically showcasing how to perform early stopping,
# and builds off the `nlp_example.py` script
#
# This example trains a Bert base model on GLUE MRPC
# in any of the following settings (with the same script):
# - single CPU or single GPU
# - multi GPUS (using PyTorch distributed mode)
# - (multi) TPUs
# - fp16 (mixed-precision) or fp32 (normal precision)
#
# To run it in each of these various modes, follow the instructions
# in the readme for examples:
# https://github.com/huggingface/accelerate/tree/main/examples
#
########################################################################
MAX_GPU_BATCH_SIZE = 16
EVAL_BATCH_SIZE = 32
def get_dataloaders(accelerator: Accelerator, batch_size: int = 16):
"""
Creates a set of `DataLoader`s for the `glue` dataset,
using "bert-base-cased" as the tokenizer.
Args:
accelerator (`Accelerator`):
An `Accelerator` object
batch_size (`int`, *optional*):
The batch size for the train and validation DataLoaders.
"""
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
datasets = load_dataset("glue", "mrpc")
def tokenize_function(examples):
# max_length=None => use the model max length (it's actually the default)
outputs = tokenizer(examples["sentence1"], examples["sentence2"], truncation=True, max_length=None)
return outputs
# Apply the method we just defined to all the examples in all the splits of the dataset
# starting with the main process first:
with accelerator.main_process_first():
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
remove_columns=["idx", "sentence1", "sentence2"],
)
# We also rename the 'label' column to 'labels' which is the expected name for labels by the models of the
# transformers library
tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
def collate_fn(examples):
# On TPU it's best to pad everything to the same length or training will be very slow.
max_length = 128 if accelerator.distributed_type == DistributedType.TPU else None
# When using mixed precision we want round multiples of 8/16
if accelerator.mixed_precision == "fp8":
pad_to_multiple_of = 16
elif accelerator.mixed_precision != "no":
pad_to_multiple_of = 8
else:
pad_to_multiple_of = None
return tokenizer.pad(
examples,
padding="longest",
max_length=max_length,
pad_to_multiple_of=pad_to_multiple_of,
return_tensors="pt",
)
# Instantiate dataloaders.
train_dataloader = DataLoader(
tokenized_datasets["train"], shuffle=True, collate_fn=collate_fn, batch_size=batch_size, drop_last=True
)
eval_dataloader = DataLoader(
tokenized_datasets["validation"],
shuffle=False,
collate_fn=collate_fn,
batch_size=EVAL_BATCH_SIZE,
drop_last=(accelerator.mixed_precision == "fp8"),
)
return train_dataloader, eval_dataloader
# New code
class EarlyStoppingCallback:
"A callback class that helps with early stopping"
def __init__(self, min_delta=0, patience=5):
self.min_delta = min_delta
self.patience = patience
self.counter = 0
self.lowest_loss = float("inf")
def check_early_stopping(self, eval_loss):
delta = self.lowest_loss - eval_loss
if delta >= self.min_delta:
self.lowest_loss = eval_loss
self.counter = 0
else:
self.counter += 1
if self.counter >= self.patience:
return True
return False
callback = EarlyStoppingCallback()
def training_function(config, args):
# Initialize accelerator
accelerator = Accelerator(cpu=args.cpu, mixed_precision=args.mixed_precision)
# Sample hyper-parameters for learning rate, batch size, seed and a few other HPs
lr = config["lr"]
num_epochs = int(config["num_epochs"])
seed = int(config["seed"])
batch_size = int(config["batch_size"])
metric = evaluate.load("glue", "mrpc")
# If the batch size is too big we use gradient accumulation
gradient_accumulation_steps = 1
if batch_size > MAX_GPU_BATCH_SIZE and accelerator.distributed_type != DistributedType.TPU:
gradient_accumulation_steps = batch_size // MAX_GPU_BATCH_SIZE
batch_size = MAX_GPU_BATCH_SIZE
set_seed(seed)
train_dataloader, eval_dataloader = get_dataloaders(accelerator, batch_size)
# Instantiate the model (we build the model here so that the seed also control new weights initialization)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", return_dict=True)
# We could avoid this line since the accelerator is set with `device_placement=True` (default value).
# Note that if you are placing tensors on devices manually, this line absolutely needs to be before the optimizer
# creation otherwise training will not work on TPU (`accelerate` will kindly throw an error to make us aware of that).
model = model.to(accelerator.device)
# Instantiate optimizer
optimizer = AdamW(params=model.parameters(), lr=lr)
# Instantiate scheduler
lr_scheduler = get_linear_schedule_with_warmup(
optimizer=optimizer,
num_warmup_steps=100,
num_training_steps=(len(train_dataloader) * num_epochs) // gradient_accumulation_steps,
)
# Prepare everything
# There is no specific order to remember, we just need to unpack the objects in the same order we gave them to the
# prepare method.
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
)
# Now we train the model
for epoch in range(num_epochs):
model.train()
for step, batch in enumerate(train_dataloader):
# We could avoid this line since we set the accelerator with `device_placement=True`.
batch.to(accelerator.device)
outputs = model(**batch)
loss = outputs.loss
loss = loss / gradient_accumulation_steps
accelerator.backward(loss)
if step % gradient_accumulation_steps == 0:
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
# New code
# Check if we should stop the training on any processes
if callback.check_early_stopping(loss.item()):
accelerator.set_trigger()
# If so, we break the loop
if accelerator.check_trigger():
break
model.eval()
for step, batch in enumerate(eval_dataloader):
# We could avoid this line since we set the accelerator with `device_placement=True`.
batch.to(accelerator.device)
with torch.no_grad():
outputs = model(**batch)
predictions = outputs.logits.argmax(dim=-1)
predictions, references = accelerator.gather_for_metrics((predictions, batch["labels"]))
metric.add_batch(
predictions=predictions,
references=references,
)
eval_metric = metric.compute()
# Use accelerator.print to print only on the main process.
accelerator.print(f"epoch {epoch}:", eval_metric)
def main():
parser = argparse.ArgumentParser(description="Simple example of training script.")
parser.add_argument(
"--mixed_precision",
type=str,
default=None,
choices=["no", "fp16", "bf16", "fp8"],
help="Whether to use mixed precision. Choose"
"between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
"and an Nvidia Ampere GPU.",
)
parser.add_argument("--cpu", action="store_true", help="If passed, will train on the CPU.")
args = parser.parse_args()
config = {"lr": 2e-5, "num_epochs": 3, "seed": 42, "batch_size": 16}
training_function(config, args)
if __name__ == "__main__":
main()

View File

@ -31,6 +31,7 @@ from transformers import (
)
from accelerate import Accelerator, DistributedType, FullyShardedDataParallelPlugin
from accelerate.utils import is_npu_available, is_xpu_available
########################################################################
@ -68,9 +69,18 @@ def b2mb(x):
class TorchTracemalloc:
def __enter__(self):
gc.collect()
torch.cuda.empty_cache()
torch.cuda.reset_max_memory_allocated() # reset the peak gauge to zero
self.begin = torch.cuda.memory_allocated()
if torch.cuda.is_available():
torch.cuda.empty_cache()
torch.cuda.reset_max_memory_allocated() # reset the peak gauge to zero
self.begin = torch.cuda.memory_allocated()
elif is_xpu_available():
torch.xpu.empty_cache()
torch.xpu.reset_max_memory_allocated() # reset the peak gauge to zero
self.begin = torch.xpu.memory_allocated()
elif is_npu_available():
torch.npu.empty_cache()
torch.npu.reset_max_memory_allocated() # reset the peak gauge to zero
self.begin = torch.npu.memory_allocated()
self.process = psutil.Process()
self.cpu_begin = self.cpu_mem_used()
@ -100,9 +110,18 @@ class TorchTracemalloc:
self.peak_monitoring = False
gc.collect()
torch.cuda.empty_cache()
self.end = torch.cuda.memory_allocated()
self.peak = torch.cuda.max_memory_allocated()
if torch.cuda.is_available():
torch.cuda.empty_cache()
self.end = torch.cuda.memory_allocated()
self.peak = torch.cuda.max_memory_allocated()
elif is_xpu_available():
torch.xpu.empty_cache()
self.end = torch.xpu.memory_allocated()
self.peak = torch.xpu.max_memory_allocated()
elif is_npu_available():
torch.npu.empty_cache()
self.end = torch.npu.memory_allocated()
self.peak = torch.npu.max_memory_allocated()
self.used = b2mb(self.end - self.begin)
self.peaked = b2mb(self.peak - self.begin)
@ -297,7 +316,6 @@ def training_function(config, args):
batch.to(accelerator.device)
outputs = model(**batch)
loss = outputs.loss
loss = loss / gradient_accumulation_steps
# We keep track of the loss at each epoch
if args.with_tracking:
total_loss += loss.detach().float()

View File

@ -19,7 +19,9 @@ extras = {}
extras["quality"] = ["black ~= 23.1", "ruff >= 0.0.241", "hf-doc-builder >= 0.3.0", "urllib3 < 2.0.0"]
extras["docs"] = []
extras["test_prod"] = ["pytest", "pytest-xdist", "pytest-subtests", "parameterized"]
extras["test_dev"] = ["datasets", "evaluate", "transformers", "scipy", "scikit-learn", "deepspeed", "tqdm"]
extras["test_dev"] = [
"datasets", "evaluate", "transformers", "scipy", "scikit-learn", "deepspeed", "tqdm", "bitsandbytes", "timm"
]
extras["testing"] = extras["test_prod"] + extras["test_dev"]
extras["rich"] = ["rich"]
@ -32,7 +34,7 @@ extras["sagemaker"] = [
setup(
name="accelerate",
version="0.21.0.dev0",
version="0.24.0",
description="Accelerate",
long_description=open("README.md", "r", encoding="utf-8").read(),
long_description_content_type="text/markdown",
@ -47,11 +49,12 @@ setup(
"console_scripts": [
"accelerate=accelerate.commands.accelerate_cli:main",
"accelerate-config=accelerate.commands.config:main",
"accelerate-estimate-memory=accelerate.commands.estimate:main",
"accelerate-launch=accelerate.commands.launch:main",
]
},
python_requires=">=3.8.0",
install_requires=["numpy>=1.17", "packaging>=20.0", "psutil", "pyyaml", "torch>=1.10.0"],
install_requires=["numpy>=1.17", "packaging>=20.0", "psutil", "pyyaml", "torch>=1.10.0", "huggingface_hub"],
extras_require=extras,
classifiers=[
"Development Status :: 5 - Production/Stable",
@ -67,21 +70,29 @@ setup(
)
# Release checklist
# 1. Change the version in __init__.py and setup.py.
# 2. Commit these changes with the message: "Release: VERSION"
# 3. Add a tag in git to mark the release: "git tag VERSION -m 'Adds tag VERSION for pypi' "
# Push the tag to git: git push --tags origin main
# 4. Run the following commands in the top-level directory:
# 1. Checkout the release branch (for a patch the current release branch, for a new minor version, create one):
# git checkout -b vXX.xx-release
# The -b is only necessary for creation (so remove it when doing a patch)
# 2. Change the version in __init__.py and setup.py to the proper value.
# 3. Commit these changes with the message: "Release: v<VERSION>"
# 4. Add a tag in git to mark the release:
# git tag v<VERSION> -m 'Adds tag v<VERSION> for pypi'
# Push the tag and release commit to git: git push --tags origin vXX.xx-release
# 5. Run the following commands in the top-level directory:
# rm -rf dist
# rm -rf build
# python setup.py bdist_wheel
# python setup.py sdist
# 5. Upload the package to the pypi test server first:
# twine upload dist/* -r pypitest
# twine upload dist/* -r pypitest --repository-url=https://test.pypi.org/legacy/
# 6. Check that you can install it in a virtualenv by running:
# 6. Upload the package to the pypi test server first:
# twine upload dist/* -r testpypi
# 7. Check that you can install it in a virtualenv by running:
# pip install accelerate
# pip uninstall accelerate
# pip install -i https://testpypi.python.org/pypi accelerate
# accelerate env
# accelerate test
# 7. Upload the final version to actual pypi:
# 8. Upload the final version to actual pypi:
# twine upload dist/* -r pypi
# 8. Add release notes to the tag in github once everything is looking hunky-dory.
# 9. Update the version in __init__.py, setup.py to the new version "-dev" and push to master
# 9. Add release notes to the tag in github once everything is looking hunky-dory.
# 10. Go back to the main branch and update the version in __init__.py, setup.py to the new version ".dev" and push to
# main.

View File

@ -1,4 +1,4 @@
__version__ = "0.21.0.dev0"
__version__ = "0.24.0"
from .accelerator import Accelerator
from .big_modeling import (
@ -14,6 +14,7 @@ from .data_loader import skip_first_batches
from .launchers import debug_launcher, notebook_launcher
from .state import PartialState
from .utils import (
AutocastKwargs,
DeepSpeedPlugin,
DistributedDataParallelKwargs,
DistributedType,

View File

@ -16,6 +16,7 @@ from __future__ import annotations
import collections
import contextlib
import functools
import json
import math
import os
@ -45,6 +46,7 @@ from .utils import (
SAFE_WEIGHTS_NAME,
WEIGHTS_INDEX_NAME,
WEIGHTS_NAME,
AutocastKwargs,
DeepSpeedPlugin,
DistributedDataParallelKwargs,
DistributedType,
@ -61,11 +63,13 @@ from .utils import (
ProjectConfiguration,
RNGType,
TorchDynamoPlugin,
check_os_kernel,
compare_versions,
convert_model,
convert_outputs_to_fp32,
extract_model_from_parallel,
gather,
gather_object,
get_mixed_precision_context_manager,
get_pretty_name,
has_transformer_engine_layers,
@ -94,11 +98,10 @@ from .utils import (
wait_for_everyone,
)
from .utils.constants import FSDP_PYTORCH_VERSION
from .utils.other import is_compiled_module
if is_deepspeed_available():
import deepspeed
from .utils import (
DeepSpeedEngineWrapper,
DeepSpeedOptimizerWrapper,
@ -134,6 +137,10 @@ if is_tpu_available(check_device=False):
import torch_xla.distributed.xla_multiprocessing as xmp
if is_npu_available(check_device=False):
import torch_npu # noqa: F401
try:
from torch.optim.lr_scheduler import LRScheduler
except ImportError:
@ -158,9 +165,8 @@ class Accelerator:
mixed_precision (`str`, *optional*):
Whether or not to use mixed precision training. Choose from 'no','fp16','bf16 or 'fp8'. Will default to the
value in the environment variable `ACCELERATE_MIXED_PRECISION`, which will use the default value in the
accelerate config of the current system or the flag passed with the `accelerate.launch` command. 'fp16'
requires pytorch 1.6 or higher. 'bf16' requires pytorch 1.10 or higher. 'fp8' requires the installation of
transformers-engine.
accelerate config of the current system or the flag passed with the `accelerate.launch` command. 'fp8'
requires the installation of transformers-engine.
gradient_accumulation_steps (`int`, *optional*, default to 1):
The number of steps that should pass before gradients are accumulated. A number > 1 should be combined with
`Accelerator.accumulate`. If not passed, will default to the value in the environment variable
@ -259,6 +265,7 @@ class Accelerator:
kwargs_handlers: list[KwargsHandler] | None = None,
dynamo_backend: DynamoBackend | str | None = None,
):
self.trackers = []
if project_config is not None:
self.project_configuration = project_config
else:
@ -328,6 +335,7 @@ class Accelerator:
self.scaler_handler = None
self.init_handler = None
self.fp8_recipe_handler = None
self.autocast_handler = None
if kwargs_handlers is not None:
for handler in kwargs_handlers:
assert isinstance(
@ -353,6 +361,11 @@ class Accelerator:
raise ValueError("You can only pass one `FP8RecipeKwargs` in `kwargs_handler`.")
else:
self.fp8_recipe_handler = handler
elif isinstance(handler, AutocastKwargs):
if self.autocast_handler is not None:
raise ValueError("You can only pass one `AutocastKwargs` in `kwargs_handler`.")
else:
self.autocast_handler = handler
kwargs = self.init_handler.to_kwargs() if self.init_handler is not None else {}
self.state = AcceleratorState(
@ -410,11 +423,10 @@ class Accelerator:
if (
self.state.mixed_precision == "fp16"
and self.device.type != "cpu"
and self.device.type != "xpu"
and self.distributed_type not in (DistributedType.DEEPSPEED, DistributedType.MEGATRON_LM)
):
self.native_amp = True
if self.device.type not in ("cuda", "mps", "npu"):
if self.device.type not in ("xpu", "cuda", "mps", "npu"):
raise ValueError(err.format(mode="fp16", requirement="a GPU"))
kwargs = self.scaler_handler.to_kwargs() if self.scaler_handler is not None else {}
if self.distributed_type == DistributedType.FSDP:
@ -456,6 +468,11 @@ class Accelerator:
if self.rng_types is None:
self.rng_types = ["generator"]
# Set a flag tensor for early stopping and other breakpoints
self.flag_tensor = None
check_os_kernel()
@property
def use_distributed(self):
"""
@ -862,6 +879,55 @@ class Accelerator:
with context():
yield
@staticmethod
@contextmanager
def trigger_sync_in_backward(model):
"""Trigger the sync of the gradients in the next backward pass of the model after multiple forward passes under
`Accelerator.no_sync` (only applicable in multi-GPU scenarios).
If the script is not launched in distributed mode, this context manager does nothing.
Args:
model (`torch.nn.Module`):
The model for which to trigger the gradient synchronization.
Example:
```python
>>> from accelerate import Accelerator
>>> accelerator = Accelerator()
>>> dataloader, model, optimizer = accelerator.prepare(dataloader, model, optimizer)
>>> with accelerator.no_sync():
... loss_a = loss_func(model(input_a)) # first forward pass
... loss_b = loss_func(model(input_b)) # second forward pass
>>> accelerator.backward(loss_a) # No synchronization across processes, only accumulate gradients
>>> with accelerator.trigger_sync_in_backward(model):
... accelerator.backward(loss_b) # Synchronization across all processes
>>> optimizer.step()
>>> optimizer.zero_grad()
```
"""
if not isinstance(model, torch.nn.parallel.DistributedDataParallel):
yield
return
old_require_backward_grad_sync = model.require_backward_grad_sync
old_require_forward_param_sync = model.require_forward_param_sync
# EXPERIMENTAL: This will force grad sync during `backward()`, but it is unknown if it breaks other DDP features.
# https://github.com/pytorch/pytorch/blob/e1502c0cdbfd17548c612f25d5a65b1e4b86224d/torch/nn/parallel/distributed.py#L1453-L1466
model.require_backward_grad_sync = True
model.require_forward_param_sync = True
# https://github.com/pytorch/pytorch/blob/e1502c0cdbfd17548c612f25d5a65b1e4b86224d/torch/csrc/distributed/c10d/reducer.cpp#L1371-L1402
model.reducer.prepare_for_backward([])
try:
yield
finally:
model.require_backward_grad_sync = old_require_backward_grad_sync
model.require_forward_param_sync = old_require_forward_param_sync
def _do_sync(self):
"Sets the right `sync_gradients` context and either resets or increases `self.step`"
if self.gradient_state.sync_with_dataloader and self.gradient_state.end_of_dataloader:
@ -888,13 +954,14 @@ class Accelerator:
self.gradient_state.plugin_kwargs.update({"num_steps": gradient_accumulation_steps})
@contextmanager
def accumulate(self, model):
def accumulate(self, *models):
"""
A context manager that will lightly wrap around and perform gradient accumulation automatically
Args:
model (`torch.nn.Module`):
PyTorch Module that was prepared with `Accelerator.prepare`
*models (list of `torch.nn.Module`):
PyTorch Modules that was prepared with `Accelerator.prepare`. Models passed to `accumulate()` will skip
gradient syncing during backward pass in distributed training
Example:
@ -915,12 +982,9 @@ class Accelerator:
```
"""
self._do_sync()
if self.sync_gradients:
context = contextlib.nullcontext
else:
context = self.no_sync
with context(model):
with contextlib.ExitStack() as cm_stack:
for m in models:
cm_stack.enter_context(contextlib.nullcontext() if self.sync_gradients else self.no_sync(m))
yield
@contextmanager
@ -1139,12 +1203,34 @@ class Accelerator:
f"`device_placement` should be a list with {len(args)} elements (the number of objects passed)."
)
for obj in args:
# TODO: Look at enabling native TP training directly with a proper config
if (
isinstance(obj, torch.nn.Module)
and self.verify_device_map(obj)
and self.distributed_type != DistributedType.NO
and os.environ.get("ACCELERATE_BYPASS_DEVICE_MAP", "false") != "true"
):
raise ValueError(
"You can't train a model that has been loaded with `device_map='auto'` in any distributed mode."
" Please rerun your script specifying `--num_processes=1` or by launching with `python {{myscript.py}}`."
)
if self.distributed_type == DistributedType.FSDP:
from torch.distributed.fsdp.fully_sharded_data_parallel import FullyShardedDataParallel as FSDP
model_count = 0
optimizer_present = False
is_type_fsdp = False
for obj in args:
if isinstance(obj, torch.nn.Module):
model_count += 1
# if the model is compiled using PyTorch 2.0,
# check that the wrapped model is FSDP or not;
# else check if it is FSDP or not;
is_type_fsdp = isinstance(obj, FSDP) or (
is_compiled_module(obj) and isinstance(obj._orig_mod, FSDP)
)
if isinstance(obj, torch.optim.Optimizer):
optimizer_present = True
if model_count > 1 and optimizer_present:
@ -1153,7 +1239,7 @@ class Accelerator:
"prepare must be called for all the models before optimizers are created. "
"Then pass the optimizers to the prepare call in the same order as corresponding models."
)
elif model_count == 1 and optimizer_present:
elif model_count == 1 and not is_type_fsdp and optimizer_present:
logger.warning(
"FSDP Warning: When using FSDP, "
"it is efficient and recommended to call prepare for the model before creating the optimizer"
@ -1214,7 +1300,12 @@ class Accelerator:
if isinstance(obj, torch.optim.Optimizer):
obj._switch_parameters(mapping)
if self.distributed_type == DistributedType.FSDP and model_count == 1 and optimizer_present:
if (
self.distributed_type == DistributedType.FSDP
and model_count == 1
and not is_type_fsdp
and optimizer_present
):
result = self._prepare_fsdp(*result)
for item in result:
@ -1254,13 +1345,17 @@ class Accelerator:
if device_placement is None:
device_placement = self.device_placement and self.distributed_type != DistributedType.FSDP
self._models.append(model)
# We check only for models loaded with `accelerate`
# Checks if any of the child module has the attribute `hf_device_map`.
has_hf_device_map = False
for m in model.modules():
if hasattr(m, "hf_device_map"):
has_hf_device_map = True
break
# TODO: Look at enabling native TP training directly with a proper config
if (
self.verify_device_map(model)
and self.distributed_type != DistributedType.NO
and os.environ.get("ACCELERATE_BYPASS_DEVICE_MAP", "false") != "true"
):
raise ValueError(
"You can't train a model that has been loaded with `device_map='auto'` in any distributed mode."
" Please rerun your script specifying `--num_processes=1` or by launching with `python {{myscript.py}}`."
)
if (getattr(model, "is_loaded_in_8bit", False) or getattr(model, "is_loaded_in_4bit", False)) and getattr(
model, "hf_device_map", False
@ -1288,20 +1383,14 @@ class Accelerator:
raise ValueError(
"You can't train a model that has been loaded in 8-bit precision with CPU or disk offload."
)
elif device_placement and not has_hf_device_map:
elif device_placement and not self.verify_device_map(model):
model = model.to(self.device)
if self.native_amp:
model._original_forward = model.forward
model_forward_func = model.forward.__func__ if hasattr(model.forward, "__func__") else model.forward
if self.mixed_precision == "fp16":
if is_npu_available():
new_forward = torch.npu.amp.autocast(dtype=torch.float16)(model_forward_func)
else:
new_forward = torch.cuda.amp.autocast(dtype=torch.float16)(model_forward_func)
elif self.mixed_precision == "bf16" and self.distributed_type != DistributedType.TPU:
new_forward = torch.autocast(device_type=self.device.type, dtype=torch.bfloat16)(model_forward_func)
autocast_context = get_mixed_precision_context_manager(self.native_amp, self.autocast_handler)
new_forward = autocast_context(model_forward_func)
if hasattr(model.forward, "__func__"):
model.forward = MethodType(new_forward, model)
model.forward = MethodType(convert_outputs_to_fp32(model.forward.__func__), model)
@ -1319,9 +1408,7 @@ class Accelerator:
kwargs["fp8_format"] = getattr(te_recipe.Format, kwargs["fp8_format"])
fp8_recipe = te_recipe.DelayedScaling(**kwargs)
cuda_device_capacity = torch.cuda.get_device_capability()
fp8_enabled = cuda_device_capacity[0] >= 9 or (
cuda_device_capacity[0] == 8 and cuda_device_capacity[1] >= 9
)
fp8_enabled = cuda_device_capacity >= (8, 9)
if not fp8_enabled:
logger.warn(
f"The current device has compute capability of {cuda_device_capacity} which is "
@ -1337,15 +1424,27 @@ class Accelerator:
):
if any(p.requires_grad for p in model.parameters()):
kwargs = self.ddp_handler.to_kwargs() if self.ddp_handler is not None else {}
# TODO: Look at enabling native TP training directly with a proper config
if os.environ.get("ACCELERATE_BYPASS_DEVICE_MAP", "false") != "true":
device_ids, output_device = [self.local_process_index], self.local_process_index
else:
device_ids, output_device = None, None
model = torch.nn.parallel.DistributedDataParallel(
model, device_ids=[self.local_process_index], output_device=self.local_process_index, **kwargs
model, device_ids=device_ids, output_device=output_device, **kwargs
)
elif self.distributed_type == DistributedType.FSDP:
from torch.distributed.fsdp.fully_sharded_data_parallel import FullyShardedDataParallel as FSDP
# Check if the model is already a FSDP model due to `Manual Wrapping` and if so,
# don't wrap it again
if type(model) != FSDP:
# In case the model is already compiled using PyTorch 2.0 and the wrapped model in it
# is a FSDP model, don't wrap it again
is_type_fsdp = isinstance(model, FSDP) or (
is_compiled_module(model) and isinstance(model._orig_mod, FSDP)
)
if not is_type_fsdp:
self.state.fsdp_plugin.set_auto_wrap_policy(model)
fsdp_plugin = self.state.fsdp_plugin
kwargs = {
@ -1359,25 +1458,44 @@ class Accelerator:
"use_orig_params": fsdp_plugin.use_orig_params,
"param_init_fn": fsdp_plugin.param_init_fn,
"ignored_modules": fsdp_plugin.ignored_modules,
"ignored_parameters": fsdp_plugin.ignored_parameters,
"limit_all_gathers": fsdp_plugin.limit_all_gathers,
"device_id": self.device,
}
model = FSDP(model, **kwargs)
if fsdp_plugin.activation_checkpointing:
from torch.distributed.algorithms._checkpoint.checkpoint_wrapper import (
CheckpointImpl,
apply_activation_checkpointing,
checkpoint_wrapper,
)
apply_activation_checkpointing(
model,
checkpoint_wrapper_fn=functools.partial(
checkpoint_wrapper,
checkpoint_impl=CheckpointImpl.NO_REENTRANT,
),
auto_wrap_policy=fsdp_plugin.auto_wrap_policy,
)
# if the previous and current models are same, delete the previous one
if len(self._models) > 1 and (self._models[-2] is self._models[-1]):
del self._models[-2]
self._models[-1] = model
elif self.distributed_type == DistributedType.MULTI_CPU:
kwargs = self.ddp_handler.to_kwargs() if self.ddp_handler is not None else {}
model = torch.nn.parallel.DistributedDataParallel(model, **kwargs)
elif self.distributed_type == DistributedType.TPU and self.state.fork_launched:
model = xmp.MpModelWrapper(model).to(self.device)
# torch.compile should be called last.
if self.state.dynamo_plugin.backend != DynamoBackend.NO:
# torch.compile should be called last and only if the model isn't already compiled.
if self.state.dynamo_plugin.backend != DynamoBackend.NO and not is_compiled_module(model):
if not is_torch_version(">=", "2.0"):
raise ValueError("Using `torch.compile` requires PyTorch 2.0 or higher.")
model = torch.compile(model, **self.state.dynamo_plugin.to_kwargs())
return model
def _prepare_deepspeed(self, *args):
import deepspeed
deepspeed_plugin = self.state.deepspeed_plugin
is_dataloader_present = any(isinstance(obj, torch.utils.data.DataLoader) for obj in args)
@ -1414,12 +1532,13 @@ class Accelerator:
batch_size_per_device = deepspeed_plugin.deepspeed_config["train_micro_batch_size_per_gpu"]
result = [obj for obj in args]
if self.gradient_accumulation_steps != deepspeed_plugin.deepspeed_config["gradient_accumulation_steps"]:
logger.info(
f"Updating DeepSpeed's gradient accumulation steps to {self.gradient_accumulation_steps} from "
f"{deepspeed_plugin.deepspeed_config['gradient_accumulation_steps']}."
)
deepspeed_plugin.deepspeed_config["gradient_accumulation_steps"] = self.gradient_accumulation_steps
# handle `gradient_accumulation_steps` when the value is `auto`
deepspeed_plugin.fill_match(
"gradient_accumulation_steps",
must_match=False,
gradient_accumulation_steps=self.gradient_accumulation_steps,
)
config_kwargs = {
"train_micro_batch_size_per_gpu": batch_size_per_device,
"train_batch_size": batch_size_per_device
@ -1464,9 +1583,14 @@ class Accelerator:
"Please remove the scheduler from the config file or "
"create `accelerate.utils.DummyScheduler` in the code."
)
elif "scheduler" not in deepspeed_plugin.deepspeed_config and isinstance(scheduler, (DummyScheduler)):
elif (
"scheduler" not in deepspeed_plugin.deepspeed_config
and isinstance(scheduler, (DummyScheduler))
and scheduler.lr_scheduler_callable is None
):
raise ValueError(
"You cannot create a `DummyScheduler` without specifying a scheduler in the config file."
"Either specify a scheduler in the config file or "
"pass in the `lr_scheduler_callable` parameter when using `accelerate.utils.DummyScheduler`."
)
if optimizer is not None and scheduler is not None:
@ -1496,7 +1620,7 @@ class Accelerator:
config_kwargs.update(
{"optimizer.params.lr": optimizer.lr, "optimizer.params.weight_decay": optimizer.weight_decay}
)
if isinstance(scheduler, (DummyScheduler)):
if isinstance(scheduler, (DummyScheduler)) and scheduler.lr_scheduler_callable is None:
max_lr = (
getattr(scheduler.optimizer, "lr", None)
if getattr(scheduler.optimizer, "defaults", None) is None
@ -1521,6 +1645,8 @@ class Accelerator:
if optimizer is not None:
if isinstance(optimizer, (DummyOptim)):
kwargs["model_parameters"] = optimizer.params
if isinstance(scheduler, (DummyScheduler)) and scheduler.lr_scheduler_callable is not None:
kwargs["lr_scheduler"] = scheduler.lr_scheduler_callable
else:
if self.deepspeed_config["zero_optimization"].get("offload_optimizer", {}).get(
"device", "none"
@ -1531,7 +1657,10 @@ class Accelerator:
optimizer = DeepSpeedCPUAdam(optimizer.param_groups, **defaults)
kwargs["optimizer"] = optimizer
if scheduler is not None:
if type(scheduler).__name__ in deepspeed.runtime.lr_schedules.VALID_LR_SCHEDULES:
if (
isinstance(scheduler, LRScheduler)
or type(scheduler).__name__ in deepspeed.runtime.lr_schedules.VALID_LR_SCHEDULES
):
kwargs["lr_scheduler"] = scheduler
engine, optimizer, _, lr_scheduler = deepspeed.initialize(**kwargs)
@ -1704,7 +1833,9 @@ class Accelerator:
result[i] = optimizer
return tuple(result)
def prepare_data_loader(self, data_loader: torch.utils.data.DataLoader, device_placement=None):
def prepare_data_loader(
self, data_loader: torch.utils.data.DataLoader, device_placement=None, slice_fn_for_dispatch=None
):
"""
Prepares a PyTorch DataLoader for training in any distributed setup. It is recommended to use
[`Accelerator.prepare`] instead.
@ -1715,6 +1846,10 @@ class Accelerator:
device_placement (`bool`, *optional*):
Whether or not to place the batches on the proper device in the prepared dataloader. Will default to
`self.device_placement`.
slice_fn_for_dispatch (`Callable`, *optional*):
If passed, this function will be used to slice tensors across `num_processes`. Will default to
[`~utils.slice_tensors`]. This argument is used only when `dispatch_batches` is set to `True` and will
be ignored otherwise.
Example:
@ -1744,6 +1879,7 @@ class Accelerator:
rng_types=self.rng_types.copy(),
dispatch_batches=self.dispatch_batches,
even_batches=self.even_batches,
slice_fn_for_dispatch=slice_fn_for_dispatch,
)
self._dataloaders.append(prepared_data_loader)
return prepared_data_loader
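A minimal sketch of the new `slice_fn_for_dispatch` hook; the custom function here just delegates to the default `slice_tensors` and is only consulted when `dispatch_batches=True`.
```python
import torch
from accelerate import Accelerator
from accelerate.utils import slice_tensors


def my_slice_fn(data, tensor_slice, **kwargs):
    # Delegate to the default slicing; a real implementation could handle
    # custom batch containers here. Extra kwargs are accepted defensively.
    return slice_tensors(data, tensor_slice)


accelerator = Accelerator(dispatch_batches=True)
loader = torch.utils.data.DataLoader(torch.arange(16).reshape(16, 1), batch_size=4)
loader = accelerator.prepare_data_loader(loader, slice_fn_for_dispatch=my_slice_fn)
```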
@ -1852,6 +1988,65 @@ class Accelerator:
else:
loss.backward(**kwargs)
def set_trigger(self):
"""
Sets the internal trigger tensor to 1 on the current process. A later check should follow using this, which
will check across all processes.
Note:
Does not require `wait_for_everyone()`
Example:
```python
>>> from accelerate import Accelerator
>>> accelerator = Accelerator()
>>> # Assume later in the training script
>>> # `should_do_breakpoint` is a custom function to monitor when to break,
>>> # e.g. when the loss is NaN
>>> if should_do_breakpoint(loss):
... accelerator.set_trigger()
>>> # Assume later in the training script
>>> if accelerator.check_trigger():
... break
```
"""
self.flag_tensor = torch.tensor(1, device=self.device)
def check_trigger(self):
"""
Checks if the internal trigger tensor has been set to 1 in any of the processes. If so, will return `True` and
reset the trigger tensor to 0.
Note:
Does not require `wait_for_everyone()`
Example:
```python
>>> from accelerate import Accelerator
>>> accelerator = Accelerator()
>>> # Assume later in the training script
>>> # `should_do_breakpoint` is a custom function to monitor when to break,
>>> # e.g. when the loss is NaN
>>> if should_do_breakpoint(loss):
... accelerator.set_trigger()
>>> # Assume later in the training script
>>> if accelerator.check_trigger():
... break
```
"""
# Now that we are outside `__init__`, we can initialize it if it is `None` on device
if self.flag_tensor is None:
self.flag_tensor = torch.tensor(0, device=self.device)
flag_tensor = self.reduce(self.flag_tensor)
if flag_tensor.item() >= 1:
self.flag_tensor = torch.tensor(0, device=self.device)
return True
return False
def unscale_gradients(self, optimizer=None):
"""
Unscale the gradients in mixed precision training with AMP. This is a noop in all other settings.
@ -1885,6 +2080,10 @@ class Accelerator:
for opt in optimizer:
while isinstance(opt, AcceleratedOptimizer):
opt = opt.optimizer
# Reduce gradients first for XLA
if self.distributed_type == DistributedType.TPU:
gradients = xm._fetch_gradients(opt)
self.reduce(gradients, scale=1.0 / self.num_processes)
self.scaler.unscale_(opt)
def clip_grad_norm_(self, parameters, max_norm, norm_type=2):
@ -1984,14 +2183,14 @@ class Accelerator:
"""
return gather(tensor)
def gather_for_metrics(self, tensor):
def gather_for_metrics(self, input_data):
"""
Gathers `tensor` and potentially drops duplicates in the last batch if on a distributed system. Should be used
for gathering the inputs and targets for metric calculation.
Gathers `input_data` and potentially drops duplicates in the last batch if on a distributed system. Should be
used for gathering the inputs and targets for metric calculation.
Args:
tensor (`torch.Tensor`, or a nested tuple/list/dictionary of `torch.Tensor`):
The tensors for calculating metrics across all processes.
input_data (`torch.Tensor`, `object`, a nested tuple/list/dictionary of `torch.Tensor`, or a nested tuple/list/dictionary of `object`):
The tensors or objects for calculating metrics across all processes.
Example:
@ -2009,28 +2208,44 @@ class Accelerator:
9
```
"""
tensor = self.gather(tensor)
if self.gradient_state.remainder == -1:
logger.info(
"The used dataset had no length, returning gathered tensors. You should drop the remainder yourself."
)
return tensor
try:
# Then see if we're on the last batch of our eval dataloader
if self.gradient_state.end_of_dataloader and self.gradient_state.remainder > 0:
# Last batch needs to be truncated on distributed systems as it contains additional samples
def _adjust_samples(tensor):
return tensor[: self.gradient_state.remainder]
return recursively_apply(_adjust_samples, tensor)
try:
recursively_apply(lambda x: x, input_data, error_on_other_type=True)
all_tensors = True
except TypeError:
all_tensors = False
if not all_tensors:
data = gather_object(input_data)
else:
data = self.gather(input_data)
try:
if self.gradient_state.end_of_dataloader:
# at the end of a dataloader, `gather_for_metrics` regresses to
# `gather` unless the dataset has a remainder, so log that case.
if self.gradient_state.remainder == -1:
logger.info(
"The used dataset had no length, returning gathered tensors. You should drop the remainder yourself."
)
return data
elif self.gradient_state.remainder > 0:
# Last batch needs to be truncated on distributed systems as it contains additional samples
def _adjust_samples(tensor):
return tensor[: self.gradient_state.remainder]
return recursively_apply(_adjust_samples, data)
else: # remainder is 0
# no remainder even though at end of dataloader, so nothing to do.
return data
else:
# Not at the end of the dataloader, no need to adjust the tensors
return tensor
return data
except Exception:
# Dataset had no length or raised an error
return tensor
return data
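A short sketch of the widened `gather_for_metrics()` input: tensors keep the fast `gather` path while plain Python objects fall back to `gather_object`.
```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()

# Tensors are gathered as before.
preds = torch.tensor([accelerator.process_index], device=accelerator.device)
all_preds = accelerator.gather_for_metrics(preds)

# Arbitrary objects (e.g. decoded strings) are now supported via `gather_object`.
decoded = [f"sample from process {accelerator.process_index}"]
all_decoded = accelerator.gather_for_metrics(decoded)
```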
def reduce(self, tensor, reduction="sum"):
def reduce(self, tensor, reduction="sum", scale=1.0):
"""
Reduce the values in *tensor* across all processes based on *reduction*.
@ -2042,6 +2257,8 @@ class Accelerator:
The tensors to reduce across all processes.
reduction (`str`, *optional*, defaults to "sum"):
A reduction type, can be one of 'sum', 'mean', or 'none'. If 'none', will not perform any operation.
scale (`float`, *optional*, defaults to 1.0):
A default scaling value to be applied after the reduce, only valid on XLA.
Returns:
`torch.Tensor`, or a nested tuple/list/dictionary of `torch.Tensor`:
@ -2062,7 +2279,7 @@ class Accelerator:
tensor([4, 6])
```
"""
return reduce(tensor, reduction)
return reduce(tensor, reduction, scale)
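The new `scale` argument in a nutshell; per the docstring it only takes effect on XLA/TPU, mirroring the gradient averaging added to `unscale_gradients` above.
```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()
values = torch.tensor([1.0, 2.0], device=accelerator.device)

summed = accelerator.reduce(values, reduction="sum")
# On TPU/XLA the reduced tensor is additionally multiplied by `scale`,
# e.g. 1/num_processes to turn the sum into a mean; elsewhere scale is ignored.
averaged = accelerator.reduce(values, reduction="sum", scale=1.0 / accelerator.num_processes)
```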
def pad_across_processes(self, tensor, dim=0, pad_index=0, pad_first=False):
"""
@ -2186,7 +2403,6 @@ class Accelerator:
... )
```
"""
self.trackers = []
for tracker in self.log_with:
if issubclass(type(tracker), GeneralTracker):
# Custom trackers are already initialized
@ -2228,7 +2444,7 @@ class Accelerator:
>>> tensorboard_tracker = accelerator.get_tracker("tensorboard")
```
"""
if len(getattr(self, "trackers", [])) > 0:
if len(self.trackers) > 0:
for tracker in self.trackers:
if tracker.name == name:
return tracker.tracker if unwrap else tracker
@ -2286,13 +2502,18 @@ class Accelerator:
for tracker in self.trackers:
tracker.finish()
def save(self, obj, f):
def save(self, obj, f, safe_serialization=False):
"""
Save the object passed to disk once per machine. Use in place of `torch.save`.
Args:
obj (`object`): The object to save.
f (`str` or `os.PathLike`): Where to save the content of `obj`.
safe_serialization (`bool`, *optional*, defaults to `False`): Whether to save `obj` using `safetensors`
Note:
If `save_on_each_node` was passed in as a `ProjectConfiguration`, will save the object once per node,
rather than only once on the main node.
Example:
@ -2304,7 +2525,12 @@ class Accelerator:
>>> accelerator.save(arr, "array.pkl")
```
"""
save(obj, f)
save(
obj,
f,
save_on_each_node=self.project_configuration.save_on_each_node,
safe_serialization=safe_serialization,
)
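A small sketch of the extended `save()`; the second call assumes `safetensors` is installed.
```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()
state = {"weight": torch.ones(2, 2)}

accelerator.save(state, "state.bin")                                    # classic torch.save path
accelerator.save(state, "state.safetensors", safe_serialization=True)   # written with safetensors
```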
def save_model(
self,
@ -2384,7 +2610,7 @@ class Accelerator:
del state_dict[name]
warn_names.add(name)
if len(warn_names) > 0:
logger.warning_once(
logger.warning(
f"Removed shared tensor {warn_names} while saving. This should be OK, but check by verifying that you don't receive any warning while reloading",
)
@ -2415,7 +2641,7 @@ class Accelerator:
# Save the model
for shard_file, shard in shards.items():
self.save(shard, os.path.join(save_directory, shard_file))
self.save(shard, os.path.join(save_directory, shard_file), safe_serialization=safe_serialization)
if index is None:
path_to_weights = os.path.join(save_directory, WEIGHTS_NAME)
@ -2506,8 +2732,10 @@ class Accelerator:
os.makedirs(output_dir, exist_ok=True)
if self.project_configuration.automatic_checkpoint_naming:
folders = [os.path.join(output_dir, folder) for folder in os.listdir(output_dir)]
if self.project_configuration.total_limit is not None and (
len(folders) + 1 > self.project_configuration.total_limit
if (
self.project_configuration.total_limit is not None
and (len(folders) + 1 > self.project_configuration.total_limit)
and self.is_main_process
):
def _inner(folder):
@ -2524,6 +2752,7 @@ class Accelerator:
raise ValueError(
f"Checkpoint directory {output_dir} ({self.save_iteration}) already exists. Please manually override `self.save_iteration` with what iteration to start with."
)
self.wait_for_everyone()
os.makedirs(output_dir, exist_ok=True)
logger.info(f"Saving current state to {output_dir}")
@ -2570,16 +2799,26 @@ class Accelerator:
elif self.distributed_type not in [DistributedType.MEGATRON_LM]:
schedulers = self._schedulers
# Save the samplers of the dataloaders
dataloaders = self._dataloaders
# Call model loading hooks that might have been registered with
# accelerator.register_model_state_hook
for hook in self._save_model_state_pre_hook.values():
hook(self._models, weights, output_dir)
save_location = save_accelerator_state(
output_dir, weights, optimizers, schedulers, self.state.process_index, self.scaler
output_dir,
weights,
optimizers,
schedulers,
dataloaders,
self.state.process_index,
self.scaler,
save_on_each_node=self.project_configuration.save_on_each_node,
)
for i, obj in enumerate(self._custom_objects):
save_custom_state(obj, output_dir, i)
save_custom_state(obj, output_dir, i, save_on_each_node=self.project_configuration.save_on_each_node)
self.project_configuration.iteration += 1
return save_location
@ -2614,7 +2853,7 @@ class Accelerator:
self._load_model_state_pre_hook[handle.id] = hook
return handle
def load_state(self, input_dir: str, **load_model_func_kwargs):
def load_state(self, input_dir: str = None, **load_model_func_kwargs):
"""
Loads the current states of the model, optimizer, scaler, RNG generators, and registered objects.
@ -2627,7 +2866,8 @@ class Accelerator:
Args:
input_dir (`str` or `os.PathLike`):
The name of the folder all relevant weights and states were saved in.
The name of the folder all relevant weights and states were saved in. Can be `None` if
`automatic_checkpoint_naming` is used, and will pick up from the latest checkpoint.
load_model_func_kwargs (`dict`, *optional*):
Additional keyword arguments for loading model which can be passed to the underlying load function,
such as optional arguments for DeepSpeed's `load_checkpoint` function or a `map_location` to load the
@ -2644,10 +2884,23 @@ class Accelerator:
>>> accelerator.load_state("my_checkpoint")
```
"""
# Check if folder exists
input_dir = os.path.expanduser(input_dir)
if not os.path.isdir(input_dir):
raise ValueError(f"Tried to find {input_dir} but folder does not exist")
if input_dir is not None:
# Check if folder exists
input_dir = os.path.expanduser(input_dir)
if not os.path.isdir(input_dir):
raise ValueError(f"Tried to find {input_dir} but folder does not exist")
elif self.project_configuration.automatic_checkpoint_naming:
# Pick up from automatic checkpoint naming
input_dir = os.path.join(self.project_dir, "checkpoints")
folders = [os.path.join(input_dir, folder) for folder in os.listdir(input_dir)]
def _inner(folder):
return list(map(int, re.findall(r"[\/]?([0-9]+)(?=[^\/]*$)", folder)))[0]
folders.sort(key=_inner)
input_dir = folders[-1]
else:
raise ValueError("No input_dir provided and automatic checkpoint naming is disabled.")
logger.info(f"Loading states from {input_dir}")
# Load the models taking care of FSDP and DeepSpeed nuances
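With automatic checkpoint naming, `load_state()` can now be called with no argument and resolves the latest checkpoint itself; a minimal sketch, assuming checkpoints were produced by `save_state()` under the same project directory.
```python
from accelerate import Accelerator
from accelerate.utils import ProjectConfiguration

config = ProjectConfiguration(project_dir="runs/demo", automatic_checkpoint_naming=True)
accelerator = Accelerator(project_config=config)

accelerator.save_state()   # -> runs/demo/checkpoints/checkpoint_0
accelerator.load_state()   # no input_dir: picks the highest-numbered checkpoint
```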
@ -2689,6 +2942,8 @@ class Accelerator:
elif self.distributed_type not in [DistributedType.MEGATRON_LM]:
schedulers = self._schedulers
dataloaders = self._dataloaders
# Call model loading hooks that might have been registered with
# accelerator.register_model_state_hook
for hook in self._load_model_state_pre_hook.values():
@ -2709,6 +2964,7 @@ class Accelerator:
models,
optimizers,
schedulers,
dataloaders,
self.state.process_index,
self.scaler,
map_location,
@ -2845,11 +3101,6 @@ class Accelerator:
model = self.unwrap_model(model)
state_dict = model.state_dict()
if state_dict is not None:
for k in state_dict:
if getattr(state_dict[k], "dtype", None) == torch.float16:
state_dict[k] = state_dict[k].float()
return state_dict
def register_for_checkpointing(self, *objects):
@ -2889,11 +3140,14 @@ class Accelerator:
self._custom_objects.extend(objects)
@contextmanager
def autocast(self, cache_enabled: bool = False):
def autocast(self, cache_enabled: bool = False, autocast_handler: AutocastKwargs = None):
"""
Will apply automatic mixed precision inside the block of this context manager, if it is enabled. Nothing
different will happen otherwise.
A different `autocast_handler` can be passed in to override the one set in the `Accelerator` object. This is
useful in blocks under `autocast` where you want to revert to fp32.
Example:
```python
@ -2904,7 +3158,19 @@ class Accelerator:
... train()
```
"""
autocast_context = get_mixed_precision_context_manager(self.native_amp, cache_enabled=cache_enabled)
if cache_enabled:
warnings.warn(
"Passing `cache_enabled=True` to `accelerator.autocast` is deprecated and will be removed in v0.23.0. "
"Please use the `AutocastKwargs` class instead and pass it to the `Accelerator` as a `kwarg_handler`.",
FutureWarning,
)
if self.autocast_handler is not None:
self.autocast_handler.cache_enabled = True
else:
self.autocast_handler = AutocastKwargs(cache_enabled=True)
if autocast_handler is None:
autocast_handler = self.autocast_handler
autocast_context = get_mixed_precision_context_manager(self.native_amp, autocast_handler)
autocast_context.__enter__()
yield
autocast_context.__exit__(*sys.exc_info())
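A sketch of overriding the autocast handler for a single block; fp16 here assumes a CUDA device is available.
```python
import torch
from accelerate import Accelerator
from accelerate.utils import AutocastKwargs

accelerator = Accelerator(mixed_precision="fp16")  # assumes a CUDA device
model = torch.nn.Linear(4, 4).to(accelerator.device)
x = torch.randn(2, 4, device=accelerator.device)

with accelerator.autocast():
    y = model(x)  # uses the handler configured on the Accelerator, if any

with accelerator.autocast(autocast_handler=AutocastKwargs(cache_enabled=True)):
    y = model(x)  # handler overridden for this block only
```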
@ -2955,3 +3221,14 @@ class Accelerator:
def __deepcopy__(self, memo):
logger.info("Deep copying the `Accelerator` object, note that this will point to the same original object.")
return self
def verify_device_map(self, model: torch.nn.Module) -> bool:
"""
Verifies that `model` has not been prepared with big model inference with a device-map resembling `auto`.
"""
# Checks if any of the child modules has the attribute `hf_device_map` and this map has more than one entry.
for m in model.modules():
if hasattr(m, "hf_device_map") and len(m.hf_device_map) > 1:
return True
return False
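What `verify_device_map()` reacts to, in a toy form: any submodule carrying a multi-entry `hf_device_map` (the attribute set by big model inference) trips the check. The attribute assignment below is only a simulation.
```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()
model = torch.nn.Linear(2, 2)

assert accelerator.verify_device_map(model) is False  # no `hf_device_map` attribute

# Simulate a model dispatched across two devices by big model inference.
model.hf_device_map = {"layer_0": 0, "layer_1": "cpu"}
assert accelerator.verify_device_map(model) is True
```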

View File

@ -12,8 +12,10 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import os
from contextlib import contextmanager
from functools import wraps
from typing import Dict, List, Optional, Union
import torch
@ -34,20 +36,25 @@ from .utils import (
find_tied_parameters,
get_balanced_memory,
infer_auto_device_map,
is_torch_version,
load_checkpoint_in_model,
offload_state_dict,
parse_flag_from_env,
retie_parameters,
)
logger = logging.getLogger(__name__)
@contextmanager
def init_empty_weights(include_buffers: bool = False):
def init_empty_weights(include_buffers: bool = None):
"""
A context manager under which models are initialized with all parameters on the meta device, therefore creating an
empty model. Useful when just initializing the model would blow the available RAM.
Args:
include_buffers (`bool`, *optional*, defaults to `False`):
include_buffers (`bool`, *optional*):
Whether or not to also put all buffers on the meta device while initializing.
Example:
@ -68,19 +75,21 @@ def init_empty_weights(include_buffers: bool = False):
</Tip>
"""
if include_buffers is None:
include_buffers = parse_flag_from_env("ACCELERATE_INIT_INCLUDE_BUFFERS", False)
with init_on_device(torch.device("meta"), include_buffers=include_buffers) as f:
yield f
@contextmanager
def init_on_device(device: torch.device, include_buffers: bool = False):
def init_on_device(device: torch.device, include_buffers: bool = None):
"""
A context manager under which models are initialized with all parameters on the specified device.
Args:
device (`torch.device`):
Device to initialize all parameters on.
include_buffers (`bool`, *optional*, defaults to `False`):
include_buffers (`bool`, *optional*):
Whether or not to also put all buffers on the meta device while initializing.
Example:
@ -93,6 +102,15 @@ def init_on_device(device: torch.device, include_buffers: bool = False):
tst = nn.Linear(100, 100)  # on `cuda` device
```
"""
if include_buffers is None:
include_buffers = parse_flag_from_env("ACCELERATE_INIT_INCLUDE_BUFFERS", False)
# TODO(shingjan): remove the torch version check once older versions are deprecated
if is_torch_version(">=", "2.0") and include_buffers:
with device:
yield
return
old_register_parameter = nn.Module.register_parameter
if include_buffers:
old_register_buffer = nn.Module.register_buffer
@ -286,6 +304,7 @@ def dispatch_model(
offload_buffers: bool = False,
skip_keys: Optional[Union[str, List[str]]] = None,
preload_module_classes: Optional[List[str]] = None,
force_hooks: bool = False,
):
"""
Dispatches a model according to a given device map. Layers of the model might be spread across GPUs, offloaded on
@ -316,17 +335,23 @@ def dispatch_model(
of the forward. This should only be used for classes that have submodules which are registered but not
called directly during the forward, for instance if a `dense` linear layer is registered, but at forward,
`dense.weight` and `dense.bias` are used in some operations instead of calling `dense` directly.
force_hooks (`bool`, *optional*, defaults to `False`):
Whether or not to force device hooks to be attached to the model even if all layers are dispatched to a
single device.
"""
# Error early if the device map is incomplete.
check_device_map(model, device_map)
# for backward compatibility
is_quantized = getattr(model, "is_quantized", False) or getattr(model, "is_loaded_in_8bit", False)
is_bnb_quantized = (
getattr(model, "is_quantized", False) or getattr(model, "is_loaded_in_8bit", False)
) and getattr(model, "quantization_method", "bitsandbytes") == "bitsandbytes"
# We attach hooks if the device_map have at least 2 different devices. Otherwise, the model in already loaded
# We attach hooks if the device_map has at least 2 different devices or if
# force_hooks is set to `True`. Otherwise, the model is already loaded
# on the unique device and the user can decide where to dispatch the model.
# If the model is quantized, we always force-dispatch the model
if (len(set(device_map.values())) > 1) or is_quantized:
if (len(set(device_map.values())) > 1) or is_bnb_quantized or force_hooks:
if main_device is None:
if set(device_map.values()) == {"cpu"} or set(device_map.values()) == {"cpu", "disk"}:
main_device = "cpu"
@ -379,6 +404,22 @@ def dispatch_model(
)
# Attaching the hook may break tied weights, so we retie them
retie_parameters(model, tied_params)
# add warning to cuda and to method
def add_warning(fn, model):
@wraps(fn)
def wrapper(*args, **kwargs):
logger.warning("You shouldn't move a model when it is dispatched on multiple devices.")
for param in model.parameters():
if param.device == torch.device("meta"):
raise RuntimeError("You can't move a model that has some modules offloaded to cpu or disk.")
return fn(*args, **kwargs)
return wrapper
model.to = add_warning(model.to, model)
model.cuda = add_warning(model.cuda, model)
else:
device = list(device_map.values())[0]
if device != "disk":
@ -403,6 +444,7 @@ def load_checkpoint_and_dispatch(
offload_state_dict: Optional[bool] = None,
skip_keys: Optional[Union[str, List[str]]] = None,
preload_module_classes: Optional[List[str]] = None,
force_hooks: bool = False,
):
"""
Loads a (potentially sharded) checkpoint inside a model, potentially sending weights to a given device as they are
@ -445,6 +487,9 @@ def load_checkpoint_and_dispatch(
of the forward. This should only be used for classes that have submodules which are registered but not
called directly during the forward, for instance if a `dense` linear layer is registered, but at forward,
`dense.weight` and `dense.bias` are used in some operations instead of calling `dense` directly.
force_hooks (`bool`, *optional*, defaults to `False`):
Whether or not to force device hooks to be attached to the model even if all layers are dispatched to a
single device.
Example:
@ -505,4 +550,5 @@ def load_checkpoint_and_dispatch(
offload_buffers=offload_buffers,
skip_keys=skip_keys,
preload_module_classes=preload_module_classes,
force_hooks=force_hooks,
)
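A minimal, CPU-only sketch of `force_hooks`: the checkpoint is created on the fly and both layers land on a single device, yet the dispatch hooks are still attached.
```python
import torch
from accelerate import init_empty_weights, load_checkpoint_and_dispatch

# Save a tiny state dict to reload.
model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.Linear(8, 8))
torch.save(model.state_dict(), "tiny_model.bin")

with init_empty_weights():
    empty = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.Linear(8, 8))

loaded = load_checkpoint_and_dispatch(
    empty,
    checkpoint="tiny_model.bin",
    device_map={"0": "cpu", "1": "cpu"},  # a single device...
    force_hooks=True,                     # ...but hooks are attached anyway
)
```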

View File

@ -25,6 +25,7 @@ from .utils import (
MODEL_NAME,
OPTIMIZER_NAME,
RNG_STATE_NAME,
SAMPLER_NAME,
SCALER_NAME,
SCHEDULER_NAME,
get_pretty_name,
@ -49,8 +50,10 @@ def save_accelerator_state(
model_states: List[dict],
optimizers: list,
schedulers: list,
dataloaders: list,
process_index: int,
scaler: GradScaler = None,
save_on_each_node: bool = False,
):
"""
Saves the current states of the models, optimizers, scaler, and RNG generators to a given directory.
@ -64,31 +67,49 @@ def save_accelerator_state(
A list of optimizer instances
schedulers (`List[torch.optim.lr_scheduler._LRScheduler]`):
A list of learning rate schedulers
dataloaders (`List[torch.utils.data.DataLoader]`):
A list of dataloader instances to save their sampler states
process_index (`int`):
The current process index in the Accelerator state
scaler (`torch.cuda.amp.GradScaler`, *optional*):
An optional gradient scaler instance to save
save_on_each_node (`bool`, *optional*):
Whether to save on every node, or only the main node.
"""
# Model states
for i, state in enumerate(model_states):
weights_name = f"{MODEL_NAME}.bin" if i == 0 else f"{MODEL_NAME}_{i}.bin"
output_model_file = os.path.join(output_dir, weights_name)
save(state, output_model_file)
save(state, output_model_file, save_on_each_node=save_on_each_node)
logger.info(f"Model weights saved in {output_model_file}")
# Optimizer states
for i, opt in enumerate(optimizers):
state = opt.state_dict()
optimizer_name = f"{OPTIMIZER_NAME}.bin" if i == 0 else f"{OPTIMIZER_NAME}_{i}.bin"
output_optimizer_file = os.path.join(output_dir, optimizer_name)
save(state, output_optimizer_file)
save(state, output_optimizer_file, save_on_each_node=save_on_each_node)
logger.info(f"Optimizer state saved in {output_optimizer_file}")
# Scheduler states
for i, scheduler in enumerate(schedulers):
state = scheduler.state_dict()
scheduler_name = f"{SCHEDULER_NAME}.bin" if i == 0 else f"{SCHEDULER_NAME}_{i}.bin"
output_scheduler_file = os.path.join(output_dir, scheduler_name)
save(state, output_scheduler_file)
save(state, output_scheduler_file, save_on_each_node=save_on_each_node)
logger.info(f"Scheduler state saved in {output_scheduler_file}")
# DataLoader states
for i, dataloader in enumerate(dataloaders):
sampler_name = f"{SAMPLER_NAME}.bin" if i == 0 else f"{SAMPLER_NAME}_{i}.bin"
output_sampler_file = os.path.join(output_dir, sampler_name)
# Only save if we have our custom sampler
from .data_loader import IterableDatasetShard, SeedableRandomSampler
if isinstance(dataloader.dataset, IterableDatasetShard):
sampler = dataloader.sampler.sampler
if isinstance(sampler, SeedableRandomSampler):
save(sampler, output_sampler_file, save_on_each_node=save_on_each_node)
logger.info(f"Sampler state for dataloader {i} saved in {output_sampler_file}")
# GradScaler state
if scaler is not None:
state = scaler.state_dict()
@ -118,6 +139,7 @@ def load_accelerator_state(
models,
optimizers,
schedulers,
dataloaders,
process_index,
scaler=None,
map_location=None,
@ -174,6 +196,19 @@ def load_accelerator_state(
scheduler.load_state_dict(torch.load(input_scheduler_file))
logger.info("All scheduler states loaded successfully")
for i, dataloader in enumerate(dataloaders):
sampler_name = f"{SAMPLER_NAME}.bin" if i == 0 else f"{SAMPLER_NAME}_{i}.bin"
input_sampler_file = os.path.join(input_dir, sampler_name)
# Only load if we have our custom sampler
from .data_loader import IterableDatasetShard, SeedableRandomSampler
if isinstance(dataloader.dataset, IterableDatasetShard):
sampler = dataloader.sampler.sampler
if isinstance(sampler, SeedableRandomSampler):
dataloader.sampler.sampler = torch.load(input_sampler_file)
logger.info("All dataloader sampler states loaded successfully")
# GradScaler state
if scaler is not None:
input_scaler_file = os.path.join(input_dir, SCALER_NAME)
@ -197,14 +232,14 @@ def load_accelerator_state(
logger.info("Could not load random states")
def save_custom_state(obj, path, index: int = 0):
def save_custom_state(obj, path, index: int = 0, save_on_each_node: bool = False):
"""
Saves the state of `obj` to `{path}/custom_checkpoint_{index}.pkl`
"""
# Should this be the right way to get a qual_name type value from `obj`?
save_location = Path(path) / f"custom_checkpoint_{index}.pkl"
logger.info(f"Saving the state of {get_pretty_name(obj)} to {save_location}")
torch.save(obj.state_dict(), save_location)
save(obj.state_dict(), save_location, save_on_each_node=save_on_each_node)
def load_custom_state(obj, path, index: int = 0):

View File

@ -18,6 +18,7 @@ from argparse import ArgumentParser
from accelerate.commands.config import get_config_parser
from accelerate.commands.env import env_command_parser
from accelerate.commands.estimate import estimate_command_parser
from accelerate.commands.launch import launch_command_parser
from accelerate.commands.test import test_command_parser
from accelerate.commands.tpu import tpu_command_parser
@ -29,6 +30,7 @@ def main():
# Register commands
get_config_parser(subparsers=subparsers)
estimate_command_parser(subparsers=subparsers)
env_command_parser(subparsers=subparsers)
launch_command_parser(subparsers=subparsers)
tpu_command_parser(subparsers=subparsers)

View File

@ -21,6 +21,7 @@ from ...utils import (
DistributedType,
is_deepspeed_available,
is_mps_available,
is_npu_available,
is_transformers_available,
is_xpu_available,
)
@ -59,10 +60,11 @@ def get_cluster_input():
main_process_port = None
rdzv_backend = "static"
same_network = True
debug = False
if distributed_type in [
DistributedType.MULTI_GPU,
DistributedType.MULTI_GPU,
DistributedType.MULTI_NPU,
DistributedType.MULTI_XPU,
DistributedType.MULTI_CPU,
]:
@ -94,10 +96,16 @@ def get_cluster_input():
rdzv_backend = _ask_field(
"What rendezvous backend will you use? ('static', 'c10d', ...): ", default="static"
)
debug = _ask_field(
"Should distributed operations be checked while running for errors? This can avoid timeout issues but will be slower. [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
if distributed_type == DistributedType.NO:
use_cpu = _ask_field(
"Do you want to run your training on CPU only (even if a GPU / Apple Silicon device is available)? [yes/NO]:",
"Do you want to run your training on CPU only (even if a GPU / Apple Silicon / Ascend NPU device is available)? [yes/NO]:",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
@ -305,7 +313,7 @@ def get_cluster_input():
)
fsdp_config = {}
if distributed_type in [DistributedType.MULTI_GPU]:
if distributed_type in [DistributedType.MULTI_GPU, DistributedType.MULTI_NPU, DistributedType.MULTI_XPU]:
use_fsdp = _ask_field(
"Do you want to use FullyShardedDataParallel? [yes/NO]: ",
_convert_yes_no_to_bool,
@ -335,11 +343,18 @@ def get_cluster_input():
lambda x: FSDP_AUTO_WRAP_POLICY[int(x)],
)
if fsdp_config["fsdp_auto_wrap_policy"] == FSDP_AUTO_WRAP_POLICY[0]:
fsdp_config["fsdp_transformer_layer_cls_to_wrap"] = _ask_field(
"Specify the comma-separated list of transformer layer class names (case-sensitive) to wrap ,e.g, :"
"`BertLayer`, `GPTJBlock`, `T5Block`, `BertLayer,BertEmbeddings,BertSelfOutput` ...? : ",
str,
use_no_split_modules = _ask_field(
"Do you want to use the model's `_no_split_modules` to wrap. Only applicable for 🤗 Transformers [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
if not use_no_split_modules:
fsdp_config["fsdp_transformer_layer_cls_to_wrap"] = _ask_field(
"Specify the comma-separated list of transformer layer class names (case-sensitive) to wrap ,e.g, :"
"`BertLayer`, `GPTJBlock`, `T5Block`, `BertLayer,BertEmbeddings,BertSelfOutput` ...? : ",
str,
)
elif fsdp_config["fsdp_auto_wrap_policy"] == FSDP_AUTO_WRAP_POLICY[1]:
fsdp_config["fsdp_min_num_params"] = _ask_field(
"What should be your FSDP's minimum number of parameters for Default Auto Wrapping Policy? [1e8]: ",
@ -371,12 +386,21 @@ def get_cluster_input():
default=False,
error_message="Please enter yes or no.",
)
fsdp_config["fsdp_sync_module_states"] = _ask_field(
"Do you want each individually wrapped FSDP unit to broadcast module parameters from rank 0 at the start? [yes/NO]: ",
fsdp_config["fsdp_cpu_ram_efficient_loading"] = _ask_field(
"Do you want to enable CPU RAM efficient model loading? Only applicable for 🤗 Transformers models. [YES/no]: ",
_convert_yes_no_to_bool,
default=False,
default=True,
error_message="Please enter yes or no.",
)
if fsdp_config["fsdp_cpu_ram_efficient_loading"]:
fsdp_config["fsdp_sync_module_states"] = True
else:
fsdp_config["fsdp_sync_module_states"] = _ask_field(
"Do you want each individually wrapped FSDP unit to broadcast module parameters from rank 0 at the start? [YES/no]: ",
_convert_yes_no_to_bool,
default=True,
error_message="Please enter yes or no.",
)
megatron_lm_config = {}
if distributed_type in [DistributedType.MULTI_GPU]:
@ -477,6 +501,11 @@ def get_cluster_input():
else:
num_processes = 1
if (distributed_type == DistributedType.MULTI_GPU) and (num_machines == 1) and (num_processes == 1):
raise ValueError(
f"Specified distributed type {distributed_type} but only using 1 GPU on a single machine. Please select `No distributed training` for the type of machine you are using."
)
if (
distributed_type
in [
@ -488,8 +517,12 @@ def get_cluster_input():
and not use_cpu
and not use_mps
):
if is_npu_available():
machine_type = "NPU(s)"
else:
machine_type = "GPU(s)"
gpu_ids = _ask_field(
"What GPU(s) (by id) should be used for training on this machine as a comma-seperated list? [all]:",
f"What {machine_type} (by id) should be used for training on this machine as a comma-seperated list? [all]:",
default="all",
)
@ -617,4 +650,5 @@ def get_cluster_input():
tpu_use_sudo=tpu_use_sudo,
tpu_use_cluster=tpu_use_cluster,
dynamo_config=dynamo_config,
debug=debug,
)

View File

@ -78,6 +78,7 @@ class BaseConfig:
distributed_type: Union[DistributedType, SageMakerDistributedType]
mixed_precision: str
use_cpu: bool
debug: bool
def to_dict(self):
result = self.__dict__
@ -106,6 +107,15 @@ class BaseConfig:
config_dict["dynamo_config"] = {} if dynamo_backend == "NO" else {"dynamo_backend": dynamo_backend}
if "use_cpu" not in config_dict:
config_dict["use_cpu"] = False
if "debug" not in config_dict:
config_dict["debug"] = False
extra_keys = sorted(set(config_dict.keys()) - set(cls.__dataclass_fields__.keys()))
if len(extra_keys) > 0:
raise ValueError(
f"The config file at {json_file} had unknown keys ({extra_keys}), please try upgrading your `accelerate`"
" version or fix (and potentially remove) these keys from your config file."
)
return cls(**config_dict)
def to_json_file(self, json_file):
@ -120,7 +130,6 @@ class BaseConfig:
config_dict = yaml.safe_load(f)
if "compute_environment" not in config_dict:
config_dict["compute_environment"] = ComputeEnvironment.LOCAL_MACHINE
if "mixed_precision" not in config_dict:
config_dict["mixed_precision"] = "fp16" if ("fp16" in config_dict and config_dict["fp16"]) else None
if isinstance(config_dict["mixed_precision"], bool) and not config_dict["mixed_precision"]:
@ -132,6 +141,14 @@ class BaseConfig:
config_dict["dynamo_config"] = {} if dynamo_backend == "NO" else {"dynamo_backend": dynamo_backend}
if "use_cpu" not in config_dict:
config_dict["use_cpu"] = False
if "debug" not in config_dict:
config_dict["debug"] = False
extra_keys = sorted(set(config_dict.keys()) - set(cls.__dataclass_fields__.keys()))
if len(extra_keys) > 0:
raise ValueError(
f"The config file at {yaml_file} had unknown keys ({extra_keys}), please try upgrading your `accelerate`"
" version or fix (and potentially remove) these keys from your config file."
)
return cls(**config_dict)
def to_yaml_file(self, yaml_file):

View File

@ -30,13 +30,15 @@ DYNAMO_BACKENDS = [
"EAGER",
"AOT_EAGER",
"INDUCTOR",
"NVFUSER",
"AOT_NVFUSER",
"AOT_CUDAGRAPHS",
"AOT_TS_NVFUSER",
"NVPRIMS_NVFUSER",
"CUDAGRAPHS",
"OFI",
"FX2TRT",
"ONNXRT",
"TENSORRT",
"IPEX",
"TVM",
]

View File

@ -86,6 +86,7 @@ def write_basic_config(mixed_precision="no", save_location: str = default_json_c
config["use_cpu"] = True
config["num_processes"] = 1
config["distributed_type"] = "NO"
config["debug"] = False
config = ClusterConfig(**config)
config.to_json_file(path)
return path

View File

@ -221,6 +221,15 @@ def get_sagemaker_input():
ec2_instance_query += "? [ml.p3.2xlarge]:"
ec2_instance_type = _ask_field(ec2_instance_query, lambda x: str(x).lower(), default="ml.p3.2xlarge")
debug = False
if distributed_type != SageMakerDistributedType.NO:
debug = _ask_field(
"Should distributed operations be checked while running for errors? This can avoid timeout issues but will be slower. [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
num_machines = 1
if distributed_type in (SageMakerDistributedType.DATA_PARALLEL, SageMakerDistributedType.MODEL_PARALLEL):
num_machines = _ask_field(
@ -254,4 +263,5 @@ def get_sagemaker_input():
num_machines=num_machines,
sagemaker_inputs_file=sagemaker_inputs_file,
sagemaker_metrics_file=sagemaker_metrics_file,
debug=debug,
)

View File

@ -0,0 +1,270 @@
#!/usr/bin/env python
# Copyright 2023 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
from huggingface_hub import model_info
from huggingface_hub.utils import GatedRepoError, RepositoryNotFoundError
from accelerate import init_empty_weights
from accelerate.utils import (
calculate_maximum_sizes,
convert_bytes,
is_timm_available,
is_transformers_available,
)
if is_transformers_available():
import transformers
from transformers import AutoConfig, AutoModel
if is_timm_available():
import timm
def verify_on_hub(repo: str, token: str = None):
"Verifies that the model is on the hub and returns the model info."
try:
return model_info(repo, token=token)
except GatedRepoError:
return "gated"
except RepositoryNotFoundError:
return "repo"
def check_has_model(error):
"""
Checks what library spawned `error` when a model is not found
"""
if is_timm_available() and isinstance(error, RuntimeError) and "Unknown model" in error.args[0]:
return "timm"
elif (
is_transformers_available()
and isinstance(error, OSError)
and "does not appear to have a file named" in error.args[0]
):
return "transformers"
else:
return "unknown"
def create_empty_model(model_name: str, library_name: str, trust_remote_code: bool = False, access_token: str = None):
"""
Creates an empty model from its parent library on the `Hub` to calculate the overall memory consumption.
Args:
model_name (`str`):
The model name on the Hub
library_name (`str`):
The library the model has an integration with, such as `transformers`. Will be used if `model_name` has no
metadata on the Hub to determine the library.
trust_remote_code (`bool`, *optional*, defaults to `False`):
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option
should only be set to `True` for repositories you trust and in which you have read the code, as it will
execute code present on the Hub on your local machine.
access_token (`str`, *optional*, defaults to `None`):
The access token to use to access private or gated models on the Hub. (for use on the Gradio app)
Returns:
`torch.nn.Module`: The torch model that has been initialized on the `meta` device.
"""
model_info = verify_on_hub(model_name, access_token)
# Simplified errors
if model_info == "gated":
raise GatedRepoError(
f"Repo for model `{model_name}` is gated. You must be authenticated to access it. Please run `huggingface-cli login`."
)
elif model_info == "repo":
raise RepositoryNotFoundError(
f"Repo for model `{model_name}` does not exist on the Hub. If you are trying to access a private repo,"
" make sure you are authenticated via `huggingface-cli login` and have access."
)
if library_name is None:
library_name = getattr(model_info, "library_name", False)
if not library_name:
raise ValueError(
f"Model `{model_name}` does not have any library metadata on the Hub, please manually pass in a `--library_name` to use (such as `transformers`)"
)
if library_name == "transformers":
if not is_transformers_available():
raise ImportError(
f"To check `{model_name}`, `transformers` must be installed. Please install it via `pip install transformers`"
)
print(f"Loading pretrained config for `{model_name}` from `transformers`...")
auto_map = model_info.config.get("auto_map", False)
config = AutoConfig.from_pretrained(model_name, trust_remote_code=trust_remote_code)
with init_empty_weights():
# remote code could specify a specific `AutoModel` class in the `auto_map`
constructor = AutoModel
if isinstance(auto_map, dict):
value = None
for key in auto_map.keys():
if key.startswith("AutoModelFor"):
value = key
break
if value is not None:
constructor = getattr(transformers, value)
model = constructor.from_config(config, trust_remote_code=trust_remote_code)
elif library_name == "timm":
if not is_timm_available():
raise ImportError(
f"To check `{model_name}`, `timm` must be installed. Please install it via `pip install timm`"
)
print(f"Loading pretrained config for `{model_name}` from `timm`...")
with init_empty_weights():
model = timm.create_model(model_name, pretrained=False)
else:
raise ValueError(
f"Library `{library_name}` is not supported yet, please open an issue on GitHub for us to add support."
)
return model
def create_ascii_table(headers: list, rows: list, title: str):
"Creates a pretty table from a list of rows, minimal version of `tabulate`."
sep_char, in_between = "│", "─"
column_widths = []
for i in range(len(headers)):
column_values = [row[i] for row in rows] + [headers[i]]
max_column_width = max(len(value) for value in column_values)
column_widths.append(max_column_width)
formats = [f"%{column_widths[i]}s" for i in range(len(rows[0]))]
pattern = f"{sep_char}{sep_char.join(formats)}{sep_char}"
diff = 0
def make_row(left_char, middle_char, right_char):
return f"{left_char}{middle_char.join([in_between * n for n in column_widths])}{in_between * diff}{right_char}"
separator = make_row("├", "┼", "┤")
if len(title) > sum(column_widths):
diff = abs(len(title) - len(separator))
column_widths[-1] += diff
# Update with diff
separator = make_row("├", "┼", "┤")
initial_rows = [
make_row("┌", in_between, "┐"),
f"{sep_char}{title.center(len(separator) - 2)}{sep_char}",
make_row("├", "┬", "┤"),
]
table = "\n".join(initial_rows) + "\n"
column_widths[-1] += diff
centered_line = [text.center(column_widths[i]) for i, text in enumerate(headers)]
table += f"{pattern % tuple(centered_line)}\n{separator}\n"
for i, line in enumerate(rows):
centered_line = [t.center(column_widths[i]) for i, t in enumerate(line)]
table += f"{pattern % tuple(centered_line)}\n"
table += f'{"".join([in_between * n for n in column_widths])}'
return table
def estimate_command_parser(subparsers=None):
if subparsers is not None:
parser = subparsers.add_parser("estimate-memory")
else:
parser = argparse.ArgumentParser(description="Model size estimator for fitting a model onto CUDA memory.")
parser.add_argument("model_name", type=str, help="The model name on the Hugging Face Hub.")
parser.add_argument(
"--library_name",
type=str,
help="The library the model has an integration with, such as `transformers`, needed only if this information is not stored on the Hub.",
choices=["timm", "transformers"],
)
parser.add_argument(
"--dtypes",
type=str,
nargs="+",
default=["float32", "float16", "int8", "int4"],
help="The dtypes to use for the model, must be one (or many) of `float32`, `float16`, `int8`, and `int4`",
choices=["float32", "float16", "int8", "int4"],
)
parser.add_argument(
"--trust_remote_code",
action="store_true",
help="""Whether or not to allow for custom models defined on the Hub in their own modeling files. This flag
should only be used for repositories you trust and in which you have read the code, as it will execute
code present on the Hub on your local machine.""",
)
if subparsers is not None:
parser.set_defaults(func=estimate_command)
return parser
def gather_data(args):
"Creates an empty model and gathers the data for the sizes"
try:
model = create_empty_model(
args.model_name, library_name=args.library_name, trust_remote_code=args.trust_remote_code
)
except (RuntimeError, OSError) as e:
library = check_has_model(e)
if library != "unknown":
raise RuntimeError(
f"Tried to load `{args.model_name}` with `{library}` but a possible model to load was not found inside the repo."
)
raise e
total_size, largest_layer = calculate_maximum_sizes(model)
data = []
for dtype in args.dtypes:
dtype_total_size = total_size
dtype_largest_layer = largest_layer[0]
if dtype == "float16":
dtype_total_size /= 2
dtype_largest_layer /= 2
elif dtype == "int8":
dtype_total_size /= 4
dtype_largest_layer /= 4
elif dtype == "int4":
dtype_total_size /= 8
dtype_largest_layer /= 8
dtype_training_size = dtype_total_size * 4
data.append([dtype, dtype_largest_layer, dtype_total_size, dtype_training_size])
return data
def estimate_command(args):
data = gather_data(args)
for row in data:
for i, item in enumerate(row):
if isinstance(item, (int, float)):
row[i] = convert_bytes(item)
headers = ["dtype", "Largest Layer", "Total Size", "Training using Adam"]
title = f"Memory Usage for loading `{args.model_name}`"
table = create_ascii_table(headers, data, title)
print(table)
def main():
parser = estimate_command_parser()
args = parser.parse_args()
estimate_command(args)
if __name__ == "__main__":
main()
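The new estimator can also be driven programmatically, mirroring `accelerate estimate-memory` on the command line; the model name is only an example and resolving it needs network access plus `transformers`.
```python
from accelerate.commands.estimate import estimate_command_parser, gather_data

parser = estimate_command_parser()
args = parser.parse_args(["bert-base-cased", "--dtypes", "float32", "float16"])

# Each row is [dtype, largest layer size, total size, approx. training size with Adam].
for dtype, largest_layer, total_size, training_size in gather_data(args):
    print(dtype, total_size, training_size)
```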

View File

@ -524,9 +524,17 @@ def launch_command_parser(subparsers=None):
help="If True, allows non-uniform `requires_grad` during init, which means support for interspersed frozen and trainable paramteres."
" (useful only when `use_fsdp` flag is passed).",
)
fsdp_args.add_argument(
"--fsdp_cpu_ram_efficient_loading",
default="true",
type=str,
help="If True, only the first process loads the pretrained model checkoint while all other processes have empty weights. "
"Only applicable for 🤗 Transformers. When using this, `--fsdp_sync_module_states` needs to True. "
"(useful only when `use_fsdp` flag is passed).",
)
fsdp_args.add_argument(
"--fsdp_sync_module_states",
default="false",
default="true",
type=str,
help="If True, each individually wrapped FSDP unit will broadcast module parameters from rank 0."
" (useful only when `use_fsdp` flag is passed).",
@ -877,6 +885,9 @@ def _validate_launch_command(args):
and getattr(args, name, None) is None
):
setattr(args, name, attr)
if not args.debug:
args.debug = defaults.debug
if not args.mixed_precision:
if defaults.mixed_precision is None:
args.mixed_precision = "no"
@ -905,6 +916,8 @@ def _validate_launch_command(args):
else:
args.num_processes = torch.cuda.device_count()
warned.append(f"\t`--num_processes` was set to a value of `{args.num_processes}`")
if args.debug is None:
args.debug = False
if not args.multi_gpu and (
(args.use_xpu and is_xpu_available() and torch.xpu.device_count() > 1)
or (is_npu_available() and torch.npu.device_count() > 1)
@ -926,6 +939,8 @@ def _validate_launch_command(args):
if args.dynamo_backend is None:
warned.append("\t`--dynamo_backend` was set to a value of `'no'`")
args.dynamo_backend = "no"
if args.debug:
logger.debug("Running script in debug mode, expect distributed operations to be slightly slower.")
is_aws_env_disabled = defaults is None or (
defaults is not None and defaults.compute_environment != ComputeEnvironment.AMAZON_SAGEMAKER

View File

@ -14,10 +14,10 @@
import math
from contextlib import suppress
from typing import List, Optional, Union
from typing import Callable, List, Optional, Union
import torch
from torch.utils.data import BatchSampler, DataLoader, IterableDataset
from torch.utils.data import BatchSampler, DataLoader, IterableDataset, RandomSampler
from .logging import get_logger
from .state import AcceleratorState, DistributedType, GradientState, is_tpu_available
@ -64,6 +64,41 @@ for v, additional_kwargs in _PYTORCH_DATALOADER_ADDITIONAL_KWARGS.items():
_PYTORCH_DATALOADER_KWARGS.update(additional_kwargs)
class SeedableRandomSampler(RandomSampler):
"""
Same as a random sampler, except that in `__iter__` a seed can be used.
Needed specifically in distributed cases, when the random generator for each GPU needs to start from the same seed
and be fully reproducible on multiple iterations.
If a custom `generator` is passed, it will rely on its initial seed as well as the current iteration it is on
(stored in `self.epoch`).
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.epoch = 0
def __iter__(self):
g = torch.Generator()
if self.generator is not None:
seed = self.epoch + self.generator.initial_seed()
else:
seed = self.epoch
g.manual_seed(seed)
n = len(self.data_source)
# Taken 1:1 from torch.utils.data.sampler.RandomSampler.__iter__
if self.replacement:
for _ in range(self.num_samples // 32):
yield from torch.randint(high=n, size=(32,), dtype=torch.int64, generator=g).tolist()
else:
yield from torch.randperm(n, generator=g).tolist()
def set_epoch(self, epoch: int):
"Sets the current iteration of the sampler."
self.epoch = epoch
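A quick sketch of what the seedable sampler buys: the permutation depends only on the generator seed and the epoch, so replaying an epoch reproduces the exact order.
```python
import torch
from accelerate.data_loader import SeedableRandomSampler

data = list(range(8))
sampler = SeedableRandomSampler(data, generator=torch.Generator().manual_seed(42))

sampler.set_epoch(0)
epoch0 = list(sampler)          # deterministic for seed 42 + epoch 0
sampler.set_epoch(1)
epoch1 = list(sampler)          # a different, but equally reproducible, order

sampler.set_epoch(0)
assert list(sampler) == epoch0  # replaying epoch 0 gives the same permutation
```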
class BatchSamplerShard(BatchSampler):
"""
Wraps a PyTorch `BatchSampler` to generate batches for one of the processes only. Instances of this class will
@ -271,7 +306,25 @@ class IterableDatasetShard(IterableDataset):
self.process_index = process_index
self.split_batches = split_batches
def set_epoch(self, epoch):
self.epoch = epoch
if hasattr(self.dataset, "set_epoch"):
self.dataset.set_epoch(epoch)
def __len__(self):
# We will just raise the downstream error if the underlying dataset is not sized
if self.drop_last:
return (len(self.dataset) // (self.batch_size * self.num_processes)) * self.batch_size
else:
return math.ceil(len(self.dataset) / (self.batch_size * self.num_processes)) * self.batch_size
def __iter__(self):
if (
not hasattr(self.dataset, "set_epoch")
and hasattr(self.dataset, "generator")
and isinstance(self.dataset.generator, torch.Generator)
):
self.dataset.generator.manual_seed(self.epoch)
real_batch_size = self.batch_size if self.split_batches else (self.batch_size * self.num_processes)
process_batch_size = (self.batch_size // self.num_processes) if self.split_batches else self.batch_size
process_slice = range(self.process_index * process_batch_size, (self.process_index + 1) * process_batch_size)
@ -324,8 +377,9 @@ class DataLoaderStateMixin:
"Prepares the gradient state for the current dataloader"
self.reset()
with suppress(Exception):
length = getattr(self.dataset, "total_dataset_length", len(self.dataset))
self.remainder = length % self.total_batch_size
if not self._drop_last:
length = getattr(self.dataset, "total_dataset_length", len(self.dataset))
self.remainder = length % self.total_batch_size
self.gradient_state._add_dataloader(self)
def end(self):
@ -352,7 +406,7 @@ class DataLoaderShard(DataLoader, DataLoaderStateMixin):
- `"generator"`: an optional `torch.Generator`
synchronized_generator (`torch.Generator`, *optional*):
A random number generator to keep synchronized across processes.
split_batches (`int`, *optional*, defaults to 0):
skip_batches (`int`, *optional*, defaults to 0):
The number of batches to skip at the beginning.
kwargs:
All other keyword arguments to pass to the regular `DataLoader` initialization.
@ -366,18 +420,31 @@ class DataLoaderShard(DataLoader, DataLoaderStateMixin):
- **total_dataset_length** (`int`) -- Total length of the inner dataset across all processes.
"""
def __init__(self, dataset, device=None, rng_types=None, synchronized_generator=None, skip_batches=0, **kwargs):
def __init__(
self,
dataset,
device=None,
rng_types=None,
synchronized_generator=None,
skip_batches=0,
_drop_last: bool = False,
**kwargs,
):
super().__init__(dataset, **kwargs)
self.device = device
self.rng_types = rng_types
self.synchronized_generator = synchronized_generator
self.skip_batches = skip_batches
self.gradient_state = GradientState()
self._drop_last = _drop_last
self.iteration = 0
def __iter__(self):
if self.rng_types is not None:
synchronize_rng_states(self.rng_types, self.synchronized_generator)
self.begin()
self.set_epoch(self.iteration)
dataloader_iter = super().__iter__()
# We iterate one batch ahead to check when we are at the end
try:
@ -401,8 +468,21 @@ class DataLoaderShard(DataLoader, DataLoaderStateMixin):
if batch_index >= self.skip_batches:
yield current_batch
break
self.iteration += 1
self.end()
def set_epoch(self, epoch: int):
# In case it is manually passed in, the user can set it to whatever epoch they like
if self.iteration != epoch:
self.iteration = epoch
if hasattr(self.batch_sampler, "sampler") and hasattr(self.batch_sampler.sampler, "set_epoch"):
self.batch_sampler.sampler.set_epoch(epoch)
# We also support custom `Dataset` implementations that define `set_epoch`,
# such as HF `datasets.Dataset` objects
elif hasattr(self.dataset, "set_epoch"):
self.dataset.set_epoch(epoch)
@property
def total_batch_size(self):
batch_sampler = self.sampler if isinstance(self.sampler, BatchSampler) else self.batch_sampler
@ -485,7 +565,9 @@ class DataLoaderDispatcher(DataLoader, DataLoaderStateMixin):
- **total_dataset_length** (`int`) -- Total length of the inner dataset across all processes.
"""
def __init__(self, dataset, split_batches: bool = False, skip_batches=0, _drop_last: bool = False, **kwargs):
def __init__(
self, dataset, split_batches: bool = False, skip_batches=0, _drop_last: bool = False, slice_fn=None, **kwargs
):
shuffle = False
if is_torch_version(">=", "1.11.0"):
from torch.utils.data.datapipes.iter.combinatorics import ShufflerIterDataPipe
@ -503,6 +585,9 @@ class DataLoaderDispatcher(DataLoader, DataLoaderStateMixin):
self._drop_last = _drop_last
self.skip_batches = skip_batches
self.slice_fn = slice_tensors if slice_fn is None else slice_fn
self.iteration = 0
def _fetch_batches(self, iterator):
batches, batch = None, None
# On process 0, we gather the batch to dispatch.
@ -542,6 +627,7 @@ class DataLoaderDispatcher(DataLoader, DataLoaderStateMixin):
def __iter__(self):
self.begin()
self.set_epoch(self.iteration)
main_iterator = None
if is_torch_version(">=", "2.0.1"):
# NOTE PyTorch DataLoader adds forward compatibilities for DataPipes, which broadcasts
@ -567,7 +653,12 @@ class DataLoaderDispatcher(DataLoader, DataLoaderStateMixin):
if not self._drop_last and first_batch is None:
# We keep at least num processes elements of the first batch to be able to complete the last batch
first_batch = slice_tensors(batch, slice(0, self.state.num_processes))
first_batch = self.slice_fn(
batch,
slice(0, self.state.num_processes),
process_index=self.state.process_index,
num_processes=self.state.num_processes,
)
if batch is None:
raise ValueError(
@ -593,7 +684,12 @@ class DataLoaderDispatcher(DataLoader, DataLoaderStateMixin):
batch_size += 1
data_slice = slice(self.state.process_index * batch_size, (self.state.process_index + 1) * batch_size)
batch = slice_tensors(batch, data_slice)
batch = self.slice_fn(
batch,
data_slice,
process_index=self.state.process_index,
num_processes=self.state.num_processes,
)
if stop_iteration:
self.end_of_dataloader = True
@ -601,8 +697,18 @@ class DataLoaderDispatcher(DataLoader, DataLoaderStateMixin):
if batch_index >= self.skip_batches:
yield batch
batch_index += 1
self.iteration += 1
self.end()
def set_epoch(self, epoch: int):
# In case it is manually passed in, the user can set it to whatever epoch they like
if self.iteration != epoch:
self.iteration = epoch
if hasattr(self.batch_sampler.sampler, "set_epoch"):
self.batch_sampler.sampler.set_epoch(epoch)
elif hasattr(self.dataset, "set_epoch"):
self.dataset.set_epoch(epoch)
def __len__(self):
whole_length = super().__len__()
if self.split_batches:
@ -633,6 +739,7 @@ def prepare_data_loader(
rng_types: Optional[List[Union[str, RNGType]]] = None,
dispatch_batches: Optional[bool] = None,
even_batches: bool = True,
slice_fn_for_dispatch: Optional[Callable] = None,
) -> DataLoader:
"""
Wraps a PyTorch `DataLoader` to generate batches for one of the processes only.
@ -682,6 +789,10 @@ def prepare_data_loader(
If set to `True`, in cases where the total batch size across all processes does not exactly divide the
dataset, samples at the start of the dataset will be duplicated so the batch can be divided equally among
all workers.
slice_fn_for_dispatch (`Callable`, *optional*):
If passed, this function will be used to slice tensors across `num_processes`. Will default to
[`~utils.slice_tensors`]. This argument is used only when `dispatch_batches` is set to `True` and will be
ignored otherwise.
Returns:
`torch.utils.data.dataloader.DataLoader`: A new data loader that will yield the portion of the batches
@ -720,6 +831,23 @@ def prepare_data_loader(
new_batch_sampler = dataloader.batch_sampler if not isinstance(new_dataset, IterableDataset) else None
sampler_is_batch_sampler = False
synchronized_generator = None
sampler_is_batch_sampler = isinstance(dataloader.sampler, BatchSampler)
if sampler_is_batch_sampler:
sampler = dataloader.sampler.sampler
else:
sampler = dataloader.batch_sampler.sampler
if isinstance(sampler, RandomSampler) and num_processes > 1:
# When iterating through the dataloader during distributed processes
# we want to ensure that on each process we are iterating through the same
# samples in the same order if a seed is set. This requires a tweak
# to the `torch.utils.data.RandomSampler` class (if used).
sampler = SeedableRandomSampler(
data_source=sampler.data_source,
replacement=sampler.replacement,
num_samples=sampler._num_samples,
generator=getattr(sampler, "generator", torch.Generator()),
)
# No change if no multiprocess
if (num_processes != 1 or state.distributed_type == DistributedType.MEGATRON_LM) and not dispatch_batches:
if isinstance(new_dataset, IterableDataset):
@ -734,17 +862,6 @@ def prepare_data_loader(
split_batches=split_batches,
)
else:
# New batch sampler for the current process.
sampler_is_batch_sampler = isinstance(dataloader.sampler, BatchSampler)
if sampler_is_batch_sampler:
sampler = dataloader.sampler.sampler
else:
sampler = dataloader.batch_sampler.sampler
if hasattr(sampler, "generator"):
if sampler.generator is None:
sampler.generator = torch.Generator()
synchronized_generator = sampler.generator
batch_sampler = dataloader.sampler if sampler_is_batch_sampler else dataloader.batch_sampler
new_batch_sampler = BatchSamplerShard(
batch_sampler,
@ -778,7 +895,11 @@ def prepare_data_loader(
kwargs["batch_size"] = (
dataloader.batch_size // num_processes if split_batches and not dispatch_batches else dataloader.batch_size
)
if isinstance(sampler, SeedableRandomSampler):
if sampler_is_batch_sampler:
dataloader.sampler.sampler = sampler
else:
dataloader.batch_sampler.sampler = sampler
if dispatch_batches:
kwargs.pop("generator")
dataloader = DataLoaderDispatcher(
@ -786,6 +907,7 @@ def prepare_data_loader(
split_batches=split_batches,
batch_sampler=new_batch_sampler,
_drop_last=dataloader.drop_last,
slice_fn=slice_fn_for_dispatch,
**kwargs,
)
elif sampler_is_batch_sampler:
@ -795,6 +917,7 @@ def prepare_data_loader(
sampler=new_batch_sampler,
batch_size=dataloader.batch_size,
rng_types=rng_types,
_drop_last=dataloader.drop_last,
synchronized_generator=synchronized_generator,
**kwargs,
)
@ -805,6 +928,7 @@ def prepare_data_loader(
batch_sampler=new_batch_sampler,
rng_types=rng_types,
synchronized_generator=synchronized_generator,
_drop_last=dataloader.drop_last,
**kwargs,
)
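The `slice_fn_for_dispatch` hook threaded through the changes above receives the batch, the slice to take, and `process_index`/`num_processes` keyword arguments. A hedged usage sketch, where the logging function is purely illustrative and `slice_tensors` is assumed to be importable from `accelerate.utils` as the docstring suggests:
```python
from accelerate.utils import slice_tensors

def logging_slice_fn(batch, tensor_slice, process_index=None, num_processes=None):
    # Delegate to the default slicing, just report which rank takes which slice
    print(f"rank {process_index}/{num_processes} takes {tensor_slice}")
    return slice_tensors(batch, tensor_slice)

# dataloader = prepare_data_loader(dataloader, dispatch_batches=True,
#                                  slice_fn_for_dispatch=logging_slice_fn)
```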

View File

@ -155,17 +155,17 @@ def add_hook_to_module(module: nn.Module, hook: ModelHook, append: bool = False)
module = hook.init_hook(module)
module._hf_hook = hook
@functools.wraps(old_forward)
def new_forward(*args, **kwargs):
def new_forward(module, *args, **kwargs):
args, kwargs = module._hf_hook.pre_forward(module, *args, **kwargs)
if module._hf_hook.no_grad:
with torch.no_grad():
output = old_forward(*args, **kwargs)
output = module._old_forward(*args, **kwargs)
else:
output = old_forward(*args, **kwargs)
output = module._old_forward(*args, **kwargs)
return module._hf_hook.post_forward(module, output)
module.forward = new_forward
module.forward = functools.update_wrapper(functools.partial(new_forward, module), old_forward)
return module
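The rebinding above replaces a closure over the old forward with an explicit `module` argument plus `functools.update_wrapper`, which preserves the wrapped forward's metadata. A self-contained sketch of the same binding pattern, with a toy class and a print standing in for the library's hook machinery:
```python
import functools

class Toy:
    def forward(self, x):
        return x + 1

toy = Toy()
old_forward = toy.forward

def new_forward(module, *args, **kwargs):
    # Stand-in for module._hf_hook.pre_forward / post_forward
    print(f"running hooked forward on {type(module).__name__}")
    return old_forward(*args, **kwargs)

# Bind the module explicitly and copy the original forward's metadata
toy.forward = functools.update_wrapper(functools.partial(new_forward, toy), old_forward)
assert toy.forward(1) == 2
```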
@ -242,7 +242,7 @@ class AlignDevicesHook(ModelHook):
def __repr__(self):
return (
f"AlignDeviceHook(execution_device={self.execution_device}, offload={self.offload}, "
f"AlignDevicesHook(execution_device={self.execution_device}, offload={self.offload}, "
f"io_same_device={self.io_same_device}, offload_buffers={self.offload_buffers}, "
f"place_submodules={self.place_submodules}, skip_keys={repr(self.skip_keys)})"
)
@ -311,6 +311,7 @@ class AlignDevicesHook(ModelHook):
for name, device in self.original_devices.items():
if device != torch.device("meta"):
set_module_tensor_to_device(module, name, device, value=self.weights_map.get(name, None))
return module
def attach_execution_device_hook(

View File

@ -18,20 +18,37 @@ import tempfile
import torch
from .state import AcceleratorState
from .state import AcceleratorState, PartialState
from .utils import PrecisionType, PrepareForLaunch, is_mps_available, patch_environment
def notebook_launcher(function, args=(), num_processes=None, mixed_precision="no", use_port="29500"):
def test_launch():
"Verify a `PartialState` can be initialized."
_ = PartialState()
def notebook_launcher(
function,
args=(),
num_processes=None,
mixed_precision="no",
use_port="29500",
master_addr="127.0.0.1",
node_rank=0,
num_nodes=1,
):
"""
Launches a training function, using several processes if it's possible in the current environment (TPU with
multiple cores for instance).
Launches a training function, using several processes or multiple nodes if it's possible in the current environment
(TPU with multiple cores for instance).
<Tip warning={true}>
To use this function, absolutely no calls to a CUDA device can be made in the notebook session before calling it. If
any have been made, you will need to restart the notebook and make sure no cells use any CUDA capability.
Setting `ACCELERATE_DEBUG_MODE="1"` in your environment will run a test before truly launching to ensure that none
of those calls have been made.
</Tip>
Args:
@ -47,6 +64,12 @@ def notebook_launcher(function, args=(), num_processes=None, mixed_precision="no
If `fp16` or `bf16`, will use mixed precision training on multi-GPU.
use_port (`str`, *optional*, defaults to `"29500"`):
The port to use to communicate between processes when launching a multi-GPU training.
master_addr (`str`, *optional*, defaults to `"127.0.0.1"`):
The address to use for communication between processes.
node_rank (`int`, *optional*, defaults to 0):
The rank of the current node.
num_nodes (`int`, *optional*, defaults to 1):
The number of nodes to use for training.
Example:
@ -106,7 +129,8 @@ def notebook_launcher(function, args=(), num_processes=None, mixed_precision="no
raise ValueError(
"You have to specify the number of GPUs you would like to use, add `num_processes=...` to your call."
)
if node_rank >= num_nodes:
raise ValueError("The node_rank must be less than the number of nodes.")
if num_processes > 1:
# Multi-GPU launch
from torch.multiprocessing import start_processes
@ -118,19 +142,33 @@ def notebook_launcher(function, args=(), num_processes=None, mixed_precision="no
"inside your training function. Restart your notebook and make sure no cells initializes an "
"`Accelerator`."
)
if torch.cuda.is_initialized():
raise ValueError(
"To launch a multi-GPU training from your notebook, you need to avoid running any instruction "
"using `torch.cuda` in any cell. Restart your notebook and make sure no cells use any CUDA "
"function."
)
# torch.distributed will expect a few environment variables to be here. We set the ones common to each
# process here (the other ones will be set by the launcher).
with patch_environment(
world_size=num_processes, master_addr="127.0.01", master_port=use_port, mixed_precision=mixed_precision
nproc=num_processes,
node_rank=node_rank,
world_size=num_nodes * num_processes,
master_addr=master_addr,
master_port=use_port,
mixed_precision=mixed_precision,
):
# First dummy launch
if os.environ.get("ACCELERATE_DEBUG_MODE", "false").lower() == "true":
launcher = PrepareForLaunch(test_launch, distributed_type="MULTI_GPU")
try:
start_processes(launcher, args=(), nprocs=num_processes, start_method="fork")
except ProcessRaisedException as e:
err = "An issue was found when verifying a stable environment for the notebook launcher."
if "Cannot re-initialize CUDA in forked subprocess" in e.args[0]:
raise RuntimeError(
f"{err}"
"This likely stems from an outside import causing issues once the `notebook_launcher()` is called. "
"Please review your imports and test them when running the `notebook_launcher()` to identify "
"which one is problematic and causing CUDA to be initialized."
) from e
else:
raise RuntimeError(f"{err} The following error was raised: {e}") from e
# Now the actual launch
launcher = PrepareForLaunch(function, distributed_type="MULTI_GPU")
print(f"Launching training on {num_processes} GPUs.")
try:
@ -141,8 +179,10 @@ def notebook_launcher(function, args=(), num_processes=None, mixed_precision="no
"CUDA has been initialized before the `notebook_launcher` could create a forked subprocess. "
"This likely stems from an outside import causing issues once the `notebook_launcher()` is called. "
"Please review your imports and test them when running the `notebook_launcher()` to identify "
"which one is problematic."
"which one is problematic and causing CUDA to be initialized."
) from e
else:
raise RuntimeError(f"An issue was found when launching the training: {e}") from e
else:
# No need for a distributed launch otherwise as it's either CPU, GPU or MPS.
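With the new `master_addr`, `node_rank`, and `num_nodes` arguments, a two-node launch from notebooks would look roughly like the sketch below; the address, port, and per-node process count are placeholders rather than values taken from this diff.
```python
from accelerate import notebook_launcher

def training_loop():
    ...  # training code goes here

# Run on the first machine (rank 0); the second machine runs the same call with node_rank=1.
notebook_launcher(
    training_loop,
    num_processes=8,          # processes per node
    num_nodes=2,
    node_rank=0,
    master_addr="10.0.0.1",   # placeholder address of node 0
    use_port="29500",
)
```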

View File

@ -85,9 +85,11 @@ def get_logger(name: str, log_level: str = None):
```python
>>> from accelerate.logging import get_logger
>>> from accelerate import Accelerator
>>> logger = get_logger(__name__)
>>> accelerator = Accelerator()
>>> logger.info("My log", main_process_only=False)
>>> logger.debug("My log", main_process_only=True)
@ -95,9 +97,6 @@ def get_logger(name: str, log_level: str = None):
>>> logger.info("My log")
>>> logger.debug("My second log")
>>> from accelerate import Accelerator
>>> accelerator = Accelerator()
>>> array = ["a", "b", "c", "d"]
>>> letter_at_rank = array[accelerator.process_index]
>>> logger.info(letter_at_rank, in_order=True)

View File

@ -59,7 +59,11 @@ class AcceleratedOptimizer(torch.optim.Optimizer):
self.gradient_state = GradientState()
self.device_placement = device_placement
self._is_overflow = False
self._last_scale = None
if self.scaler is not None:
self._accelerate_step_called = False
self._optimizer_original_step_method = self.optimizer.step
self._optimizer_patched_step_method = patch_optimizer_step(self, self.optimizer.step)
# Handle device placement
if device_placement:
@ -123,21 +127,20 @@ class AcceleratedOptimizer(torch.optim.Optimizer):
optimizer_args = {"closure": closure} if closure is not None else {}
xm.optimizer_step(self.optimizer, optimizer_args=optimizer_args)
elif self.scaler is not None:
new_scale = False
if self._last_scale is None:
# `get_scale` is an async operation requiring full synchronization
# on CPU and GPUs before finishing. As a result, we store away
# the prior one to reduce the call overhead
self._last_scale = self.scaler.get_scale()
new_scale = True
self.optimizer.step = self._optimizer_patched_step_method
self.scaler.step(self.optimizer, closure)
self.scaler.update()
scale_after = self.scaler.get_scale()
if not new_scale:
# If we reduced the loss scale, it means the optimizer step was skipped because of gradient overflow.
self._is_overflow = scale_after < self._last_scale
self._last_scale = scale_after
if not self._accelerate_step_called:
# If the optimizer step was skipped, gradient overflow was detected.
self._is_overflow = True
else:
self._is_overflow = False
# Reset the step method to the original one
self.optimizer.step = self._optimizer_original_step_method
# Reset the indicator
self._accelerate_step_called = False
else:
self.optimizer.step(closure)
@ -161,7 +164,24 @@ class AcceleratedOptimizer(torch.optim.Optimizer):
return self._is_overflow
def __getstate__(self):
return self.__dict__.copy()
_ignored_keys = [
"_accelerate_step_called",
"_optimizer_original_step_method",
"_optimizer_patched_step_method",
]
return {k: v for k, v in self.__dict__.items() if k not in _ignored_keys}
def __setstate__(self, state):
self.__dict__.update(state)
if self.scaler is not None:
self._accelerate_step_called = False
self._optimizer_original_step_method = self.optimizer.step
self._optimizer_patched_step_method = patch_optimizer_step(self, self.optimizer.step)
def patch_optimizer_step(accelerated_optimizer: AcceleratedOptimizer, method):
def patched_step(*args, **kwargs):
accelerated_optimizer._accelerate_step_called = True
return method(*args, **kwargs)
return patched_step
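A toy illustration of the flag-based overflow detection introduced above, without a real `GradScaler`: the stand-in `scaler_like_step` only calls the (patched) optimizer step when gradients are finite, so the flag staying `False` means the step was skipped. None of these names come from the library.
```python
class ToyOptimizer:
    def __init__(self):
        self.step_called = False
    def step(self):
        pass

def patch_step(opt, method):
    def patched(*args, **kwargs):
        opt.step_called = True
        return method(*args, **kwargs)
    return patched

def scaler_like_step(opt, found_inf):
    # GradScaler.step only calls the optimizer's step when gradients are finite
    if not found_inf:
        opt.step()

opt = ToyOptimizer()
original_step = opt.step
opt.step = patch_step(opt, original_step)
scaler_like_step(opt, found_inf=True)
print("overflow detected:", not opt.step_called)  # True: the step was skipped
opt.step = original_step  # restore, as the real code does after every step
```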

View File

@ -48,6 +48,10 @@ if is_tpu_available(check_device=False):
import torch_xla.core.xla_model as xm
if is_npu_available(check_device=False):
import torch_npu # noqa: F401
def is_initialized() -> bool:
"""
Checks if the `AcceleratorState` has been initialized from `Accelerator`. Same as `AcceleratorState.initialized`,
@ -106,12 +110,13 @@ class PartialState:
in use.
- **local_process_index** (`int`) -- The index of the current process on the current server.
- **mixed_precision** (`str`) -- Whether or not the current script will use mixed precision, and if so the type
of mixed precision being performed.
of mixed precision being performed. (Choose from 'no', 'fp16', 'bf16', or 'fp8'.)
- **num_processes** (`int`) -- The number of processes currently launched in parallel.
- **process_index** (`int`) -- The index of the current process.
- **is_last_process** (`bool`) -- Whether or not the current process is the last one.
- **is_main_process** (`bool`) -- Whether or not the current process is the main one.
- **is_local_main_process** (`bool`) -- Whether or not the current process is the main one on the local node.
- **debug** (`bool`) -- Whether or not the current script is being run in debug mode.
"""
_shared_state = SharedDict()
@ -123,6 +128,7 @@ class PartialState:
self.backend = None
env_device = os.environ.get("ACCELERATE_TORCH_DEVICE", None)
self.device = torch.device(env_device) if env_device is not None else None
self.debug = parse_flag_from_env("ACCELERATE_DEBUG_MODE")
use_sagemaker_dp = kwargs.pop("_use_sagemaker_dp", None)
if use_sagemaker_dp is None:
use_sagemaker_dp = (
@ -166,7 +172,11 @@ class PartialState:
# DeepSpeed always uses nccl
kwargs.pop("backend", None)
self.backend = "nccl"
if is_xpu_available and is_ccl_available():
# Set DeepSpeed backend to ccl for xpu
self.backend = "ccl"
else:
self.backend = "nccl"
dist.init_distributed(dist_backend=self.backend, auto_mpi_discovery=False, **kwargs)
self.num_processes = torch.distributed.get_world_size()
@ -418,23 +428,24 @@ class PartialState:
if self.num_processes == 1:
yield inputs
return
length = len(inputs)
# Nested dictionary of any types
if isinstance(inputs, dict):
length = len(inputs[list(inputs.keys())[0]])
if not all(len(v) == length for v in inputs.values()):
raise ValueError("All values in the dictionary must have the same length")
num_samples_per_process = math.ceil(len(inputs) / self.num_processes)
num_samples_per_process = math.ceil(length / self.num_processes)
start_index = self.process_index * num_samples_per_process
end_index = start_index + num_samples_per_process
if (len(inputs) % self.num_processes != 0) and (self.process_index == self.num_processes - 1):
if isinstance(inputs, (list, tuple, torch.Tensor)):
end_index = len(inputs)
elif isinstance(inputs, dict):
end_index = len(inputs[list(inputs.keys())[0]])
end_index = length
def _split_values(inputs, start_index, end_index):
if isinstance(inputs, (list, tuple, torch.Tensor)):
result = inputs[start_index:end_index]
if start_index >= len(inputs):
result = inputs[-1:]
else:
result = inputs[start_index:end_index]
if apply_padding:
if isinstance(result, torch.Tensor):
from accelerate.utils import pad_across_processes, send_to_device
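A hedged usage sketch of the padding path above, assuming it is run under 2 processes (the item values are arbitrary): with 5 items, the last rank repeats its final element so both ranks receive 3.
```python
from accelerate import PartialState

state = PartialState()
with state.split_between_processes(["a", "b", "c", "d", "e"], apply_padding=True) as chunk:
    print(state.process_index, chunk)
# Expected with 2 processes: rank 0 -> ['a', 'b', 'c'], rank 1 -> ['d', 'e', 'e']
```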
@ -692,12 +703,13 @@ class AcceleratorState:
- **initialized** (`bool`) -- Whether or not the `AcceleratorState` has been initialized from `Accelerator`.
- **local_process_index** (`int`) -- The index of the current process on the current server.
- **mixed_precision** (`str`) -- Whether or not the current script will use mixed precision, and if so the type
of mixed precision being performed.
of mixed precision being performed. (Choose from 'no', 'fp16', 'bf16', or 'fp8'.)
- **num_processes** (`int`) -- The number of processes currently launched in parallel.
- **process_index** (`int`) -- The index of the current process.
- **is_last_process** (`bool`) -- Whether or not the current process is the last one.
- **is_main_process** (`bool`) -- Whether or not the current process is the main one.
- **is_local_main_process** (`bool`) -- Whether or not the current process is the main one on the local node.
- **debug** (`bool`) -- Whether or not the current script is being run in debug mode.
"""
_shared_state = SharedDict()
@ -722,6 +734,7 @@ class AcceleratorState:
self._check_initialized(mixed_precision, cpu)
if not self.initialized:
self.deepspeed_plugin = None
self.use_ipex = None
mixed_precision = (
parse_choice_from_env("ACCELERATE_MIXED_PRECISION", "no")
if mixed_precision is None
@ -759,12 +772,25 @@ class AcceleratorState:
self.distributed_type = DistributedType.MEGATRON_LM
megatron_lm_plugin.set_mixed_precision(self._mixed_precision)
self.megatron_lm_plugin = megatron_lm_plugin
elif self.distributed_type == DistributedType.MULTI_NPU:
if os.environ.get("ACCELERATE_USE_FSDP", "false") == "true":
self.distributed_type = DistributedType.FSDP
if self._mixed_precision != "no":
fsdp_plugin.set_mixed_precision(self._mixed_precision)
self.fsdp_plugin = fsdp_plugin
elif self.distributed_type in [DistributedType.MULTI_CPU, DistributedType.MULTI_XPU, DistributedType.NO]:
if is_ipex_available():
# Check if the user has explicitly disabled IPEX
self.use_ipex = parse_flag_from_env("ACCELERATE_USE_IPEX", default=True)
else:
self.use_ipex = False
if self.distributed_type == DistributedType.MULTI_XPU:
if os.environ.get("ACCELERATE_USE_FSDP", "false") == "true":
self.distributed_type = DistributedType.FSDP
if self._mixed_precision != "no":
fsdp_plugin.set_mixed_precision(self._mixed_precision)
self.fsdp_plugin = fsdp_plugin
if (
self.dynamo_plugin.backend != DynamoBackend.NO
and self._mixed_precision == "no"

View File

@ -1,5 +1,6 @@
from .testing import (
are_the_same_tensors,
assert_exception,
execute_subprocess_async,
require_bnb,
require_cpu,

View File

@ -12,6 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import math
import os
from copy import deepcopy
@ -21,10 +22,11 @@ import evaluate
import torch
import transformers
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data import DataLoader, IterableDataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from accelerate import Accelerator
from accelerate.data_loader import DataLoaderDispatcher
from accelerate.test_utils import RegressionDataset, RegressionModel
from accelerate.utils import is_tpu_available, set_seed
@ -32,6 +34,15 @@ from accelerate.utils import is_tpu_available, set_seed
os.environ["TRANSFORMERS_NO_ADVISORY_WARNINGS"] = "true"
class ListHandler(logging.Handler):
def __init__(self, *args, **kwargs):
super(ListHandler, self).__init__(*args, **kwargs)
self.logs = []
def emit(self, record):
self.logs.append(record)
def get_basic_setup(accelerator, num_samples=82, batch_size=16):
"Returns everything needed to perform basic training"
set_seed(42)
@ -138,6 +149,95 @@ def test_mrpc(dispatch_batches: bool = False, split_batches: bool = False):
), f"Baseline and Distributed are not the same for key {key}:\n\tBaseline: {baseline[key]}\n\tDistributed: {distributed[key]}\n"
def test_gather_for_metrics_with_non_tensor_objects_iterable_dataset():
class DummyIterableDataset(IterableDataset):
def __init__(self, data):
self.data = data
def __len__(self):
return len(self.data)
def __iter__(self):
for element in self.data:
yield element
iterable_dataset = DummyIterableDataset([n for n in range(30)])
dataloader = DataLoader(iterable_dataset, batch_size=4)
accelerator = Accelerator()
prepared_dataloader = accelerator.prepare(dataloader)
if accelerator.is_main_process:
logger = logging.root.manager.loggerDict["accelerate.accelerator"]
list_handler = ListHandler()
logger.addHandler(list_handler)
batches_for_metrics = []
for batch in prepared_dataloader:
batches_for_metrics.append(accelerator.gather_for_metrics(batch))
assert torch.cat(batches_for_metrics).size(0) == 30
if accelerator.is_main_process:
assert len(list_handler.logs) == 0
logger.removeHandler(list_handler)
def test_gather_for_metrics_with_iterable_dataset():
class DummyIterableDataset(IterableDataset):
def __init__(self, data):
self.data = data
def __len__(self):
return len(self.data)
def __iter__(self):
for element in self.data:
yield element
iterable_dataset = DummyIterableDataset(torch.as_tensor(range(30)))
dataloader = DataLoader(iterable_dataset, batch_size=4)
accelerator = Accelerator()
prepared_dataloader = accelerator.prepare(dataloader)
assert isinstance(prepared_dataloader, DataLoaderDispatcher)
if accelerator.is_main_process:
logger = logging.root.manager.loggerDict["accelerate.accelerator"]
list_handler = ListHandler()
logger.addHandler(list_handler)
batches_for_metrics = []
for batch in prepared_dataloader:
batches_for_metrics.append(accelerator.gather_for_metrics(batch))
assert torch.cat(batches_for_metrics).size(0) == 30
if accelerator.is_main_process:
assert len(list_handler.logs) == 0
logger.removeHandler(list_handler)
def test_gather_for_metrics_drop_last():
accelerator = Accelerator()
per_device_batch_size = 5
num_items = (10 * accelerator.num_processes) + 1
dataloader = DataLoader(range(num_items), batch_size=per_device_batch_size, drop_last=True)
dataloader = accelerator.prepare(dataloader)
iterator = iter(dataloader)
next(iterator) # Skip first batch tensor([0, 1, 2, 3, 4], device='cuda:0')
batch = next(iterator)
gathered_items = accelerator.gather_for_metrics(batch)
# Should return a full set of complete batches from each GPU
num_expected_items = per_device_batch_size * accelerator.num_processes
assert gathered_items.size(0) == (
num_expected_items
), f"Expected number of items: {num_expected_items}, Actual: {gathered_items.size(0)}"
def main():
accelerator = Accelerator(split_batches=False, dispatch_batches=False)
if accelerator.is_local_main_process:
@ -156,6 +256,10 @@ def main():
print(f"With: `split_batches={split_batches}`, `dispatch_batches={dispatch_batches}`")
test_mrpc(dispatch_batches, split_batches)
accelerator.state._reset_state()
print("test_gather_for_metrics_with_iterable_dataset")
test_gather_for_metrics_with_iterable_dataset()
print("test gather_for_metrics_with_non_tensor_objects_iterable_dataset")
test_gather_for_metrics_with_non_tensor_objects_iterable_dataset()
if accelerator.is_local_main_process:
print("**Test torch metrics**")
for split_batches in [True, False]:
@ -170,6 +274,10 @@ def main():
accelerator = Accelerator()
test_torch_metrics(accelerator, 512)
accelerator.state._reset_state()
if accelerator.is_local_main_process:
print("**Test that `drop_last` is taken into account**")
test_gather_for_metrics_drop_last()
accelerator.state._reset_state()
def _mp_fn(index):

View File

@ -24,6 +24,7 @@ from torch.utils.data import DataLoader
from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_linear_schedule_with_warmup, set_seed
from accelerate import Accelerator, DistributedType
from accelerate.utils import is_npu_available, is_xpu_available
from accelerate.utils.deepspeed import DummyOptim, DummyScheduler
@ -40,16 +41,34 @@ def b2mb(x):
class TorchTracemalloc:
def __enter__(self):
gc.collect()
torch.cuda.empty_cache()
torch.cuda.reset_max_memory_allocated() # reset the peak gauge to zero
self.begin = torch.cuda.memory_allocated()
if torch.cuda.is_available():
torch.cuda.empty_cache()
torch.cuda.reset_max_memory_allocated() # reset the peak gauge to zero
self.begin = torch.cuda.memory_allocated()
elif is_npu_available():
torch.npu.empty_cache()
torch.npu.reset_max_memory_allocated() # reset the peak gauge to zero
self.begin = torch.npu.memory_allocated()
elif is_xpu_available():
torch.xpu.empty_cache()
torch.xpu.reset_max_memory_allocated() # reset the peak gauge to zero
self.begin = torch.xpu.memory_allocated()
return self
def __exit__(self, *exc):
gc.collect()
torch.cuda.empty_cache()
self.end = torch.cuda.memory_allocated()
self.peak = torch.cuda.max_memory_allocated()
if torch.cuda.is_available():
torch.cuda.empty_cache()
self.end = torch.cuda.memory_allocated()
self.peak = torch.cuda.max_memory_allocated()
elif is_npu_available():
torch.npu.empty_cache()
self.end = torch.npu.memory_allocated()
self.peak = torch.npu.max_memory_allocated()
elif is_xpu_available():
torch.xpu.empty_cache()
self.end = torch.xpu.memory_allocated()
self.peak = torch.xpu.max_memory_allocated()
self.used = b2mb(self.end - self.begin)
self.peaked = b2mb(self.peak - self.begin)
# print(f"delta used/peak {self.used:4d}/{self.peaked:4d}")
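A short usage sketch for the device-agnostic tracker above, assuming a CUDA (or NPU/XPU) device is present, since the branches above only record allocator statistics on those backends; the workload is just an illustrative allocation.
```python
import torch

with TorchTracemalloc() as tracemalloc:   # class from the hunk above
    x = torch.randn(2048, 2048, device="cuda")  # illustrative CUDA workload
print(f"used {tracemalloc.used} MB, peaked {tracemalloc.peaked} MB")
```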

View File

@ -0,0 +1,17 @@
# Test file to ensure that certain situational setups for notebooks work in general.
import argparse
from accelerate import PartialState, notebook_launcher
parser = argparse.ArgumentParser()
parser.add_argument("--num_processes", type=int, default=1)
args = parser.parse_args()
def function():
print(f"PartialState:\n{PartialState()}")
if __name__ == "__main__":
notebook_launcher(function, num_processes=int(args.num_processes))

View File

@ -17,7 +17,16 @@
import torch
from accelerate import PartialState
from accelerate.utils.operations import broadcast, gather, gather_object, pad_across_processes, reduce
from accelerate.test_utils.testing import assert_exception
from accelerate.utils.dataclasses import DistributedType
from accelerate.utils.operations import (
DistributedOperationException,
broadcast,
gather,
gather_object,
pad_across_processes,
reduce,
)
def create_tensor(state):
@ -37,6 +46,14 @@ def test_gather_object(state):
assert gathered_obj == list(range(state.num_processes)), f"{gathered_obj} != {list(range(state.num_processes))}"
def test_gather_non_contiguous(state):
# Create a non-contiguous tensor
tensor = torch.arange(12).view(4, 3).t().to(state.device)
assert not tensor.is_contiguous()
# Shouldn't error out
_ = gather(tensor)
def test_broadcast(state):
tensor = create_tensor(state)
broadcasted_tensor = broadcast(tensor)
@ -77,6 +94,41 @@ def test_reduce_mean(state):
assert torch.allclose(reduced_tensor, truth_tensor), f"{reduced_tensor} != {truth_tensor}"
def test_op_checker(state):
# Must be in a distributed state
if state.distributed_type == DistributedType.NO:
return
state.debug = True
# `pad_across_processes`
if state.process_index == 0:
data = {"tensor": torch.tensor([[0.0, 1, 2, 3, 4]]).to(state.device)}
else:
data = {"tensor": torch.tensor([[[0.0, 1, 2, 3, 4, 5]]]).to(state.device)}
with assert_exception(DistributedOperationException):
pad_across_processes(data, dim=0)
# `reduce`
if state.process_index == 0:
data = {"tensor": torch.tensor([[0.0, 1, 2, 3, 4]]).to(state.device)}
else:
data = {"tensor": torch.tensor([[[0.0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]]).to(state.device)}
with assert_exception(DistributedOperationException):
reduce(data)
# `broadcast`
if state.process_index == 0:
data = {"tensor": torch.tensor([[0.0, 1, 2, 3, 4]]).to(state.device)}
else:
data = {"tensor": torch.tensor([[[0.0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]]).to(state.device)}
with assert_exception(DistributedOperationException):
broadcast(data)
state.debug = False
def _mp_fn(index):
# For xla_spawn (TPUs)
main()
@ -89,6 +141,8 @@ def main():
test_gather(state)
state.print("testing gather_object")
test_gather_object(state)
state.print("testing gather non-contiguous")
test_gather_non_contiguous(state)
state.print("testing broadcast")
test_broadcast(state)
state.print("testing pad_across_processes")
@ -97,6 +151,8 @@ def main():
test_reduce_sum(state)
state.print("testing reduce_mean")
test_reduce_mean(state)
state.print("testing op_checker")
test_op_checker(state)
if __name__ == "__main__":

View File

@ -25,7 +25,7 @@ import torch
from torch.utils.data import DataLoader
from accelerate import Accelerator
from accelerate.data_loader import prepare_data_loader
from accelerate.data_loader import SeedableRandomSampler, prepare_data_loader
from accelerate.state import AcceleratorState
from accelerate.test_utils import RegressionDataset, are_the_same_tensors
from accelerate.utils import (
@ -292,7 +292,17 @@ def mock_training(length, batch_size, generator):
set_seed(42)
generator.manual_seed(42)
train_set = RegressionDataset(length=length, seed=42)
train_dl = DataLoader(train_set, batch_size=batch_size, shuffle=True, generator=generator)
if AcceleratorState().num_processes > 1:
# The SeedableRandomSampler is needed during distributed setups
# for full reproducibility across processes with the `DataLoader`
sampler = SeedableRandomSampler(
generator=generator,
data_source=train_set,
num_samples=len(train_set),
)
train_dl = DataLoader(train_set, batch_size=batch_size, sampler=sampler)
else:
train_dl = DataLoader(train_set, batch_size=batch_size, shuffle=True, generator=generator)
model = RegressionModel()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
for epoch in range(3):
@ -480,7 +490,7 @@ def test_split_between_processes_list():
len(results) == 2
), f"Each process did not have two items. Process index: {state.process_index}; Length: {len(results)}"
data = list(range(0, (2 * state.num_processes) + 3))
data = list(range(0, (3 * state.num_processes) - 1))
with state.split_between_processes(data, apply_padding=True) as results:
if state.is_last_process:
# Test that the last process gets the extra item(s)
@ -493,35 +503,38 @@ def test_split_between_processes_list():
def test_split_between_processes_nested_dict():
state = AcceleratorState()
a = [1, 2, 3, 4, 5, 6, 7, 8]
b = ["a", "b", "c", "d", "e", "f", "g", "h"]
c = torch.tensor([1, 2, 3, 4, 5, 6, 7, 8])
if state.num_processes in (1, 2, 4):
data = {"a": [1, 2, 3, 4], "b": ["w", "x", "y", "z"], "c": torch.tensor([0, 1, 2, 3])}
data = {"a": a, "b": b, "c": c}
data_copy = deepcopy(data)
with state.split_between_processes(data) as results:
if state.process_index == 0:
assert results["a"] == data_copy["a"][: 4 // state.num_processes]
assert results["a"] == data_copy["a"][: 8 // state.num_processes]
elif state.num_processes == 2:
assert results["a"] == data_copy["a"][2:]
assert results["a"] == data_copy["a"][4:]
elif state.process_index == 3:
# We return a list each time
assert results["a"] == data_copy["a"][-1:], f'Expected: {data_copy["a"][-1]}, Actual: {results["a"]}'
assert results["a"] == data_copy["a"][-2:], f'Expected: {data_copy["a"][-2]}, Actual: {results["a"]}'
if state.process_index == 0:
assert results["b"] == data_copy["b"][: 4 // state.num_processes]
assert results["b"] == data_copy["b"][: 8 // state.num_processes]
elif state.num_processes == 2:
assert results["b"] == data_copy["b"][2:]
assert results["b"] == data_copy["b"][4:]
elif state.process_index == 3:
assert results["b"] == data_copy["b"][-1:]
assert results["b"] == data_copy["b"][-2:]
if state.process_index == 0:
assert torch.allclose(
results["c"], data_copy["c"][: 4 // state.num_processes]
), f"Did not obtain expected values on process 0, expected `{data['c'][:4//state.num_processes]}`, received: {results['c']}"
results["c"], data_copy["c"][: 8 // state.num_processes]
), f"Did not obtain expected values on process 0, expected `{data['c'][:8 // state.num_processes]}`, received: {results['c']}"
elif state.num_processes == 2:
assert torch.allclose(
results["c"], data_copy["c"][2:]
), f"Did not obtain expected values on process 2, expected `{data['c'][2:]}`, received: {results['c']}"
results["c"], data_copy["c"][4:]
), f"Did not obtain expected values on process 2, expected `{data['c'][4:]}`, received: {results['c']}"
elif state.process_index == 3:
assert torch.allclose(
results["c"], data_copy["c"][3]
), f"Did not obtain expected values on process 4, expected `{data['c'][3]}`, received: {results['c']}"
results["c"], data_copy["c"][-2:]
), f"Did not obtain expected values on process 4, expected `{data['c'][-2:]}`, received: {results['c']}"
state.wait_for_everyone()
@ -538,6 +551,23 @@ def test_split_between_processes_tensor():
state.wait_for_everyone()
def test_trigger():
accelerator = Accelerator()
# should start with being false
assert accelerator.check_trigger() is False
# set a breakpoint on the main process
if accelerator.is_main_process:
accelerator.set_trigger()
# check it's been activated across all processes
# calls `all_reduce` and triggers a sync
assert accelerator.check_trigger() is True
# check it's been reset after the sync
assert accelerator.check_trigger() is False
def main():
accelerator = Accelerator()
state = accelerator.state
@ -587,6 +617,10 @@ def main():
print("\n**Training integration test**")
training_check()
if state.local_process_index == 0:
print("\n**Breakpoint trigger test**")
test_trigger()
if __name__ == "__main__":
main()

View File

@ -150,6 +150,60 @@ def test_distributed_sync(accelerator):
ddp_input = ddp_input[torch.randperm(len(ddp_input))]
def test_distributed_sync_multiple_fwd(accelerator):
# Test on distributed setup that context manager behaves properly when used with multiple forwards followed by multiple backwards
model, ddp_model, dataloader = get_training_setup(accelerator)
# Do multiple forwards
losses = []
num_iterations = 3
for iteration in range(num_iterations):
ddp_input, ddp_target = next(iter(dataloader)).values()
# Gather the distributed inputs and targs for the base model
input, target = accelerator.gather((ddp_input, ddp_target))
input, target = input.to(accelerator.device), target.to(accelerator.device)
# Perform our initial ground truth step in non "DDP"
step_model(model, input, target, accelerator)
# Accumulate grads locally
with accelerator.no_sync(ddp_model):
ddp_output = ddp_model(ddp_input)
loss = F.mse_loss(ddp_output, ddp_target.to(ddp_output.device))
losses.append(loss)
# Do multiple backwards and sync only at the last backward
for iteration in range(num_iterations):
loss = losses[iteration]
if iteration < num_iterations - 1:
# Accumulate grads locally
accelerator.backward(loss)
# DDP model and model should only be in sync after last backward
for param, ddp_param in zip(model.parameters(), ddp_model.parameters()):
if not param.requires_grad:
continue
# Grads should not be in sync
assert (
torch.allclose(param.grad, ddp_param.grad) is False
), f"Gradients in sync when they should not be:\nModel grad ({param.grad}) == DDP grad ({ddp_param.grad})"
else:
# Sync grads if last backward
with accelerator.trigger_sync_in_backward(ddp_model):
accelerator.backward(loss)
# DDP model and model should only be in sync after last backward
for param, ddp_param in zip(model.parameters(), ddp_model.parameters()):
if not param.requires_grad:
continue
# Grads should be in sync
assert (
torch.allclose(param.grad, ddp_param.grad) is True
), f"Gradients not in sync when they should be:\nModel grad ({param.grad}) != DDP grad ({ddp_param.grad})"
def test_gradient_accumulation(split_batches=False, dispatch_batches=False):
accelerator = Accelerator(
split_batches=split_batches, dispatch_batches=dispatch_batches, gradient_accumulation_steps=2
@ -266,11 +320,14 @@ def main():
if state.local_process_index == 0:
print("**Test NOOP `no_sync` context manager**")
test_noop_sync(accelerator)
if state.distributed_type in (DistributedType.MULTI_GPU, DistributedType.MULTI_CPU):
if state.distributed_type in (DistributedType.MULTI_GPU, DistributedType.MULTI_NPU, DistributedType.MULTI_CPU):
if state.local_process_index == 0:
print("**Test Distributed `no_sync` context manager**")
test_distributed_sync(accelerator)
if state.distributed_type == DistributedType.MULTI_GPU:
if state.local_process_index == 0:
print("**Test Distributed `no_sync` context manager with multiple forwards**")
test_distributed_sync_multiple_fwd(accelerator)
if state.distributed_type in (DistributedType.MULTI_GPU, DistributedType.MULTI_NPU):
for split_batch in [True, False]:
for dispatch_batches in [True, False]:
if state.local_process_index == 0:
@ -288,7 +345,7 @@ def main():
"`split_batches=False`, `dispatch_batches=False`**",
)
test_gradient_accumulation_with_opt_and_scheduler()
if state.distributed_type == DistributedType.MULTI_GPU:
if state.distributed_type in (DistributedType.MULTI_GPU, DistributedType.MULTI_NPU):
for split_batch in [True, False]:
for dispatch_batches in [True, False]:
if not split_batch and not dispatch_batches:

View File

@ -19,7 +19,7 @@ import subprocess
import sys
import tempfile
import unittest
from distutils.util import strtobool
from contextlib import contextmanager
from functools import partial
from pathlib import Path
from typing import List, Union
@ -37,11 +37,13 @@ from ..utils import (
is_mps_available,
is_safetensors_available,
is_tensorboard_available,
is_timm_available,
is_torch_version,
is_tpu_available,
is_transformers_available,
is_wandb_available,
is_xpu_available,
str_to_bool,
)
@ -54,7 +56,7 @@ def parse_flag_from_env(key, default=False):
else:
# KEY is set, convert it to True or False.
try:
_value = strtobool(value)
_value = str_to_bool(value)
except ValueError:
# More values are supported, but let's keep the message simple.
raise ValueError(f"If set, {key} must be yes or no.")
@ -115,6 +117,20 @@ def require_huggingface_suite(test_case):
)(test_case)
def require_transformers(test_case):
"""
Decorator marking a test that requires transformers. These tests are skipped when they are not.
"""
return unittest.skipUnless(is_transformers_available(), "test requires the transformers library")(test_case)
def require_timm(test_case):
"""
Decorator marking a test that requires timm. These tests are skipped when they are not.
"""
return unittest.skipUnless(is_timm_available(), "test requires the timm library")(test_case)
def require_bnb(test_case):
"""
Decorator marking a test that requires bitsandbytes. These tests are skipped when they are not.
@ -415,3 +431,22 @@ def run_command(command: List[str], return_stdout=False):
raise SubprocessCallException(
f"Command `{' '.join(command)}` failed with the following error:\n\n{e.output.decode()}"
) from e
@contextmanager
def assert_exception(exception_class: Exception, msg: str = None) -> bool:
"""
Context manager to assert that the right `Exception` class was raised.
If `msg` is provided, will check that the message is contained in the raised exception.
"""
was_ran = False
try:
yield
was_ran = True
except Exception as e:
assert isinstance(e, exception_class), f"Expected exception of type {exception_class} but got {type(e)}"
if msg is not None:
assert msg in str(e), f"Expected message '{msg}' to be in exception but got '{str(e)}'"
if was_ran:
raise AssertionError(f"Expected exception of type {exception_class} but ran without issue.")
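A minimal usage sketch for `assert_exception` above: the context passes when the named exception (optionally containing `msg`) is raised inside the block, and raises `AssertionError` when nothing is raised.
```python
from accelerate.test_utils import assert_exception  # exported earlier in this diff

with assert_exception(ZeroDivisionError, msg="division by zero"):
    1 / 0  # expected error is raised, so the context exits cleanly
```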

View File

@ -39,31 +39,18 @@ from .utils import (
_available_trackers = []
if is_tensorboard_available():
try:
from torch.utils import tensorboard
except ModuleNotFoundError:
import tensorboardX as tensorboard
_available_trackers.append(LoggerType.TENSORBOARD)
if is_wandb_available():
import wandb
_available_trackers.append(LoggerType.WANDB)
if is_comet_ml_available():
from comet_ml import Experiment
_available_trackers.append(LoggerType.COMETML)
if is_aim_available():
from aim import Run
_available_trackers.append(LoggerType.AIM)
if is_mlflow_available():
import mlflow
_available_trackers.append(LoggerType.MLFLOW)
logger = get_logger(__name__)
@ -185,6 +172,10 @@ class TensorBoardTracker(GeneralTracker):
@on_main_process
def __init__(self, run_name: str, logging_dir: Union[str, os.PathLike], **kwargs):
try:
from torch.utils import tensorboard
except ModuleNotFoundError:
import tensorboardX as tensorboard
super().__init__()
self.run_name = run_name
self.logging_dir = os.path.join(logging_dir, run_name)
@ -293,6 +284,9 @@ class WandBTracker(GeneralTracker):
def __init__(self, run_name: str, **kwargs):
super().__init__()
self.run_name = run_name
import wandb
self.run = wandb.init(project=self.run_name, **kwargs)
logger.debug(f"Initialized WandB project {self.run_name}")
logger.debug(
@ -313,7 +307,9 @@ class WandBTracker(GeneralTracker):
Values to be stored as initial hyperparameters as key-value pairs. The values need to have type `bool`,
`str`, `float`, `int`, or `None`.
"""
wandb.config.update(values)
import wandb
wandb.config.update(values, allow_val_change=True)
logger.debug("Stored initial configuration hyperparameters to WandB")
@on_main_process
@ -346,6 +342,8 @@ class WandBTracker(GeneralTracker):
kwargs:
Additional key word arguments passed along to the `wandb.log` method.
"""
import wandb
for k, v in values.items():
self.log({k: [wandb.Image(image) for image in v]}, step=step, **kwargs)
logger.debug("Successfully logged images to WandB")
@ -376,6 +374,7 @@ class WandBTracker(GeneralTracker):
step (`int`, *optional*):
The run step. If included, the log will be affiliated with this step.
"""
import wandb
values = {table_name: wandb.Table(columns=columns, data=data, dataframe=dataframe)}
self.log(values, step=step, **kwargs)
@ -409,6 +408,9 @@ class CometMLTracker(GeneralTracker):
def __init__(self, run_name: str, **kwargs):
super().__init__()
self.run_name = run_name
from comet_ml import Experiment
self.writer = Experiment(project_name=run_name, **kwargs)
logger.debug(f"Initialized CometML project {self.run_name}")
logger.debug(
@ -484,6 +486,9 @@ class AimTracker(GeneralTracker):
@on_main_process
def __init__(self, run_name: str, logging_dir: Optional[Union[str, os.PathLike]] = ".", **kwargs):
self.run_name = run_name
from aim import Run
self.writer = Run(repo=logging_dir, **kwargs)
self.writer.name = self.run_name
logger.debug(f"Initialized Aim project {self.run_name}")
@ -581,6 +586,8 @@ class MLflowTracker(GeneralTracker):
nested_run = os.getenv("MLFLOW_NESTED_RUN", nested_run)
import mlflow
exps = mlflow.search_experiments(filter_string=f"name = '{experiment_name}'")
if len(exps) > 0:
if len(exps) > 1:
@ -620,6 +627,7 @@ class MLflowTracker(GeneralTracker):
values (`dict`):
Values to be stored as initial hyperparameters as key-value pairs.
"""
import mlflow
for name, value in list(values.items()):
# internally, all values are converted to str in MLflow
@ -658,6 +666,7 @@ class MLflowTracker(GeneralTracker):
f'MLflowTracker is attempting to log a value of "{v}" of type {type(v)} for key "{k}" as a metric. '
"MLflow's log_metric() only accepts float and int types so we dropped this attribute."
)
import mlflow
mlflow.log_metrics(metrics, step=step)
logger.debug("Successfully logged to mlflow")
@ -667,6 +676,8 @@ class MLflowTracker(GeneralTracker):
"""
End the active MLflow run.
"""
import mlflow
mlflow.end_run()
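The hunks above all follow the same deferred-import pattern, sketched here in isolation with a standard-library stand-in so it stays runnable without any tracking backend installed:
```python
class LazyTracker:
    def store_init_configuration(self, values):
        import json  # stand-in for an optional backend such as wandb or mlflow
        return json.dumps(values)

print(LazyTracker().store_init_configuration({"lr": 1e-3}))
```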

View File

@ -4,13 +4,16 @@ from .constants import (
RNG_STATE_NAME,
SAFE_WEIGHTS_INDEX_NAME,
SAFE_WEIGHTS_NAME,
SAMPLER_NAME,
SCALER_NAME,
SCHEDULER_NAME,
TORCH_DISTRIBUTED_OPERATION_TYPES,
TORCH_LAUNCH_PARAMS,
WEIGHTS_INDEX_NAME,
WEIGHTS_NAME,
)
from .dataclasses import (
AutocastKwargs,
BnbQuantizationConfig,
ComputeEnvironment,
CustomDtype,
@ -33,7 +36,7 @@ from .dataclasses import (
TensorInformation,
TorchDynamoPlugin,
)
from .environment import get_int_from_env, parse_choice_from_env, parse_flag_from_env
from .environment import get_int_from_env, parse_choice_from_env, parse_flag_from_env, str_to_bool
from .imports import (
get_ccl_version,
is_4bit_bnb_available,
@ -44,6 +47,7 @@ from .imports import (
is_boto3_available,
is_ccl_available,
is_comet_ml_available,
is_cuda_available,
is_datasets_available,
is_deepspeed_available,
is_fp8_available,
@ -56,12 +60,14 @@ from .imports import (
is_safetensors_available,
is_sagemaker_available,
is_tensorboard_available,
is_timm_available,
is_tpu_available,
is_transformers_available,
is_wandb_available,
is_xpu_available,
)
from .modeling import (
calculate_maximum_sizes,
check_device_map,
check_tied_parameters_in_config,
check_tied_parameters_on_same_device,
@ -159,6 +165,9 @@ from .megatron_lm import prepare_optimizer as megatron_lm_prepare_optimizer
from .megatron_lm import prepare_scheduler as megatron_lm_prepare_scheduler
from .memory import find_executable_batch_size, release_memory
from .other import (
check_os_kernel,
clear_environment,
convert_bytes,
extract_model_from_parallel,
get_pretty_name,
is_port_in_use,

View File

@ -15,6 +15,7 @@
import logging
import os
from copy import deepcopy
from typing import Dict, List, Optional, Union
import torch
@ -23,7 +24,6 @@ import torch.nn as nn
from accelerate.utils.imports import (
is_4bit_bnb_available,
is_8bit_bnb_available,
is_bnb_available,
)
from ..big_modeling import dispatch_model, init_empty_weights
@ -38,12 +38,6 @@ from .modeling import (
)
if is_bnb_available():
import bitsandbytes as bnb
from copy import deepcopy
logger = logging.getLogger(__name__)
@ -65,7 +59,7 @@ def load_and_quantize_model(
Args:
model (`torch.nn.Module`):
Input model. The model can be already loaded or on the meta device
bnb_config (`BnbQuantizationConfig`):
bnb_quantization_config (`BnbQuantizationConfig`):
The bitsandbytes quantization parameters
weights_location (`str` or `os.PathLike`):
The folder weights_location to load. It can be:
@ -320,6 +314,9 @@ def _replace_with_bnb_layers(
Returns the converted model and a boolean that indicates whether the conversion was successful.
"""
# bitsandbytes will initialize CUDA on import, so it needs to be imported lazily
import bitsandbytes as bnb
has_been_replaced = False
for name, module in model.named_children():
if current_key_name is None:
@ -426,6 +423,9 @@ def get_keys_to_not_convert(model):
def has_4bit_bnb_layers(model):
"""Check if we have `bnb.nn.Linear4bit` or `bnb.nn.Linear8bitLt` layers inside our model"""
# bitsandbytes will initialize CUDA on import, so it needs to be imported lazily
import bitsandbytes as bnb
for m in model.modules():
if isinstance(m, bnb.nn.Linear4bit):
return True

View File

@ -20,6 +20,7 @@ MODEL_NAME = "pytorch_model"
RNG_STATE_NAME = "random_states"
OPTIMIZER_NAME = "optimizer"
SCHEDULER_NAME = "scheduler"
SAMPLER_NAME = "sampler"
WEIGHTS_NAME = "pytorch_model.bin"
WEIGHTS_INDEX_NAME = "pytorch_model.bin.index.json"
SAFE_WEIGHTS_NAME = "model.safetensors"
@ -33,7 +34,7 @@ FSDP_AUTO_WRAP_POLICY = ["TRANSFORMER_BASED_WRAP", "SIZE_BASED_WRAP", "NO_WRAP"]
FSDP_BACKWARD_PREFETCH = ["BACKWARD_PRE", "BACKWARD_POST", "NO_PREFETCH"]
FSDP_STATE_DICT_TYPE = ["FULL_STATE_DICT", "LOCAL_STATE_DICT", "SHARDED_STATE_DICT"]
FSDP_PYTORCH_VERSION = "2.0.1"
DEEPSPEED_MULTINODE_LAUNCHERS = ["pdsh", "standard", "openmpi", "mvapich"]
DEEPSPEED_MULTINODE_LAUNCHERS = ["pdsh", "standard", "openmpi", "mvapich", "mpich"]
TORCH_DYNAMO_MODES = ["default", "reduce-overhead", "max-autotune"]
STR_OPERATION_TO_FUNC = {">": op.gt, ">=": op.ge, "==": op.eq, "!=": op.ne, "<=": op.le, "<": op.lt}
@ -66,4 +67,4 @@ TORCH_LAUNCH_PARAMS = [
]
CUDA_DISTRIBUTED_TYPES = ["DEEPSPEED", "MULTI_GPU", "FSDP", "MEGATRON_LM"]
XPU_DISTRIBUTED_TYPES = ["DEEPSPEED", "MULTI_XPU", "FSDP"]
TORCH_DISTRIBUTED_OPERATION_TYPES = CUDA_DISTRIBUTED_TYPES + ["MULTI_NPU", "MULTI_XPU", "MULTI_CPU"]

View File

@ -26,12 +26,14 @@ import warnings
from contextlib import contextmanager
from dataclasses import dataclass, field
from datetime import timedelta
from distutils.util import strtobool
from typing import Any, Callable, Dict, Iterable, List, Optional, Tuple
import torch
from .constants import FSDP_AUTO_WRAP_POLICY, FSDP_BACKWARD_PREFETCH, FSDP_STATE_DICT_TYPE
from .environment import str_to_bool
from .imports import is_xpu_available
from .versions import compare_versions
class KwargsHandler:
@ -46,11 +48,37 @@ class KwargsHandler:
"""
Returns a dictionary containing the attributes with values different from the default of this class.
"""
default_dict = self.__class__().to_dict()
# import clear_environment here to avoid circular import problem
from .other import clear_environment
with clear_environment():
default_dict = self.__class__().to_dict()
this_dict = self.to_dict()
return {k: v for k, v in this_dict.items() if default_dict[k] != v}
@dataclass
class AutocastKwargs(KwargsHandler):
"""
Use this object in your [`Accelerator`] to customize how `torch.autocast` behaves. Please refer to the
documentation of this [context manager](https://pytorch.org/docs/stable/amp.html#torch.autocast) for more
information on each argument.
Example:
```python
from accelerate import Accelerator
from accelerate.utils import AutocastKwargs
kwargs = AutocastKwargs(cache_enabled=True)
accelerator = Accelerator(kwargs_handlers=[kwargs])
```
"""
enabled: bool = True
cache_enabled: bool = None
@dataclass
class DistributedDataParallelKwargs(KwargsHandler):
"""
@ -173,6 +201,29 @@ class FP8RecipeKwargs(KwargsHandler):
raise ValueError("`amax_compute_algo` must be 'max' or 'most_recent'")
class EnumWithContains(enum.EnumMeta):
"A metaclass that adds the ability to check if `self` contains an item with the `in` operator"
def __contains__(cls, item):
try:
cls(item)
except ValueError:
return False
return True
class BaseEnum(enum.Enum, metaclass=EnumWithContains):
"An enum class that can get the value of an item with `str(Enum.key)`"
def __str__(self):
return self.value
@classmethod
def list(cls):
"Method to list all the possible items in `cls`"
return list(map(str, cls))
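A small sketch of what the relocated helpers above provide, shown on a toy enum rather than one of the library's (it only depends on the `BaseEnum` class defined just above):
```python
class Color(BaseEnum):  # BaseEnum/EnumWithContains as defined above
    RED = "red"
    BLUE = "blue"

assert "red" in Color                  # via EnumWithContains.__contains__
assert Color.list() == ["red", "blue"]
assert str(Color.RED) == "red"
```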
class DistributedType(str, enum.Enum):
"""
Represents a type of distributed environment.
@ -232,7 +283,7 @@ class ComputeEnvironment(str, enum.Enum):
AMAZON_SAGEMAKER = "AMAZON_SAGEMAKER"
class DynamoBackend(str, enum.Enum):
class DynamoBackend(str, BaseEnum):
"""
Represents a dynamo backend (see https://github.com/pytorch/torchdynamo).
@ -246,19 +297,21 @@ class DynamoBackend(str, enum.Enum):
- **INDUCTOR** -- Uses TorchInductor backend with AotAutograd and cudagraphs by leveraging codegened Triton
kernels. [Read
more](https://dev-discuss.pytorch.org/t/torchinductor-a-pytorch-native-compiler-with-define-by-run-ir-and-symbolic-shapes/747)
- **NVFUSER** -- nvFuser with TorchScript. [Read
- **AOT_TS_NVFUSER** -- nvFuser with AotAutograd/TorchScript. [Read
more](https://dev-discuss.pytorch.org/t/tracing-with-primitives-update-1-nvfuser-and-its-primitives/593)
- **AOT_NVFUSER** -- nvFuser with AotAutograd. [Read
- **NVPRIMS_NVFUSER** -- nvFuser with PrimTorch. [Read
more](https://dev-discuss.pytorch.org/t/tracing-with-primitives-update-1-nvfuser-and-its-primitives/593)
- **AOT_CUDAGRAPHS** -- cudagraphs with AotAutograd. [Read
more](https://github.com/pytorch/torchdynamo/pull/757)
- **CUDAGRAPHS** -- cudagraphs with AotAutograd. [Read more](https://github.com/pytorch/torchdynamo/pull/757)
- **OFI** -- Uses Torchscript optimize_for_inference. Inference only. [Read
more](https://pytorch.org/docs/stable/generated/torch.jit.optimize_for_inference.html)
- **FX2TRT** -- Uses Nvidia TensorRT for inference optimizations. Inference only. [Read
more](https://github.com/pytorch/TensorRT/blob/master/docsrc/tutorials/getting_started_with_fx_path.rst)
- **ONNXRT** -- Uses ONNXRT for inference on CPU/GPU. Inference only. [Read more](https://onnxruntime.ai/)
- **TENSORRT** -- Uses ONNXRT to run TensorRT for inference optimizations. [Read
more](https://github.com/onnx/onnx-tensorrt)
- **IPEX** -- Uses IPEX for inference on CPU. Inference only. [Read
more](https://github.com/intel/intel-extension-for-pytorch).
- **TVM** -- Uses Apache TVM for inference optimizations. [Read more](https://tvm.apache.org/)
"""
@ -267,36 +320,15 @@ class DynamoBackend(str, enum.Enum):
EAGER = "EAGER"
AOT_EAGER = "AOT_EAGER"
INDUCTOR = "INDUCTOR"
NVFUSER = "NVFUSER"
AOT_NVFUSER = "AOT_NVFUSER"
AOT_CUDAGRAPHS = "AOT_CUDAGRAPHS"
AOT_TS_NVFUSER = "AOT_TS_NVFUSER"
NVPRIMS_NVFUSER = "NVPRIMS_NVFUSER"
CUDAGRAPHS = "CUDAGRAPHS"
OFI = "OFI"
FX2TRT = "FX2TRT"
ONNXRT = "ONNXRT"
TENSORRT = "TENSORRT"
IPEX = "IPEX"
class EnumWithContains(enum.EnumMeta):
"A metaclass that adds the ability to check if `self` contains an item with the `in` operator"
def __contains__(cls, item):
try:
cls(item)
except ValueError:
return False
return True
class BaseEnum(enum.Enum, metaclass=EnumWithContains):
"An enum class that can get the value of an item with `str(Enum.key)`"
def __str__(self):
return self.value
@classmethod
def list(cls):
"Method to list all the possible items in `cls`"
return list(map(str, cls))
TVM = "TVM"
class LoggerType(BaseEnum):
@ -388,6 +420,16 @@ class ProjectConfiguration:
metadata={"help": "The current save iteration."},
)
save_on_each_node: bool = field(
default=False,
metadata={
"help": (
"When doing multi-node distributed training, whether to save models and checkpoints on each node, or"
" only on the main one"
)
},
)
def set_directories(self, project_dir: str = None):
"Sets `self.project_dir` and `self.logging_dir` to the appropriate values."
self.project_dir = project_dir
@ -445,9 +487,9 @@ class TorchDynamoPlugin(KwargsHandler):
if self.mode is None:
self.mode = os.environ.get(prefix + "MODE", "default")
if self.fullgraph is None:
self.fullgraph = strtobool(os.environ.get(prefix + "USE_FULLGRAPH", "False")) == 1
self.fullgraph = str_to_bool(os.environ.get(prefix + "USE_FULLGRAPH", "False")) == 1
if self.dynamic is None:
self.dynamic = strtobool(os.environ.get(prefix + "USE_DYNAMIC", "False")) == 1
self.dynamic = str_to_bool(os.environ.get(prefix + "USE_DYNAMIC", "False")) == 1
def to_dict(self):
dynamo_config = copy.deepcopy(self.__dict__)
@ -468,7 +510,10 @@ class DeepSpeedPlugin:
},
)
gradient_accumulation_steps: int = field(
default=None, metadata={"help": "Number of steps to accumulate gradients before updating optimizer states"}
default=None,
metadata={
"help": "Number of steps to accumulate gradients before updating optimizer states. If not set, will use the value from the `Accelerator` directly."
},
)
gradient_clipping: float = field(default=None, metadata={"help": "Enable gradient clipping with value"})
zero_stage: int = field(
@ -511,7 +556,8 @@ class DeepSpeedPlugin:
from .deepspeed import HfDeepSpeedConfig
if self.gradient_accumulation_steps is None:
self.gradient_accumulation_steps = int(os.environ.get("ACCELERATE_GRADIENT_ACCUMULATION_STEPS", 1))
gas = os.environ.get("ACCELERATE_GRADIENT_ACCUMULATION_STEPS", "auto")
self.gradient_accumulation_steps = int(gas) if gas.isdigit() else gas
if self.gradient_clipping is None:
gradient_clipping = os.environ.get("ACCELERATE_GRADIENT_CLIPPING", "none")
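To make the new `auto` handling above concrete, here is a minimal standalone sketch of the same environment parsing; the variable names are placeholders, not part of this diff.

```python
import os

# Mirrors the logic above: keep the literal string "auto" unless the env var is a digit string.
gas = os.environ.get("ACCELERATE_GRADIENT_ACCUMULATION_STEPS", "auto")
gradient_accumulation_steps = int(gas) if gas.isdigit() else gas
print(gradient_accumulation_steps)  # "auto" by default, 4 if the env var is set to "4"
```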
@ -604,7 +650,7 @@ class DeepSpeedPlugin:
self.deepspeed_config["steps_per_print"] = float("inf") # this will stop deepspeed from logging @ stdout
if self.zero3_init_flag is None:
self.zero3_init_flag = (
strtobool(os.environ.get("ACCELERATE_DEEPSPEED_ZERO3_INIT", str(self.hf_ds_config.is_zero3()))) == 1
str_to_bool(os.environ.get("ACCELERATE_DEEPSPEED_ZERO3_INIT", str(self.hf_ds_config.is_zero3()))) == 1
)
if self.zero3_init_flag and not self.hf_ds_config.is_zero3():
warnings.warn("DeepSpeed Zero3 Init flag is only applicable for ZeRO Stage 3. Setting it to False.")
@ -696,10 +742,13 @@ class DeepSpeedPlugin:
or ds_config["train_micro_batch_size_per_gpu"] == "auto"
):
ds_config["train_micro_batch_size_per_gpu"] = 1
if ds_config["train_batch_size"] == "auto":
if ds_config.get("train_batch_size", None) == "auto":
del ds_config["train_batch_size"]
from transformers.deepspeed import HfDeepSpeedConfig
if compare_versions("transformers", "<", "4.33"):
from transformers.deepspeed import HfDeepSpeedConfig
else:
from transformers.integrations import HfDeepSpeedConfig
self.dschf = HfDeepSpeedConfig(ds_config) # keep this object alive # noqa
@ -790,10 +839,6 @@ class FullyShardedDataParallelPlugin:
default=None,
metadata={"help": "A list of modules to ignore for FSDP."},
)
ignored_parameters: Optional[Iterable[torch.nn.Parameter]] = field(
default=None,
metadata={"help": "A list of parameters to ignore for FSDP."},
)
state_dict_type: "typing.Any" = field(
default=None,
metadata={
@ -837,7 +882,7 @@ class FullyShardedDataParallelPlugin:
},
)
sync_module_states: bool = field(
default=False,
default=True,
metadata={
"help": "If True, each individually wrapped FSDP unit will broadcast module parameters from rank 0 "
"to ensure they are the same across all ranks after initialization"
@ -850,23 +895,24 @@ class FullyShardedDataParallelPlugin:
"all-gather while executing in the forward pass. only use with Static graphs."
},
)
activation_checkpointing: bool = field(
default=False,
metadata={
"help": "If True, activation checkpointing is a technique to reduce memory usage by clearing activations of "
"certain layers and recomputing them during a backward pass. Effectively, this trades extra computation time "
"for reduced memory usage."
},
)
def __post_init__(self):
from torch.distributed.fsdp.fully_sharded_data_parallel import (
BackwardPrefetch,
CPUOffload,
FullOptimStateDictConfig,
FullStateDictConfig,
ShardingStrategy,
StateDictType,
)
from torch.distributed.fsdp.fully_sharded_data_parallel import BackwardPrefetch, CPUOffload, ShardingStrategy
prefix = "FSDP_"
if self.sharding_strategy is None:
self.sharding_strategy = ShardingStrategy(int(os.environ.get(prefix + "SHARDING_STRATEGY", 1)))
if self.cpu_offload is None:
if strtobool(os.environ.get(prefix + "OFFLOAD_PARAMS", "False")) == 1:
if str_to_bool(os.environ.get(prefix + "OFFLOAD_PARAMS", "False")) == 1:
self.cpu_offload = CPUOffload(offload_params=True)
else:
self.cpu_offload = CPUOffload(offload_params=False)
@ -878,17 +924,15 @@ class FullyShardedDataParallelPlugin:
if self.state_dict_type is None:
state_dict_type_policy = os.environ.get(prefix + "STATE_DICT_TYPE", "FULL_STATE_DICT")
self.state_dict_type = StateDictType(FSDP_STATE_DICT_TYPE.index(state_dict_type_policy) + 1)
self.set_state_dict_type(state_dict_type_policy)
self.use_orig_params = str_to_bool(os.environ.get(prefix + "USE_ORIG_PARAMS", "False")) == 1
self.sync_module_states = str_to_bool(os.environ.get(prefix + "SYNC_MODULE_STATES", "True")) == 1
self.forward_prefetch = str_to_bool(os.environ.get(prefix + "FORWARD_PREFETCH", "False")) == 1
self.activation_checkpointing = str_to_bool(os.environ.get(prefix + "ACTIVATION_CHECKPOINTING", "False")) == 1
if self.state_dict_type == StateDictType.FULL_STATE_DICT:
if self.state_dict_config is None:
self.state_dict_config = FullStateDictConfig(offload_to_cpu=True, rank0_only=True)
if self.optim_state_dict_config is None:
self.optim_state_dict_config = FullOptimStateDictConfig(offload_to_cpu=True, rank0_only=True)
self.use_orig_params = strtobool(os.environ.get(prefix + "USE_ORIG_PARAMS", "False")) == 1
self.sync_module_states = strtobool(os.environ.get(prefix + "SYNC_MODULE_STATES", "False")) == 1
self.forward_prefetch = strtobool(os.environ.get(prefix + "FORWARD_PREFETCH", "False")) == 1
if self.sync_module_states:
device = torch.cuda.current_device() if not is_xpu_available() else torch.xpu.current_device()
self.param_init_fn = lambda x: x.to_empty(device=device, recurse=False)
@staticmethod
def get_module_class_from_name(module, name):
@ -913,10 +957,15 @@ class FullyShardedDataParallelPlugin:
def set_auto_wrap_policy(self, model):
from torch.distributed.fsdp.wrap import size_based_auto_wrap_policy, transformer_auto_wrap_policy
default_transformer_cls_names_to_wrap = (
",".join(model._no_split_modules) if getattr(model, "_no_split_modules", None) is not None else ""
)
if self.auto_wrap_policy is None:
auto_wrap_policy = os.environ.get("FSDP_AUTO_WRAP_POLICY", "NO_WRAP")
if auto_wrap_policy == FSDP_AUTO_WRAP_POLICY[0]:
transformer_cls_names_to_wrap = os.environ.get("FSDP_TRANSFORMER_CLS_TO_WRAP", "").split(",")
transformer_cls_names_to_wrap = os.environ.get(
"FSDP_TRANSFORMER_CLS_TO_WRAP", default_transformer_cls_names_to_wrap
).split(",")
transformer_cls_to_wrap = set()
for layer_class in transformer_cls_names_to_wrap:
transformer_cls = FullyShardedDataParallelPlugin.get_module_class_from_name(model, layer_class)
@ -949,6 +998,21 @@ class FullyShardedDataParallelPlugin:
if self.mixed_precision_policy is None:
self.mixed_precision_policy = MixedPrecision(param_dtype=dtype, reduce_dtype=dtype, buffer_dtype=dtype)
def set_state_dict_type(self, state_dict_type_policy):
from torch.distributed.fsdp.fully_sharded_data_parallel import (
FullOptimStateDictConfig,
FullStateDictConfig,
StateDictType,
)
self.state_dict_type = StateDictType(FSDP_STATE_DICT_TYPE.index(state_dict_type_policy) + 1)
if self.state_dict_type == StateDictType.FULL_STATE_DICT:
if self.state_dict_config is None:
self.state_dict_config = FullStateDictConfig(offload_to_cpu=True, rank0_only=True)
if self.optim_state_dict_config is None:
self.optim_state_dict_config = FullOptimStateDictConfig(offload_to_cpu=True, rank0_only=True)
@dataclass
class MegatronLMPlugin:
@ -1121,13 +1185,13 @@ class MegatronLMPlugin:
if self.gradient_clipping is None:
self.gradient_clipping = float(os.environ.get(prefix + "GRADIENT_CLIPPING", 1.0))
if self.recompute_activation is None:
self.recompute_activation = strtobool(os.environ.get(prefix + "RECOMPUTE_ACTIVATION", "False")) == 1
self.recompute_activation = str_to_bool(os.environ.get(prefix + "RECOMPUTE_ACTIVATION", "False")) == 1
if self.use_distributed_optimizer is None:
self.use_distributed_optimizer = (
strtobool(os.environ.get(prefix + "USE_DISTRIBUTED_OPTIMIZER", "False")) == 1
str_to_bool(os.environ.get(prefix + "USE_DISTRIBUTED_OPTIMIZER", "False")) == 1
)
if self.sequence_parallelism is None:
self.sequence_parallelism = strtobool(os.environ.get(prefix + "SEQUENCE_PARALLELISM", "False")) == 1
self.sequence_parallelism = str_to_bool(os.environ.get(prefix + "SEQUENCE_PARALLELISM", "False")) == 1
if self.pp_degree > 1 or self.use_distributed_optimizer:
self.DDP_impl = "local"

View File

@ -254,16 +254,19 @@ class DummyScheduler:
Args:
optimizer (`torch.optim.optimizer.Optimizer`):
The optimizer to wrap.
total_num_steps (int):
total_num_steps (int, *optional*):
Total number of steps.
warmup_num_steps (int):
warmup_num_steps (int, *optional*):
Number of steps for warmup.
lr_scheduler_callable (callable, *optional*):
A callable function that creates an LR Scheduler. It accepts only one argument `optimizer`.
**kwargs:
Other arguments.
"""
def __init__(self, optimizer, total_num_steps=None, warmup_num_steps=0, **kwargs):
def __init__(self, optimizer, total_num_steps=None, warmup_num_steps=0, lr_scheduler_callable=None, **kwargs):
self.optimizer = optimizer
self.total_num_steps = total_num_steps
self.warmup_num_steps = warmup_num_steps
self.lr_scheduler_callable = lr_scheduler_callable
self.kwargs = kwargs
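A hedged usage sketch of the new `lr_scheduler_callable` argument; the model and learning-rate values are placeholders, and the callable only needs to build a scheduler from the optimizer it is handed (here a plain `torch.optim.lr_scheduler.LinearLR`):

```python
import torch
from accelerate.utils import DummyOptim, DummyScheduler

model = torch.nn.Linear(10, 10)
dummy_optimizer = DummyOptim(params=model.parameters(), lr=5e-5)


def _lr_scheduler_callable(optimizer):
    # Any real scheduler can be returned here; it is built later from the prepared optimizer.
    return torch.optim.lr_scheduler.LinearLR(optimizer)


dummy_lr_scheduler = DummyScheduler(dummy_optimizer, lr_scheduler_callable=_lr_scheduler_callable)
```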

View File

@ -13,7 +13,21 @@
# limitations under the License.
import os
from distutils.util import strtobool
def str_to_bool(value) -> int:
"""
Converts a string representation of truth to `True` (1) or `False` (0).
True values are `y`, `yes`, `t`, `true`, `on`, and `1`; False values are `n`, `no`, `f`, `false`, `off`, and `0`.
"""
value = value.lower()
if value in ("y", "yes", "t", "true", "on", "1"):
return 1
elif value in ("n", "no", "f", "false", "off", "0"):
return 0
else:
raise ValueError(f"invalid truth value {value}")
def get_int_from_env(env_keys, default):
@ -28,7 +42,7 @@ def get_int_from_env(env_keys, default):
def parse_flag_from_env(key, default=False):
"""Returns truthy value for `key` from the env if available else the default."""
value = os.environ.get(key, str(default))
return strtobool(value) == 1 # As its name indicates `strtobool` actually returns an int...
return str_to_bool(value) == 1 # As its name indicates `str_to_bool` actually returns an int...
def parse_choice_from_env(key, default="no"):
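A quick sanity check of the new `str_to_bool` helper, which replaces the deprecated `distutils.util.strtobool`; the import path below assumes the helper lives in `accelerate.utils.environment`, the file being edited here.

```python
from accelerate.utils.environment import str_to_bool  # path assumed from this diff

assert str_to_bool("YES") == 1
assert str_to_bool("off") == 0
try:
    str_to_bool("maybe")
except ValueError as err:
    print(err)  # invalid truth value maybe
```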

View File

@ -17,10 +17,11 @@ import torch
from ..logging import get_logger
from .constants import FSDP_PYTORCH_VERSION, MODEL_NAME, OPTIMIZER_NAME
from .imports import is_torch_distributed_available
from .versions import is_torch_version
if is_torch_version(">=", FSDP_PYTORCH_VERSION):
if is_torch_version(">=", FSDP_PYTORCH_VERSION) and is_torch_distributed_available():
import torch.distributed.checkpoint as dist_cp
from torch.distributed.checkpoint.default_planner import DefaultLoadPlanner, DefaultSavePlanner
from torch.distributed.checkpoint.optimizer import load_sharded_optimizer_state_dict
@ -33,6 +34,14 @@ logger = get_logger(__name__)
def save_fsdp_model(fsdp_plugin, accelerator, model, output_dir, model_index=0):
os.makedirs(output_dir, exist_ok=True)
if fsdp_plugin.state_dict_type == StateDictType.FULL_STATE_DICT:
# FSDP raises an error when a single GPU is used with `offload_to_cpu=True` for FULL_STATE_DICT
# so, only enable it when num_processes>1
is_multi_process = accelerator.num_processes > 1
fsdp_plugin.state_dict_config.offload_to_cpu = is_multi_process
fsdp_plugin.state_dict_config.rank0_only = is_multi_process
with FSDP.state_dict_type(
model, fsdp_plugin.state_dict_type, fsdp_plugin.state_dict_config, fsdp_plugin.optim_state_dict_config
):
@ -70,6 +79,12 @@ def save_fsdp_model(fsdp_plugin, accelerator, model, output_dir, model_index=0):
def load_fsdp_model(fsdp_plugin, accelerator, model, input_dir, model_index=0):
accelerator.wait_for_everyone()
if fsdp_plugin.state_dict_type == StateDictType.FULL_STATE_DICT:
# FSDP raises an error when a single GPU is used with `offload_to_cpu=True` for FULL_STATE_DICT
# so, only enable it when num_processes>1
is_multi_process = accelerator.num_processes > 1
fsdp_plugin.state_dict_config.offload_to_cpu = is_multi_process
fsdp_plugin.state_dict_config.rank0_only = is_multi_process
with FSDP.state_dict_type(
model, fsdp_plugin.state_dict_type, fsdp_plugin.state_dict_config, fsdp_plugin.optim_state_dict_config
):
@ -111,7 +126,8 @@ def load_fsdp_model(fsdp_plugin, accelerator, model, input_dir, model_index=0):
)
state_dict = state_dict["model"]
logger.info(f"Model loaded from {ckpt_dir}")
model.load_state_dict(state_dict)
load_result = model.load_state_dict(state_dict)
return load_result
def save_fsdp_optimizer(fsdp_plugin, accelerator, optimizer, model, output_dir, optimizer_index=0):
@ -172,5 +188,5 @@ def load_fsdp_optimizer(fsdp_plugin, accelerator, optimizer, model, input_dir, o
)
optim_state = optim_state["optimizer"]
logger.info(f"Optimizer loaded from {ckpt_dir}")
flattened_osd = FSDP.optim_state_dict_to_load(optim_state, model, optimizer)
flattened_osd = FSDP.optim_state_dict_to_load(model=model, optim=optimizer, optim_state_dict=optim_state)
optimizer.load_state_dict(flattened_osd)
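The `load_fsdp_model` change above now surfaces the result of `model.load_state_dict`, which in plain PyTorch is a named tuple of `missing_keys` and `unexpected_keys`; a toy, non-FSDP illustration:

```python
import torch

model = torch.nn.Linear(2, 2)
# strict=False returns the keys that did not match instead of raising.
result = model.load_state_dict(model.state_dict(), strict=False)
print(result.missing_keys, result.unexpected_keys)  # [] []
```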

View File

@ -16,14 +16,13 @@ import importlib
import importlib.metadata
import os
import warnings
from distutils.util import strtobool
from functools import lru_cache
import torch
from packaging import version
from packaging.version import parse
from .environment import parse_flag_from_env
from .environment import parse_flag_from_env, str_to_bool
from .versions import compare_versions, is_torch_version
@ -77,19 +76,33 @@ def is_fp8_available():
return _is_package_available("transformer_engine")
def is_cuda_available():
"""
Checks if `cuda` is available via an `nvml`-based check that won't trigger the drivers and leaves CUDA
uninitialized.
"""
try:
os.environ["PYTORCH_NVML_BASED_CUDA_CHECK"] = str(1)
available = torch.cuda.is_available()
finally:
os.environ.pop("PYTORCH_NVML_BASED_CUDA_CHECK", None)
return available
@lru_cache
def is_tpu_available(check_device=True):
"Checks if `torch_xla` is installed and potentially if a TPU is in the environment"
# Due to bugs on the amp series GPUs, we disable torch-xla on them
if torch.cuda.is_available():
if is_cuda_available():
return False
if _tpu_available and check_device:
try:
# Will raise a RuntimeError if no XLA configuration is found
_ = xm.xla_device()
return True
except RuntimeError:
return False
if check_device:
if _tpu_available:
try:
# Will raise a RuntimeError if no XLA configuration is found
_ = xm.xla_device()
return True
except RuntimeError:
return False
return _tpu_available
@ -103,8 +116,6 @@ def is_bf16_available(ignore_tpu=False):
return not ignore_tpu
if torch.cuda.is_available():
return torch.cuda.is_bf16_supported()
if is_npu_available():
return False
return True
@ -129,7 +140,7 @@ def is_bnb_available():
def is_megatron_lm_available():
if strtobool(os.environ.get("ACCELERATE_USE_MEGATRON_LM", "False")) == 1:
if str_to_bool(os.environ.get("ACCELERATE_USE_MEGATRON_LM", "False")) == 1:
package_exists = importlib.util.find_spec("megatron") is not None
if package_exists:
try:
@ -152,8 +163,16 @@ def is_datasets_available():
return _is_package_available("datasets")
def is_timm_available():
return _is_package_available("timm")
def is_aim_available():
return _is_package_available("aim")
package_exists = _is_package_available("aim")
if package_exists:
aim_version = version.parse(importlib.metadata.version("aim"))
return compare_versions(aim_version, "<", "4.0.0")
return False
def is_tensorboard_available():
@ -192,7 +211,16 @@ def is_tqdm_available():
def is_mlflow_available():
return _is_package_available("mlflow")
if _is_package_available("mlflow"):
return True
if importlib.util.find_spec("mlflow") is not None:
try:
_ = importlib.metadata.metadata("mlflow-skinny")
return True
except importlib.metadata.PackageNotFoundError:
return False
return False
def is_mps_available():
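A standalone sketch of the nvml-based probe introduced in `is_cuda_available` above: the `PYTORCH_NVML_BASED_CUDA_CHECK` flag lets `torch.cuda.is_available()` answer without initializing the CUDA context.

```python
import os

import torch


def cuda_available_without_init() -> bool:
    # Same pattern as the helper above: set the flag for the duration of the check only.
    try:
        os.environ["PYTORCH_NVML_BASED_CUDA_CHECK"] = "1"
        return torch.cuda.is_available()
    finally:
        os.environ.pop("PYTORCH_NVML_BASED_CUDA_CHECK", None)


print(cuda_available_without_init())
```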

View File

@ -21,7 +21,6 @@ from typing import Any, Dict, List, Tuple
import torch
from ..commands.config.config_args import SageMakerConfig
from ..commands.config.config_utils import DYNAMO_BACKENDS
from ..utils import (
DynamoBackend,
PrecisionType,
@ -61,6 +60,8 @@ def prepare_simple_launcher_cmd_env(args: argparse.Namespace) -> Tuple[List[str]
current_env = os.environ.copy()
current_env["ACCELERATE_USE_CPU"] = str(args.cpu or args.use_cpu)
if args.debug:
current_env["ACCELERATE_DEBUG_MODE"] = "true"
if args.gpu_ids != "all" and args.gpu_ids is not None:
if is_xpu_available():
current_env["ZE_AFFINITY_MASK"] = args.gpu_ids
@ -87,7 +88,9 @@ def prepare_simple_launcher_cmd_env(args: argparse.Namespace) -> Tuple[List[str]
try:
dynamo_backend = DynamoBackend(args.dynamo_backend.upper())
except ValueError:
raise ValueError(f"Unknown dynamo backend: {args.dynamo_backend.upper()}. Choose between {DYNAMO_BACKENDS}.")
raise ValueError(
f"Unknown dynamo backend: {args.dynamo_backend.upper()}. Choose between {DynamoBackend.list()}."
)
current_env["ACCELERATE_DYNAMO_BACKEND"] = dynamo_backend.value
current_env["ACCELERATE_DYNAMO_MODE"] = args.dynamo_mode
current_env["ACCELERATE_DYNAMO_USE_FULLGRAPH"] = str(args.dynamo_use_fullgraph)
@ -140,9 +143,11 @@ def prepare_multi_gpu_env(args: argparse.Namespace) -> Dict[str, str]:
setattr(args, "no_python", True)
current_env = os.environ.copy()
if args.debug:
current_env["ACCELERATE_DEBUG_MODE"] = "true"
gpu_ids = getattr(args, "gpu_ids", "all")
if gpu_ids != "all" and args.gpu_ids is not None:
if not is_xpu_available():
if is_xpu_available():
current_env["ZE_AFFINITY_MASK"] = gpu_ids
elif is_npu_available():
current_env["ASCEND_RT_VISIBLE_DEVICES"] = gpu_ids
@ -159,7 +164,9 @@ def prepare_multi_gpu_env(args: argparse.Namespace) -> Dict[str, str]:
try:
dynamo_backend = DynamoBackend(args.dynamo_backend.upper())
except ValueError:
raise ValueError(f"Unknown dynamo backend: {args.dynamo_backend.upper()}. Choose between {DYNAMO_BACKENDS}.")
raise ValueError(
f"Unknown dynamo backend: {args.dynamo_backend.upper()}. Choose between {DynamoBackend.list()}."
)
current_env["ACCELERATE_DYNAMO_BACKEND"] = dynamo_backend.value
current_env["ACCELERATE_DYNAMO_MODE"] = args.dynamo_mode
current_env["ACCELERATE_DYNAMO_USE_FULLGRAPH"] = str(args.dynamo_use_fullgraph)
@ -167,6 +174,9 @@ def prepare_multi_gpu_env(args: argparse.Namespace) -> Dict[str, str]:
if args.use_fsdp:
current_env["ACCELERATE_USE_FSDP"] = "true"
if args.fsdp_cpu_ram_efficient_loading and not args.fsdp_sync_module_states:
raise ValueError("When using `--fsdp_cpu_ram_efficient_loading` set `--fsdp_sync_module_states` to `True`")
current_env["FSDP_SHARDING_STRATEGY"] = str(args.fsdp_sharding_strategy)
current_env["FSDP_OFFLOAD_PARAMS"] = str(args.fsdp_offload_params).lower()
current_env["FSDP_MIN_NUM_PARAMS"] = str(args.fsdp_min_num_params)
@ -180,6 +190,7 @@ def prepare_multi_gpu_env(args: argparse.Namespace) -> Dict[str, str]:
current_env["FSDP_STATE_DICT_TYPE"] = str(args.fsdp_state_dict_type)
current_env["FSDP_FORWARD_PREFETCH"] = str(args.fsdp_forward_prefetch).lower()
current_env["FSDP_USE_ORIG_PARAMS"] = str(args.fsdp_use_orig_params).lower()
current_env["FSDP_CPU_RAM_EFFICIENT_LOADING"] = str(args.fsdp_cpu_ram_efficient_loading).lower()
current_env["FSDP_SYNC_MODULE_STATES"] = str(args.fsdp_sync_module_states).lower()
if args.use_megatron_lm:
@ -276,6 +287,8 @@ def prepare_deepspeed_cmd_env(args: argparse.Namespace) -> Tuple[List[str], Dict
setattr(args, "no_python", True)
current_env = os.environ.copy()
if args.debug:
current_env["ACCELERATE_DEBUG_MODE"] = "true"
gpu_ids = getattr(args, "gpu_ids", "all")
if gpu_ids != "all" and args.gpu_ids is not None:
if not is_xpu_available():
@ -323,6 +336,8 @@ def prepare_tpu(
current_env["XLA_DOWNCAST_BF16"] = "1"
else:
current_env["XLA_USE_BF16"] = "1"
if args.debug:
current_env["ACCELERATE_DEBUG_MODE"] = "true"
if pod:
# Take explicit args and set them up for XLA
args.vm = args.tpu_vm
@ -411,7 +426,9 @@ def prepare_sagemager_args_inputs(
try:
dynamo_backend = DynamoBackend(args.dynamo_backend.upper())
except ValueError:
raise ValueError(f"Unknown dynamo backend: {args.dynamo_backend.upper()}. Choose between {DYNAMO_BACKENDS}.")
raise ValueError(
f"Unknown dynamo backend: {args.dynamo_backend.upper()}. Choose between {DynamoBackend.list()}."
)
# Environment variables to be set for use during training job
environment = {
@ -529,7 +546,9 @@ class PrepareForLaunch:
):
# Prepare the environment for torch.distributed
os.environ["LOCAL_RANK"] = str(index)
os.environ["RANK"] = str(index)
nproc = int(os.environ.get("NPROC", 1))
node_rank = int(os.environ.get("NODE_RANK", 0))
os.environ["RANK"] = str(nproc * node_rank + index)
os.environ["FORK_LAUNCHED"] = str(1)
self.launcher(*args)
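The launcher fix above computes the global rank from `NPROC` (processes per node), `NODE_RANK`, and the local process index rather than reusing the local index alone; the arithmetic is simply:

```python
def global_rank(nproc: int, node_rank: int, local_index: int) -> int:
    # e.g. with 8 processes per node: node 1, local index 3 -> global rank 11
    return nproc * node_rank + local_index


assert global_rank(nproc=8, node_rank=0, local_index=3) == 3
assert global_rank(nproc=8, node_rank=1, local_index=3) == 11
```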

View File

@ -28,13 +28,17 @@ import torch
import torch.nn as nn
from ..state import AcceleratorState
from .constants import WEIGHTS_NAME
from .dataclasses import CustomDtype, DistributedType
from .imports import is_mps_available, is_safetensors_available, is_xpu_available
from .constants import SAFE_WEIGHTS_NAME, WEIGHTS_NAME
from .dataclasses import AutocastKwargs, CustomDtype, DistributedType
from .imports import is_mps_available, is_npu_available, is_safetensors_available, is_xpu_available
from .offload import load_offloaded_weight, offload_weight, save_offload_index
from .tqdm import is_tqdm_available, tqdm
if is_npu_available(check_device=False):
import torch_npu # noqa: F401
if is_safetensors_available():
from safetensors import safe_open
from safetensors.torch import load_file as safe_load_file
@ -59,24 +63,34 @@ def convert_file_size_to_int(size: Union[int, str]):
1048576
```
"""
if isinstance(size, int):
return size
if size.upper().endswith("GIB"):
return int(size[:-3]) * (2**30)
if size.upper().endswith("MIB"):
return int(size[:-3]) * (2**20)
if size.upper().endswith("KIB"):
return int(size[:-3]) * (2**10)
if size.upper().endswith("GB"):
int_size = int(size[:-2]) * (10**9)
return int_size // 8 if size.endswith("b") else int_size
if size.upper().endswith("MB"):
int_size = int(size[:-2]) * (10**6)
return int_size // 8 if size.endswith("b") else int_size
if size.upper().endswith("KB"):
int_size = int(size[:-2]) * (10**3)
return int_size // 8 if size.endswith("b") else int_size
raise ValueError("`size` is not in a valid format. Use an integer followed by the unit, e.g., '5GB'.")
mem_size = 0
err_msg = (
f"`size` {size} is not in a valid format. Use an integer for bytes, or a string with an unit (like '5.0GB')."
)
try:
if isinstance(size, int):
mem_size = size
elif size.upper().endswith("GIB"):
mem_size = int(float(size[:-3]) * (2**30))
elif size.upper().endswith("MIB"):
mem_size = int(float(size[:-3]) * (2**20))
elif size.upper().endswith("KIB"):
mem_size = int(float(size[:-3]) * (2**10))
elif size.upper().endswith("GB"):
int_size = int(float(size[:-2]) * (10**9))
mem_size = int_size // 8 if size.endswith("b") else int_size
elif size.upper().endswith("MB"):
int_size = int(float(size[:-2]) * (10**6))
mem_size = int_size // 8 if size.endswith("b") else int_size
elif size.upper().endswith("KB"):
int_size = int(float(size[:-2]) * (10**3))
mem_size = int_size // 8 if size.endswith("b") else int_size
except ValueError:
raise ValueError(err_msg)
if mem_size <= 0:
raise ValueError(err_msg)
return mem_size
def dtype_byte_size(dtype: torch.dtype):
@ -236,7 +250,7 @@ def set_module_tensor_to_device(
Args:
module (`torch.nn.Module`):
The module in which the tensor we want to move lives.
param_name (`str`):
tensor_name (`str`):
The full name of the parameter/buffer.
device (`int`, `str` or `torch.device`):
The device on which to set the tensor.
@ -267,6 +281,11 @@ def set_module_tensor_to_device(
raise ValueError(f"{tensor_name} is on the meta device, we need a `value` to put in on {device}.")
if value is not None:
if old_value.shape != value.shape:
raise ValueError(
f'Trying to set a tensor of shape {value.shape} in "{tensor_name}" (which has shape {old_value.shape}), this looks incorrect.'
)
if dtype is None:
# For compatibility with PyTorch load_state_dict which converts state dict dtype to existing dtype in model
value = value.to(old_value.dtype)
@ -637,6 +656,26 @@ def get_max_memory(max_memory: Optional[Dict[Union[int, str], Union[int, str]]]
for key in max_memory:
if isinstance(max_memory[key], str):
max_memory[key] = convert_file_size_to_int(max_memory[key])
# Need to sort the device by type to make sure that we allocate the gpu first.
# As gpu/xpu are represented by int, we need to sort them first.
gpu_devices = [k for k in max_memory.keys() if isinstance(k, int)]
gpu_devices.sort()
# check if gpu/xpu devices are available and if not, throw a warning
num_devices = torch.xpu.device_count() if is_xpu_available() else torch.cuda.device_count()
for device in gpu_devices:
if device >= num_devices or device < 0:
logger.warning(f"Device {device} is not available, available devices are {list(range(num_devices))}")
# Add the other devices in the preset order if they are available
all_devices = gpu_devices + [k for k in ["mps", "cpu", "disk"] if k in max_memory.keys()]
# Raise an error if a device is not recognized
for k in max_memory.keys():
if k not in all_devices:
raise ValueError(
f"Device {k} is not recognized, available devices are integers(for GPU/XPU), 'mps', 'cpu' and 'disk'"
)
max_memory = {k: max_memory[k] for k in all_devices}
return max_memory
@ -728,11 +767,9 @@ def get_balanced_memory(
Transformers generate function).
"""
# Get default / clean up max_memory
user_not_set_max_memory = max_memory is None
max_memory = get_max_memory(max_memory)
if not (torch.cuda.is_available() or is_xpu_available()) or is_mps_available():
return max_memory
if not is_xpu_available():
num_devices = len([d for d in max_memory if torch.device(d).type == "cuda" and max_memory[d] > 0])
else:
@ -740,14 +777,30 @@ def get_balanced_memory(
[
d
for d in max_memory
if (torch.device(d).type == "xpu" or torch.xpu.get_device_properties(d).dev_type == "gpu")
if (
d != "cpu"
and (torch.device(d).type == "xpu" or torch.xpu.get_device_properties(d).dev_type == "gpu")
)
and max_memory[d] > 0
]
)
if num_devices == 0:
return max_memory
if num_devices == 1:
# We cannot do low_zero on just one GPU
# We cannot do low_zero on just one GPU, but we will still reserve some memory for the buffer
low_zero = False
# If user just asked us to handle memory usage, we should avoid OOM
if user_not_set_max_memory:
for key in max_memory.keys():
if isinstance(key, int):
max_memory[key] *= 0.9 # 90% is a good compromise
logger.info(
f"We will use 90% of the memory on device {key} for storing the model, and 10% for the buffer to avoid OOM. "
"You can set `max_memory` in to a higher value to use more memory (at your own risk)."
)
break # only one device
module_sizes = compute_module_sizes(model, dtype=dtype, special_dtypes=special_dtypes)
per_gpu = module_sizes[""] // (num_devices - 1 if low_zero else num_devices)
@ -790,11 +843,15 @@ def get_balanced_memory(
buffer = int(1.25 * max(buffer, mean_leaves))
per_gpu += buffer
max_memory = get_max_memory(max_memory)
last_gpu = max(i for i in max_memory if isinstance(i, int) and max_memory[i] > 0)
# Sorted list of GPU ids (we may have some gpu ids not included in our max_memory list - let's ignore them)
gpus_idx_list = list(
sorted(
device_id for device_id, device_mem in max_memory.items() if isinstance(device_id, int) and device_mem > 0
)
)
# The last device is left with max_memory just in case the buffer is not enough.
for i in range(last_gpu):
max_memory[i] = min(max_memory[0] if low_zero and i == 0 else per_gpu, max_memory[i])
for idx in gpus_idx_list[:-1]:
max_memory[idx] = min(max_memory[0] if low_zero and idx == 0 else per_gpu, max_memory[idx])
if low_zero:
min_zero = max(0, module_sizes[""] - sum([max_memory[i] for i in range(1, num_devices)]))
@ -803,6 +860,24 @@ def get_balanced_memory(
return max_memory
def calculate_maximum_sizes(model: torch.nn.Module):
"Computes the total size of the model and its largest layer"
sizes = compute_module_sizes(model)
# `transformers` models store this information for us
no_split_modules = getattr(model, "_no_split_modules", None)
if no_split_modules is None:
no_split_modules = []
modules_to_treat = (
list(model.named_parameters(recurse=False))
+ list(model.named_children())
+ list(model.named_buffers(recurse=False))
)
largest_layer = get_max_layer_size(modules_to_treat, sizes, no_split_modules)
total_size = sizes[""]
return total_size, largest_layer
def infer_auto_device_map(
model: nn.Module,
max_memory: Optional[Dict[Union[int, str], Union[int, str]]] = None,
@ -852,9 +927,9 @@ def infer_auto_device_map(
no_split_module_classes = [no_split_module_classes]
devices = list(max_memory.keys())
gpus = [device for device in devices if device != "cpu"]
if "disk" not in devices:
devices.append("disk")
gpus = [device for device in devices if device not in ["cpu", "disk"]]
# Devices that need to keep space for a potential offloaded layer.
if "mps" in gpus:
@ -1183,7 +1258,7 @@ def load_checkpoint_in_model(
- a path to a file containing a whole model state dict
- a path to a `.json` file containing the index to a sharded checkpoint
- a path to a folder containing a unique `.index.json` file and the shards of a checkpoint.
- a path to a folder containing a unique pytorch_model.bin file.
- a path to a folder containing a unique pytorch_model.bin or a model.safetensors file.
device_map (`Dict[str, Union[int, str, torch.device]]`, *optional*):
A map that specifies where each submodule should go. It doesn't need to be refined to each parameter/buffer
name, once a given module name is inside, every submodule of it will be sent to the same device.
@ -1235,15 +1310,18 @@ def load_checkpoint_in_model(
checkpoint_files = [checkpoint]
elif os.path.isdir(checkpoint):
# check if the whole state dict is present
potential_state = [f for f in os.listdir(checkpoint) if f == WEIGHTS_NAME]
if len(potential_state) == 1:
checkpoint_files = [os.path.join(checkpoint, potential_state[0])]
potential_state_bin = [f for f in os.listdir(checkpoint) if f == WEIGHTS_NAME]
potential_state_safetensor = [f for f in os.listdir(checkpoint) if f == SAFE_WEIGHTS_NAME]
if len(potential_state_bin) == 1:
checkpoint_files = [os.path.join(checkpoint, potential_state_bin[0])]
elif len(potential_state_safetensor) == 1:
checkpoint_files = [os.path.join(checkpoint, potential_state_safetensor[0])]
else:
# otherwise check for sharded checkpoints
potential_index = [f for f in os.listdir(checkpoint) if f.endswith(".index.json")]
if len(potential_index) == 0:
raise ValueError(
f"{checkpoint} is not a folder containing a `.index.json` file or a {WEIGHTS_NAME} file"
f"{checkpoint} is not a folder containing a `.index.json` file or a {WEIGHTS_NAME} or a {SAFE_WEIGHTS_NAME} file"
)
elif len(potential_index) == 1:
index_filename = os.path.join(checkpoint, potential_index[0])
@ -1356,7 +1434,7 @@ def load_checkpoint_in_model(
retie_parameters(model, tied_params)
def get_mixed_precision_context_manager(native_amp: bool = False, cache_enabled: bool = True):
def get_mixed_precision_context_manager(native_amp: bool = False, autocast_kwargs: AutocastKwargs = None):
"""
Return a context manager for autocasting mixed precision
@ -1367,17 +1445,23 @@ def get_mixed_precision_context_manager(native_amp: bool = False, cache_enabled:
Whether the weight cache inside autocast should be enabled.
"""
state = AcceleratorState()
if autocast_kwargs is None:
autocast_kwargs = {}
else:
autocast_kwargs = autocast_kwargs.to_kwargs()
if native_amp:
if state.mixed_precision == "fp16":
return torch.autocast(device_type=state.device.type, dtype=torch.float16, cache_enabled=cache_enabled)
return torch.autocast(device_type=state.device.type, dtype=torch.float16, **autocast_kwargs)
elif state.mixed_precision == "bf16" and state.distributed_type in [
DistributedType.NO,
DistributedType.MULTI_CPU,
DistributedType.MULTI_GPU,
DistributedType.MULTI_NPU,
DistributedType.MULTI_XPU,
DistributedType.FSDP,
]:
return torch.autocast(device_type=state.device.type, dtype=torch.bfloat16, cache_enabled=cache_enabled)
return torch.autocast(device_type=state.device.type, dtype=torch.bfloat16, **autocast_kwargs)
else:
return torch.autocast(device_type=state.device.type, cache_enabled=cache_enabled)
return torch.autocast(device_type=state.device.type, **autocast_kwargs)
else:
return contextlib.nullcontext()
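For the relaxed size parsing earlier in this file, here is a short check of the new behaviour (import path assumed to be `accelerate.utils.modeling`, the module edited here): fractional sizes are now accepted, and malformed or non-positive values raise `ValueError`.

```python
from accelerate.utils.modeling import convert_file_size_to_int  # path assumed from this diff

print(convert_file_size_to_int("0.5GiB"))  # 536870912
print(convert_file_size_to_int("5GB"))     # 5000000000
try:
    convert_file_size_to_int("-1GB")
except ValueError as err:
    print(err)
```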

View File

@ -146,8 +146,8 @@ class OffloadedWeightsLoader(Mapping):
index: Mapping = None,
device=None,
):
if state_dict is None and save_folder is None:
raise ValueError("Need either a `state_dict` or a `save_folder` containing offloaded weights.")
if state_dict is None and save_folder is None and index is None:
raise ValueError("Need either a `state_dict`, a `save_folder` or an `index` containing offloaded weights.")
self.state_dict = {} if state_dict is None else state_dict
self.save_folder = save_folder

View File

@ -17,15 +17,15 @@ A set of basic tensor ops compatible with tpu, gpu, and multigpu
"""
import pickle
from functools import update_wrapper
from functools import update_wrapper, wraps
from typing import Any, Mapping
import torch
from ..state import PartialState
from .constants import CUDA_DISTRIBUTED_TYPES
from .constants import TORCH_DISTRIBUTED_OPERATION_TYPES
from .dataclasses import DistributedType, TensorInformation
from .imports import is_torch_distributed_available, is_tpu_available
from .imports import is_torch_distributed_available, is_torch_version, is_tpu_available
if is_tpu_available(check_device=False):
@ -189,6 +189,24 @@ def get_data_structure(data):
return recursively_apply(_get_data_structure, data)
def get_shape(data):
"""
Recursively gathers the shape of a nested list/tuple/dictionary of tensors as a list.
Args:
data (nested list/tuple/dictionary of `torch.Tensor`):
The data to analyze.
Returns:
The same data structure as `data` with lists of tensor shapes instead of tensors.
"""
def _get_shape(tensor):
return list(tensor.shape)
return recursively_apply(_get_shape, data)
def initialize_tensors(data_structure):
"""
Recursively initializes tensors from a nested list/tuple/dictionary of [`~utils.TensorInformation`].
@ -251,6 +269,9 @@ def _tpu_gather(tensor):
if tensor.ndim == 0:
tensor = tensor.clone()[None]
# Can only gather contiguous tensors
if not tensor.is_contiguous():
tensor = tensor.contiguous()
return xm.all_gather(tensor)
res = recursively_apply(_tpu_gather_one, tensor, error_on_other_type=True)
@ -259,19 +280,102 @@ def _tpu_gather(tensor):
def _gpu_gather(tensor):
state = PartialState()
if is_torch_version(">=", "1.13"):
gather_op = torch.distributed.all_gather_into_tensor
else:
gather_op = torch.distributed._all_gather_base
def _gpu_gather_one(tensor):
if tensor.ndim == 0:
tensor = tensor.clone()[None]
output_tensors = [torch.empty_like(tensor) for _ in range(torch.distributed.get_world_size())]
torch.distributed.all_gather(output_tensors, tensor)
return torch.cat(output_tensors, dim=0)
# Can only gather contiguous tensors
if not tensor.is_contiguous():
tensor = tensor.contiguous()
if state.backend is not None and state.backend != "gloo":
# We use `empty` as `all_gather_into_tensor` slightly
# differs from `all_gather` for better efficiency,
# and we rely on the number of items in the tensor
# rather than its direct shape
output_tensors = torch.empty(
state.num_processes * tensor.numel(),
dtype=tensor.dtype,
device=state.device,
)
gather_op(output_tensors, tensor)
return output_tensors.view(-1, *tensor.size()[1:])
else:
# a backend of `None` is always CPU
# also gloo does not support `all_gather_into_tensor`,
# which will result in a larger memory overhead for the op
output_tensors = [torch.empty_like(tensor) for _ in range(state.num_processes)]
torch.distributed.all_gather(output_tensors, tensor)
return torch.cat(output_tensors, dim=0)
return recursively_apply(_gpu_gather_one, tensor, error_on_other_type=True)
_cpu_gather = _gpu_gather
class DistributedOperationException(Exception):
"""
An exception class for distributed operations. Raised if the operation cannot be performed due to the shape of the
tensors.
"""
pass
def verify_operation(function):
"""
Verifies that `tensor` is the same shape across all processes. Only run if `PartialState().debug` is `True`.
"""
@wraps(function)
def wrapper(*args, **kwargs):
if PartialState().distributed_type == DistributedType.NO or not PartialState().debug:
return function(*args, **kwargs)
operation = f"{function.__module__}.{function.__name__}"
if "tensor" in kwargs:
tensor = kwargs["tensor"]
else:
tensor = args[0]
shapes = get_shape(tensor)
output = gather_object([shapes])
if output[0] is not None:
are_same = output.count(output[0]) == len(output)
if not are_same:
process_shape_str = "\n - ".join([f"Process {i}: {shape}" for i, shape in enumerate(output)])
raise DistributedOperationException(
f"Cannot apply desired operation due to shape mismatches. "
"All shapes across devices must be valid."
f"\n\nOperation: `{operation}`\nInput shapes:\n - {process_shape_str}"
)
return function(*args, **kwargs)
return wrapper
def chained_operation(function):
"""
Checks whether `verify_operation` failed and, if so, reports a more helpful error chaining the existing
`DistributedOperationException`.
"""
@wraps(function)
def wrapper(*args, **kwargs):
try:
return function(*args, **kwargs)
except DistributedOperationException as e:
operation = f"{function.__module__}.{function.__name__}"
raise DistributedOperationException(
f"Error found while calling `{operation}`. Please see the earlier error for more details."
) from e
return wrapper
@verify_operation
def gather(tensor):
"""
Recursively gather tensor in a nested list/tuple/dictionary of tensors from all devices.
@ -285,14 +389,8 @@ def gather(tensor):
"""
if PartialState().distributed_type == DistributedType.TPU:
return _tpu_gather(tensor)
elif PartialState().distributed_type in CUDA_DISTRIBUTED_TYPES:
elif PartialState().distributed_type in TORCH_DISTRIBUTED_OPERATION_TYPES:
return _gpu_gather(tensor)
elif PartialState().distributed_type in DistributedType.MULTI_NPU:
return _gpu_gather(tensor)
elif PartialState().distributed_type in DistributedType.MULTI_XPU:
return _gpu_gather(tensor)
elif PartialState().distributed_type == DistributedType.MULTI_CPU:
return _cpu_gather(tensor)
else:
return tensor
@ -304,9 +402,6 @@ def _gpu_gather_object(object: Any):
return [x for y in output_objects for x in y]
_cpu_gather_object = _gpu_gather_object
def gather_object(object: Any):
"""
Recursively gather object in a nested list/tuple/dictionary of objects from all devices.
@ -320,14 +415,8 @@ def gather_object(object: Any):
"""
if PartialState().distributed_type == DistributedType.TPU:
raise NotImplementedError("gather objects in TPU is not supported")
elif PartialState().distributed_type in CUDA_DISTRIBUTED_TYPES:
elif PartialState().distributed_type in TORCH_DISTRIBUTED_OPERATION_TYPES:
return _gpu_gather_object(object)
elif PartialState().distributed_type in DistributedType.MULTI_NPU:
return _gpu_gather_object(object)
elif PartialState().distributed_type in DistributedType.MULTI_XPU:
return _gpu_gather_object(object)
elif PartialState().distributed_type == DistributedType.MULTI_CPU:
return _cpu_gather_object(object)
else:
return object
@ -348,6 +437,7 @@ def _tpu_broadcast(tensor, src=0, name="broadcast tensor"):
return xm.mesh_reduce(name, tensor, lambda x: x[src])
@verify_operation
def broadcast(tensor, from_process: int = 0):
"""
Recursively broadcast tensor in a nested list/tuple/dictionary of tensors to all devices.
@ -363,13 +453,7 @@ def broadcast(tensor, from_process: int = 0):
"""
if PartialState().distributed_type == DistributedType.TPU:
return _tpu_broadcast(tensor, src=from_process, name="accelerate.utils.broadcast")
elif PartialState().distributed_type in CUDA_DISTRIBUTED_TYPES:
return _gpu_broadcast(tensor, src=from_process)
elif PartialState().distributed_type in DistributedType.MULTI_NPU:
return _gpu_gather_object(object)
elif PartialState().distributed_type in DistributedType.MULTI_XPU:
return _gpu_broadcast(tensor, src=from_process)
elif PartialState().distributed_type == DistributedType.MULTI_CPU:
elif PartialState().distributed_type in TORCH_DISTRIBUTED_OPERATION_TYPES:
return _gpu_broadcast(tensor, src=from_process)
else:
return tensor
@ -391,18 +475,12 @@ def broadcast_object_list(object_list, from_process: int = 0):
if PartialState().distributed_type == DistributedType.TPU:
for i, obj in enumerate(object_list):
object_list[i] = xm.mesh_reduce("accelerate.utils.broadcast_object_list", obj, lambda x: x[from_process])
elif PartialState().distributed_type in CUDA_DISTRIBUTED_TYPES:
torch.distributed.broadcast_object_list(object_list, src=from_process)
elif PartialState().distributed_type in DistributedType.MULTI_NPU:
torch.distributed.broadcast_object_list(object_list, src=from_process)
elif PartialState().distributed_type in DistributedType.MULTI_XPU:
torch.distributed.broadcast_object_list(object_list, src=from_process)
elif PartialState().distributed_type == DistributedType.MULTI_CPU:
elif PartialState().distributed_type in TORCH_DISTRIBUTED_OPERATION_TYPES:
torch.distributed.broadcast_object_list(object_list, src=from_process)
return object_list
def slice_tensors(data, tensor_slice):
def slice_tensors(data, tensor_slice, process_index=None, num_processes=None):
"""
Recursively takes a slice in a nested list/tuple/dictionary of tensors.
@ -444,6 +522,7 @@ def concatenate(data, dim=0):
return torch.cat(data, dim=dim)
@chained_operation
def pad_across_processes(tensor, dim=0, pad_index=0, pad_first=False):
"""
Recursively pad the tensors in a nested list/tuple/dictionary of tensors from all devices to the same size so they
@ -490,7 +569,8 @@ def pad_across_processes(tensor, dim=0, pad_index=0, pad_first=False):
)
def reduce(tensor, reduction="mean"):
@verify_operation
def reduce(tensor, reduction="mean", scale=1.0):
"""
Recursively reduce the tensors in a nested list/tuple/dictionary of lists of tensors across all processes using a given reduction operation.
@ -500,31 +580,29 @@ def reduce(tensor, reduction="mean"):
The data to reduce.
reduction (`str`, *optional*, defaults to `"mean"`):
A reduction method. Can be of "mean", "sum", or "none"
scale (`float`, *optional*):
A default scaling value to be applied after the reduce, only valid on XLA.
Returns:
The same data structure as `data` with all the tensors reduced.
"""
def _reduce_across_processes(tensor, reduction="mean"):
def _reduce_across_processes(tensor, reduction="mean", scale=1.0):
state = PartialState()
cloned_tensor = tensor.clone()
if state.distributed_type == DistributedType.NO:
return cloned_tensor
if state.distributed_type == DistributedType.TPU:
xm.all_reduce("sum", cloned_tensor)
elif state.distributed_type.value in CUDA_DISTRIBUTED_TYPES:
torch.distributed.all_reduce(cloned_tensor, ReduceOp.SUM)
elif state.distributed_type.value in DistributedType.MULTI_NPU:
torch.distributed.all_reduce(cloned_tensor, ReduceOp.SUM)
elif state.distributed_type.value in DistributedType.MULTI_XPU:
torch.distributed.all_reduce(cloned_tensor, ReduceOp.SUM)
elif state.distributed_type == DistributedType.MULTI_CPU:
xm.all_reduce("sum", cloned_tensor, scale)
elif state.distributed_type.value in TORCH_DISTRIBUTED_OPERATION_TYPES:
torch.distributed.all_reduce(cloned_tensor, ReduceOp.SUM)
if reduction == "mean":
cloned_tensor /= state.num_processes
return cloned_tensor
return recursively_apply(_reduce_across_processes, tensor, error_on_other_type=True, reduction=reduction)
return recursively_apply(
_reduce_across_processes, tensor, error_on_other_type=True, reduction=reduction, scale=scale
)
def convert_to_fp32(tensor):
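A toy, single-process illustration of the reshape used by the new `all_gather_into_tensor` path above: a flat buffer of `num_processes * tensor.numel()` elements is filled by the collective and then viewed back as `(-1, *tensor.size()[1:])`.

```python
import torch

num_processes = 4
local = torch.arange(6.0).reshape(2, 3)            # each rank would contribute a (2, 3) tensor
flat = torch.empty(num_processes * local.numel())  # the pre-allocated gather buffer
for rank in range(num_processes):                  # stand-in for the collective op
    flat[rank * local.numel() : (rank + 1) * local.numel()] = local.flatten()
gathered = flat.view(-1, *local.size()[1:])
print(gathered.shape)  # torch.Size([8, 3])
```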

View File

@ -13,25 +13,35 @@
# limitations under the License.
import os
import platform
import re
import socket
from contextlib import contextmanager
from functools import partial
from types import MethodType
import torch
from packaging.version import Version
from ..commands.config.default import write_basic_config # noqa: F401
from ..logging import get_logger
from ..state import PartialState
from .constants import FSDP_PYTORCH_VERSION
from .dataclasses import DistributedType
from .imports import is_deepspeed_available, is_tpu_available
from .imports import is_deepspeed_available, is_safetensors_available, is_tpu_available
from .transformer_engine import convert_model
from .versions import is_torch_version
if is_deepspeed_available():
from deepspeed import DeepSpeedEngine
logger = get_logger(__name__)
if is_tpu_available(check_device=False):
import torch_xla.core.xla_model as xm
if is_safetensors_available():
from safetensors.torch import save_file as safe_save_file
def is_compiled_module(module):
"""
@ -63,8 +73,15 @@ def extract_model_from_parallel(model, keep_fp32_wrapper: bool = True):
model = model._orig_mod
if is_deepspeed_available():
from deepspeed import DeepSpeedEngine
options += (DeepSpeedEngine,)
if is_torch_version(">=", FSDP_PYTORCH_VERSION):
from torch.distributed.fsdp.fully_sharded_data_parallel import FullyShardedDataParallel as FSDP
options += (FSDP,)
while isinstance(model, options):
model = model.module
@ -76,7 +93,7 @@ def extract_model_from_parallel(model, keep_fp32_wrapper: bool = True):
forward = forward.__wrapped__
if forward == original_forward:
break
model.forward = forward
model.forward = MethodType(forward, model)
if getattr(model, "_converted_to_transformer_engine", False):
convert_model(model, to_transformer_engine=False)
@ -100,18 +117,60 @@ def wait_for_everyone():
PartialState().wait_for_everyone()
def save(obj, f):
def save(obj, f, save_on_each_node: bool = False, safe_serialization: bool = False):
"""
Save the data to disk. Use in place of `torch.save()`.
Args:
obj: The data to save
f: The file (or file-like object) to use to save the data
obj:
The data to save
f:
The file (or file-like object) to use to save the data
save_on_each_node (`bool`, *optional*, defaults to `False`):
Whether to save on every node's local main process, instead of only on the global main process
safe_serialization (`bool`, *optional*, defaults to `False`):
Whether to save `obj` using `safetensors`
"""
save_func = torch.save if not safe_serialization else partial(safe_save_file, metadata={"format": "pt"})
if PartialState().distributed_type == DistributedType.TPU:
xm.save(obj, f)
elif PartialState().local_process_index == 0:
torch.save(obj, f)
elif PartialState().is_main_process and not save_on_each_node:
save_func(obj, f)
elif PartialState().is_local_main_process and save_on_each_node:
save_func(obj, f)
@contextmanager
def clear_environment():
"""
A context manager that will cache the original `os.environ` and replace it with an empty dictionary in this context.
When this context exits, the cached `os.environ` will be restored.
Example:
```python
>>> import os
>>> from accelerate.utils import clear_environment
>>> os.environ["FOO"] = "bar"
>>> with clear_environment():
... print(os.environ)
... os.environ["FOO"] = "new_bar"
... print(os.environ["FOO"])
{}
new_bar
>>> print(os.environ["FOO"])
bar
```
"""
_old_os_environ = os.environ
os.environ = dict()
yield
os.environ = _old_os_environ
@contextmanager
@ -132,14 +191,22 @@ def patch_environment(**kwargs):
>>> print(os.environ["FOO"]) # raises KeyError
```
"""
existing_vars = {}
for key, value in kwargs.items():
os.environ[key.upper()] = str(value)
key = key.upper()
if key in os.environ:
existing_vars[key] = os.environ[key]
os.environ[key] = str(value)
yield
for key in kwargs:
if key.upper() in os.environ:
del os.environ[key.upper()]
key = key.upper()
if key in existing_vars:
# restore previous value
os.environ[key] = existing_vars[key]
else:
os.environ.pop(key, None)
def get_pretty_name(obj):
@ -182,3 +249,31 @@ def is_port_in_use(port: int = None) -> bool:
port = 29500
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
return s.connect_ex(("localhost", port)) == 0
def convert_bytes(size):
"Converts `size` from bytes to the largest possible unit"
for x in ["bytes", "KB", "MB", "GB", "TB"]:
if size < 1024.0:
return f"{round(size, 2)} {x}"
size /= 1024.0
return f"{round(size, 2)} PB"
def check_os_kernel():
"""Warns if the kernel version is below the recommended minimum on Linux."""
# see issue #1929
info = platform.uname()
system = info.system
if system != "Linux":
return
_, version, *_ = re.split(r"(\d+\.\d+\.\d+)", info.release)
min_version = "5.5.0"
if Version(version) < Version(min_version):
msg = (
f"Detected kernel version {version}, which is below the recommended minimum of {min_version}; this can "
"cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher."
)
logger.warning(msg, main_process_only=True)
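To see the `patch_environment` fix above in action: variables that already existed are restored to their previous values on exit, while variables introduced by the context are still removed.

```python
import os

from accelerate.utils import patch_environment

os.environ["FOO"] = "bar"
with patch_environment(foo="baz", new_var="1"):
    print(os.environ["FOO"], os.environ["NEW_VAR"])  # baz 1
print(os.environ["FOO"])                             # bar (restored, not deleted)
print("NEW_VAR" in os.environ)                       # False
```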

View File

@ -31,7 +31,6 @@ from transformers.utils import is_torch_bf16_available
import accelerate
from accelerate.accelerator import Accelerator
from accelerate.scheduler import AcceleratedScheduler
from accelerate.state import AcceleratorState
from accelerate.test_utils.testing import (
AccelerateTestCase,
@ -56,8 +55,6 @@ from accelerate.utils.other import patch_environment
set_seed(42)
T5_SMALL = "t5-small"
T5_TINY = "patrickvonplaten/t5-tiny-random"
GPT2_TINY = "sshleifer/tiny-gpt2"
ZERO2 = "zero2"
@ -332,7 +329,8 @@ class DeepSpeedConfigIntegration(AccelerateTestCase):
model, optimizer, train_dataloader, eval_dataloader, dummy_lr_scheduler
)
self.assertTrue(
"You cannot create a `DummyScheduler` without specifying a scheduler in the config file."
"Either specify a scheduler in the config file or "
"pass in the `lr_scheduler_callable` parameter when using `accelerate.utils.DummyScheduler`."
in str(cm.exception)
)
@ -352,7 +350,7 @@ class DeepSpeedConfigIntegration(AccelerateTestCase):
self.assertTrue(accelerator.deepspeed_config["train_batch_size"], 16)
self.assertEqual(type(model), DeepSpeedEngine)
self.assertEqual(type(optimizer), DeepSpeedOptimizerWrapper)
self.assertEqual(type(lr_scheduler), AcceleratedScheduler)
self.assertEqual(type(lr_scheduler), DeepSpeedSchedulerWrapper)
self.assertEqual(type(accelerator.deepspeed_engine_wrapped), DeepSpeedEngineWrapper)
elif optim_type == DS_OPTIMIZER and scheduler_type == DS_SCHEDULER:
@ -483,6 +481,31 @@ class DeepSpeedConfigIntegration(AccelerateTestCase):
in str(cm.exception)
)
# passing `DummyScheduler` without `lr_scheduler_callable` should fail
with self.assertRaises(ValueError) as cm:
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
model, dummy_optimizer, train_dataloader, eval_dataloader, dummy_lr_scheduler
)
self.assertTrue(
"Either specify a scheduler in the config file or "
"pass in the `lr_scheduler_callable` parameter when using `accelerate.utils.DummyScheduler`."
in str(cm.exception)
)
# passing `lr_scheduler_callable` to DummyScheduler should enable DS Optim + Custom Scheduler
def _lr_scheduler_callable(optimizer):
return get_scheduler(
name="linear",
optimizer=optimizer,
num_warmup_steps=0,
num_training_steps=1000,
)
dummy_lr_scheduler = DummyScheduler(dummy_optimizer, lr_scheduler_callable=_lr_scheduler_callable)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
model, dummy_optimizer, train_dataloader, eval_dataloader, dummy_lr_scheduler
)
def test_save_checkpoints(self):
deepspeed_plugin = DeepSpeedPlugin(
hf_ds_config=self.ds_config_file[ZERO3],
@ -599,7 +622,7 @@ class DeepSpeedConfigIntegration(AccelerateTestCase):
deepspeed_plugin = DeepSpeedPlugin(
hf_ds_config=ds_config,
zero3_init_flag=True,
gradient_accumulation_steps=1,
gradient_accumulation_steps=2,
gradient_clipping=1.0,
zero_stage=2,
offload_optimizer_device="cpu",
@ -611,7 +634,7 @@ class DeepSpeedConfigIntegration(AccelerateTestCase):
accelerator = Accelerator(deepspeed_plugin=deepspeed_plugin, mixed_precision=dtype)
deepspeed_plugin = accelerator.state.deepspeed_plugin
self.assertEqual(deepspeed_plugin.deepspeed_config["gradient_clipping"], 1.0)
self.assertEqual(deepspeed_plugin.deepspeed_config["gradient_accumulation_steps"], 1)
self.assertEqual(deepspeed_plugin.deepspeed_config["gradient_accumulation_steps"], 2)
self.assertEqual(deepspeed_plugin.deepspeed_config["zero_optimization"]["stage"], 2)
self.assertEqual(
deepspeed_plugin.deepspeed_config["zero_optimization"]["offload_optimizer"]["device"], "cpu"
@ -632,6 +655,42 @@ class DeepSpeedConfigIntegration(AccelerateTestCase):
in str(cm.exception)
)
# base case of passing in `gradient_accumulation_steps` to `DeepSpeedPlugin`
AcceleratorState._reset_state(True)
deepspeed_plugin = DeepSpeedPlugin(zero_stage=2, gradient_accumulation_steps=4)
with mockenv_context(**self.dist_env):
accelerator = Accelerator(deepspeed_plugin=deepspeed_plugin, mixed_precision=dtype)
deepspeed_plugin = accelerator.state.deepspeed_plugin
self.assertEqual(deepspeed_plugin.deepspeed_config["gradient_accumulation_steps"], 4)
# filling the `auto` gradient_accumulation_steps via Accelerator's value
AcceleratorState._reset_state(True)
deepspeed_plugin = DeepSpeedPlugin(
hf_ds_config=ds_config,
zero3_init_flag=True,
gradient_clipping=1.0,
zero_stage=2,
offload_optimizer_device="cpu",
offload_param_device="cpu",
zero3_save_16bit_model=True,
)
with mockenv_context(**self.dist_env):
accelerator = Accelerator(
deepspeed_plugin=deepspeed_plugin, mixed_precision=dtype, gradient_accumulation_steps=8
)
train_set = RegressionDataset(length=80)
eval_set = RegressionDataset(length=20)
train_dataloader = DataLoader(train_set, batch_size=16, shuffle=True)
eval_dataloader = DataLoader(eval_set, batch_size=32, shuffle=False)
model = AutoModelForCausalLM.from_pretrained("gpt2")
dummy_optimizer = DummyOptim(params=model.parameters(), lr=5e-5, weight_decay=1e-4)
dummy_lr_scheduler = DummyScheduler(dummy_optimizer, warmup_num_steps=10, total_num_steps=1000)
model, _, train_dataloader, eval_dataloader, _ = accelerator.prepare(
model, dummy_optimizer, train_dataloader, eval_dataloader, dummy_lr_scheduler
)
deepspeed_plugin = accelerator.state.deepspeed_plugin
self.assertEqual(deepspeed_plugin.deepspeed_config["gradient_accumulation_steps"], 8)
def test_ds_config_assertions(self):
ambiguous_env = self.dist_env.copy()
ambiguous_env[

View File

@ -1,5 +1,6 @@
import json
import os
import pickle
import tempfile
from unittest.mock import patch
@ -9,9 +10,10 @@ from torch.utils.data import DataLoader, TensorDataset
from accelerate import DistributedType, infer_auto_device_map, init_empty_weights
from accelerate.accelerator import Accelerator
from accelerate.state import GradientState, PartialState
from accelerate.test_utils import require_bnb, require_multi_gpu, slow
from accelerate.test_utils import require_bnb, require_multi_gpu, require_safetensors, slow
from accelerate.test_utils.testing import AccelerateTestCase, require_cuda
from accelerate.utils import patch_environment
from accelerate.utils.modeling import load_checkpoint_in_model
def create_components():
@ -113,6 +115,30 @@ class AcceleratorTester(AccelerateTestCase):
accelerator.load_state(tmpdirname)
self.assertTrue(abs(model_signature - get_signature(model)) < 1e-3)
def test_save_model_pytorch(self):
accelerator = Accelerator()
model = torch.nn.Linear(10, 10)
model_signature = get_signature(model)
with tempfile.TemporaryDirectory() as tmpdirname:
accelerator.save_model(model, tmpdirname, safe_serialization=False)
# make sure loaded weights match
load_checkpoint_in_model(model, tmpdirname)
self.assertTrue(abs(model_signature - get_signature(model)) < 1e-3)
@require_safetensors
def test_save_model_safetensors(self):
accelerator = Accelerator()
model = torch.nn.Linear(10, 10)
model_signature = get_signature(model)
with tempfile.TemporaryDirectory() as tmpdirname:
accelerator.save_model(model, tmpdirname, safe_serialization=True)
# make sure loaded weights match
load_checkpoint_in_model(model, tmpdirname)
self.assertTrue(abs(model_signature - get_signature(model)) < 1e-3)
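Outside the test harness, the round-trip the two tests above exercise looks roughly like this (a sketch; "my_checkpoint" is a placeholder directory):
import torch
from accelerate import Accelerator
from accelerate.utils.modeling import load_checkpoint_in_model

accelerator = Accelerator()
model = torch.nn.Linear(10, 10)

# safe_serialization=True writes safetensors files, False falls back to torch.save
accelerator.save_model(model, "my_checkpoint", safe_serialization=True)

# reload the weights in place from the saved directory
load_checkpoint_in_model(model, "my_checkpoint")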
def test_save_load_model_with_hooks(self):
accelerator = Accelerator()
model, optimizer, scheduler, train_dl, valid_dl = create_components()
@ -329,3 +355,35 @@ class AcceleratorTester(AccelerateTestCase):
sgd = torch.optim.SGD(model.parameters(), lr=0.01)
accelerator = Accelerator(cpu=True)
_ = accelerator.prepare(sgd)
@require_cuda
def test_can_unwrap_model_fp16(self):
# test for a regression introduced in #872
# before the fix, after unwrapping with keep_fp32_wrapper=False, there would be the following error:
# Linear.forward() missing 1 required positional argument: 'input'
model = create_components()[0]
accelerator = Accelerator(mixed_precision="fp16")
inputs = torch.randn(10, 2).cuda()
model = accelerator.prepare(model)
model(inputs) # sanity check that this works
model = accelerator.unwrap_model(model, keep_fp32_wrapper=False)
model(inputs) # check that this still works
# check that pickle roundtrip works
model_loaded = pickle.loads(pickle.dumps(model))
model_loaded(inputs)
def test_can_unwrap_model(self):
model = create_components()[0]
accelerator = Accelerator(mixed_precision="no", cpu=True)
inputs = torch.randn(10, 2)
model = accelerator.prepare(model)
model(inputs) # sanity check that this works
model = accelerator.unwrap_model(model, keep_fp32_wrapper=False)
model(inputs) # check that this still works
# check that pickle roundtrip works
model_loaded = pickle.loads(pickle.dumps(model))
model_loaded(inputs)
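The two unwrap tests above guard the typical export path: prepare a model, then strip the mixed-precision forward wrapper before saving or pickling it. A minimal sketch (fp16 assumes a CUDA device is available):
import pickle

import torch
from accelerate import Accelerator

accelerator = Accelerator(mixed_precision="fp16")  # fp16 assumes a CUDA device
model = accelerator.prepare(torch.nn.Linear(2, 4))

# strip the mixed-precision forward wrapper before exporting
unwrapped = accelerator.unwrap_model(model, keep_fp32_wrapper=False)
torch.save(unwrapped.state_dict(), "weights.pt")
restored = pickle.loads(pickle.dumps(unwrapped))  # round-trips, per the regression test above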


@ -11,7 +11,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
import os
import unittest
from tempfile import TemporaryDirectory
@ -31,7 +31,7 @@ from accelerate.big_modeling import (
)
from accelerate.hooks import remove_hook_from_submodules
from accelerate.test_utils import require_bnb, require_cuda, require_mps, require_multi_gpu, slow
-from accelerate.utils import offload_state_dict
+from accelerate.utils import is_torch_version, offload_state_dict
class ModelForTest(nn.Module):
@ -45,6 +45,18 @@ class ModelForTest(nn.Module):
return self.linear2(self.batchnorm(self.linear1(x)))
class ModelForTestCopy(nn.Module):
def __init__(self, id: int):
super().__init__()
self.id = id
self.linear1 = nn.Linear(3, 4)
self.batchnorm = nn.BatchNorm1d(4)
self.linear2 = nn.Linear(4, 5)
def forward(self, x):
return self.linear2(self.batchnorm(self.linear1(x))), self.id
class ModelForTestTiedWeights(nn.Module):
def __init__(self):
super().__init__()
@ -106,8 +118,14 @@ class BigModelingTester(unittest.TestCase):
self.assertEqual(module.running_mean.device, torch.device("cpu"))
# Use with include_buffers=True
register_parameter_func = nn.Module.register_parameter
register_buffer_func = nn.Module.register_buffer
with init_empty_weights(include_buffers=True):
module = nn.BatchNorm1d(4)
# nn.Module.register_parameter/buffer shouldn't be changed with torch >= 2.0
if is_torch_version(">=", "2.0"):
self.assertEqual(register_parameter_func, nn.Module.register_parameter)
self.assertEqual(register_buffer_func, nn.Module.register_buffer)
self.assertEqual(module.weight.device, torch.device("meta"))
self.assertEqual(module.running_mean.device, torch.device("meta"))
@ -319,6 +337,48 @@ class BigModelingTester(unittest.TestCase):
output = model(x)
self.assertTrue(torch.allclose(expected, output.cpu(), atol=1e-5))
@require_cuda
def test_dispatch_model_copy(self):
original_model = ModelForTestCopy(id=1)
device_map = {"linear1": 0, "batchnorm": "cpu", "linear2": 0}
x = torch.randn(2, 3)
expected, original_output_id = original_model(x)
dispatch_model(original_model, device_map)
copied_model = copy.deepcopy(original_model)
copied_model.id = 2
output, copied_output_id = copied_model(x)
self.assertEqual(original_model.id, original_output_id)
self.assertEqual(copied_model.id, copied_output_id)
self.assertFalse(copied_model.linear1.forward is original_model.linear1.forward)
self.assertTrue(torch.allclose(expected, output.cpu(), atol=1e-5))
@require_cuda
def test_dispatch_model_move_offloaded_model(self):
model = ModelForTest()
device_map = {"linear1": "disk", "batchnorm": "cpu", "linear2": 0}
with TemporaryDirectory() as tmp_dir:
dispatch_model(model, device_map, offload_dir=tmp_dir)
with self.assertRaises(RuntimeError):
model.to(0)
@require_multi_gpu
def test_dispatch_model_move_model_warning(self):
model = ModelForTest()
device_map = {"linear1": 0, "batchnorm": 0, "linear2": 1}
with TemporaryDirectory() as tmp_dir:
dispatch_model(model, device_map, offload_dir=tmp_dir)
with self.assertLogs("accelerate.big_modeling", level="WARNING"):
model.to("cpu")
with self.assertLogs("accelerate.big_modeling", level="WARNING"):
model.cuda(0)
with self.assertRaises(RuntimeError):
x = torch.randn(2, 3)
model(x)
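The two tests above fix the contract that a dispatched model must not be moved afterwards; the hooks are meant to shuttle activations instead. A rough sketch, assuming `model` exposes submodules named `linear1`, `batchnorm` and `linear2` as in `ModelForTest`, and `x` is a CPU input batch:
from accelerate.big_modeling import dispatch_model

device_map = {"linear1": 0, "batchnorm": "cpu", "linear2": "disk"}
model = dispatch_model(model, device_map=device_map, offload_dir="offload")  # "offload" is a placeholder dir

output = model(x)  # hooks move inputs and intermediate activations between devices
# calling model.to(0) or model.cuda(0) afterwards raises or warns, as asserted above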
@slow
@require_multi_gpu
def test_dispatch_model_gpt2_on_two_gpus(self):
@ -415,6 +475,18 @@ class BigModelingTester(unittest.TestCase):
output = model(x)
self.assertTrue(torch.allclose(expected, output.cpu(), atol=1e-5))
@require_cuda
def test_dispatch_model_force_hooks(self):
model = ModelForTest()
device_map = {"": 0}
x = torch.randn(2, 3)
expected = model(x)
dispatch_model(model, device_map, force_hooks=True)
output = model(x)
self.assertTrue(torch.allclose(expected, output.cpu(), atol=1e-5))
@require_cuda
def test_load_checkpoint_and_dispatch(self):
model = ModelForTest()
@ -609,22 +681,16 @@ class BigModelingTester(unittest.TestCase):
with init_empty_weights():
model = AutoModel.from_config(AutoConfig.from_pretrained("bigscience/bloom-560m"))
# TODO: @younesbelkada remove the positional arg on the next `transformers` release
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
model = replace_with_bnb_linear(
model, modules_to_not_convert=["lm_head"], quantization_config=quantization_config
)
# TODO: @younesbelkada remove this block on the next `transformers` release
for p in model.parameters():
p.requires_grad = False
model_path = hf_hub_download("bigscience/bloom-560m", "pytorch_model.bin")
model = load_checkpoint_and_dispatch(
model,
checkpoint=model_path,
# device_map="auto",
device_map="balanced",
)
@ -645,16 +711,11 @@ class BigModelingTester(unittest.TestCase):
with init_empty_weights():
model = AutoModel.from_config(AutoConfig.from_pretrained("bigscience/bloom-560m"))
# TODO: @younesbelkada remove the positional arg on the next `transformers` release
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
model = replace_with_bnb_linear(
model, modules_to_not_convert=["lm_head"], quantization_config=quantization_config
)
# TODO: @younesbelkada remove this block on the next `transformers` release
for p in model.parameters():
p.requires_grad = False
model_path = hf_hub_download("bigscience/bloom-560m", "pytorch_model.bin")
# test with auto
@ -670,14 +731,10 @@ class BigModelingTester(unittest.TestCase):
with init_empty_weights():
model = AutoModel.from_config(AutoConfig.from_pretrained("bigscience/bloom-560m"))
# TODO: @younesbelkada remove the positional arg on the next `transformers` release
model = replace_with_bnb_linear(
model, modules_to_not_convert=["lm_head"], quantization_config=quantization_config
)
for p in model.parameters():
p.requires_grad = False
# test with str device map
model = load_checkpoint_and_dispatch(
model,
@ -691,15 +748,10 @@ class BigModelingTester(unittest.TestCase):
with init_empty_weights():
model = AutoModel.from_config(AutoConfig.from_pretrained("bigscience/bloom-560m"))
# TODO: @younesbelkada remove the positional arg on the next `transformers` release
model = replace_with_bnb_linear(
model, modules_to_not_convert=["lm_head"], quantization_config=quantization_config
)
# TODO: @younesbelkada remove this block on the next `transformers` release
for p in model.parameters():
p.requires_grad = False
# test with torch.device device map
model = load_checkpoint_and_dispatch(
model,
@ -712,7 +764,6 @@ class BigModelingTester(unittest.TestCase):
@slow
@require_bnb
@unittest.skip("Un-skip in the next transformers release")
def test_dipatch_model_fp4_simple(self):
"""Tests that `dispatch_model` quantizes fp4 layers"""
from huggingface_hub import hf_hub_download


@ -18,10 +18,17 @@ import unittest
from pathlib import Path
import torch
from huggingface_hub.utils import GatedRepoError, RepositoryNotFoundError
import accelerate
from accelerate.commands.estimate import estimate_command, estimate_command_parser, gather_data
from accelerate.test_utils import execute_subprocess_async
from accelerate.test_utils.testing import run_command
from accelerate.test_utils.testing import (
require_timm,
require_transformers,
run_command,
)
from accelerate.utils import patch_environment
class AccelerateLauncherTester(unittest.TestCase):
@ -60,10 +67,22 @@ class AccelerateLauncherTester(unittest.TestCase):
def test_config_compatibility(self):
for config in sorted(self.test_config_path.glob("**/*.yaml")):
-with self.subTest(config_file=config):
-execute_subprocess_async(
-self.base_cmd + ["--config_file", str(config), self.test_file_path], env=os.environ.copy()
-)
+if "invalid" not in str(config):
+with self.subTest(config_file=config):
+execute_subprocess_async(
+self.base_cmd + ["--config_file", str(config), self.test_file_path], env=os.environ.copy()
+)
def test_invalid_keys(self):
with self.assertRaises(
RuntimeError,
msg="The config file at 'invalid_keys.yaml' had unknown keys ('another_invalid_key', 'invalid_key')",
):
execute_subprocess_async(
self.base_cmd
+ ["--config_file", str(self.test_config_path / "invalid_keys.yaml"), self.test_file_path],
env=os.environ.copy(),
)
def test_accelerate_test(self):
execute_subprocess_async(["accelerate", "test"], env=os.environ.copy())
@ -211,3 +230,137 @@ class TpuConfigTester(unittest.TestCase):
f'{self.gcloud} test-tpu --zone us-central1-a --command {self.base_output}; pip install accelerate==12.0.0; echo "hello world"; echo "this is a second command" --worker all',
output,
)
class ModelEstimatorTester(unittest.TestCase):
"""
Test case for checking the output of `accelerate estimate-memory` is correct.
- Uses `estimate_command` when trying to catch raised errors
- Uses `gather_data` when just verifying the calculations are correct
"""
parser = estimate_command_parser()
def test_invalid_model_name(self):
with self.assertRaises(
RepositoryNotFoundError, msg="Repo for model `somebrokenname` does not exist on the Hub"
):
args = self.parser.parse_args(["somebrokenname"])
estimate_command(args)
@require_timm
def test_invalid_model_name_timm(self):
with self.assertRaises(RuntimeError, msg="Tried to load `muellerzr/dummy` with `timm` but"):
args = self.parser.parse_args(["muellerzr/dummy", "--library_name", "timm"])
estimate_command(args)
@require_transformers
def test_invalid_model_name_transformers(self):
with self.assertRaises(RuntimeError, msg="Tried to load `muellerzr/dummy` with `transformers` but"):
args = self.parser.parse_args(["muellerzr/dummy", "--library_name", "transformers"])
estimate_command(args)
def test_no_metadata(self):
with self.assertRaises(
ValueError, msg="Model `muellerzr/dummy` does not have any library metadata on the Hub"
):
args = self.parser.parse_args(["muellerzr/dummy"])
estimate_command(args)
def test_gated(self):
with self.assertRaises(GatedRepoError, msg="Repo for model `meta-llama/Llama-2-7b` is gated"):
args = self.parser.parse_args(["meta-llama/Llama-2-7b"])
with patch_environment(hf_hub_disable_implicit_token="1"):
estimate_command(args)
@require_transformers
def test_remote_code(self):
# Also tests that custom `Auto` classes work
args = self.parser.parse_args(["hf-internal-testing/test_dynamic_model"])
with self.assertRaises(ValueError, msg="--trust_remote_code"):
gather_data(args)
# Verify it works with the flag
args = self.parser.parse_args(["hf-internal-testing/test_dynamic_model", "--trust_remote_code"])
gather_data(args)
@require_transformers
def test_explicit_dtypes(self):
args = self.parser.parse_args(["bert-base-cased", "--dtypes", "float32", "float16"])
output = gather_data(args)
# The largest layer and total size of the model in bytes
largest_layer, total_size = 89075712, 433249280
# Check that the full precision -> half precision calculation is correct
self.assertEqual(len(output), 2, f"Output was missing a precision, expected 2 but received {len(output)}")
for i, factor in enumerate([1, 2]):
precision = 32 // factor
precision_str = f"float{precision}"
largest_layer_estimate = largest_layer / factor
total_size_estimate = total_size / factor
total_training_size_estimate = total_size_estimate * 4
self.assertEqual(precision_str, output[i][0], f"Output is missing precision `{precision_str}`")
self.assertEqual(
largest_layer_estimate,
output[i][1],
f"Calculation for largest layer size in `{precision_str}` is incorrect.",
)
self.assertEqual(
total_size_estimate,
output[i][2],
msg=f"Calculation for total size in `{precision_str}` is incorrect.",
)
self.assertEqual(
total_training_size_estimate,
output[i][3],
msg=f"Calculation for total training size in `{precision_str}` is incorrect.",
)
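The loop above encodes the arithmetic behind `accelerate estimate-memory`: halving the precision halves both byte counts, and the training estimate is roughly four times the inference footprint. A small worked sketch using the same numbers:
total_size_fp32 = 433_249_280  # bert-base-cased total size in bytes, taken from the test above

for factor, name in ((1, "float32"), (2, "float16")):
    inference = total_size_fp32 / factor
    training = inference * 4  # rule of thumb applied in the assertions above
    print(f"{name}: ~{inference / 1e9:.2f} GB to load, ~{training / 1e9:.2f} GB to train")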
@require_transformers
def test_transformers_model(self):
args = self.parser.parse_args(["bert-base-cased", "--dtypes", "float32"])
output = gather_data(args)
# The largest layer and total size of the model in bytes
largest_layer, total_size = 89075712, 433249280
self.assertEqual(
largest_layer,
output[0][1],
f"Calculation for largest layer size in `fp32` is incorrect, expected {largest_layer} but received {output[0][1]}",
)
self.assertEqual(
total_size,
output[0][2],
f"Calculation for total size in `fp32` is incorrect, expected {total_size} but received {output[0][2]}",
)
@require_transformers
def test_no_split_modules(self):
# idefics-80b-instruct has ["IdeficsDecoderLayer", "IdeficsGatedCrossAttentionLayer"]
args = self.parser.parse_args(["HuggingFaceM4/idefics-80b-instruct", "--dtypes", "float32"])
output = gather_data(args)
# without factoring in `no_split` modules, the largest layer is 721420288 bytes
self.assertNotEqual(
output[0][1], 721420288, "Largest layer calculation incorrect, did not factor in `no_split` modules."
)
# the real answer is 3240165632 bytes
self.assertEqual(output[0][1], 3240165632)
@require_timm
def test_timm_model(self):
args = self.parser.parse_args(["timm/resnet50.a1_in1k", "--library_name", "timm"])
output = gather_data(args)
# The largest layer and total size of the model in bytes
largest_layer, total_size = 9437184, 102441032
self.assertEqual(
largest_layer,
output[0][1],
f"Calculation for largest layer size in `fp32` is incorrect, expected {largest_layer} but received {output[0][1]}",
)
self.assertEqual(
total_size,
output[0][2],
f"Calculation for total size in `fp32` is incorrect, expected {total_size} but received {output[0][2]}",
)


@ -0,0 +1,15 @@
compute_environment: LOCAL_MACHINE
deepspeed_config: {}
distributed_type: 'NO'
downcast_bf16: 'no'
fsdp_config: {}
machine_rank: 0
main_process_ip: null
main_process_port: null
main_training_function: main
mixed_precision: 'no'
num_machines: 1
num_processes: 1
use_cpu: false
invalid_key: "invalid_value"
another_invalid_key: "another_invalid_value"


@ -41,6 +41,7 @@ EXCLUDE_EXAMPLES = [
"fsdp_with_peak_mem_tracking.py",
"deepspeed_with_config_support.py",
"megatron_lm_gpt_pretraining.py",
"early_stopping.py",
]
@ -222,3 +223,7 @@ class FeatureExamplesTests(TempDirTestCase):
def test_local_sgd(self):
testargs = ["examples/by_feature/local_sgd.py"]
run_command(self._launch_args + testargs)
def test_early_stopping(self):
testargs = ["examples/by_feature/early_stopping.py"]
run_command(self._launch_args + testargs)


@ -22,7 +22,7 @@ import torch
from accelerate import Accelerator, DistributedDataParallelKwargs, GradScalerKwargs
from accelerate.state import AcceleratorState
from accelerate.test_utils import execute_subprocess_async, require_cuda, require_multi_gpu
-from accelerate.utils import KwargsHandler
+from accelerate.utils import AutocastKwargs, KwargsHandler, TorchDynamoPlugin, clear_environment
@dataclass
@ -32,7 +32,7 @@ class MockClass(KwargsHandler):
c: float = 3.0
-class DataLoaderTester(unittest.TestCase):
+class KwargsHandlerTester(unittest.TestCase):
def test_kwargs_handler(self):
# If no defaults are changed, `to_kwargs` returns an empty dict.
self.assertDictEqual(MockClass().to_kwargs(), {})
@ -63,6 +63,41 @@ class DataLoaderTester(unittest.TestCase):
cmd = ["torchrun", f"--nproc_per_node={torch.cuda.device_count()}", inspect.getfile(self.__class__)]
execute_subprocess_async(cmd, env=os.environ.copy())
@require_cuda
def test_autocast_kwargs(self):
kwargs = AutocastKwargs(enabled=False)
AcceleratorState._reset_state()
accelerator = Accelerator(mixed_precision="fp16")
a_float32 = torch.rand((8, 8), device=accelerator.device)
b_float32 = torch.rand((8, 8), device=accelerator.device)
c_float32 = torch.rand((8, 8), device=accelerator.device)
d_float32 = torch.rand((8, 8), device=accelerator.device)
with accelerator.autocast():
e_float16 = torch.mm(a_float32, b_float32)
assert e_float16.dtype == torch.float16
with accelerator.autocast(autocast_handler=kwargs):
# Convert e_float16 to float32
f_float32 = torch.mm(c_float32, e_float16.float())
assert f_float32.dtype == torch.float32
g_float16 = torch.mm(d_float32, f_float32)
# We should be back in fp16
assert g_float16.dtype == torch.float16
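A condensed sketch of the pattern the test exercises: an `AutocastKwargs(enabled=False)` handler passed to `accelerator.autocast` opens a float32 region even though the Accelerator itself runs in fp16 (a CUDA device is assumed):
import torch
from accelerate import Accelerator
from accelerate.utils import AutocastKwargs

accelerator = Accelerator(mixed_precision="fp16")
a = torch.rand((8, 8), device=accelerator.device)
b = torch.rand((8, 8), device=accelerator.device)

with accelerator.autocast():
    out_fp16 = torch.mm(a, b)  # autocast active -> float16

with accelerator.autocast(autocast_handler=AutocastKwargs(enabled=False)):
    out_fp32 = torch.mm(a, out_fp16.float())  # autocast disabled -> stays float32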
def test_torch_dynamo_plugin(self):
with clear_environment():
prefix = "ACCELERATE_DYNAMO_"
# nvfuser's dynamo backend name is "nvprims_nvfuser"
# use "nvfuser" here to cause exception if this test causes os.environ changed permanently
os.environ[prefix + "BACKEND"] = "aot_ts_nvfuser"
os.environ[prefix + "MODE"] = "reduce-overhead"
dynamo_plugin_kwargs = TorchDynamoPlugin().to_kwargs()
self.assertEqual(dynamo_plugin_kwargs, {"backend": "aot_ts_nvfuser", "mode": "reduce-overhead"})
if __name__ == "__main__":
ddp_scaler = DistributedDataParallelKwargs(bucket_cap_mb=15, find_unused_parameters=True)


@ -58,5 +58,5 @@ class MetricTester(unittest.TestCase):
def test_metric_gpu_multi(self):
print(f"Found {torch.cuda.device_count()} devices.")
cmd = ["torchrun", f"--nproc_per_node={torch.cuda.device_count()}", self.test_file_path]
-with patch_environment(omp_num_threads=1):
+with patch_environment(omp_num_threads=1, ACCELERATE_LOG_LEVEL="INFO"):
execute_subprocess_async(cmd, env=os.environ.copy())


@ -27,6 +27,7 @@ from accelerate.utils.modeling import (
check_device_map,
clean_device_map,
compute_module_sizes,
convert_file_size_to_int,
find_tied_parameters,
get_balanced_memory,
infer_auto_device_map,
@ -138,6 +139,16 @@ class ModelingUtilsTester(unittest.TestCase):
set_module_tensor_to_device(model, "linear1.weight", "cpu", value=model.linear1.weight, dtype=torch.float16)
self.assertEqual(model.linear1.weight.dtype, torch.float16)
def test_set_module_tensor_checks_shape(self):
model = ModelForTest()
tensor = torch.zeros((2, 2))
with self.assertRaises(ValueError) as cm:
set_module_tensor_to_device(model, "linear1.weight", "cpu", value=tensor)
self.assertEqual(
str(cm.exception),
'Trying to set a tensor of shape torch.Size([2, 2]) in "weight" (which has shape torch.Size([4, 3])), this look incorrect.',
)
def test_named_tensors(self):
model = nn.BatchNorm1d(4)
named_tensors = named_module_tensors(model)
@ -517,6 +528,10 @@ class ModelingUtilsTester(unittest.TestCase):
max_memory = get_balanced_memory(model, max_memory={0: 200, 1: 200})
self.assertDictEqual({0: 200, 1: 200}, max_memory)
# We should be able to set models on a non-contiguous sub-set of the available GPUs
max_memory = get_balanced_memory(model, max_memory={0: 200, 2: 200})
self.assertDictEqual({0: 200, 2: 200}, max_memory)
max_memory = get_balanced_memory(model, max_memory={0: 300, 1: 300})
self.assertDictEqual({0: 215, 1: 300}, max_memory)
@ -532,6 +547,10 @@ class ModelingUtilsTester(unittest.TestCase):
max_memory = get_balanced_memory(model, max_memory={0: 0, 1: 300, 2: 300})
self.assertDictEqual({0: 0, 1: 215, 2: 300}, max_memory)
# If we set a device to 0, it's not counted.
max_memory = get_balanced_memory(model, max_memory={0: 0, "cpu": 100})
self.assertDictEqual({0: 0, "cpu": 100}, max_memory)
@require_cuda
@require_safetensors
def test_load_state_dict(self):
@ -550,3 +569,31 @@ class ModelingUtilsTester(unittest.TestCase):
for param, device in device_map.items():
device = device if device != "disk" else "cpu"
self.assertEqual(loaded_state_dict[param].device, torch.device(device))
def test_convert_file_size(self):
result = convert_file_size_to_int("100MB")
self.assertEqual(result, 100 * (10**6))
result = convert_file_size_to_int("2GiB")
self.assertEqual(result, 2 * (2**30))
result = convert_file_size_to_int("512KiB")
self.assertEqual(result, 512 * (2**10))
result = convert_file_size_to_int("1.5GB")
self.assertEqual(result, 1.5 * (10**9))
result = convert_file_size_to_int("100KB")
self.assertEqual(result, 100 * (10**3))
result = convert_file_size_to_int(500)
self.assertEqual(result, 500)
with self.assertRaises(ValueError):
convert_file_size_to_int("5MBB")
with self.assertRaises(ValueError):
convert_file_size_to_int("5k0MB")
with self.assertRaises(ValueError):
convert_file_size_to_int("-1GB")


@ -20,7 +20,8 @@ import torch
import accelerate
from accelerate import Accelerator
-from accelerate.test_utils import execute_subprocess_async, require_multi_gpu
+from accelerate.big_modeling import dispatch_model
+from accelerate.test_utils import assert_exception, execute_subprocess_async, require_multi_gpu, skip
from accelerate.utils import patch_environment
@ -65,6 +66,24 @@ class MultiGPUTester(unittest.TestCase):
with patch_environment(omp_num_threads=1, cuda_visible_devices="0,1"):
execute_subprocess_async(cmd, env=os.environ.copy())
# Need to see why this test raises forking issues when run as a suite
@skip
@require_multi_gpu
def test_notebook_launcher(self):
"""
This test checks that the `notebook_launcher` will be able to initialize
a `PartialState` without issue
"""
cmd = [
"python",
"-m",
"accelerate.test_utils.scripts.test_notebook",
"--num_processes",
str(torch.cuda.device_count()),
]
with patch_environment(omp_num_threads=1):
execute_subprocess_async(cmd, env=os.environ.copy())
if __name__ == "__main__":
accelerator = Accelerator()
@ -93,3 +112,22 @@ if __name__ == "__main__":
# Raise error at the end to make sure we don't stop at the first failure.
if len(error_msg) > 0:
raise ValueError(error_msg)
# Check device_map
accelerator.print("Test `device_map` cannot be prepared.")
class ModelForTest(torch.nn.Module):
def __init__(self):
super().__init__()
self.linear1 = torch.nn.Linear(3, 4)
self.batchnorm = torch.nn.BatchNorm1d(4)
self.linear2 = torch.nn.Linear(4, 5)
def forward(self, x):
return self.linear2(self.batchnorm(self.linear1(x)))
device_map = {"linear1": 0, "batchnorm": "cpu", "linear2": 1}
model = ModelForTest()
dispatch_model(model, device_map=device_map)
with assert_exception(ValueError, "You can't train a model that has been loaded with"):
model = accelerator.prepare_model(model)


@ -19,7 +19,7 @@ import torch
from accelerate import Accelerator
from accelerate.state import AcceleratorState
-from accelerate.test_utils import require_cpu
+from accelerate.test_utils import require_cpu, require_cuda
@require_cpu
@ -34,3 +34,52 @@ class OptimizerTester(unittest.TestCase):
except Exception as e:
self.fail(f"Accelerated optimizer pickling failed with {e}")
AcceleratorState._reset_state()
@require_cuda
class CudaOptimizerTester(unittest.TestCase):
def test_accelerated_optimizer_step_was_skipped(self):
model = torch.nn.Linear(5, 5)
optimizer = torch.optim.SGD(model.parameters(), 0.1)
accelerator = Accelerator(mixed_precision="fp16")
model, optimizer = accelerator.prepare(model, optimizer)
loss = model(torch.randn(2, 5, device=accelerator.device)).sum()
accelerator.backward(loss)
for p in model.parameters():
# Fake the gradients, as if there's no overflow
p.grad.fill_(0.01)
optimizer.step()
self.assertTrue(optimizer.step_was_skipped is False)
loss = model(torch.randn(2, 5, device=accelerator.device)).sum()
accelerator.backward(loss)
for p in model.parameters():
p.grad.fill_(0.01)
# Manually set the gradients to be NaN, as if there's an overflow
p.grad[0] = torch.tensor(float("nan"))
optimizer.step()
self.assertTrue(optimizer.step_was_skipped is True)
loss = model(torch.randn(2, 5, device=accelerator.device)).sum()
accelerator.backward(loss)
for p in model.parameters():
p.grad.fill_(0.01)
# Manually set the gradients to be NaN, as if there's an overflow
p.grad[0] = torch.tensor(float("nan"))
optimizer.step()
self.assertTrue(optimizer.step_was_skipped is True)
loss = model(torch.randn(2, 5, device=accelerator.device)).sum()
accelerator.backward(loss)
for p in model.parameters():
# Fake the gradients, as if there's no overflow
p.grad.fill_(0.01)
optimizer.step()
self.assertTrue(optimizer.step_was_skipped is False)
AcceleratorState._reset_state()
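In a training loop, `step_was_skipped` is what keeps the LR scheduler in sync with the grad scaler when an fp16 overflow forces a skipped update. A minimal sketch, assuming `model`, `optimizer`, `scheduler` and `dataloader` have already been passed through `accelerator.prepare`:
for batch in dataloader:
    optimizer.zero_grad()
    loss = model(batch).sum()
    accelerator.backward(loss)
    optimizer.step()
    # on an inf/nan overflow the scaler skips the update, so don't advance the scheduler
    if not optimizer.step_was_skipped:
        scheduler.step()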


@ -19,6 +19,8 @@ import random
import shutil
import tempfile
import unittest
import uuid
from contextlib import contextmanager
import pytest
import torch
@ -201,6 +203,71 @@ class CheckpointTest(unittest.TestCase):
self.assertEqual(opt_state1, opt_state3)
self.assertEqual(ground_truth_rands, test_rands)
def test_can_resume_training_checkpoints_relative_path(self):
# See #1983
# This test is like test_can_resume_training but uses a relative path for the checkpoint and automatically
# infers the checkpoint path when loading.
@contextmanager
def temporary_relative_directory():
# This is equivalent to tempfile.TemporaryDirectory() except that it returns a relative path
rand_dir = f"test_path_{uuid.uuid4()}"
os.mkdir(rand_dir)
try:
yield rand_dir
finally:
shutil.rmtree(rand_dir)
with temporary_relative_directory() as tmpdir:
set_seed(42)
model = DummyModel()
optimizer = torch.optim.Adam(params=model.parameters(), lr=1e-3)
train_dataloader, valid_dataloader = dummy_dataloaders()
project_config = ProjectConfiguration(automatic_checkpoint_naming=True)
# Train baseline
accelerator = Accelerator(project_dir=tmpdir, project_config=project_config)
model, optimizer, train_dataloader, valid_dataloader = accelerator.prepare(
model, optimizer, train_dataloader, valid_dataloader
)
# Save initial
accelerator.save_state()
(a, b) = model.a.item(), model.b.item()
opt_state = optimizer.state_dict()
ground_truth_rands = train(3, model, train_dataloader, optimizer, accelerator)
(a1, b1) = model.a.item(), model.b.item()
opt_state1 = optimizer.state_dict()
# Train partially
set_seed(42)
model = DummyModel()
optimizer = torch.optim.Adam(params=model.parameters(), lr=1e-3)
train_dataloader, valid_dataloader = dummy_dataloaders()
project_config = ProjectConfiguration(iteration=1, automatic_checkpoint_naming=True)
accelerator = Accelerator(project_dir=tmpdir, project_config=project_config)
model, optimizer, train_dataloader, valid_dataloader = accelerator.prepare(
model, optimizer, train_dataloader, valid_dataloader
)
accelerator.load_state() # <= infer the directory automatically
(a2, b2) = model.a.item(), model.b.item()
opt_state2 = optimizer.state_dict()
self.assertEqual(a, a2)
self.assertEqual(b, b2)
self.assertEqual(opt_state, opt_state2)
test_rands = train(2, model, train_dataloader, optimizer, accelerator)
# Save everything
accelerator.save_state()
# Load everything back in and make sure all states work
accelerator.load_state(os.path.join(tmpdir, "checkpoints", "checkpoint_1"))
test_rands += train(1, model, train_dataloader, optimizer, accelerator)
(a3, b3) = model.a.item(), model.b.item()
opt_state3 = optimizer.state_dict()
self.assertEqual(a1, a3)
self.assertEqual(b1, b3)
self.assertEqual(opt_state1, opt_state3)
self.assertEqual(ground_truth_rands, test_rands)
def test_invalid_registration(self):
t = torch.tensor([1, 2, 3])
t1 = torch.tensor([2, 3, 4])
@ -238,6 +305,35 @@ class CheckpointTest(unittest.TestCase):
accelerator.load_state(os.path.join(tmpdir, "checkpoints", "checkpoint_0"))
self.assertEqual(scheduler_state, scheduler.state_dict())
def test_automatic_loading(self):
with tempfile.TemporaryDirectory() as tmpdir:
set_seed(42)
model = DummyModel()
optimizer = torch.optim.Adam(params=model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.99)
train_dataloader, valid_dataloader = dummy_dataloaders()
project_config = ProjectConfiguration(automatic_checkpoint_naming=True)
# Train baseline
accelerator = Accelerator(project_dir=tmpdir, project_config=project_config)
model, optimizer, train_dataloader, valid_dataloader, scheduler = accelerator.prepare(
model, optimizer, train_dataloader, valid_dataloader, scheduler
)
# Save initial
accelerator.save_state()
train(2, model, train_dataloader, optimizer, accelerator, scheduler)
(a2, b2) = model.a.item(), model.b.item()
# Save a first time
accelerator.save_state()
train(1, model, train_dataloader, optimizer, accelerator, scheduler)
(a3, b3) = model.a.item(), model.b.item()
# Load back in the last saved checkpoint, should point to a2, b2
accelerator.load_state()
self.assertNotEqual(a3, model.a.item())
self.assertNotEqual(b3, model.b.item())
self.assertEqual(a2, model.a.item())
self.assertEqual(b2, model.b.item())
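Both checkpoint tests above rely on the same automatic-naming behaviour: with `automatic_checkpoint_naming=True`, `save_state()` writes `checkpoints/checkpoint_N` under the project directory, and `load_state()` without an argument restores the most recent one. A short sketch (the project directory and the prepared objects are placeholders):
from accelerate import Accelerator
from accelerate.utils import ProjectConfiguration

project_config = ProjectConfiguration(automatic_checkpoint_naming=True)
accelerator = Accelerator(project_dir="runs/exp1", project_config=project_config)
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

accelerator.save_state()  # -> runs/exp1/checkpoints/checkpoint_0
accelerator.save_state()  # -> runs/exp1/checkpoints/checkpoint_1
accelerator.load_state()  # no path given: the latest checkpoint (checkpoint_1) is restored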
def test_checkpoint_deletion(self):
with tempfile.TemporaryDirectory() as tmpdir:
set_seed(42)


@ -15,13 +15,17 @@
import os
import pickle
import unittest
import warnings
from collections import UserDict, namedtuple
from unittest.mock import Mock, patch
import torch
from accelerate.state import PartialState
from accelerate.test_utils.testing import require_cuda, require_torch_min_version
from accelerate.test_utils.training import RegressionModel
from accelerate.utils import (
check_os_kernel,
convert_outputs_to_fp32,
extract_model_from_parallel,
find_device,
@ -36,6 +40,10 @@ ExampleNamedTuple = namedtuple("ExampleNamedTuple", "a b c")
class UtilsTester(unittest.TestCase):
def setUp(self):
# logging requires initialized state
PartialState()
def test_send_to_device(self):
tensor = torch.randn(5, 2)
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
@ -103,6 +111,25 @@ class UtilsTester(unittest.TestCase):
self.assertNotIn("AA", os.environ)
self.assertNotIn("BB", os.environ)
def test_patch_environment_key_exists(self):
# check that patch_environment correctly restores pre-existing env vars
with patch_environment(aa=1, BB=2):
self.assertEqual(os.environ.get("AA"), "1")
self.assertEqual(os.environ.get("BB"), "2")
with patch_environment(Aa=10, bb="20", cC=30):
self.assertEqual(os.environ.get("AA"), "10")
self.assertEqual(os.environ.get("BB"), "20")
self.assertEqual(os.environ.get("CC"), "30")
self.assertEqual(os.environ.get("AA"), "1")
self.assertEqual(os.environ.get("BB"), "2")
self.assertNotIn("CC", os.environ)
self.assertNotIn("AA", os.environ)
self.assertNotIn("BB", os.environ)
self.assertNotIn("CC", os.environ)
def test_can_undo_convert_outputs(self):
model = RegressionModel()
model._original_forward = model.forward
@ -154,3 +181,27 @@ class UtilsTester(unittest.TestCase):
self.assertEqual(find_device([1, "a", torch.tensor([1, 2, 3])]), torch.device("cpu"))
self.assertEqual(find_device({"a": 1, "b": torch.tensor([1, 2, 3])}), torch.device("cpu"))
self.assertIsNone(find_device([1, "a"]))
def test_check_os_kernel_no_warning_when_release_gt_min(self):
# min version is 5.5
with patch("platform.uname", return_value=Mock(release="5.15.0-35-generic", system="Linux")):
with warnings.catch_warnings(record=True) as w:
check_os_kernel()
self.assertEqual(len(w), 0)
def test_check_os_kernel_no_warning_when_not_linux(self):
# system must be Linux
with patch("platform.uname", return_value=Mock(release="5.4.0-35-generic", system="Darwin")):
with warnings.catch_warnings(record=True) as w:
check_os_kernel()
self.assertEqual(len(w), 0)
def test_check_os_kernel_warning_when_release_lt_min(self):
# min version is 5.5
with patch("platform.uname", return_value=Mock(release="5.4.0-35-generic", system="Linux")):
with self.assertLogs() as ctx:
check_os_kernel()
self.assertEqual(len(ctx.records), 1)
self.assertEqual(ctx.records[0].levelname, "WARNING")
self.assertIn("5.4.0", ctx.records[0].msg)
self.assertIn("5.5.0", ctx.records[0].msg)


@ -17,6 +17,7 @@ https://github.com/allenai/allennlp.
"""
import os
from datetime import datetime as dt
from datetime import timezone
from github import Github
@ -36,7 +37,7 @@ def main():
for issue in open_issues:
comments = sorted([comment for comment in issue.get_comments()], key=lambda i: i.created_at, reverse=True)
last_comment = comments[0] if len(comments) > 0 else None
-current_time = dt.utcnow()
+current_time = dt.now(timezone.utc)
days_since_updated = (current_time - issue.updated_at).days
days_since_creation = (current_time - issue.created_at).days
if (