Compare commits

...

263 Commits

Author SHA1 Message Date
a54cd0abd8 Release: v0.13.0 2022-10-05 14:24:25 -04:00
5fff81bac8 Auto grad accum example (#742)
* Auto grad accum example

* Include auto grad accum in exclusion list

* Typo fix calculate -> calculate

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-10-05 11:42:08 -04:00
a75a56f1c2 Include examples for CI (#740) 2022-10-04 15:55:46 -04:00
b437b8b893 Fix memory leak (#739)
* Fix memory example

* Include update to docs

* batch size
2022-10-04 15:55:40 -04:00
ffca93b4a9 trlx (#738) 2022-10-04 10:23:01 -04:00
e5c9b4f2ce Add an example zoo to the documentation (#737)
* Training zoo

* Reword
2022-10-03 14:44:55 -04:00
9eb9aeefaf Add a tutorial on proper benchmarking (#734)
* Performance tut

* toc

* Apply suggestions from code review - Tips will be the death of me

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-10-03 13:55:13 -04:00
6ab88253cc Remove auto-bug label in issue template (#735) 2022-10-03 12:53:27 -04:00
870a7badc4 Allow for GPU-ID specification on CLI (#732)
* Specify GPU ids on CLI

* Configurable gpu-ids

* Expand to deepspeed

* all

* Fix nit

* Fix typo in docs

* further tweaks

* Further tweaks

* Change for mps specifically
2022-09-30 15:35:54 -04:00
9e4fe78b95 Fix issue with one-cycle logic (#728)
* Fixed!

* Fix and write tests
2022-09-28 16:35:36 -04:00
f3c39b4c9c Fix old naming (#727) 2022-09-28 12:00:22 -04:00
2088172c9f Make running tests more efficient (#611)
* Restructure actions and make running tests more efficient

* Try with source code adjustment

* First make sure they work

* Don't move

* Local workflows reference

* Keep it as a step

* Try changing a line

* Try not using tertiary

* Fix test

* Make tests wait

* Remove linechange

* Include and run based on new setup

* Try with removing workflow

* Re-add in, it works!

* Rename for clarity
2022-09-28 11:53:14 -04:00
68fad169e6 Build and Release docker images on a release (#725)
* Docker on release

* Releases

* FOR TESTING, REVERT ONCE DONE

* With checkout

* Revert, works!

* published

* Accidental regression
2022-09-28 06:58:00 -04:00
d21c213318 Fix default for num processes (#726)
* Fix default for num processes

* Apply suggestions from code review

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2022-09-27 17:09:51 -04:00
40bd4aa5ce Fix regression issue (#724) 2022-09-27 12:47:48 -04:00
6d038e19a1 Specify gradients in model preparation (#722)
* Specify when a model doesn't need to be prepared more
2022-09-26 14:19:29 -04:00
b67b760f66 Allow custom device placements for different objects (#716)
* Allow custom device placements for different objects

* Update src/accelerate/accelerator.py

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

* Style

* Make doc-builder happy

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2022-09-23 11:31:15 -04:00
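As a hedged illustration of the per-object placement feature above (not taken from the PR itself), the sketch below assumes the `device_placement` keyword on `Accelerator.prepare` takes one boolean per passed object:

```python
import torch
from accelerate import Accelerator

model = torch.nn.Linear(8, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

accelerator = Accelerator()
# One flag per prepared object: move the model to the accelerator's device,
# but skip automatic placement for the optimizer.
model, optimizer = accelerator.prepare(model, optimizer, device_placement=[True, False])
```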
56ce94dc29 More docstring nits (#715)
* More docstring examples + nits

* Just use module since everything is wrapped

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-09-23 10:20:59 -04:00
8b16276a41 refactor(accelerate): readability improvements (#713)
* refactor(accelerate): readability improvements

Signed-off-by: Ryan Russell <git@ryanrussell.org>

* docs: `all` fixup

Signed-off-by: Ryan Russell <git@ryanrussell.org>

* Style

Signed-off-by: Ryan Russell <git@ryanrussell.org>
Co-authored-by: Sylvain Gugger <Sylvain.gugger@gmail.com>
2022-09-22 09:36:05 -04:00
6a39d010d7 sagemaker fixes and improvements (#708)
* adding aws sagemaker examples to examples readme

* refactoring and correcting documentation
2022-09-22 10:56:46 +05:30
82a7afdde2 docs: hooks readability improvements (#712)
Signed-off-by: Ryan Russell <git@ryanrussell.org>

Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-09-21 16:49:41 -04:00
a5d0278055 refactor(test_tracking): key_occurrence readability fixup (#710)
Signed-off-by: Ryan Russell <git@ryanrussell.org>

Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-09-21 16:26:35 -04:00
9ba82f9ca4 docs: utils readability fixups (#711)
Signed-off-by: Ryan Russell <git@ryanrussell.org>

Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-09-21 16:26:05 -04:00
293a17b4f7 docs: examples readability improvements (#709)
Signed-off-by: Ryan Russell <git@ryanrussell.org>

Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-09-21 15:57:36 -04:00
efb33d67ea Update runners with report structure, adjust env variable (#704)
* Fixup rest of the runners

* Install pytest-reportlog

* Use more explicit env var

* Fixup
2022-09-20 10:10:58 -04:00
6dc429f6f7 Add in report generation for test failures and make fail-fast false (#703)
* Add logging
2022-09-19 17:24:46 -04:00
9dfc6da9ad [doc] Fix 404'd link in memory usage guides (#702)
* Fix 404'd link in memory usage guides

* Add a dot to the final sentence
2022-09-16 07:34:17 -04:00
1044c30cb1 override DeepSpeed grad_acc_steps from accelerator obj (#698)
* override DeepSpeed `grad_acc_steps` from `accelerator` obj

* resolving comment
2022-09-15 00:37:03 +05:30
4f0a1102d1 Improve init_empty_weights to override tensor constructor (#699)
* Prevent module constructor from building tensor in cpu and then move it to meta

* Patch torch.load

* Maybe the hack to override torch.load is too dangerous?

* Make style

* No need to override torch.load as one can just load from config instead

* Not sure why there's an include_buffers argument, but we need to override the tensor constructor only when the include_buffers argument is True
2022-09-14 18:14:51 +02:00
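For context, a small sketch of the `init_empty_weights` context manager this PR refines; the `include_buffers` flag is the one discussed in the last bullet above:

```python
import torch
from accelerate import init_empty_weights

# Parameters (and, with include_buffers=True, buffers) created inside the
# context manager land on the "meta" device, so no real memory is allocated
# while the model skeleton is being built.
with init_empty_weights(include_buffers=True):
    model = torch.nn.Sequential(torch.nn.Linear(10_000, 10_000), torch.nn.ReLU())

print(next(model.parameters()).device)  # meta
```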
8d275977c3 fixing rng sync when using custom sampler and batch_sampler (#696)
* fixing rng sync when using custom sampler and batch_sampler

* addressing comments

* 

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-09-12 20:16:06 +05:30
84444658a6 fixing support for Apple Silicon GPU in notebook_launcher (#695) 2022-09-12 17:18:49 +05:30
bc70074350 Fix DataLoader with samplers that are batch samplers (#687) 2022-09-09 11:49:19 -04:00
293757d2ae rng state sync for FSDP (#688) 2022-09-09 17:34:52 +05:30
98823de572 Clean up DispatchDataloader a bit more (#686) 2022-09-07 13:13:15 -04:00
2b08b27bed Fix skip in dispatch dataloaders (#682)
* Fix skip in dispatch dataloaders

* Remove skip altogether

* Fix last occurrence
2022-09-07 07:44:36 -04:00
c69659ce39 🐛 fix (#683) 2022-09-06 21:36:00 +05:30
4274a419ef adding torchrun elastic params (#680) 2022-09-06 20:24:16 +05:30
4400eb90b2 DeepSpeed launcher related changes (#626)
* launcher related changes + minor fixes

* removing minor fixes

* remove minor change

* deepspeed multinode standard launcher

* undo

* fixing the multi-node standard launcher
2022-09-06 17:36:19 +05:30
200546c5d3 deepspeed enhancements and fixes (#676)
* deepspeed enhancement and fixes

* refactor code

* 🐛 fix

* 😅
2022-09-06 17:30:35 +05:30
60d6807c36 Test for min torch version + fix all issues (#638)
* Test for min torch

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-09-02 16:56:35 -04:00
3ab46514c9 Specify local network on multinode (#674)
* Specify local

* Update src/accelerate/commands/config/cluster.py

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
2022-09-02 16:48:23 -04:00
c9a88a8e06 Add aim tracker for accelerate (#649)
* Add aim tracker for accelerate

* Use close and name arg specifically


* Fix nits
2022-09-02 16:39:38 -04:00
a2a369e026 Make rich an optional dep (#673)
* Make rich an optional dep

* lagging import fix
2022-09-02 15:43:18 -04:00
44be28fbef Fix multi-node issues from launch (#672)
* Use different bits based on cloud vs non

* rdvz_backend fix
2022-09-02 15:04:49 -04:00
cf1e8dce75 Manim animation of big model inference (#671)
* Manim animation of big model inference

* Make into big section, not small

* Revert back to old style of headers
2022-09-02 10:34:46 -04:00
52c2b1c244 Cache torch_tpu check (#670) 2022-09-01 10:38:38 -04:00
efa8e7f89b accelerate bibtex (#660) 2022-09-01 08:19:57 +05:30
5e5148852b Improve docstrings more (#666) 2022-08-31 21:54:18 -04:00
00f47d035e Use debug for loggers (#655) 2022-08-31 11:29:35 -04:00
cb54e1023e Saving hyperparams in yaml file for Tensorboard for #521 (#657)
* Saving hyperparams in yaml file for Tensorboard

* Saving yaml file in logging dir

* Changing hardcoded path

* Adding try/catch, cleaning path name

* Raise error

* Updating path name

* Path create
2022-08-29 11:14:44 -04:00
d0f5f4a630 Small nits to grad accum docs (#656)
* Small nits to docs

* Be explicit on one vs other

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
2022-08-26 06:22:07 -04:00
469b61e0bf Add static_graph arg to DistributedDataParallelKwargs. (#637)
* Add static_graph arg to DistributedDataParallelKwargs.

supported by https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html

This is particularly useful when using gradient checkpointing

See https://discuss.pytorch.org/t/ddp-and-gradient-checkpointing/132244/3 for more details

* Add 1.11 warning for static graph argument.
2022-08-20 15:30:59 -04:00
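A hedged sketch of how the new flag would typically be passed through Accelerate's kwargs handlers (assuming the field mirrors the underlying DDP argument of the same name):

```python
from accelerate import Accelerator, DistributedDataParallelKwargs

# static_graph=True tells DDP that the set of used/unused parameters will not
# change between iterations, which is what makes gradient checkpointing work
# smoothly with DDP (see the linked PyTorch discussion).
ddp_kwargs = DistributedDataParallelKwargs(static_graph=True)
accelerator = Accelerator(kwargs_handlers=[ddp_kwargs])
```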
4484438626 fix link (#645) 2022-08-20 14:15:56 +02:00
36420f53f3 remove check for main process for trackers initialization (#643)
* remove check for main process for trackers initialization

* removed is_main_process check for trackers initialization
2022-08-20 07:08:22 -04:00
a3d94916a8 make init_trackers to launch on main process (#642) 2022-08-19 09:20:33 -04:00
b0f8189d34 Put back in place the guard (#634) 2022-08-12 15:21:55 -04:00
55907ef1fb Use torchrun for multinode (#631)
* Distrib launch with config

* Add param for rdvz
2022-08-12 13:06:22 -04:00
e31d8ecaf1 minor tracker fixes for complete* examples (#630)
* minor tracker fixes for complete* examples

* state repr minor fix
2022-08-12 21:32:22 +05:30
cd46dc2f4f update MPS support docs (#629) 2022-08-12 08:49:18 -04:00
5020788db8 Integrate Rich into Accelerate (#613)
Pretty error logs are here 🤗
2022-08-11 12:59:55 -04:00
010aa93cbc Fix multi-node issues and simplify param logic (#627)
* Less hacky version for args, fix multinode param
2022-08-11 12:56:33 -04:00
92341b6233 M1 mps fixes (#625)
* M1 mps fixes

* Update src/accelerate/state.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-08-11 21:03:57 +05:30
9fd08d79f9 Fully remove subprocess from the multi-gpu launcher (#623)
* Remove one of the subprocesses!
2022-08-10 11:00:46 -04:00
2656ca619f Update README.md (#622) 2022-08-09 15:00:14 -04:00
4df9010b70 Fix example (#620) 2022-08-09 12:32:50 -04:00
94b8c17b4a Added GANs example to examples (#619)
* Added link to example of Accelerator with GANs

* Update README.md

* Update examples/README.md

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-08-09 11:59:19 -04:00
35e1cd3978 Trigger doc build 2022-08-09 08:38:05 -04:00
a08779f603 Fix DeepSpeed CI (#612)
* Try with integration on makefile
2022-08-08 14:54:56 -04:00
efc7aeb064 Fix typo in docs/index.mdx (#610) 2022-08-08 18:34:08 +02:00
080f4bd7c1 v.0.13.0.dev0 2022-08-04 09:04:05 -04:00
9a660e082f fixing deepspeed slow tests issue (#604)
* fixing deepspeed slow tests issue

* skip `checkpointing` test as it leads to RAM overuse

* disabling fsdp cpu offload mem test
2022-08-04 17:59:54 +05:30
0bb808276a add more conditions on casting (#606) 2022-08-04 08:22:16 -04:00
67d68b8adf Remove redundant .run in WandBTracker. (#605) 2022-08-04 07:23:22 -04:00
24c28a1adc Fix some typos + wordings (#603)
* Fix all typos + wordings

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-08-03 11:19:20 -04:00
afa7490ff4 M1 GPU mps device integration (#596)
* fixing metric computation

* refactoring

* Mac M1 GPU `mps` device support

* Update state.py

* reverting the `nlp_example.py` changes from the copied branch

* resolve comments

Co-Authored-By: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* docs quality

* Update docs/source/usage_guides/mps.mdx

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

* resolving comments

* resolving comments

Co-Authored-By: Zachary Mueller <7831895+muellerzr@users.noreply.github.com>

* resolving comments

* resolving comments

* resolving comments on docs

Co-Authored-By: Zachary Mueller <7831895+muellerzr@users.noreply.github.com>

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
Co-authored-by: Zachary Mueller <7831895+muellerzr@users.noreply.github.com>
2022-08-03 18:55:57 +05:30
b10fd818f9 reorg of test scripts and minor changes to tests (#602)
* reorg of test scripts and minor changes to tests

* adding the recent fix of deepspeed
2022-08-03 18:03:43 +05:30
8944975a3c Reenable Gather for Metrics (#590)
* Clean and finish

Co-authored-by: Sylvain Gugger <Sylvain.gugger@gmail.com>
2022-08-02 13:45:17 -04:00
15a8c6c7be Move warning (#598) 2022-08-02 13:42:08 -04:00
b52b793ea8 Shorthand way to grab a tracker (#594)
* Enable grabbing the underlying tracker
2022-08-02 09:12:32 -04:00
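A rough illustration of the shorthand, assuming a tracker such as wandb is installed and configured:

```python
from accelerate import Accelerator

accelerator = Accelerator(log_with="wandb")
accelerator.init_trackers("my_project")

# Grab the underlying tracker object to call backend-specific methods that the
# generic accelerator.log(...) API does not expose.
wandb_tracker = accelerator.get_tracker("wandb")
```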
5dd4eaf6fa Pin deepspeed (#595) 2022-08-02 09:11:34 -04:00
29a222a261 Improve docstring (#591) 2022-08-01 17:41:27 -04:00
217dd69682 TESTS! (#589) 2022-08-01 15:56:02 -04:00
7a5a96b7b2 Fix DispatchDataloader (#588)
* Fix DispatchDataloader

* Fix last bug

* Revert part of the test fixes
2022-08-01 15:55:35 -04:00
447ad0e635 Complete revamp of the docs (#495)
Completely revamp the entirety of the Accelerate documentation
2022-08-01 10:09:14 -04:00
d5a0fc2d62 Small fixed for balanced device maps (#583) 2022-07-28 15:27:27 -04:00
7f5c60c182 Use main_process_first in the examples (#581) 2022-07-28 12:11:07 -04:00
503057132d Skip and raise NotImplementedError for gather_for_metrics for now (#580)
* Skip and raise NotImplementedError for now
2022-07-28 11:56:00 -04:00
c826b51a82 minor FSDP launcher fix (#579) 2022-07-28 20:38:21 +05:30
e0212893ea Fix gather_for_metrics (#578)
* Fix gather_for_metrics
2022-07-27 14:20:52 -04:00
e809268580 Refine test in set_module_tensor_to_device (#577) 2022-07-27 11:36:48 -04:00
f438a813ff Fix set_module_tensor_to_device (#576)
* Fix

* Refine test

* Fix test
2022-07-27 09:46:12 -04:00
75053e45c3 Add 8 bit support - chapter II (#539)
* Meta init/tensor_to_device logic for Int8 Parameters.

* add 8 bit support

* add special modules support

Co-authored-by: timdettmers <timdettmers@users.noreply.github.com>

* bad formatting

* bad formatting

* restoring the poor lines that were alone!

* small hack

- replaced parameter replacement logic

* add int8 support - v1

* replace cpu by device

* better refactoring

* put to buffer

* add else statement to avoid breaking changes

* styling

Co-authored-by: Tim Dettmers <tim.dettmers@gmail.com>
Co-authored-by: timdettmers <timdettmers@users.noreply.github.com>
2022-07-27 07:12:49 -04:00
015f228c5e Fix tests, add wandb to gitignore (#573)
* Fix tests, add wandb to gitignore

* Clean
2022-07-26 16:08:35 -04:00
1486fa35b1 Fix step (#572) 2022-07-26 12:29:05 -04:00
7a49418e51 Speed up main CI (#571)
* Speed up ci by reducing training epochs
2022-07-26 11:35:18 -04:00
d26478b95d ccl version check and import different module according to version (#567)
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2022-07-26 10:11:05 -04:00
bf0017f0a8 set default num_cpu_threads_per_process to improve oob performance (#562)
* set default num_cpu_threads_per_process to improve oob performance

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

* fix log info

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2022-07-26 10:10:51 -04:00
e3642a469f Add a tqdm helper (#564)
* tqdm helper
2022-07-26 10:00:00 -04:00
d6b7536750 Rename actions to be a bit more accurate (#568)
* Run slow + rename

* Name message more accurately
2022-07-26 09:42:21 -04:00
5e25edd3b6 Fix clean (#569) 2022-07-26 09:26:05 -04:00
0c6bdc2c23 enhancements and fixes for FSDP and DeepSpeed (#532)
* checkpointing enhancements and fixes for FSDP and DeepSpeed

* resolving comments

1. Adding deprecation args and warnings in launcher for FSDP
2. Handling old configs to work with new launcher args wrt FSDP.
3. Reverting changes to public methods in `checkpointing.py` and handling it in `Accelerator`
4. Explicitly writing the defaults of various FSDP options in `dataclasses` for readability.

* fixes

1. FSDP wrapped model being added to the `_models`.
2. Not passing the env variables when args are None.

* resolving comments

* adding FSDP for all the collective operations

* adding deepspeed and fsdp tests

1. Removes mrpc datafiles and directly relies on HF datasets as it was throwing a `file not found` error when running from within the `tests` folder. Updating `mocked_dataloaders` as a result.
2. adding `test_performance.py`, `test_memory.py` and `test_checkpointing.py` for multi-gpu FSDP and DeepSpeed tests

* reverting `mocked_dataloader` changes

* adding FSDP tests

* data files revert

* excluding fsdp tests from `tests_core`

* try 2

* adding a time delay to avoid `torchrun` from crashing at times, which was causing flaky behaviour

* reducing the time of tests

* fixes

* fix

* fixes and reduce time further

* reduce time further and minor fixes

* adding a deepspeed basic e2e test for single gpu setup
2022-07-26 18:14:29 +05:30
91ff425bb0 fix: saving model weights (#556)
* fix: saving model weights

checkpointing not saving model weights if calling `accelerator.prepare_model` instead of `accelerator.prepare`
resolves issue: https://github.com/huggingface/accelerate/issues/555

* fix: saving model weights for optimizer and scheduler
2022-07-26 08:44:09 -04:00
cc1007163b Fix wrong indentation 2022-07-26 07:47:40 -04:00
7d97e9c641 add on_main_process decorators (#488)
* add some useful decorators

* make on_(local_)main_process member of Accelerator

* update examples

* add on_process and on_local_process

* fixes wrong name for `on_local_process`

* Update src/accelerate/accelerator.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/accelerate/accelerator.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/accelerate/accelerator.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-07-26 13:14:35 +02:00
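A short, hedged sketch of the decorators added here:

```python
from accelerate import Accelerator

accelerator = Accelerator()

@accelerator.on_main_process
def log_once(message):
    # Only the global main process executes the body; other ranks skip it.
    print(message)

@accelerator.on_local_main_process
def save_on_each_node():
    # Runs once per node (the local main process of each machine).
    print("local main process only")

log_once("hello from the main process")
save_on_each_node()
```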
Kim
f90ec5255b Update imports.py (#554)
torch_ccl rename
2022-07-26 13:07:37 +02:00
5391412d64 unpin datasets (#563) 2022-07-25 16:56:08 +02:00
6c4edc362f Create good defaults in accelerate launch (#553)
* Support not passing in args to launch
2022-07-22 09:40:59 -04:00
b08ae9730e Fix a few minor issues with example code in docs (#551)
* Fix a few minor issues with example code in docs

- enumerate is not actually used
- variable name "labels" does not match
- prepare method should be called

* Apply style
2022-07-22 14:39:15 +02:00
e98dc22a37 deepspeed version 0.6.7 fix (#544)
* deepspeed version hotfix

* Update setup.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* resolving the issue! yay 🤗

* resolving circular dependency issue 😅

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-07-22 11:51:06 +05:30
27d8d45817 Rename test extras to testing (#545)
* Extras test to testing

* Fix naming
2022-07-21 15:09:38 -04:00
fdf471519c Add production testing + fix failing CI (#547)
* Add production testing

* Fix CI failure on transformers
2022-07-21 14:32:27 -04:00
164943c7d7 Add a gather_for_metrics capability (#540)
* Add test and full implementation
2022-07-21 07:40:37 -04:00
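A hedged fragment showing the intended usage; `model`, `eval_dataloader`, and `accelerator` are assumed to have been created and passed through `accelerator.prepare(...)` earlier, and `evaluate` is just one possible metric backend:

```python
import torch
import evaluate

metric = evaluate.load("accuracy")

model.eval()
for batch in eval_dataloader:
    with torch.no_grad():
        outputs = model(**batch)
    predictions = outputs.logits.argmax(dim=-1)
    # gather_for_metrics drops the duplicate samples Accelerate adds to pad the
    # last batch, so the metric sees exactly len(eval_dataset) examples.
    predictions, references = accelerator.gather_for_metrics((predictions, batch["labels"]))
    metric.add_batch(predictions=predictions, references=references)

print(metric.compute())
```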
9c1e68849e Allow for kwargs to be passed to trackers (#542)
* Allow for kwarg passing to trackers

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
2022-07-21 07:30:55 -04:00
d6c72bdff6 Add balanced option for auto device map creation (#534)
* Add balanced option for auto device map creation

* More options

* Add low0 option

* Add documentation

* Add tests

* Fix tests

* Update docs/source/big_modeling.mdx

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2022-07-20 17:39:52 +02:00
158acdd22c Add support for downcasting bf16 on TPUs (#523)
* Allow for downcast
2022-07-20 05:50:08 -04:00
f6df405b5c Add more documentation for device maps computations (#530)
* Add more documentation

* Unbreak navbar

* Apply suggestions from code review

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

* Address review comments

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
2022-07-20 09:11:54 +02:00
7cf13b229f Restyle prepare one (#531) 2022-07-18 11:53:56 -04:00
e965b56bb3 Pick a better default for offload_state_dict (#529) 2022-07-18 16:55:59 +02:00
ddedeb4062 fix some parameter setting does not work for CPU DDP and bf16 fail in… (#527)
* fix some parameter setting does not work for CPU DDP and bf16 fail in DDP path

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

* if number_machine > 1, get the ip and port accelerate config set

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

* if main_process_ip and port is set by user, use them, else use default "127.0.0.1" when DDP is used in one machine

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2022-07-18 15:19:52 +02:00
0ee319b39b Really v0.12.0.dev0 2022-07-18 09:14:15 -04:00
ae5ca34f13 v0.12.0.dev0 2022-07-18 08:55:50 -04:00
eebeb59a36 Fix accelerate tests command (#528) 2022-07-18 14:47:34 +02:00
be4b74f42f Release: v0.11.0 2022-07-18 08:27:58 -04:00
c93b3eb5d7 FSDP integration enhancements and fixes (#522)
* FSDP integration enhancements and fixes

* bug fixes

1. fix circular dependency
2. Add model print statement in FSDP example
3. minor fixes

* removing `always_wrap` as it is rarely useful

* removing comment

* resolving comments

* fsdp fp16 mp uses ShardedGradScaler

* fix import

* fix check

* add exception when class to wrap not found in model

* adding `FSDP_BACKWARD_PREFETCH`

* fix
2022-07-18 17:45:58 +05:30
3eea8ceee0 Warn user if no trackers are installed (#524) 2022-07-15 18:16:00 +02:00
7abc708be2 Fixup all example CI tests and properly fail (#517)
* Clean and make all tests pass
2022-07-15 18:15:45 +02:00
bb78b04cce fixing deepspeed multi-node launcher (#514)
* fixing deepspeed multi-node launcher

* minor fixes

* handling env variables for accelerate to correctly work

* resolving comments
2022-07-14 18:40:48 +05:30
7e6593756f Add special Parameters modules support (#519)
* Meta init/tensor_to_device logic for Int8 Parameters.

* add 8 bit support

* add special modules support

Co-authored-by: timdettmers <timdettmers@users.noreply.github.com>

* bad formatting

* bad formatting

* restoring the poor lines that were alone!

Co-authored-by: Tim Dettmers <tim.dettmers@gmail.com>
Co-authored-by: timdettmers <timdettmers@users.noreply.github.com>
2022-07-13 12:46:36 -04:00
960fd9d86a Don't unwrap in save_state() (#489) 2022-07-13 12:46:21 -04:00
70ca65a9a1 Fix a bug when reduce a tensor. (#513)
* return reduced result

* update doc for Accelerator.reduce

* update doc in Accelerator.reduce

* fix reduce behavior for gpu devices
2022-07-13 09:19:01 -04:00
ea0d5368bd Add benchmarks (#506)
* Add benchmarks

* Oops! Forgot one file

* Update benchmarks/README.md

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2022-07-12 15:16:45 -04:00
78357f44b3 Add gradient accumulation doc (#511)
* Gradient accumulation doc

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-07-12 17:36:45 +02:00
c7526e9483 Make gradient accumulation work with dispatched dataloaders (#510)
* Make grad accum work with dispatch dl

* Split print over multiple lines
2022-07-12 17:12:39 +02:00
f5ef120e77 Fix DispatchDataLoader length when split_batches=True (#509) 2022-07-12 10:35:35 -04:00
3c1f97c386 SageMaker enhancements to allow custom docker image, input channels referring to s3/remote data locations and metrics logging (#504)
* SageMaker DP and MP Support

* fix 😅

* removing SageMaker MP option

* adding support for custom image_uri, data location and metrics
2022-07-12 13:25:52 +05:30
a0514dd809 SageMaker DP Support (#494)
* SageMaker DP and MP Support

* fix 😅

* removing SageMaker MP option
2022-07-09 00:14:57 +05:30
b20f90ab17 Fix scheduler in gradient accumulation example (#500)
* Fix scheduler in gradient accumulation example

* Phrase better how the scheduler is stepped during grad accum

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-07-08 13:41:43 -04:00
cfb2a3e239 update dataloader wrappers to have total_batch_size attribute (#493)
* update dataloader wrappers to have `total_batch_size` attribute

* fix

* resolving comments

* Update src/accelerate/data_loader.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* quality

* add docstrings

* Update src/accelerate/data_loader.py

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

* docstrings iter 2 + quality + minor change in doc

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2022-07-08 21:16:31 +05:30
86ce737d7f Introduce automatic gradient accumulation wrapper + fix a few test issues (#484)
* Have accelerator handle gradient accumulation

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-07-05 15:49:36 -04:00
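A hedged sketch of the wrapper; `model`, `optimizer`, `dataloader`, and `loss_fn` stand in for a real training setup:

```python
from accelerate import Accelerator

accelerator = Accelerator(gradient_accumulation_steps=4)
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for inputs, targets in dataloader:
    # accumulate() skips the optimizer step (and gradient sync) until the
    # configured number of micro-batches has been processed.
    with accelerator.accumulate(model):
        loss = loss_fn(model(inputs), targets)
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
```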
deffaba8d6 add use_distributed property (#487)
* add distributed property in accelerate_state

* ensure num_process > 1
2022-07-05 09:19:44 -04:00
6ebddcd5e0 fixing fsdp autowrap functionality (#475)
* fixing fsdp autowrap functionality

* updating version requirements

* update version to latest torch stable version

* quality
2022-07-01 10:00:47 +05:30
4a7bc3bcb7 Use datasets 2.2.0 for now (#481) 2022-06-28 12:31:41 -04:00
1f96f3cf85 Rm gradient accumulation on TPU (#479)
* Rm gradient accumulation on TPU for now
2022-06-28 12:29:58 -04:00
bbca2700c7 Revert "Pin datasets for now (#477)" (#478)
This reverts commit a8eca60d57e8294e666b765b5331770aa0c58893.
2022-06-28 10:09:11 -04:00
a8eca60d57 Pin datasets for now (#477) 2022-06-28 09:47:39 -04:00
329209871f Some typos and cosmetic fixes (#472) 2022-06-27 05:40:07 -07:00
619ef04f09 Dev version 2022-06-24 16:41:09 -04:00
9d8ed50f7b Fix when TPU device check is ran (#469) 2022-06-24 12:07:38 -04:00
196856f357 Refactor Utility Documentation (#467)
* Add a utilities doc
2022-06-23 16:34:01 -04:00
3a5490b066 Add docbuilder to quality (#468) 2022-06-23 14:36:16 -04:00
24be733d84 Expose some is_*_available utils in docs (#466) 2022-06-23 10:34:45 -04:00
775bc790e7 Cleanup CI Warnings (#465)
* Fix named tuple warning

* Use torch AdamW instead of transformers

* Make regex string instead of literal
2022-06-23 10:06:19 -04:00
799fa935e9 Link CI slow runners to the commit (#464)
* Tweak trigger logic to link actions together
2022-06-23 08:56:01 -04:00
3ccbd9f7a0 Fix subtle bug in BF16 (#463)
* mixed precision bugfix

* Use is_tpu_available
2022-06-23 08:55:13 -04:00
f13c59f91e Include bf16 support for TPUs and CPUs, and a better check for if a CUDA device supports BF16 (#462)
* Support bf16 on TPU, CPU, and GPU in Accelerator directly
2022-06-22 17:53:42 -04:00
d39c57c11f Handle bfloat16 weights in disk offload without adding memory overhead (#460) (#461) 2022-06-22 09:13:23 -04:00
e2a968c66d Handle bfloat16 weights in disk offload (#460)
* Handle bfloat16 weights in disk offload

* Address review comments
2022-06-21 18:06:57 -04:00
dc243c0db1 Raise a clear warning if a user tries to modify the AcceleratorState (#458)
* Reinitialize warning
2022-06-21 16:42:35 -04:00
97f4c9de61 Right step point (#459) 2022-06-21 15:11:03 -04:00
73a596593e Better checks for if a TPU device exists (#456)
* Check if a TPU device actually exists
2022-06-21 12:12:00 -04:00
eeaba598f4 Offload and modules with unused submodules (#442)
* Offload and modules with unused submodules

* Renaming

* Update src/accelerate/hooks.py

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>

* Address review comment

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
2022-06-17 20:04:39 -04:00
3d92caa241 Release: v0.10.0 2022-06-15 13:58:22 -04:00
fa17f207b5 Fix docstring (#447) 2022-06-15 13:54:04 -04:00
873dcc63a4 Migrate HFDeepSpeedConfig from trfrs to accelerate (#432)
* Migrate HFDeepSpeedConfig from trfrs to accelerate

* update state.py to resolve comments

1. Adds static method to have a simple API for integrating deepspeed config in transformers trainer.

* reverting changes and addressing comments

* Marking DeepSpeed and FSDP as experimental in accelerate
2022-06-15 20:56:39 +05:30
40b6fe1784 Add psutil as dependency (#445) 2022-06-15 11:03:52 -04:00
29eef234c9 Revamp TPU internals to be more efficient + enable mixed precision types (#441)
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-06-14 17:41:20 -04:00
3f0876ac03 fix fsdp torch version dependency (#437) 2022-06-11 00:36:44 +05:30
450d51ce01 Create Gradient Accumulation Example (#431)
* Gradient accumulation example
2022-06-08 14:46:04 -04:00
1b2da6c6a5 init (#429) 2022-06-08 14:07:10 -04:00
1424a8e00d Introduce no_sync context wrapper + clean up some more warnings for DDP (#428) 2022-06-08 12:56:21 -04:00
b2afd4e8da updating tests to resolve runner failures wrt deepspeed revamp (#427)
* deepspeed revamp

* Update dataclasses.py

* Update deepspeed.py

* quality

* fixing code

* quality

* Fix imports

* saving 16bit model in zero stage 3

1. Saving 16bit model in zero stage 3
2. zero init in stage 3 support using HFDeepSpeedConfig

* quality

* adding test and fixing bugs

* update makefile for deepspeed tests

* Update test.yml

* adding `deepspeed` as requirement for tests

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* quality

* addressing comments

* add example and minor updates

1. Add example to show the usage of config file with revamped deepspeed support.
2. update required deepspeed version to 0.6.5
3. reverting `reinit` change as it is not required,
4. raising Exception when using `clip_grad_value` with DeepSpeed/FSDP.

* Documentation and Zero-3 Inference Support

1. Changes to support ZeRo Stage-3 Inference support.
2. minor bug fixes.
3. Documentation.

* doc fix

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* addressing comments

* update doc to address comments and bug fixes

1. update tests and add new one testing autofill functionality of `prepare` method.
2. fix bug related to zero-3 init related to HFDeepSpeedConfig
3. Update documentation addressing comments.

* removing image and hosting it on `documentation-images` dataset

* check for hidden_size for zero_opt heuristics

* updating tests to resolve runner failures

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-06-07 16:21:26 +05:30
2130205626 Fix secrets in Docker workflow (#426)
* Fix secrets
2022-06-07 06:47:09 -04:00
1703b79a79 DeepSpeed Revamp (#405)
* deepspeed revamp

* Update dataclasses.py

* Update deepspeed.py

* quality

* fixing code

* quality

* Fix imports

* saving 16bit model in zero stage 3

1. Saving 16bit model in zero stage 3
2. zero init in stage 3 support using HFDeepSpeedConfig

* quality

* adding test and fixing bugs

* update makefile for deepspeed tests

* Update test.yml

* adding `deepspeed` as requirement for tests

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* quality

* addressing comments

* add example and minor updates

1. Add example to show the usage of config file with revamped deepspeed support.
2. update required deepspeed version to 0.6.5
3. reverting `reinit` change as it is not required,
4. raising Exception when using `clip_grad_value` with DeepSpeed/FSDP.

* Documentation and Zero-3 Inference Support

1. Changes to support ZeRo Stage-3 Inference support.
2. minor bug fixes.
3. Documentation.

* doc fix

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* addressing comments

* update doc to address comments and bug fixes

1. update tests and add new one testing autofill functionality of `prepare` method.
2. fix bug related to zero-3 init related to HFDeepSpeedConfig
3. Update documentation addressing comments.

* removing image and hosting it on `documentation-images` dataset

* check for hidden_size for zero_opt heuristics

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-06-07 00:52:18 +05:30
05c641bc0c Introduce a Dependency Checker to trigger new Docker Builds on main (#424)
* Introduce warning + auto build

* Trigger only on merge to main
2022-06-06 07:30:39 -04:00
da78e296ba Enable slow tests nightly (#421) 2022-06-01 20:28:31 -04:00
9e0fff9291 Push out python 3.6 + fix all tests related to the upgrade (#420)
* Update Docker for py 3.7

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-06-01 16:49:27 -04:00
938b8f358d Speedup main CI (#419)
* Speed up workflow
2022-06-01 10:59:01 -04:00
d04e8e2baa Switch to evaluate for metrics (#417)
* Switch to evaluate for metrics

* Why the heck?

* Fix syntax error

* Install from github

* Is this the culprit?

* Upgrade Python

* Protobuf 💩

* Install from git not necessary now

* Sneaky last tensorboard

* Let's try this way

* Forgot to add all files :-/
2022-06-01 09:57:57 -04:00
8db128498c Create an issue template for Accelerate (#415) 2022-06-01 09:15:23 -04:00
114707449b Introduce post-merge runners (#416)
* Introduce post-merge runners
2022-05-31 15:11:29 -04:00
3b51d6e9ad Fix debug_launcher issues (#413)
* change to require_cpu only
2022-05-31 14:59:28 -04:00
174eb3af1d Use main egg (#414) 2022-05-31 14:58:38 -04:00
d176b552c9 Introduce nightly runners (#410)
* Introduce nightly builds
* Fixup docker images slightly
* Make device-count specific test use `torch.cuda.device_count()` rather than `Accelerator.num_processes` to avoid bug.
2022-05-31 14:14:02 -04:00
95d1edbf8d Update requirements to pin tensorboard and include psutil (#408)
* Update test requirements to include psutil, tensorboard, and the right tensorflow version
2022-05-31 09:52:16 -04:00
a91575f1bb Fix CUDA examples tests (#407)
* Fix CUDA tests

* Use num_processes to keep everything under one test
2022-05-31 09:51:21 -04:00
146ce3df48 Move datasets and transformers to under func (#411) 2022-05-31 08:47:16 -04:00
94d88fb50d Fix CUDA Dockerfile (#409)
* Install git

* Fix CPU image as well
2022-05-31 08:47:08 -04:00
b515800947 Hotfix all failing GPU tests (#401)
* Fix up makefile
2022-05-26 14:13:19 -04:00
d1f7f99684 improve metrics logged in examples (#399) 2022-05-26 17:29:49 +05:30
00ee34d9a6 Refactor offload_state_dict and fix in offload_weight (#398) 2022-05-25 16:09:25 -04:00
f6ec2660f0 Refactor version checking into a utility (#395)
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-05-25 14:07:39 -04:00
b3e21686de Include fastai in frameworks (#396) 2022-05-25 13:42:09 -04:00
f12ef1416e Add packaging to requirements (#394)
* Add packaging to requirements
2022-05-25 11:33:14 -04:00
18085fa250 Better dispatch for submodules (#392) 2022-05-25 10:51:18 -04:00
6be221f15e Build Docker Images nightly (#391) 2022-05-24 15:02:08 -04:00
3c4308e8cd Revert "Better dispatch for modules"
This reverts commit 17046bfaf8b805ebbc8ac4695f731b58c61004ed.
2022-05-24 13:48:19 -04:00
17046bfaf8 Better dispatch for modules 2022-05-24 13:47:39 -04:00
07ed7e92b5 Small bugfix for the stalebot workflow (#390)
* Bugfix dispatch
2022-05-24 11:58:26 -04:00
5a679d08d3 Introduce stalebot (#387)
* Add stalebot
2022-05-23 17:10:14 -04:00
5a00ece500 Create Dockerfiles for Accelerate (#377) 2022-05-23 17:09:56 -04:00
f62ae86cfb Mix precision -> Mixed precision (#388) 2022-05-23 15:02:29 -04:00
f9de557037 Fix OneCycle step length when in multiprocess (#385)
* Special onecycle fix
2022-05-23 12:28:44 -04:00
517cbf408b V0.10.0.dev0 2022-05-20 13:51:21 -04:00
f626d87eb7 Release: v0.9.0 2022-05-20 13:46:17 -04:00
8b8c5345cd Refactor some parts in utils (#380) 2022-05-20 12:23:54 -04:00
41427c594a Better check for deepspeed availability (#379)
* Better check for deepspeed availability

* Address comment

* Simplify a bit
2022-05-20 11:05:18 -04:00
3c45b6f760 fix shuffling for ShufflerIterDataPipe instances (#376)
* fix shuffling for ShufflerIterDataPipe instances

* add versioning test for Pytorch

* fix minimum Pytorch version

Co-authored-by: Loubna ben allal <loubnabenallal@gmail.com>
2022-05-20 08:55:03 -04:00
b922c63322 fix zero stage-1 (#378) 2022-05-20 17:18:17 +05:30
23c0341262 Refactor tests to use accelerate launch (#373)
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-05-19 11:48:12 -04:00
6163e20b14 deepspeed save model temp fix (#374)
* fix deepspeed model saving

* fix deepspeed zero stage-3 model save

fixes #369

Co-Authored-By: Kovvuri Satyanarayana Reddy <54667784+KOVVURISATYANARAYANAREDDY@users.noreply.github.com>

Co-authored-by: Kovvuri Satyanarayana Reddy <54667784+KOVVURISATYANARAYANAREDDY@users.noreply.github.com>
2022-05-19 18:01:53 +05:30
d33dc39a32 fix deepspeed model saving (#370) 2022-05-19 00:07:20 +05:30
043d2ec52d Add a utility for writing a barebones config file (#371)
* Create a basic_config function
2022-05-18 13:39:19 -04:00
64e41a4995 Remove tensor call (#365) 2022-05-13 10:51:14 -04:00
4736c754bf fix tracking (#361)
* fixing trackers

* quality

* bug fix

* bug fix

* addressing comments and fixing tests

* Fixing script diff test
2022-05-13 17:20:27 +05:30
28edac2c4c Update launchers.py (#363) 2022-05-13 07:25:44 -04:00
1700716760 Handle deprecation errors in launch (#360)
* Adjust based on deprecation
2022-05-12 11:13:50 -04:00
aa9b614967 v0.9.0.dev0 2022-05-12 11:02:19 -04:00
2943172b8f v0.8.0 Release 2022-05-12 10:52:54 -04:00
f56f4441b3 Big model inference (#345)
* Big model inference

* Reorganize port cleanup

* Last cleanup

* Test fix

* Quality

* Update src/accelerate/big_modeling.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Fix bug in default mem

* Check device map is complete

* More tests

* Make load function more general

* Apply suggestions from code review

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

* Quality

* Address more review comments

* Check generation results for gpt2

* Add main wrapper around everything

* Tests for final API

* Clean infer_auto_device

* Type annotations

* Apply suggestions from code review

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
Co-authored-by: Lysandre Debut <lysandre.debut@reseau.eseo.fr>

* Address review comments

* Last review comment for now

* Fix bug in clean_device_map

* Add doc

* Style

* Fixes + dtype support

* Fix test

* Add option to offload CPU state_dict

* Indent typo

* Final tweaks

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
Co-authored-by: Lysandre Debut <lysandre.debut@reseau.eseo.fr>
2022-05-12 10:09:28 -04:00
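For a rough picture of the API introduced here (the model name and checkpoint path are placeholders, and `transformers` is only used for convenience):

```python
from accelerate import init_empty_weights, load_checkpoint_and_dispatch
from transformers import AutoConfig, AutoModelForCausalLM

# Build the model skeleton on the meta device, then stream weights from a
# (possibly sharded) checkpoint straight onto the devices picked by the
# automatically inferred device map.
config = AutoConfig.from_pretrained("gpt2")
with init_empty_weights():
    model = AutoModelForCausalLM.from_config(config)

model = load_checkpoint_and_dispatch(
    model, checkpoint="path/to/checkpoint", device_map="auto"
)
```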
45359a73ff DeepSpeed and FSDP plugin support through script (#356)
* DeepSpeed and FSDP plugin support through script

Setting env variables when DeepSpeed/FSDP plugins are provided directly through the script without using accelerate launch.

* quality
2022-05-11 19:37:49 +05:30
b5b68fbb4d Fixing metric eval in distributed setup (#355) 2022-05-10 17:17:22 +05:30
d190ed7e41 Fix sample calculation in examples (#352)
* Fix metric calculation across examples
2022-05-09 15:44:49 -04:00
b923e134e7 Fix prompt for num_processes (#347)
* Fix prompt for num_processes

* Fix prompting

Handling FSDP and DeepSpeed num_processes while prompting.

* quality
2022-05-06 17:42:23 +05:30
b2956acbe9 Better prompt for number of training devices (#344)
* TPU specific
2022-05-05 13:12:32 -04:00
be0f7ce44f Handle Manual Wrapping in FSDP. Minor fix of fsdp example. (#342)
* Handle manual wrapping in FSDP. Fix fsdp example.
2022-05-05 21:15:53 +05:30
603a53f056 Improve num_processes question in CLI (#343)
* Rephrase num_processes question
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-05-05 11:07:23 -04:00
02e2ed567b Refactor utils into its own module (#340)
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-05-05 10:48:07 -04:00
8abd274a7f Introduce multiprocess logger (#337) 2022-05-02 09:45:10 -04:00
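A brief, hedged sketch of the multiprocess logger (assuming the adapter accepts a per-call `main_process_only` flag, as in later releases):

```python
from accelerate.logging import get_logger

logger = get_logger(__name__)

# By default messages are emitted on the main process only; passing
# main_process_only=False logs on every process.
logger.info("logged once, from the main process")
logger.warning("logged on every process", main_process_only=False)
```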
b05d483944 Fixed a typo to enable running accelerate correctly (#339) 2022-05-02 07:54:57 -04:00
a74c7c9538 Create peak_memory_uasge_tracker.py (#336)
* Create peak_memory_uasge_tracker.py

Adding the example by feature for tracking peak memory usage of GPU. One example of usage is to track the peak memory reduction when using FSDP.

* fixing the typo in the file name

* reformatting

* exclude peak_memory_usage_tracker.py from tests

* renaming and highlighting proper usage

* Update test_examples.py

😅
2022-04-29 22:38:34 +05:30
a60640d7e2 Patchfix infinite loop (#335) 2022-04-29 08:34:37 -04:00
611546f12d Add guards for batch size finder (#334)
* Fix zero reached

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-04-28 16:34:07 -04:00
7d2a259e3d Fix fsdp config in cluster (#331)
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-04-28 16:01:28 -04:00
e5c17f36a8 Clean up tests + fix import (#330) 2022-04-28 13:37:02 -04:00
20de3fc959 v0.8.0.dev0 with setup 2022-04-28 11:27:50 -04:00
f84cb0c1fa v0.8.0.dev0 2022-04-28 11:27:39 -04:00
136437e3e8 Fix default config dicts (#329)
* Fix default config dicts

* style
2022-04-28 11:23:44 -04:00
2622cc0f98 PyTorch FSDP Feature Incorporation (#321)
* PyTorch FSDP Feature Incorporation

Changes to enable the PyTorch FSDP features.

* removing fsdp_kwargs

* Addressing the comments and removing the .DS_Store files

* adding fsdp_config to the FSDP Plugin

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Addressing comments and little refactoring

* Create fsdp.mdx

* Update _toctree.yml

* refactoring documentation and undo indentation in _toctree.yml

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-04-28 17:09:01 +05:30
5f433673e1 Introduce reduce operator (#326)
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-04-26 15:39:11 -04:00
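A minimal sketch of the operator (a single-process run simply returns the tensor unchanged):

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()

# Each process contributes its own value; reduce() returns the element-wise
# sum (or mean, with reduction="mean") across all processes on every process.
local_value = torch.tensor([accelerator.process_index + 1.0], device=accelerator.device)
total = accelerator.reduce(local_value, reduction="sum")
print(total)
```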
b028a1981d Add a memory-aware decorator for CUDA OOM avoidance (#324)
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-04-26 10:43:06 -04:00
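A hedged sketch of the memory-aware decorator; the import path follows the later `accelerate.utils` location, and at introduction it may have lived in a dedicated memory utilities module:

```python
from accelerate.utils import find_executable_batch_size

@find_executable_batch_size(starting_batch_size=128)
def training_function(batch_size):
    # On a CUDA out-of-memory error, the decorator frees cached memory, halves
    # batch_size, and retries this function with the smaller value.
    print(f"Trying batch_size={batch_size}")
    # ... build dataloaders and run the training loop with `batch_size` ...

training_function()
```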
3e14dd16be Fixup all checkpointing examples (#323) 2022-04-21 14:25:10 -04:00
fa476d03ce Update examples to show how to deal with extra validation copies (#319)
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-04-20 14:02:58 -04:00
53638352a0 fix typo (#320) 2022-04-20 07:29:41 -04:00
5791d3dd6b Create alias for Accelerator.free_memory (#318)
* Add `Accelerator.clear` alias

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-04-19 16:26:03 -04:00
2d7fbbdc73 Create Cross-Validation example (#317) 2022-04-19 16:14:07 -04:00
461ac7d476 Refactor Tracker logic and write guards for logging_dir (#316) 2022-04-19 10:21:11 -04:00
209db19dc8 Create a testing framework for example scripts and fix current ones (#313)
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-04-13 13:24:36 -04:00
381ae20027 Fix DataLoader sharding for deepspeed in accelerate (#315)
* set first_pass on calls from deepspeed to _prepare_one(...) so that it is not a noop and actually wraps our dataloaders

* fixed style
2022-04-13 11:35:06 -04:00
8595834292 Refactor Examples by Feature (#312)
Splits up examples into by feature scripts
2022-04-11 15:59:13 -04:00
fa2ec4ba16 Fix Accelerate CLI CPU option + small fix for W&B tests (#311)
* Fix command input

* Make W&B log test more stable by changing assertEqual -> assertTrue
2022-04-08 12:22:08 -04:00
1d95ebdaa4 Use --no_local_rank for DeepSpeed launch (#309)
* Use --no_local_rank for DeepSpeed launch

* Plus one typo
2022-04-04 17:58:55 -04:00
38e6d941fa Update example scripts (#307)
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-04-04 17:19:25 -04:00
7eb5255694 Fix training in DeepSpeed (#308)
* Fix training in DeepSpeed

* Be more defensive

* Apply suggestions from code review

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2022-04-04 16:52:05 -04:00
e72a125502 Write tests for comet_ml (#306)
* Write tests for comet_ml

* No need for second mock
2022-03-31 17:39:18 -04:00
e361dcc2a7 Have custom trackers work with the API (#305)
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-03-31 14:57:19 -04:00
e66ba31af2 Create new TestCase classes and clean up W&B tests (#304) 2022-03-31 14:04:26 -04:00
2c554b056c Pass lr_scheduler to Accelerator.prepare (#301)
* Work in progress

* Pass scheduler to Accelerator.prepare

* Fix tests

* Apply suggestions from code review

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

* Style post comments

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2022-03-31 09:55:41 -04:00
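A small sketch of what this enables (before #301, the scheduler had to be handled manually):

```python
import torch
from accelerate import Accelerator

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)

accelerator = Accelerator()
# The scheduler is now prepared alongside the model and optimizer so that it is
# stepped consistently with the (possibly wrapped) optimizer.
model, optimizer, scheduler = accelerator.prepare(model, optimizer, scheduler)
```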
5668270de7 Add logging capabilities (#293)
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

- Added experiment tracking API, and support for Weights and Biases, TensorBoard, and CometML + Tests
- Added `tensorflow` to a new dependency list to be used during tests
- Added three new functions in `Accelerator` to interact with the API
2022-03-30 17:40:32 -04:00
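A hedged end-to-end sketch of the tracking API described above, assuming tensorboard is installed; the `logging_dir` argument name reflects this era of the library and may differ in later releases:

```python
from accelerate import Accelerator

accelerator = Accelerator(log_with="tensorboard", logging_dir="runs")
accelerator.init_trackers("my_experiment", config={"lr": 3e-4, "epochs": 3})

for step in range(10):
    # log() forwards the values to every configured tracker.
    accelerator.log({"train_loss": 1.0 / (step + 1)}, step=step)

accelerator.end_training()
```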
f03f18252f Leave default as None (#300) 2022-03-30 13:48:10 -04:00
5b2e6edab2 Fix example for datasets v2 (#298) 2022-03-29 15:48:16 -04:00
1e0b96f814 Load model and optimizer states on CPU to avoid OOMs (#299) 2022-03-29 14:49:44 -04:00
5d83eed3d2 Refactor precisions to its own enum (#292)
* Refactor precision

* Add in enum subclass and inheritance
2022-03-24 12:44:06 -04:00
69ff072643 Document save/load state (#290) 2022-03-23 14:19:07 -04:00
156 changed files with 21053 additions and 1681 deletions

.github/ISSUE_TEMPLATE/bug-report.yml (new file, 58 lines)

@@ -0,0 +1,58 @@
name: "\U0001F41B Bug Report"
description: Submit a bug report to help us improve Accelerate
body:
- type: textarea
id: system-info
attributes:
label: System Info
description: Please share your accelerate configuration with us. You can run the command `accelerate env` and copy-paste its outputs below
render: Shell
placeholder: accelerate version, OS, python version, numpy version, torch version, and accelerate's configuration
validations:
required: true
- type: checkboxes
id: information-scripts-examples
attributes:
label: Information
description: 'The problem arises when using:'
options:
- label: "The official example scripts"
- label: "My own modified scripts"
- type: checkboxes
id: information-tasks
attributes:
label: Tasks
description: "The tasks I am working on are:"
options:
- label: "One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)"
- label: "My own task or dataset (give details below)"
- type: textarea
id: reproduction
validations:
required: true
attributes:
label: Reproduction
description: |
Please provide a code sample that reproduces the problem you ran into. It can be a Colab link or just a code snippet.
If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
placeholder: |
Steps to reproduce the behavior:
1.
2.
3.
- type: textarea
id: expected-behavior
validations:
required: true
attributes:
label: Expected behavior
description: "A clear and concise description of what you would expect to happen."
render: Shell


@@ -0,0 +1,64 @@
name: Build Docker images (releases)
on:
workflow_dispatch:
release:
types: [published]
concurrency:
group: docker-image-builds
cancel-in-progress: false
jobs:
get-version:
runs-on: ubuntu-latest
outputs:
version: ${{ steps.step1.outputs.version }}
steps:
- uses: actions/checkout@v3
- id: step1
run: echo "::set-output name=version::$(python setup.py --version)"
version-cpu:
name: "Latest Accelerate CPU [version]"
runs-on: ubuntu-latest
needs: get-version
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Check out code
uses: actions/checkout@v2
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Build and Push CPU
uses: docker/build-push-action@v2
with:
context: ./docker/accelerate-cpu
push: true
tags: huggingface/accelerate-cpu:${{needs.get-version.outputs.version}}
version-cuda:
name: "Latest Accelerate GPU [version]"
runs-on: ubuntu-latest
needs: get-version
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Check out code
uses: actions/checkout@v2
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Build and Push GPU
uses: docker/build-push-action@v2
with:
context: ./docker/accelerate-gpu
push: true
tags: huggingface/accelerate-gpu:${{needs.get-version.outputs.version}}


@@ -0,0 +1,45 @@
name: Trigger docker images and run tests
on:
push:
branches:
- main
workflow_dispatch:
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
jobs:
check-for-source:
runs-on: ubuntu-latest
name: Check if setup was changed
outputs:
changed: ${{ steps.was_changed.outputs.changed }}
steps:
- uses: actions/checkout@v3
with:
fetch-depth: "2"
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v22.2
- name: Was setup changed
id: was_changed
run: |
for file in ${{ steps.changed-files.outputs.all_changed_files }}; do
if [ `basename "${file}"` == "setup.py" ]; then
echo ::set-output name=changed::"1"
fi
done
build-docker-containers:
needs: check-for-source
if: (github.event_name == 'push') && (needs.check-for-source.outputs.changed == '1')
uses: ./.github/workflows/build_docker_images.yml
secrets: inherit
run-merge-tests:
needs: build-docker-containers
if: always()
uses: ./.github/workflows/run_merge_tests.yml


@@ -0,0 +1,54 @@
name: Build Docker images (scheduled)
on:
workflow_dispatch:
workflow_call:
schedule:
- cron: "0 1 * * *"
concurrency:
group: docker-image-builds
cancel-in-progress: false
jobs:
latest-cpu:
name: "Latest Accelerate CPU [dev]"
runs-on: ubuntu-latest
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Check out code
uses: actions/checkout@v2
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Build and Push CPU
uses: docker/build-push-action@v2
with:
context: ./docker/accelerate-cpu
push: true
tags: huggingface/accelerate-cpu
latest-cuda:
name: "Latest Accelerate GPU [dev]"
runs-on: ubuntu-latest
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Check out code
uses: actions/checkout@v2
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Build and Push GPU
uses: docker/build-push-action@v2
with:
context: ./docker/accelerate-gpu
push: true
tags: huggingface/accelerate-gpu

.github/workflows/nightly.yml (new file, 88 lines)

@@ -0,0 +1,88 @@
name: Self-hosted runner with slow tests (scheduled)
on:
workflow_dispatch:
schedule:
- cron: "0 2 * * *"
env:
RUN_SLOW: "yes"
IS_GITHUB_CI: "1"
jobs:
run_all_tests_single_gpu:
runs-on: [self-hosted, docker-gpu, multi-gpu]
env:
CUDA_VISIBLE_DEVICES: "0"
container:
image: huggingface/accelerate-gpu:latest
options: --gpus all --shm-size "16gb"
defaults:
run:
working-directory: accelerate/
shell: bash
steps:
- name: Update clone & pip install
run: |
source activate accelerate
git config --global --add safe.directory '*'
git fetch && git checkout ${{ github.sha }}
pip install -e . --no-deps
pip install pytest-reportlog
- name: Run test on GPUs
run: |
source activate accelerate
make test
- name: Run examples on GPUs
run: |
source activate accelerate
pip uninstall comet_ml -y
make test_examples
- name: Generate Report
if: always()
run: |
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
run_all_tests_multi_gpu:
runs-on: [self-hosted, docker-gpu, multi-gpu]
env:
CUDA_VISIBLE_DEVICES: "0,1"
container:
image: huggingface/accelerate-gpu:latest
options: --gpus all --shm-size "16gb"
defaults:
run:
working-directory: accelerate/
shell: bash
steps:
- name: Update clone
run: |
source activate accelerate
git config --global --add safe.directory '*'
git fetch && git checkout ${{ github.sha }}
pip install -e . --no-deps
pip install pytest-reportlog
- name: Run core and big modeling tests on GPUs
run: |
source activate accelerate
make test_big_modeling
make test_core
- name: Run Integration tests on GPUs
run: |
source activate accelerate
make test_integrations
- name: Run examples on GPUs
run: |
source activate accelerate
pip uninstall comet_ml -y
make test_examples
- name: Generate Report
if: always()
run: |
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY


@@ -7,10 +7,10 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Python 3.6
uses: actions/setup-python@v2
- name: Set up Python 3.7
uses: actions/setup-python@v3
with:
python-version: 3.6
python-version: 3.7
- name: Install Python dependencies
run: pip install -e .[quality]
- name: Run Quality check

.github/workflows/run_merge_tests.yml (new file, 79 lines)

@@ -0,0 +1,79 @@
name: Self-hosted runner tests (push to "main")
on:
workflow_call:
workflow_dispatch:
env:
TESTING_MOCKED_DATALOADERS: "1"
IS_GITHUB_CI: "1"
jobs:
run_all_tests_single_gpu:
runs-on: [self-hosted, docker-gpu, multi-gpu]
env:
CUDA_VISIBLE_DEVICES: "0"
container:
image: huggingface/accelerate-gpu:latest
options: --gpus all --shm-size "16gb"
defaults:
run:
working-directory: accelerate/
shell: bash
steps:
- name: Update clone & pip install
run: |
source activate accelerate
git config --global --add safe.directory '*'
git fetch && git checkout ${{ github.sha }}
pip install -e .[testing,test_trackers]
pip install pytest-reportlog
- name: Run test on GPUs
run: |
source activate accelerate
make test
- name: Run examples on GPUs
run: |
source activate accelerate
pip uninstall comet_ml -y
make test_examples
- name: Generate Report
if: always()
run: |
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
run_all_tests_multi_gpu:
runs-on: [self-hosted, docker-gpu, multi-gpu]
container:
image: huggingface/accelerate-gpu:latest
options: --gpus all --shm-size "16gb"
defaults:
run:
working-directory: accelerate/
shell: bash
steps:
- name: Update clone
run: |
source activate accelerate
git config --global --add safe.directory '*'
git fetch && git checkout ${{ github.sha }}
pip install -e .[testing,test_trackers]
pip install pytest-reportlog
- name: Run test on GPUs
run: |
source activate accelerate
make test
- name: Run examples on GPUs
run: |
source activate accelerate
pip uninstall comet_ml -y
make test_examples
- name: Generate Report
if: always()
run: |
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY

.github/workflows/stale.yml (new file, 28 lines)

@@ -0,0 +1,28 @@
name: Stale Bot
on:
schedule:
- cron: "0 15 * * *"
workflow_dispatch:
jobs:
close_stale_issues:
name: Close Stale Issues
if: github.repository == 'huggingface/accelerate'
runs-on: ubuntu-latest
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
steps:
- uses: actions/checkout@v2
- name: Setup Python
uses: actions/setup-python@v1
with:
python-version: 3.7
- name: Install requirements
run: |
pip install PyGithub
- name: Close stale issues
run: |
python utils/stale.py


@ -1,17 +1,70 @@
name: Run Tests
on: [pull_request]
on:
pull_request:
paths:
- "src/**"
- "tests/**"
- ".github/**"
- "examples/**"
- "setup.py"
types: [opened, synchronize, reopened]
env:
HF_HOME: ~/hf_cache
TESTING_MOCKED_DATALOADERS: "1"
IS_GITHUB_CI: "1"
jobs:
test:
run-tests:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
pytorch-version: [
latest,
minimum
]
test-kind: [
test_prod,
test_core,
test_big_modeling,
test_deepspeed,
test_fsdp,
test_example_differences,
test_checkpoint_step,
test_checkpoint_epoch,
test_rest
]
steps:
- uses: actions/checkout@v2
- name: Set up Python 3.6
uses: actions/setup-python@v2
- uses: actions/checkout@v3
- name: Set up python 3.7
uses: actions/setup-python@v3
with:
python-version: 3.6
- name: Install Python dependencies
run: pip install -e .[test]
python-version: 3.7
- name: Activate python cache
uses: actions/cache@v3
with:
path: |
${{ env.pythonLocation }}
${{ env.HF_HOME }}
key: ${{ env.pythonLocation }}-${{ matrix.test-kind }}-${{ hashFiles('setup.py') }}
- name: Install the library
run: |
pip install --upgrade pip
if [[ ${{ matrix.test-kind }} = test_prod ]]; then pip install -e .[test_prod]; fi
if [[ ${{ matrix.test-kind }} != test_prod ]]; then pip install -e .[testing,test_trackers]; fi
if [[ ${{ matrix.test-kind }} = test_rest ]]; then pip uninstall comet_ml -y; fi
if [[ ${{ matrix.pytorch-version }} = minimum ]]; then pip install torch==1.6.0; fi
pip install pytest-reportlog
- name: Run Tests
run: make test
run: |
make ${{ matrix.test-kind }}
- name: Generate Report
if: always()
run: |
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY

.gitignore

@ -132,4 +132,10 @@ dmypy.json
.vscode
# IntelliJ
.idea
.idea
# Mac .DS_Store
.DS_Store
# More test things
wandb


@ -54,7 +54,7 @@ Did not find it? :( So we can act quickly on it, please follow these steps:
* Include your **OS type and version**, the versions of **Python** and **PyTorch**.
* A short, self-contained, code snippet that allows us to reproduce the bug in
less than 30s;
* Provide the with your Accelerate configuration (located by default in `~/.cache/huggingface/accelerate/default_congig.yml`)
* Provide us with your Accelerate configuration (located by default in `~/.cache/huggingface/accelerate/default_config.yaml`)
### Do you want a new feature?


@ -1,6 +1,6 @@
.PHONY: quality style test docs
check_dirs := tests src examples
check_dirs := tests src examples benchmarks
# Check that source code meets quality standards
@ -25,4 +25,40 @@ style:
# Run tests for the library
test:
python -m pytest -n auto --dist=loadfile -s -v ./tests/
python -m pytest -s -v ./tests/ --ignore=./tests/test_examples.py $(if $(IS_GITHUB_CI),--report-log 'all.log',)
test_big_modeling:
python -m pytest -s -v ./tests/test_big_modeling.py $(if $(IS_GITHUB_CI),--report-log 'big_modeling.log',)
test_core:
python -m pytest -s -v ./tests/ --ignore=./tests/test_examples.py --ignore=./tests/deepspeed --ignore=./tests/test_big_modeling.py \
--ignore=./tests/fsdp $(if $(IS_GITHUB_CI),--report-log 'core.log',)
test_deepspeed:
python -m pytest -s -v ./tests/deepspeed $(if $(IS_GITHUB_CI),--report-log 'deepspeed.log',)
test_fsdp:
python -m pytest -s -v ./tests/fsdp $(if $(IS_GITHUB_CI),--report-log 'fsdp.log',)
test_examples:
python -m pytest -s -v ./tests/test_examples.py $(if $(IS_GITHUB_CI),--report-log 'examples.log',)
# Broken down example tests for the CI runners
test_integrations:
python -m pytest -s -v ./tests/deepspeed ./tests/fsdp $(if $(IS_GITHUB_CI),--report-log 'integrations.log',)
test_example_differences:
python -m pytest -s -v ./tests/test_examples.py::ExampleDifferenceTests $(if $(IS_GITHUB_CI),--report-log 'example_diff.log',)
test_checkpoint_epoch:
python -m pytest -s -v ./tests/test_examples.py::FeatureExamplesTests -k "by_epoch" $(if $(IS_GITHUB_CI),--report-log 'checkpoint_epoch.log',)
test_checkpoint_step:
python -m pytest -s -v ./tests/test_examples.py::FeatureExamplesTests -k "by_step" $(if $(IS_GITHUB_CI),--report-log 'checkpoint_step.log',)
# Same as test but used to install only the base dependencies
test_prod:
$(MAKE) test_core
test_rest:
python -m pytest -s -v ./tests/test_examples.py::FeatureExamplesTests -k "not by_step and not by_epoch" $(if $(IS_GITHUB_CI),--report-log 'rest.log',)


@ -168,7 +168,7 @@ mpirun -np 2 python examples/nlp_example.py
## Launching training using DeepSpeed
🤗 Accelerate supports training on single/multiple GPUs using DeepSpeed. to use it, you don't need to change anything in your training code; you can set everything using just `accelerate config`. However, if you desire to tweak your DeepSpeed related args from your python script, we provide you the `DeepSpeedPlugin`.
🤗 Accelerate supports training on single/multiple GPUs using DeepSpeed. To use it, you don't need to change anything in your training code; you can set everything using just `accelerate config`. However, if you desire to tweak your DeepSpeed related args from your python script, we provide you the `DeepSpeedPlugin`.
```python
from accelerate import Accelerator, DeepSpeedPlugin
@ -196,7 +196,7 @@ from accelerate import notebook_launcher
notebook_launcher(training_function)
```
An example can be found in [this notebook](https://github.com/huggingface/notebooks/blob/master/examples/accelerate/simple_nlp_example.ipynb). [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/accelerate/simple_nlp_example.ipynb)
An example can be found in [this notebook](https://github.com/huggingface/notebooks/blob/main/examples/accelerate_examples/simple_nlp_example.ipynb). [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/accelerate_examples/simple_nlp_example.ipynb)
## Why should I use 🤗 Accelerate?
@ -212,6 +212,7 @@ If you like the simplicity of 🤗 Accelerate but would prefer a higher-level ab
* [Animus](https://github.com/Scitator/animus) is a minimalistic framework to run machine learning experiments. Animus highlights common "breakpoints" in ML experiments and provides a unified interface for them within [IExperiment](https://github.com/Scitator/animus/blob/main/animus/core.py#L76).
* [Catalyst](https://github.com/catalyst-team/catalyst#getting-started) is a PyTorch framework for Deep Learning Research and Development. It focuses on reproducibility, rapid experimentation, and codebase reuse so you can create something new rather than write yet another train loop. Catalyst provides a [Runner](https://catalyst-team.github.io/catalyst/api/core.html#runner) to connect all parts of the experiment: hardware backend, data transformations, model train, and inference logic.
* [fastai](https://github.com/fastai/fastai#installing) is a PyTorch framework for Deep Learning that simplifies training fast and accurate neural nets using modern best practices. fastai provides a [Learner](https://docs.fast.ai/learner.html#Learner) to handle the training, fine-tuning, and inference of deep learning algorithms.
* [Kornia](https://kornia.readthedocs.io/en/latest/get-started/introduction.html) is a differentiable library that allows classical computer vision to be integrated into deep learning models. Kornia provides a [Trainer](https://kornia.readthedocs.io/en/latest/x.html#kornia.x.Trainer) with the specific purpose to train and fine-tune the supported deep learning algorithms within the library.
* [pytorch-accelerated](https://github.com/Chris-hughes10/pytorch-accelerated) is a lightweight training library, with a streamlined feature set centred around a general-purpose [Trainer](https://pytorch-accelerated.readthedocs.io/en/latest/trainer.html), that places a huge emphasis on simplicity and transparency; enabling users to understand exactly what is going on under the hood, but without having to write and maintain the boilerplate themselves!
@ -240,4 +241,18 @@ pip install accelerate
- multi-GPU on several nodes (machines)
- TPU
- FP16 with native AMP (apex on the roadmap)
- DeepSpeed support (experimental)
- DeepSpeed support (Experimental)
- PyTorch Fully Sharded Data Parallel (FSDP) support (Experimental)
## Citing 🤗 Accelerate
If you use 🤗 Accelerate in your publication, please cite it by using the following BibTeX entry.
```bibtex
@Misc{accelerate,
title = {Accelerate: Training and inference at scale made simple, efficient and adaptable.},
author = {Sylvain Gugger, Lysandre Debut, Thomas Wolf, Philipp Schmid, Zachary Mueller, Sourab Mangrulkar},
howpublished = {\url{https://github.com/huggingface/accelerate}},
year = {2022}
}
```

benchmarks/README.md

@ -0,0 +1,46 @@
# Big model inference benchmarks
Running inference with Accelerate on big models.
## Setup
These benchmarks use the `transformers` library:
```bash
pip install transformers
```
To reproduce or test a new setup, run
```py
python inference_acc.py model_name
```
This script supports `gpt-j-6b`, `gpt-neox`, `opt` (30B version) and `T0pp` out of the box, but you can specify any valid checkpoint for `model_name`.
To force a different `torch_dtype` than the one in the config: `--torch_dtype xxx`.
If you get an error linked to disk offload, you need to add the option `--disk_offload`.
## Results
On a setup with two Titan RTX GPUs (24GB of RAM each) and 32GB of CPU RAM, we get the following benchmarks (T0pp does not run in float16, which is why it's not included).
| Model | Model load time | Generation time | dtype | GPU 0 use | GPU 1 use | CPU use | Disk offload |
|:-----:|:---------------:|:---------------:|:-----:|:---------:|:---------:|:-------:|:------------:|
| GPT-J-6B | 8.7s | 0.05s per token | float16 | 11.7GB | 0GB | 0GB | no |
| GPT-J-6B | 12.4s | 0.06s per token | float32 | 21.9GB | 1.5GB | 0GB | no |
| GPT-Neo-X-20B | 30.9s | 0.08s per token | float16 | 21.5GB | 18GB | 0GB | no |
| GPT-Neo-X-20B | 78.2s | 10.72s per token | float32 | 20.3GB | 22.7 GB | 24.4GB | yes |
| T0pp (11B) | 29.4s | 0.05s per token | float32 | 21.1GB | 21.3GB | 0GB | no |
| OPT-30B | 34.5s | 2.37s per token | float16 | 20.7GB | 22.3GB | 14.1GB | no |
| OPT-30B | 112.3s | 33.9s per token | float32 | 20.2GB | 21.2GB | 23.5GB | yes |
Note on the results:
- using two GPUs instead of one does not slow down generation
- using CPU offload slows down a bit (see OPT-30b)
- using disk offload slows down a lot (need to implement prefetching)
You will also note that Accelerate does not use any more GPU or CPU RAM than necessary:
- peak GPU memory is exactly the size of the model put on a given GPU
- peak CPU memory is either the size of the biggest checkpoint shard or the part of the model offloaded on CPU, whichever is bigger.


@ -0,0 +1,143 @@
# Copyright 2022 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import time
import torch
import transformers
from accelerate.utils import compute_module_sizes
from measures_util import end_measure, log_measures, start_measure
from transformers import AutoConfig, AutoModelForCausalLM, AutoModelForSeq2SeqLM, AutoTokenizer
DEFAULT_MODELS = {
"gpt-j-6b": {"is_causal": True, "model": "sgugger/sharded-gpt-j-6B", "tokenizer": "EleutherAI/gpt-j-6B"},
"gpt-neox": {"is_causal": True, "model": "EleutherAI/gpt-neox-20b"},
"opt": {"is_causal": True, "model": "facebook/opt-30b"},
"T0pp": {"is_causal": False, "model": "bigscience/T0pp", "model_revision": "sharded"},
}
PROMPTS = [
"Hello, my name is",
"Are unicorns real? Unicorns are",
"For the first time in several years,",
"My name is Julien and I am",
"The goal of life is",
"Whenever I'm sad, I like to",
]
def parse_args():
parser = argparse.ArgumentParser(description="Run and time generations on a big model using Accelerate.")
parser.add_argument("model_name", type=str, default=None, help="The name of the model to try.")
parser.add_argument(
"--tokenizer_name", type=str, default=None, help="The name of the tokenizer (if different from the model."
)
parser.add_argument("--is_causal", type=bool, default=None, help="Whether or not the model is causal.")
parser.add_argument(
"--model_revision", type=str, default=None, help="The revision to use for the model checkpoint."
)
parser.add_argument("--torch_dtype", type=str, default=None, help="The dtype for the model.")
parser.add_argument("--disk_offload", action="store_true")
args = parser.parse_args()
# Sanitize args
if args.model_name in DEFAULT_MODELS:
defaults = DEFAULT_MODELS[args.model_name]
args.model_name = defaults["model"]
if args.tokenizer_name is None:
args.tokenizer_name = defaults.get("tokenizer", args.model_name)
if args.is_causal is None:
args.is_causal = defaults["is_causal"]
if args.model_revision is None:
args.model_revision = defaults.get("model_revision", "main")
if args.is_causal is None:
raise ValueError("Could not infer the default for `--is_causal`, pass either True or False for it.")
if args.tokenizer_name is None:
args.tokenizer_name = args.model_name
if args.model_revision is None:
args.model_revision = "main"
return args
def main():
transformers.utils.logging.set_verbosity_error()
args = parse_args()
if args.torch_dtype is None:
config = AutoConfig.from_pretrained(args.model_name)
torch_dtype = getattr(config, "torch_dtype", torch.float32)
else:
torch_dtype = getattr(torch, args.torch_dtype)
model_cls = AutoModelForCausalLM if args.is_causal else AutoModelForSeq2SeqLM
kwargs = {
"torch_dtype": torch_dtype,
"revision": args.model_revision,
}
if args.disk_offload:
kwargs["offload_folder"] = "tmp_offload"
kwargs["offload_state_dict"] = True
start_measures = start_measure()
model = model_cls.from_pretrained(args.model_name, device_map="auto", **kwargs)
end_measures = end_measure(start_measures)
log_measures(end_measures, "Model loading")
module_sizes = compute_module_sizes(model)
device_size = {v: 0 for v in model.hf_device_map.values()}
for module, device in model.hf_device_map.items():
device_size[device] += module_sizes[module]
message = "\n".join([f"- {device}: {size // 2**20}MiB" for device, size in device_size.items()])
print(f"\nTheoretical use:\n{message}")
tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_name)
start_measures = start_measure()
generation_times = []
gen_tokens = []
texts_outs = []
for prompt in PROMPTS:
inputs = tokenizer(prompt, return_tensors="pt").to(0)
tokens = inputs["input_ids"][0].tolist()
before_generate = time.time()
outputs = model.generate(inputs["input_ids"])
after_generate = time.time()
outputs = outputs[0].tolist()
num_gen_tokens = len(outputs) if outputs[: len(tokens)] != tokens else len(outputs) - len(tokens)
generation_time = after_generate - before_generate
text_out = tokenizer.decode(outputs, skip_special_tokens=True)
texts_outs.append(text_out)
generation_times.append(generation_time)
gen_tokens.append(num_gen_tokens)
print(f"Prompt: {prompt}\nGeneration {text_out}\nIn {generation_time:.2f}s for {num_gen_tokens} tokens\n")
end_measures = end_measure(start_measures)
log_measures(end_measures, "Model generation")
generation_times_per_token = [gen / tok for gen, tok in zip(generation_times, gen_tokens)]
avg_gen = sum(generation_times_per_token) / len(generation_times)
print(f"Average time of generation per token: {avg_gen:.2f}s")
print(f"First generation (avg time per token): {generation_times_per_token[0]:.2f}s")
avg_gen = sum(generation_times_per_token[1:]) / (len(generation_times_per_token) - 1)
print(f"Average time of generation per token (excluding the first): {avg_gen:.2f}s")
if __name__ == "__main__":
main()


@ -0,0 +1,86 @@
import gc
import threading
import time
import torch
import psutil
class PeakCPUMemory:
def __init__(self):
self.process = psutil.Process()
self.peak_monitoring = False
def peak_monitor(self):
self.cpu_memory_peak = -1
while True:
self.cpu_memory_peak = max(self.process.memory_info().rss, self.cpu_memory_peak)
# can't sleep or will not catch the peak right (this comment is here on purpose)
if not self.peak_monitoring:
break
def start(self):
self.peak_monitoring = True
self.thread = threading.Thread(target=self.peak_monitor)
self.thread.daemon = True
self.thread.start()
def stop(self):
self.peak_monitoring = False
self.thread.join()
return self.cpu_memory_peak
cpu_peak_tracker = PeakCPUMemory()
def start_measure():
# Time
measures = {"time": time.time()}
gc.collect()
torch.cuda.empty_cache()
# CPU mem
measures["cpu"] = psutil.Process().memory_info().rss
cpu_peak_tracker.start()
# GPU mem
for i in range(torch.cuda.device_count()):
measures[str(i)] = torch.cuda.memory_allocated(i)
torch.cuda.reset_peak_memory_stats()
return measures
def end_measure(start_measures):
# Time
measures = {"time": time.time() - start_measures["time"]}
gc.collect()
torch.cuda.empty_cache()
# CPU mem
measures["cpu"] = (psutil.Process().memory_info().rss - start_measures["cpu"]) / 2**20
measures["cpu-peak"] = (cpu_peak_tracker.stop() - start_measures["cpu"]) / 2**20
# GPU mem
for i in range(torch.cuda.device_count()):
measures[str(i)] = (torch.cuda.memory_allocated(i) - start_measures[str(i)]) / 2**20
measures[f"{i}-peak"] = (torch.cuda.max_memory_allocated(i) - start_measures[str(i)]) / 2**20
return measures
def log_measures(measures, description):
print(f"{description}:")
print(f"- Time: {measures['time']:.2f}s")
for i in range(torch.cuda.device_count()):
print(f"- GPU {i} allocated: {measures[str(i)]:.2f}MiB")
peak = measures[f"{i}-peak"]
print(f"- GPU {i} peak: {peak:.2f}MiB")
print(f"- CPU RAM allocated: {measures['cpu']:.2f}MiB")
print(f"- CPU RAM peak: {measures['cpu-peak']:.2f}MiB")


@ -0,0 +1,35 @@
# Builds CPU-only Docker image of PyTorch
# Uses multi-staged approach to reduce size
# Stage 1
FROM python:3.7-slim as compile-image
ARG DEBIAN_FRONTEND=noninteractive
RUN apt update
RUN apt-get install -y --no-install-recommends \
build-essential \
git \
gcc
# Setup virtual environment for Docker
ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m venv ${VIRTUAL_ENV}
# Make sure we use the virtualenv
ENV PATH="${VIRTUAL_ENV}/bin:$PATH"
WORKDIR /workspace
# Install specific CPU torch wheel to save on space
RUN python3 -m pip install --upgrade --no-cache-dir pip
RUN python3 -m pip install --no-cache-dir \
jupyter \
git+https://github.com/huggingface/accelerate#egg=accelerate[testing,test_trackers] \
--extra-index-url https://download.pytorch.org/whl/cpu
# Stage 2
FROM python:3.7-slim AS build-image
COPY --from=compile-image /opt/venv /opt/venv
RUN useradd -ms /bin/bash user
USER user
# Make sure we use the virtualenv
ENV PATH="/opt/venv/bin:$PATH"
CMD ["/bin/bash"]


@ -0,0 +1,42 @@
# Builds GPU docker image of PyTorch
# Uses multi-staged approach to reduce size
# Stage 1
# Use base conda image to reduce time
FROM continuumio/miniconda3:latest AS compile-image
# Specify py version
ENV PYTHON_VERSION=3.7.3
# Install apt libs
RUN apt-get update && \
apt-get install -y curl git wget && \
apt-get clean && \
rm -rf /var/lib/apt/lists*
# Create our conda env
RUN conda create --name accelerate python=${PYTHON_VERSION} ipython jupyter pip
# We don't install pytorch here yet since CUDA isn't available
# instead we use the direct torch wheel
ENV PATH /opt/conda/envs/accelerate/bin:$PATH
# Activate our bash shell
RUN chsh -s /bin/bash
SHELL ["/bin/bash", "-c"]
# Activate the conda env and install torch + accelerate
RUN source activate accelerate && \
python3 -m pip install --no-cache-dir \
git+https://github.com/huggingface/accelerate#egg=accelerate[testing,test_trackers] \
--extra-index-url https://download.pytorch.org/whl/cu113
# Stage 2
FROM nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04 AS build-image
COPY --from=compile-image /opt/conda /opt/conda
ENV PATH /opt/conda/bin:$PATH
# Install apt libs
RUN apt-get update && \
apt-get install -y curl git wget && \
apt-get clean && \
rm -rf /var/lib/apt/lists*
RUN echo "source activate accelerate" >> ~/.profile
# Activate the virtualenv
CMD ["/bin/bash"]


@ -1,22 +1,74 @@
- sections:
- sections:
- local: index
title: 🤗 Accelerate
- local: quicktour
title: Quick tour
- local: installation
- local: basic_tutorials/install
title: Installation
title: Get started
- local: quicktour
title: Quicktour
title: Getting started
- sections:
- local: sagemaker
title: Amazon SageMaker
title: Guides
- local: basic_tutorials/overview
title: Overview
- local: basic_tutorials/migration
title: Migrating to 🤗 Accelerate
- local: basic_tutorials/launch
title: Launching distributed code
- local: basic_tutorials/notebook
title: Launching distributed training from Jupyter Notebooks
title: Tutorials
- sections:
- local: accelerator
title: Accelerator
- local: launcher
title: Notebook Launcher
- local: kwargs
title: Kwargs Handlers
- local: internal
title: Internals
title: API Reference
- local: usage_guides/gradient_accumulation
title: Performing gradient accumulation
- local: usage_guides/fsdp
title: Fully Sharded Data Parallelism
- local: usage_guides/checkpoint
title: Saving and loading training states
- local: usage_guides/deepspeed
title: How to use DeepSpeed
- local: usage_guides/tracking
title: Using experiment trackers
- local: usage_guides/big_modeling
title: How to use large models with small resources
- local: usage_guides/memory
title: How to avoid CUDA Out-of-Memory
- local: usage_guides/sagemaker
title: Using 🤗 Accelerate on SageMaker
- local: usage_guides/mps
title: How to use Apple Silicon M1 GPUs
- local: usage_guides/training_zoo
title: 🤗 Accelerate Example Zoo
title: How-To Guides
- sections:
- local: concept_guides/performance
title: Comparing performance across distributed setups
- local: concept_guides/gradient_synchronization
title: Gradient synchronization
- local: concept_guides/deferring_execution
title: Executing and deferring jobs
- local: concept_guides/training_tpu
title: TPU best practices
title: Concepts and fundamentals
- sections:
- local: package_reference/accelerator
title: Main Accelerator class
- local: package_reference/state
title: Stateful configuration classes
- local: package_reference/cli
title: The Command Line
- local: package_reference/torch_wrappers
title: Torch wrapper classes
- local: package_reference/tracking
title: Experiment trackers
- local: package_reference/launchers
title: Distributed launchers
- local: package_reference/deepspeed
title: DeepSpeed utilities
- local: package_reference/logging
title: Logging
- local: package_reference/big_modeling
title: Working with large models
- local: package_reference/kwargs
title: Kwargs handlers
- local: package_reference/utilities
title: Utility functions and classes
title: "Reference"


@ -1,41 +0,0 @@
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Accelerator
The [`Accelerator`] is the main class provided by 🤗 Accelerate. It serves at the main entrypoint for
the API. To quickly adapt your script to work on any kind of setup with 🤗 Accelerate juste:
1. Initialize an [`Accelerator`] object (that we will call `accelerator` in the rest of this
page) as early as possible in your script.
2. Pass along your model(s), optimizer(s), dataloader(s) to the [`~Accelerator.prepare`] method.
3. (Optional but best practice) Remove all the `.cuda()` or `.to(device)` in your code and let the
`accelerator` handle device placement for you.
4. Replace the `loss.backward()` in your code by `accelerator.backward(loss)`.
5. (Optional, when using distributed evaluation) Gather your predictions and labelsbefore storing them or using them
for metric computation using [`~Accelerator.gather`].
This is all what is needed in most cases. For more advanced case or a nicer experience here are the functions you
should search for and replace by the corresponding methods of your `accelerator`:
- `print` statements should be replaced by [`~Accelerator.print`] to be only printed once per
process.
- Use [`~Accelerator.is_local_main_process`] for statements that should be executed once per server.
- Use [`~Accelerator.is_main_process`] for statements that should be executed once only.
- Use [`~Accelerator.wait_for_everyone`] to make sure all processes join that point before continuing
(useful before a model save for instance).
- Use [`~Accelerator.unwrap_model`] to unwrap your model before saving it.
- Use [`~Accelerator.save`] instead of `torch.save`.
- Use [`~Accelerator.clip_grad_norm_`] instead of `torch.nn.utils.clip_grad_norm_` and
[`~Accelerator.clip_grad_value_`] instead of `torch.nn.utils.clip_grad_value_`.
[[autodoc]] Accelerator


@ -0,0 +1,99 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Installation and Configuration
Before you start, you will need to set up your environment, install the appropriate packages, and configure 🤗 Accelerate. 🤗 Accelerate is tested on **Python 3.7+**.
## Installing 🤗 Accelerate
🤗 Accelerate is available on pypi and conda, as well as on GitHub. Details to install from each are below:
### pip
To install 🤗 Accelerate from pypi, perform:
```bash
pip install accelerate
```
### conda
🤗 Accelerate can also be installed with conda with:
```bash
conda install -c conda-forge accelerate
```
### Source
New features are added every day that haven't been released yet. To try them out yourself, install
from the GitHub repository:
```bash
pip install git+https://github.com/huggingface/accelerate
```
If you're working on contributing to the library or wish to play with the source code and see live
results as you run the code, an editable version can be installed from a locally-cloned version of the
repository:
```bash
git clone https://github.com/huggingface/accelerate
cd accelerate
pip install -e .
```
## Configuring 🤗 Accelerate
After installing, you need to configure 🤗 Accelerate for how the current system is set up for training.
To do so run the following and answer the questions prompted to you:
```bash
accelerate config
```
To write a barebones configuration that doesn't include options such as DeepSpeed configuration or running on TPUs, you can quickly run:
```bash
python -c "from accelerate.utils import write_basic_config; write_basic_config(mixed_precision='fp16')"
```
🤗 Accelerate will automatically utilize the maximum number of GPUs available and set the mixed precision mode.
To check that your configuration looks fine, run:
```bash
accelerate env
```
An example output is shown below, which describes two GPUs on a single machine with no mixed precision being used:
```bash
- `Accelerate` version: 0.11.0.dev0
- Platform: Linux-5.10.0-15-cloud-amd64-x86_64-with-debian-11.3
- Python version: 3.7.12
- Numpy version: 1.19.5
- PyTorch version (GPU?): 1.12.0+cu102 (True)
- `Accelerate` default config:
- compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_GPU
- mixed_precision: no
- use_cpu: False
- num_processes: 2
- machine_rank: 0
- num_machines: 1
- main_process_ip: None
- main_process_port: None
- main_training_function: main
- deepspeed_config: {}
- fsdp_config: {}
```


@ -0,0 +1,178 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Launching your 🤗 Accelerate scripts
In the previous tutorial, you were introduced to how to modify your current training script to use 🤗 Accelerate.
The final version of that code is shown below:
```python
from accelerate import Accelerator
accelerator = Accelerator()
model, optimizer, training_dataloader, scheduler = accelerator.prepare(
model, optimizer, training_dataloader, scheduler
)
for batch in training_dataloader:
optimizer.zero_grad()
inputs, targets = batch
outputs = model(inputs)
loss = loss_function(outputs, targets)
accelerator.backward(loss)
optimizer.step()
scheduler.step()
```
But how do you run this code and have it utilize the special hardware available to it?
First you should rewrite the above code into a function, and make it callable as a script. For example:
```diff
from accelerate import Accelerator
+ def main():
accelerator = Accelerator()
model, optimizer, training_dataloader, scheduler = accelerator.prepare(
model, optimizer, training_dataloader, scheduler
)
for batch in training_dataloader:
optimizer.zero_grad()
inputs, targets = batch
outputs = model(inputs)
loss = loss_function(outputs, targets)
accelerator.backward(loss)
optimizer.step()
scheduler.step()
+ if __name__ == "__main__":
+ main()
```
Next you need to launch it with `accelerate launch`.
<Tip warning={true}>
It's recommended you run `accelerate config` before using `accelerate launch` to configure your environment to your liking.
Otherwise 🤗 Accelerate will use very basic defaults depending on your system setup.
</Tip>
## Using accelerate launch
🤗 Accelerate has a special CLI command to help you launch your code in your system through `accelerate launch`.
This command wraps around all of the different commands needed to launch your script on various platforms, without you having to remember what each of them are.
<Tip>
If you are familiar with launching scripts in PyTorch yourself such as with `torchrun`, you can still do this. It is not required to use `accelerate launch`.
</Tip>
You can launch your script quickly by using:
```bash
accelerate launch {script_name.py} --arg1 --arg2 ...
```
Just put `accelerate launch` at the start of your command, and pass in additional arguments and parameters to your script afterwards like normal!
Since this runs the various torch spawn methods, all of the expected environment variables can be modified here as well.
For example, here is how to use `accelerate launch` with a single GPU:
```bash
CUDA_VISIBLE_DEVICES="0" accelerate launch {script_name.py} --arg1 --arg2 ...
```
You can also use `accelerate launch` without performing `accelerate config` first, but you may need to manually pass in the right configuration parameters.
In this case, 🤗 Accelerate will make some hyperparameter decisions for you, e.g., if GPUs are available, it will use all of them by default without mixed precision.
Here is how you would use all GPUs and train with mixed precision disabled:
```bash
accelerate launch --multi_gpu {script_name.py} {--arg1} {--arg2} ...
```
To get more specific you should pass in the needed parameters yourself. For instance, here is how you
would also launch that same script on two GPUs using mixed precision while avoiding all of the warnings:
```bash
accelerate launch --multi_gpu --mixed_precision=fp16 --num_processes=2 {script_name.py} {--arg1} {--arg2} ...
```
For a complete list of parameters you can pass in, run:
```bash
accelerate launch -h
```
<Tip>
Even if you are not using 🤗 Accelerate in your code, you can still use the launcher for starting your scripts!
</Tip>
For a visualization of this difference, the earlier `accelerate launch` command on multi-GPU would look something like this with `torchrun`:
```bash
MIXED_PRECISION="fp16" torchrun --nproc_per_node=2 --num_machines=1 {script_name.py} {--arg1} {--arg2} ...
```
## Why you should always use `accelerate config`
Why is it useful to the point you should **always** run `accelerate config`?
Remember that earlier call to `accelerate launch` as well as `torchrun`?
Post configuration, to run that script with the needed parts you just need to use `accelerate launch` outright, without passing anything else in:
```bash
accelerate launch {script_name.py} {--arg1} {--arg2} ...
```
## Custom Configurations
As briefly mentioned earlier, `accelerate launch` should be mostly used through combining set configurations
made with the `accelerate config` command. These configs are saved to a `default_config.yaml` file in your cache folder for 🤗 Accelerate.
This cache folder is located at (with decreasing order of priority):
- The content of your environment variable `HF_HOME` suffixed with `accelerate`.
- If it does not exist, the content of your environment variable `XDG_CACHE_HOME` suffixed with
`huggingface/accelerate`.
- If this does not exist either, the folder `~/.cache/huggingface/accelerate`.
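As a rough illustration (not the library's actual implementation), that lookup order could be sketched in Python as:
```python
import os

# Illustrative sketch of the priority order described above.
def default_config_dir() -> str:
    hf_home = os.environ.get("HF_HOME")
    if hf_home:
        return os.path.join(hf_home, "accelerate")
    xdg_cache = os.environ.get("XDG_CACHE_HOME")
    if xdg_cache:
        return os.path.join(xdg_cache, "huggingface", "accelerate")
    return os.path.expanduser("~/.cache/huggingface/accelerate")

config_file = os.path.join(default_config_dir(), "default_config.yaml")
```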
To have multiple configurations, the flag `--config_file` can be passed to the `accelerate launch` command paired
with the location of the custom yaml.
An example yaml may look something like the following for two GPUs on a single machine using `fp16` for mixed precision:
```yaml
compute_environment: LOCAL_MACHINE
deepspeed_config: {}
distributed_type: MULTI_GPU
fsdp_config: {}
machine_rank: 0
main_process_ip: null
main_process_port: null
main_training_function: main
mixed_precision: fp16
num_machines: 1
num_processes: 2
use_cpu: false
```
Launching a script from the location of that custom yaml file looks like the following:
```bash
accelerate launch --config_file {path/to/config/my_config_file.yaml} {script_name.py} {--arg1} {--arg2} ...
```


@ -0,0 +1,123 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Migrating your code to 🤗 Accelerate
This tutorial will detail how to easily convert existing PyTorch code to use 🤗 Accelerate!
You'll see that by just changing a few lines of code, 🤗 Accelerate can perform its magic and get you on
your way towards running your code on distributed systems with ease!
## The base training loop
To begin, write out a very basic PyTorch training loop.
<Tip>
We are under the presumption that `training_dataloader`, `model`, `optimizer`, `scheduler`, and `loss_function` have been defined beforehand.
</Tip>
```python
device = "cuda"
model.to(device)
for batch in training_dataloader:
optimizer.zero_grad()
inputs, targets = batch
inputs = inputs.to(device)
targets = targets.to(device)
outputs = model(inputs)
loss = loss_function(outputs, targets)
loss.backward()
optimizer.step()
scheduler.step()
```
## Add in 🤗 Accelerate
To start using 🤗 Accelerate, first import and create an [`Accelerator`] instance:
```python
from accelerate import Accelerator
accelerator = Accelerator()
```
[`Accelerator`] is the main force behind utilizing all the possible options for distributed training!
### Setting the right device
The [`Accelerator`] class knows the right device to move any PyTorch object to at any time, so you should
change the definition of `device` to come from [`Accelerator`]:
```diff
- device = 'cuda'
+ device = accelerator.device
model.to(device)
```
### Preparing your objects
Next you need to pass all of the important objects related to training into [`~Accelerator.prepare`]. 🤗 Accelerate will
make sure everything is set up in the current environment for you to start training:
```
model, optimizer, training_dataloader, scheduler = accelerator.prepare(
model, optimizer, training_dataloader, scheduler
)
```
These objects are returned in the same order they were sent in with. By default when using `device_placement=True`, all of the objects that can be sent to the right device will be.
If you need to work with data that isn't passed to [`~Accelerator.prepare`] but should be on the active device, move it there using the `device` you defined earlier, as shown in the sketch below.
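A minimal sketch of what that can look like, using a hypothetical `class_weights` tensor that is needed in the loss but never passed to `prepare`:
```python
import torch

# Hypothetical tensor that `prepare` never sees; move it to the active device manually.
class_weights = torch.tensor([1.0, 2.0, 0.5])
class_weights = class_weights.to(device)  # `device` is `accelerator.device` from earlier
```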
<Tip warning={true}>
Accelerate will only prepare objects that inherit from their respective PyTorch classes (such as `torch.optim.Optimizer`).
</Tip>
### Modifying the training loop
Finally, three lines of code need to be changed in the training loop. 🤗 Accelerate's DataLoader classes will automatically handle the device placement by default,
and [`~Accelerator.backward`] should be used for performing the backward pass:
```diff
- inputs = inputs.to(device)
- targets = targets.to(device)
outputs = model(inputs)
loss = loss_function(outputs, targets)
- loss.backward()
+ accelerator.backward(loss)
```
With that, your training loop is now ready to use 🤗 Accelerate!
## The finished code
Below is the final version of the converted code:
```python
from accelerate import Accelerator
accelerator = Accelerator()
model, optimizer, training_dataloader, scheduler = accelerator.prepare(
model, optimizer, training_dataloader, scheduler
)
for batch in training_dataloader:
optimizer.zero_grad()
inputs, targets = batch
outputs = model(inputs)
loss = loss_function(outputs, targets)
accelerator.backward(loss)
optimizer.step()
scheduler.step()
```


@ -0,0 +1,429 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Launching Multi-Node Training from a Jupyter Environment
This tutorial teaches you how to fine-tune a computer vision model with 🤗 Accelerate from a Jupyter Notebook on a distributed system.
You will also learn how to set up a few requirements needed for ensuring your environment is configured properly, your data has been prepared properly, and finally how to launch training.
<Tip>
This tutorial is also available as a Jupyter Notebook [here](https://github.com/huggingface/notebooks/blob/main/examples/accelerate_examples/simple_cv_example.ipynb)
</Tip>
## Configuring the Environment
Before any training can be performed, a 🤗 Accelerate config file must exist in the system. Usually this can be done by running the following in a terminal and answering the prompts:
```bash
accelerate config
```
However, if general defaults are fine and you are *not* running on a TPU, 🤗 Accelerate has a utility to quickly write your GPU configuration into a config file via [`utils.write_basic_config`].
The following code will restart Jupyter after writing the configuration, as CUDA code was called to perform this.
<Tip warning={true}>
CUDA can't be initialized more than once on a multi-node system. It's fine to debug in the notebook and have calls to CUDA, but in order to finally train, a full cleanup and restart will need to be performed.
</Tip>
```python
import os
from accelerate.utils import write_basic_config
write_basic_config() # Write a config file
os._exit(00) # Restart the notebook
```
## Preparing the Dataset and Model
Next you should prepare your dataset. As mentioned earlier, great care should be taken when preparing the `DataLoaders` and model to make sure that **nothing** is put on *any* GPU.
If you do, it is recommended to put that specific code into a function and call that from within the notebook launcher interface, which will be shown later.
Make sure the dataset is downloaded based on the directions [here](https://github.com/huggingface/accelerate/tree/main/examples#simple-vision-example)
```python
import os, re, torch, PIL
import numpy as np
from torch.optim.lr_scheduler import OneCycleLR
from torch.utils.data import DataLoader, Dataset
from torchvision.transforms import Compose, RandomResizedCrop, Resize, ToTensor
from accelerate import Accelerator
from accelerate.utils import set_seed
from timm import create_model
```
First you need to create a function to extract the class name based on a filename:
```python
import os
data_dir = "../../images"
fnames = os.listdir(data_dir)
fname = fnames[0]
print(fname)
```
```python out
beagle_32.jpg
```
In the case here, the label is `beagle`. Using regex you can extract the label from the filename:
```python
import re
def extract_label(fname):
stem = fname.split(os.path.sep)[-1]
return re.search(r"^(.*)_\d+\.jpg$", stem).groups()[0]
```
```python
extract_label(fname)
```
And you can see it properly returned the right name for our file:
```python out
"beagle"
```
Next a `Dataset` class should be made to handle grabbing the image and the label:
```python
class PetsDataset(Dataset):
def __init__(self, file_names, image_transform=None, label_to_id=None):
self.file_names = file_names
self.image_transform = image_transform
self.label_to_id = label_to_id
def __len__(self):
return len(self.file_names)
def __getitem__(self, idx):
fname = self.file_names[idx]
raw_image = PIL.Image.open(fname)
image = raw_image.convert("RGB")
if self.image_transform is not None:
image = self.image_transform(image)
label = extract_label(fname)
if self.label_to_id is not None:
label = self.label_to_id[label]
return {"image": image, "label": label}
```
Now to build the dataset. Outside the training function you can find and declare all the filenames and labels and use them as references inside the
launched function:
```python
fnames = [os.path.join("../../images", fname) for fname in fnames if fname.endswith(".jpg")]
```
Next gather all the labels:
```python
all_labels = [extract_label(fname) for fname in fnames]
id_to_label = list(set(all_labels))
id_to_label.sort()
label_to_id = {lbl: i for i, lbl in enumerate(id_to_label)}
```
Next, you should make a `get_dataloaders` function that will return your built dataloaders for you. As mentioned earlier, if data is automatically
sent to the GPU or a TPU device when building your `DataLoaders`, they must be built using this method.
```python
def get_dataloaders(batch_size: int = 64):
"Builds a set of dataloaders with a batch_size"
random_perm = np.random.permutation(len(fnames))
cut = int(0.8 * len(fnames))
train_split = random_perm[:cut]
    eval_split = random_perm[cut:]
# For training a simple RandomResizedCrop will be used
train_tfm = Compose([RandomResizedCrop((224, 224), scale=(0.5, 1.0)), ToTensor()])
train_dataset = PetsDataset([fnames[i] for i in train_split], image_transform=train_tfm, label_to_id=label_to_id)
# For evaluation a deterministic Resize will be used
eval_tfm = Compose([Resize((224, 224)), ToTensor()])
eval_dataset = PetsDataset([fnames[i] for i in eval_split], image_transform=eval_tfm, label_to_id=label_to_id)
# Instantiate dataloaders
train_dataloader = DataLoader(train_dataset, shuffle=True, batch_size=batch_size, num_workers=4)
eval_dataloader = DataLoader(eval_dataset, shuffle=False, batch_size=batch_size * 2, num_workers=4)
return train_dataloader, eval_dataloader
```
Finally, you should import the scheduler to be used later:
```python
from torch.optim.lr_scheduler import CosineAnnealingLR
```
## Writing the Training Function
Now you can build the training loop. [`notebook_launcher`] works by passing in a function to call that will be run across the distributed system.
Here is a basic training loop for the animal classification problem:
<Tip>
The code has been split up to allow for explanations of each section. A full version that can be copied and pasted will be available at the end.
</Tip>
```python
def training_loop(mixed_precision="fp16", seed: int = 42, batch_size: int = 64):
set_seed(seed)
accelerator = Accelerator(mixed_precision=mixed_precision)
```
First you should set the seed and create an [`Accelerator`] object as early in the training loop as possible.
<Tip warning={true}>
If training on the TPU, your training loop should take in the model as a parameter and it should be instantiated
outside of the training loop function. See the [TPU best practices](../concept_guides/training_tpu)
to learn why
</Tip>
Next you should build your dataloaders and create your model:
```python
train_dataloader, eval_dataloader = get_dataloaders(batch_size)
model = create_model("resnet50d", pretrained=True, num_classes=len(label_to_id))
```
<Tip>
You build the model here so that the seed also controls the new weight initialization
</Tip>
As you are performing transfer learning in this example, the encoder of the model starts out frozen so the head of the model can be
trained only initially:
```python
for param in model.parameters():
param.requires_grad = False
for param in model.get_classifier().parameters():
param.requires_grad = True
```
Normalizing the batches of images will make training a little faster:
```python
mean = torch.tensor(model.default_cfg["mean"])[None, :, None, None]
std = torch.tensor(model.default_cfg["std"])[None, :, None, None]
```
To make these constants available on the active device, move them to the Accelerator's device:
```python
mean = mean.to(accelerator.device)
std = std.to(accelerator.device)
```
Next instantiate the rest of the PyTorch classes used for training:
```python
optimizer = torch.optim.Adam(params=model.parameters(), lr=3e-2 / 25)
lr_scheduler = OneCycleLR(optimizer=optimizer, max_lr=3e-2, epochs=5, steps_per_epoch=len(train_dataloader))
```
Then pass everything to [`~Accelerator.prepare`].
<Tip>
There is no specific order to remember, you just need to unpack the objects in the same order you gave them to the prepare method.
</Tip>
```python
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
)
```
Now train the model:
```python
for epoch in range(5):
model.train()
for batch in train_dataloader:
inputs = (batch["image"] - mean) / std
outputs = model(inputs)
loss = torch.nn.functional.cross_entropy(outputs, batch["label"])
accelerator.backward(loss)
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
```
The evaluation loop will look slightly different compared to the training loop. The number of elements passed as well as the overall
total accuracy of each batch will be added to two constants:
```python
model.eval()
accurate = 0
num_elems = 0
```
Next you have the rest of your standard PyTorch loop:
```python
for batch in eval_dataloader:
inputs = (batch["image"] - mean) / std
with torch.no_grad():
outputs = model(inputs)
predictions = outputs.argmax(dim=-1)
```
Finally, the last major difference.
When performing distributed evaluation, the predictions and labels need to be passed through
[`~Accelerator.gather`] so that all of the data is available on the current device and a properly calculated metric can be achieved:
```python
accurate_preds = accelerator.gather(predictions) == accelerator.gather(batch["label"])
num_elems += accurate_preds.shape[0]
accurate += accurate_preds.long().sum()
```
Now you just need to calculate the actual metric for this problem, and you can print it on the main process using [`~Accelerator.print`]:
```python
eval_metric = accurate.item() / num_elems
accelerator.print(f"epoch {epoch}: {100 * eval_metric:.2f}")
```
A full version of this training loop is available below:
```python
def training_loop(mixed_precision="fp16", seed: int = 42, batch_size: int = 64):
set_seed(seed)
# Initialize accelerator
accelerator = Accelerator(mixed_precision=mixed_precision)
# Build dataloaders
train_dataloader, eval_dataloader = get_dataloaders(batch_size)
    # Instantiate the model (you build the model here so that the seed also controls new weight initializations)
model = create_model("resnet50d", pretrained=True, num_classes=len(label_to_id))
# Freeze the base model
for param in model.parameters():
param.requires_grad = False
for param in model.get_classifier().parameters():
param.requires_grad = True
# You can normalize the batches of images to be a bit faster
mean = torch.tensor(model.default_cfg["mean"])[None, :, None, None]
std = torch.tensor(model.default_cfg["std"])[None, :, None, None]
# To make this constant available on the active device, set it to the accelerator device
mean = mean.to(accelerator.device)
std = std.to(accelerator.device)
    # Instantiate the optimizer
optimizer = torch.optim.Adam(params=model.parameters(), lr=3e-2 / 25)
# Instantiate the learning rate scheduler
lr_scheduler = OneCycleLR(optimizer=optimizer, max_lr=3e-2, epochs=5, steps_per_epoch=len(train_dataloader))
# Prepare everything
# There is no specific order to remember, you just need to unpack the objects in the same order you gave them to the
# prepare method.
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
)
# Now you train the model
for epoch in range(5):
model.train()
for batch in train_dataloader:
inputs = (batch["image"] - mean) / std
outputs = model(inputs)
loss = torch.nn.functional.cross_entropy(outputs, batch["label"])
accelerator.backward(loss)
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
model.eval()
accurate = 0
num_elems = 0
for batch in eval_dataloader:
inputs = (batch["image"] - mean) / std
with torch.no_grad():
outputs = model(inputs)
predictions = outputs.argmax(dim=-1)
accurate_preds = accelerator.gather(predictions) == accelerator.gather(batch["label"])
num_elems += accurate_preds.shape[0]
accurate += accurate_preds.long().sum()
eval_metric = accurate.item() / num_elems
# Use accelerator.print to print only on the main process.
accelerator.print(f"epoch {epoch}: {100 * eval_metric:.2f}")
```
## Using the notebook_launcher
All that's left is to use the [`notebook_launcher`].
You pass in the function, the arguments (as a tuple), and the number of processes to train on. (See the [documentation](../package_reference/launchers) for more information)
```python
from accelerate import notebook_launcher
```
```python
args = ("fp16", 42, 64)
notebook_launcher(training_loop, args, num_processes=2)
```
In the case of running on the TPU, it would look like so:
```python
model = create_model("resnet50d", pretrained=True, num_classes=len(label_to_id))
args = (model, "fp16", 42, 64)
notebook_launcher(training_loop, args, num_processes=8)
```
As it's running, it will print the progress as well as state how many devices you ran on. This tutorial was run with two GPUs:
```python out
Launching training on 2 GPUs.
epoch 0: 88.12
epoch 1: 91.73
epoch 2: 92.58
epoch 3: 93.90
epoch 4: 94.71
```
And that's it!
## Conclusion
This notebook showed how to perform distributed training from inside of a Jupyter Notebook. Some key notes to remember:
- Make sure to save any code that uses CUDA (or CUDA imports) for the function passed to [`notebook_launcher`]
- Set the `num_processes` to be the number of devices used for training (such as number of GPUs, CPUs, TPUs, etc)
- If using the TPU, declare your model outside the training loop function


@ -0,0 +1,21 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Overview
Welcome to the 🤗 Accelerate tutorials! These introductory guides will help catch you up to speed on working with 🤗 Accelerate.
You'll learn how to modify your code to have it work with the API seamlessly, how to launch your script properly,
and more!
These tutorials assume some basic knowledge of Python and familiarity with the PyTorch framework.
If you have any questions about 🤗 Accelerate, feel free to join and ask the community on our [forum](https://discuss.huggingface.co/c/accelerate/18).


@ -0,0 +1,107 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Deferring Executions
When you run your usual script, instructions are executed in order. Using 🤗 Accelerate to deploy your script on several
GPUs at the same time introduces a complication: while each process executes all instructions in order, some may be
faster than others.
You might need to wait for all processes to have reached a certain point before executing a given instruction. For
instance, you shouldn't save a model before being sure every process is done with training, and you wouldn't want to
continue training before all the model weights have been loaded in. To do this, just write the following line in your code:
```
accelerator.wait_for_everyone()
```
This instruction will block all the processes that arrive first until all the other processes have reached that
point (if you run your script on just one GPU or CPU, this won't do anything).
A few example cases for when to use this utility are listed below:
<Tip>
Some of these are utilized with the [`~Accelerator.main_process_first`] context manager, which utilizes [`~Accelerator.wait_for_everyone`] to
run a particular set of code on the main process first before triggering and launching the other processes.
</Tip>
## Downloading a Dataset
When downloading a dataset, you should download it first on the main process and then load the cached dataset afterwards.
<Tip>
`load_dataset` will perform a lock under the hood to stop multiple downloads from happening at once, but if you are downloading something
not using this library you should use this method.
</Tip>
```python
with accelerator.main_process_first():
datasets = load_dataset("glue", "mrpc")
```
Under the hood this is the same as calling:
```python
# First do something on the main process
if accelerator.is_main_process:
datasets = load_dataset("glue", "mrpc")
else:
accelerator.wait_for_everyone()
# And then send it to the rest of them
if not accelerator.is_main_process:
datasets = load_dataset("glue", "mrpc")
else:
accelerator.wait_for_everyone()
```
## Saving the `state_dict`
When saving the `state_dict` of the model, since you would normally save only one file, and only from the main process,
you should specify that:
```python
if accelerator.is_main_process:
    model = accelerator.unwrap_model(model)
    torch.save(model.state_dict(), "weights.pth")
```
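If other processes may still be training when this point is reached, a reasonable pattern is to combine this with [`~Accelerator.wait_for_everyone`] as described above; a short sketch:
```python
# Make sure every process is done training before saving
accelerator.wait_for_everyone()
if accelerator.is_main_process:
    # Remove the distributed wrappers before saving
    unwrapped_model = accelerator.unwrap_model(model)
    torch.save(unwrapped_model.state_dict(), "weights.pth")
```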
## Loading in the `state_dict`
When loading in the `state_dict` to a model, optimizer, or scheduler, you should wait
for all workers to have the weights loaded in before moving on to training:
```python
with accelerator.main_process_first():
    state = torch.load("weights.pth")
    model.load_state_dict(state)
```
## Applying a multi-worker CPU operation
Applying a `map()` operation on multiple workers, such as tokenizing, should be done on the
main process first, and then propagated to each one:
```python
datasets = load_dataset("glue", "mrpc")
with accelerator.main_process_first():
    tokenized_datasets = datasets.map(
        tokenize_function,
        batched=True,
        remove_columns=["idx", "sentence1", "sentence2"],
    )
```

View File

@ -0,0 +1,119 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Gradient Synchronization
PyTorch's distributed module operates by communicating back and forth between all of the GPUs in your system.
This communication takes time, and ensuring all processes know the states of each other happens at particular trigger points
when using the `ddp` module.
These trigger points are added to the PyTorch model, specifically its `forward()` and `backward()` methods.
This happens when the model is wrapped with `DistributedDataParallel`:
```python
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel
model = nn.Linear(10, 10)
ddp_model = DistributedDataParallel(model)
```
In 🤗 Accelerate this conversion happens automatically when calling [`~Accelerator.prepare`] and passing in your model.
```diff
+ from accelerate import Accelerator
+ accelerator = Accelerator()
import torch.nn as nn
- from torch.nn.parallel import DistributedDataParallel
model = nn.Linear(10,10)
+ model = accelerator.prepare(model)
```
## The slowdown in gradient accumulation
You now understand that PyTorch adds hooks to the `forward` and `backward` methods of your PyTorch model when
training in a distributed setup. But how does this risk slowing down your code?
In DDP (distributed data parallel), operations are expected to be performed and completed in a specific order, and
they must also occur at roughly the same time on every process before moving on.
The most direct example is when you update all of the parameters in a model through `.backward()`. All instances of the model
need to have their gradients updated, collated, and synchronized before moving on to the next batch of data. But when performing
gradient accumulation, you accumulate `n` losses and only step the optimizer once `n` batches have been reached, while `.backward()`
is still called on every batch. This can cause a significant slowdown, since all of the processes synchronize their gradients more
often than necessary. How can you avoid this overhead?
## Solving the slowdown problem
Since the gradients only need to be synchronized right before the optimizer is stepped, the synchronization on the intermediate batches can be skipped.
PyTorch cannot automagically tell when you need to do this, but it does provide a tool to help through the [`no_sync`](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel.no_sync) context manager
that is added to your model after converting it to DDP.
Under this context manager, PyTorch will skip synchronizing the gradients when `.backward()` is called, and the first call to `.backward()` outside this
context manager will trigger the synchronization. See an example below:
```python
ddp_model, dataloader = accelerator.prepare(model, dataloader)

for index, batch in enumerate(dataloader):
    inputs, targets = batch
    # Trigger gradient synchronization on the last batch
    if index != (len(dataloader) - 1):
        with ddp_model.no_sync():
            # Gradients only accumulate
            outputs = ddp_model(inputs)
            loss = loss_func(outputs, targets)
            accelerator.backward(loss)
    else:
        # Gradients finally sync
        outputs = ddp_model(inputs)
        loss = loss_func(outputs, targets)
        accelerator.backward(loss)
```
To make this an API that can be called regardless of the training device (though it may not do anything if you are not in a distributed system!),
🤗 Accelerate replaces `ddp_model.no_sync` with [`~Accelerator.no_sync`], which operates the same way:
```diff
ddp_model, dataloader = accelerator.prepare(model, dataloader)

for index, batch in enumerate(dataloader):
    inputs, targets = batch
    # Trigger gradient synchronization on the last batch
    if index != (len(dataloader) - 1):
-       with ddp_model.no_sync():
+       with accelerator.no_sync(model):
            # Gradients only accumulate
            outputs = ddp_model(inputs)
            loss = loss_func(outputs, targets)
            accelerator.backward(loss)
    else:
        # Gradients finally sync
        outputs = ddp_model(inputs)
        loss = loss_func(outputs, targets)
        accelerator.backward(loss)
```
As you may expect, the [`~Accelerator.accumulate`] function wraps around this conditional check by keeping track of the current batch number, leaving you with the final
gradient accumulation API:
```python
ddp_model, dataloader = accelerator.prepare(model, dataloader)

for batch in dataloader:
    with accelerator.accumulate(model):
        optimizer.zero_grad()
        inputs, targets = batch
        outputs = model(inputs)
        loss = loss_function(outputs, targets)
        accelerator.backward(loss)
```
As a result, you should use either `accelerator.accumulate` or `accelerator.no_sync` when it comes to API choice.

View File

@ -0,0 +1,91 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Comparing performance between different device setups
Evaluating and comparing the performance from different setups can be quite tricky if you don't know what to look for.
For example, you cannot run the same script with the same batch size across TPU, multi-GPU, and single-GPU with Accelerate
and expect your results to line up.
But why?
There are three reasons for this, which this tutorial will cover:
1. **Setting the right seeds**
2. **Observed Batch Sizes**
3. **Learning Rates**
## Setting the Seed
While this issue has not come up as much, make sure to use [`utils.set_seed`] to fully set the seed in all distributed cases so training will be reproducible:
```python
from accelerate import set_seed
set_seed(42)
```
Why is this important? Under the hood this will set **5** different seed settings:
```python
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
# ^^ safe to call this function even if cuda is not available
if is_tpu_available():
    xm.set_rng_state(seed)
```
These cover the Python `random` state, NumPy's state, torch's state, torch's CUDA state, and, if TPUs are available, torch_xla's random state.
## Observed Batch Sizes
When training with Accelerate, the batch size passed to the dataloader is the **batch size per GPU**. What this entails is
that a batch size of 64 on two GPUs is truly a batch size of 128. As a result, when testing on a single GPU this needs to be accounted for,
and similarly for TPUs.
The below table can be used as a quick reference to try out different batch sizes:
<Tip>
In this example there are two GPUs for "Multi-GPU" and a TPU pod with 8 workers
</Tip>
| Single GPU Batch Size | Multi-GPU Equivalent Batch Size | TPU Equivalent Batch Size |
|-----------------------|---------------------------------|---------------------------|
| 256 | 128 | 32 |
| 128 | 64 | 16 |
| 64 | 32 | 8 |
| 32 | 16 | 4 |
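If you would rather keep the observed (total) batch size fixed in your script, one approach (sketched below, with an arbitrary total batch size) is to derive the per-device batch size from `accelerator.num_processes`:
```python
from accelerate import Accelerator

accelerator = Accelerator()

observed_batch_size = 256  # the total batch size you want across all devices
# Each dataloader then receives the per-device share
per_device_batch_size = observed_batch_size // accelerator.num_processes
```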
## Learning Rates
As noted in multiple sources[[1](https://aws.amazon.com/blogs/machine-learning/scalable-multi-node-deep-learning-training-using-gpus-in-the-aws-cloud/)][[2](https://docs.nvidia.com/clara/tlt-mi_archive/clara-train-sdk-v2.0/nvmidl/appendix/training_with_multiple_gpus.html)], the learning rate should be scaled *linearly* based on the number of devices present. The below
snippet shows doing so with Accelerate:
<Tip>
Since users can have their own learning rate schedulers defined, we leave this up to the user to decide if they wish to scale their
learning rate or not.
</Tip>
```python
learning_rate = 1e-3
accelerator = Accelerator()
learning_rate *= accelerator.num_processes
optimizer = AdamW(params=model.parameters(), lr=learning_rate)
```

View File

@ -0,0 +1,164 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Training on TPUs with 🤗 Accelerate
Training on TPUs can be slightly different from training on multi-GPU, even with 🤗 Accelerate. This guide aims to show you
where you should be careful and why, as well as the best practices in general.
## Training in a Notebook
The main point to be careful about when training on TPUs comes from the [`notebook_launcher`]. As mentioned in the [notebook tutorial](../usage_guides/notebook), you need to
restructure your training code into a function that can get passed to the [`notebook_launcher`] function and be careful about not declaring any tensors on the GPU.
While on a TPU that last part is not as important, a critical part to understand is that when you launch code from a notebook you do so through a process called **forking**.
When launching from the command line, you perform **spawning**: no Python process is currently running, so you *spawn* a new one. Since your Jupyter notebook is already
using a Python process, you need to *fork* a new process from it to launch your code.
Where this becomes important is with regard to declaring your model. On forked TPU processes, it is recommended that you instantiate your model *once* and pass this into your
training function. This is different from training on GPUs, where you create `n` models that have their gradients synced and back-propagated at certain moments. Instead, one
model instance is shared between all the nodes and it is passed back and forth. This is especially important when training on low-resource TPUs such as those provided in Kaggle kernels or
on Google Colaboratory.
Below is an example of a training function passed to the [`notebook_launcher`] if training on CPUs or GPUs:
<Tip>
This code snippet is based on the one from the `simple_nlp_example` notebook found [here](https://github.com/huggingface/notebooks/blob/main/examples/accelerate/simple_nlp_example.ipynb) with slight
modifications for the sake of simplicity
</Tip>
```python
def training_function():
    # Initialize accelerator
    accelerator = Accelerator()
    model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
    train_dataloader, eval_dataloader = create_dataloaders(
        train_batch_size=hyperparameters["train_batch_size"], eval_batch_size=hyperparameters["eval_batch_size"]
    )

    # Instantiate optimizer
    optimizer = AdamW(params=model.parameters(), lr=hyperparameters["learning_rate"])

    # Prepare everything
    # There is no specific order to remember, we just need to unpack the objects in the same order we gave them to the
    # prepare method.
    model, optimizer, train_dataloader, eval_dataloader = accelerator.prepare(
        model, optimizer, train_dataloader, eval_dataloader
    )

    num_epochs = hyperparameters["num_epochs"]
    # Now we train the model
    for epoch in range(num_epochs):
        model.train()
        for step, batch in enumerate(train_dataloader):
            outputs = model(**batch)
            loss = outputs.loss
            accelerator.backward(loss)
            optimizer.step()
            optimizer.zero_grad()
```
```python
from accelerate import notebook_launcher
notebook_launcher(training_function)
```
<Tip>
The `notebook_launcher` will default to 8 processes if 🤗 Accelerate has been configured for a TPU
</Tip>
If you use this example and declare the model *inside* the training function, then on a low-resource system you will potentially see an error
like:
```
ProcessExitedException: process 0 terminated with signal SIGSEGV
```
This error is *extremely* cryptic but the basic explanation is you ran out of system RAM. You can avoid this entirely by reconfiguring the training function to
accept a single `model` argument, and declare it in an outside cell:
```python
# In another Jupyter cell
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
```
```diff
+ def training_function(model):
    # Initialize accelerator
    accelerator = Accelerator()
-   model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
    train_dataloader, eval_dataloader = create_dataloaders(
        train_batch_size=hyperparameters["train_batch_size"], eval_batch_size=hyperparameters["eval_batch_size"]
    )
    ...
```
And finally call the training function with:
```diff
from accelerate import notebook_launcher
- notebook_launcher(training_function)
+ notebook_launcher(training_function, (model,))
```
<Tip>
The above workaround is only needed when launching a TPU instance from a Jupyter Notebook on a low-resource server such as Google Colaboratory or Kaggle. If
using a script or launching on a much beefier server, declaring the model beforehand is not needed.
</Tip>
## Mixed Precision and Global Variables
As mentioned in the [mixed precision tutorial](../usage_guides/mixed_precision), 🤗 Accelerate supports fp16 and bf16, both of which can be used on TPUs.
That being said, ideally `bf16` should be utilized as it is extremely efficient to use.
There are two "layers" when using `bf16` and 🤗 Accelerate on TPUs, at the base level and at the operation level.
At the base level, this is enabled when passing `mixed_precision="bf16"` to `Accelerator`, such as:
```python
accelerator = Accelerator(mixed_precision="bf16")
```
By default this will cast `torch.float` and `torch.double` to `bfloat16` on TPUs.
The specific configuration being set is that the `XLA_USE_BF16` environment variable is set to `1`.
There is a further configuration you can perform, which is setting the `XLA_DOWNCAST_BF16` environment variable. If set to `1`, then
`torch.float` is `bfloat16` and `torch.double` is `float32`.
This is performed in the `Accelerator` object when passing `downcast_bf16=True`:
```python
accelerator = Accelerator(mixed_precision="bf16", downcast_bf16=True)
```
Using downcasting instead of bf16 everywhere is good for when you are trying to calculate metrics, log values, and more where raw bf16 tensors would be unusable.
## Training Times on TPUs
As you launch your script, you may notice that training seems exceptionally slow at first. This is because TPUs
first run through a few batches of data to see how much memory to allocate before finally utilizing this configured
memory allocation extremely efficiently.
If you notice that the evaluation code used to calculate your model's metrics takes longer because a larger batch size is being used,
it is recommended to keep the evaluation batch size the same as the training batch size if it is too slow. Otherwise the memory will be reallocated for this
new batch size after the first few iterations.
<Tip>
Just because the memory is allocated does not mean it will be used or that the batch size will increase when going back to your training dataloader.
</Tip>

View File

@ -1,4 +1,4 @@
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
@ -12,121 +12,60 @@ specific language governing permissions and limitations under the License.
# Accelerate
Run your *raw* PyTorch training script on any kind of device
## Features
- 🤗 Accelerate provides an easy API to make your scripts run with mixed precision and on any kind of distributed
setting (multi-GPUs, TPUs etc.) while still letting you write your own training loop. The same code can then runs
seamlessly on your local machine for debugging or your training environment.
- 🤗 Accelerate also provides a CLI tool that allows you to quickly configure and test your training environment then
launch the scripts.
## Easy to integrate
A traditional training loop in PyTorch looks like this:
```python
my_model.to(device)
for batch in my_training_dataloader:
my_optimizer.zero_grad()
inputs, targets = batch
inputs = inputs.to(device)
targets = targets.to(device)
outputs = my_model(inputs)
loss = my_loss_function(outputs, targets)
loss.backward()
my_optimizer.step()
```
Changing it to work with accelerate is really easy and only adds a few lines of code:
🤗 Accelerate is a library that enables the same PyTorch code to be run across any distributed configuration by adding just four lines of code! In short, training and inference at scale made simple, efficient and adaptable.
```diff
+ from accelerate import Accelerator
+ accelerator = Accelerator()
# Use the device given by the *accelerator* object.
+ device = accelerator.device
my_model.to(device)
# Pass every important object (model, optimizer, dataloader) to *accelerator.prepare*
+ my_model, my_optimizer, my_training_dataloader = accelerate.prepare(
+ my_model, my_optimizer, my_training_dataloader
+ model, optimizer, training_dataloader, scheduler = accelerator.prepare(
+ model, optimizer, training_dataloader, scheduler
+ )
for batch in my_training_dataloader:
my_optimizer.zero_grad()
for batch in training_dataloader:
optimizer.zero_grad()
inputs, targets = batch
inputs = inputs.to(device)
targets = targets.to(device)
outputs = my_model(inputs)
loss = my_loss_function(outputs, targets)
# Just a small change for the backward instruction
- loss.backward()
outputs = model(inputs)
loss = loss_function(outputs, targets)
+ accelerator.backward(loss)
my_optimizer.step()
optimizer.step()
scheduler.step()
```
and with this, your script can now run in a distributed environment (multi-GPU, TPU).
Built on `torch_xla` and `torch.distributed`, 🤗 Accelerate takes care of the heavy lifting, so you don't have to write any custom code to adapt to these platforms.
Convert existing codebases to utilize [DeepSpeed](usage_guides/deepspeed), perform [fully sharded data parallelism](usage_guides/fsdp), and have automatic support for mixed-precision training!
You can even simplify your script a bit by letting 🤗 Accelerate handle the device placement for you (which is safer,
especially for TPU training):
<Tip>
```diff
+ from accelerate import Accelerator
To get a better idea of this process, make sure to check out the [Tutorials](basic_tutorials/overview)!
+ accelerator = Accelerator()
- my_model.to(device)
# Pass every important object (model, optimizer, dataloader) to *accelerator.prepare*
+ my_model, my_optimizer, my_training_dataloader = accelerate.prepare(
+ my_model, my_optimizer, my_training_dataloader
+ )
</Tip>
for batch in my_training_dataloader:
my_optimizer.zero_grad()
inputs, targets = batch
- inputs = inputs.to(device)
- targets = targets.to(device)
outputs = my_model(inputs)
loss = my_loss_function(outputs, targets)
# Just a small change for the backward instruction
- loss.backward()
+ accelerator.backward(loss)
my_optimizer.step()
```
## Script launcher
No need to remember how to use `torch.distributed.launch` or to write a specific launcher for TPU training! 🤗
Accelerate comes with a CLI tool that will make your life easier when launching distributed scripts.
On your machine(s) just run:
This code can then be launched on any system through Accelerate's CLI interface:
```bash
accelerate config
accelerate launch {my_script.py}
```
and answer the questions asked. This will generate a config file that will be used automatically to properly set the
default options when doing
```bash
accelerate launch my_script.py --args_to_my_script
```
For instance, here is how you would run the NLP example (from the root of the repo):
```bash
accelerate launch examples/nlp_example.py
```
## Supported integrations
- CPU only
- single GPU
- multi-GPU on one node (machine)
- multi-GPU on several nodes (machines)
- TPU
- FP16 with native AMP (apex on the roadmap)
- DeepSpeed (experimental support)
<div class="mt-10">
<div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5">
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="/docs/accelerate/basic_tutorials/overview"
><div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Tutorials</div>
<p class="text-gray-700">Learn the basics and become familiar with using 🤗 Accelerate. Start here if you are using 🤗 Accelerate for the first time!</p>
</a>
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="/docs/accelerate/usage_guides/gradient_accumulation"
><div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">How-to guides</div>
<p class="text-gray-700">Practical guides to help you achieve a specific goal. Take a look at these guides to learn how to use 🤗 Accelerate to solve real-world problems.</p>
</a>
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="/docs/accelerate/concept_guides/gradient_synchronization"
><div class="w-full text-center bg-gradient-to-br from-pink-400 to-pink-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Conceptual guides</div>
<p class="text-gray-700">High-level explanations for building a better understanding of important topics such as avoiding subtle nuances and pitfalls in distributed training and DeepSpeed.</p>
</a>
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="/docs/accelerate/package_reference/accelerator"
><div class="w-full text-center bg-gradient-to-br from-purple-400 to-purple-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Reference</div>
<p class="text-gray-700">Technical descriptions of how 🤗 Accelerate classes and methods work.</p>
</a>
</div>
</div>

View File

@ -1,96 +0,0 @@
<!---
Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
# Installation
🤗 Accelerate is tested on Python 3.6+, and PyTorch 1.6.0+.
You should install 🤗 Accelerate in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're
unfamiliar with Python virtual environments, check out the [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/). Create a virtual environment with the version of Python you're going
to use and activate it.
Now, if you want to use 🤗 Accelerate, you can install it with pip.
## Installation with pip
First you need to install PyTorch. Please refer to the
[PyTorch installation page](https://pytorch.org/get-started/locally/#start-locally) regarding the specific install command for your platform.
When PyTorch has been installed, 🤗 Accelerate can be installed using pip as follows:
```bash
pip install accelerate
```
Alternatively, for CPU-support only, you can install 🤗 Accelerate and PyTorch in one line with:
```bash
pip install accelerate[torch]
```
To check 🤗 Accelerate is properly installed, run the following command:
```bash
python -c "TODO write"
```
## Installing from source
Here is how to quickly install `accelerate` from source:
```bash
pip install git+https://github.com/huggingface/accelerate
```
Note that this will install not the latest released version, but the bleeding edge `main` version, which you may want to use in case a bug has been fixed since the last official release and a new release hasn't been yet rolled out.
While we strive to keep `main` operational at all times, if you notice some issues, they usually get fixed within a few hours or a day and you're more than welcome to help us detect any problems by opening an [Issue](https://github.com/huggingface/accelerate/issues) and this way, things will get fixed even sooner.
Again, you can run:
```bash
python -c "TODO write"
```
to check 🤗 Accelerate is properly installed.
## Editable install
If you want to constantly use the bleeding edge `main` version of the source code, or if you want to contribute to the library and need to test the changes in the code you're making, you will need an editable install. This is done by cloning the repository and installing with the following commands:
``` bash
git clone https://github.com/huggingface/accelerate.git
cd accelerate
pip install -e .
```
This command performs a magical link between the folder you cloned the repository to and your python library paths, and it'll look inside this folder in addition to the normal library-wide paths. So if normally your python packages get installed into:
```
~/anaconda3/envs/main/lib/python3.7/site-packages/
```
now this editable install will reside where you clone the folder to, e.g. `~/accelerate/` and python will search it too.
Do note that you have to keep that `accelerate` folder around and not delete it to continue using the 🤗 Accelerate library.
Now, let's get to the real benefit of this installation approach. Say, you saw some new feature has been just committed into `main`. If you have already performed all the steps above, to update your accelerate repo to include all the latest commits, all you need to do is to `cd` into that cloned repository folder and update the clone to the latest version:
```bash
cd ~/accelerate/
git pull
```
There is nothing else to do. Your python environment will find the bleeding edge version of 🤗 Accelerate on the next run.

View File

@ -1,28 +0,0 @@
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Notebook Launcher
Launch your training function inside a notebook. Currently supports launching a training with TPUs on [Google
Colab](https://colab.research.google.com/) and [Kaggle kernels](https://www.kaggle.com/code), as well as training on
several GPUs (if the machine on which you are running your notebook has them).
An example can be found in [this notebook](https://github.com/huggingface/notebooks/blob/master/examples/accelerate/simple_nlp_example.ipynb).
<Tip warning={true}>
Your `Accelerator` object should only be defined inside the training function. This is because the
initialization should be done inside the launcher only.
</Tip>
[[autodoc]] notebook_launcher

View File

@ -0,0 +1,163 @@
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Accelerator
The [`Accelerator`] is the main class provided by 🤗 Accelerate.
It serves as the main entry point for the API.
## Quick adaptation of your code
To quickly adapt your script to work on any kind of setup with 🤗 Accelerate just:
1. Initialize an [`Accelerator`] object (that we will call `accelerator` throughout this page) as early as possible in your script.
2. Pass your dataloader(s), model(s), optimizer(s), and scheduler(s) to the [`~Accelerator.prepare`] method.
3. Remove all the `.cuda()` or `.to(device)` from your code and let the `accelerator` handle the device placement for you.
<Tip>
Step three is optional, but considered a best practice.
</Tip>
4. Replace `loss.backward()` in your code with `accelerator.backward(loss)`
5. Gather your predictions and labels before storing them or using them for metric computation using [`~Accelerator.gather`]
<Tip warning={true}>
Step five is mandatory when using distributed evaluation
</Tip>
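Putting these five steps together, a minimal sketch of an adapted training loop might look like the following (the model, optimizer, dataloader, and loss function are placeholders you define yourself):
```python
from accelerate import Accelerator

accelerator = Accelerator()
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for inputs, targets in dataloader:
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = loss_function(outputs, targets)
    accelerator.backward(loss)
    optimizer.step()
```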
In most cases this is all that is needed. The next section lists a few more advanced use cases and nice features
you should search for and replace with the corresponding methods of your `accelerator`:
## Advanced recommendations
### Printing
`print` statements should be replaced by [`~Accelerator.print`] to be printed once per process
```diff
- print("My thing I want to print!")
+ accelerator.print("My thing I want to print!")
```
### Executing processes
#### Once on a single server
For statements that should be executed once per server, use [`~Accelerator.is_local_main_process`]:
```python
if accelerator.is_local_main_process:
    do_thing_once_per_server()
```
A function can be wrapped using the [`~Accelerator.on_local_main_process`] function to achieve the same
behavior on a function's execution:
```python
@accelerator.on_local_main_process
def do_my_thing():
"Something done once per server"
do_thing_once_per_server()
```
#### Only ever once across all servers
For statements that should only ever be executed once, use [`~Accelerator.is_main_process`]:
```python
if accelerator.is_main_process:
    do_thing_once()
```
A function can be wrapped using the [`~Accelerator.on_main_process`] function to achieve the same
behavior on a function's execution:
```python
@accelerator.on_main_process
def do_my_thing():
"Something done once per server"
do_thing_once()
```
#### On specific processes
If a function should be run on a specific overall or local process index, there are similar decorators
to achieve this:
```python
@accelerator.on_local_process(local_process_idx=0)
def do_my_thing():
"Something done on process index 0 on each server"
do_thing_on_index_zero_on_each_server()
```
```python
@accelerator.on_process(process_index=0)
def do_my_thing():
"Something done on process index 0"
do_thing_on_index_zero()
```
### Synchronicity control
Use [`~Accelerator.wait_for_everyone`] to make sure all processes join that point before continuing (useful before a model save, for instance).
### Saving and loading
Use [`~Accelerator.unwrap_model`] before saving to remove all special model wrappers added during the distributed process.
```python
model = MyModel()
model = accelerator.prepare(model)
# Unwrap
model = accelerator.unwrap_model(model)
```
Use [`~Accelerator.save`] instead of `torch.save`:
```diff
state_dict = model.state_dict()
- torch.save(state_dict, "my_state.pkl")
+ accelerator.save(state_dict, "my_state.pkl")
```
### Operations
Use [`~Accelerator.clip_grad_norm_`] instead of `torch.nn.utils.clip_grad_norm_` and [`~Accelerator.clip_grad_value_`] instead of `torch.nn.utils.clip_grad_value_`.
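For example, the gradient clipping line of a typical training loop would change as follows (assuming a `max_grad_norm` value of your own choosing):
```diff
- torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
+ accelerator.clip_grad_norm_(model.parameters(), max_grad_norm)
```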
### Gradient Accumulation
To perform gradient accumulation use [`~Accelerator.accumulate`] and specify `gradient_accumulation_steps`.
This will also automatically ensure the gradients are synced or unsynced when on
multi-device training, check if the step should actually be performed, and auto-scale the loss:
```diff
- accelerator = Accelerator()
+ accelerator = Accelerator(gradient_accumulation_steps=2)
  for (input, label) in training_dataloader:
+     with accelerator.accumulate(model):
          predictions = model(input)
          loss = loss_function(predictions, label)
          accelerator.backward(loss)
          optimizer.step()
          scheduler.step()
          optimizer.zero_grad()
```
## Overall API documentation:
[[autodoc]] Accelerator

View File

@ -0,0 +1,41 @@
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Working with large models
## Dispatching and Offloading Models
[[autodoc]] big_modeling.init_empty_weights
[[autodoc]] big_modeling.cpu_offload
[[autodoc]] big_modeling.disk_offload
[[autodoc]] big_modeling.dispatch_model
[[autodoc]] big_modeling.load_checkpoint_and_dispatch
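As a rough sketch of how these utilities fit together (the model class and checkpoint path below are placeholders):
```python
from accelerate import init_empty_weights, load_checkpoint_and_dispatch

# Build the model skeleton without allocating memory for its weights
with init_empty_weights():
    model = MyLargeModel()

# Load the weights and automatically split the model across the available devices
model = load_checkpoint_and_dispatch(model, "path/to/checkpoint", device_map="auto")
```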
## Model Hooks
### Hook Classes
[[autodoc]] hooks.ModelHook
[[autodoc]] hooks.AlignDevicesHook
[[autodoc]] hooks.SequentialHook
### Adding Hooks
[[autodoc]] hooks.add_hook_to_module
[[autodoc]] hooks.attach_execution_device_hook
[[autodoc]] hooks.attach_align_device_hook
[[autodoc]] hooks.attach_align_device_hook_on_blocks
### Removing Hooks
[[autodoc]] hooks.remove_hook_from_module
[[autodoc]] hooks.remove_hook_from_submodules

View File

@ -0,0 +1,153 @@
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# The Command Line
Below is a list of all the available commands 🤗 Accelerate provides, along with their parameters.
## accelerate config
**Command**:
`accelerate config` or `accelerate-config`
Launches a series of prompts to create and save a `default_config.yaml` configuration file for your training system. Should
always be run first on your machine.
**Usage**:
```bash
accelerate config [arguments]
```
**Optional Arguments**:
* `--config_file CONFIG_FILE` (`str`) -- The path to use to store the config file. Will default to a file named default_config.yaml in the cache location, which is the content
of the environment variable `HF_HOME` suffixed with 'accelerate', or if you don't have such an environment variable, your cache directory
(`~/.cache` or the content of `XDG_CACHE_HOME`) suffixed with `huggingface`.
* `-h`, `--help` (`bool`) -- Show a help message and exit
## accelerate env
**Command**:
`accelerate env` or `accelerate-env`
Lists the contents of the passed 🤗 Accelerate configuration file. Should always be used when opening an issue on the [GitHub repository](https://github.com/huggingface/accelerate).
**Usage**:
```bash
accelerate env [arguments]
```
**Optional Arguments**:
* `--config_file CONFIG_FILE` (`str`) -- The path to use to store the config file. Will default to a file named default_config.yaml in the cache location, which is the content
of the environment variable `HF_HOME` suffixed with 'accelerate', or if you don't have such an environment variable, your cache directory
(`~/.cache` or the content of `XDG_CACHE_HOME`) suffixed with `huggingface`.
* `-h`, `--help` (`bool`) -- Show a help message and exit
## accelerate launch
**Command**:
`accelerate launch` or `accelerate-launch`
Launches a specified script on a distributed system with the right parameters.
**Usage**:
```bash
accelerate launch [arguments] {training_script} --{training_script-argument-1} --{training_script-argument-2} ...
```
**Positional Arguments**:
- `{training_script}` -- The full path to the script to be launched in parallel
- `--{training_script-argument-1}` -- Arguments of the training script
**Optional Arguments**:
* `-h`, `--help` (`bool`) -- Show a help message and exit
* `--config_file CONFIG_FILE` (`str`)-- The config file to use for the default values in the launching script.
* `--cpu` (`bool`) -- Whether or not to force the training on the CPU.
* `--mixed_precision {no,fp16,bf16}` (`str`) -- Whether or not to use mixed precision training. Choose between FP16 and BF16 (bfloat16) training. BF16 training is only supported on
Nvidia Ampere GPUs and PyTorch 1.10 or later.
* `--multi_gpu` (`bool`, defaults to `False`) -- Whether or not this should launch a distributed GPU training.
* `-m`, `--module` (`bool`) -- Change each process to interpret the launch script as a Python module, executing with the same behavior as 'python -m'.
* `--no_python` (`bool`) -- Skip prepending the training script with 'python' - just execute it directly. Useful when the script is not a Python script.
The rest of these arguments are configured through `accelerate config` and are read in from the specified `--config_file` (or default configuration) for their
values. They can also be passed in manually.
**Machine Configuration Arguments**:
The following arguments are useful for customization of worker machines
* `--machine_rank MACHINE_RANK` (`int`) -- The rank of the machine on which this script is launched.
* `--num_machines NUM_MACHINES` (`int`) -- The total number of machines used in this training.
* `--num_processes NUM_PROCESSES` (`int`) -- The total number of processes to be launched in parallel.
* `--gpu_ids` (`str`) -- What GPUs (by id) should be used for training on this machine as a comma-separated list
* `--main_process_ip MAIN_PROCESS_IP` (`str`) -- The IP address of the machine of rank 0.
* `--main_process_port MAIN_PROCESS_PORT` (`int`) -- The port to use to communicate with the machine of rank 0.
* `--num_cpu_threads_per_process NUM_CPU_THREADS_PER_PROCESS` (`int`) -- The number of CPU threads per process. Can be tuned for optimal performance.
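For example, an illustrative single-machine launch on two specific GPUs, using only the flags documented above, could look like:
```bash
accelerate launch --multi_gpu --num_processes 2 --gpu_ids 0,1 path_to_script.py --args_for_the_script
```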
**DeepSpeed Arguments**:
The following arguments are only useful when `use_deepspeed` is passed:
* `--use_deepspeed` (`bool`) -- Whether to use deepspeed.
* `--deepspeed_config_file DEEPSPEED_CONFIG_FILE` (`str`) -- DeepSpeed config file.
* `--zero_stage ZERO_STAGE` (`str`) -- DeepSpeed's ZeRO optimization stage
* `--offload_optimizer_device OFFLOAD_OPTIMIZER_DEVICE` (`str`) -- Decides where (none|cpu|nvme) to offload optimizer states
* `--offload_param_device OFFLOAD_PARAM_DEVICE` (`str`) -- Decides where (none|cpu|nvme) to offload parameters
* `--gradient_accumulation_steps GRADIENT_ACCUMULATION_STEPS` (`int`) -- Number of gradient_accumulation_steps used in your training script
* `--gradient_clipping GRADIENT_CLIPPING` (`float`) -- gradient clipping value used in your training script
The following arguments are related to using ZeRO Stage-3
* `--zero3_init_flag ZERO3_INIT_FLAG` (`bool`) -- Decides whether (true|false) to enable `deepspeed.zero.Init` for constructing massive models
* `--zero3_save_16bit_model ZERO3_SAVE_16BIT_MODEL` (`bool`) -- Decides whether (true|false) to save 16-bit model weights when using ZeRO Stage-3
**Fully Sharded Data Parallelism Arguments**:
The following arguments are only useful when `use_fsdp` is passed:
* `--use_fsdp` (`bool`) -- Whether to use FSDP.
* `--offload_params OFFLOAD_PARAMS` (`bool`) -- Decides whether (true|false) to offload parameters and gradients to CPU.
* `--min_num_params MIN_NUM_PARAMS` (`int`) -- FSDP's minimum number of parameters for Default Auto Wrapping.
* `--sharding_strategy SHARDING_STRATEGY` (`str`) -- FSDP's Sharding Strategy.
**TPU Arguments**:
The following arguments are only useful when `tpu` is passed:
* `--tpu` (`bool`) -- Whether or not this should launch a TPU training.
* `--main_training_function MAIN_TRAINING_FUNCTION` (`str`) -- The name of the main function to be executed in your script.
**AWS SageMaker Arguments**:
The following arguments are only useful when training in SageMaker
* `--aws_access_key_id AWS_ACCESS_KEY_ID` (`str`) -- The AWS_ACCESS_KEY_ID used to launch the Amazon SageMaker training job
* `--aws_secret_access_key AWS_SECRET_ACCESS_KEY` (`str`) -- The AWS_SECRET_ACCESS_KEY used to launch the Amazon SageMaker training job
## accelerate test
`accelerate test` or `accelerate-test`
Runs `accelerate/test_utils/test_script.py` to verify that 🤗 Accelerate has been properly configured on your system and that it runs.
**Usage**:
```bash
accelerate test [arguments]
```
**Optional Arguments**:
* `--config_file CONFIG_FILE` (`str`) -- The path to use to store the config file. Will default to a file named default_config.yaml in the cache location, which is the content
of the environment variable `HF_HOME` suffixed with 'accelerate', or if you don't have such an environment variable, your cache directory
(`~/.cache` or the content of `XDG_CACHE_HOME`) suffixed with `huggingface`.
* `-h`, `--help` (`bool`) -- Show a help message and exit

View File

@ -0,0 +1,25 @@
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Utilities for DeepSpeed
[[autodoc]] utils.DeepSpeedPlugin
[[autodoc]] utils.DummyOptim
[[autodoc]] utils.DummyScheduler
[[autodoc]] utils.DeepSpeedEngineWrapper
[[autodoc]] utils.DeepSpeedOptimizerWrapper
[[autodoc]] utils.DeepSpeedSchedulerWrapper
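As a brief sketch of how the plugin is typically handed to the [`Accelerator`] (the ZeRO stage and step count here are arbitrary):
```python
from accelerate import Accelerator, DeepSpeedPlugin

# Configure ZeRO stage 2 with 2 gradient accumulation steps
deepspeed_plugin = DeepSpeedPlugin(zero_stage=2, gradient_accumulation_steps=2)
accelerator = Accelerator(deepspeed_plugin=deepspeed_plugin)
```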

View File

@ -0,0 +1,19 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Launchers
Functions for launching training on distributed processes.
[[autodoc]] accelerate.notebook_launcher
[[autodoc]] accelerate.debug_launcher
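For instance, a minimal sketch of launching a training function from a notebook on two processes (the function body is up to you):
```python
from accelerate import notebook_launcher

def training_function():
    ...  # your 🤗 Accelerate training loop

notebook_launcher(training_function, num_processes=2)
```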

View File

@ -0,0 +1,24 @@
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Logging with Accelerate
Accelerate has its own logging utility to handle logging while in a distributed system.
To utilize this, replace cases of `logging` with `accelerate.logging`:
```diff
- import logging
+ from accelerate.logging import get_logger
- logger = logging.getLogger(__name__)
+ logger = get_logger(__name__)
```
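A small usage sketch (assuming the `main_process_only` keyword argument is available on the returned logger):
```python
from accelerate.logging import get_logger

logger = get_logger(__name__)

logger.info("Logged on every process")
logger.info("Logged once on the main process", main_process_only=True)
```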
[[autodoc]] logging.get_logger

View File

@ -0,0 +1,23 @@
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Stateful Classes
Below are variations of a [singleton class](https://en.wikipedia.org/wiki/Singleton_pattern) in the sense that all
instances share the same state, which is initialized on the first instantiation.
These classes are immutable and store information about certain configurations or
states.
[[autodoc]] state.AcceleratorState
[[autodoc]] state.GradientState
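As a short sketch, once the state has been initialized (for instance by creating an [`Accelerator`]) it can be read from anywhere:
```python
from accelerate import Accelerator
from accelerate.state import AcceleratorState

accelerator = Accelerator()  # initializes the shared state
state = AcceleratorState()  # any later instance sees the same values
print(state.distributed_type, state.num_processes, state.process_index)
```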

View File

@ -10,52 +10,24 @@ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express o
specific language governing permissions and limitations under the License.
-->
# Internals
# Wrapper classes for torch Dataloaders, Optimizers, and Schedulers
## Optimizer
The internal classes Accelerate uses to prepare objects for distributed training
when calling [`~Accelerator.prepare`].
[[autodoc]] optimizer.AcceleratedOptimizer
## DataLoader
The main work on your PyTorch `DataLoader` is done by the following function:
## Datasets and DataLoaders
[[autodoc]] data_loader.prepare_data_loader
### BatchSamplerShard
[[autodoc]] data_loader.DataLoaderShard
### BatchSamplerShard
[[autodoc]] data_loader.BatchSamplerShard
### IterableDatasetShard
[[autodoc]] data_loader.IterableDatasetShard
[[autodoc]] data_loader.DataLoaderShard
[[autodoc]] data_loader.DataLoaderDispatcher
## Distributed Config
## Optimizers
### AcceleratorState
[[autodoc]] optimizer.AcceleratedOptimizer
[[autodoc]] state.AcceleratorState
## Schedulers
### DistributedType
[[autodoc]] state.DistributedType
## Utilities
[[autodoc]] utils.extract_model_from_parallel
[[autodoc]] utils.gather
[[autodoc]] utils.send_to_device
[[autodoc]] utils.set_seed
[[autodoc]] utils.synchronize_rng_state
[[autodoc]] utils.synchronize_rng_states
[[autodoc]] utils.wait_for_everyone
[[autodoc]] scheduler.AcceleratedScheduler

View File

@ -0,0 +1,26 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Experiment Tracking
## The Base Tracker Class
[[autodoc]] tracking.GeneralTracker
## Integrated Trackers
[[autodoc]] tracking.TensorBoardTracker
- __init__
[[autodoc]] tracking.WandBTracker
- __init__
[[autodoc]] tracking.CometMLTracker
- __init__
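A sketch of the typical flow with one of the integrated trackers (the project name and logged values are placeholders):
```python
from accelerate import Accelerator

accelerator = Accelerator(log_with="tensorboard", logging_dir="runs")
accelerator.init_trackers("my_project", config={"learning_rate": 1e-3})

accelerator.log({"train_loss": 0.42}, step=1)

accelerator.end_training()
```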

View File

@ -0,0 +1,95 @@
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Helpful Utilities
Below are a variety of utility functions that 🤗 Accelerate provides, broken down by use-case.
## Data Classes
These are basic dataclasses used throughout 🤗 Accelerate and they can be passed in as parameters.
[[autodoc]] utils.DistributedType
[[autodoc]] utils.LoggerType
[[autodoc]] utils.PrecisionType
## Data Manipulation and Operations
These include data operations that mimic the same `torch` ops but can be used on distributed processes.
[[autodoc]] utils.broadcast
[[autodoc]] utils.concatenate
[[autodoc]] utils.gather
[[autodoc]] utils.pad_across_processes
[[autodoc]] utils.reduce
[[autodoc]] utils.send_to_device
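For instance, a short sketch of gathering a tensor from every process (the tensor itself is a placeholder):
```python
from accelerate.utils import gather

# Each process contributes its own `predictions` tensor;
# the result contains the values from all processes
all_predictions = gather(predictions)
```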
## Environment Checks
These functionalities check the state of the current working environment including information about the operating system itself, what it can support, and if particular dependencies are installed.
[[autodoc]] utils.is_bf16_available
[[autodoc]] utils.is_torch_version
[[autodoc]] utils.is_tpu_available
## Environment Configuration
[[autodoc]] utils.write_basic_config
When setting up 🤗 Accelerate for the first time, rather than running `accelerate config`, [`~utils.write_basic_config`] can be used as an alternative for quick configuration.
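A brief sketch (the mixed-precision choice is arbitrary):
```python
from accelerate.utils import write_basic_config

write_basic_config(mixed_precision="fp16")  # writes a default config file for this machine
```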
## Memory
[[autodoc]] utils.get_max_memory
[[autodoc]] utils.find_executable_batch_size
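A sketch of the decorator-based pattern for [`~utils.find_executable_batch_size`] (the starting batch size is arbitrary):
```python
from accelerate.utils import find_executable_batch_size

@find_executable_batch_size(starting_batch_size=128)
def train(batch_size):
    # If this raises an out-of-memory error, the decorator retries
    # the whole function with a smaller batch size
    ...

train()
```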
## Modeling
These utilities relate to interacting with PyTorch models
[[autodoc]] utils.extract_model_from_parallel
[[autodoc]] utils.get_max_layer_size
[[autodoc]] utils.offload_state_dict
## Parallel
These include general utilities that should be used when working in parallel.
[[autodoc]] utils.extract_model_from_parallel
[[autodoc]] utils.save
[[autodoc]] utils.wait_for_everyone
## Random
These utilities relate to setting and synchronizing all the random states.
[[autodoc]] utils.set_seed
[[autodoc]] utils.synchronize_rng_state
[[autodoc]] utils.synchronize_rng_states

View File

@ -12,13 +12,13 @@ specific language governing permissions and limitations under the License.
# Quick tour
Let's have a look at a look at 🤗 Accelerate main features and traps to avoid.
Let's have a look at the 🤗 Accelerate main features and traps to avoid.
## Main use
To use 🤗 Accelerate in your own script, you have to change four things:
1. Import the [`Accelerator`] main class instantiate one in an `accelerator` object:
1. Import the [`Accelerator`] main class and instantiate one in an `accelerator` object:
```python
from accelerate import Accelerator
@ -28,7 +28,7 @@ accelerator = Accelerator()
This should happen as early as possible in your training script as it will initialize everything necessary for
distributed training. You don't need to indicate the kind of environment you are in (just one machine with a GPU, one
match with several GPUs, several machines with multiple GPUs or a TPU), the library will detect this automatically.
machines with several GPUs, several machines with multiple GPUs or a TPU), the library will detect this automatically.
2. Remove the call `.to(device)` or `.cuda()` for your model and input data. The `accelerator` object
will handle this for you and place all those objects on the right device for you. If you know what you're doing, you
@ -40,28 +40,30 @@ To fully deactivate the automatic device placement, pass along `device_placement
<Tip warning={true}>
If you place your objects manually on the proper device, be careful to create your optimizer after putting your
model on `accelerator.device` or your training will fail on TPU.
If you place your objects manually on the proper device, be careful to create your optimizer after putting your
model on `accelerator.device` or your training will fail on TPU.
</Tip>
3. Pass all objects relevant to training (optimizer, model, training dataloader) to the
3. Pass all objects relevant to training (optimizer, model, training dataloader, learning rate scheduler) to the
[`~Accelerator.prepare`] method. This will make sure everything is ready for training.
```python
model, optimizer, train_dataloader = accelerator.prepare(model, optimizer, train_dataloader)
model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
model, optimizer, train_dataloader, lr_scheduler
)
```
In particular, your training dataloader will be sharded accross all GPUs/TPU cores available so that each one sees a
In particular, your training dataloader will be sharded across all GPUs/TPU cores available so that each one sees a
different portion of the training dataset. Also, the random states of all processes will be synchronized at the
beginning of each iteration through your dataloader, to make sure the data is shuffled the same way (if you decided to
use `shuffle=True` or any kind of random sampler).
<Tip>
The actual batch size for your training will be the number of devices used multiplied by the batch size you set in
your script: for instance training on 4 GPUs with a batch size of 16 set when creating the training dataloader will
train at an actual batch size of 64.
The actual batch size for your training will be the number of devices used multiplied by the batch size you set in
your script: for instance training on 4 GPUs with a batch size of 16 set when creating the training dataloader will
train at an actual batch size of 64.
</Tip>
@ -74,14 +76,21 @@ training loop.
<Tip warning={true}>
Your training dataloader may change length when going through this method: if you run on X GPUs, it will have its
length divided by X (since your actual batch size will be multiplied by X), unless you set
`split_batches=True`.
You should only pass the learning rate scheduler to [`~Accelerator.prepare`] when the scheduler needs to be stepped
at each optimizer step.
</Tip>
Any instruction using your training dataloader length (for instance if you need the number of total training steps
to create a learning rate scheduler) should go after the call to [`~Accelerator.prepare`].
<Tip warning={true}>
Your training dataloader may change length when going through this method: if you run on X GPUs, it will have its
length divided by X (since your actual batch size will be multiplied by X), unless you set
`split_batches=True`.
</Tip>
Any instruction using your training dataloader length (for instance if you want to log the number of total training
steps) should go after the call to [`~Accelerator.prepare`].
You can perfectly send your dataloader to [`~Accelerator.prepare`] on its own, but it's best to send the
model and optimizer to [`~Accelerator.prepare`] together.
@ -109,37 +118,47 @@ method:
validation_dataloader = accelerator.prepare(validation_dataloader)
```
Like for your training dataloader, it will mean that (should you run your script on multiple devices) each device will
As for your training dataloader, it will mean that (should you run your script on multiple devices) each device will
only see part of the evaluation data. This means you will need to group your predictions together. This is very easy to
do with the [`~Accelerator.gather`] method.
do with the [`~Accelerator.gather_for_metrics`] method.
```python
for inputs, targets in validation_dataloader:
predictions = model(inputs)
# Gather all predictions and targets
all_predictions = accelerator.gather(predictions)
all_targets = accelerator.gather(targets)
all_predictions, all_targets = accelerator.gather_for_metrics((predictions, targets))
# Example of use with a *Datasets.Metric*
metric.add_batch(all_predictions, all_targets)
```
<Tip warning={true}>
Like for the training dataloader, passing your validation dataloader through
[`~Accelerator.prepare`] may change its: if you run on X GPUs, it will have its length divided by X
(since your actual batch size will be multiplied by X), unless you set `split_batches=True`.
Any instruction using your training dataloader length (for instance if you need the number of total training steps
to create a learning rate scheduler) should go after the call to [`~Accelerator.prepare`].
Similar to the training dataloader, passing your validation dataloader through
[`~Accelerator.prepare`] may change it: if you run on X GPUs, it will have its length divided by X
(since your actual batch size will be multiplied by X), unless you set `split_batches=True`.
</Tip>
Any instruction using your training dataloader length (for instance if you need the number of total training steps
to create a learning rate scheduler) should go after the call to [`~Accelerator.prepare`].
Some data at the end of the dataset may be duplicated so the batch can be divided equally among all workers. As a result, metrics
should be calculated through the [`~Accelerator.gather_for_metrics`] method to automatically remove the duplicated data while gathering.
<Tip>
If for some reason you don't wish to have this done automatically, [`~Accelerator.gather`] can be used instead to gather
the data across all processes, and you can then remove the duplicates manually.
</Tip>
<Tip warning={true}>
The [`~Accelerator.gather`] and [`~Accelerator.gather_for_metrics`] methods require the tensors to be all the same size on each process. If
you have tensors of different sizes on each process (for instance when dynamically padding to the maximum length in
a batch), you should use the [`~Accelerator.pad_across_processes`] method to pad your tensor to the
biggest size across processes.
</Tip>
@ -149,8 +168,8 @@ You can use the regular commands to launch your distributed training (like `torc
PyTorch), they are fully compatible with 🤗 Accelerate. The only caveat here is that 🤗 Accelerate uses the environment
to determine all useful information, so `torch.distributed.launch` should be used with the flag `--use_env`.
🤗 Accelerate also provides a CLI tool that unifies all launchers, so you only have to remember one command. To use it,
just run:
```bash
accelerate config
@ -166,7 +185,7 @@ on your machine and reply to the questions asked. This will save a *default_conf
You can also specify with the flag `--config_file` the location of the file you want to save.
Once this is done, you can test everything is going well on your setup by running:
```bash
accelerate test
@ -193,7 +212,10 @@ If you stored the config file in a non-default location, you can indicate it to
accelerate launch --config_file path_to_config.yaml path_to_script.py --args_for_the_script
```
You can also override any of the arguments determined by your config file.
To see the complete list of parameters that you can pass in, run `accelerate launch -h`.
Check out the [Launch tutorial](basic_tutorials/launch) for more information about launching your scripts.
## Launching training from a notebook
@ -213,11 +235,14 @@ notebook_launcher(training_function)
<Tip warning={true}>
Your [`Accelerator`] object should only be defined inside the training function. This is because the
initialization should be done inside the launcher only.
</Tip>
Check out the [Notebook Launcher tutorial](basic_tutorials/notebook) for more information about training on TPUs.
## Training on TPU
If you want to launch your script on TPUs, there are a few caveats you should be aware of. Behind the scenes, the TPUs
@ -226,14 +251,14 @@ step). This is why your first step of training will always be very long as build
optimizations takes some time.
The good news is that this compilation will be cached so the second step and all the following will be much faster. The
bad news is that it only applies if all of your steps do exactly the same operations, which implies:
- having all tensors of the same length in all your batches
- having static code (i.e., not a for loop of length that could change from step to step)
Having any of the things above change between two steps will trigger a new compilation which will, once again, take a
lot of time. In practice, that means you must take special care to have all your tensors in your inputs of the same
shape (so no dynamic padding for instance if you are in an NLP problem) and should not use layers with for loops that
have different lengths depending on the inputs (such as an LSTM) or the training will be excruciatingly slow.
To introduce special behavior in your script for TPUs you can check the `distributed_type` of your
@ -248,15 +273,17 @@ else:
# go crazy and be dynamic
```
The [NLP example](https://github.com/huggingface/accelerate/blob/main/examples/nlp_example.py) shows an example in a
situation with dynamic padding.
One last thing to pay close attention to: if your model has tied weights (such as language models which tie the weights
of the embedding matrix with the weights of the decoder), moving this model to the TPU (either yourself or after you
passed your model to [`~Accelerator.prepare`]) will break the tying. You will need to retie the weights
after. You can find an example of this in the [run_clm_no_trainer](https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_clm.py) script in
the Transformers repository.
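As a rough sketch (assuming a Transformers model that exposes a `tie_weights` method), re-tying after preparation could look like this:
```python
from accelerate import DistributedType

model = accelerator.prepare(model)
if accelerator.distributed_type == DistributedType.TPU:
    # moving the model to the TPU broke the tying, so tie the embedding and decoder weights again
    model.tie_weights()
```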
Check out the [TPU tutorial](concept_guides/training_tpu) for more information about training on TPUs.
## Other caveats
@ -308,8 +335,8 @@ following line in your code:
accelerator.wait_for_everyone()
```
This instruction will block all the processes that arrive first until all the other processes have reached that
point (if you run your script on just one GPU or CPU, this won't do anything).
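For instance, a common pattern (sketched here with a hypothetical `save_results` helper) is to wait for all processes and then let only the main process write the final artifacts:
```python
accelerator.wait_for_everyone()
if accelerator.is_main_process:
    # every process has reached this point, so it is safe to write the results once
    save_results("outputs/")
```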
### Saving/loading a model
@ -329,7 +356,7 @@ unwrapped_model = accelerator.unwrap_model(model)
accelerator.save(unwrapped_model.state_dict(), filename)
```
If your script contains logic to load a checkpoint, we also recommend you load your weights in the unwrapped model
(this is only useful if you use the load function after making your model go through
[`~Accelerator.prepare`]). Here is an example:
@ -340,18 +367,31 @@ unwrapped_model.load_state_dict(torch.load(filename))
Note that since all the model parameters are references to tensors, this will load your weights inside `model`.
## Saving/loading entire states
When training your model, you may want to save the current state of the model, optimizer, random generators, and potentially LR schedulers to be restored in the _same script_.
You can use [`~Accelerator.save_state`] and [`~Accelerator.load_state`] respectively to do so, simply by passing in a save location.
If you have registered any other stateful items to be stored through [`~Accelerator.register_for_checkpointing`] they will also be saved and/or loaded.
<Tip>
Every object passed to [`~Accelerator.register_for_checkpointing`] must have `load_state_dict` and `state_dict` methods in order to be stored
</Tip>
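As a minimal sketch (assuming `model`, `optimizer` and `scheduler` have already been created and prepared), saving and later restoring the full state could look like this:
```python
accelerator.register_for_checkpointing(scheduler)

accelerator.save_state("checkpoints/step_1000")
# ... later, in the same script ...
accelerator.load_state("checkpoints/step_1000")
```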
### Gradient clipping
If you are using gradient clipping in your script, you should replace the calls to
`torch.nn.utils.clip_grad_norm_` or `torch.nn.utils.clip_grad_value_` with [`~Accelerator.clip_grad_norm_`]
and [`~Accelerator.clip_grad_value_`] respectively.
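For example, here is a minimal sketch of clipping by norm inside the training loop (assuming a `max_grad_norm` value defined elsewhere):
```python
accelerator.backward(loss)
# replaces torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
accelerator.clip_grad_norm_(model.parameters(), max_grad_norm)
optimizer.step()
optimizer.zero_grad()
```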
### Mixed Precision training
If you are running your training in Mixed Precision with 🤗 Accelerate, you will get the best result with your loss being
computed inside your model (like in Transformer models for instance). Every computation outside of the model will be
executed in full precision (which is generally what you want for loss computation, especially if it involves a
softmax). However you might want to put your loss computation inside the *accelerator.autocast* context manager:
```
@ -373,19 +413,35 @@ if not accelerator.optimizer_step_was_skipped:
lr_scheduler.step()
```
### Gradient Accumulation
To perform gradient accumulation, use [`~Accelerator.accumulate`] and specify a `gradient_accumulation_steps`.
This will also automatically ensure the gradients are synced or unsynced when training on multiple devices, check if the step should
actually be performed, and auto-scale the loss:
```python
accelerator = Accelerator(gradient_accumulation_steps=2)
model, optimizer, training_dataloader = accelerator.prepare(model, optimizer, training_dataloader)
for input, label in training_dataloader:
    with accelerator.accumulate(model):
        predictions = model(input)
        loss = loss_function(predictions, label)
        accelerator.backward(loss)
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()
```
### DeepSpeed
DeepSpeed support is experimental, so the underlying API will evolve in the near future and may have some slight
breaking changes. In particular, 🤗 Accelerate does not yet support a DeepSpeed config you have written yourself; this
will be added in a future version.
One main caveat for the DeepSpeed integration is that the DeepSpeed launcher always passes a `local_rank` variable to
the training script, so your training script should accept it (whether you launch training with the DeepSpeed launcher
or `accelerate launch`).
<Tip warning={true}>
The [`notebook_launcher`] does not support the DeepSpeed integration yet.
</Tip>
@ -395,7 +451,7 @@ Internally, the library works by first analyzing the environment in which the sc
kind of distributed setup is used, how many different processes there are and which one the current script is in. All
that information is stored in the [`~AcceleratorState`].
This class is initialized the first time you instantiate an [`~Accelerator`] as well as performing any
specific initialization your distributed setup needs. Its state is then uniquely shared through all instances of
[`~state.AcceleratorState`].
@ -423,23 +479,23 @@ The random number generator synchronization will by default synchronize:
- the main random number generator in PyTorch <=1.5.1
You can choose which random number generator(s) to synchronize with the `rng_types` argument of the main
[`Accelerator`]. In PyTorch >= 1.6, it is recommended to rely on a local `generator` to avoid
setting the same seed in the main random number generator in all processes.
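A minimal sketch of restricting synchronization to local generators (assuming the defaults are otherwise fine for your setup):
```python
from accelerate import Accelerator

accelerator = Accelerator(rng_types=["generator"])
```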
<Tip warning={true}>
Synchronization of the main torch (or CUDA or XLA) random number generator will affect any other potential random
artifacts you could have in your dataset (like random data augmentation) in the sense that all processes will get
the same random numbers from the torch random modules (so will apply the same random data augmentation if it's
controlled by torch).
</Tip>
<Tip>
The randomization part of your custom sampler, batch sampler or iterable dataset should be done using a local
`torch.Generator` object (in PyTorch >= 1.6), see the traditional `RandomSampler` as an example.
</Tip>
For more details about the internals, see the [Internals page](package_reference/torch_wrappers).
@ -0,0 +1,294 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Handling big models
When loading a pretrained model in PyTorch, the usual workflow looks like this:
```py
import torch
my_model = ModelClass(...)
state_dict = torch.load(checkpoint_file)
my_model.load_state_dict(state_dict)
```
In plain English, those steps are:
1. Create the model with randomly initialized weights
2. Load the model weights (in a dictionary usually called a state dict) from the disk
3. Load those weights inside the model
While this works very well for regularly sized models, this workflow has some clear limitations when we deal with a huge model: in step 1, we load a full version of the model in RAM, and spend some time randomly initializing the weights (which will be discarded in step 3). In step 2, we load another full version of the model in RAM, with the pretrained weights. If you're loading a model with 6 billion parameters, this means you will need 24GB of RAM for each copy of the model, so 48GB in total (half of that if you load the model in FP16).
<Tip warning={true}>
This API is quite new and still in its experimental stage. While we strive to provide a stable API, it's possible some small parts of the public API will change in the future.
</Tip>
## How the Process Works: A Quick Overview
<Youtube id="MWCSGj9jEAo" />
## How the Process Works: Working with Code
### Instantiating an empty model
The first tool 🤗 Accelerate introduces to help with big models is a context manager [`init_empty_weights`] that helps you initialize a model without using any RAM, so that step 1 can be done on models of any size. Here is how it works:
```py
from accelerate import init_empty_weights
with init_empty_weights():
    my_model = ModelClass(...)
```
For instance:
```py
from torch import nn

with init_empty_weights():
    model = nn.Sequential(*[nn.Linear(10000, 10000) for _ in range(1000)])
```
initializes an empty model with a bit more than 100B parameters. Behind the scenes, this relies on the meta device introduced in PyTorch 1.9. During the initialization under the context manager, each time a parameter is created, it is instantly moved to that device.
<Tip warning={true}>
You can't move a model initialized like this to the CPU or another device directly, since it doesn't have any data. It's also very likely that a forward pass with that empty model will fail, as not all operations are supported on the meta device.
</Tip>
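As a small sketch of what this means in practice, parameters created under the context manager live on the meta device and hold no data:
```python
from torch import nn
from accelerate import init_empty_weights

with init_empty_weights():
    layer = nn.Linear(10000, 10000)
print(layer.weight.device)  # device(type='meta'), so no RAM was allocated for the weights
```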
### Sharded checkpoints
It's possible your model is so big that even a single copy won't fit in RAM. That doesn't mean it can't be loaded: if you have one or several GPUs, that is more memory available to store your model. In this case, it's better if your checkpoint is split into several smaller files that we call checkpoint shards.
🤗 Accelerate will handle sharded checkpoints as long as you use the following format: your checkpoint should be in a folder, with several files containing the partial state dicts, and there should be an index in the JSON format that contains a dictionary mapping parameter names to the file containing their weights. For instance we could have a folder containing:
```bash
first_state_dict.bin
index.json
second_state_dict.bin
```
with index.json being the following file:
```
{
"linear1.weight": "first_state_dict.bin",
"linear1.bias": "first_state_dict.bin",
"linear2.weight": "second_state_dict.bin",
"linear2.bias": "second_state_dict.bin"
}
```
and `first_state_dict.bin` containing the weights for `"linear1.weight"` and `"linear1.bias"`, and `second_state_dict.bin` the ones for `"linear2.weight"` and `"linear2.bias"`.
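Purely as an illustration of this format (this is not an Accelerate API, and the file names are made up), such a folder could be produced like this:
```python
import json
import torch

# `state_dict` is the full state dict of the model, split here into two shards
shards = {
    "first_state_dict.bin": {k: v for k, v in state_dict.items() if k.startswith("linear1")},
    "second_state_dict.bin": {k: v for k, v in state_dict.items() if k.startswith("linear2")},
}
index = {}
for shard_file, shard in shards.items():
    torch.save(shard, shard_file)
    index.update({name: shard_file for name in shard})
with open("index.json", "w") as f:
    json.dump(index, f, indent=2)
```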
### Loading weights
The second tool 🤗 Accelerate introduces is a function [`load_checkpoint_and_dispatch`], that will allow you to load a checkpoint inside your empty model. This supports full checkpoints (a single file containing the whole state dict) as well as sharded checkpoints. It will also automatically dispatch those weights across the devices you have available (GPUs, CPU RAM), so if you are loading a sharded checkpoint, the maximum RAM usage will be the size of the biggest shard.
Here is how we can use this to load the [GPT-J-6B](https://huggingface.co/EleutherAI/gpt-j-6B) model. You clone the sharded version of this model with:
```bash
git clone https://huggingface.co/sgugger/sharded-gpt-j-6B
cd sharded-gpt-j-6B
git-lfs install
git pull
```
then we can initialize the model with
```py
from accelerate import init_empty_weights
from transformers import AutoConfig, AutoModelForCausalLM
checkpoint = "EleutherAI/gpt-j-6B"
config = AutoConfig.from_pretrained(checkpoint)
with init_empty_weights():
    model = AutoModelForCausalLM.from_config(config)
```
and load the checkpoint we just downloaded with:
```py
from accelerate import load_checkpoint_and_dispatch
model = load_checkpoint_and_dispatch(
    model, "sharded-gpt-j-6B", device_map="auto", no_split_module_classes=["GPTJBlock"]
)
```
By passing `device_map="auto"`, we tell 🤗 Accelerate to determine automatically where to put each layer of the model depending on the available resources:
- first we use the maximum space available on the GPU(s)
- if we still need space, we store the remaining weights on the CPU
- if there is not enough RAM, we store the remaining weights on the hard drive as memory-mapped tensors
`no_split_module_classes=["GPTJBlock"]` indicates that the modules that are `GPTJBlock` should not be split on different devices. You should set here all blocks that include a residual connection of some kind.
You can see the `device_map` that 🤗 Accelerate picked by accessing the `hf_device_map` attribute of your model:
```py
model.hf_device_map
```
```python out
{'transformer.wte': 0,
'transformer.drop': 0,
'transformer.h.0': 0,
'transformer.h.1': 0,
'transformer.h.2': 0,
'transformer.h.3': 0,
'transformer.h.4': 0,
'transformer.h.5': 0,
'transformer.h.6': 0,
'transformer.h.7': 0,
'transformer.h.8': 0,
'transformer.h.9': 0,
'transformer.h.10': 0,
'transformer.h.11': 0,
'transformer.h.12': 0,
'transformer.h.13': 0,
'transformer.h.14': 0,
'transformer.h.15': 0,
'transformer.h.16': 0,
'transformer.h.17': 0,
'transformer.h.18': 0,
'transformer.h.19': 0,
'transformer.h.20': 0,
'transformer.h.21': 0,
'transformer.h.22': 0,
'transformer.h.23': 0,
'transformer.h.24': 1,
'transformer.h.25': 1,
'transformer.h.26': 1,
'transformer.h.27': 1,
'transformer.ln_f': 1,
'lm_head': 1}
```
You can also design your `device_map` yourself, if you prefer to explicitly decide where each layer should be. In this case, the command above becomes:
```py
model = load_checkpoint_and_dispatch(model, "sharded-gpt-j-6B", device_map=my_device_map)
```
### Run the model
Now that we have done this, our model lies across several devices, and maybe the hard drive. But it can still be used as a regular PyTorch model:
```py
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
inputs = tokenizer("Hello, my name is", return_tensors="pt")
inputs = inputs.to(0)
output = model.generate(inputs["input_ids"])
tokenizer.decode(output[0].tolist())
```
Behind the scenes, 🤗 Accelerate added hooks to the model, so that:
- at each layer, the inputs are put on the right device (so even if your model is spread across several GPUs, it works)
- for the weights offloaded on the CPU, they are put on a GPU just before the forward pass, and cleaned up just after
- for the weights offloaded on the hard drive, they are loaded in RAM then put on a GPU just before the forward pass, and cleaned up just after
This way, your model can run for inference even if it doesn't fit on one of the GPUs or the CPU RAM!
<Tip warning={true}>
This only supports inference of your model, not training. Most of the computation happens behind `torch.no_grad()` context managers to avoid spending some GPU memory with intermediate activations.
</Tip>
### Designing a device map
You can let 🤗 Accelerate handle the device map computation by setting `device_map` to one of the supported options (`"auto"`, `"balanced"`, `"balanced_low_0"`, `"sequential"`) or create one yourself, if you want more control over where each layer should go.
<Tip>
You can derive all sizes of the model (and thus compute a `device_map`) on a model that is on the meta device.
</Tip>
All the options will produce the same result when you don't have enough GPU memory to accommodate the whole model (which is to fit everything that can on the GPU, then offload weights on the CPU or even on the disk if there is not enough RAM).
When you have more GPU memory available than the model size, here is the difference between each option:
- `"auto"` and `"balanced"` evenly split the model on all available GPUs, making it possible for you to use a batch size greater than 1.
- `"balanced_low_0"` evenly splits the model on all GPUs except the first one, and only puts on GPU 0 what does not fit on the others. This option is great when you need to use GPU 0 for some processing of the outputs, like when using the `generate` function for Transformers models
- `"sequential"` will fit what it can on GPU 0, then move on GPU 1 and so forth (so won't use the last GPUs if it doesn't need to).
<Tip>
The options `"auto"` and `"balanced"` produce the same results for now, but the behavior of `"auto"` might change in the future if we find a strategy that makes more sense, while `"balanced"` will stay stable.
</Tip>
First note that you can limit the memory used on each GPU by using the `max_memory` argument (available in [`infer_auto_device_map`] and in all functions using it). When setting `max_memory`, you should pass along a dictionary containing the GPU identifiers (for instance `0`, `1` etc.) and the `"cpu"` key for the maximum RAM you want used for CPU offload. The values can either be an integer (in bytes) or a string representing a number with its unit, such as `"10GiB"` or `"10GB"`.
Here is an example where we don't want to use more than 10GiB on each of two GPUs and no more than 30GiB of CPU RAM for the model weights:
```python
from accelerate import infer_auto_device_map
device_map = infer_auto_device_map(my_model, max_memory={0: "10GiB", 1: "10GiB", "cpu": "30GiB"})
```
<Tip warning={true}>
When a first allocation happens in PyTorch, it loads CUDA kernels which take about 1-2GB of memory depending on the GPU. Therefore you always have less usable memory than the actual size of the GPU. To see how much memory is actually used do `torch.ones(1).cuda()` and look at the memory usage.
Therefore, when you create memory maps with `max_memory`, make sure to adjust the available memory accordingly to avoid out-of-memory errors.
</Tip>
Additionally, if you do some additional operations with your outputs without placing them back on the CPU (for instance inside the `generate` method of Transformers) and if you placed your inputs on a GPU, that GPU will consume more memory than the others (🤗 Accelerate always places the output back on the device of the input). Therefore, if you would like to optimize the maximum batch size and you have many GPUs, give the first GPU less memory. For example, with BLOOM-176B on an 8x80GB A100 setup, a close-to-ideal map is:
```python
max_memory = {0: "30GIB", 1: "46GIB", 2: "46GIB", 3: "46GIB", 4: "46GIB", 5: "46GIB", 6: "46GIB", 7: "46GIB"}
```
as you can see we gave the remaining 7 GPUs ~50% more memory than GPU 0.
If you opt to fully design the `device_map` yourself, it should be a dictionary with keys being module names of your model and values being a valid device identifier (for instance an integer for the GPUs) or `"cpu"` for CPU offload, `"disk"` for disk offload. The keys need to cover the whole model; you can then define your device map as you wish: for instance if your model has two blocks (let's say `block1` and `block2`) which each contain three linear layers (let's say `linear1`, `linear2` and `linear3`), a valid device map can be:
```python
device_map = {"block1": 0, "block2": 1}
```
another one that is valid could be:
```python
device_map = {"block1": 0, "block2.linear1": 0, "block2.linear2": 1, "block2.linear3": 1}
```
On the other hand, this one is not valid as it does not cover every parameter of the model:
```python
device_map = {"block1": 0, "block2.linear1": 1, "block2.linear2": 1}
```
<Tip>
To be the most efficient, make sure your device map puts the parameters on the GPUs in a sequential manner (e.g. don't put one of the first weights on GPU 0, then weights on GPU 1 and the last weight back to GPU 0) to avoid making many transfers of data between the GPUs.
</Tip>
## Limits and further development
We are aware of the current limitations in the API:
- While this could theoretically work on just one CPU with potential disk offload, you need at least one GPU to run this API. This will be fixed in further development.
- [`infer_auto_device_map`] (or `device_map="auto"` in [`load_checkpoint_and_dispatch`]) tries to maximize GPU and CPU RAM it sees available when you execute it. While PyTorch is very good at managing GPU RAM efficiently (and giving it back when not needed), it's not entirely true with Python and CPU RAM. Therefore, an automatically computed device map might be too intense on the CPU. Move a few modules to the disk device if you get crashes due to lack of RAM.
- [`infer_auto_device_map`] (or `device_map="auto"` in [`load_checkpoint_and_dispatch`]) attributes devices sequentially (to avoid moving things back and forth) so if your first layer is bigger than the size of the GPU you have, it will end up with everything on the CPU/Disk.
- [`load_checkpoint_and_dispatch`] and [`load_checkpoint_in_model`] do not perform any check on the correctness of your state dict compared to your model at the moment (this will be fixed in a future version), so you may get some weird errors if trying to load a checkpoint with mismatched or missing keys.
- The model parallelism used when your model is split on several GPUs is naive and not optimized, meaning that only one GPU works at a given time while the others sit idle.
- When weights are offloaded on the CPU/hard drive, there is no pre-fetching (yet, we will work on this for future versions) which means the weights are put on the GPU when they are needed and not before.
- Hard-drive offloading might be very slow if the hardware you run on does not have fast communication between disk and CPU (like NVMes).
@ -0,0 +1,60 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Checkpointing
When training a PyTorch model with 🤗 Accelerate, you may often want to save and continue a state of training. Doing so requires
saving and loading the model, optimizer, RNG generators, and the GradScaler. Inside 🤗 Accelerate are two convenience functions to achieve this quickly:
- Use [`~Accelerator.save_state`] for saving everything mentioned above to a folder location
- Use [`~Accelerator.load_state`] for loading everything stored from an earlier `save_state`
It should be noted that the expectation is that those states come from the same training script; they should not come from two separate scripts.
- By using [`~Accelerator.register_for_checkpointing`], you can register custom objects to be automatically stored or loaded from the two prior functions,
so long as the object has a `state_dict` **and** a `load_state_dict` functionality. This could include objects such as a learning rate scheduler.
Below is a brief example using checkpointing to save and reload a state during training:
```python
from accelerate import Accelerator
import torch
accelerator = Accelerator()
my_scheduler = torch.optim.lr_scheduler.StepLR(my_optimizer, step_size=1, gamma=0.99)
my_model, my_optimizer, my_training_dataloader = accelerator.prepare(my_model, my_optimizer, my_training_dataloader)
# Register the LR scheduler
accelerator.register_for_checkpointing(my_scheduler)
# Save the starting state
accelerator.save_state("my/save/path")
device = accelerator.device
my_model.to(device)
# Perform training
for epoch in range(num_epochs):
    for batch in my_training_dataloader:
        my_optimizer.zero_grad()
        inputs, targets = batch
        inputs = inputs.to(device)
        targets = targets.to(device)
        outputs = my_model(inputs)
        loss = my_loss_function(outputs, targets)
        accelerator.backward(loss)
        my_optimizer.step()
        my_scheduler.step()
# Restore previous state
accelerator.load_state("my/save/path")
```
@ -0,0 +1,494 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# DeepSpeed
[DeepSpeed](https://github.com/microsoft/DeepSpeed) implements everything described in the [ZeRO paper](https://arxiv.org/abs/1910.02054). Currently it provides full support for:
1. Optimizer state partitioning (ZeRO stage 1)
2. Gradient partitioning (ZeRO stage 2)
3. Parameter partitioning (ZeRO stage 3)
4. Custom mixed precision training handling
5. A range of fast CUDA-extension-based optimizers
6. ZeRO-Offload to CPU and Disk/NVMe
ZeRO-Offload has its own dedicated paper: [ZeRO-Offload: Democratizing Billion-Scale Model Training](https://arxiv.org/abs/2101.06840). And NVMe-support is described in the paper [ZeRO-Infinity: Breaking the GPU
Memory Wall for Extreme Scale Deep Learning](https://arxiv.org/abs/2104.07857).
DeepSpeed ZeRO-2 is primarily used only for training, as its features are of no use to inference.
DeepSpeed ZeRO-3 can be used for inference as well, since it allows huge models to be loaded on multiple GPUs, which
won't be possible on a single GPU.
🤗 Accelerate integrates [DeepSpeed](https://github.com/microsoft/DeepSpeed) via 2 options:
1. Integration of the DeepSpeed features via `deepspeed config file` specification in `accelerate config`. You just supply your custom config file or use our template. Most of
this document is focused on this feature. This supports all the core features of DeepSpeed and gives the user a lot of flexibility.
The user may have to change a few lines of code depending on the config.
2. Integration via `deepspeed_plugin`. This supports a subset of the DeepSpeed features and uses default options for the rest of the configuration.
The user need not change any code and this is good for those who are fine with most of the default settings of DeepSpeed.
## What is integrated?
Training:
1. DeepSpeed ZeRO training supports the full ZeRO stages 1, 2 and 3 as well as CPU/Disk offload of optimizer states, gradients and parameters.
Below is a short description of Data Parallelism using ZeRO - Zero Redundancy Optimizer along with diagram from this [blog post](https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/)
![ZeRO Data Parallelism](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-zero.png)
(Source: [link](https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/))
a. **Stage 1** : Shards optimizer states across data parallel workers/GPUs
b. **Stage 2** : Shards optimizer states + gradients across data parallel workers/GPUs
c. **Stage 3**: Shards optimizer states + gradients + model parameters across data parallel workers/GPUs
d. **Optimizer Offload**: Offloads the gradients + optimizer states to CPU/Disk building on top of ZERO Stage 2
e. **Param Offload**: Offloads the model parameters to CPU/Disk building on top of ZERO Stage 3
<u>Note</u>: With respect to Disk Offload, the disk should be an NVMe for decent speed, but it technically works on any disk
Inference:
1. DeepSpeed ZeRO Inference supports ZeRO stage 3 with ZeRO-Infinity. It uses the same ZeRO protocol as training, but
it doesn't use an optimizer and a lr scheduler and only stage 3 is relevant. For more details see:
[deepspeed-zero-inference](#deepspeed-zero-inference).
## How it works
**Pre-Requisites**: Install DeepSpeed version >=0.6.5. Please refer to the [DeepSpeed Installation details](https://github.com/microsoft/DeepSpeed#installation)
for more information.
We will first look at the easy-to-use integration via `accelerate config`,
and then at the more flexible and feature-rich `deepspeed config file` integration.
### Accelerate DeepSpeed Plugin
On your machine(s) just run:
```bash
accelerate config
```
and answer the questions asked. It will ask whether you want to use a config file for DeepSpeed to which you should answer no. Then answer the following questions to generate a basic DeepSpeed config.
This will generate a config file that will be used automatically to properly set the
default options when doing
```bash
accelerate launch my_script.py --args_to_my_script
```
For instance, here is how you would run the NLP example `examples/nlp_example.py` (from the root of the repo) with DeepSpeed Plugin:
**ZeRO Stage-2 DeepSpeed Plugin Example**
```bash
compute_environment: LOCAL_MACHINE
deepspeed_config:
  gradient_accumulation_steps: 1
  gradient_clipping: 1.0
  offload_optimizer_device: none
  offload_param_device: none
  zero3_init_flag: true
  zero_stage: 2
distributed_type: DEEPSPEED
fsdp_config: {}
machine_rank: 0
main_process_ip: null
main_process_port: null
main_training_function: main
mixed_precision: fp16
num_machines: 1
num_processes: 2
use_cpu: false
```
```bash
accelerate launch examples/nlp_example.py --mixed_precision fp16
```
**ZeRO Stage-3 with CPU Offload DeepSpeed Plugin Example**
```bash
compute_environment: LOCAL_MACHINE
deepspeed_config:
  gradient_accumulation_steps: 1
  gradient_clipping: 1.0
  offload_optimizer_device: cpu
  offload_param_device: cpu
  zero3_init_flag: true
  zero3_save_16bit_model: true
  zero_stage: 3
distributed_type: DEEPSPEED
fsdp_config: {}
machine_rank: 0
main_process_ip: null
main_process_port: null
main_training_function: main
mixed_precision: fp16
num_machines: 1
num_processes: 2
use_cpu: false
```
```bash
accelerate launch examples/nlp_example.py --mixed_precision fp16
```
Currently, `Accelerate` supports the following config through the CLI:
```bash
`zero_stage`: [0] Disabled, [1] optimizer state partitioning, [2] optimizer+gradient state partitioning and [3] optimizer+gradient+parameter partitioning
`gradient_accumulation_steps`: Number of training steps to accumulate gradients before averaging and applying them.
`gradient_clipping`: Enable gradient clipping with value.
`offload_optimizer_device`: [none] Disable optimizer offloading, [cpu] offload optimizer to CPU, [nvme] offload optimizer to NVMe SSD. Only applicable with ZeRO >= Stage-2.
`offload_param_device`: [none] Disable parameter offloading, [cpu] offload parameters to CPU, [nvme] offload parameters to NVMe SSD. Only applicable with ZeRO Stage-3.
`zero3_init_flag`: Decides whether to enable `deepspeed.zero.Init` for constructing massive models. Only applicable with ZeRO Stage-3.
`zero3_save_16bit_model`: Decides whether to save 16-bit model weights when using ZeRO Stage-3.
`mixed_precision`: `no` for FP32 training, `fp16` for FP16 mixed-precision training and `bf16` for BF16 mixed-precision training.
```
To be able to tweak more options, you will need to use a DeepSpeed config file.
### DeepSpeed Config File
On your machine(s) just run:
```bash
accelerate config
```
and answer the questions asked. It will ask whether you want to use a config file for deepspeed to which you answer yes
and provide the path to the deepspeed config file.
This will generate a config file that will be used automatically to properly set the
default options when doing
```bash
accelerate launch my_script.py --args_to_my_script
```
For instance, here is how you would run the NLP example `examples/by_feature/deepspeed_with_config_support.py` (from the root of the repo) with DeepSpeed Config File:
**ZeRO Stage-2 DeepSpeed Config File Example**
```bash
compute_environment: LOCAL_MACHINE
deepspeed_config:
  deepspeed_config_file: /home/ubuntu/accelerate/examples/configs/deepspeed_config_templates/zero_stage2_config.json
  zero3_init_flag: true
distributed_type: DEEPSPEED
fsdp_config: {}
machine_rank: 0
main_process_ip: null
main_process_port: null
main_training_function: main
mixed_precision: fp16
num_machines: 1
num_processes: 2
use_cpu: false
```
with the contents of `zero_stage2_config.json` being:
```json
{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"weight_decay": "auto",
"torch_adam": true,
"adam_w_mode": true
}
},
"scheduler": {
"type": "WarmupDecayLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto",
"total_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 2,
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": "auto",
"contiguous_gradients": true
},
"gradient_accumulation_steps": 1,
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
```bash
accelerate launch examples/by_feature/deepspeed_with_config_support.py \
--config_name "gpt2-large" \
--tokenizer_name "gpt2-large" \
--dataset_name "wikitext" \
--dataset_config_name "wikitext-2-raw-v1" \
--block_size 128 \
--output_dir "./clm/clm_deepspeed_stage2_accelerate" \
--learning_rate 5e-4 \
--per_device_train_batch_size 24 \
--per_device_eval_batch_size 24 \
--num_train_epochs 3 \
--with_tracking \
--report_to "wandb"\
```
**ZeRO Stage-3 with CPU offload DeepSpeed Config File Example**
```bash
compute_environment: LOCAL_MACHINE
deepspeed_config:
  deepspeed_config_file: /home/ubuntu/accelerate/examples/configs/deepspeed_config_templates/zero_stage3_offload_config.json
  zero3_init_flag: true
distributed_type: DEEPSPEED
fsdp_config: {}
machine_rank: 0
main_process_ip: null
main_process_port: null
main_training_function: main
mixed_precision: fp16
num_machines: 1
num_processes: 2
use_cpu: false
```
with the contents of `zero_stage3_offload_config.json` being:
```json
{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupDecayLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto",
"total_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"sub_group_size": 1e9,
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": "auto"
},
"gradient_accumulation_steps": 1,
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
```bash
accelerate launch examples/by_feature/deepspeed_with_config_support.py \
--config_name "gpt2-large" \
--tokenizer_name "gpt2-large" \
--dataset_name "wikitext" \
--dataset_config_name "wikitext-2-raw-v1" \
--block_size 128 \
--output_dir "./clm/clm_deepspeed_stage3_offload_accelerate" \
--learning_rate 5e-4 \
--per_device_train_batch_size 32 \
--per_device_eval_batch_size 32 \
--num_train_epochs 3 \
--with_tracking \
--report_to "wandb"\
```
**Important code changes when using DeepSpeed Config File**
1. DeepSpeed Optimizers and Schedulers. For more information on these,
see the [DeepSpeed Optimizers](https://deepspeed.readthedocs.io/en/latest/optimizers.html) and [DeepSpeed Schedulers](https://deepspeed.readthedocs.io/en/latest/schedulers.html) documentation.
We will look at the changes needed in the code when using these.
a. DS Optim + DS Scheduler: The case when both `optimizer` and `scheduler` keys are present in the DeepSpeed config file.
In this situation, those will be used and the user has to use `accelerate.utils.DummyOptim` and `accelerate.utils.DummyScheduler` to replace the PyTorch/Custom optimizers and schedulers in their code.
Below is the snippet from `examples/by_feature/deepspeed_with_config_support.py` showing this:
```python
# Creates Dummy Optimizer if `optimizer` was specified in the config file else creates Adam Optimizer
optimizer_cls = (
    torch.optim.AdamW
    if accelerator.state.deepspeed_plugin is None
    or "optimizer" not in accelerator.state.deepspeed_plugin.deepspeed_config
    else DummyOptim
)
optimizer = optimizer_cls(optimizer_grouped_parameters, lr=args.learning_rate)
# Creates Dummy Scheduler if `scheduler` was specified in the config file else creates `args.lr_scheduler_type` Scheduler
if (
    accelerator.state.deepspeed_plugin is None
    or "scheduler" not in accelerator.state.deepspeed_plugin.deepspeed_config
):
    lr_scheduler = get_scheduler(
        name=args.lr_scheduler_type,
        optimizer=optimizer,
        num_warmup_steps=args.num_warmup_steps,
        num_training_steps=args.max_train_steps,
    )
else:
    lr_scheduler = DummyScheduler(
        optimizer, total_num_steps=args.max_train_steps, warmup_num_steps=args.num_warmup_steps
    )
```
b. Custom Optim + Custom Scheduler: The case when both `optimizer` and `scheduler` keys are absent in the DeepSpeed config file.
In this situation, no code changes are needed from the user and this is the case when using integration via DeepSpeed Plugin.
In the above example we can see that the code remains unchanged if the `optimizer` and `scheduler` keys are absent in the DeepSpeed config file.
c. Custom Optim + DS Scheduler: The case when only the `scheduler` key is present in the DeepSpeed config file.
In this situation, the user has to use `accelerate.utils.DummyScheduler` to replace the PyTorch/Custom scheduler in their code.
d. DS Optim + Custom Scheduler: The case when only the `optimizer` key is present in the DeepSpeed config file.
This will result in an error because you can only use a DS Scheduler when using a DS Optim.
2. Notice the `auto` values in the above example DeepSpeed config files. These are automatically handled by the `prepare` method
based on the model, dataloaders, dummy optimizer and dummy schedulers provided to the `prepare` method, as sketched below.
Only the `auto` fields specified in the above examples are handled by the `prepare` method and the rest have to be explicitly specified by the user.
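In practice this just means passing everything to the `prepare` method (variable names here are illustrative); the `auto` entries are then filled in from these objects:
```python
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
    model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
)
```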
## Saving and loading
1. Saving and loading of models is unchanged for ZeRO Stage-1 and Stage-2.
2. Under ZeRO Stage-3, `state_dict` contains just the placeholders since the model weights are partitioned across multiple GPUs.
ZeRO Stage-3 has 2 options:
a. Saving the entire 16bit model weights to directly load later on using `model.load_state_dict(torch.load("pytorch_model.bin"))`.
For this, either set `zero_optimization.stage3_gather_16bit_weights_on_model_save` to True in the DeepSpeed Config file or set
`zero3_save_16bit_model` to True in the DeepSpeed Plugin.
**Note that this option requires consolidation of the weights on one GPU; it can be slow and memory demanding, so only use this feature when needed.**
Below is the snippet from `examples/by_feature/deepspeed_with_config_support.py` showing this:
```python
unwrapped_model = accelerator.unwrap_model(model)
# New Code #
# Saves the whole/unpartitioned fp16 model when in ZeRO Stage-3 to the output directory if
# `stage3_gather_16bit_weights_on_model_save` is True in DeepSpeed Config file or
# `zero3_save_16bit_model` is True in DeepSpeed Plugin.
# For Zero Stages 1 and 2, models are saved as usual in the output directory.
# The model name saved is `pytorch_model.bin`
unwrapped_model.save_pretrained(
    args.output_dir,
    is_main_process=accelerator.is_main_process,
    save_function=accelerator.save,
    state_dict=accelerator.get_state_dict(model),
)
```
b. To get 32bit weights, first save the model using `model.save_checkpoint()`.
Below is the snippet from `examples/by_feature/deepspeed_with_config_support.py` showing this:
```python
success = model.save_checkpoint(PATH, ckpt_id, checkpoint_state_dict)
status_msg = "checkpointing: PATH={}, ckpt_id={}".format(PATH, ckpt_id)
if success:
    logging.info(f"Success {status_msg}")
else:
    logging.warning(f"Failure {status_msg}")
```
This will create ZeRO model and optimizer partitions along with the `zero_to_fp32.py` script in the checkpoint directory.
You can use this script to do offline consolidation.
It requires no configuration files or GPUs. Here is an example of its usage:
```bash
$ cd /path/to/checkpoint_dir
$ ./zero_to_fp32.py . pytorch_model.bin
Processing zero checkpoint at global_step1
Detected checkpoint of type zero stage 3, world_size: 2
Saving fp32 state dict to pytorch_model.bin (total_numel=60506624)
```
To get the 32bit model for saving/inference, you can do the following:
```python
from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint
unwrapped_model = accelerator.unwrap_model(model)
fp32_model = load_state_dict_from_zero_checkpoint(unwrapped_model, checkpoint_dir)
```
If you are only interested in the `state_dict`, you can do the following:
```python
from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint
state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir)
```
Note that all these functions require ~2x memory (general RAM) of the size of the final checkpoint.
## ZeRO Inference
DeepSpeed ZeRO Inference supports ZeRO stage 3 with ZeRO-Infinity.
It uses the same ZeRO protocol as training, but it doesn't use an optimizer and a lr scheduler and only stage 3 is relevant.
With the 🤗 Accelerate integration, you just need to prepare the model and dataloader as shown below:
```python
model, eval_dataloader = accelerator.prepare(model, eval_dataloader)
```
## A few caveats to be aware of
1. The current integration doesn't support Pipeline Parallelism of DeepSpeed.
2. The current integration doesn't support `mpu`, limiting the tensor parallelism which is supported in Megatron-LM.
3. The current integration doesn't support multiple models.
## DeepSpeed Resources
The documentation for the internals related to deepspeed can be found [here](../package_reference/deepspeed).
- [Project's github](https://github.com/microsoft/deepspeed)
- [Usage docs](https://www.deepspeed.ai/getting-started/)
- [API docs](https://deepspeed.readthedocs.io/en/latest/index.html)
- [Blog posts](https://www.microsoft.com/en-us/research/search/?q=deepspeed)
Papers:
- [ZeRO: Memory Optimizations Toward Training Trillion Parameter Models](https://arxiv.org/abs/1910.02054)
- [ZeRO-Offload: Democratizing Billion-Scale Model Training](https://arxiv.org/abs/2101.06840)
- [ZeRO-Infinity: Breaking the GPU Memory Wall for Extreme Scale Deep Learning](https://arxiv.org/abs/2104.07857)
Finally, please remember that 🤗 `Accelerate` only integrates DeepSpeed, so if you
have any problems or questions regarding DeepSpeed usage, please file an issue with [DeepSpeed GitHub](https://github.com/microsoft/DeepSpeed/issues).
@ -0,0 +1,125 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Fully Sharded Data Parallel
To accelerate training huge models on larger batch sizes, we can use a fully sharded data parallel model.
This type of data parallel paradigm enables fitting more data and larger models by sharding the optimizer states, gradients and parameters.
To read more about it and the benefits, check out the [Fully Sharded Data Parallel blog](https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/).
We have integrated the latest PyTorch's Fully Sharded Data Parallel (FSDP) training feature.
All you need to do is enable it through the config.
## How it works out of the box
On your machine(s) just run:
```bash
accelerate config
```
and answer the questions asked. This will generate a config file that will be used automatically to properly set the
default options when doing
```bash
accelerate launch my_script.py --args_to_my_script
```
For instance, here is how you would run the NLP example (from the root of the repo) with FSDP enabled:
```bash
compute_environment: LOCAL_MACHINE
deepspeed_config: {}
distributed_type: FSDP
downcast_bf16: 'no'
fsdp_config:
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
  fsdp_backward_prefetch_policy: BACKWARD_PRE
  fsdp_offload_params: false
  fsdp_sharding_strategy: 1
  fsdp_state_dict_type: FULL_STATE_DICT
  fsdp_transformer_layer_cls_to_wrap: GPT2Block
machine_rank: 0
main_process_ip: null
main_process_port: null
main_training_function: main
mixed_precision: 'no'
num_machines: 1
num_processes: 2
use_cpu: false
```
```bash
accelerate launch examples/nlp_example.py
```
Currently, `Accelerate` supports the following config through the CLI:
```bash
`Sharding Strategy`: [1] FULL_SHARD (shards optimizer states, gradients and parameters), [2] SHARD_GRAD_OP (shards optimizer states and gradients), [3] NO_SHARD
`Offload Params`: Decides whether to offload parameters and gradients to CPU
`Auto Wrap Policy`: [1] TRANSFORMER_BASED_WRAP, [2] SIZE_BASED_WRAP, [3] NO_WRAP
`Transformer Layer Class to Wrap`: When using `TRANSFORMER_BASED_WRAP`, the user specifies the transformer layer class name (case-sensitive) to wrap, e.g., `BertLayer`, `GPTJBlock`, `T5Block`...
`Min Num Params`: minimum number of parameters when using `SIZE_BASED_WRAP`
`Backward Prefetch`: [1] BACKWARD_PRE, [2] BACKWARD_POST, [3] NO_PREFETCH
`State Dict Type`: [1] FULL_STATE_DICT, [2] LOCAL_STATE_DICT, [3] SHARDED_STATE_DICT
```
## A few caveats to be aware of
- PyTorch FSDP auto wraps sub-modules, flattens the parameters and shards the parameters in place.
Due to this, any optimizer created before model wrapping gets broken and occupies more memory.
Hence, it is highly recommended and efficient to prepare the model before creating the optimizer.
`Accelerate` will automatically wrap the model and create an optimizer for you in the case of a single model, with a warning message.
> FSDP Warning: When using FSDP, it is efficient and recommended to call prepare for the model before creating the optimizer
However, below is the recommended way to prepare model and optimizer while using FSDP:
```diff
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", return_dict=True)
+ model = accelerator.prepare(model)
optimizer = torch.optim.AdamW(params=model.parameters(), lr=lr)
- model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
- model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
- )
+ optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
+ optimizer, train_dataloader, eval_dataloader, lr_scheduler
+ )
```
- In case of a single model, if you have created the optimizer with multiple parameter groups and called prepare with them together,
then the parameter groups will be lost and the following warning is displayed:
> FSDP Warning: When using FSDP, several parameter groups will be conflated into
> a single one due to nested module wrapping and parameter flattening.
This is because parameter groups created before wrapping will have no meaning post wrapping due to parameter flattening of nested FSDP modules into 1D arrays (which can consume many layers).
For instance, below are the named parameters of an FSDP model on GPU 0 (when using 2 GPUs, around 55M (110M/2) params end up in 1D arrays as this holds the 1st shard of the parameters).
Here, if one has applied no weight decay to [bias, LayerNorm.weight] using the named parameters of an unwrapped BERT model,
it can't be applied to the FSDP-wrapped model below, as there are no named parameters containing either of those strings and
the parameters of those layers are concatenated with the parameters of various other layers.
```
{
  '_fsdp_wrapped_module.flat_param': torch.Size([494209]),
  '_fsdp_wrapped_module._fpw_module.bert.embeddings.word_embeddings._fsdp_wrapped_module.flat_param': torch.Size([11720448]),
  '_fsdp_wrapped_module._fpw_module.bert.encoder._fsdp_wrapped_module.flat_param': torch.Size([42527232])
}
```
- In case of multiple models, it is necessary to prepare the models before creating optimizers or else it will throw an error.
Then pass the optimizers to the prepare call in the same order as the corresponding models, otherwise `accelerator.save_state()` and `accelerator.load_state()` will result in wrong/unexpected behaviour.
- This feature is incompatible with `--predict_with_generate` in the `run_translation.py` script of 🤗 `Transformers` library.
For more control, users can leverage the `FullyShardedDataParallelPlugin`. After creating an instance of this class, users can pass it to the Accelerator class instantiation.
For more information on these options, please refer to the PyTorch [FullyShardedDataParallel](https://github.com/pytorch/pytorch/blob/0df2e863fbd5993a7b9e652910792bd21a516ff3/torch/distributed/fsdp/fully_sharded_data_parallel.py#L236) code.
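A minimal sketch of this (keeping the default values for any option not shown):
```python
from accelerate import Accelerator, FullyShardedDataParallelPlugin

fsdp_plugin = FullyShardedDataParallelPlugin()
accelerator = Accelerator(fsdp_plugin=fsdp_plugin)
```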
@ -0,0 +1,130 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Performing gradient accumulation with 🤗 Accelerate
Gradient accumulation is a technique where you can train on bigger batch sizes than
your machine would normally be able to fit into memory. This is done by accumulating gradients over
several batches, and only stepping the optimizer after a certain number of batches have been performed.
While standard gradient accumulation code would technically work fine in a distributed setup, it is not the most efficient
way to do so and you may experience considerable slowdowns!
In this tutorial you will see how to quickly set up gradient accumulation and perform it with the utilities provided in 🤗 Accelerate,
which in total can amount to adding just one new line of code!
This example will use a very simplistic PyTorch training loop that performs gradient accumulation every two batches:
```python
device = "cuda"
model.to(device)
gradient_accumulation_steps = 2
for index, batch in enumerate(training_dataloader):
inputs, targets = batch
inputs = inputs.to(device)
targets = targets.to(device)
outputs = model(inputs)
loss = loss_function(outputs, targets)
loss = loss / gradient_accumulation_steps
loss.backward()
if (index + 1) % gradient_accumulation_steps == 0:
optimizer.step()
scheduler.step()
optimizer.zero_grad()
```
## Converting it to 🤗 Accelerate
First the code shown earlier will be converted to utilize 🤗 Accelerate without the special gradient accumulation helper:
```diff
+ from accelerate import Accelerator
+ accelerator = Accelerator()
+ model, optimizer, training_dataloader, scheduler = accelerator.prepare(
+ model, optimizer, training_dataloader, scheduler
+ )
for index, batch in enumerate(training_dataloader):
inputs, targets = batch
- inputs = inputs.to(device)
- targets = targets.to(device)
outputs = model(inputs)
loss = loss_function(outputs, targets)
loss = loss / gradient_accumulation_steps
+ accelerator.backward(loss)
if (index+1) % gradient_accumulation_steps == 0:
optimizer.step()
scheduler.step()
optimizer.zero_grad()
```
<Tip warning={true}>
In its current state, this code is not going to perform gradient accumulation efficiently due to a process called gradient synchronization. Read more about that in the [Concepts tutorial](concept_guides/gradient_synchronization)!
</Tip>
## Letting 🤗 Accelerate handle gradient accumulation
All that is left now is to let 🤗 Accelerate handle the gradient accumulation for us. To do so you should pass in a `gradient_accumulation_steps` parameter to [`Accelerator`], dictating the number
of steps to perform before each call to `step()` and how to automatically adjust the loss during the call to [`~Accelerator.backward`]:
```diff
from accelerate import Accelerator
- accelerator = Accelerator()
+ accelerator = Accelerator(gradient_accumulation_steps=2)
```
From here you can use the [`~Accelerator.accumulate`] context manager from inside your training loop to automatically perform the gradient accumulation for you!
You just wrap it around the entire training part of your code:
```diff
- for index, batch in enumerate(training_dataloader):
+ for batch in training_dataloader:
+ with accelerator.accumulate(model):
inputs, targets = batch
outputs = model(inputs)
```
You can remove all the special checks for the step number and the loss adjustment:
```diff
- loss = loss / gradient_accumulation_steps
accelerator.backward(loss)
- if (index+1) % gradient_accumulation_steps == 0:
optimizer.step()
scheduler.step()
optimizer.zero_grad()
```
As you can see, the [`Accelerator`] keeps track of the batch number you are on, and it automatically knows whether to step the prepared optimizer and how to adjust the loss.
## The finished code
Below is the finished implementation for performing gradient accumulation with 🤗 Accelerate
```python
for batch in training_dataloader:
with accelerator.accumulate(model):
inputs, targets = batch
outputs = model(inputs)
loss = loss_function(outputs, targets)
accelerator.backward(loss)
optimizer.step()
scheduler.step()
optimizer.zero_grad()
```
To learn more about what magic this wraps around, read the [Gradient Synchronization concept guide](/concept_guides/gradient_synchronization)

View File

@ -0,0 +1,55 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Memory Utilities
One of the most frustrating errors when it comes to running training scripts is hitting "CUDA Out-of-Memory":
the entire script needs to be restarted and progress is lost, when typically a developer would just want to
start their script and let it run.
`Accelerate` provides a utility heavily based on [toma](https://github.com/BlackHC/toma) to give this capability.
## find_executable_batch_size
This algorithm operates with exponential decay, halving the batch size after each failed run of your
training function. To use it, restructure your training function to include an inner function that is wrapped with this decorator,
and build your dataloaders inside it. At a minimum, this could look like 4 new lines of code.
> Note: The inner function *must* take in the batch size as the first parameter, but we do not pass one to it when called. The wrapper handles this for us.
It should also be noted that anything which will consume CUDA memory and is passed to the `accelerator` **must** be declared inside the inner function,
such as models and optimizers.
```diff
def training_function(args):
accelerator = Accelerator()
+ @find_executable_batch_size(starting_batch_size=args.batch_size)
+ def inner_training_loop(batch_size):
+ nonlocal accelerator # Ensure they can be used in our context
+ accelerator.free_memory() # Free all lingering references
model = get_model()
model.to(accelerator.device)
optimizer = get_optimizer()
train_dataloader, eval_dataloader = get_dataloaders(accelerator, batch_size)
lr_scheduler = get_scheduler(
optimizer,
num_training_steps=len(train_dataloader)*num_epochs
)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
)
train(model, optimizer, train_dataloader, lr_scheduler)
validate(model, eval_dataloader)
+ inner_training_loop()
```
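As a standalone toy sketch of the retry behaviour (not tied to any real training code), the decorator calls the inner function with the starting batch size and halves it whenever a CUDA out-of-memory error is raised:
```python
from accelerate.utils import find_executable_batch_size


@find_executable_batch_size(starting_batch_size=128)
def run(batch_size):
    print(f"Trying batch size {batch_size}")
    # Pretend that anything above 32 does not fit in memory
    if batch_size > 32:
        raise RuntimeError("CUDA out of memory.")
    return batch_size


run()  # Tries 128, then 64, then succeeds at 32
```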
To find out more, check the documentation [here](../package_reference/utilities#accelerate.find_executable_batch_size).

View File

@ -0,0 +1,82 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Accelerated PyTorch Training on Mac
With PyTorch v1.12 release, developers and researchers can take advantage of Apple silicon GPUs for significantly faster model training.
This unlocks the ability to perform machine learning workflows like prototyping and fine-tuning locally, right on Mac.
Apple's Metal Performance Shaders (MPS) as a backend for PyTorch enables this and can be used via the new `"mps"` device.
This will map computational graphs and primitives on the MPS Graph framework and tuned kernels provided by MPS.
For more information, please refer to the official documents [Introducing Accelerated PyTorch Training on Mac](https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/)
and [MPS BACKEND](https://pytorch.org/docs/stable/notes/mps.html).
### Benefits of Training and Inference using Apple Silicon Chips
1. Enables users to train larger networks or batch sizes locally
2. Reduces data retrieval latency and, thanks to the unified memory architecture, provides the GPU with direct access to the full memory store,
thereby improving end-to-end performance.
3. Reduces costs associated with cloud-based development or the need for additional local GPUs.
**Pre-requisites**: To install torch with MPS support,
please follow this nice Medium article [GPU-Acceleration Comes to PyTorch on M1 Macs](https://medium.com/towards-data-science/gpu-acceleration-comes-to-pytorch-on-m1-macs-195c399efcc1).
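Once installed, a quick sanity check (not part of the original steps) is to verify from Python that the MPS backend is actually usable:
```python
import torch

# True if PyTorch was built with MPS support and the current macOS/hardware can use it
print(torch.backends.mps.is_available())
# True if this PyTorch build was compiled with MPS enabled
print(torch.backends.mps.is_built())
```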
## How it works out of the box
On your machine(s) just run:
```bash
accelerate config
```
and answer the questions asked, specifically choose `MPS` for the query:
```
Which type of machine are you using?
```
This will generate a config file that will be used automatically to properly set
the default options when doing `accelerate launch`, such as the one shown below:
```bash
compute_environment: LOCAL_MACHINE
deepspeed_config: {}
distributed_type: MPS
downcast_bf16: 'no'
fsdp_config: {}
machine_rank: 0
main_process_ip: null
main_process_port: null
main_training_function: main
mixed_precision: 'no'
num_machines: 1
num_processes: 1
use_cpu: false
```
After this configuration has been made, here is how you run the CV example
(from the root of the repo) with MPS enabled:
```bash
accelerate launch ./examples/cv_example.py --data_dir images
```
## A few caveats to be aware of
1. We strongly recommend installing PyTorch >= 1.13 (the nightly version at the time of writing) on your macOS machine.
It has major fixes related to model correctness and performance improvements for transformer-based models.
Please refer to https://github.com/pytorch/pytorch/issues/82707 for more details.
2. Distributed setups `gloo` and `nccl` do not work with the `mps` device.
This means that currently only a single GPU of the `mps` device type can be used.
Finally, please remember that 🤗 `Accelerate` only integrates the MPS backend, so if you
have any problems or questions with regards to MPS backend usage, please file an issue with [PyTorch GitHub](https://github.com/pytorch/pytorch/issues).

View File

@ -23,7 +23,7 @@ make it easier than ever to train Hugging Face Transformer models in [Amazon Sag
Before you can run your 🤗 Accelerate scripts on Amazon SageMaker you need to sign up for an AWS account. If you do not
have an AWS account yet learn more [here](https://docs.aws.amazon.com/sagemaker/latest/dg/gs-set-up.html).
After you have your AWS Account you need to install the `sagemaker` sdk for 🤗 Accelerate with:
```bash
pip install "accelerate[sagemaker]" --upgrade
@ -31,7 +31,7 @@ pip install "accelerate[sagemaker]" --upgrade
🤗 Accelerate currently uses the 🤗 DLCs, with `transformers`, `datasets` and `tokenizers` pre-installed. 🤗
Accelerate is not in the DLC yet (will soon be added!) so to use it within Amazon SageMaker you need to create a
`requirements.txt` in the same directory where your training script is located and add it as a dependency:
```
accelerate
@ -43,7 +43,7 @@ You should also add any other dependencies you have to this `requirements.txt`.
### Configure 🤗 Accelerate
You can configure the launch configuration for Amazon SageMaker the same as you do for non SageMaker training jobs with
the 🤗 Accelerate CLI:
```bash
accelerate config
@ -54,7 +54,7 @@ accelerate config
<Tip>
🤗 Accelerate is not saving any of your credentials.
</Tip>
@ -62,7 +62,7 @@ accelerate config
The training script is very similar to a training script you might run outside of SageMaker, but to save your model
after training you need to specify either `/opt/ml/model` or use `os.environ["SM_MODEL_DIR"]` as your save
directory. After training, artifacts in this directory are uploaded to S3:
```diff
@ -72,14 +72,14 @@ directory. After training, artifacts in this directory are uploaded to S3.
<Tip warning={true}>
SageMaker doesn't support argparse actions. If you want to use, for example, boolean hyperparameters, you need to
specify type as bool in your script and provide an explicit True or False value for this hyperparameter. [[REF]](https://sagemaker.readthedocs.io/en/stable/frameworks/pytorch/using_pytorch.html#prepare-a-pytorch-training-script).
</Tip>
### Launch Training
You can launch your training with the 🤗 Accelerate CLI with:
```
accelerate launch path_to_script.py --args_to_the_script
@ -92,7 +92,7 @@ arguments needed by your training script as named arguments.
<Tip>
If you run one of the example scripts, don't forget to add `accelerator.save('/opt/ml/model')` to it.
</Tip>
@ -129,7 +129,26 @@ You can find your model data at: s3://your-bucket/accelerate-sagemaker-1-2021-04
### Distributed Training: Data Parallelism
*currently in development, will be supported soon.*
Set up the accelerate config by running `accelerate config` and answering the SageMaker questions.
To use SageMaker DDP, select it when asked
`What is the distributed mode? ([0] No distributed training, [1] data parallelism):`.
Example config below:
```yaml
base_job_name: accelerate-sagemaker-1
compute_environment: AMAZON_SAGEMAKER
distributed_type: DATA_PARALLEL
ec2_instance_type: ml.p3.16xlarge
iam_role_name: xxxxx
image_uri: null
mixed_precision: fp16
num_machines: 1
profile: xxxxx
py_version: py38
pytorch_version: 1.10.2
region: us-east-1
transformers_version: 4.17.0
use_cpu: false
```
### Distributed Training: Model Parallelism

View File

@ -0,0 +1,200 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Tracking
There are a large number of experiment tracking APIs available, however getting them all to work in a multi-processing environment can oftentimes be complex.
🤗 Accelerate provides a general tracking API that can be used to log useful items during your script through [`Accelerator.log`].
## Integrated Trackers
Currently `Accelerate` supports three trackers out-of-the-box:
- TensorBoard
- WandB
- CometML
To use any of them, pass in the selected type(s) to the `log_with` parameter in [`Accelerator`]:
```python
from accelerate import Accelerator
from accelerate.utils import LoggerType
accelerator = Accelerator(log_with="all") # For all available trackers in the environment
accelerator = Accelerator(log_with="wandb")
accelerator = Accelerator(log_with=["wandb", LoggerType.TENSORBOARD])
```
At the start of your experiment, [`Accelerator.init_trackers`] should be used to set up your project and potentially add any experiment hyperparameters to be logged:
```python
hps = {"num_iterations": 5, "learning_rate": 1e-2}
accelerator.init_trackers("my_project", config=hps)
```
When you are ready to log any data, [`Accelerator.log`] should be used.
A `step` can also be passed in to correlate the data with a particular step in the training loop.
```python
accelerator.log({"train_loss": 1.12, "valid_loss": 0.8}, step=1)
```
Once you've finished training, make sure to run [`Accelerator.end_training`] so that all the trackers can run their finish functionalities if they have any.
```python
accelerator.end_training()
```
A full example is below:
```python
from accelerate import Accelerator
accelerator = Accelerator(log_with="all")
config = {
"num_iterations": 5,
"learning_rate": 1e-2,
"loss_function": str(my_loss_function),
}
accelerator.init_trackers("example_project", config=config)
my_model, my_optimizer, my_training_dataloader = accelerator.prepare(my_model, my_optimizer, my_training_dataloader)
device = accelerator.device
my_model.to(device)
for iteration in range(config["num_iterations"]):
for step, batch in enumerate(my_training_dataloader):
my_optimizer.zero_grad()
inputs, targets = batch
inputs = inputs.to(device)
targets = targets.to(device)
outputs = my_model(inputs)
loss = my_loss_function(outputs, targets)
accelerator.backward(loss)
my_optimizer.step()
accelerator.log({"training_loss": loss}, step=step)
accelerator.end_training()
```
## Implementing Custom Trackers
To implement a new tracker to be used in `Accelerator`, a new one can be made through implementing the [`GeneralTracker`] class.
Every tracker must implement three functions and have three properties:
- `__init__`:
- Should store a `run_name` and initialize the tracker API of the integrated library.
- If a tracker stores their data locally (such as TensorBoard), a `logging_dir` parameter can be added.
- `store_init_configuration`:
- Should take in a `values` dictionary and store them as a one-time experiment configuration
- `log`:
- Should take in a `values` dictionary and a `step`, and should log them to the run
- `name` (`str`):
- A unique string name for the tracker, such as `"wandb"` for the wandb tracker.
- This will be used for interacting with this tracker specifically
- `requires_logging_directory` (`bool`):
- Whether a `logging_dir` is needed for this particular tracker and if it uses one.
- `tracker`:
- This should be implemented as a `@property` function
- Should return the internal tracking mechanism the library uses, such as the `run` object for `wandb`.
A brief example can be seen below with an integration with Weights and Biases, containing only the relevant information:
```python
from accelerate.tracking import GeneralTracker
from typing import Optional
import wandb
class MyCustomTracker(GeneralTracker):
name = "wandb"
requires_logging_directory = False
def __init__(self, run_name: str):
self.run_name = run_name
self.run = wandb.init(project=self.run_name)
@property
def tracker(self):
return self.run
def store_init_configuration(self, values: dict):
wandb.config.update(values)
def log(self, values: dict, step: Optional[int] = None):
wandb.log(values, step=step)
```
When you are ready to build your `Accelerator` object, pass in an **instance** of your tracker through the `log_with` parameter of [`Accelerator`] to have it automatically
be used with the API:
```python
tracker = MyCustomTracker("some_run_name")
accelerator = Accelerator(log_with=tracker)
```
These also can be mixed with existing trackers, including with `"all"`:
```python
tracker = MyCustomTracker("some_run_name")
accelerator = Accelerator(log_with=[tracker, "all"])
```
## Accessing the internal tracker
If some custom interactions with a tracker might be wanted directly, you can quickly access one using the
[`Accelerator.get_tracker`] method. Just pass in the string corresponding to a tracker's `.name` attribute
and it will return that tracker on the main process.
This example shows doing so with wandb:
```python
wandb_tracker = accelerator.get_tracker("wandb")
```
From there you can interact with `wandb`'s `run` object (exposed through the tracker's `tracker` property) like normal:
<Tip warning={true}>
Make sure to only interact with trackers on the main process!
</Tip>
```python
if accelerator.is_main_process:
wandb_tracker.tracker.log_artifact(some_artifact_to_log)
```
## When a wrapper cannot work
If a library has an API that does not follow a strict `.log` call with an overall dictionary, such as Neptune.AI, logging can be done manually under an `if accelerator.is_main_process` statement:
```diff
from accelerate import Accelerator
+ import neptune.new as neptune
accelerator = Accelerator()
+ run = neptune.init(...)
my_model, my_optimizer, my_training_dataloader = accelerator.prepare(my_model, my_optimizer, my_training_dataloader)
device = accelerator.device
my_model.to(device)
for iteration in range(config["num_iterations"]):
for batch in my_training_dataloader:
my_optimizer.zero_grad()
inputs, targets = batch
inputs = inputs.to(device)
targets = targets.to(device)
outputs = my_model(inputs)
loss = my_loss_function(outputs, targets)
total_loss += loss
accelerator.backward(loss)
my_optimizer.step()
+ if accelerator.is_main_process:
+ run["logs/training/batch/loss"].log(loss)
```

View File

@ -0,0 +1,107 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Example Zoo
Below is a non-exhaustive list of tutorials and scripts showcasing Accelerate.
## Official Accelerate Examples:
### Basic Examples
These examples showcase the base features of Accelerate and are a great starting point
- [Barebones NLP example](https://github.com/huggingface/accelerate/blob/main/examples/nlp_example.py)
- [Barebones computer vision example](https://github.com/huggingface/accelerate/blob/main/examples/cv_example.py)
### Feature Specific Examples
These examples showcase specific features that the Accelerate framework offers
- [Automatic memory-aware gradient accumulation](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/automatic_gradient_accumulation.py)
- [Checkpointing states](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/checkpointing.py)
- [Cross validation](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/cross_validation.py)
- [DeepSpeed](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/deepspeed_with_config_support.py)
- [Fully Sharded Data Parallelism](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/fsdp_with_peak_mem_tracking.py)
- [Gradient accumulation](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/gradient_accumulation.py)
- [Memory-aware batch size finder](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/memory.py)
- [Metric Computation](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/multi_process_metrics.py)
- [Using Trackers](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/tracking.py)
### Full Examples
These examples showcase, all at once, every Accelerate feature shown in "Feature Specific Examples"
- [Complete NLP example](https://github.com/huggingface/accelerate/blob/main/examples/complete_nlp_example.py)
- [Complete computer vision example](https://github.com/huggingface/accelerate/blob/main/examples/complete_cv_example.py)
- [Causal language model fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm_no_trainer.py)
- [Masked language model fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm_no_trainer.py)
- [Speech pretraining example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py)
- [Translation fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/translation/run_translation_no_trainer.py)
- [Text classification fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue_no_trainer.py)
- [Semantic segmentation fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/semantic-segmentation/run_semantic_segmentation_no_trainer.py)
- [Question answering fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/question-answering/run_qa_no_trainer.py)
- [Beam search question answering fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/question-answering/run_qa_beam_search_no_trainer.py)
- [Multiple choice question answering fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/multiple-choice/run_swag_no_trainer.py)
- [Named entity recognition fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/token-classification/run_ner_no_trainer.py)
- [Image classification fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-classification/run_image_classification_no_trainer.py)
- [Summarization fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/summarization/run_summarization_no_trainer.py)
## Integration Examples
These are tutorials from libraries that integrate with 🤗 Accelerate:
### Catalyst
- [Distributed training tutorial with Catalyst](https://catalyst-team.github.io/catalyst/tutorials/ddp.html)
### DALLE2-pytorch
- [Fine-tuning DALLE2](https://github.com/lucidrains/DALLE2-pytorch#usage)
### 🤗 diffusers
- [Performing textual inversion with diffusers](https://github.com/huggingface/diffusers/tree/main/examples/textual_inversion)
- [Training DreamBooth with diffusers](https://github.com/huggingface/diffusers/tree/main/examples/dreambooth)
### fastai
- [Distributed training from Jupyter Notebooks with fastai](https://docs.fast.ai/tutorial.distributed.html)
- [Basic distributed training examples with fastai](https://docs.fast.ai/examples/distributed_app_examples.html)
### GradsFlow
- [Auto Image Classification with GradsFlow](https://docs.gradsflow.com/en/latest/examples/nbs/01-ImageClassification/)
### imagen-pytorch
- [Fine-tuning Imagen](https://github.com/lucidrains/imagen-pytorch#usage)
### Kornia
- [Fine-tuning vision models with Kornia's Trainer](https://kornia.readthedocs.io/en/latest/get-started/training.html)
### PyTorch Accelerated
- [Quickstart distributed training tutorial with PyTorch Accelerated](https://pytorch-accelerated.readthedocs.io/en/latest/quickstart.html)
### PyTorch3D
- [Perform Deep Learning with 3D data](https://pytorch3d.org/tutorials/)
### Tez
- [Leaf disease detection with Tez and Accelerate](https://www.kaggle.com/code/abhishek/tez-faster-and-easier-training-for-leaf-detection/notebook)
### trlx
- [How to implement a sentiment learning task with trlx](https://github.com/CarperAI/trlx#example-how-to-add-a-task)

View File

@ -23,7 +23,7 @@ The [nlp_example.py](./nlp_example.py) script is a simple example to train a Ber
Prior to running it you should install 🤗 Dataset and 🤗 Transformers:
```bash
pip install datasets evaluate transformers
```
The same script can be run in any of the following configurations:
@ -136,7 +136,7 @@ To run it in each of these various modes, use the following commands:
```
- single GPU:
```bash
python ./cv_example.py # from a server with a GPU
```
- with fp16 (mixed-precision)
* from any server by passing `fp16=True` to the `Accelerator`.
@ -183,3 +183,30 @@ To run it in each of these various modes, use the following commands:
```
* In PyTorch:
Add an `xmp.spawn` line in your script as you usually do.
### Simple vision example (GANs)
- [huggan project](https://github.com/huggingface/community-events/tree/main/huggan)
### Using AWS SageMaker integration
- [Examples showcasing AWS SageMaker integration of 🤗 Accelerate.](https://github.com/pacman100/accelerate-aws-sagemaker)
## Finer Examples
While the first two scripts are extremely barebones when it comes to what you can do with accelerate, more advanced features are documented in two other locations.
### `by_feature` examples
These scripts are *individual* examples highlighting one particular feature or use-case within Accelerate. They all stem from the [nlp_example.py](./nlp_example.py) script, and any changes or modifications are denoted with a `# New Code #` comment.
Read the README.md file located in the `by_feature` folder for more information.
### `complete_*` examples
These two scripts contain *every* single feature currently available in Accelerate in one place, as one giant script.
New arguments that can be passed include:
- `checkpointing_steps`, whether the various states should be saved at the end of every `n` steps, or `"epoch"` for each epoch. States are then saved to folders named `step_{n}` or `epoch_{n}`
- `resume_from_checkpoint`, should be used if you want to resume training from a previous call to the script that passed `checkpointing_steps` to it.
- `with_tracking`, should be used if you want to log the training run using all available experiment trackers in your environment. Currently supported trackers include TensorBoard, Weights and Biases, and CometML.

View File

@ -0,0 +1,80 @@
# What are these scripts?
All scripts in this folder originate from the `nlp_example.py` file, as it is a very simplistic NLP training example using Accelerate with zero extra features.
From there, each further script adds in just **one** feature of Accelerate, showing how you can quickly modify your own scripts to implement these capabilities.
A full example with all of these parts integrated together can be found in the `complete_nlp_example.py` script and `complete_cv_example.py` script.
Adjustments to each script from the base `nlp_example.py` file can be found quickly by searching for "# New Code #"
## Example Scripts by Feature and their Arguments
### Base Example (`../nlp_example.py`)
- Shows how to use `Accelerator` in an extremely simplistic PyTorch training loop
- Arguments available:
- `mixed_precision`, whether to use mixed precision. ("no", "fp16", or "bf16")
- `cpu`, whether to train using only the CPU. (yes/no/1/0)
All following scripts also accept these arguments in addition to their added ones.
These arguments should be added at the end of any method for starting the python script (such as `python`, `accelerate launch`, `python -m torch.distributed.launch`), such as:
```bash
accelerate launch ../nlp_example.py --mixed_precision fp16 --cpu 0
```
### Checkpointing and Resuming Training (`checkpointing.py`)
- Shows how to use `Accelerator.save_state` and `Accelerator.load_state` to save or continue training (a short sketch of these two calls follows at the end of this section)
- **It is assumed you are continuing off the same training script**
- Arguments available:
- `checkpointing_steps`, after how many steps the various states should be saved. ("epoch", 1, 2, ...)
- `output_dir`, where saved state folders should be saved to, default is current working directory
- `resume_from_checkpoint`, what checkpoint folder to resume from. ("epoch_0", "step_22", ...)
These arguments should be added at the end of any method for starting the python script (such as `python`, `accelerate launch`, `python -m torch.distributed.launch`), such as:
(Note, `resume_from_checkpoint` assumes that we've run the script for one epoch with the `--checkpointing_steps epoch` flag)
```bash
accelerate launch ./checkpointing.py --checkpointing_steps epoch --output_dir "checkpointing_tutorial" --resume_from_checkpoint "checkpointing_tutorial/epoch_0"
```
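For reference, here is a rough sketch (using the same folder names as the command above) of the two calls the script is built around:
```python
# Save the model, optimizer, scheduler and RNG states into a checkpoint folder...
accelerator.save_state("checkpointing_tutorial/epoch_0")

# ...and load them back before resuming training from that point
accelerator.load_state("checkpointing_tutorial/epoch_0")
```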
### Cross Validation (`cross_validation.py`)
- Shows how to use `Accelerator.free_memory` and run cross validation efficiently with `datasets`.
- Arguments available:
- `num_folds`, the number of folds the training dataset should be split into.
These arguments should be added at the end of any method for starting the python script (such as `python`, `accelerate launch`, `python -m torch.distributed.launch`), such as:
```bash
accelerate launch ./cross_validation.py --num_folds 2
```
### Experiment Tracking (`tracking.py`)
- Shows how to use `Accelerate.init_trackers` and `Accelerator.log`
- Can be used with Weights and Biases, TensorBoard, or CometML.
- Arguments available:
- `with_tracking`, whether to load in all available experiment trackers from the environment.
These arguments should be added at the end of any method for starting the python script (such as `python`, `accelerate launch`, `python -m torch.distributed.launch`), such as:
```bash
accelerate launch ./tracking.py --with_tracking
```
### Gradient Accumulation (`gradient_accumulation.py`)
- Shows how to use `Accelerator.no_sync` to prevent gradient averaging in a distributed setup (a short sketch of this pattern follows the command below).
- Arguments available:
- `gradient_accumulation_steps`, the number of batches to accumulate gradients over before the optimizer and scheduler are stepped and the gradients are zeroed
These arguments should be added at the end of any method for starting the python script (such as `python`, `accelerate launch`, `python -m torch.distributed.launch`), such as:
```bash
accelerate launch ./gradient_accumulation.py --gradient_accumulation_steps 5
```
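As a rough sketch of the pattern the script implements (variable names here are placeholders mirroring the basic training-loop examples earlier in this document), `Accelerator.no_sync` suppresses gradient synchronization on all but the last step of each accumulation window:
```python
for index, batch in enumerate(training_dataloader):
    inputs, targets = batch
    if (index + 1) % gradient_accumulation_steps != 0:
        # Accumulate locally: gradients are not all-reduced across processes on these steps
        with accelerator.no_sync(model):
            outputs = model(inputs)
            loss = loss_function(outputs, targets) / gradient_accumulation_steps
            accelerator.backward(loss)
    else:
        # Last step of the window: gradients synchronize as usual, then we step
        outputs = model(inputs)
        loss = loss_function(outputs, targets) / gradient_accumulation_steps
        accelerator.backward(loss)
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()
```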

View File

@ -0,0 +1,232 @@
# Copyright 2022 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import os
import torch
from torch.optim import AdamW
from torch.utils.data import DataLoader
# New Code #
import evaluate
from accelerate import Accelerator, DistributedType
from accelerate.utils import find_executable_batch_size
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_linear_schedule_with_warmup, set_seed
########################################################################
# This is a fully working simple example to use Accelerate,
# specifically showcasing how to combine both the gradient accumulation
# and automatic batch size finder utilities of Accelerate to perform
# automatic gradient accumulation
#
# This example trains a Bert base model on GLUE MRPC
# in any of the following settings (with the same script):
# - single CPU or single GPU
# - multi GPUS (using PyTorch distributed mode)
# - (multi) TPUs
# - fp16 (mixed-precision) or fp32 (normal precision)
#
# New additions from the base script can be found quickly by
# looking for the # New Code # tags
#
# To run it in each of these various modes, follow the instructions
# in the readme for examples:
# https://github.com/huggingface/accelerate/tree/main/examples
#
########################################################################
EVAL_BATCH_SIZE = 32
def get_dataloaders(accelerator: Accelerator, batch_size: int = 16):
"""
Creates a set of `DataLoader`s for the `glue` dataset,
using "bert-base-cased" as the tokenizer.
Args:
accelerator (`Accelerator`):
An `Accelerator` object
batch_size (`int`, *optional*):
The batch size for the train and validation DataLoaders.
"""
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
datasets = load_dataset("glue", "mrpc")
def tokenize_function(examples):
# max_length=None => use the model max length (it's actually the default)
outputs = tokenizer(examples["sentence1"], examples["sentence2"], truncation=True, max_length=None)
return outputs
# Apply the method we just defined to all the examples in all the splits of the dataset
# starting with the main process first:
with accelerator.main_process_first():
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
remove_columns=["idx", "sentence1", "sentence2"],
)
# We also rename the 'label' column to 'labels' which is the expected name for labels by the models of the
# transformers library
tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
def collate_fn(examples):
# On TPU it's best to pad everything to the same length or training will be very slow.
if accelerator.distributed_type == DistributedType.TPU:
return tokenizer.pad(examples, padding="max_length", max_length=128, return_tensors="pt")
return tokenizer.pad(examples, padding="longest", return_tensors="pt")
# Instantiate dataloaders.
train_dataloader = DataLoader(
tokenized_datasets["train"], shuffle=True, collate_fn=collate_fn, batch_size=batch_size
)
eval_dataloader = DataLoader(
tokenized_datasets["validation"], shuffle=False, collate_fn=collate_fn, batch_size=EVAL_BATCH_SIZE
)
return train_dataloader, eval_dataloader
# For testing only
if os.environ.get("TESTING_MOCKED_DATALOADERS", None) == "1":
from accelerate.test_utils.training import mocked_dataloaders
get_dataloaders = mocked_dataloaders # noqa: F811
def training_function(config, args):
# For testing only
if os.environ.get("TESTING_MOCKED_DATALOADERS", None) == "1":
config["num_epochs"] = 2
# Initialize accelerator
accelerator = Accelerator(cpu=args.cpu, mixed_precision=args.mixed_precision)
# Sample hyper-parameters for learning rate, batch size, seed and a few other HPs
lr = config["lr"]
num_epochs = int(config["num_epochs"])
seed = int(config["seed"])
observed_batch_size = int(config["batch_size"])
metric = evaluate.load("glue", "mrpc")
# New Code #
# We use the `find_executable_batch_size` decorator, passing in the desired observed batch size
# to train on. If a CUDA OOM error occurs, it will retry this loop cutting the batch size in
# half each time. From this, we can calculate the number of gradient accumulation steps needed
# and modify the Accelerator object as a result
@find_executable_batch_size(starting_batch_size=int(observed_batch_size))
def inner_training_loop(batch_size):
# Since we need to modify the outside accelerator object, we need to bring it
# to the local scope
nonlocal accelerator
# We can calculate the number of gradient accumulation steps based on the current
# batch size vs the starting batch size
num_gradient_accumulation_steps = observed_batch_size // batch_size
# And then set it in the Accelerator directly:
accelerator.gradient_accumulation_steps = num_gradient_accumulation_steps
# Next we need to free all of the stored model references in the Accelerator each time
accelerator.free_memory()
# And set the seed so our results are reproducible each reset
set_seed(seed)
# Instantiate the model (we build the model here so that the seed also controls the new weights initialization)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", return_dict=True)
# We could avoid this line since the accelerator is set with `device_placement=True` (default value).
# Note that if you are placing tensors on devices manually, this line absolutely needs to be before the optimizer
# creation otherwise training will not work on TPU (`accelerate` will kindly throw an error to make us aware of that).
model = model.to(accelerator.device)
# Instantiate optimizer
optimizer = AdamW(params=model.parameters(), lr=lr)
train_dataloader, eval_dataloader = get_dataloaders(accelerator, batch_size)
# Instantiate scheduler
lr_scheduler = get_linear_schedule_with_warmup(
optimizer=optimizer,
num_warmup_steps=100,
num_training_steps=(len(train_dataloader) * num_epochs),
)
# Prepare everything
# There is no specific order to remember, we just need to unpack the objects in the same order we gave them to the
# prepare method.
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
)
# Now we train the model
for epoch in range(num_epochs):
model.train()
for step, batch in enumerate(train_dataloader):
# And perform gradient accumulation
with accelerator.accumulate(model):
# We could avoid this line since we set the accelerator with `device_placement=True`.
batch.to(accelerator.device)
outputs = model(**batch)
loss = outputs.loss
accelerator.backward(loss)
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
model.eval()
for step, batch in enumerate(eval_dataloader):
# We could avoid this line since we set the accelerator with `device_placement=True`.
batch.to(accelerator.device)
with torch.no_grad():
outputs = model(**batch)
predictions = outputs.logits.argmax(dim=-1)
predictions, references = accelerator.gather_for_metrics((predictions, batch["labels"]))
metric.add_batch(
predictions=predictions,
references=references,
)
eval_metric = metric.compute()
# Use accelerator.print to print only on the main process.
accelerator.print(f"epoch {epoch}:", eval_metric)
# New Code #
# And call it at the end with no arguments
# Note: You could also refactor this outside of your training loop function
inner_training_loop()
def main():
parser = argparse.ArgumentParser(description="Simple example of training script.")
parser.add_argument(
"--mixed_precision",
type=str,
default="no",
choices=["no", "fp16", "bf16"],
help="Whether to use mixed precision. Choose"
"between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
"and an Nvidia Ampere GPU.",
)
parser.add_argument("--cpu", action="store_true", help="If passed, will train on the CPU.")
args = parser.parse_args()
# New Code #
# We modify the starting batch size to be an observed batch size of 256, to guarantee an initial CUDA OOM
config = {"lr": 2e-5, "num_epochs": 3, "seed": 42, "batch_size": 256}
training_function(config, args)
if __name__ == "__main__":
main()

View File

@ -0,0 +1,303 @@
# coding=utf-8
# Copyright 2021 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import os
import torch
from torch.optim import AdamW
from torch.utils.data import DataLoader
import evaluate
from accelerate import Accelerator, DistributedType
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_linear_schedule_with_warmup, set_seed
########################################################################
# This is a fully working simple example to use Accelerate,
# specifically showcasing the checkpointing capability,
# and builds off the `nlp_example.py` script.
#
# This example trains a Bert base model on GLUE MRPC
# in any of the following settings (with the same script):
# - single CPU or single GPU
# - multi GPUS (using PyTorch distributed mode)
# - (multi) TPUs
# - fp16 (mixed-precision) or fp32 (normal precision)
#
# To help focus on the differences in the code, building `DataLoaders`
# was refactored into its own function.
# New additions from the base script can be found quickly by
# looking for the # New Code # tags
#
# To run it in each of these various modes, follow the instructions
# in the readme for examples:
# https://github.com/huggingface/accelerate/tree/main/examples
#
########################################################################
MAX_GPU_BATCH_SIZE = 16
EVAL_BATCH_SIZE = 32
def get_dataloaders(accelerator: Accelerator, batch_size: int = 16):
"""
Creates a set of `DataLoader`s for the `glue` dataset,
using "bert-base-cased" as the tokenizer.
Args:
accelerator (`Accelerator`):
An `Accelerator` object
batch_size (`int`, *optional*):
The batch size for the train and validation DataLoaders.
"""
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
datasets = load_dataset("glue", "mrpc")
def tokenize_function(examples):
# max_length=None => use the model max length (it's actually the default)
outputs = tokenizer(examples["sentence1"], examples["sentence2"], truncation=True, max_length=None)
return outputs
# Apply the method we just defined to all the examples in all the splits of the dataset
# starting with the main process first:
with accelerator.main_process_first():
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
remove_columns=["idx", "sentence1", "sentence2"],
)
# We also rename the 'label' column to 'labels' which is the expected name for labels by the models of the
# transformers library
tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
def collate_fn(examples):
# On TPU it's best to pad everything to the same length or training will be very slow.
if accelerator.distributed_type == DistributedType.TPU:
return tokenizer.pad(examples, padding="max_length", max_length=128, return_tensors="pt")
return tokenizer.pad(examples, padding="longest", return_tensors="pt")
# Instantiate dataloaders.
train_dataloader = DataLoader(
tokenized_datasets["train"], shuffle=True, collate_fn=collate_fn, batch_size=batch_size
)
eval_dataloader = DataLoader(
tokenized_datasets["validation"], shuffle=False, collate_fn=collate_fn, batch_size=EVAL_BATCH_SIZE
)
return train_dataloader, eval_dataloader
# For testing only
if os.environ.get("TESTING_MOCKED_DATALOADERS", None) == "1":
from accelerate.test_utils.training import mocked_dataloaders
get_dataloaders = mocked_dataloaders # noqa: F811
def training_function(config, args):
# For testing only
if os.environ.get("TESTING_MOCKED_DATALOADERS", None) == "1":
config["num_epochs"] = 2
# Initialize accelerator
accelerator = Accelerator(cpu=args.cpu, mixed_precision=args.mixed_precision)
# Sample hyper-parameters for learning rate, batch size, seed and a few other HPs
lr = config["lr"]
num_epochs = int(config["num_epochs"])
seed = int(config["seed"])
batch_size = int(config["batch_size"])
# New Code #
# Parse out whether we are saving every epoch or after a certain number of batches
if hasattr(args.checkpointing_steps, "isdigit"):
if args.checkpointing_steps == "epoch":
checkpointing_steps = args.checkpointing_steps
elif args.checkpointing_steps.isdigit():
checkpointing_steps = int(args.checkpointing_steps)
else:
raise ValueError(
f"Argument `checkpointing_steps` must be either a number or `epoch`. `{args.checkpointing_steps}` passed."
)
else:
checkpointing_steps = None
set_seed(seed)
train_dataloader, eval_dataloader = get_dataloaders(accelerator, batch_size)
metric = evaluate.load("glue", "mrpc")
# If the batch size is too big we use gradient accumulation
gradient_accumulation_steps = 1
if batch_size > MAX_GPU_BATCH_SIZE and accelerator.distributed_type != DistributedType.TPU:
gradient_accumulation_steps = batch_size // MAX_GPU_BATCH_SIZE
batch_size = MAX_GPU_BATCH_SIZE
# Instantiate the model (we build the model here so that the seed also controls the new weights initialization)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", return_dict=True)
# We could avoid this line since the accelerator is set with `device_placement=True` (default value).
# Note that if you are placing tensors on devices manually, this line absolutely needs to be before the optimizer
# creation otherwise training will not work on TPU (`accelerate` will kindly throw an error to make us aware of that).
model = model.to(accelerator.device)
# Instantiate optimizer
optimizer = AdamW(params=model.parameters(), lr=lr)
# Instantiate scheduler
lr_scheduler = get_linear_schedule_with_warmup(
optimizer=optimizer,
num_warmup_steps=100,
num_training_steps=(len(train_dataloader) * num_epochs) // gradient_accumulation_steps,
)
# Prepare everything
# There is no specific order to remember, we just need to unpack the objects in the same order we gave them to the
# prepare method.
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
)
# New Code #
# We need to keep track of how many total steps we have iterated over
overall_step = 0
# We also need to keep track of the starting epoch so files are named properly
starting_epoch = 0
# We need to load the checkpoint back in before training here with `load_state`
# The total number of epochs is adjusted based on where the state is being loaded from,
# as we assume continuation of the same training script
if args.resume_from_checkpoint:
if args.resume_from_checkpoint is not None or args.resume_from_checkpoint != "":
accelerator.print(f"Resumed from checkpoint: {args.resume_from_checkpoint}")
accelerator.load_state(args.resume_from_checkpoint)
path = os.path.basename(args.resume_from_checkpoint)
else:
# Get the most recent checkpoint
dirs = [f.name for f in os.scandir(os.getcwd()) if f.is_dir()]
dirs.sort(key=os.path.getctime)
path = dirs[-1] # Sorts folders by date modified, most recent checkpoint is the last
# Extract `epoch_{i}` or `step_{i}`
training_difference = os.path.splitext(path)[0]
if "epoch" in training_difference:
starting_epoch = int(training_difference.replace("epoch_", "")) + 1
resume_step = None
else:
resume_step = int(training_difference.replace("step_", ""))
starting_epoch = resume_step // len(train_dataloader)
resume_step -= starting_epoch * len(train_dataloader)
# Now we train the model
for epoch in range(starting_epoch, num_epochs):
model.train()
for step, batch in enumerate(train_dataloader):
# New Code #
# We need to skip steps until we reach the resumed step during the first epoch
if args.resume_from_checkpoint and epoch == starting_epoch:
if resume_step is not None and step < resume_step:
overall_step += 1
continue
# We could avoid this line since we set the accelerator with `device_placement=True`.
batch.to(accelerator.device)
outputs = model(**batch)
loss = outputs.loss
loss = loss / gradient_accumulation_steps
accelerator.backward(loss)
if step % gradient_accumulation_steps == 0:
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
# New Code #
overall_step += 1
# New Code #
# We save the model, optimizer, lr_scheduler, and seed states by calling `save_state`
# These are saved to folders named `step_{overall_step}`
# Will contain files: "pytorch_model.bin", "optimizer.bin", "scheduler.bin", and "random_states.pkl"
# If mixed precision was used, will also save a "scalar.bin" file
if isinstance(checkpointing_steps, int):
output_dir = f"step_{overall_step}"
if overall_step % checkpointing_steps == 0:
if args.output_dir is not None:
output_dir = os.path.join(args.output_dir, output_dir)
accelerator.save_state(output_dir)
model.eval()
for step, batch in enumerate(eval_dataloader):
# We could avoid this line since we set the accelerator with `device_placement=True` (the default).
batch.to(accelerator.device)
with torch.no_grad():
outputs = model(**batch)
predictions = outputs.logits.argmax(dim=-1)
predictions, references = accelerator.gather_for_metrics((predictions, batch["labels"]))
metric.add_batch(
predictions=predictions,
references=references,
)
eval_metric = metric.compute()
# Use accelerator.print to print only on the main process.
accelerator.print(f"epoch {epoch}:", eval_metric)
# New Code #
# We save the model, optimizer, lr_scheduler, and seed states by calling `save_state`
# These are saved to folders named `epoch_{epoch}`
# Will contain files: "pytorch_model.bin", "optimizer.bin", "scheduler.bin", and "random_states.pkl"
# If mixed precision was used, will also save a "scalar.bin" file
if checkpointing_steps == "epoch":
output_dir = f"epoch_{epoch}"
if args.output_dir is not None:
output_dir = os.path.join(args.output_dir, output_dir)
accelerator.save_state(output_dir)
def main():
parser = argparse.ArgumentParser(description="Simple example of training script.")
parser.add_argument(
"--mixed_precision",
type=str,
default="no",
choices=["no", "fp16", "bf16"],
help="Whether to use mixed precision. Choose"
"between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
"and an Nvidia Ampere GPU.",
)
parser.add_argument("--cpu", action="store_true", help="If passed, will train on the CPU.")
parser.add_argument(
"--checkpointing_steps",
type=str,
default=None,
help="Whether the various states should be saved at the end of every n steps, or 'epoch' for each epoch.",
)
parser.add_argument(
"--output_dir",
type=str,
default=".",
help="Optional save directory where all checkpoint folders will be stored. Default is the current working directory.",
)
parser.add_argument(
"--resume_from_checkpoint",
type=str,
default=None,
help="If the training should continue from a checkpoint folder.",
)
args = parser.parse_args()
config = {"lr": 2e-5, "num_epochs": 3, "seed": 42, "batch_size": 16}
training_function(config, args)
if __name__ == "__main__":
main()

View File

@ -0,0 +1,268 @@
# coding=utf-8
# Copyright 2022 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
from typing import List
import numpy as np
import torch
from torch.optim import AdamW
from torch.utils.data import DataLoader
import evaluate
from accelerate import Accelerator, DistributedType
from datasets import DatasetDict, load_dataset
# New Code #
# We'll be using StratifiedKFold for this example
from sklearn.model_selection import StratifiedKFold
from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_linear_schedule_with_warmup, set_seed
########################################################################
# This is a fully working simple example to use Accelerate,
# specifically showcasing how to perform Cross Validation,
# and builds off the `nlp_example.py` script.
#
# This example trains a Bert base model on GLUE MRPC
# in any of the following settings (with the same script):
# - single CPU or single GPU
# - multi GPUS (using PyTorch distributed mode)
# - (multi) TPUs
# - fp16 (mixed-precision) or fp32 (normal precision)
#
# To help focus on the differences in the code, building `DataLoaders`
# was refactored into its own function.
# New additions from the base script can be found quickly by
# looking for the # New Code # tags
#
# To run it in each of these various modes, follow the instructions
# in the readme for examples:
# https://github.com/huggingface/accelerate/tree/main/examples
#
########################################################################
MAX_GPU_BATCH_SIZE = 16
EVAL_BATCH_SIZE = 32
# New Code #
# We need a different `get_dataloaders` function that will build dataloaders by index
def get_fold_dataloaders(
accelerator: Accelerator, dataset: DatasetDict, train_idxs: List[int], valid_idxs: List[int], batch_size: int = 16
):
"""
Gets a set of train, valid, and test dataloaders for a particular fold
Args:
accelerator (`Accelerator`):
The main `Accelerator` object
train_idxs (list of `int`):
The split indices for the training dataset
valid_idxs (list of `int`):
The split indices for the validation dataset
batch_size (`int`):
The size of the minibatch. Default is 16
"""
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
datasets = DatasetDict(
{
"train": dataset["train"].select(train_idxs),
"validation": dataset["train"].select(valid_idxs),
"test": dataset["validation"],
}
)
def tokenize_function(examples):
# max_length=None => use the model max length (it's actually the default)
outputs = tokenizer(examples["sentence1"], examples["sentence2"], truncation=True, max_length=None)
return outputs
# Apply the method we just defined to all the examples in all the splits of the dataset
# starting with the main process first:
with accelerator.main_process_first():
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
remove_columns=["idx", "sentence1", "sentence2"],
)
# We also rename the 'label' column to 'labels' which is the expected name for labels by the models of the
# transformers library
tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
def collate_fn(examples):
# On TPU it's best to pad everything to the same length or training will be very slow.
if accelerator.distributed_type == DistributedType.TPU:
return tokenizer.pad(examples, padding="max_length", max_length=128, return_tensors="pt")
return tokenizer.pad(examples, padding="longest", return_tensors="pt")
# Instantiate dataloaders.
train_dataloader = DataLoader(
tokenized_datasets["train"], shuffle=True, collate_fn=collate_fn, batch_size=batch_size
)
eval_dataloader = DataLoader(
tokenized_datasets["validation"], shuffle=False, collate_fn=collate_fn, batch_size=EVAL_BATCH_SIZE
)
test_dataloader = DataLoader(
tokenized_datasets["test"], shuffle=False, collate_fn=collate_fn, batch_size=EVAL_BATCH_SIZE
)
return train_dataloader, eval_dataloader, test_dataloader
def training_function(config, args):
# New Code #
test_predictions = []
# Download the dataset
datasets = load_dataset("glue", "mrpc")
# Create our splits
kfold = StratifiedKFold(n_splits=int(args.num_folds))
# Initialize accelerator
accelerator = Accelerator(cpu=args.cpu, mixed_precision=args.mixed_precision)
# Sample hyper-parameters for learning rate, batch size, seed and a few other HPs
lr = config["lr"]
num_epochs = int(config["num_epochs"])
seed = int(config["seed"])
batch_size = int(config["batch_size"])
metric = evaluate.load("glue", "mrpc")
# If the batch size is too big we use gradient accumulation
gradient_accumulation_steps = 1
if batch_size > MAX_GPU_BATCH_SIZE and accelerator.distributed_type != DistributedType.TPU:
gradient_accumulation_steps = batch_size // MAX_GPU_BATCH_SIZE
batch_size = MAX_GPU_BATCH_SIZE
set_seed(seed)
# New Code #
# Create our folds:
folds = kfold.split(np.zeros(datasets["train"].num_rows), datasets["train"]["label"])
test_references = []
# Iterate over them
for i, (train_idxs, valid_idxs) in enumerate(folds):
train_dataloader, eval_dataloader, test_dataloader = get_fold_dataloaders(
accelerator,
datasets,
train_idxs,
valid_idxs,
)
# Instantiate the model (we build the model here so that the seed also controls new weights initialization)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", return_dict=True)
# We could avoid this line since the accelerator is set with `device_placement=True` (default value).
# Note that if you are placing tensors on devices manually, this line absolutely needs to be before the optimizer
# creation otherwise training will not work on TPU (`accelerate` will kindly throw an error to make us aware of that).
model = model.to(accelerator.device)
# Instantiate optimizer
optimizer = AdamW(params=model.parameters(), lr=lr)
# Instantiate scheduler
lr_scheduler = get_linear_schedule_with_warmup(
optimizer=optimizer,
num_warmup_steps=100,
num_training_steps=(len(train_dataloader) * num_epochs) // gradient_accumulation_steps,
)
# Prepare everything
# There is no specific order to remember, we just need to unpack the objects in the same order we gave them to the
# prepare method.
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
)
# Now we train the model
for epoch in range(num_epochs):
model.train()
for step, batch in enumerate(train_dataloader):
# We could avoid this line since we set the accelerator with `device_placement=True`.
batch.to(accelerator.device)
outputs = model(**batch)
loss = outputs.loss
loss = loss / gradient_accumulation_steps
accelerator.backward(loss)
if step % gradient_accumulation_steps == 0:
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
model.eval()
for step, batch in enumerate(eval_dataloader):
# We could avoid this line since we set the accelerator with `device_placement=True`.
batch.to(accelerator.device)
with torch.no_grad():
outputs = model(**batch)
predictions = outputs.logits.argmax(dim=-1)
predictions, references = accelerator.gather_for_metrics((predictions, batch["labels"]))
metric.add_batch(
predictions=predictions,
references=references,
)
eval_metric = metric.compute()
# Use accelerator.print to print only on the main process.
accelerator.print(f"epoch {epoch}:", eval_metric)
# New Code #
# We also run predictions on the test set at the very end
fold_predictions = []
for step, batch in enumerate(test_dataloader):
# We could avoid this line since we set the accelerator with `device_placement=True`.
batch.to(accelerator.device)
with torch.no_grad():
outputs = model(**batch)
predictions = outputs.logits
predictions, references = accelerator.gather_for_metrics((predictions, batch["labels"]))
fold_predictions.append(predictions.cpu())
if i == 0:
# We only need to gather the test references once, so do it on the first fold only
test_references.append(references.cpu())
# Collect this fold's test predictions so they can be averaged across folds later
test_predictions.append(torch.cat(fold_predictions, dim=0))
# We now need to release all our memory and get rid of the current model, optimizer, etc
accelerator.free_memory()
# New Code #
# Finally we check the accuracy of our folded results:
test_references = torch.cat(test_references, dim=0)
preds = torch.stack(test_predictions, dim=0).sum(dim=0).div(int(args.num_folds)).argmax(dim=-1)
test_metric = metric.compute(predictions=preds, references=test_references)
accelerator.print("Average test metrics from all folds:", test_metric)
def main():
parser = argparse.ArgumentParser(description="Simple example of training script.")
parser.add_argument(
"--mixed_precision",
type=str,
default="no",
choices=["no", "fp16", "bf16"],
help="Whether to use mixed precision. Choose"
"between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
"and an Nvidia Ampere GPU.",
)
parser.add_argument("--cpu", action="store_true", help="If passed, will train on the CPU.")
# New Code #
parser.add_argument("--num_folds", type=int, default=3, help="The number of splits to perform across the dataset")
args = parser.parse_args()
config = {"lr": 2e-5, "num_epochs": 3, "seed": 42, "batch_size": 16}
training_function(config, args)
if __name__ == "__main__":
main()
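# A minimal example invocation (the file name is an assumption; the flags are defined above):
#
#   accelerate launch cross_validation.py --num_folds 5 --mixed_precision fp16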


@@ -0,0 +1,734 @@
#!/usr/bin/env python
# coding=utf-8
# Copyright 2022 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Fine-tuning the library models for causal language modeling (GPT, GPT-2, CTRL, ...)
on a text file or a dataset without using HuggingFace Trainer.
Here is the full list of checkpoints on the hub that can be fine-tuned by this script:
https://huggingface.co/models?filter=text-generation
"""
# You can also adapt this script on your own causal language modeling task. Pointers for this are left as comments.
import argparse
import json
import logging
import math
import os
import random
from itertools import chain
from pathlib import Path
import torch
from torch.utils.data import DataLoader
import datasets
import transformers
from accelerate import Accelerator, DistributedType
from accelerate.logging import get_logger
from accelerate.utils import DummyOptim, DummyScheduler, set_seed
from datasets import load_dataset
from huggingface_hub import Repository
from tqdm.auto import tqdm
from transformers import (
CONFIG_MAPPING,
MODEL_MAPPING,
AutoConfig,
AutoModelForCausalLM,
AutoTokenizer,
SchedulerType,
default_data_collator,
get_scheduler,
)
from transformers.utils import get_full_repo_name
from transformers.utils.versions import require_version
logger = get_logger(__name__)
require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/language-modeling/requirements.txt")
MODEL_CONFIG_CLASSES = list(MODEL_MAPPING.keys())
MODEL_TYPES = tuple(conf.model_type for conf in MODEL_CONFIG_CLASSES)
def parse_args():
parser = argparse.ArgumentParser(description="Finetune a transformers model on a causal language modeling task")
parser.add_argument(
"--dataset_name",
type=str,
default=None,
help="The name of the dataset to use (via the datasets library).",
)
parser.add_argument(
"--dataset_config_name",
type=str,
default=None,
help="The configuration name of the dataset to use (via the datasets library).",
)
parser.add_argument(
"--train_file", type=str, default=None, help="A csv or a json file containing the training data."
)
parser.add_argument(
"--validation_file", type=str, default=None, help="A csv or a json file containing the validation data."
)
parser.add_argument(
"--validation_split_percentage",
default=5,
help="The percentage of the train set used as validation set in case there's no validation split",
)
parser.add_argument(
"--model_name_or_path",
type=str,
help="Path to pretrained model or model identifier from huggingface.co/models.",
required=False,
)
parser.add_argument(
"--config_name",
type=str,
default=None,
help="Pretrained config name or path if not the same as model_name",
)
parser.add_argument(
"--tokenizer_name",
type=str,
default=None,
help="Pretrained tokenizer name or path if not the same as model_name",
)
parser.add_argument(
"--use_slow_tokenizer",
action="store_true",
help="If passed, will use a slow tokenizer (not backed by the 🤗 Tokenizers library).",
)
parser.add_argument(
"--per_device_train_batch_size",
type=int,
default=8,
help="Batch size (per device) for the training dataloader.",
)
parser.add_argument(
"--per_device_eval_batch_size",
type=int,
default=8,
help="Batch size (per device) for the evaluation dataloader.",
)
parser.add_argument(
"--learning_rate",
type=float,
default=5e-5,
help="Initial learning rate (after the potential warmup period) to use.",
)
parser.add_argument("--weight_decay", type=float, default=0.0, help="Weight decay to use.")
parser.add_argument("--num_train_epochs", type=int, default=3, help="Total number of training epochs to perform.")
parser.add_argument(
"--max_train_steps",
type=int,
default=None,
help="Total number of training steps to perform. If provided, overrides num_train_epochs.",
)
parser.add_argument(
"--gradient_accumulation_steps",
type=int,
default=1,
help="Number of updates steps to accumulate before performing a backward/update pass.",
)
parser.add_argument(
"--lr_scheduler_type",
type=SchedulerType,
default="linear",
help="The scheduler type to use.",
choices=["linear", "cosine", "cosine_with_restarts", "polynomial", "constant", "constant_with_warmup"],
)
parser.add_argument(
"--num_warmup_steps", type=int, default=0, help="Number of steps for the warmup in the lr scheduler."
)
parser.add_argument("--output_dir", type=str, default=None, help="Where to store the final model.")
parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.")
parser.add_argument(
"--model_type",
type=str,
default=None,
help="Model type to use if training from scratch.",
choices=MODEL_TYPES,
)
parser.add_argument(
"--block_size",
type=int,
default=None,
help=(
"Optional input sequence length after tokenization. The training dataset will be truncated in block of"
" this size for training. Default to the model max input length for single sentence inputs (take into"
" account special tokens)."
),
)
parser.add_argument(
"--preprocessing_num_workers",
type=int,
default=None,
help="The number of processes to use for the preprocessing.",
)
parser.add_argument(
"--overwrite_cache", type=bool, default=False, help="Overwrite the cached training and evaluation sets"
)
parser.add_argument(
"--no_keep_linebreaks", action="store_true", help="Do not keep line breaks when using TXT files."
)
parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.")
parser.add_argument(
"--hub_model_id", type=str, help="The name of the repository to keep in sync with the local `output_dir`."
)
parser.add_argument("--hub_token", type=str, help="The token to use to push to the Model Hub.")
parser.add_argument(
"--checkpointing_steps",
type=str,
default=None,
help="Whether the various states should be saved at the end of every n steps, or 'epoch' for each epoch.",
)
parser.add_argument(
"--resume_from_checkpoint",
type=str,
default=None,
help="If the training should continue from a checkpoint folder.",
)
# New Code #
# Whether to load the best model at the end of training
parser.add_argument(
"--load_best_model",
action="store_true",
help="Whether to load the best model at the end of training",
)
parser.add_argument(
"--with_tracking",
action="store_true",
help="Whether to enable experiment trackers for logging.",
)
parser.add_argument(
"--report_to",
type=str,
default="all",
help=(
'The integration to report the results and logs to. Supported platforms are `"tensorboard"`,'
' `"wandb"` and `"comet_ml"`. Use `"all"` (default) to report to all integrations.'
"Only applicable when `--with_tracking` is passed."
),
)
args = parser.parse_args()
# Sanity checks
if args.dataset_name is None and args.train_file is None and args.validation_file is None:
raise ValueError("Need either a dataset name or a training/validation file.")
else:
if args.train_file is not None:
extension = args.train_file.split(".")[-1]
assert extension in ["csv", "json", "txt"], "`train_file` should be a csv, json or txt file."
if args.validation_file is not None:
extension = args.validation_file.split(".")[-1]
assert extension in ["csv", "json", "txt"], "`validation_file` should be a csv, json or txt file."
if args.push_to_hub:
assert args.output_dir is not None, "Need an `output_dir` to create a repo when `--push_to_hub` is passed."
return args
# New Code #
def checkpoint_model(checkpoint_folder, ckpt_id, model, epoch, last_global_step, **kwargs):
"""Utility function for checkpointing model + optimizer dictionaries
The main purpose for this is to be able to resume training from that instant again
"""
checkpoint_state_dict = {
"epoch": epoch,
"last_global_step": last_global_step,
}
# Add extra kwargs too
checkpoint_state_dict.update(kwargs)
success = model.save_checkpoint(checkpoint_folder, ckpt_id, checkpoint_state_dict)
status_msg = f"checkpointing: checkpoint_folder={checkpoint_folder}, ckpt_id={ckpt_id}"
if success:
logging.info(f"Success {status_msg}")
else:
logging.warning(f"Failure {status_msg}")
return
# New Code #
def load_training_checkpoint(model, load_dir, tag=None, **kwargs):
"""Utility function for checkpointing model + optimizer dictionaries
The main purpose for this is to be able to resume training from that instant again
"""
_, checkpoint_state_dict = model.load_checkpoint(load_dir, tag=tag, **kwargs)
epoch = checkpoint_state_dict["epoch"]
last_global_step = checkpoint_state_dict["last_global_step"]
del checkpoint_state_dict
return (epoch, last_global_step)
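# Note: `save_checkpoint` / `load_checkpoint` used in the two helpers above are DeepSpeed
# engine methods; they are available on `model` because `accelerator.prepare` wraps the
# model in a DeepSpeed engine when a DeepSpeed config is in use.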
# New Code #
def evaluate(args, model, eval_dataloader, accelerator, eval_dataset):
model.eval()
losses = []
for step, batch in enumerate(eval_dataloader):
with torch.no_grad():
outputs = model(**batch)
loss = outputs.loss
losses.append(accelerator.gather(loss.repeat(args.per_device_eval_batch_size)))
losses = torch.cat(losses)
losses = losses[: len(eval_dataset)]
try:
eval_loss = torch.mean(losses)
perplexity = math.exp(eval_loss)
except OverflowError:
perplexity = float("inf")
return perplexity, eval_loss
def main():
args = parse_args()
# Initialize the accelerator. We will let the accelerator handle device placement for us in this example.
# If we're using tracking, we also need to initialize it here and it will by default pick up all supported trackers
# in the environment
accelerator = (
Accelerator(log_with=args.report_to, logging_dir=args.output_dir) if args.with_tracking else Accelerator()
)
# Make one log on every process with the configuration for debugging.
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
level=logging.INFO,
)
logger.info(accelerator.state, main_process_only=False)
if accelerator.is_local_main_process:
datasets.utils.logging.set_verbosity_warning()
transformers.utils.logging.set_verbosity_info()
else:
datasets.utils.logging.set_verbosity_error()
transformers.utils.logging.set_verbosity_error()
# If passed along, set the training seed now.
if args.seed is not None:
set_seed(args.seed)
# Handle the repository creation
if accelerator.is_main_process:
if args.push_to_hub:
if args.hub_model_id is None:
repo_name = get_full_repo_name(Path(args.output_dir).name, token=args.hub_token)
else:
repo_name = args.hub_model_id
repo = Repository(args.output_dir, clone_from=repo_name)
with open(os.path.join(args.output_dir, ".gitignore"), "w+") as gitignore:
if "step_*" not in gitignore:
gitignore.write("step_*\n")
if "epoch_*" not in gitignore:
gitignore.write("epoch_*\n")
elif args.output_dir is not None:
os.makedirs(args.output_dir, exist_ok=True)
accelerator.wait_for_everyone()
# Get the datasets: you can either provide your own CSV/JSON/TXT training and evaluation files (see below)
# or just provide the name of one of the public datasets available on the hub at https://huggingface.co/datasets/
# (the dataset will be downloaded automatically from the datasets Hub).
#
# For CSV/JSON files, this script will use the column called 'text' or the first column if no column called
# 'text' is found. You can easily tweak this behavior (see below).
#
# In distributed training, the load_dataset function guarantees that only one local process can concurrently
# download the dataset.
if args.dataset_name is not None:
# Downloading and loading a dataset from the hub.
raw_datasets = load_dataset(args.dataset_name, args.dataset_config_name)
if "validation" not in raw_datasets.keys():
raw_datasets["validation"] = load_dataset(
args.dataset_name,
args.dataset_config_name,
split=f"train[:{args.validation_split_percentage}%]",
)
raw_datasets["train"] = load_dataset(
args.dataset_name,
args.dataset_config_name,
split=f"train[{args.validation_split_percentage}%:]",
)
else:
data_files = {}
dataset_args = {}
if args.train_file is not None:
data_files["train"] = args.train_file
if args.validation_file is not None:
data_files["validation"] = args.validation_file
extension = args.train_file.split(".")[-1]
if extension == "txt":
extension = "text"
dataset_args["keep_linebreaks"] = not args.no_keep_linebreaks
raw_datasets = load_dataset(extension, data_files=data_files, **dataset_args)
# If no validation data is there, validation_split_percentage will be used to divide the dataset.
if "validation" not in raw_datasets.keys():
raw_datasets["validation"] = load_dataset(
extension,
data_files=data_files,
split=f"train[:{args.validation_split_percentage}%]",
**dataset_args,
)
raw_datasets["train"] = load_dataset(
extension,
data_files=data_files,
split=f"train[{args.validation_split_percentage}%:]",
**dataset_args,
)
# See more about loading any type of standard or custom dataset (from files, python dict, pandas DataFrame, etc) at
# https://huggingface.co/docs/datasets/loading_datasets.html.
# Load pretrained model and tokenizer
#
# In distributed training, the .from_pretrained methods guarantee that only one local process can concurrently
# download model & vocab.
if args.config_name:
config = AutoConfig.from_pretrained(args.config_name)
elif args.model_name_or_path:
config = AutoConfig.from_pretrained(args.model_name_or_path)
else:
config = CONFIG_MAPPING[args.model_type]()
logger.warning("You are instantiating a new config instance from scratch.")
if args.tokenizer_name:
tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_name, use_fast=not args.use_slow_tokenizer)
elif args.model_name_or_path:
tokenizer = AutoTokenizer.from_pretrained(args.model_name_or_path, use_fast=not args.use_slow_tokenizer)
else:
raise ValueError(
"You are instantiating a new tokenizer from scratch. This is not supported by this script."
"You can do it from another script, save it, and load it from here, using --tokenizer_name."
)
if args.model_name_or_path:
model = AutoModelForCausalLM.from_pretrained(
args.model_name_or_path,
from_tf=bool(".ckpt" in args.model_name_or_path),
config=config,
)
else:
logger.info("Training new model from scratch")
model = AutoModelForCausalLM.from_config(config)
model.resize_token_embeddings(len(tokenizer))
# Preprocessing the datasets.
# First we tokenize all the texts.
column_names = raw_datasets["train"].column_names
text_column_name = "text" if "text" in column_names else column_names[0]
def tokenize_function(examples):
return tokenizer(examples[text_column_name])
with accelerator.main_process_first():
tokenized_datasets = raw_datasets.map(
tokenize_function,
batched=True,
num_proc=args.preprocessing_num_workers,
remove_columns=column_names,
load_from_cache_file=not args.overwrite_cache,
desc="Running tokenizer on dataset",
)
if args.block_size is None:
block_size = tokenizer.model_max_length
if block_size > 1024:
logger.warning(
f"The tokenizer picked seems to have a very large `model_max_length` ({tokenizer.model_max_length}). "
"Picking 1024 instead. You can change that default value by passing --block_size xxx."
)
block_size = 1024
else:
if args.block_size > tokenizer.model_max_length:
logger.warning(
f"The block_size passed ({args.block_size}) is larger than the maximum length for the model"
f"({tokenizer.model_max_length}). Using block_size={tokenizer.model_max_length}."
)
block_size = min(args.block_size, tokenizer.model_max_length)
# Main data processing function that will concatenate all texts from our dataset and generate chunks of block_size.
def group_texts(examples):
# Concatenate all texts.
concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()}
total_length = len(concatenated_examples[list(examples.keys())[0]])
# We drop the small remainder; we could pad instead of dropping it if the model supported padding. You can
# customize this part to your needs.
if total_length >= block_size:
total_length = (total_length // block_size) * block_size
# Split by chunks of max_len.
result = {
k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
for k, t in concatenated_examples.items()
}
result["labels"] = result["input_ids"].copy()
return result
# Note that with `batched=True`, this map processes 1,000 texts together, so group_texts throws away a remainder
# for each of those groups of 1,000 texts. You can adjust that batch_size here but a higher value might be slower
# to preprocess.
#
# To speed up this part, we use multiprocessing. See the documentation of the map method for more information:
# https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map
with accelerator.main_process_first():
lm_datasets = tokenized_datasets.map(
group_texts,
batched=True,
num_proc=args.preprocessing_num_workers,
load_from_cache_file=not args.overwrite_cache,
desc=f"Grouping texts in chunks of {block_size}",
)
train_dataset = lm_datasets["train"]
eval_dataset = lm_datasets["validation"]
# Log a few random samples from the training set:
for index in random.sample(range(len(train_dataset)), 3):
logger.info(f"Sample {index} of the training set: {train_dataset[index]}.")
# DataLoaders creation:
train_dataloader = DataLoader(
train_dataset, shuffle=True, collate_fn=default_data_collator, batch_size=args.per_device_train_batch_size
)
eval_dataloader = DataLoader(
eval_dataset, collate_fn=default_data_collator, batch_size=args.per_device_eval_batch_size
)
# Optimizer
# Split weights in two groups, one with weight decay and the other not.
no_decay = ["bias", "LayerNorm.weight"]
optimizer_grouped_parameters = [
{
"params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
"weight_decay": args.weight_decay,
},
{
"params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
"weight_decay": 0.0,
},
]
# New Code #
# Creates a Dummy Optimizer if `optimizer` was specified in the config file, otherwise creates the AdamW optimizer
optimizer_cls = (
torch.optim.AdamW
if accelerator.state.deepspeed_plugin is None
or "optimizer" not in accelerator.state.deepspeed_plugin.deepspeed_config
else DummyOptim
)
optimizer = optimizer_cls(optimizer_grouped_parameters, lr=args.learning_rate)
# On TPU, the tied weights in our model have been disconnected, so we need to restore the ties.
if accelerator.distributed_type == DistributedType.TPU:
model.tie_weights()
# Scheduler and math around the number of training steps.
# New Code #
# Get gradient accumulation steps from deepspeed config if available
if accelerator.state.deepspeed_plugin is not None:
args.gradient_accumulation_steps = accelerator.state.deepspeed_plugin.deepspeed_config[
"gradient_accumulation_steps"
]
num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
if args.max_train_steps is None:
args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
else:
args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch)
# New Code #
# Creates a Dummy Scheduler if `scheduler` was specified in the config file, otherwise creates the `args.lr_scheduler_type` scheduler
if (
accelerator.state.deepspeed_plugin is None
or "scheduler" not in accelerator.state.deepspeed_plugin.deepspeed_config
):
lr_scheduler = get_scheduler(
name=args.lr_scheduler_type,
optimizer=optimizer,
num_warmup_steps=args.num_warmup_steps,
num_training_steps=args.max_train_steps,
)
else:
lr_scheduler = DummyScheduler(
optimizer, total_num_steps=args.max_train_steps, warmup_num_steps=args.num_warmup_steps
)
# Prepare everything with our `accelerator`.
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
)
# We need to recalculate our total training steps as the size of the training dataloader may have changed.
num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
# Figure out how many steps we should save the Accelerator states
if hasattr(args.checkpointing_steps, "isdigit"):
checkpointing_steps = args.checkpointing_steps
if args.checkpointing_steps.isdigit():
checkpointing_steps = int(args.checkpointing_steps)
else:
checkpointing_steps = None
# We need to initialize the trackers we use, and also store our configuration.
# The trackers initialize automatically on the main process.
if args.with_tracking:
experiment_config = vars(args)
# TensorBoard cannot log Enums, need the raw value
experiment_config["lr_scheduler_type"] = experiment_config["lr_scheduler_type"].value
accelerator.init_trackers("clm_no_trainer", experiment_config)
# Train!
total_batch_size = args.per_device_train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps
logger.info("***** Running training *****")
logger.info(f" Num examples = {len(train_dataset)}")
logger.info(f" Num Epochs = {args.num_train_epochs}")
logger.info(f" Instantaneous batch size per device = {args.per_device_train_batch_size}")
logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}")
logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}")
logger.info(f" Total optimization steps = {args.max_train_steps}")
# Only show the progress bar once on each machine.
progress_bar = tqdm(range(args.max_train_steps), disable=not accelerator.is_local_main_process)
completed_steps = 0
starting_epoch = 0
best_metric = None
best_metric_checkpoint = None
# Potentially load in the weights and states from a previous save
if args.resume_from_checkpoint:
# New Code #
# Loads the DeepSpeed checkpoint from the specified path
_, last_global_step = load_training_checkpoint(
model,
args.resume_from_checkpoint,
**{"load_optimizer_states": True, "load_lr_scheduler_states": True},
)
accelerator.print(f"Resumed from checkpoint: {args.resume_from_checkpoint}")
resume_step = last_global_step
starting_epoch = resume_step // len(train_dataloader)
resume_step -= starting_epoch * len(train_dataloader)
for epoch in range(starting_epoch, args.num_train_epochs):
model.train()
if args.with_tracking:
total_loss = 0
for step, batch in enumerate(train_dataloader):
# We need to skip steps until we reach the resumed step
if args.resume_from_checkpoint and epoch == starting_epoch:
if resume_step is not None and step < resume_step:
completed_steps += 1
continue
outputs = model(**batch)
loss = outputs.loss
# We keep track of the loss at each epoch
if args.with_tracking:
total_loss += loss.detach().float()
loss = loss / args.gradient_accumulation_steps
accelerator.backward(loss)
if step % args.gradient_accumulation_steps == 0 or step == len(train_dataloader) - 1:
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
progress_bar.update(1)
completed_steps += 1
if isinstance(checkpointing_steps, int):
if completed_steps % checkpointing_steps == 0:
output_dir = f"step_{completed_steps }"
if args.output_dir is not None:
output_dir = os.path.join(args.output_dir, output_dir)
accelerator.save_state(output_dir)
if completed_steps >= args.max_train_steps:
break
perplexity, eval_loss = evaluate(args, model, eval_dataloader, accelerator, eval_dataset)
logger.info(f"epoch {epoch}: perplexity: {perplexity} eval_loss: {eval_loss}")
if args.with_tracking:
accelerator.log(
{
"perplexity": perplexity,
"eval_loss": eval_loss,
"train_loss": total_loss.item() / len(train_dataloader),
"epoch": epoch,
"step": completed_steps,
},
step=completed_steps,
)
# New Code #
# Save the DeepSpeed checkpoint to the specified path
checkpoint_model(args.output_dir, epoch, model, epoch, completed_steps)
# New Code #
# Tracks the best checkpoint and best metric
if best_metric is None or best_metric > perplexity:
best_metric = perplexity
best_metric_checkpoint = os.path.join(args.output_dir, str(epoch))
accelerator.print(f"New best metric: {best_metric} at epoch {epoch}")
accelerator.print(f"best_metric_checkpoint: {best_metric_checkpoint}")
# New Code #
# Loads the best checkpoint after the training is finished
if args.load_best_model:
_, last_global_step = load_training_checkpoint(
model,
"/".join(best_metric_checkpoint.split("/")[:-1]),
tag=best_metric_checkpoint.split("/")[-1],
**{"load_optimizer_states": True, "load_lr_scheduler_states": True},
)
# New Code #
# Evaluates using the best checkpoint
perplexity, eval_loss = evaluate(args, model, eval_dataloader, accelerator, eval_dataset)
logger.info(f"Best model metrics: perplexity: {perplexity} eval_loss: {eval_loss}")
if perplexity != best_metric:
raise AssertionError(
f"Best metric {best_metric} does not match the metric {perplexity} of the loaded best model."
)
if args.output_dir is not None:
accelerator.wait_for_everyone()
unwrapped_model = accelerator.unwrap_model(model)
# New Code #
# Saves the whole/unpartitioned fp16 model when in ZeRO Stage-3 to the output directory if
# `stage3_gather_16bit_weights_on_model_save` is True in DeepSpeed Config file or
# `zero3_save_16bit_model` is True in DeepSpeed Plugin.
# For Zero Stages 1 and 2, models are saved as usual in the output directory.
# The model name saved is `pytorch_model.bin`
unwrapped_model.save_pretrained(
args.output_dir,
is_main_process=accelerator.is_main_process,
save_function=accelerator.save,
state_dict=accelerator.get_state_dict(model),
)
if accelerator.is_main_process:
tokenizer.save_pretrained(args.output_dir)
if args.push_to_hub:
repo.push_to_hub(commit_message="End of training", auto_lfs_prune=True)
with open(os.path.join(args.output_dir, "all_results.json"), "w") as f:
json.dump({"perplexity": perplexity, "eval_loss": eval_loss.item()}, f)
if __name__ == "__main__":
main()
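# A minimal example invocation (the file name and config path are assumptions; the flags are
# defined in `parse_args` above):
#
#   accelerate launch --config_file deepspeed_config.yaml deepspeed_with_config_support.py \
#       --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
#       --output_dir ./clm_output --load_best_model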


@@ -0,0 +1,381 @@
# coding=utf-8
# Copyright 2021 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import gc
import os
import torch
from torch.utils.data import DataLoader
import evaluate
from accelerate import Accelerator, DistributedType
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_linear_schedule_with_warmup, set_seed
########################################################################
# This is a fully working simple example to use Accelerate
#
# This example trains a Bert base model on GLUE MRPC
# in any of the following settings (with the same script):
# - single CPU or single GPU
# - multi GPUs (using PyTorch distributed mode)
# - (multi) TPUs
# - fp16 (mixed-precision) or fp32 (normal precision)
# - FSDP
#
# This example also demonstrates the checkpointing and sharding capabilities
#
# To run it in each of these various modes, follow the instructions
# in the readme for examples:
# https://github.com/huggingface/accelerate/tree/main/examples
#
########################################################################
MAX_GPU_BATCH_SIZE = 16
EVAL_BATCH_SIZE = 32
# New Code #
# Converting Bytes to Megabytes
def b2mb(x):
return int(x / 2**20)
# New Code #
# This context manager is used to track the peak memory usage of the process
class TorchTracemalloc:
def __enter__(self):
gc.collect()
torch.cuda.empty_cache()
torch.cuda.reset_max_memory_allocated() # reset the peak gauge to zero
self.begin = torch.cuda.memory_allocated()
return self
def __exit__(self, *exc):
gc.collect()
torch.cuda.empty_cache()
self.end = torch.cuda.memory_allocated()
self.peak = torch.cuda.max_memory_allocated()
self.used = b2mb(self.end - self.begin)
self.peaked = b2mb(self.peak - self.begin)
# print(f"delta used/peak {self.used:4d}/{self.peaked:4d}")
# For testing only
if os.environ.get("TESTING_MOCKED_DATALOADERS", None) == "1":
from accelerate.test_utils.training import mocked_dataloaders
get_dataloaders = mocked_dataloaders # noqa: F811
def training_function(config, args):
# For testing only
if os.environ.get("TESTING_MOCKED_DATALOADERS", None) == "1":
config["num_epochs"] = 2
# Initialize accelerator
if args.with_tracking:
accelerator = Accelerator(
cpu=args.cpu, mixed_precision=args.mixed_precision, log_with="wandb", logging_dir=args.logging_dir
)
else:
accelerator = Accelerator()
accelerator.print(accelerator.distributed_type)
if hasattr(args.checkpointing_steps, "isdigit"):
if args.checkpointing_steps == "epoch":
checkpointing_steps = args.checkpointing_steps
elif args.checkpointing_steps.isdigit():
checkpointing_steps = int(args.checkpointing_steps)
else:
raise ValueError(
f"Argument `checkpointing_steps` must be either a number or `epoch`. `{args.checkpointing_steps}` passed."
)
else:
checkpointing_steps = None
# Sample hyper-parameters for learning rate, batch size, seed and a few other HPs
lr = config["lr"]
num_epochs = int(config["num_epochs"])
seed = int(config["seed"])
batch_size = int(config["batch_size"])
# We need to initialize the trackers we use, and also store our configuration
if args.with_tracking:
experiment_config = vars(args)
accelerator.init_trackers("fsdp_glue_no_trainer", experiment_config)
tokenizer = AutoTokenizer.from_pretrained(args.model_name_or_path)
datasets = load_dataset("glue", "mrpc")
metric = evaluate.load("glue", "mrpc")
def tokenize_function(examples):
# max_length=None => use the model max length (it's actually the default)
outputs = tokenizer(examples["sentence1"], examples["sentence2"], truncation=True, max_length=None)
return outputs
# Apply the method we just defined to all the examples in all the splits of the dataset
# starting with the main process first:
with accelerator.main_process_first():
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
remove_columns=["idx", "sentence1", "sentence2"],
)
# We also rename the 'label' column to 'labels' which is the expected name for labels by the models of the
# transformers library
tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
# If the batch size is too big we use gradient accumulation
gradient_accumulation_steps = 1
if batch_size > MAX_GPU_BATCH_SIZE and accelerator.distributed_type != DistributedType.TPU:
gradient_accumulation_steps = batch_size // MAX_GPU_BATCH_SIZE
batch_size = MAX_GPU_BATCH_SIZE
def collate_fn(examples):
# On TPU it's best to pad everything to the same length or training will be very slow.
if accelerator.distributed_type == DistributedType.TPU:
return tokenizer.pad(examples, padding="max_length", max_length=128, return_tensors="pt")
return tokenizer.pad(examples, padding="longest", return_tensors="pt")
# Instantiate dataloaders.
train_dataloader = DataLoader(
tokenized_datasets["train"], shuffle=True, collate_fn=collate_fn, batch_size=batch_size
)
eval_dataloader = DataLoader(
tokenized_datasets["validation"], shuffle=False, collate_fn=collate_fn, batch_size=EVAL_BATCH_SIZE
)
set_seed(seed)
# Instantiate the model (we build the model here so that the seed also controls new weights initialization)
model = AutoModelForSequenceClassification.from_pretrained(args.model_name_or_path, return_dict=True)
# New Code #
# For FSDP feature, it is highly recommended and efficient to prepare the model before creating optimizer
model = accelerator.prepare(model)
accelerator.print(model)
# Instantiate optimizer
# New Code #
# For FSDP feature, at present it doesn't support multiple parameter groups,
# so we need to create a single parameter group for the whole model
optimizer = torch.optim.AdamW(params=model.parameters(), lr=lr, weight_decay=2e-4)
# Instantiate scheduler
lr_scheduler = get_linear_schedule_with_warmup(
optimizer=optimizer,
num_warmup_steps=10,
num_training_steps=(len(train_dataloader) * num_epochs) // gradient_accumulation_steps,
)
# New Code #
# For FSDP feature, prepare everything except the model as we have already prepared the model
# before creating the optimizer
# There is no specific order to remember, we just need to unpack the objects in the same order we gave them to the
# prepare method.
optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
optimizer, train_dataloader, eval_dataloader, lr_scheduler
)
overall_step = 0
# Potentially load in the weights and states from a previous save
if args.resume_from_checkpoint:
if args.resume_from_checkpoint is not None and args.resume_from_checkpoint != "":
accelerator.print(f"Resumed from checkpoint: {args.resume_from_checkpoint}")
accelerator.load_state(args.resume_from_checkpoint)
path = os.path.basename(args.resume_from_checkpoint)
else:
# Get the most recent checkpoint
dirs = [f.name for f in os.scandir(os.getcwd()) if f.is_dir()]
dirs.sort(key=os.path.getctime)
path = dirs[-1] # Sorts folders by date modified, most recent checkpoint is the last
# Extract `epoch_{i}` or `step_{i}`
training_difference = os.path.splitext(path)[0]
if "epoch" in training_difference:
num_epochs -= int(training_difference.replace("epoch_", ""))
resume_step = None
else:
resume_step = int(training_difference.replace("step_", ""))
num_epochs -= resume_step // len(train_dataloader)
# If resuming by step, we also need to know exactly how far into the DataLoader we went
resume_step = (num_epochs * len(train_dataloader)) - resume_step
# Now we train the model
for epoch in range(num_epochs):
# New Code #
# context manager to track the peak memory usage during the training epoch
with TorchTracemalloc() as tracemalloc:
model.train()
if args.with_tracking:
total_loss = 0
for step, batch in enumerate(train_dataloader):
# We need to skip steps until we reach the resumed step
if args.resume_from_checkpoint and epoch == 0:
if resume_step is not None and step < resume_step:
overall_step += 1
continue
# We could avoid this line since we set the accelerator with `device_placement=True`.
batch.to(accelerator.device)
outputs = model(**batch)
loss = outputs.loss
loss = loss / gradient_accumulation_steps
# We keep track of the loss at each epoch
if args.with_tracking:
total_loss += loss.detach().float()
accelerator.backward(loss)
if step % gradient_accumulation_steps == 0:
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
# accelerator.print(lr_scheduler.get_lr())
overall_step += 1
if isinstance(checkpointing_steps, int):
output_dir = f"step_{overall_step}"
if overall_step % checkpointing_steps == 0:
if args.output_dir is not None:
output_dir = os.path.join(args.output_dir, output_dir)
accelerator.save_state(output_dir)
# New Code #
# Printing the GPU memory usage details such as allocated memory, peak memory, and total memory usage
accelerator.print("Memory before entering the train : {}".format(b2mb(tracemalloc.begin)))
accelerator.print("Memory consumed at the end of the train (end-begin): {}".format(tracemalloc.used))
accelerator.print("Peak Memory consumed during the train (max-begin): {}".format(tracemalloc.peaked))
accelerator.print(
"Total Peak Memory consumed during the train (max): {}".format(
tracemalloc.peaked + b2mb(tracemalloc.begin)
)
)
# Logging the peak memory usage of the GPU to the tracker
if args.with_tracking:
accelerator.log(
{
"train_total_peak_memory": tracemalloc.peaked + b2mb(tracemalloc.begin),
},
step=epoch,
)
# New Code #
# context manager to track the peak memory usage during the evaluation
with TorchTracemalloc() as tracemalloc:
model.eval()
for step, batch in enumerate(eval_dataloader):
# We could avoid this line since we set the accelerator with `device_placement=True`.
batch.to(accelerator.device)
with torch.no_grad():
outputs = model(**batch)
predictions = outputs.logits.argmax(dim=-1)
predictions, references = accelerator.gather_for_metrics((predictions, batch["labels"]))
metric.add_batch(
predictions=predictions,
references=references,
)
eval_metric = metric.compute()
# Use accelerator.print to print only on the main process.
accelerator.print(f"epoch {epoch}:", eval_metric)
if args.with_tracking:
accelerator.log(
{
"accuracy": eval_metric["accuracy"],
"f1": eval_metric["f1"],
"train_loss": total_loss.item() / len(train_dataloader),
},
step=epoch,
)
if checkpointing_steps == "epoch":
output_dir = f"epoch_{epoch}"
if args.output_dir is not None:
output_dir = os.path.join(args.output_dir, output_dir)
accelerator.save_state(output_dir)
# New Code #
# Printing the GPU memory usage details such as allocated memory, peak memory, and total memory usage
accelerator.print("Memory before entering the eval : {}".format(b2mb(tracemalloc.begin)))
accelerator.print("Memory consumed at the end of the eval (end-begin): {}".format(tracemalloc.used))
accelerator.print("Peak Memory consumed during the eval (max-begin): {}".format(tracemalloc.peaked))
accelerator.print(
"Total Peak Memory consumed during the eval (max): {}".format(tracemalloc.peaked + b2mb(tracemalloc.begin))
)
# Logging the peak memory usage of the GPU to the tracker
if args.with_tracking:
accelerator.log(
{
"eval_total_peak_memory": tracemalloc.peaked + b2mb(tracemalloc.begin),
},
step=epoch,
)
if args.with_tracking:
accelerator.end_training()
def main():
parser = argparse.ArgumentParser(description="Simple example of training script.")
parser.add_argument(
"--mixed_precision",
type=str,
default="no",
choices=["no", "fp16", "bf16"],
help="Whether to use mixed precision. Choose"
"between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
"and an Nvidia Ampere GPU.",
)
parser.add_argument("--cpu", action="store_true", help="If passed, will train on the CPU.")
parser.add_argument(
"--checkpointing_steps",
type=str,
default=None,
help="Whether the various states should be saved at the end of every n steps, or 'epoch' for each epoch.",
)
parser.add_argument(
"--resume_from_checkpoint",
type=str,
default=None,
help="If the training should continue from a checkpoint folder.",
)
parser.add_argument(
"--with_tracking",
action="store_true",
help="Whether to load in all available experiment trackers from the environment and use them for logging.",
)
parser.add_argument(
"--output_dir",
type=str,
default=".",
help="Optional save directory where all checkpoint folders will be stored. Default is the current working directory.",
)
parser.add_argument(
"--logging_dir",
type=str,
default="logs",
help="Location on where to store experiment tracking logs`",
)
parser.add_argument(
"--model_name_or_path",
type=str,
help="Path to pretrained model or model identifier from huggingface.co/models.",
required=True,
)
args = parser.parse_args()
config = {"lr": 2e-5, "num_epochs": 3, "seed": 1, "batch_size": 16}
training_function(config, args)
if __name__ == "__main__":
main()
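# A minimal example invocation (the file and config names are assumptions; an FSDP-enabled
# accelerate config is expected, e.g. one created with `accelerate config`):
#
#   accelerate launch --config_file fsdp_config.yaml fsdp_with_peak_mem_tracking.py \
#       --model_name_or_path bert-base-cased --mixed_precision fp16 --checkpointing_steps epoch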


@@ -0,0 +1,215 @@
# coding=utf-8
# Copyright 2021 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import os
import torch
from torch.optim import AdamW
from torch.utils.data import DataLoader
import evaluate
from accelerate import Accelerator, DistributedType
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_linear_schedule_with_warmup, set_seed
########################################################################
# This is a fully working simple example to use Accelerate
# and perform gradient accumulation
#
# This example trains a Bert base model on GLUE MRPC
# in any of the following settings (with the same script):
# - single CPU or single GPU
# - multi GPUs (using PyTorch distributed mode)
# - (multi) TPUs
# - fp16 (mixed-precision) or fp32 (normal precision)
#
# To run it in each of these various modes, follow the instructions
# in the readme for examples:
# https://github.com/huggingface/accelerate/tree/main/examples
#
########################################################################
MAX_GPU_BATCH_SIZE = 16
EVAL_BATCH_SIZE = 32
def get_dataloaders(accelerator: Accelerator, batch_size: int = 16):
"""
Creates a set of `DataLoader`s for the `glue` dataset,
using "bert-base-cased" as the tokenizer.
Args:
accelerator (`Accelerator`):
An `Accelerator` object
batch_size (`int`, *optional*):
The batch size for the train and validation DataLoaders.
"""
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
datasets = load_dataset("glue", "mrpc")
def tokenize_function(examples):
# max_length=None => use the model max length (it's actually the default)
outputs = tokenizer(examples["sentence1"], examples["sentence2"], truncation=True, max_length=None)
return outputs
# Apply the method we just defined to all the examples in all the splits of the dataset
# starting with the main process first:
with accelerator.main_process_first():
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
remove_columns=["idx", "sentence1", "sentence2"],
)
# We also rename the 'label' column to 'labels' which is the expected name for labels by the models of the
# transformers library
tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
def collate_fn(examples):
# On TPU it's best to pad everything to the same length or training will be very slow.
if accelerator.distributed_type == DistributedType.TPU:
return tokenizer.pad(examples, padding="max_length", max_length=128, return_tensors="pt")
return tokenizer.pad(examples, padding="longest", return_tensors="pt")
# Instantiate dataloaders.
train_dataloader = DataLoader(
tokenized_datasets["train"], shuffle=True, collate_fn=collate_fn, batch_size=batch_size
)
eval_dataloader = DataLoader(
tokenized_datasets["validation"], shuffle=False, collate_fn=collate_fn, batch_size=EVAL_BATCH_SIZE
)
return train_dataloader, eval_dataloader
# For testing only
if os.environ.get("TESTING_MOCKED_DATALOADERS", None) == "1":
from accelerate.test_utils.training import mocked_dataloaders
get_dataloaders = mocked_dataloaders # noqa: F811
def training_function(config, args):
# For testing only
if os.environ.get("TESTING_MOCKED_DATALOADERS", None) == "1":
config["num_epochs"] = 2
# New Code #
gradient_accumulation_steps = int(args.gradient_accumulation_steps)
# Initialize accelerator
accelerator = Accelerator(
cpu=args.cpu, mixed_precision=args.mixed_precision, gradient_accumulation_steps=gradient_accumulation_steps
)
if accelerator.distributed_type == DistributedType.TPU and gradient_accumulation_steps > 1:
raise NotImplementedError(
"Gradient accumulation on TPUs is currently not supported. Pass `gradient_accumulation_steps=1`"
)
# Sample hyper-parameters for learning rate, batch size, seed and a few other HPs
lr = config["lr"]
num_epochs = int(config["num_epochs"])
seed = int(config["seed"])
batch_size = int(config["batch_size"])
metric = evaluate.load("glue", "mrpc")
set_seed(seed)
train_dataloader, eval_dataloader = get_dataloaders(accelerator, batch_size)
# Instantiate the model (we build the model here so that the seed also controls new weights initialization)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", return_dict=True)
# We could avoid this line since the accelerator is set with `device_placement=True` (default value).
# Note that if you are placing tensors on devices manually, this line absolutely needs to be before the optimizer
# creation otherwise training will not work on TPU (`accelerate` will kindly throw an error to make us aware of that).
model = model.to(accelerator.device)
# Instantiate optimizer
optimizer = AdamW(params=model.parameters(), lr=lr)
# Instantiate scheduler
lr_scheduler = get_linear_schedule_with_warmup(
optimizer=optimizer,
num_warmup_steps=100,
num_training_steps=(len(train_dataloader) * num_epochs),
)
# Prepare everything
# There is no specific order to remember, we just need to unpack the objects in the same order we gave them to the
# prepare method.
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
)
# Now we train the model
for epoch in range(num_epochs):
model.train()
for step, batch in enumerate(train_dataloader):
# We could avoid this line since we set the accelerator with `device_placement=True`.
batch.to(accelerator.device)
# New code #
# We use the new `accumulate` context manager to perform gradient accumulation
# We also do not currently support, or advise using, TPUs for this, as bugs were found on the XLA side when running our tests.
with accelerator.accumulate(model):
output = model(**batch)
loss = output.loss
accelerator.backward(loss)
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
model.eval()
for step, batch in enumerate(eval_dataloader):
# We could avoid this line since we set the accelerator with `device_placement=True`.
batch.to(accelerator.device)
with torch.no_grad():
outputs = model(**batch)
predictions = outputs.logits.argmax(dim=-1)
predictions, references = accelerator.gather_for_metrics((predictions, batch["labels"]))
metric.add_batch(
predictions=predictions,
references=references,
)
eval_metric = metric.compute()
# Use accelerator.print to print only on the main process.
accelerator.print(f"epoch {epoch}:", eval_metric)
def main():
parser = argparse.ArgumentParser(description="Simple example of training script.")
parser.add_argument(
"--mixed_precision",
type=str,
default="no",
choices=["no", "fp16", "bf16"],
help="Whether to use mixed precision. Choose"
"between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
"and an Nvidia Ampere GPU.",
)
# New Code #
parser.add_argument(
"--gradient_accumulation_steps",
type=int,
default=1,
help="The number of minibatches to be ran before gradients are accumulated.",
)
parser.add_argument("--cpu", action="store_true", help="If passed, will train on the CPU.")
args = parser.parse_args()
config = {"lr": 2e-5, "num_epochs": 3, "seed": 42, "batch_size": 16}
training_function(config, args)
if __name__ == "__main__":
main()
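# A minimal example invocation (the file name is an assumption; the flags are defined above):
#
#   accelerate launch gradient_accumulation.py --gradient_accumulation_steps 4 --mixed_precision fp16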


@@ -0,0 +1,220 @@
# Copyright 2022 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import os
import torch
from torch.optim import AdamW
from torch.utils.data import DataLoader
# New Code #
import evaluate
from accelerate import Accelerator, DistributedType
from accelerate.utils import find_executable_batch_size
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_linear_schedule_with_warmup, set_seed
########################################################################
# This is a fully working simple example to use Accelerate,
# specifically showcasing how to ensure out-of-memory errors never
# interrupt training, and builds off the `nlp_example.py` script.
#
# This example trains a Bert base model on GLUE MRPC
# in any of the following settings (with the same script):
# - single CPU or single GPU
# - multi GPUs (using PyTorch distributed mode)
# - (multi) TPUs
# - fp16 (mixed-precision) or fp32 (normal precision)
#
# New additions from the base script can be found quickly by
# looking for the # New Code # tags
#
# To run it in each of these various modes, follow the instructions
# in the readme for examples:
# https://github.com/huggingface/accelerate/tree/main/examples
#
########################################################################
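# As a brief sketch of the decorator used below: `find_executable_batch_size` calls the
# decorated function with `starting_batch_size` and, if an out-of-memory error is raised,
# retries with the batch size halved until the call succeeds:
#
#   @find_executable_batch_size(starting_batch_size=128)
#   def toy_loop(batch_size):
#       print(f"Attempting batch size {batch_size}")
#       ...  # training code that may raise an out-of-memory error
#
#   toy_loop()  # called with no arguments; the decorator injects `batch_size`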
MAX_GPU_BATCH_SIZE = 16
EVAL_BATCH_SIZE = 32
def get_dataloaders(accelerator: Accelerator, batch_size: int = 16):
"""
Creates a set of `DataLoader`s for the `glue` dataset,
using "bert-base-cased" as the tokenizer.
Args:
accelerator (`Accelerator`):
An `Accelerator` object
batch_size (`int`, *optional*):
The batch size for the train and validation DataLoaders.
"""
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
datasets = load_dataset("glue", "mrpc")
def tokenize_function(examples):
# max_length=None => use the model max length (it's actually the default)
outputs = tokenizer(examples["sentence1"], examples["sentence2"], truncation=True, max_length=None)
return outputs
# Apply the method we just defined to all the examples in all the splits of the dataset
# starting with the main process first:
with accelerator.main_process_first():
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
remove_columns=["idx", "sentence1", "sentence2"],
)
# We also rename the 'label' column to 'labels' which is the expected name for labels by the models of the
# transformers library
tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
def collate_fn(examples):
# On TPU it's best to pad everything to the same length or training will be very slow.
if accelerator.distributed_type == DistributedType.TPU:
return tokenizer.pad(examples, padding="max_length", max_length=128, return_tensors="pt")
return tokenizer.pad(examples, padding="longest", return_tensors="pt")
# Instantiate dataloaders.
train_dataloader = DataLoader(
tokenized_datasets["train"], shuffle=True, collate_fn=collate_fn, batch_size=batch_size
)
eval_dataloader = DataLoader(
tokenized_datasets["validation"], shuffle=False, collate_fn=collate_fn, batch_size=EVAL_BATCH_SIZE
)
return train_dataloader, eval_dataloader
# For testing only
if os.environ.get("TESTING_MOCKED_DATALOADERS", None) == "1":
from accelerate.test_utils.training import mocked_dataloaders
get_dataloaders = mocked_dataloaders # noqa: F811
def training_function(config, args):
# For testing only
if os.environ.get("TESTING_MOCKED_DATALOADERS", None) == "1":
config["num_epochs"] = 2
# Initialize accelerator
accelerator = Accelerator(cpu=args.cpu, mixed_precision=args.mixed_precision)
# Sample hyper-parameters for learning rate, batch size, seed and a few other HPs
lr = config["lr"]
num_epochs = int(config["num_epochs"])
seed = int(config["seed"])
batch_size = int(config["batch_size"])
metric = evaluate.load("glue", "mrpc")
# New Code #
# We can now define an inner training loop function. It should take a batch size as its only parameter,
# and build the dataloaders inside it.
# It also gets wrapped with our decorator.
@find_executable_batch_size(starting_batch_size=batch_size)
def inner_training_loop(batch_size):
# And now just move everything below under this function
# We need to bring in the Accelerator object from earlier
nonlocal accelerator
# And reset all of its attributes that could hold onto any memory:
accelerator.free_memory()
# Then we can declare the model, optimizer, and everything else:
set_seed(seed)
# Instantiate the model (we build the model here so that the seed also controls the initialization of new weights)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", return_dict=True)
# We could avoid this line since the accelerator is set with `device_placement=True` (default value).
# Note that if you are placing tensors on devices manually, this line absolutely needs to be before the optimizer
# creation otherwise training will not work on TPU (`accelerate` will kindly throw an error to make us aware of that).
model = model.to(accelerator.device)
# Instantiate optimizer
optimizer = AdamW(params=model.parameters(), lr=lr)
train_dataloader, eval_dataloader = get_dataloaders(accelerator, batch_size)
# Instantiate scheduler
lr_scheduler = get_linear_schedule_with_warmup(
optimizer=optimizer,
num_warmup_steps=100,
num_training_steps=(len(train_dataloader) * num_epochs),
)
# Prepare everything
# There is no specific order to remember, we just need to unpack the objects in the same order we gave them to the
# prepare method.
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
)
# Now we train the model
for epoch in range(num_epochs):
model.train()
for step, batch in enumerate(train_dataloader):
# We could avoid this line since we set the accelerator with `device_placement=True`.
batch.to(accelerator.device)
outputs = model(**batch)
loss = outputs.loss
accelerator.backward(loss)
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
model.eval()
for step, batch in enumerate(eval_dataloader):
# We could avoid this line since we set the accelerator with `device_placement=True`.
batch.to(accelerator.device)
with torch.no_grad():
outputs = model(**batch)
predictions = outputs.logits.argmax(dim=-1)
predictions, references = accelerator.gather_for_metrics((predictions, batch["labels"]))
metric.add_batch(
predictions=predictions,
references=references,
)
eval_metric = metric.compute()
# Use accelerator.print to print only on the main process.
accelerator.print(f"epoch {epoch}:", eval_metric)
# New Code #
# And call it at the end with no arguments
# Note: You could also refactor this outside of your training loop function
inner_training_loop()
def main():
parser = argparse.ArgumentParser(description="Simple example of training script.")
parser.add_argument(
"--mixed_precision",
type=str,
default="no",
choices=["no", "fp16", "bf16"],
help="Whether to use mixed precision. Choose"
"between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
"and an Nvidia Ampere GPU.",
)
parser.add_argument("--cpu", action="store_true", help="If passed, will train on the CPU.")
args = parser.parse_args()
config = {"lr": 2e-5, "num_epochs": 3, "seed": 42, "batch_size": 16}
training_function(config, args)
if __name__ == "__main__":
main()
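
The piece worth calling out above is the `find_executable_batch_size` decorator: on an out-of-memory style failure it halves the batch size and re-runs the wrapped function until it fits. A minimal sketch of that behavior in isolation, assuming a made-up `run_training` function and a fake memory limit:

from accelerate.utils import find_executable_batch_size

# `run_training` and the fake memory limit below are invented purely for illustration.
@find_executable_batch_size(starting_batch_size=128)
def run_training(batch_size):
    # Pretend anything above 16 samples per batch does not fit in memory.
    if batch_size > 16:
        raise RuntimeError("CUDA out of memory.")
    print(f"Training succeeded with batch_size={batch_size}")

# Each simulated OOM halves the batch size: 128 -> 64 -> 32 -> 16.
run_training()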


@ -0,0 +1,225 @@
# coding=utf-8
# Copyright 2022 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import os
import torch
from torch.optim import AdamW
from torch.utils.data import DataLoader
import evaluate
from accelerate import Accelerator, DistributedType
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_linear_schedule_with_warmup, set_seed
########################################################################
# This is a fully working simple example to use Accelerate,
# specifically showcasing how to properly calculate the metrics on the
# validation dataset when in a distributed system, and builds off the
# `nlp_example.py` script.
#
# This example trains a Bert base model on GLUE MRPC
# in any of the following settings (with the same script):
# - single CPU or single GPU
# - multi GPUs (using PyTorch distributed mode)
# - (multi) TPUs
# - fp16 (mixed-precision) or fp32 (normal precision)
#
# To help focus on the differences in the code, building `DataLoaders`
# was refactored into its own function.
# New additions from the base script can be found quickly by
# looking for the # New Code # tags
#
# To run it in each of these various modes, follow the instructions
# in the readme for examples:
# https://github.com/huggingface/accelerate/tree/main/examples
#
########################################################################
MAX_GPU_BATCH_SIZE = 16
EVAL_BATCH_SIZE = 32
def get_dataloaders(accelerator: Accelerator, batch_size: int = 16):
"""
Creates a set of `DataLoader`s for the `glue` dataset,
using "bert-base-cased" as the tokenizer.
Args:
accelerator (`Accelerator`):
An `Accelerator` object
batch_size (`int`, *optional*):
The batch size for the train and validation DataLoaders.
"""
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
datasets = load_dataset("glue", "mrpc")
def tokenize_function(examples):
# max_length=None => use the model max length (it's actually the default)
outputs = tokenizer(examples["sentence1"], examples["sentence2"], truncation=True, max_length=None)
return outputs
# Apply the method we just defined to all the examples in all the splits of the dataset
# starting with the main process first:
with accelerator.main_process_first():
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
remove_columns=["idx", "sentence1", "sentence2"],
)
# We also rename the 'label' column to 'labels' which is the expected name for labels by the models of the
# transformers library
tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
def collate_fn(examples):
# On TPU it's best to pad everything to the same length or training will be very slow.
if accelerator.distributed_type == DistributedType.TPU:
return tokenizer.pad(examples, padding="max_length", max_length=128, return_tensors="pt")
return tokenizer.pad(examples, padding="longest", return_tensors="pt")
# Instantiate dataloaders.
train_dataloader = DataLoader(
tokenized_datasets["train"], shuffle=True, collate_fn=collate_fn, batch_size=batch_size
)
eval_dataloader = DataLoader(
tokenized_datasets["validation"], shuffle=False, collate_fn=collate_fn, batch_size=EVAL_BATCH_SIZE
)
return train_dataloader, eval_dataloader
# For testing only
if os.environ.get("TESTING_MOCKED_DATALOADERS", None) == "1":
from accelerate.test_utils.training import mocked_dataloaders
get_dataloaders = mocked_dataloaders # noqa: F811
def training_function(config, args):
# For testing only
if os.environ.get("TESTING_MOCKED_DATALOADERS", None) == "1":
config["num_epochs"] = 2
# Initialize accelerator
accelerator = Accelerator(cpu=args.cpu, mixed_precision=args.mixed_precision)
# Sample hyper-parameters for learning rate, batch size, seed and a few other HPs
lr = config["lr"]
num_epochs = int(config["num_epochs"])
seed = int(config["seed"])
batch_size = int(config["batch_size"])
metric = evaluate.load("glue", "mrpc")
# If the batch size is too big we use gradient accumulation
gradient_accumulation_steps = 1
if batch_size > MAX_GPU_BATCH_SIZE and accelerator.distributed_type != DistributedType.TPU:
gradient_accumulation_steps = batch_size // MAX_GPU_BATCH_SIZE
batch_size = MAX_GPU_BATCH_SIZE
set_seed(seed)
train_dataloader, eval_dataloader = get_dataloaders(accelerator, batch_size)
# Instantiate the model (we build the model here so that the seed also controls the initialization of new weights)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", return_dict=True)
# We could avoid this line since the accelerator is set with `device_placement=True` (default value).
# Note that if you are placing tensors on devices manually, this line absolutely needs to be before the optimizer
# creation otherwise training will not work on TPU (`accelerate` will kindly throw an error to make us aware of that).
model = model.to(accelerator.device)
# Instantiate optimizer
optimizer = AdamW(params=model.parameters(), lr=lr)
# Instantiate scheduler
lr_scheduler = get_linear_schedule_with_warmup(
optimizer=optimizer,
num_warmup_steps=100,
num_training_steps=(len(train_dataloader) * num_epochs) // gradient_accumulation_steps,
)
# Prepare everything
# There is no specific order to remember, we just need to unpack the objects in the same order we gave them to the
# prepare method.
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
)
# Now we train the model
for epoch in range(num_epochs):
model.train()
for step, batch in enumerate(train_dataloader):
# We could avoid this line since we set the accelerator with `device_placement=True`.
batch.to(accelerator.device)
outputs = model(**batch)
loss = outputs.loss
loss = loss / gradient_accumulation_steps
accelerator.backward(loss)
if step % gradient_accumulation_steps == 0:
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
model.eval()
samples_seen = 0
for step, batch in enumerate(eval_dataloader):
# We could avoid this line since we set the accelerator with `device_placement=True`.
batch.to(accelerator.device)
with torch.no_grad():
outputs = model(**batch)
predictions = outputs.logits.argmax(dim=-1)
predictions, references = accelerator.gather((predictions, batch["labels"]))
# New Code #
# First we check if it's a distributed system
if accelerator.use_distributed:
# Then see if we're on the last batch of our eval dataloader
if step == len(eval_dataloader) - 1:
# Last batch needs to be truncated on distributed systems as it contains additional samples
predictions = predictions[: len(eval_dataloader.dataset) - samples_seen]
references = references[: len(eval_dataloader.dataset) - samples_seen]
else:
# Otherwise we add the number of samples seen
samples_seen += references.shape[0]
# All of this can be avoided if you use `Accelerator.gather_for_metrics` instead of `Accelerator.gather`:
# accelerator.gather_for_metrics((predictions, batch["labels"]))
metric.add_batch(
predictions=predictions,
references=references,
)
eval_metric = metric.compute()
# Use accelerator.print to print only on the main process.
accelerator.print(f"epoch {epoch}:", eval_metric)
def main():
parser = argparse.ArgumentParser(description="Simple example of training script.")
parser.add_argument(
"--mixed_precision",
type=str,
default="no",
choices=["no", "fp16", "bf16"],
help="Whether to use mixed precision. Choose"
"between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
"and an Nvidia Ampere GPU.",
)
parser.add_argument("--cpu", action="store_true", help="If passed, will train on the CPU.")
args = parser.parse_args()
config = {"lr": 2e-5, "num_epochs": 3, "seed": 42, "batch_size": 16}
training_function(config, args)
if __name__ == "__main__":
main()
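
The manual truncation in the eval loop above exists because distributed samplers pad the dataset so that every process sees the same number of batches; without dropping the padded samples on the last batch, the metric is computed over duplicates. A self-contained sketch of that bookkeeping with invented numbers (this is exactly what `Accelerator.gather_for_metrics` handles for you):

# Invented numbers, pure Python: 10 real samples padded to 12 so that 4 "processes"
# each see 3 samples; the last gathered batch must be truncated back to 10.
dataset_size = 10
padded_predictions = list(range(12))  # gathered predictions, including 2 duplicated samples
samples_seen = 0
kept = []
for start in range(0, 12, 4):  # pretend each step gathers one batch of 4 (one per process)
    batch = padded_predictions[start:start + 4]
    if start + 4 >= 12:  # last batch: drop the padding duplicates
        batch = batch[: dataset_size - samples_seen]
    else:
        samples_seen += len(batch)
    kept.extend(batch)
assert len(kept) == dataset_size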


@ -0,0 +1,263 @@
# coding=utf-8
# Copyright 2021 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import os
import torch
from torch.optim import AdamW
from torch.utils.data import DataLoader
import evaluate
from accelerate import Accelerator, DistributedType
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_linear_schedule_with_warmup, set_seed
########################################################################
# This is a fully working simple example to use Accelerate,
# specifically showcasing the experiment tracking capability,
# and builds off the `nlp_example.py` script.
#
# This example trains a Bert base model on GLUE MRPC
# in any of the following settings (with the same script):
# - single CPU or single GPU
# - multi GPUs (using PyTorch distributed mode)
# - (multi) TPUs
# - fp16 (mixed-precision) or fp32 (normal precision)
#
# To help focus on the differences in the code, building `DataLoaders`
# was refactored into its own function.
# New additions from the base script can be found quickly by
# looking for the # New Code # tags
#
# To run it in each of these various modes, follow the instructions
# in the readme for examples:
# https://github.com/huggingface/accelerate/tree/main/examples
#
########################################################################
MAX_GPU_BATCH_SIZE = 16
EVAL_BATCH_SIZE = 32
def get_dataloaders(accelerator: Accelerator, batch_size: int = 16):
"""
Creates a set of `DataLoader`s for the `glue` dataset,
using "bert-base-cased" as the tokenizer.
Args:
accelerator (`Accelerator`):
An `Accelerator` object
batch_size (`int`, *optional*):
The batch size for the train and validation DataLoaders.
"""
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
datasets = load_dataset("glue", "mrpc")
def tokenize_function(examples):
# max_length=None => use the model max length (it's actually the default)
outputs = tokenizer(examples["sentence1"], examples["sentence2"], truncation=True, max_length=None)
return outputs
# Apply the method we just defined to all the examples in all the splits of the dataset
# starting with the main process first:
with accelerator.main_process_first():
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
remove_columns=["idx", "sentence1", "sentence2"],
)
# We also rename the 'label' column to 'labels' which is the expected name for labels by the models of the
# transformers library
tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
def collate_fn(examples):
# On TPU it's best to pad everything to the same length or training will be very slow.
if accelerator.distributed_type == DistributedType.TPU:
return tokenizer.pad(examples, padding="max_length", max_length=128, return_tensors="pt")
return tokenizer.pad(examples, padding="longest", return_tensors="pt")
# Instantiate dataloaders.
train_dataloader = DataLoader(
tokenized_datasets["train"], shuffle=True, collate_fn=collate_fn, batch_size=batch_size
)
eval_dataloader = DataLoader(
tokenized_datasets["validation"], shuffle=False, collate_fn=collate_fn, batch_size=EVAL_BATCH_SIZE
)
return train_dataloader, eval_dataloader
# For testing only
if os.environ.get("TESTING_MOCKED_DATALOADERS", None) == "1":
from accelerate.test_utils.training import mocked_dataloaders
get_dataloaders = mocked_dataloaders # noqa: F811
def training_function(config, args):
# For testing only
if os.environ.get("TESTING_MOCKED_DATALOADERS", None) == "1":
config["num_epochs"] = 2
# Initialize Accelerator
# New Code #
# We pass in "all" to `log_with` to grab all available trackers in the environment
# Note: If using a custom `Tracker` class, it should be passed in here, such as:
# >>> log_with = ["all", MyCustomTrackerClassInstance()]
if args.with_tracking:
accelerator = Accelerator(
cpu=args.cpu, mixed_precision=args.mixed_precision, log_with="all", logging_dir=args.logging_dir
)
else:
accelerator = Accelerator(cpu=args.cpu, mixed_precision=args.mixed_precision)
# Sample hyper-parameters for learning rate, batch size, seed and a few other HPs
lr = config["lr"]
num_epochs = int(config["num_epochs"])
seed = int(config["seed"])
batch_size = int(config["batch_size"])
set_seed(seed)
train_dataloader, eval_dataloader = get_dataloaders(accelerator, batch_size)
metric = evaluate.load("glue", "mrpc")
# If the batch size is too big we use gradient accumulation
gradient_accumulation_steps = 1
if batch_size > MAX_GPU_BATCH_SIZE and accelerator.distributed_type != DistributedType.TPU:
gradient_accumulation_steps = batch_size // MAX_GPU_BATCH_SIZE
batch_size = MAX_GPU_BATCH_SIZE
# Instantiate the model (we build the model here so that the seed also controls the initialization of new weights)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", return_dict=True)
# We could avoid this line since the accelerator is set with `device_placement=True` (default value).
# Note that if you are placing tensors on devices manually, this line absolutely needs to be before the optimizer
# creation otherwise training will not work on TPU (`accelerate` will kindly throw an error to make us aware of that).
model = model.to(accelerator.device)
# Instantiate optimizer
optimizer = AdamW(params=model.parameters(), lr=lr)
# Instantiate scheduler
lr_scheduler = get_linear_schedule_with_warmup(
optimizer=optimizer,
num_warmup_steps=100,
num_training_steps=(len(train_dataloader) * num_epochs) // gradient_accumulation_steps,
)
# Prepare everything
# There is no specific order to remember, we just need to unpack the objects in the same order we gave them to the
# prepare method.
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
)
# New Code #
# We need to initialize the trackers we use. Overall configurations can also be stored
if args.with_tracking:
run = os.path.split(__file__)[-1].split(".")[0]
accelerator.init_trackers(run, config)
# Now we train the model
for epoch in range(num_epochs):
model.train()
# New Code #
# For our tracking example, we will log the total loss of each epoch
if args.with_tracking:
total_loss = 0
for step, batch in enumerate(train_dataloader):
# We could avoid this line since we set the accelerator with `device_placement=True`.
batch.to(accelerator.device)
outputs = model(**batch)
loss = outputs.loss
# New Code #
if args.with_tracking:
total_loss += loss.detach().float()
loss = loss / gradient_accumulation_steps
accelerator.backward(loss)
if step % gradient_accumulation_steps == 0:
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
model.eval()
for step, batch in enumerate(eval_dataloader):
# We could avoid this line since we set the accelerator with `device_placement=True` (the default).
batch.to(accelerator.device)
with torch.no_grad():
outputs = model(**batch)
predictions = outputs.logits.argmax(dim=-1)
predictions, references = accelerator.gather_for_metrics((predictions, batch["labels"]))
metric.add_batch(
predictions=predictions,
references=references,
)
eval_metric = metric.compute()
# Use accelerator.print to print only on the main process.
accelerator.print(f"epoch {epoch}:", eval_metric)
# New Code #
# To actually log, we call `Accelerator.log`
# The values passed can be of `str`, `int`, `float` or `dict` of `str` to `float`/`int`
if args.with_tracking:
accelerator.log(
{
"accuracy": eval_metric["accuracy"],
"f1": eval_metric["f1"],
"train_loss": total_loss.item() / len(train_dataloader),
"epoch": epoch,
},
step=epoch,
)
# New Code #
# When a run is finished, you should call `accelerator.end_training()`
# to close all of the open trackers
if args.with_tracking:
accelerator.end_training()
def main():
parser = argparse.ArgumentParser(description="Simple example of training script.")
parser.add_argument(
"--mixed_precision",
type=str,
default="no",
choices=["no", "fp16", "bf16"],
help="Whether to use mixed precision. Choose"
"between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
"and an Nvidia Ampere GPU.",
)
parser.add_argument("--cpu", action="store_true", help="If passed, will train on the CPU.")
parser.add_argument(
"--with_tracking",
action="store_true",
help="Whether to load in all available experiment trackers from the environment and use them for logging.",
)
parser.add_argument(
"--logging_dir",
type=str,
default="logs",
help="Location on where to store experiment tracking logs`",
)
args = parser.parse_args()
config = {"lr": 2e-5, "num_epochs": 3, "seed": 42, "batch_size": 16}
training_function(config, args)
if __name__ == "__main__":
main()
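
Stripped of the training loop, the tracking lifecycle in the script above boils down to four calls. A condensed sketch (the run name and logged values are placeholders):

from accelerate import Accelerator

# Placeholder run name and values; the calls mirror the ones used in the script above.
accelerator = Accelerator(log_with="all")  # pick up every tracker available in the environment
accelerator.init_trackers("my_experiment", config={"lr": 2e-5, "num_epochs": 3})
for epoch in range(3):
    accelerator.log({"train_loss": 0.1 * (3 - epoch), "epoch": epoch}, step=epoch)
accelerator.end_training()  # close all open trackers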


@ -0,0 +1,315 @@
# coding=utf-8
# Copyright 2021 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import os
import re
import numpy as np
import torch
from torch.optim.lr_scheduler import OneCycleLR
from torch.utils.data import DataLoader, Dataset
import PIL
from accelerate import Accelerator
from timm import create_model
from torchvision.transforms import Compose, RandomResizedCrop, Resize, ToTensor
########################################################################
# This is a fully working simple example to use Accelerate
#
# This example trains a ResNet50 on the Oxford-IIT Pet Dataset
# in any of the following settings (with the same script):
# - single CPU or single GPU
# - multi GPUs (using PyTorch distributed mode)
# - (multi) TPUs
# - fp16 (mixed-precision) or fp32 (normal precision)
#
# To run it in each of these various modes, follow the instructions
# in the readme for examples:
# https://github.com/huggingface/accelerate/tree/main/examples
#
########################################################################
# Function to get the label from the filename
def extract_label(fname):
stem = fname.split(os.path.sep)[-1]
return re.search(r"^(.*)_\d+\.jpg$", stem).groups()[0]
class PetsDataset(Dataset):
def __init__(self, file_names, image_transform=None, label_to_id=None):
self.file_names = file_names
self.image_transform = image_transform
self.label_to_id = label_to_id
def __len__(self):
return len(self.file_names)
def __getitem__(self, idx):
fname = self.file_names[idx]
raw_image = PIL.Image.open(fname)
image = raw_image.convert("RGB")
if self.image_transform is not None:
image = self.image_transform(image)
label = extract_label(fname)
if self.label_to_id is not None:
label = self.label_to_id[label]
return {"image": image, "label": label}
def training_function(config, args):
# Initialize accelerator
if args.with_tracking:
accelerator = Accelerator(
cpu=args.cpu, mixed_precision=args.mixed_precision, log_with="all", logging_dir=args.logging_dir
)
else:
accelerator = Accelerator(cpu=args.cpu, mixed_precision=args.mixed_precision)
# Sample hyper-parameters for learning rate, batch size, seed and a few other HPs
lr = config["lr"]
num_epochs = int(config["num_epochs"])
seed = int(config["seed"])
batch_size = int(config["batch_size"])
image_size = config["image_size"]
if not isinstance(image_size, (list, tuple)):
image_size = (image_size, image_size)
# Parse out whether we are saving every epoch or after a certain number of batches
if hasattr(args.checkpointing_steps, "isdigit"):
if args.checkpointing_steps == "epoch":
checkpointing_steps = args.checkpointing_steps
elif args.checkpointing_steps.isdigit():
checkpointing_steps = int(args.checkpointing_steps)
else:
raise ValueError(
f"Argument `checkpointing_steps` must be either a number or `epoch`. `{args.checkpointing_steps}` passed."
)
else:
checkpointing_steps = None
# We need to initialize the trackers we use, and also store our configuration
if args.with_tracking:
run = os.path.split(__file__)[-1].split(".")[0]
accelerator.init_trackers(run, config)
# Grab all the image filenames
file_names = [os.path.join(args.data_dir, fname) for fname in os.listdir(args.data_dir) if fname.endswith(".jpg")]
# Build the label correspondences
all_labels = [extract_label(fname) for fname in file_names]
id_to_label = list(set(all_labels))
id_to_label.sort()
label_to_id = {lbl: i for i, lbl in enumerate(id_to_label)}
# Set the seed before splitting the data.
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
# Split our filenames between train and validation
random_perm = np.random.permutation(len(file_names))
cut = int(0.8 * len(file_names))
train_split = random_perm[:cut]
eval_split = random_perm[cut:]
# For training we use a simple RandomResizedCrop
train_tfm = Compose([RandomResizedCrop(image_size, scale=(0.5, 1.0)), ToTensor()])
train_dataset = PetsDataset(
[file_names[i] for i in train_split], image_transform=train_tfm, label_to_id=label_to_id
)
# For evaluation, we use a deterministic Resize
eval_tfm = Compose([Resize(image_size), ToTensor()])
eval_dataset = PetsDataset([file_names[i] for i in eval_split], image_transform=eval_tfm, label_to_id=label_to_id)
# Instantiate dataloaders.
train_dataloader = DataLoader(train_dataset, shuffle=True, batch_size=batch_size, num_workers=4)
eval_dataloader = DataLoader(eval_dataset, shuffle=False, batch_size=batch_size, num_workers=4)
# Instantiate the model (we build the model here so that the seed also controls the initialization of new weights)
model = create_model("resnet50d", pretrained=True, num_classes=len(label_to_id))
# We could avoid this line since the accelerator is set with `device_placement=True` (default value).
# Note that if you are placing tensors on devices manually, this line absolutely needs to be before the optimizer
# creation otherwise training will not work on TPU (`accelerate` will kindly throw an error to make us aware of that).
model = model.to(accelerator.device)
# Freezing the base model
for param in model.parameters():
param.requires_grad = False
for param in model.get_classifier().parameters():
param.requires_grad = True
# We normalize the batches of images to be a bit faster.
mean = torch.tensor(model.default_cfg["mean"])[None, :, None, None].to(accelerator.device)
std = torch.tensor(model.default_cfg["std"])[None, :, None, None].to(accelerator.device)
# Instantiate optimizer
optimizer = torch.optim.Adam(params=model.parameters(), lr=lr / 25)
# Instantiate learning rate scheduler
lr_scheduler = OneCycleLR(optimizer=optimizer, max_lr=lr, epochs=num_epochs, steps_per_epoch=len(train_dataloader))
# Prepare everything
# There is no specific order to remember, we just need to unpack the objects in the same order we gave them to the
# prepare method.
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
)
# We need to keep track of how many total steps we have iterated over
overall_step = 0
# We also need to keep track of the starting epoch so files are named properly
starting_epoch = 0
# Potentially load in the weights and states from a previous save
if args.resume_from_checkpoint:
if args.resume_from_checkpoint is not None or args.resume_from_checkpoint != "":
accelerator.print(f"Resumed from checkpoint: {args.resume_from_checkpoint}")
accelerator.load_state(args.resume_from_checkpoint)
path = os.path.basename(args.resume_from_checkpoint)
else:
# Get the most recent checkpoint
dirs = [f.name for f in os.scandir(os.getcwd()) if f.is_dir()]
dirs.sort(key=os.path.getctime)
path = dirs[-1] # Sorts folders by date modified, most recent checkpoint is the last
# Extract `epoch_{i}` or `step_{i}`
training_difference = os.path.splitext(path)[0]
if "epoch" in training_difference:
starting_epoch = int(training_difference.replace("epoch_", "")) + 1
resume_step = None
else:
resume_step = int(training_difference.replace("step_", ""))
starting_epoch = resume_step // len(train_dataloader)
resume_step -= starting_epoch * len(train_dataloader)
# Now we train the model
for epoch in range(starting_epoch, num_epochs):
model.train()
if args.with_tracking:
total_loss = 0
for step, batch in enumerate(train_dataloader):
# We need to skip steps until we reach the resumed step
if args.resume_from_checkpoint and epoch == starting_epoch:
if resume_step is not None and step < resume_step:
overall_step += 1
continue
# We could avoid this line since we set the accelerator with `device_placement=True`.
batch = {k: v.to(accelerator.device) for k, v in batch.items()}
inputs = (batch["image"] - mean) / std
outputs = model(inputs)
loss = torch.nn.functional.cross_entropy(outputs, batch["label"])
# We keep track of the loss at each epoch
if args.with_tracking:
total_loss += loss.detach().float()
accelerator.backward(loss)
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
overall_step += 1
if isinstance(checkpointing_steps, int):
output_dir = f"step_{overall_step}"
if overall_step % checkpointing_steps == 0:
if args.output_dir is not None:
output_dir = os.path.join(args.output_dir, output_dir)
accelerator.save_state(output_dir)
model.eval()
accurate = 0
for step, batch in enumerate(eval_dataloader):
# We could avoid this line since we set the accelerator with `device_placement=True`.
batch = {k: v.to(accelerator.device) for k, v in batch.items()}
inputs = (batch["image"] - mean) / std
with torch.no_grad():
outputs = model(inputs)
predictions = outputs.argmax(dim=-1)
predictions, references = accelerator.gather_for_metrics((predictions, batch["label"]))
accurate_preds = predictions == references
accurate += accurate_preds.long().sum()
eval_metric = accurate.item() / accelerator.gradient_state.samples_seen
# Use accelerator.print to print only on the main process.
accelerator.print(f"epoch {epoch}: {100 * eval_metric:.2f}")
if args.with_tracking:
accelerator.log(
{
"accuracy": 100 * eval_metric,
"train_loss": total_loss.item() / len(train_dataloader),
"epoch": epoch,
},
step=overall_step,
)
if checkpointing_steps == "epoch":
output_dir = f"epoch_{epoch}"
if args.output_dir is not None:
output_dir = os.path.join(args.output_dir, output_dir)
accelerator.save_state(output_dir)
if args.with_tracking:
accelerator.end_training()
def main():
parser = argparse.ArgumentParser(description="Simple example of training script.")
parser.add_argument("--data_dir", required=True, help="The data folder on disk.")
parser.add_argument("--fp16", action="store_true", help="If passed, will use FP16 training.")
parser.add_argument(
"--mixed_precision",
type=str,
default="no",
choices=["no", "fp16", "bf16"],
help="Whether to use mixed precision. Choose"
"between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
"and an Nvidia Ampere GPU.",
)
parser.add_argument("--cpu", action="store_true", help="If passed, will train on the CPU.")
parser.add_argument(
"--checkpointing_steps",
type=str,
default=None,
help="Whether the various states should be saved at the end of every n steps, or 'epoch' for each epoch.",
)
parser.add_argument(
"--output_dir",
type=str,
default=".",
help="Optional save directory where all checkpoint folders will be stored. Default is the current working directory.",
)
parser.add_argument(
"--resume_from_checkpoint",
type=str,
default=None,
help="If the training should continue from a checkpoint folder.",
)
parser.add_argument(
"--with_tracking",
action="store_true",
help="Whether to load in all available experiment trackers from the environment and use them for logging.",
)
parser.add_argument(
"--logging_dir",
type=str,
default="logs",
help="Location on where to store experiment tracking logs`",
)
args = parser.parse_args()
config = {"lr": 3e-2, "num_epochs": 3, "seed": 42, "batch_size": 64, "image_size": 224}
training_function(config, args)
if __name__ == "__main__":
main()
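
When resuming, the script above recovers where it left off purely from the checkpoint folder name (`epoch_{i}` or `step_{i}`). A small sketch of that parsing with hypothetical names:

import os

# Mirrors the name-based resume logic above; the folder names are hypothetical examples.
def parse_checkpoint_name(path, steps_per_epoch):
    name = os.path.splitext(os.path.basename(path))[0]
    if "epoch" in name:
        return int(name.replace("epoch_", "")) + 1, None  # (starting_epoch, resume_step)
    resume_step = int(name.replace("step_", ""))
    starting_epoch = resume_step // steps_per_epoch
    return starting_epoch, resume_step - starting_epoch * steps_per_epoch

print(parse_checkpoint_name("step_350", steps_per_epoch=100))  # (3, 50)
print(parse_checkpoint_name("epoch_2", steps_per_epoch=100))   # (3, None)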


@ -0,0 +1,296 @@
# coding=utf-8
# Copyright 2021 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import os
import torch
from torch.optim import AdamW
from torch.utils.data import DataLoader
import evaluate
from accelerate import Accelerator, DistributedType
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_linear_schedule_with_warmup, set_seed
########################################################################
# This is a fully working simple example to use Accelerate
#
# This example trains a Bert base model on GLUE MRPC
# in any of the following settings (with the same script):
# - single CPU or single GPU
# - multi GPUs (using PyTorch distributed mode)
# - (multi) TPUs
# - fp16 (mixed-precision) or fp32 (normal precision)
#
# This example also demonstrates the checkpointing and sharding capabilities
#
# To run it in each of these various modes, follow the instructions
# in the readme for examples:
# https://github.com/huggingface/accelerate/tree/main/examples
#
########################################################################
MAX_GPU_BATCH_SIZE = 16
EVAL_BATCH_SIZE = 32
def training_function(config, args):
# Initialize accelerator
if args.with_tracking:
accelerator = Accelerator(
cpu=args.cpu, mixed_precision=args.mixed_precision, log_with="all", logging_dir=args.logging_dir
)
else:
accelerator = Accelerator(cpu=args.cpu, mixed_precision=args.mixed_precision)
if hasattr(args.checkpointing_steps, "isdigit"):
if args.checkpointing_steps == "epoch":
checkpointing_steps = args.checkpointing_steps
elif args.checkpointing_steps.isdigit():
checkpointing_steps = int(args.checkpointing_steps)
else:
raise ValueError(
f"Argument `checkpointing_steps` must be either a number or `epoch`. `{args.checkpointing_steps}` passed."
)
else:
checkpointing_steps = None
# Sample hyper-parameters for learning rate, batch size, seed and a few other HPs
lr = config["lr"]
num_epochs = int(config["num_epochs"])
seed = int(config["seed"])
batch_size = int(config["batch_size"])
# We need to initialize the trackers we use, and also store our configuration
if args.with_tracking:
run = os.path.split(__file__)[-1].split(".")[0]
accelerator.init_trackers(run, config)
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
datasets = load_dataset("glue", "mrpc")
metric = evaluate.load("glue", "mrpc")
def tokenize_function(examples):
# max_length=None => use the model max length (it's actually the default)
outputs = tokenizer(examples["sentence1"], examples["sentence2"], truncation=True, max_length=None)
return outputs
# Apply the method we just defined to all the examples in all the splits of the dataset
# starting with the main process first:
with accelerator.main_process_first():
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
remove_columns=["idx", "sentence1", "sentence2"],
)
# We also rename the 'label' column to 'labels' which is the expected name for labels by the models of the
# transformers library
tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
# If the batch size is too big we use gradient accumulation
gradient_accumulation_steps = 1
if batch_size > MAX_GPU_BATCH_SIZE and accelerator.distributed_type != DistributedType.TPU:
gradient_accumulation_steps = batch_size // MAX_GPU_BATCH_SIZE
batch_size = MAX_GPU_BATCH_SIZE
def collate_fn(examples):
# On TPU it's best to pad everything to the same length or training will be very slow.
if accelerator.distributed_type == DistributedType.TPU:
return tokenizer.pad(examples, padding="max_length", max_length=128, return_tensors="pt")
return tokenizer.pad(examples, padding="longest", return_tensors="pt")
# Instantiate dataloaders.
train_dataloader = DataLoader(
tokenized_datasets["train"], shuffle=True, collate_fn=collate_fn, batch_size=batch_size
)
eval_dataloader = DataLoader(
tokenized_datasets["validation"], shuffle=False, collate_fn=collate_fn, batch_size=EVAL_BATCH_SIZE
)
set_seed(seed)
# Instantiate the model (we build the model here so that the seed also controls the initialization of new weights)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", return_dict=True)
# We could avoid this line since the accelerator is set with `device_placement=True` (default value).
# Note that if you are placing tensors on devices manually, this line absolutely needs to be before the optimizer
# creation otherwise training will not work on TPU (`accelerate` will kindly throw an error to make us aware of that).
model = model.to(accelerator.device)
# Instantiate optimizer
optimizer = AdamW(params=model.parameters(), lr=lr)
# Instantiate scheduler
lr_scheduler = get_linear_schedule_with_warmup(
optimizer=optimizer,
num_warmup_steps=100,
num_training_steps=(len(train_dataloader) * num_epochs) // gradient_accumulation_steps,
)
# Prepare everything
# There is no specific order to remember, we just need to unpack the objects in the same order we gave them to the
# prepare method.
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
)
# We need to keep track of how many total steps we have iterated over
overall_step = 0
# We also need to keep track of the starting epoch so files are named properly
starting_epoch = 0
# Potentially load in the weights and states from a previous save
if args.resume_from_checkpoint:
if args.resume_from_checkpoint is not None or args.resume_from_checkpoint != "":
accelerator.print(f"Resumed from checkpoint: {args.resume_from_checkpoint}")
accelerator.load_state(args.resume_from_checkpoint)
path = os.path.basename(args.resume_from_checkpoint)
else:
# Get the most recent checkpoint
dirs = [f.name for f in os.scandir(os.getcwd()) if f.is_dir()]
dirs.sort(key=os.path.getctime)
path = dirs[-1] # Sorts folders by date modified, most recent checkpoint is the last
# Extract `epoch_{i}` or `step_{i}`
training_difference = os.path.splitext(path)[0]
if "epoch" in training_difference:
starting_epoch = int(training_difference.replace("epoch_", "")) + 1
resume_step = None
else:
resume_step = int(training_difference.replace("step_", ""))
starting_epoch = resume_step // len(train_dataloader)
resume_step -= starting_epoch * len(train_dataloader)
# Now we train the model
for epoch in range(starting_epoch, num_epochs):
model.train()
if args.with_tracking:
total_loss = 0
for step, batch in enumerate(train_dataloader):
# We need to skip steps until we reach the resumed step
if args.resume_from_checkpoint and epoch == starting_epoch:
if resume_step is not None and step < resume_step:
overall_step += 1
continue
# We could avoid this line since we set the accelerator with `device_placement=True`.
batch.to(accelerator.device)
outputs = model(**batch)
loss = outputs.loss
loss = loss / gradient_accumulation_steps
# We keep track of the loss at each epoch
if args.with_tracking:
total_loss += loss.detach().float()
accelerator.backward(loss)
if step % gradient_accumulation_steps == 0:
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
overall_step += 1
if isinstance(checkpointing_steps, int):
output_dir = f"step_{overall_step}"
if overall_step % checkpointing_steps == 0:
if args.output_dir is not None:
output_dir = os.path.join(args.output_dir, output_dir)
accelerator.save_state(output_dir)
model.eval()
for step, batch in enumerate(eval_dataloader):
# We could avoid this line since we set the accelerator with `device_placement=True`.
batch.to(accelerator.device)
with torch.no_grad():
outputs = model(**batch)
predictions = outputs.logits.argmax(dim=-1)
predictions, references = accelerator.gather_for_metrics((predictions, batch["labels"]))
metric.add_batch(
predictions=predictions,
references=references,
)
eval_metric = metric.compute()
# Use accelerator.print to print only on the main process.
accelerator.print(f"epoch {epoch}:", eval_metric)
if args.with_tracking:
accelerator.log(
{
"accuracy": eval_metric["accuracy"],
"f1": eval_metric["f1"],
"train_loss": total_loss.item() / len(train_dataloader),
"epoch": epoch,
},
step=epoch,
)
if checkpointing_steps == "epoch":
output_dir = f"epoch_{epoch}"
if args.output_dir is not None:
output_dir = os.path.join(args.output_dir, output_dir)
accelerator.save_state(output_dir)
if args.with_tracking:
accelerator.end_training()
def main():
parser = argparse.ArgumentParser(description="Simple example of training script.")
parser.add_argument(
"--mixed_precision",
type=str,
default="no",
choices=["no", "fp16", "bf16"],
help="Whether to use mixed precision. Choose"
"between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
"and an Nvidia Ampere GPU.",
)
parser.add_argument("--cpu", action="store_true", help="If passed, will train on the CPU.")
parser.add_argument(
"--checkpointing_steps",
type=str,
default=None,
help="Whether the various states should be saved at the end of every n steps, or 'epoch' for each epoch.",
)
parser.add_argument(
"--resume_from_checkpoint",
type=str,
default=None,
help="If the training should continue from a checkpoint folder.",
)
parser.add_argument(
"--with_tracking",
action="store_true",
help="Whether to load in all available experiment trackers from the environment and use them for logging.",
)
parser.add_argument(
"--output_dir",
type=str,
default=".",
help="Optional save directory where all checkpoint folders will be stored. Default is the current working directory.",
)
parser.add_argument(
"--logging_dir",
type=str,
default="logs",
help="Location on where to store experiment tracking logs`",
)
args = parser.parse_args()
config = {"lr": 2e-5, "num_epochs": 3, "seed": 42, "batch_size": 16}
training_function(config, args)
if __name__ == "__main__":
main()
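
The checkpointing itself reduces to two calls on the `Accelerator`, shown in context above. A condensed sketch with a placeholder path (the prepared objects plus RNG states are what get saved and restored):

from accelerate import Accelerator

accelerator = Accelerator()
# ... accelerator.prepare(...) and some training steps would happen here ...
accelerator.save_state("checkpoints/step_500")  # placeholder path; saves model/optimizer/scheduler/RNG states
# Later, to resume from the same folder:
accelerator.load_state("checkpoints/step_500")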


@ -73,7 +73,7 @@ class PetsDataset(Dataset):
def training_function(config, args):
# Initialize accelerator
accelerator = Accelerator(fp16=args.fp16, cpu=args.cpu, mixed_precision=args.mix_precision)
accelerator = Accelerator(cpu=args.cpu, mixed_precision=args.mixed_precision)
# Sample hyper-parameters for learning rate, batch size, seed and a few other HPs
lr = config["lr"]
@ -139,17 +139,16 @@ def training_function(config, args):
# Instantiate optimizer
optimizer = torch.optim.Adam(params=model.parameters(), lr=lr / 25)
# Instantiate learning rate scheduler
lr_scheduler = OneCycleLR(optimizer=optimizer, max_lr=lr, epochs=num_epochs, steps_per_epoch=len(train_dataloader))
# Prepare everything
# There is no specific order to remember, we just need to unpack the objects in the same order we gave them to the
# prepare method.
model, optimizer, train_dataloader, eval_dataloader = accelerator.prepare(
model, optimizer, train_dataloader, eval_dataloader
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
)
# Instantiate learning rate scheduler after preparing the training dataloader as the prepare method
# may change its length.
lr_scheduler = OneCycleLR(optimizer=optimizer, max_lr=lr, epochs=num_epochs, steps_per_epoch=len(train_dataloader))
# Now we train the model
for epoch in range(num_epochs):
model.train()
@ -167,14 +166,15 @@ def training_function(config, args):
model.eval()
accurate = 0
num_elems = 0
for step, batch in enumerate(eval_dataloader):
for _, batch in enumerate(eval_dataloader):
# We could avoid this line since we set the accelerator with `device_placement=True`.
batch = {k: v.to(accelerator.device) for k, v in batch.items()}
inputs = (batch["image"] - mean) / std
with torch.no_grad():
outputs = model(inputs)
predictions = outputs.argmax(dim=-1)
accurate_preds = accelerator.gather(predictions) == accelerator.gather(batch["label"])
predictions, references = accelerator.gather_for_metrics((predictions, batch["label"]))
accurate_preds = predictions == references
num_elems += accurate_preds.shape[0]
accurate += accurate_preds.long().sum()
@ -186,7 +186,6 @@ def training_function(config, args):
def main():
parser = argparse.ArgumentParser(description="Simple example of training script.")
parser.add_argument("--data_dir", required=True, help="The data folder on disk.")
parser.add_argument("--fp16", action="store_true", help="If passed, will use FP16 training.")
parser.add_argument(
"--mixed_precision",
type=str,
@ -196,6 +195,12 @@ def main():
"between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
"and an Nvidia Ampere GPU.",
)
parser.add_argument(
"--checkpointing_steps",
type=str,
default=None,
help="Whether the various states should be saved at the end of every n steps, or 'epoch' for each epoch.",
)
parser.add_argument("--cpu", action="store_true", help="If passed, will train on the CPU.")
args = parser.parse_args()
config = {"lr": 3e-2, "num_epochs": 3, "seed": 42, "batch_size": 64, "image_size": 224}
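
The diff above moves the `OneCycleLR` construction before `accelerator.prepare` and passes the scheduler through it, so Accelerate can wrap the scheduler together with the dataloader whose length `prepare` may change. A toy sketch of the new ordering (model, data, and hyper-parameters below are stand-ins):

import torch
from torch.optim.lr_scheduler import OneCycleLR
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

# Toy stand-ins purely to show the ordering: the scheduler is built first, then wrapped by `prepare`.
accelerator = Accelerator()
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
train_dataloader = DataLoader(TensorDataset(torch.randn(64, 4)), batch_size=8)
lr_scheduler = OneCycleLR(optimizer, max_lr=1e-3, epochs=3, steps_per_epoch=len(train_dataloader))
model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
    model, optimizer, train_dataloader, lr_scheduler
)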


@ -0,0 +1,43 @@
{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"weight_decay": "auto",
"torch_adam": true,
"adam_w_mode": true
}
},
"scheduler": {
"type": "WarmupDecayLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto",
"total_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 1,
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": "auto",
"contiguous_gradients": true
},
"gradient_accumulation_steps": 1,
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
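
Every `"auto"` in this config is a placeholder that the Accelerate/DeepSpeed integration fills in at launch time from the training script's own values, rather than something to hard-code. A throwaway snippet to list which fields are left as `"auto"` (the filename is a placeholder):

import json

# Throwaway helper; the filename is a placeholder for wherever this config lives on disk.
def auto_fields(node, prefix=""):
    if isinstance(node, dict):
        for key, value in node.items():
            yield from auto_fields(value, f"{prefix}{key}.")
    elif node == "auto":
        yield prefix.rstrip(".")

with open("zero_stage1_config.json") as f:
    print(sorted(auto_fields(json.load(f))))
# e.g. ['gradient_clipping', 'optimizer.params.lr', 'train_batch_size', ...]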


@ -0,0 +1,43 @@
{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"weight_decay": "auto",
"torch_adam": true,
"adam_w_mode": true
}
},
"scheduler": {
"type": "WarmupDecayLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto",
"total_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 2,
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": "auto",
"contiguous_gradients": true
},
"gradient_accumulation_steps": 1,
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}


@ -0,0 +1,47 @@
{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"weight_decay": "auto",
"torch_adam": true,
"adam_w_mode": true
}
},
"scheduler": {
"type": "WarmupDecayLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto",
"total_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": "auto",
"contiguous_gradients": true
},
"gradient_accumulation_steps": 1,
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}


@ -0,0 +1,44 @@
{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupDecayLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto",
"total_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"overlap_comm": true,
"contiguous_gradients": true,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"sub_group_size": 1e9,
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": "auto"
},
"gradient_accumulation_steps": 1,
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}


@ -0,0 +1,52 @@
{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupDecayLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto",
"total_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"sub_group_size": 1e9,
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": "auto"
},
"gradient_accumulation_steps": 1,
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
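
Taken together, these configs differ mainly in which ZeRO stage they enable and what they offload to CPU. A rough summary, meant as a quick reference rather than an exhaustive description:

# Rough mental model of the configs above; see the DeepSpeed docs for full details.
zero_variants = {
    "stage 1": "shards optimizer states across GPUs",
    "stage 2": "also shards gradients",
    "stage 2 + offload_optimizer": "additionally keeps optimizer states in CPU memory",
    "stage 3": "also shards the model parameters themselves",
    "stage 3 + offload_param/offload_optimizer": "parameters and optimizer states live in CPU memory",
}
for name, summary in zero_variants.items():
    print(f"{name:>45}: {summary}")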


@ -15,17 +15,13 @@
import argparse
import torch
from torch.optim import AdamW
from torch.utils.data import DataLoader
import evaluate
from accelerate import Accelerator, DistributedType
from datasets import load_dataset, load_metric
from transformers import (
AdamW,
AutoModelForSequenceClassification,
AutoTokenizer,
get_linear_schedule_with_warmup,
set_seed,
)
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_linear_schedule_with_warmup, set_seed
########################################################################
@ -49,20 +45,19 @@ MAX_GPU_BATCH_SIZE = 16
EVAL_BATCH_SIZE = 32
def training_function(config, args):
# Initialize accelerator
accelerator = Accelerator(fp16=args.fp16, cpu=args.cpu, mixed_precision=args.mixed_precision)
# Sample hyper-parameters for learning rate, batch size, seed and a few other HPs
lr = config["lr"]
num_epochs = int(config["num_epochs"])
correct_bias = config["correct_bias"]
seed = int(config["seed"])
batch_size = int(config["batch_size"])
def get_dataloaders(accelerator: Accelerator, batch_size: int = 16):
"""
Creates a set of `DataLoader`s for the `glue` dataset,
using "bert-base-cased" as the tokenizer.
Args:
accelerator (`Accelerator`):
An `Accelerator` object
batch_size (`int`, *optional*):
The batch size for the train and validation DataLoaders.
"""
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
datasets = load_dataset("glue", "mrpc")
metric = load_metric("glue", "mrpc")
def tokenize_function(examples):
# max_length=None => use the model max length (it's actually the default)
@ -70,21 +65,17 @@ def training_function(config, args):
return outputs
# Apply the method we just defined to all the examples in all the splits of the dataset
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
remove_columns=["idx", "sentence1", "sentence2"],
)
# starting with the main process first:
with accelerator.main_process_first():
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
remove_columns=["idx", "sentence1", "sentence2"],
)
# We also rename the 'label' column to 'labels' which is the expected name for labels by the models of the
# transformers library
tokenized_datasets.rename_column_("label", "labels")
# If the batch size is too big we use gradient accumulation
gradient_accumulation_steps = 1
if batch_size > MAX_GPU_BATCH_SIZE:
gradient_accumulation_steps = batch_size // MAX_GPU_BATCH_SIZE
batch_size = MAX_GPU_BATCH_SIZE
tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
def collate_fn(examples):
# On TPU it's best to pad everything to the same length or training will be very slow.
@ -100,8 +91,28 @@ def training_function(config, args):
tokenized_datasets["validation"], shuffle=False, collate_fn=collate_fn, batch_size=EVAL_BATCH_SIZE
)
set_seed(seed)
return train_dataloader, eval_dataloader
def training_function(config, args):
# Initialize accelerator
accelerator = Accelerator(cpu=args.cpu, mixed_precision=args.mixed_precision)
# Sample hyper-parameters for learning rate, batch size, seed and a few other HPs
lr = config["lr"]
num_epochs = int(config["num_epochs"])
seed = int(config["seed"])
batch_size = int(config["batch_size"])
metric = evaluate.load("glue", "mrpc")
# If the batch size is too big we use gradient accumulation
gradient_accumulation_steps = 1
if batch_size > MAX_GPU_BATCH_SIZE and accelerator.distributed_type != DistributedType.TPU:
gradient_accumulation_steps = batch_size // MAX_GPU_BATCH_SIZE
batch_size = MAX_GPU_BATCH_SIZE
set_seed(seed)
train_dataloader, eval_dataloader = get_dataloaders(accelerator, batch_size)
# Instantiate the model (we build the model here so that the seed also controls the initialization of new weights)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", return_dict=True)
@ -111,23 +122,22 @@ def training_function(config, args):
model = model.to(accelerator.device)
# Instantiate optimizer
optimizer = AdamW(params=model.parameters(), lr=lr, correct_bias=correct_bias)
optimizer = AdamW(params=model.parameters(), lr=lr)
# Prepare everything
# There is no specific order to remember, we just need to unpack the objects in the same order we gave them to the
# prepare method.
model, optimizer, train_dataloader, eval_dataloader = accelerator.prepare(
model, optimizer, train_dataloader, eval_dataloader
)
# Instantiate learning rate scheduler after preparing the training dataloader as the prepare method
# may change its length.
# Instantiate scheduler
lr_scheduler = get_linear_schedule_with_warmup(
optimizer=optimizer,
num_warmup_steps=100,
num_training_steps=(len(train_dataloader) * num_epochs) // gradient_accumulation_steps,
)
# Prepare everything
# There is no specific order to remember, we just need to unpack the objects in the same order we gave them to the
# prepare method.
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
)
# Now we train the model
for epoch in range(num_epochs):
model.train()
@ -150,9 +160,10 @@ def training_function(config, args):
with torch.no_grad():
outputs = model(**batch)
predictions = outputs.logits.argmax(dim=-1)
predictions, references = accelerator.gather_for_metrics((predictions, batch["labels"]))
metric.add_batch(
predictions=accelerator.gather(predictions),
references=accelerator.gather(batch["labels"]),
predictions=predictions,
references=references,
)
eval_metric = metric.compute()
@ -162,7 +173,6 @@ def training_function(config, args):
def main():
parser = argparse.ArgumentParser(description="Simple example of training script.")
parser.add_argument("--fp16", action="store_true", help="If passed, will use FP16 training.")
parser.add_argument(
"--mixed_precision",
type=str,
@@ -174,7 +184,7 @@ def main():
)
parser.add_argument("--cpu", action="store_true", help="If passed, will train on the CPU.")
args = parser.parse_args()
config = {"lr": 2e-5, "num_epochs": 3, "correct_bias": True, "seed": 42, "batch_size": 16}
config = {"lr": 2e-5, "num_epochs": 3, "seed": 42, "batch_size": 16}
training_function(config, args)

View File

@@ -1 +1,3 @@
accelerate # used to be installed in Amazon SageMaker environment
accelerate # used to be installed in Amazon SageMaker environment
evaluate
datasets==2.3.2

View File

@@ -0,0 +1,108 @@
# Copyright 2022 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from manim import *
class Stage1(Scene):
def construct(self):
mem = Rectangle(height=0.5,width=0.5)
fill = Rectangle(height=0.46,width=0.46).set_stroke(width=0)
cpu_left_col_base = [mem.copy() for i in range(6)]
cpu_right_col_base = [mem.copy() for i in range(6)]
cpu_left_col = VGroup(*cpu_left_col_base).arrange(UP, buff=0)
cpu_right_col = VGroup(*cpu_right_col_base).arrange(UP, buff=0)
cpu_rects = VGroup(cpu_left_col,cpu_right_col).arrange(RIGHT, buff=0)
cpu_text = Text("CPU", font_size=24)
cpu = Group(cpu_rects,cpu_text).arrange(DOWN, buff=0.5, aligned_edge=DOWN)
cpu.move_to([-2.5,-.5,0])
self.add(cpu)
gpu_base = [mem.copy() for i in range(1)]
gpu_rect = VGroup(*gpu_base).arrange(UP,buff=0)
gpu_text = Text("GPU", font_size=24)
gpu = Group(gpu_rect,gpu_text).arrange(DOWN, buff=0.5, aligned_edge=DOWN)
gpu.align_to(cpu, DOWN)
gpu.set_x(gpu.get_x() - 1)
self.add(gpu)
model_base = [mem.copy() for i in range(6)]
model_rect = VGroup(*model_base).arrange(RIGHT,buff=0)
model_text = Text("Model", font_size=24)
model = Group(model_rect,model_text).arrange(DOWN, buff=0.5, aligned_edge=DOWN)
model.move_to([3, -1., 0])
self.play(
Create(cpu_left_col, run_time=1),
Create(cpu_right_col, run_time=1),
Create(gpu_rect, run_time=1),
)
step_1 = MarkupText(
f"First, an empty model skeleton is loaded\ninto <span fgcolor='{YELLOW}'>memory</span> without using much RAM.",
font_size=24
)
key = Square(side_length=2.2)
key.move_to([-5, 2, 0])
key_text = MarkupText(
f"<b>Key:</b>\n\n<span fgcolor='{YELLOW}'>●</span> Empty Model",
font_size=18,
)
key_text.move_to([-5, 2.4, 0])
step_1.move_to([2, 2, 0])
self.play(
Write(step_1, run_time=2.5),
Write(key_text),
Write(key)
)
self.add(model)
cpu_targs = []
first_animations = []
second_animations = []
for i,rect in enumerate(model_base):
cpu_target = Rectangle(height=0.46,width=0.46).set_stroke(width=0.).set_fill(YELLOW, opacity=0.7)
cpu_target.move_to(rect)
cpu_target.generate_target()
cpu_target.target.height = 0.46/4
cpu_target.target.width = 0.46/3
if i == 0:
cpu_target.target.next_to(cpu_left_col_base[0].get_corner(DOWN+LEFT), buff=0.02, direction=UP)
cpu_target.target.set_x(cpu_target.target.get_x()+0.1)
elif i == 3:
cpu_target.target.next_to(cpu_targs[0].target, direction=UP, buff=0.)
else:
cpu_target.target.next_to(cpu_targs[i-1].target, direction=RIGHT, buff=0.)
cpu_targs.append(cpu_target)
first_animations.append(rect.animate(run_time=0.5).set_stroke(YELLOW))
second_animations.append(MoveToTarget(cpu_target, run_time=1.5))
self.play(*first_animations)
self.play(*second_animations)
self.wait()
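Stage 1 animates the first step of big model inference: an empty skeleton is created without materializing any weights. A short sketch of that step using `init_empty_weights` from this release (the toy module is purely illustrative):

```python
# Sketch: build a model skeleton on the meta device, using essentially no RAM.
import torch.nn as nn
from accelerate import init_empty_weights

with init_empty_weights():
    skeleton = nn.Sequential(*[nn.Linear(4096, 4096) for _ in range(8)])

# Every parameter lives on the meta device and holds no data yet.
assert all(p.device.type == "meta" for p in skeleton.parameters())
```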

View File

@@ -0,0 +1,126 @@
# Copyright 2022 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from manim import *
class Stage2(Scene):
def construct(self):
mem = Rectangle(height=0.5,width=0.5)
fill = Rectangle(height=0.46,width=0.46).set_stroke(width=0)
cpu_left_col_base = [mem.copy() for i in range(6)]
cpu_right_col_base = [mem.copy() for i in range(6)]
cpu_left_col = VGroup(*cpu_left_col_base).arrange(UP, buff=0)
cpu_right_col = VGroup(*cpu_right_col_base).arrange(UP, buff=0)
cpu_rects = VGroup(cpu_left_col,cpu_right_col).arrange(RIGHT, buff=0)
cpu_text = Text("CPU", font_size=24)
cpu = Group(cpu_rects,cpu_text).arrange(DOWN, buff=0.5, aligned_edge=DOWN)
cpu.move_to([-2.5,-.5,0])
self.add(cpu)
gpu_base = [mem.copy() for i in range(4)]
gpu_rect = VGroup(*gpu_base).arrange(UP,buff=0)
gpu_text = Text("GPU", font_size=24)
gpu = Group(gpu_rect,gpu_text).arrange(DOWN, buff=0.5, aligned_edge=DOWN)
gpu.move_to([-1,-1,0])
self.add(gpu)
model_base = [mem.copy() for i in range(6)]
model_rect = VGroup(*model_base).arrange(RIGHT,buff=0)
model_text = Text("Model", font_size=24)
model = Group(model_rect,model_text).arrange(DOWN, buff=0.5, aligned_edge=DOWN)
model.move_to([3, -1., 0])
self.add(model)
cpu_targs = []
for i,rect in enumerate(model_base):
rect.set_stroke(YELLOW)
# target = fill.copy().set_fill(YELLOW, opacity=0.7)
# target.move_to(rect)
# self.add(target)
cpu_target = Rectangle(height=0.46/4,width=0.46/3).set_stroke(width=0.).set_fill(YELLOW, opacity=0.7)
if i == 0:
cpu_target.next_to(cpu_left_col_base[0].get_corner(DOWN+LEFT), buff=0.02, direction=UP)
cpu_target.set_x(cpu_target.get_x()+0.1)
elif i == 3:
cpu_target.next_to(cpu_targs[0], direction=UP, buff=0.)
else:
cpu_target.next_to(cpu_targs[i-1], direction=RIGHT, buff=0.)
self.add(cpu_target)
cpu_targs.append(cpu_target)
checkpoint_base = [mem.copy() for i in range(6)]
checkpoint_rect = VGroup(*checkpoint_base).arrange(RIGHT,buff=0)
checkpoint_text = Text("Loaded Checkpoint", font_size=24)
checkpoint = Group(checkpoint_rect,checkpoint_text).arrange(DOWN, aligned_edge=DOWN, buff=0.4)
checkpoint.move_to([3, .5, 0])
key = Square(side_length=2.2)
key.move_to([-5, 2, 0])
key_text = MarkupText(
f"<b>Key:</b>\n\n<span fgcolor='{YELLOW}'>●</span> Empty Model",
font_size=18,
)
key_text.move_to([-5, 2.4, 0])
self.add(key_text, key)
blue_text = MarkupText(
f"<span fgcolor='{BLUE}'>●</span> Checkpoint",
font_size=18,
)
blue_text.next_to(key_text, DOWN*2.4, aligned_edge=key_text.get_left())
step_2 = MarkupText(
f'Next, a <i><span fgcolor="{BLUE}">second</span></i> model is loaded into memory,\nwith the weights of a <span fgcolor="{BLUE}">single shard</span>.',
font_size=24
)
step_2.move_to([2, 2, 0])
self.play(
Write(step_2),
Write(blue_text)
)
self.play(
Write(checkpoint_text, run_time=1),
Create(checkpoint_rect, run_time=1)
)
first_animations = []
second_animations = []
for i,rect in enumerate(checkpoint_base):
target = fill.copy().set_fill(BLUE, opacity=0.7)
target.move_to(rect)
first_animations.append(GrowFromCenter(target, run_time=1))
cpu_target = target.copy()
cpu_target.generate_target()
if i < 5:
cpu_target.target.move_to(cpu_left_col_base[i+1])
else:
cpu_target.target.move_to(cpu_right_col_base[i-5])
second_animations.append(MoveToTarget(cpu_target, run_time=1.5))
self.play(*first_animations)
self.play(*second_animations)
self.wait()

View File

@@ -0,0 +1,158 @@
# Copyright 2022 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from manim import *
class Stage3(Scene):
def construct(self):
mem = Rectangle(height=0.5,width=0.5)
meta_mem = Rectangle(height=0.25,width=0.25)
fill = Rectangle(height=0.46,width=0.46).set_stroke(width=0)
cpu_left_col_base = [mem.copy() for i in range(6)]
cpu_right_col_base = [mem.copy() for i in range(6)]
cpu_left_col = VGroup(*cpu_left_col_base).arrange(UP, buff=0)
cpu_right_col = VGroup(*cpu_right_col_base).arrange(UP, buff=0)
cpu_rects = VGroup(cpu_left_col,cpu_right_col).arrange(RIGHT, buff=0)
cpu_text = Text("CPU", font_size=24)
cpu = Group(cpu_rects,cpu_text).arrange(DOWN, buff=0.5, aligned_edge=DOWN)
cpu.move_to([-2.5,-.5,0])
self.add(cpu)
gpu_base = [mem.copy() for i in range(4)]
gpu_rect = VGroup(*gpu_base).arrange(UP,buff=0)
gpu_text = Text("GPU", font_size=24)
gpu = Group(gpu_rect,gpu_text).arrange(DOWN, buff=0.5, aligned_edge=DOWN)
gpu.move_to([-1,-1,0])
self.add(gpu)
model_base = [mem.copy() for i in range(6)]
model_rect = VGroup(*model_base).arrange(RIGHT,buff=0)
model_text = Text("Model", font_size=24)
model = Group(model_rect,model_text).arrange(DOWN, buff=0.5, aligned_edge=DOWN)
model.move_to([3, -1., 0])
self.add(model)
model_arr = []
model_cpu_arr = []
model_meta_arr = []
for i,rect in enumerate(model_base):
rect.set_stroke(YELLOW)
cpu_target = Rectangle(height=0.46/4,width=0.46/3).set_stroke(width=0.).set_fill(YELLOW, opacity=0.7)
if i == 0:
cpu_target.next_to(cpu_left_col_base[0].get_corner(DOWN+LEFT), buff=0.02, direction=UP)
cpu_target.set_x(cpu_target.get_x()+0.1)
elif i == 3:
cpu_target.next_to(model_cpu_arr[0], direction=UP, buff=0.)
else:
cpu_target.next_to(model_cpu_arr[i-1], direction=RIGHT, buff=0.)
self.add(cpu_target)
model_cpu_arr.append(cpu_target)
self.add(*model_arr, *model_cpu_arr, *model_meta_arr)
checkpoint_base = [mem.copy() for i in range(6)]
checkpoint_rect = VGroup(*checkpoint_base).arrange(RIGHT,buff=0)
checkpoint_text = Text("Loaded Checkpoint", font_size=24)
checkpoint = Group(checkpoint_rect,checkpoint_text).arrange(DOWN, buff=0.5, aligned_edge=DOWN)
checkpoint.move_to([3, .5, 0])
self.add(checkpoint)
ckpt_arr = []
ckpt_cpu_arr = []
for i,rect in enumerate(checkpoint_base):
target = fill.copy().set_fill(BLUE, opacity=0.7)
target.move_to(rect)
ckpt_arr.append(target)
cpu_target = target.copy()
if i < 5:
cpu_target.move_to(cpu_left_col_base[i+1])
else:
cpu_target.move_to(cpu_right_col_base[i-5])
ckpt_cpu_arr.append(cpu_target)
self.add(*ckpt_arr, *ckpt_cpu_arr)
key = Square(side_length=2.2)
key.move_to([-5, 2, 0])
key_text = MarkupText(
f"<b>Key:</b>\n\n<span fgcolor='{YELLOW}'>●</span> Empty Model",
font_size=18,
)
key_text.move_to([-5, 2.4, 0])
self.add(key_text, key)
blue_text = MarkupText(
f"<span fgcolor='{BLUE}'>●</span> Checkpoint",
font_size=18,
)
blue_text.next_to(key_text, DOWN*2.4, aligned_edge=key_text.get_left())
self.add(blue_text)
step_3 = MarkupText(
f'Based on the passed-in configuration, weights are stored in\na variety of np.memmaps on disk or sent to a particular device.',
font_size=24
)
step_3.move_to([2, 2, 0])
disk_left_col_base = [meta_mem.copy() for i in range(6)]
disk_right_col_base = [meta_mem.copy() for i in range(6)]
disk_left_col = VGroup(*disk_left_col_base).arrange(UP, buff=0)
disk_right_col = VGroup(*disk_right_col_base).arrange(UP, buff=0)
disk_rects = VGroup(disk_left_col,disk_right_col).arrange(RIGHT, buff=0)
disk_text = Text("Disk", font_size=24)
disk = Group(disk_rects,disk_text).arrange(DOWN, buff=0.5, aligned_edge=DOWN)
disk.move_to([-4.,-1.25,0])
self.play(
Write(step_3, run_time=3),
Write(disk_text, run_time=1),
Create(disk_rects, run_time=1)
)
animations = []
for i,rect in enumerate(ckpt_cpu_arr):
target = rect.copy()
target.generate_target()
target.target.move_to(disk_left_col_base[i]).scale(0.5)
animations.append(MoveToTarget(target, run_time=1.5))
self.play(*animations)
self.play(FadeOut(step_3))
step_4 = MarkupText(
f'Then, the checkpoint is removed from memory\nthrough garbage collection.',
font_size=24
)
step_4.move_to([2, 2, 0])
self.play(
Write(step_4, run_time=3)
)
self.play(
FadeOut(checkpoint_rect, checkpoint_text, *ckpt_arr, *ckpt_cpu_arr),
)
self.wait()
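Stage 3 narrates the offload step: checkpoint weights are written out as `np.memmap` files on disk (or placed directly on a device) and the in-memory copy is then released. A hedged sketch of the disk half of that step, using the `offload_state_dict` helper that `disk_offload` and `dispatch_model` call internally (the folder name is arbitrary):

```python
# Sketch: dump a state dict to disk as memory-mapped arrays, then free the RAM copy.
import gc
import torch.nn as nn
from accelerate.utils import offload_state_dict

shard = nn.Linear(64, 64)                      # stand-in for one loaded checkpoint shard
state_dict = shard.state_dict()
offload_state_dict("offload_dir", state_dict)  # writes memmap files plus an index.json
del state_dict                                 # "removed from memory through garbage collection"
gc.collect()
```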

View File

@@ -0,0 +1,156 @@
# Copyright 2022 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from manim import *
class Stage4(Scene):
def construct(self):
mem = Rectangle(height=0.5,width=0.5)
fill = Rectangle(height=0.46,width=0.46).set_stroke(width=0)
meta_mem = Rectangle(height=0.25,width=0.25)
cpu_left_col_base = [mem.copy() for i in range(6)]
cpu_right_col_base = [mem.copy() for i in range(6)]
cpu_left_col = VGroup(*cpu_left_col_base).arrange(UP, buff=0)
cpu_right_col = VGroup(*cpu_right_col_base).arrange(UP, buff=0)
cpu_rects = VGroup(cpu_left_col,cpu_right_col).arrange(RIGHT, buff=0)
cpu_text = Text("CPU", font_size=24)
cpu = Group(cpu_rects,cpu_text).arrange(DOWN, buff=0.5, aligned_edge=DOWN)
cpu.move_to([-2.5,-.5,0])
self.add(cpu)
gpu_base = [mem.copy() for i in range(4)]
gpu_rect = VGroup(*gpu_base).arrange(UP,buff=0)
gpu_text = Text("GPU", font_size=24)
gpu = Group(gpu_rect,gpu_text).arrange(DOWN, buff=0.5, aligned_edge=DOWN)
gpu.move_to([-1,-1,0])
self.add(gpu)
model_base = [mem.copy() for i in range(6)]
model_rect = VGroup(*model_base).arrange(RIGHT,buff=0)
model_text = Text("Model", font_size=24)
model = Group(model_rect,model_text).arrange(DOWN, buff=0.5, aligned_edge=DOWN)
model.move_to([3, -1., 0])
self.add(model)
model_cpu_arr = []
model_meta_arr = []
for i,rect in enumerate(model_base):
rect.set_stroke(YELLOW)
cpu_target = Rectangle(height=0.46/4,width=0.46/3).set_stroke(width=0.).set_fill(YELLOW, opacity=0.7)
if i == 0:
cpu_target.next_to(cpu_left_col_base[0].get_corner(DOWN+LEFT), buff=0.02, direction=UP)
cpu_target.set_x(cpu_target.get_x()+0.1)
elif i == 3:
cpu_target.next_to(model_cpu_arr[0], direction=UP, buff=0.)
else:
cpu_target.next_to(model_cpu_arr[i-1], direction=RIGHT, buff=0.)
self.add(cpu_target)
model_cpu_arr.append(cpu_target)
self.add(*model_cpu_arr, *model_meta_arr)
disk_left_col_base = [meta_mem.copy() for i in range(6)]
disk_right_col_base = [meta_mem.copy() for i in range(6)]
disk_left_col = VGroup(*disk_left_col_base).arrange(UP, buff=0)
disk_right_col = VGroup(*disk_right_col_base).arrange(UP, buff=0)
disk_rects = VGroup(disk_left_col,disk_right_col).arrange(RIGHT, buff=0)
disk_text = Text("Disk", font_size=24)
disk = Group(disk_rects,disk_text).arrange(DOWN, buff=0.5, aligned_edge=DOWN)
disk.move_to([-4.,-1.25,0])
self.add(disk_text, disk_rects)
cpu_disk_arr = []
for i in range(6):
target = fill.copy().set_fill(BLUE, opacity=0.8)
target.move_to(disk_left_col_base[i]).scale(0.5)
cpu_disk_arr.append(target)
self.add(*cpu_disk_arr)
key = Square(side_length=2.2)
key.move_to([-5, 2, 0])
key_text = MarkupText(
f"<b>Key:</b>\n\n<span fgcolor='{YELLOW}'>●</span> Empty Model",
font_size=18,
)
key_text.move_to([-5, 2.4, 0])
self.add(key_text, key)
blue_text = MarkupText(
f"<span fgcolor='{BLUE}'>●</span> Checkpoint",
font_size=18,
)
blue_text.next_to(key_text, DOWN*2.4, aligned_edge=key_text.get_left())
self.add(blue_text)
step_5 = MarkupText(
f'The offloaded weights are all sent to the CPU.',
font_size=24
)
step_5.move_to([2, 2, 0])
self.play(Write(step_5, run_time=3))
for i in range(6):
rect = cpu_disk_arr[i]
cp2 = rect.copy().set_fill(BLUE, opacity=0.8).scale(2.0)
cp2.generate_target()
cp2.target.move_to(model_base[i])
if i == 0:
rect.set_fill(BLUE, opacity=0.8)
rect.generate_target()
rect.target.move_to(cpu_left_col_base[0]).scale(2.0)
self.remove(*model_meta_arr,
*model_cpu_arr,
)
else:
rect.generate_target()
rect.target.move_to(cpu_left_col_base[i]).scale(2.0)
self.play(
MoveToTarget(rect),
MoveToTarget(cp2),
model_base[i].animate.set_stroke(WHITE)
)
self.play(FadeOut(step_5))
step_5 = MarkupText(
f'Finally, hooks are added to each weight in the model\nto transfer the weights from CPU to GPU\n\t\tand back when needed.',
font_size=24
)
step_5.move_to([2, 2, 0])
self.play(Write(step_5, run_time=3))
arrows = []
animations = []
for i in range(6):
a = Arrow(start=UP, end=DOWN, color=RED, buff=.5)
a.next_to(model_base[i].get_left(), UP, buff=0.2)
arrows.append(a)
animations.append(Write(a))
self.play(*animations)
self.wait()

View File

@@ -0,0 +1,221 @@
# Copyright 2022 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from manim import *
class Stage5(Scene):
def construct(self):
mem = Rectangle(height=0.5,width=0.5)
fill = Rectangle(height=0.46,width=0.46).set_stroke(width=0)
meta_mem = Rectangle(height=0.25,width=0.25)
cpu_left_col_base = [mem.copy() for i in range(6)]
cpu_right_col_base = [mem.copy() for i in range(6)]
cpu_left_col = VGroup(*cpu_left_col_base).arrange(UP, buff=0)
cpu_right_col = VGroup(*cpu_right_col_base).arrange(UP, buff=0)
cpu_rects = VGroup(cpu_left_col,cpu_right_col).arrange(RIGHT, buff=0)
cpu_text = Text("CPU", font_size=24)
cpu = Group(cpu_rects,cpu_text).arrange(DOWN, buff=0.5, aligned_edge=DOWN)
cpu.move_to([-2.5,-.5,0])
self.add(cpu)
gpu_base = [mem.copy() for i in range(4)]
gpu_rect = VGroup(*gpu_base).arrange(UP,buff=0)
gpu_text = Text("GPU", font_size=24)
gpu = Group(gpu_rect,gpu_text).arrange(DOWN, buff=0.5, aligned_edge=DOWN)
gpu.move_to([-1,-1,0])
self.add(gpu)
model_base = [mem.copy() for i in range(6)]
model_rect = VGroup(*model_base).arrange(RIGHT,buff=0)
model_text = Text("Model", font_size=24)
model = Group(model_rect,model_text).arrange(DOWN, buff=0.5, aligned_edge=DOWN)
model.move_to([3, -1., 0])
self.add(model)
model_arr = []
model_cpu_arr = []
for i,rect in enumerate(model_base):
target = fill.copy().set_fill(BLUE, opacity=0.8)
target.move_to(rect)
model_arr.append(target)
cpu_target = Rectangle(height=0.46,width=0.46).set_stroke(width=0.).set_fill(BLUE, opacity=0.8)
cpu_target.move_to(cpu_left_col_base[i])
model_cpu_arr.append(cpu_target)
self.add(*model_arr, *model_cpu_arr)
disk_left_col_base = [meta_mem.copy() for i in range(6)]
disk_right_col_base = [meta_mem.copy() for i in range(6)]
disk_left_col = VGroup(*disk_left_col_base).arrange(UP, buff=0)
disk_right_col = VGroup(*disk_right_col_base).arrange(UP, buff=0)
disk_rects = VGroup(disk_left_col,disk_right_col).arrange(RIGHT, buff=0)
disk_text = Text("Disk", font_size=24)
disk = Group(disk_rects,disk_text).arrange(DOWN, buff=0.5, aligned_edge=DOWN)
disk.move_to([-4,-1.25,0])
self.add(disk_text, disk_rects)
key = Square(side_length=2.2)
key.move_to([-5, 2, 0])
key_text = MarkupText(
f"<b>Key:</b>\n\n<span fgcolor='{YELLOW}'>●</span> Empty Model",
font_size=18,
)
key_text.move_to([-5, 2.4, 0])
self.add(key_text, key)
blue_text = MarkupText(
f"<span fgcolor='{BLUE}'>●</span> Checkpoint",
font_size=18,
)
blue_text.next_to(key_text, DOWN*2.4, aligned_edge=key_text.get_left())
self.add(blue_text)
step_6 = MarkupText(
f'Now watch as an input is passed through the model\nand how the memory is utilized and handled.',
font_size=24
)
step_6.move_to([2, 2, 0])
self.play(Write(step_6))
input = Square(0.3)
input.set_fill(RED, opacity=1.)
input.set_stroke(width=0.)
input.next_to(model_base[0], LEFT, buff=.5)
self.play(Write(input))
input.generate_target()
input.target.next_to(model_arr[0], direction=LEFT, buff=0.02)
self.play(MoveToTarget(input))
self.play(FadeOut(step_6))
a = Arrow(start=UP, end=DOWN, color=RED, buff=.5)
a.next_to(model_arr[0].get_left(), UP, buff=0.2)
model_cpu_arr[0].generate_target()
model_cpu_arr[0].target.move_to(gpu_rect[0])
step_7 = MarkupText(
f'As the input reaches a layer, the hook triggers\nand weights are moved from the CPU\nto the GPU and back.',
font_size=24
)
step_7.move_to([2, 2, 0])
self.play(Write(step_7, run_time=3))
circ_kwargs = {"run_time":1, "fade_in":True, "fade_out":True, "buff":0.02}
self.play(
Write(a),
Circumscribe(model_arr[0], color=ORANGE, **circ_kwargs),
Circumscribe(model_cpu_arr[0], color=ORANGE, **circ_kwargs),
Circumscribe(gpu_rect[0], color=ORANGE, **circ_kwargs),
)
self.play(
MoveToTarget(model_cpu_arr[0])
)
a_c = a.copy()
for i in range(6):
a_c.next_to(model_arr[i].get_right()+0.02, UP, buff=0.2)
input.generate_target()
input.target.move_to(model_arr[i].get_right()+0.02)
grp = AnimationGroup(
FadeOut(a, run_time=.5),
MoveToTarget(input, run_time=.5),
FadeIn(a_c, run_time=.5),
lag_ratio=0.2
)
self.play(grp)
model_cpu_arr[i].generate_target()
model_cpu_arr[i].target.move_to(cpu_left_col_base[i])
if i < 5:
model_cpu_arr[i+1].generate_target()
model_cpu_arr[i+1].target.move_to(gpu_rect[0])
if i >= 1:
circ_kwargs["run_time"] = .7
self.play(
Circumscribe(model_arr[i], **circ_kwargs),
Circumscribe(cpu_left_col_base[i], **circ_kwargs),
Circumscribe(cpu_left_col_base[i+1], color=ORANGE, **circ_kwargs),
Circumscribe(gpu_rect[0], color=ORANGE, **circ_kwargs),
Circumscribe(model_arr[i+1], color=ORANGE, **circ_kwargs),
)
if i < 1:
self.play(
MoveToTarget(model_cpu_arr[i]),
MoveToTarget(model_cpu_arr[i+1]),
)
else:
self.play(
MoveToTarget(model_cpu_arr[i], run_time=.7),
MoveToTarget(model_cpu_arr[i+1], run_time=.7),
)
else:
model_cpu_arr[i].generate_target()
model_cpu_arr[i].target.move_to(cpu_left_col_base[-1])
input.generate_target()
input.target.next_to(model_arr[-1].get_right(), RIGHT+0.02, buff=0.2)
self.play(
Circumscribe(model_arr[-1], color=ORANGE, **circ_kwargs),
Circumscribe(cpu_left_col_base[-1], color=ORANGE, **circ_kwargs),
Circumscribe(gpu_rect[0], color=ORANGE, **circ_kwargs),
)
self.play(
MoveToTarget(model_cpu_arr[i])
)
a = a_c
a_c = a_c.copy()
input.generate_target()
input.target.next_to(model_base[-1], RIGHT+0.02, buff=.5)
self.play(
FadeOut(step_7),
FadeOut(a, run_time=.5),
)
step_8 = MarkupText(
f'Inference on a model too large for GPU memory\nis successfully completed.', font_size=24
)
step_8.move_to([2, 2, 0])
self.play(
Write(step_8, run_time=3),
MoveToTarget(input)
)
self.wait()
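Stage 5 walks an input through a model whose layers are streamed onto the GPU on demand. A hedged sketch of the full flow the five animations describe, using the `big_modeling` API added in this release; the checkpoint folder and model class below are placeholders:

```python
# Sketch only: the checkpoint path and model class are hypothetical placeholders.
import torch
from accelerate import init_empty_weights, load_checkpoint_and_dispatch
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("some/sharded-checkpoint")   # Stage 1: empty skeleton
with init_empty_weights():
    model = AutoModelForCausalLM.from_config(config)

model = load_checkpoint_and_dispatch(                            # Stages 2-4: load shards,
    model,                                                       # offload, attach hooks
    checkpoint="some/sharded-checkpoint",
    device_map="auto",
    offload_folder="offload_dir",
)

inputs = torch.randint(0, 1000, (1, 8))                          # Stage 5: run inference
with torch.no_grad():
    outputs = model(inputs)
```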

View File

@@ -16,13 +16,15 @@ from setuptools import setup
from setuptools import find_packages
extras = {}
extras["quality"] = ["black ~= 22.0", "isort >= 5.5.4", "flake8 >= 3.8.3"]
extras["quality"] = ["black ~= 22.0", "isort >= 5.5.4", "flake8 >= 3.8.3", "hf-doc-builder >= 0.3.0"]
extras["docs"] = []
extras["test"] = [
"pytest",
"pytest-xdist",
]
extras["dev"] = extras["quality"] + extras["test"]
extras["test_prod"] = ["pytest", "pytest-xdist", "pytest-subtests", "parameterized"]
extras["test_dev"] = ["datasets", "evaluate", "transformers", "scipy", "sklearn", "deepspeed<0.7.0", "tqdm"]
extras["testing"] = extras["test_prod"] + extras["test_dev"]
extras["rich"] = ["rich"]
extras["test_trackers"] = ["wandb", "comet-ml", "tensorboard"]
extras["dev"] = extras["quality"] + extras["testing"] + extras["rich"]
extras["sagemaker"] = [
"sagemaker", # boto3 is a required package in sagemaker
@@ -30,7 +32,7 @@ extras["sagemaker"] = [
setup(
name="accelerate",
version="0.7.0.dev0",
version="0.13.0",
description="Accelerate",
long_description=open("README.md", "r", encoding="utf-8").read(),
long_description_content_type="text/markdown",
@@ -48,8 +50,8 @@ setup(
"accelerate-launch=accelerate.commands.launch:main",
]
},
python_requires=">=3.6.0",
install_requires=["torch>=1.4.0", "pyyaml", "numpy>=1.17"],
python_requires=">=3.7.0",
install_requires=["numpy>=1.17", "packaging>=20.0", "psutil", "pyyaml", "torch>=1.4.0"],
extras_require=extras,
classifiers=[
"Development Status :: 5 - Production/Stable",
@@ -59,7 +61,6 @@ setup(
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
],

View File

@@ -2,10 +2,25 @@
# There's no way to ignore "F401 '...' imported but unused" warnings in this
# module, but to preserve other warnings. So, don't check this module at all.
__version__ = "0.7.0.dev0"
__version__ = "0.13.0"
from .accelerator import Accelerator
from .kwargs_handlers import DistributedDataParallelKwargs, GradScalerKwargs, InitProcessGroupKwargs
from .big_modeling import cpu_offload, disk_offload, dispatch_model, init_empty_weights, load_checkpoint_and_dispatch
from .launchers import debug_launcher, notebook_launcher
from .state import DistributedType
from .utils import DeepSpeedPlugin, synchronize_rng_states
from .utils import (
DeepSpeedPlugin,
DistributedDataParallelKwargs,
DistributedType,
FullyShardedDataParallelPlugin,
GradScalerKwargs,
InitProcessGroupKwargs,
find_executable_batch_size,
infer_auto_device_map,
is_rich_available,
load_checkpoint_in_model,
synchronize_rng_states,
)
if is_rich_available():
from .utils import rich
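Of the utilities newly re-exported at the top level above, `find_executable_batch_size` benefits most from a usage note: it wraps a training function and, on a CUDA out-of-memory error, retries it with the batch size halved. A hedged sketch of the usual pattern (the training body is a placeholder):

```python
# Sketch: the decorator injects batch_size and halves it after each OOM retry.
from accelerate import Accelerator, find_executable_batch_size

accelerator = Accelerator()

@find_executable_batch_size(starting_batch_size=128)
def inner_training_loop(batch_size):
    accelerator.free_memory()  # release references held from a failed attempt
    print(f"Trying batch size {batch_size}")
    # (re)build dataloaders, model, and optimizer for this batch size and train here

inner_training_loop()  # call with no arguments; batch_size is supplied by the decorator
```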

File diff suppressed because it is too large.

View File

@@ -0,0 +1,373 @@
# Copyright 2022 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from contextlib import contextmanager
from typing import Dict, List, Optional, Union
import torch
import torch.nn as nn
from .hooks import AlignDevicesHook, add_hook_to_module, attach_align_device_hook, attach_align_device_hook_on_blocks
from .utils import (
OffloadedWeightsLoader,
check_device_map,
extract_submodules_state_dict,
get_balanced_memory,
infer_auto_device_map,
load_checkpoint_in_model,
offload_state_dict,
)
from .utils.versions import is_torch_version
@contextmanager
def init_empty_weights(include_buffers: bool = False):
"""
A context manager under which models are initialized with all parameters on the meta device, therefore creating an
empty model. Useful when just initializing the model would exhaust the available RAM.
Args:
include_buffers (`bool`, *optional*, defaults to `False`):
Whether or not to also put all buffers on the meta device while initializing.
Example:
```python
import torch.nn as nn
from accelerate import init_empty_weights
# Initialize a model with 100 billion parameters in no time and without using any RAM.
with init_empty_weights():
tst = nn.Sequential(*[nn.Linear(10000, 10000) for _ in range(1000)])
```
<Tip warning={true}>
Any model created under this context manager has no weights. As such you can't do something like
`model.to(some_device)` with it. To load weights inside your empty model, see [`load_checkpoint_and_dispatch`].
</Tip>
"""
if not is_torch_version(">=", "1.9.0"):
raise NotImplementedError("Initializing empty weights to a meta device requires torch >= 1.9.0")
old_register_parameter = nn.Module.register_parameter
if include_buffers:
old_register_buffer = nn.Module.register_buffer
def register_empty_parameter(module, name, param):
old_register_parameter(module, name, param)
if param is not None:
param_cls = type(module._parameters[name])
kwargs = module._parameters[name].__dict__
module._parameters[name] = param_cls(module._parameters[name].to(torch.device("meta")), **kwargs)
def register_empty_buffer(module, name, buffer):
old_register_buffer(module, name, buffer)
if buffer is not None:
module._buffers[name] = module._buffers[name].to(torch.device("meta"))
# Patch tensor creation
if include_buffers:
tensor_constructors_to_patch = {
torch_function_name: getattr(torch, torch_function_name)
for torch_function_name in ["empty", "zeros", "ones", "full"]
}
else:
tensor_constructors_to_patch = {}
def patch_tensor_constructor(fn):
def wrapper(*args, **kwargs):
kwargs["device"] = torch.device("meta")
return fn(*args, **kwargs)
return wrapper
try:
nn.Module.register_parameter = register_empty_parameter
if include_buffers:
nn.Module.register_buffer = register_empty_buffer
for torch_function_name in tensor_constructors_to_patch.keys():
setattr(torch, torch_function_name, patch_tensor_constructor(getattr(torch, torch_function_name)))
yield
finally:
nn.Module.register_parameter = old_register_parameter
if include_buffers:
nn.Module.register_buffer = old_register_buffer
for torch_function_name, old_torch_function in tensor_constructors_to_patch.items():
setattr(torch, torch_function_name, old_torch_function)
def cpu_offload(
model: nn.Module,
execution_device: Optional[torch.device] = None,
offload_buffers: bool = False,
state_dict: Optional[Dict[str, torch.Tensor]] = None,
preload_module_classes: Optional[List[str]] = None,
):
"""
Activates full CPU offload for a model. As a result, all parameters of the model will be offloaded and only one
copy of the state dict of the model will be kept. During the forward pass, parameters will be extracted from that
state dict and put on the execution device passed as they are needed, then offloaded again.
Args:
model (`torch.nn.Module`):
The model to offload.
execution_device (`torch.device`, *optional*):
The device on which the forward pass of the model will be executed (should be a GPU). Will default to the
model's first parameter device.
offload_buffers (`bool`, *optional*, defaults to `False`):
Whether or not to offload the buffers with the model parameters.
state_dict (`Dict[str, torch.Tensor]`, *optional*):
The state dict of the model that will be kept on CPU.
preload_module_classes (`List[str]`, *optional*):
A list of classes whose instances should load all their weights (even in the submodules) at the beginning
of the forward. This should only be used for classes that have submodules which are registered but not
called directly during the forward, for instance if a `dense` linear layer is registered, but at forward,
`dense.weight` and `dense.bias` are used in some operations instead of calling `dense` directly.
"""
if not is_torch_version(">=", "1.9.0"):
raise NotImplementedError("CPU offloading requires torch >= 1.9.0")
if execution_device is None:
execution_device = next(iter(model.parameters())).device
if state_dict is None:
state_dict = {n: p.to("cpu") for n, p in model.state_dict().items()}
attach_align_device_hook(
model,
execution_device=execution_device,
offload=True,
offload_buffers=offload_buffers,
weights_map=state_dict,
preload_module_classes=preload_module_classes,
)
add_hook_to_module(model, AlignDevicesHook(io_same_device=True))
return model
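A short usage sketch for `cpu_offload`; it assumes a CUDA device is available and uses a toy model for illustration:

```python
# Sketch: parameters stay on the CPU; hooks move each submodule's weights to the
# execution device for its forward pass and release them again afterwards.
import torch
import torch.nn as nn
from accelerate import cpu_offload

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
model = cpu_offload(model, execution_device=torch.device("cuda"))

output = model(torch.randn(4, 512))  # weights are streamed in layer by layer
```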
def disk_offload(
model: nn.Module,
offload_dir: Union[str, os.PathLike],
execution_device: Optional[torch.device] = None,
offload_buffers: bool = False,
preload_module_classes: Optional[List[str]] = None,
):
"""
Activates full disk offload for a model. As a result, all parameters of the model will be offloaded as
memory-mapped arrays in a given folder. During the forward pass, parameters will be accessed from that folder and
put on the execution device passed as they are needed, then offloaded again.
Args:
model (`torch.nn.Module`): The model to offload.
offload_dir (`str` or `os.PathLike`):
The folder in which to offload the model weights (or where the model weights are already offloaded).
execution_device (`torch.device`, *optional*):
The device on which the forward pass of the model will be executed (should be a GPU). Will default to the
model's first parameter device.
offload_buffers (`bool`, *optional*, defaults to `False`):
Whether or not to offload the buffers with the model parameters.
preload_module_classes (`List[str]`, *optional*):
A list of classes whose instances should load all their weights (even in the submodules) at the beginning
of the forward. This should only be used for classes that have submodules which are registered but not
called directly during the forward, for instance if a `dense` linear layer is registered, but at forward,
`dense.weight` and `dense.bias` are used in some operations instead of calling `dense` directly.
"""
if not is_torch_version(">=", "1.9.0"):
raise NotImplementedError("Disk offloading requires torch >= 1.9.0")
if not os.path.isdir(offload_dir) or not os.path.isfile(os.path.join(offload_dir, "index.json")):
offload_state_dict(offload_dir, model.state_dict())
if execution_device is None:
execution_device = next(iter(model.parameters())).device
weights_map = OffloadedWeightsLoader(save_folder=offload_dir)
attach_align_device_hook(
model,
execution_device=execution_device,
offload=True,
offload_buffers=offload_buffers,
weights_map=weights_map,
preload_module_classes=preload_module_classes,
)
add_hook_to_module(model, AlignDevicesHook(io_same_device=True))
return model
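And the matching sketch for `disk_offload`; the offload folder is arbitrary and is populated on the first call if it does not already contain an `index.json`:

```python
# Sketch: parameters are memory-mapped from disk rather than kept in RAM.
import torch
import torch.nn as nn
from accelerate import disk_offload

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
model = disk_offload(model, offload_dir="offload_dir", execution_device=torch.device("cuda"))

output = model(torch.randn(4, 512))
```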
def dispatch_model(
model: nn.Module,
device_map: Dict[str, Union[str, int, torch.device]],
main_device: Optional[torch.device] = None,
state_dict: Optional[Dict[str, torch.Tensor]] = None,
offload_dir: Union[str, os.PathLike] = None,
offload_buffers: bool = False,
preload_module_classes: Optional[List[str]] = None,
):
"""
Dispatches a model according to a given device map. Layers of the model might be spread across GPUs, offloaded on
the CPU or even the disk.
Args:
model (`torch.nn.Module`):
The model to dispatch.
device_map (`Dict[str, Union[str, int, torch.device]]`):
A dictionary mapping module names in the model's `state_dict` to the device they should go to. Note that
`"disk"` is accepted even if it's not a proper value for `torch.device`.
main_device (`str`, `int` or `torch.device`, *optional*):
The main execution device. Will default to the first device in the `device_map` different from `"cpu"` or
`"disk"`.
state_dict (`Dict[str, torch.Tensor]`, *optional*):
The state dict of the part of the model that will be kept on CPU.
offload_dir (`str` or `os.PathLike`):
The folder in which to offload the model weights (or where the model weights are already offloaded).
offload_buffers (`bool`, *optional*, defaults to `False`):
Whether or not to offload the buffers with the model parameters.
preload_module_classes (`List[str]`, *optional*):
A list of classes whose instances should load all their weights (even in the submodules) at the beginning
of the forward. This should only be used for classes that have submodules which are registered but not
called directly during the forward, for instance if a `dense` linear layer is registered, but at forward,
`dense.weight` and `dense.bias` are used in some operations instead of calling `dense` directly.
"""
if not is_torch_version(">=", "1.9.0"):
raise NotImplementedError("Model dispatching requires torch >= 1.9.0")
# Error early if the device map is incomplete.
check_device_map(model, device_map)
if main_device is None:
main_device = [d for d in device_map.values() if d not in ["cpu", "disk"]][0]
cpu_modules = [name for name, device in device_map.items() if device == "cpu"]
if state_dict is None and len(cpu_modules) > 0:
state_dict = extract_submodules_state_dict(model.state_dict(), cpu_modules)
disk_modules = [name for name, device in device_map.items() if device == "disk"]
if offload_dir is None and len(disk_modules) > 0:
raise ValueError(
"We need an `offload_dir` to dispatch this model according to this `device_map`, the following submodules "
f"need to be offloaded: {', '.join(disk_modules)}."
)
if len(disk_modules) > 0 and (
not os.path.isdir(offload_dir) or not os.path.isfile(os.path.join(offload_dir, "index.json"))
):
disk_state_dict = extract_submodules_state_dict(model.state_dict(), disk_modules)
offload_state_dict(offload_dir, disk_state_dict)
execution_device = {
name: main_device if device in ["cpu", "disk"] else device for name, device in device_map.items()
}
offload = {name: device in ["cpu", "disk"] for name, device in device_map.items()}
save_folder = offload_dir if len(disk_modules) > 0 else None
if state_dict is not None or save_folder is not None:
weights_map = OffloadedWeightsLoader(state_dict=state_dict, save_folder=save_folder)
else:
weights_map = None
attach_align_device_hook_on_blocks(
model,
execution_device=execution_device,
offload=offload,
offload_buffers=offload_buffers,
weights_map=weights_map,
preload_module_classes=preload_module_classes,
)
model.hf_device_map = device_map
return model
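A hedged sketch of `dispatch_model` with a hand-written `device_map`; the keys must be submodule names of the model, `"disk"` entries additionally require `offload_dir`, and a single GPU is assumed here:

```python
# Sketch: split a toy model between GPU 0 and the CPU according to a device map.
import torch
import torch.nn as nn
from accelerate import dispatch_model

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
device_map = {"0": 0, "1": 0, "2": "cpu"}   # keys are the Sequential's submodule names
model = dispatch_model(model, device_map=device_map)

output = model(torch.randn(4, 512))         # hooks route weights and activations between devices
```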
def load_checkpoint_and_dispatch(
model: nn.Module,
checkpoint: Union[str, os.PathLike],
device_map: Optional[Union[str, Dict[str, Union[int, str, torch.device]]]] = None,
max_memory: Optional[Dict[Union[int, str], Union[int, str]]] = None,
no_split_module_classes: Optional[List[str]] = None,
offload_folder: Optional[Union[str, os.PathLike]] = None,
offload_buffers: bool = False,
dtype: Optional[Union[str, torch.dtype]] = None,
offload_state_dict: Optional[bool] = None,
preload_module_classes: Optional[List[str]] = None,
):
"""
Loads a (potentially sharded) checkpoint inside a model, potentially sending weights to a given device as they are
loaded and adds the various hooks that will make this model run properly (even if split across devices).
Args:
model (`torch.nn.Module`): The model in which we want to load a checkpoint.
checkpoint (`str` or `os.PathLike`):
The folder checkpoint to load. It can be:
- a path to a file containing a whole model state dict
- a path to a `.json` file containing the index to a sharded checkpoint
- a path to a folder containing a unique `.index.json` file and the shards of a checkpoint.
device_map (`Dict[str, Union[int, str, torch.device]]`, *optional*):
A map that specifies where each submodule should go. It doesn't need to be refined to each parameter/buffer
name; once a given module name is inside, every submodule of it will be sent to the same device.
To have Accelerate compute the most optimized `device_map` automatically, set `device_map="auto"`. For more
information about each option see [here](big_modeling#designing-a-device-map).
max_memory (`Dict`, *optional*):
A dictionary mapping device identifiers to maximum memory. Will default to the maximum memory available for each GPU
and the available CPU RAM if unset.
no_split_module_classes (`List[str]`, *optional*):
A list of layer class names that should never be split across devices (for instance any layer that has a
residual connection).
offload_folder (`str` or `os.PathLike`, *optional*):
If the `device_map` contains any value `"disk"`, the folder where we will offload weights.
offload_buffers (`bool`, *optional*, defaults to `False`):
In the layers that are offloaded on the CPU or the hard drive, whether or not to offload the buffers as
well as the parameters.
dtype (`str` or `torch.dtype`, *optional*):
If provided, the weights will be converted to that type when loaded.
offload_state_dict (`bool`, *optional*):
If `True`, will temporarily offload the CPU state dict on the hard drive to avoid getting out of CPU RAM if
the weight of the CPU state dict + the biggest shard does not fit. Will default to `True` if the device map
picked contains `"disk"` values.
preload_module_classes (`List[str]`, *optional*):
A list of classes whose instances should load all their weights (even in the submodules) at the beginning
of the forward. This should only be used for classes that have submodules which are registered but not
called directly during the forward, for instance if a `dense` linear layer is registered, but at forward,
`dense.weight` and `dense.bias` are used in some operations instead of calling `dense` directly.
"""
if not is_torch_version(">=", "1.9.0"):
raise NotImplementedError("Loading and dispatching requires torch >= 1.9.0")
if isinstance(device_map, str) and device_map not in ["auto", "balanced", "balanced_low_0", "sequential"]:
raise ValueError(
"If passing a string for `device_map`, please choose 'auto', 'balanced', 'balanced_low_0' or "
"'sequential'."
)
if device_map != "sequential":
max_memory = get_balanced_memory(
model,
max_memory=max_memory,
no_split_module_classes=no_split_module_classes,
dtype=dtype,
low_zero=(device_map == "balanced_low_0"),
)
if isinstance(device_map, str):
device_map = infer_auto_device_map(
model, max_memory=max_memory, no_split_module_classes=no_split_module_classes, dtype=dtype
)
if offload_state_dict is None and "disk" in device_map.values():
offload_state_dict = True
load_checkpoint_in_model(
model,
checkpoint,
device_map=device_map,
offload_folder=offload_folder,
dtype=dtype,
offload_state_dict=offload_state_dict,
)
if device_map is None:
return model
return dispatch_model(
model,
device_map=device_map,
offload_dir=offload_folder,
offload_buffers=offload_buffers,
preload_module_classes=preload_module_classes,
)
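Tying the pieces together, a sketch of a typical call; `MyModel`, `MyBlock`, and the paths are hypothetical placeholders, and the checkpoint may be a single file, an index `.json`, or a folder of shards as described above:

```python
# Sketch with hypothetical names: load a sharded checkpoint into an empty model
# and dispatch it across whatever devices are available.
import torch
from accelerate import init_empty_weights, load_checkpoint_and_dispatch
from my_modeling import MyModel  # hypothetical model definition

with init_empty_weights():
    model = MyModel()

model = load_checkpoint_and_dispatch(
    model,
    checkpoint="checkpoint_dir",          # file, .json index, or folder of shards
    device_map="auto",                    # let infer_auto_device_map pick placements
    no_split_module_classes=["MyBlock"],  # keep residual blocks on a single device
    offload_folder="offload_dir",         # only used if some weights land on "disk"
    dtype=torch.float16,
)
```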

View File

@@ -21,21 +21,34 @@ import numpy as np
import torch
from torch.cuda.amp import GradScaler
from .state import is_tpu_available
from .utils import MODEL_NAME, OPTIMIZER_NAME, RNG_STATE_NAME, SCALER_NAME, get_pretty_name, save
from .utils import (
MODEL_NAME,
OPTIMIZER_NAME,
RNG_STATE_NAME,
SCALER_NAME,
SCHEDULER_NAME,
get_pretty_name,
is_tpu_available,
save,
)
if is_tpu_available():
if is_tpu_available(check_device=False):
import torch_xla.core.xla_model as xm
import logging
from .logging import get_logger
logger = logging.getLogger(__name__)
logger = get_logger(__name__)
def save_accelerator_state(
output_dir: str, model_states: List[dict], optimizers: list, process_index: int, scaler: GradScaler = None
output_dir: str,
model_states: List[dict],
optimizers: list,
schedulers: list,
process_index: int,
scaler: GradScaler = None,
):
"""
Saves the current states of the models, optimizers, scaler, and RNG generators to a given directory.
@@ -47,6 +60,8 @@ def save_accelerator_state(
A list of model states
optimizers (`List[torch.optim.Optimizer]`):
A list of optimizer instances
schedulers (`List[torch.optim.lr_scheduler._LRScheduler]`):
A list of learning rate schedulers
process_index (`int`):
The current process index in the Accelerator state
scaler (`torch.cuda.amp.GradScaler`, *optional*):
@@ -65,6 +80,13 @@ def save_accelerator_state(
output_optimizer_file = os.path.join(output_dir, optimizer_name)
save(state, output_optimizer_file)
logger.info(f"Optimizer state saved in {output_optimizer_file}")
# Scheduler states
for i, scheduler in enumerate(schedulers):
state = scheduler.state_dict()
scheduler_name = f"{SCHEDULER_NAME}.bin" if i == 0 else f"{SCHEDULER_NAME}_{i}.bin"
output_scheduler_file = os.path.join(output_dir, scheduler_name)
save(state, output_scheduler_file)
logger.info(f"Scheduler state saved in {output_scheduler_file}")
# GradScaler state
if scaler is not None:
state = scaler.state_dict()
@@ -80,24 +102,26 @@ def save_accelerator_state(
states["torch_cuda_manual_seed"] = torch.cuda.get_rng_state_all()
# ^^ safe to call this function even if cuda is not available
if is_tpu_available():
states["xm_seed"] = torch.tensor(xm.get_rng_state())
states["xm_seed"] = xm.get_rng_state()
output_states_file = os.path.join(output_dir, states_name)
torch.save(states, output_states_file)
logger.info(f"Random states saved in {output_states_file}")
return output_dir
def load_accelerator_state(input_dir, models, optimizers, process_index, scaler=None):
def load_accelerator_state(input_dir, models, optimizers, schedulers, process_index, scaler=None):
"""
Loads states of the models, optimizers, scaler, and RNG generators from a given directory.
Args:
input_dir (`str` or `os.PathLike`):
The name of the folder to load all relevant weights and states.
model_stmodelsates (`List[torch.nn.Module]`):
models (`List[torch.nn.Module]`):
A list of model instances
optimizers (`List[torch.optim.Optimizer]`):
A list of optimizer instances
schedulers (`List[torch.optim.lr_scheduler._LRScheduler]`):
A list of learning rate schedulers
process_index (`int`):
The current process index in the Accelerator state
scaler (`torch.cuda.amp.GradScaler`, *optional*):
@@ -107,16 +131,23 @@ def load_accelerator_state(input_dir, models, optimizers, process_index, scaler=
for i, model in enumerate(models):
weights_name = f"{MODEL_NAME}.bin" if i == 0 else f"{MODEL_NAME}_{i}.bin"
input_model_file = os.path.join(input_dir, weights_name)
models[i].load_state_dict(torch.load(input_model_file))
models[i].load_state_dict(torch.load(input_model_file, map_location="cpu"))
logger.info("All model weights loaded successfully")
# Optimizer states
for i, opt in enumerate(optimizers):
optimizer_name = f"{OPTIMIZER_NAME}.bin" if i == 0 else f"{OPTIMIZER_NAME}_{i}.bin"
input_optimizer_file = os.path.join(input_dir, optimizer_name)
optimizers[i].load_state_dict(torch.load(input_optimizer_file))
optimizers[i].load_state_dict(torch.load(input_optimizer_file, map_location="cpu"))
logger.info("All optimizer states loaded successfully")
# Scheduler states
for i, scheduler in enumerate(schedulers):
scheduler_name = f"{SCHEDULER_NAME}.bin" if i == 0 else f"{SCHEDULER_NAME}_{i}.bin"
input_scheduler_file = os.path.join(input_dir, scheduler_name)
scheduler.load_state_dict(torch.load(input_scheduler_file))
logger.info("All scheduler states loaded successfully")
# GradScaler state
if scaler is not None:
input_scaler_file = os.path.join(input_dir, SCALER_NAME)
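The changes above make the learning-rate scheduler part of what gets checkpointed, alongside model, optimizer, scaler, and RNG state, and load everything back onto CPU first. From a training script the usual entry points are `Accelerator.save_state` and `Accelerator.load_state`; a hedged sketch with a toy setup (the directory name is arbitrary):

```python
# Sketch: after prepare(), the scheduler's state is saved as scheduler.bin next to
# the model, optimizer, and RNG state files, and restored by load_state().
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)
dataloader = DataLoader(TensorDataset(torch.randn(16, 4), torch.randn(16, 2)), batch_size=4)

model, optimizer, dataloader, scheduler = accelerator.prepare(model, optimizer, dataloader, scheduler)

accelerator.save_state("checkpoint_dir")  # model, optimizer, scheduler, RNG states
accelerator.load_state("checkpoint_dir")  # restores all of them, scheduler included
```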

View File

@@ -17,7 +17,7 @@
import argparse
import os
from accelerate.state import ComputeEnvironment
from accelerate.utils import ComputeEnvironment
from .cluster import get_cluster_input
from .config_args import cache_dir, default_config_file, default_yaml_config_file, load_config_from_file # noqa: F401

View File

@@ -14,24 +14,32 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from accelerate.state import ComputeEnvironment, DistributedType
from ...utils import is_deepspeed_available
from ...utils import ComputeEnvironment, DistributedType, is_deepspeed_available, is_transformers_available
from ...utils.constants import (
DEEPSPEED_MULTINODE_LAUNCHERS,
FSDP_AUTO_WRAP_POLICY,
FSDP_BACKWARD_PREFETCH,
FSDP_SHARDING_STRATEGY,
FSDP_STATE_DICT_TYPE,
)
from .config_args import ClusterConfig
from .config_utils import _ask_field, _convert_distributed_mode, _convert_yes_no_to_bool
def get_cluster_input():
distributed_type = _ask_field(
"Which type of machine are you using? ([0] No distributed training, [1] multi-CPU, [2] multi-GPU, [3] TPU): ",
"Which type of machine are you using? ([0] No distributed training, [1] multi-CPU, [2] multi-GPU, [3] TPU [4] MPS): ",
_convert_distributed_mode,
error_message="Please enter 0, 1, 2 or 3.",
error_message="Please enter 0, 1, 2, 3 or 4.",
)
machine_rank = 0
num_machines = 1
gpu_ids = None
main_process_ip = None
main_process_port = None
rdzv_backend = "static"
same_network = True
if distributed_type in [DistributedType.MULTI_GPU, DistributedType.MULTI_CPU]:
num_machines = _ask_field(
"How many different machines will you use (use more than 1 for multi-node training)? [1]: ",
@@ -51,19 +59,30 @@ def get_cluster_input():
"What is the port you will use to communicate with the main process? ",
lambda x: int(x),
)
same_network = _ask_field(
"Are all the machines on the same local network? Answer `no` if nodes are on the cloud and/or on different network hosts [YES/no]: ",
_convert_yes_no_to_bool,
default=True,
error_message="Please enter yes or no.",
)
if not same_network:
rdzv_backend = _ask_field(
"What rendezvous backend will you use? ('static', 'c10d', ...): ", default="static"
)
if distributed_type == DistributedType.NO:
use_cpu = _ask_field(
"Do you want to run your training on CPU only (even if a GPU is available)? [no]:",
lambda x: bool(x),
"Do you want to run your training on CPU only (even if a GPU is available)? [yes/NO]:",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
elif distributed_type == DistributedType.MULTI_CPU:
use_cpu = True
else:
use_cpu = False
deepspeed_config = None
deepspeed_config = {}
if distributed_type in [DistributedType.MULTI_GPU, DistributedType.NO]:
use_deepspeed = _ask_field(
"Do you want to use DeepSpeed? [yes/NO]: ",
@@ -77,26 +96,183 @@ def get_cluster_input():
is_deepspeed_available()
), "DeepSpeed is not installed => run `pip3 install deepspeed` or build it from source"
deepspeed_config = {}
if distributed_type == DistributedType.DEEPSPEED:
deepspeed_config["zero_stage"] = _ask_field(
"What should be your DeepSpeed's ZeRO optimization stage (0, 1, 2, 3)? [2]: ",
lambda x: int(x),
default=2,
use_deepspeed_config = _ask_field(
"Do you want to specify a json file to a DeepSpeed config? [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
if deepspeed_config["zero_stage"] >= 2:
deepspeed_config["offload_optimizer_device"] = _ask_field(
"Where to offload optimizer states? [NONE/cpu/nvme]: ",
if use_deepspeed_config:
deepspeed_config["deepspeed_config_file"] = _ask_field(
"Please enter the path to the json DeepSpeed config file: ",
lambda x: str(x),
default="none",
)
else:
deepspeed_config["zero_stage"] = _ask_field(
"What should be your DeepSpeed's ZeRO optimization stage (0, 1, 2, 3)? [2]: ",
lambda x: int(x),
default=2,
)
deepspeed_config["gradient_accumulation_steps"] = _ask_field(
"How many gradient accumulation steps you're passing in your script? [1]: ",
if deepspeed_config["zero_stage"] >= 2:
deepspeed_config["offload_optimizer_device"] = _ask_field(
"Where to offload optimizer states? [none/cpu/nvme]: ",
lambda x: str(x),
default="none",
)
deepspeed_config["offload_param_device"] = _ask_field(
"Where to offload parameters? [none/cpu/nvme]: ",
lambda x: str(x),
default="none",
)
deepspeed_config["gradient_accumulation_steps"] = _ask_field(
"How many gradient accumulation steps you're passing in your script? [1]: ",
lambda x: int(x),
default=1,
)
use_gradient_clipping = _ask_field(
"Do you want to use gradient clipping? [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
if use_gradient_clipping:
deepspeed_config["gradient_clipping"] = _ask_field(
"What is the gradient clipping value? [1.0]: ",
lambda x: float(x),
default=1.0,
)
if deepspeed_config["zero_stage"] == 3:
deepspeed_config["zero3_save_16bit_model"] = _ask_field(
"Do you want to save 16-bit model weights when using ZeRO Stage-3? [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
deepspeed_config["zero3_init_flag"] = _ask_field(
"Do you want to enable `deepspeed.zero.Init` when using ZeRO Stage-3 for constructing massive models? [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
if deepspeed_config["zero3_init_flag"]:
if not is_transformers_available():
raise Exception(
"When `zero3_init_flag` is set, it requires Transformers to be installed. "
"Please run `pip3 install transformers`."
)
if num_machines > 1:
launcher_query = "Which type of launcher do you want to use ("
for i, launcher in enumerate(DEEPSPEED_MULTINODE_LAUNCHERS):
launcher_query += f"[{i}] {launcher}, "
launcher_query = launcher_query[:-2] + ")? [0]: "
deepspeed_config["deepspeed_multinode_launcher"] = _ask_field(
launcher_query,
lambda x: DEEPSPEED_MULTINODE_LAUNCHERS[int(x)],
default=DEEPSPEED_MULTINODE_LAUNCHERS[0],
)
if deepspeed_config["deepspeed_multinode_launcher"] != DEEPSPEED_MULTINODE_LAUNCHERS[1]:
deepspeed_config["deepspeed_hostfile"] = _ask_field(
"DeepSpeed configures multi-node compute resources with hostfile. "
"Each row is of the format `hostname slots=[num_gpus]`, e.g., `localhost slots=2`; "
"for more information please refer official [documentation]"
"(https://www.deepspeed.ai/getting-started/#resource-configuration-multi-node). "
"Please specify the location of hostfile: ",
lambda x: str(x),
)
is_exclusion_filter = _ask_field(
"Do you want to specify exclusion filter string? [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
if is_exclusion_filter:
deepspeed_config["deepspeed_exclusion_filter"] = _ask_field(
"DeepSpeed exclusion filter string: ",
lambda x: str(x),
)
is_inclusion_filter = _ask_field(
"Do you want to specify inclusion filter string? [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
if is_inclusion_filter:
deepspeed_config["deepspeed_inclusion_filter"] = _ask_field(
"DeepSpeed inclusion filter string: ",
lambda x: str(x),
)
fsdp_config = {}
if distributed_type in [DistributedType.MULTI_GPU]:
use_fsdp = _ask_field(
"Do you want to use FullyShardedDataParallel? [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
if use_fsdp:
distributed_type = DistributedType.FSDP
if distributed_type == DistributedType.FSDP:
sharding_strategy_query = "What should be your sharding strategy ("
for i, strategy in enumerate(FSDP_SHARDING_STRATEGY):
sharding_strategy_query += f"[{i+1}] {strategy}, "
sharding_strategy_query = sharding_strategy_query[:-2] + ")? [1]: "
fsdp_config["fsdp_sharding_strategy"] = _ask_field(
sharding_strategy_query,
lambda x: int(x),
default=1,
)
fsdp_config["fsdp_offload_params"] = _ask_field(
"Do you want to offload parameters and gradients to CPU? [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
fsdp_wrap_query = "What should be your auto wrap policy ("
for i, wrap_policy in enumerate(FSDP_AUTO_WRAP_POLICY):
fsdp_wrap_query += f"[{i}] {wrap_policy}, "
fsdp_wrap_query = fsdp_wrap_query[:-2] + ")? [0]: "
fsdp_config["fsdp_auto_wrap_policy"] = _ask_field(
fsdp_wrap_query,
lambda x: FSDP_AUTO_WRAP_POLICY[int(x)],
default="TRANSFORMER_BASED_WRAP",
)
if fsdp_config["fsdp_auto_wrap_policy"] == FSDP_AUTO_WRAP_POLICY[0]:
fsdp_config["fsdp_transformer_layer_cls_to_wrap"] = _ask_field(
"What is the transformer layer class name (case-sensitive) to wrap ,e.g, `BertLayer`, `GPTJBlock`, `T5Block` ...? : ",
lambda x: str(x),
)
elif fsdp_config["fsdp_auto_wrap_policy"] == FSDP_AUTO_WRAP_POLICY[1]:
fsdp_config["fsdp_min_num_params"] = _ask_field(
"What should be your FSDP's minimum number of parameters for Default Auto Wrapping Policy? [1e8]: ",
lambda x: int(x),
default=1e8,
)
fsdp_backward_prefetch_query = "What should be your FSDP's backward prefetch policy ("
for i, backward_prefetch_policy in enumerate(FSDP_BACKWARD_PREFETCH):
fsdp_backward_prefetch_query += f"[{i}] {backward_prefetch_policy}, "
fsdp_backward_prefetch_query = fsdp_backward_prefetch_query[:-2] + ")? [0]: "
fsdp_config["fsdp_backward_prefetch_policy"] = _ask_field(
fsdp_backward_prefetch_query,
lambda x: FSDP_BACKWARD_PREFETCH[int(x)],
default="BACKWARD_PRE",
)
fsdp_state_dict_type_query = "What should be your FSDP's state dict type ("
for i, state_dict_type in enumerate(FSDP_STATE_DICT_TYPE):
fsdp_state_dict_type_query += f"[{i}] {state_dict_type}, "
fsdp_state_dict_type_query = fsdp_state_dict_type_query[:-2] + ")? [0]: "
fsdp_config["fsdp_state_dict_type"] = _ask_field(
fsdp_state_dict_type_query,
lambda x: FSDP_STATE_DICT_TYPE[int(x)],
default="FULL_STATE_DICT",
)
if distributed_type == DistributedType.TPU:
main_training_function = _ask_field(
@@ -106,32 +282,67 @@ def get_cluster_input():
else:
main_training_function = "main"
num_processes = _ask_field(
"How many processes in total will you use? [1]: ",
lambda x: int(x),
default=1,
error_message="Please enter an integer.",
)
if distributed_type in [DistributedType.MULTI_CPU, DistributedType.MULTI_GPU, DistributedType.TPU]:
machine_type = str(distributed_type).split(".")[1].replace("MULTI_", "")
if machine_type == "TPU":
machine_type += " cores"
else:
machine_type += "(s)"
num_processes = _ask_field(
f"How many {machine_type} should be used for distributed training? [1]:",
lambda x: int(x),
default=1,
error_message="Please enter an integer.",
)
if distributed_type != DistributedType.TPU:
mixed_precision = _ask_field(
"Do you wish to use FP16 or BF16 (mixed precision)? [NO/fp16/bf16]: ",
lambda x: str(x).lower(),
default="no",
if distributed_type in [DistributedType.MULTI_GPU, DistributedType.NO] and not use_cpu:
gpu_ids = _ask_field(
"What GPU(s) (by id) should be used for training on this machine as a comma-seperated list? [all]:",
default="all",
)
elif distributed_type in [DistributedType.FSDP, DistributedType.DEEPSPEED]:
num_processes = _ask_field(
"How many GPU(s) should be used for distributed training? [1]:",
lambda x: int(x),
default=1,
error_message="Please enter an integer.",
)
else:
num_processes = 1
if distributed_type != DistributedType.TPU:
if distributed_type == DistributedType.DEEPSPEED and use_deepspeed_config:
mixed_precision = "no"
else:
mixed_precision = _ask_field(
"Do you wish to use FP16 or BF16 (mixed precision)? [NO/fp16/bf16]: ",
lambda x: str(x).lower(),
default="no",
)
else:
mixed_precision = "no"
downcast_bf16 = "no"
if distributed_type == DistributedType.TPU and mixed_precision == "bf16":
downcast_bf16 = _ask_field(
"Should `torch.float` be cast as `bfloat16` and `torch.double` remain `float32` on TPUs?", default="no"
)
return ClusterConfig(
compute_environment=ComputeEnvironment.LOCAL_MACHINE,
distributed_type=distributed_type,
num_processes=num_processes,
gpu_ids=gpu_ids,
mixed_precision=mixed_precision,
downcast_bf16=downcast_bf16,
machine_rank=machine_rank,
num_machines=num_machines,
main_process_ip=main_process_ip,
main_process_port=main_process_port,
main_training_function=main_training_function,
deepspeed_config=deepspeed_config,
fsdp_config=fsdp_config,
use_cpu=use_cpu,
rdzv_backend=rdzv_backend,
same_network=same_network,
)
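For orientation, a run through the FSDP prompts above fills `fsdp_config` with entries along these lines before it is stored on the returned `ClusterConfig` (the values are illustrative answers, not defaults taken from this diff):

fsdp_config = {
    "fsdp_sharding_strategy": 1,  # 1-indexed choice into FSDP_SHARDING_STRATEGY
    "fsdp_offload_params": False,
    "fsdp_auto_wrap_policy": "TRANSFORMER_BASED_WRAP",
    "fsdp_transformer_layer_cls_to_wrap": "BertLayer",  # only asked when the wrap policy is transformer-based
    "fsdp_backward_prefetch_policy": "BACKWARD_PRE",
    "fsdp_state_dict_type": "FULL_STATE_DICT",
}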

View File

@ -21,7 +21,9 @@ from enum import Enum
from typing import Optional, Union
import yaml
from accelerate.state import ComputeEnvironment, DistributedType, SageMakerDistributedType
from ...utils import ComputeEnvironment, DistributedType, SageMakerDistributedType
from ...utils.constants import SAGEMAKER_PYTHON_VERSION, SAGEMAKER_PYTORCH_VERSION, SAGEMAKER_TRANSFORMERS_VERSION
hf_cache_home = os.path.expanduser(
@ -122,7 +124,10 @@ class BaseConfig:
if isinstance(self.compute_environment, str):
self.compute_environment = ComputeEnvironment(self.compute_environment)
if isinstance(self.distributed_type, str):
self.distributed_type = DistributedType(self.distributed_type)
if self.compute_environment == ComputeEnvironment.AMAZON_SAGEMAKER:
self.distributed_type = SageMakerDistributedType(self.distributed_type)
else:
self.distributed_type = DistributedType(self.distributed_type)
@dataclass
@ -130,21 +135,39 @@ class ClusterConfig(BaseConfig):
num_processes: int
machine_rank: int = 0
num_machines: int = 1
gpu_ids: Optional[str] = None
main_process_ip: Optional[str] = None
main_process_port: Optional[int] = None
rdzv_backend: Optional[str] = "static"
same_network: Optional[bool] = False
main_training_function: str = "main"
# args for deepspeed_plugin
deepspeed_config: dict = None
# args for fsdp
fsdp_config: dict = None
# args for TPU
downcast_bf16: bool = False
def __post_init__(self):
if self.deepspeed_config is None:
self.deepspeed_config = {}
if self.fsdp_config is None:
self.fsdp_config = {}
return super().__post_init__()
@dataclass
class SageMakerConfig(BaseConfig):
ec2_instance_type: str
iam_role_name: str
image_uri: str
profile: Optional[str] = None
region: str = "us-east-1"
num_machines: int = 1
base_job_name: str = f"accelerate-sagemaker-{num_machines}"
pytorch_version: str = "1.6"
transformers_version: str = "4.4"
pytorch_version: str = SAGEMAKER_PYTORCH_VERSION
transformers_version: str = SAGEMAKER_TRANSFORMERS_VERSION
py_version: str = SAGEMAKER_PYTHON_VERSION
sagemaker_inputs_file: str = None
sagemaker_metrics_file: str = None
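A minimal standalone sketch of the compute-environment-dependent conversion shown in `BaseConfig` above; the helper name is hypothetical and only illustrates the branch, it is not code from the repository:

from accelerate.utils import ComputeEnvironment, DistributedType, SageMakerDistributedType

def convert_distributed_type(compute_environment, distributed_type):
    # SageMaker configs carry a SageMakerDistributedType; every other compute environment
    # keeps using the regular DistributedType enum.
    if isinstance(distributed_type, str):
        if compute_environment == ComputeEnvironment.AMAZON_SAGEMAKER:
            return SageMakerDistributedType(distributed_type)
        return DistributedType(distributed_type)
    return distributed_type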

View File

@ -14,7 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from accelerate.state import ComputeEnvironment, DistributedType, SageMakerDistributedType
from ...utils.dataclasses import ComputeEnvironment, DistributedType, SageMakerDistributedType
def _ask_field(input_text, convert_value=None, default=None, error_message=None):
@ -37,7 +37,7 @@ def _convert_compute_environment(value):
def _convert_distributed_mode(value):
value = int(value)
return DistributedType(["NO", "MULTI_CPU", "MULTI_GPU", "TPU"][value])
return DistributedType(["NO", "MULTI_CPU", "MULTI_GPU", "TPU", "MPS"][value])
def _convert_sagemaker_distributed_mode(value):

View File

@ -16,11 +16,11 @@
import json
import os
from accelerate.state import ComputeEnvironment, SageMakerDistributedType
from accelerate.utils import is_boto3_available
from ...utils.constants import SAGEMAKER_PARALLEL_EC2_INSTANCES
from ...utils.dataclasses import ComputeEnvironment, SageMakerDistributedType
from ...utils.imports import is_boto3_available
from .config_args import SageMakerConfig
from .config_utils import _ask_field, _convert_sagemaker_distributed_mode
from .config_utils import _ask_field, _convert_sagemaker_distributed_mode, _convert_yes_no_to_bool
if is_boto3_available():
@ -120,24 +120,68 @@ def get_sagemaker_input():
print(f'Accelerate will create an iam role "{iam_role_name}" using the provided credentials')
_create_iam_role_for_sagemaker(iam_role_name)
is_custom_docker_image = _ask_field(
"Do you want to use a custom Docker image? [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
docker_image = None
if is_custom_docker_image:
docker_image = _ask_field("Enter your Docker image: ", lambda x: str(x).lower())
is_sagemaker_inputs_enabled = _ask_field(
"Do you want to provide SageMaker input channels with data locations? [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
sagemaker_inputs_file = None
if is_sagemaker_inputs_enabled:
sagemaker_inputs_file = _ask_field(
"Enter the path to the SageMaker inputs TSV file with columns (channel_name, data_location): ",
lambda x: str(x).lower(),
)
is_sagemaker_metrics_enabled = _ask_field(
"Do you want to enable SageMaker metrics? [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
sagemaker_metrics_file = None
if is_sagemaker_metrics_enabled:
sagemaker_metrics_file = _ask_field(
"Enter the path to the SageMaker metrics TSV file with columns (metric_name, metric_regex): ",
lambda x: str(x).lower(),
)
distributed_type = _ask_field(
"Which type of machine are you using? ([0] No distributed training, [1] data parallelism, [2] model parallelism): ",
"What is the distributed mode? ([0] No distributed training, [1] data parallelism): ",
_convert_sagemaker_distributed_mode,
error_message="Please enter 0, 1 or 2",
error_message="Please enter 0 or 1",
)
# using the best two instances for single-gpu training or multi-gpu -> can turn into question to make it more diverse
ec2_instance_type = "ml.p3.2xlarge" if distributed_type == SageMakerDistributedType.NO else "ml.p3dn.24xlarge"
ec2_instance_query = "Which EC2 instance type do you want to use for your training "
if distributed_type != SageMakerDistributedType.NO:
ec2_instance_query += "("
for i, instance_type in enumerate(SAGEMAKER_PARALLEL_EC2_INSTANCES):
ec2_instance_query += f"[{i}] {instance_type}, "
ec2_instance_query = ec2_instance_query[:-2] + ")? [0]: "
ec2_instance_type = _ask_field(ec2_instance_query, lambda x: SAGEMAKER_PARALLEL_EC2_INSTANCES[int(x)])
else:
ec2_instance_query += "? [ml.p3.2xlarge]:"
ec2_instance_type = _ask_field(ec2_instance_query, lambda x: str(x).lower(), default="ml.p3.2xlarge")
num_machines = 1
if (
distributed_type == SageMakerDistributedType.DATA_PARALLEL
or distributed_type == SageMakerDistributedType.MODEL_PARALLEL
):
raise NotImplementedError("Model or Data Parallelism is not implemented yet. We are working on it")
num_machines = _ask_field(
"How many machines do you want to use? [2]: ",
"How many machines do you want to use? [1]: ",
lambda x: int(x),
default=2,
default=1,
)
mixed_precision = _ask_field(
@ -147,12 +191,16 @@ def get_sagemaker_input():
)
return SageMakerConfig(
image_uri=docker_image,
compute_environment=ComputeEnvironment.AMAZON_SAGEMAKER,
distributed_type=distributed_type,
use_cpu=False,
ec2_instance_type=ec2_instance_type,
profile=aws_profile,
region=aws_region,
iam_role_name=iam_role_name,
mixed_precision=mixed_precision,
num_machines=num_machines,
sagemaker_inputs_file=sagemaker_inputs_file,
sagemaker_metrics_file=sagemaker_metrics_file,
)
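The SageMaker inputs and metrics files requested by the prompts above are plain tab-separated tables whose first row is a header. A small sketch of producing a compatible inputs file (the file name and channel values are hypothetical):

# Header row first, then one (channel_name, data_location) pair per line, tab-separated.
with open("sagemaker_inputs.tsv", "w") as f:
    f.write("channel_name\tdata_location\n")
    f.write("train\ts3://my-bucket/datasets/train\n")
    f.write("test\ts3://my-bucket/datasets/test\n")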

View File

@ -16,6 +16,7 @@
import argparse
import importlib
import logging
import os
import subprocess
import sys
@ -24,10 +25,40 @@ from ast import literal_eval
from pathlib import Path
from typing import Dict, List
import torch
import psutil
from accelerate.commands.config import default_config_file, load_config_from_file
from accelerate.commands.config.config_args import SageMakerConfig
from accelerate.state import ComputeEnvironment, DistributedType
from accelerate.utils import PrepareForLaunch, is_sagemaker_available
from accelerate.state import get_int_from_env
from accelerate.utils import (
ComputeEnvironment,
DistributedType,
PrecisionType,
PrepareForLaunch,
_filter_args,
is_deepspeed_available,
is_rich_available,
is_sagemaker_available,
is_torch_version,
patch_environment,
)
from accelerate.utils.constants import DEEPSPEED_MULTINODE_LAUNCHERS
from accelerate.utils.dataclasses import SageMakerDistributedType
if is_rich_available():
from rich import get_console
from rich.logging import RichHandler
FORMAT = "%(message)s"
logging.basicConfig(format=FORMAT, datefmt="[%X]", handlers=[RichHandler()])
if is_torch_version(">=", "1.9.0"):
import torch.distributed.run as distrib_run
logger = logging.getLogger(__name__)
def launch_command_parser(subparsers=None):
@ -45,18 +76,170 @@ def launch_command_parser(subparsers=None):
action="store_true",
help="Whether or not this should launch a distributed GPU training.",
)
parser.add_argument(
"--use_mps_device",
default=False,
action="store_true",
help="Whether or not this should use the MPS-enabled GPU device on macOS machines.",
)
parser.add_argument(
"--use_deepspeed",
default=False,
action="store_true",
help="Whether to use deepspeed.",
)
parser.add_argument(
"--deepspeed_config_file",
default=None,
type=str,
help="DeepSpeed config file.",
)
parser.add_argument(
"--zero_stage",
default=None,
type=int,
help="DeepSpeed's ZeRO optimization stage (useful only when `use_deepspeed` flag is passed).",
)
parser.add_argument(
"--offload_optimizer_device",
default=None,
type=str,
help="Decides where (none|cpu|nvme) to offload optimizer states (useful only when `use_deepspeed` flag is passed).",
)
parser.add_argument(
"--offload_param_device",
default=None,
type=str,
help="Decides where (none|cpu|nvme) to offload parameters (useful only when `use_deepspeed` flag is passed).",
)
parser.add_argument(
"--gradient_accumulation_steps",
default=None,
type=int,
help="Number of gradient_accumulation_steps used in your training script (useful only when `use_deepspeed` flag is passed).",
)
parser.add_argument(
"--gradient_clipping",
default=None,
type=float,
help="gradient clipping value used in your training script (useful only when `use_deepspeed` flag is passed).",
)
parser.add_argument(
"--zero3_init_flag",
default=None,
type=str,
help="Decides whether (true|false) to enable `deepspeed.zero.Init` for constructing massive models. "
"Only applicable with DeepSpeed ZeRO Stage-3.",
)
parser.add_argument(
"--zero3_save_16bit_model",
default=None,
type=str,
help="Decides whether (true|false) to save 16-bit model weights when using ZeRO Stage-3. "
"Only applicable with DeepSpeed ZeRO Stage-3.",
)
parser.add_argument(
"--deepspeed_hostfile",
default=None,
type=str,
help="DeepSpeed hostfile for configuring multi-node compute resources.",
)
parser.add_argument(
"--deepspeed_exclusion_filter",
default=None,
type=str,
help="DeepSpeed exclusion filter string when using multi-node setup.",
)
parser.add_argument(
"--deepspeed_inclusion_filter",
default=None,
type=str,
help="DeepSpeed inclusion filter string when using multi-node setup.",
)
parser.add_argument(
"--deepspeed_multinode_launcher",
default=None,
type=str,
help="DeepSpeed multi-node launcher to use.",
)
parser.add_argument(
"--use_fsdp",
default=False,
action="store_true",
help="Whether to use fsdp.",
)
parser.add_argument(
"--fsdp_offload_params",
default="false",
type=str,
help="Decides whether (true|false) to offload parameters and gradients to CPU. (useful only when `use_fsdp` flag is passed).",
)
parser.add_argument(
"--fsdp_min_num_params",
type=int,
default=1e8,
help="FSDP's minimum number of parameters for Default Auto Wrapping. (useful only when `use_fsdp` flag is passed).",
)
parser.add_argument(
"--fsdp_sharding_strategy",
type=int,
default=1,
help="FSDP's Sharding Strategy. (useful only when `use_fsdp` flag is passed).",
)
parser.add_argument(
"--fsdp_auto_wrap_policy",
type=str,
default=None,
help="FSDP's auto wrap policy. (useful only when `use_fsdp` flag is passed).",
)
parser.add_argument(
"--fsdp_transformer_layer_cls_to_wrap",
default=None,
type=str,
help="Transformer layer class name (case-sensitive) to wrap, e.g., `BertLayer`, `GPTJBlock`, `T5Block` .... "
"(useful only when `use_fsdp` flag is passed).",
)
parser.add_argument(
"--fsdp_backward_prefetch_policy",
default=None,
type=str,
help="FSDP's backward prefetch policy. (useful only when `use_fsdp` flag is passed).",
)
parser.add_argument(
"--fsdp_state_dict_type",
default=None,
type=str,
help="FSDP's state dict type. (useful only when `use_fsdp` flag is passed).",
)
parser.add_argument(
"--offload_params",
default=None,
type=str,
help="This argument is deprecated. Use `fsdp_offload_params` instead.",
)
parser.add_argument(
"--min_num_params",
type=int,
default=None,
help="This argument is deprecated. Use `fsdp_min_num_params` instead.",
)
parser.add_argument(
"--sharding_strategy",
type=int,
default=None,
help="This argument is deprecated. Use `fsdp_sharding_strategy` instead.",
)
parser.add_argument(
"--transformer_layer_cls_to_wrap",
default=None,
type=str,
help="This argument is deprecated. Use `fsdp_transformer_layer_cls_to_wrap` instead.",
)
parser.add_argument(
"--tpu", default=False, action="store_true", help="Whether or not this should launch a TPU training."
)
parser.add_argument(
"--mixed_precision",
default="no",
type=str,
choices=["no", "fp16", "bf16"],
help="Whether or not to use mixed precision training. "
@ -76,6 +259,11 @@ def launch_command_parser(subparsers=None):
parser.add_argument(
"--num_machines", type=int, default=None, help="The total number of machines used in this training."
)
parser.add_argument(
"--gpu_ids",
default=None,
help="What GPUs (by id) should be used for training on this machine as a comma-separated list",
)
parser.add_argument(
"--machine_rank", type=int, default=None, help="The rank of the machine on which this script is launched."
)
@ -86,12 +274,36 @@ def launch_command_parser(subparsers=None):
default=None,
help="The port to use to communicate with the machine of rank 0.",
)
# Rendezvous related arguments
parser.add_argument(
"--rdzv_conf",
type=str,
default="",
help="Additional rendezvous configuration (<key1>=<value1>,<key2>=<value2>,...).",
)
parser.add_argument(
"--max_restarts",
type=int,
default=0,
help="Maximum number of worker group restarts before failing.",
)
parser.add_argument(
"--monitor_interval",
type=float,
default=5,
help="Interval, in seconds, to monitor the state of workers.",
)
parser.add_argument(
"--main_training_function",
type=str,
default=None,
help="The name of the main function to be executed in your script (only for TPU training).",
)
parser.add_argument(
"--downcast_bf16",
action="store_true",
help="Whether, when using bf16 precision on TPUs, both float and double tensors are cast to bfloat16 or double tensors remain as float32",
)
parser.add_argument(
"-m",
"--module",
@ -103,6 +315,12 @@ def launch_command_parser(subparsers=None):
action="store_true",
help="Skip prepending the training script with 'python' - just execute it directly. Useful when the script is not a Python script.",
)
parser.add_argument(
"--num_cpu_threads_per_process",
type=int,
default=None,
help="The number of CPU threads per process. Can be tuned for optimal performance.",
)
parser.add_argument(
"--aws_access_key_id",
type=str,
@ -113,7 +331,12 @@ def launch_command_parser(subparsers=None):
"--aws_secret_access_key",
type=str,
default=None,
help="The AWS_SECRET_ACCESS_KEY used to launch the Amazon SageMaker training job",
help="The AWS_SECRET_ACCESS_KEY used to launch the Amazon SageMaker training job.",
)
parser.add_argument(
"--debug",
action="store_true",
help="Whether to print out the torch.distributed stack trace when something fails.",
)
parser.add_argument(
"training_script",
@ -123,24 +346,6 @@ def launch_command_parser(subparsers=None):
"script."
),
)
parser.add_argument(
"--zero_stage",
default=None,
type=int,
help="DeepSpeed's ZeRO optimization stage (useful only when `use_deepspeed` flag is passed).",
)
parser.add_argument(
"--offload_optimizer_device",
default=None,
type=str,
help="Decides where (none|cpu|nvme) to offload optimizer states (useful only when `use_deepspeed` flag is passed).",
)
parser.add_argument(
"--gradient_accumulation_steps",
default=None,
type=int,
help="No of gradient_accumulation_steps used in your training script (useful only when `use_deepspeed` flag is passed).",
)
# Other arguments of the training scripts
parser.add_argument("training_script_args", nargs=argparse.REMAINDER, help="Arguments of the training script.")
@ -162,17 +367,32 @@ def simple_launcher(args):
cmd.extend(args.training_script_args)
current_env = os.environ.copy()
current_env["USE_CPU"] = str(args.cpu)
current_env["USE_CPU"] = str(args.cpu or args.use_cpu)
current_env["USE_MPS_DEVICE"] = str(args.use_mps_device)
if args.use_mps_device:
current_env["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"
elif args.gpu_ids != "all":
current_env["CUDA_VISIBLE_DEVICES"] = args.gpu_ids
if args.num_machines > 1:
current_env["MASTER_ADDR"] = args.main_process_ip
current_env["MASTER_PORT"] = str(args.main_process_port)
elif args.num_processes > 1:
current_env["MASTER_ADDR"] = args.main_process_ip if args.main_process_ip is not None else "127.0.0.1"
current_env["MASTER_PORT"] = str(args.main_process_port) if args.main_process_port is not None else "29500"
mixed_precision = args.mixed_precision.lower()
if mixed_precision not in ["no", "fp16", "bf16"]:
raise ValueError(f"Unknown mixed_precision mode: {mixed_precision}. Choose between 'no', 'fp16' and 'bf16'.")
try:
mixed_precision = PrecisionType(args.mixed_precision.lower())
except ValueError:
raise ValueError(
f"Unknown mixed_precision mode: {args.mixed_precision.lower()}. Choose between {PrecisionType.list()}."
)
if args.fp16:
warnings.warn('--fp16 flag is deprecated. Use "--mixed_precision fp16" instead.', DeprecationWarning)
mixed_precision = "fp16"
current_env["MIXED_PRECISION"] = str(mixed_precision)
current_env["OMP_NUM_THREADS"] = str(args.num_cpu_threads_per_process)
process = subprocess.Popen(cmd, env=current_env)
process.wait()
@ -181,111 +401,234 @@ def simple_launcher(args):
def multi_gpu_launcher(args):
cmd = [sys.executable, "-m", "torch.distributed.launch", "--use_env"]
if args.num_machines > 1:
cmd.extend(
[
"--nproc_per_node",
str(args.num_processes // args.num_machines),
"--nnodes",
str(args.num_machines),
"--node_rank",
str(args.machine_rank),
"--master_addr",
args.main_process_ip,
"--master_port",
str(args.main_process_port),
]
)
num_processes = getattr(args, "num_processes")
num_machines = getattr(args, "num_machines")
main_process_ip = getattr(args, "main_process_ip")
main_process_port = getattr(args, "main_process_port")
if num_machines > 1:
setattr(args, "nproc_per_node", str(num_processes // num_machines))
setattr(args, "nnodes", str(num_machines))
setattr(args, "node_rank", int(args.machine_rank))
if getattr(args, "same_network"):
setattr(args, "master_addr", str(main_process_ip))
setattr(args, "master_port", str(main_process_port))
else:
setattr(args, "rdzv_endpoint", f"{main_process_ip}:{main_process_port}")
else:
cmd.extend(["--nproc_per_node", str(args.num_processes)])
if args.main_process_port is not None:
cmd.extend(["--master_port", str(args.main_process_port)])
setattr(args, "nproc_per_node", str(num_processes))
if main_process_port is not None:
setattr(args, "master_port", str(main_process_port))
if args.module and args.no_python:
raise ValueError("--module and --no_python cannot be used together")
elif args.module:
cmd.append("--module")
setattr(args, "module", True)
elif args.no_python:
cmd.append("--no_python")
cmd.append(args.training_script)
cmd.extend(args.training_script_args)
setattr(args, "no_python", True)
current_env = os.environ.copy()
gpu_ids = getattr(args, "gpu_ids")
if gpu_ids != "all":
current_env["CUDA_VISIBLE_DEVICES"] = gpu_ids
mixed_precision = args.mixed_precision.lower()
if mixed_precision not in ["no", "fp16", "bf16"]:
raise ValueError(f"Unknown mixed_precision mode: {mixed_precision}. Choose between 'no', 'fp16' and 'bf16'.")
try:
mixed_precision = PrecisionType(mixed_precision)
except ValueError:
raise ValueError(f"Unknown mixed_precision mode: {mixed_precision}. Choose between {PrecisionType.list()}.")
if args.fp16:
warnings.warn('--fp16 flag is deprecated. Use "--mixed_precision fp16" instead.', DeprecationWarning)
mixed_precision = "fp16"
current_env["MIXED_PRECISION"] = str(mixed_precision)
if args.use_fsdp:
if args.sharding_strategy is not None:
warnings.warn(
"`sharding_strategy` is deprecated and will be removed in version 0.13.0 of 🤗 Accelerate. Use"
" `fsdp_sharding_strategy` instead",
FutureWarning,
)
args.fsdp_sharding_strategy = args.sharding_strategy
process = subprocess.Popen(cmd, env=current_env)
process.wait()
if process.returncode != 0:
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
if args.offload_params is not None:
warnings.warn(
"`offload_params` is deprecated and will be removed in version 0.13.0 of 🤗 Accelerate. Use"
" `fsdp_offload_params` instead",
FutureWarning,
)
args.fsdp_offload_params = args.offload_params
if args.min_num_params is not None:
warnings.warn(
"`min_num_params` is deprecated and will be removed in version 0.13.0 of 🤗 Accelerate. Use"
" `fsdp_min_num_params` instead",
FutureWarning,
)
args.fsdp_min_num_params = args.min_num_params
if args.transformer_layer_cls_to_wrap is not None:
warnings.warn(
"`transformer_layer_cls_to_wrap` is deprecated and will be removed in version 0.13.0 of 🤗 Accelerate. Use"
" `fsdp_transformer_layer_cls_to_wrap` instead",
FutureWarning,
)
args.fsdp_transformer_layer_cls_to_wrap = args.transformer_layer_cls_to_wrap
current_env["USE_FSDP"] = "true"
current_env["FSDP_SHARDING_STRATEGY"] = str(args.fsdp_sharding_strategy)
current_env["FSDP_OFFLOAD_PARAMS"] = str(args.fsdp_offload_params).lower()
current_env["FSDP_MIN_NUM_PARAMS"] = str(args.fsdp_min_num_params)
if args.fsdp_auto_wrap_policy is not None:
current_env["FSDP_AUTO_WRAP_POLICY"] = str(args.fsdp_auto_wrap_policy)
if args.fsdp_transformer_layer_cls_to_wrap is not None:
current_env["FSDP_TRANSFORMER_CLS_TO_WRAP"] = str(args.fsdp_transformer_layer_cls_to_wrap)
if args.fsdp_backward_prefetch_policy is not None:
current_env["FSDP_BACKWARD_PREFETCH"] = str(args.fsdp_backward_prefetch_policy)
if args.fsdp_state_dict_type is not None:
current_env["FSDP_STATE_DICT_TYPE"] = str(args.fsdp_state_dict_type)
current_env["OMP_NUM_THREADS"] = str(args.num_cpu_threads_per_process)
if is_torch_version("<", "1.9.0"):
raise NotImplementedError("Multi-node training requires pytorch>=1.9.0")
debug = getattr(args, "debug", False)
args = _filter_args(args)
with patch_environment(**current_env):
try:
distrib_run.run(args)
except:
if debug:
console = get_console()
console.print("\n[bold red]Using --debug, `torch.distributed` Stack Trace:[/bold red]")
console.print_exception(suppress=[__file__], show_locals=False)
def deepspeed_launcher(args):
cmd = ["deepspeed"]
if args.num_machines > 1:
cmd.extend(
[
"--num_gpus",
str(args.num_processes // args.num_machines),
"--num_nodes",
str(args.num_machines),
"--node_rank",
str(args.machine_rank),
"--master_addr",
args.main_process_ip,
"--master_port",
str(args.main_process_port),
]
)
if not is_deepspeed_available():
raise ImportError("DeepSpeed is not installed => run `pip3 install deepspeed` or build it from source.")
num_processes = getattr(args, "num_processes")
num_machines = getattr(args, "num_machines")
main_process_ip = getattr(args, "main_process_ip")
main_process_port = getattr(args, "main_process_port")
if num_machines > 1 and args.deepspeed_multinode_launcher != DEEPSPEED_MULTINODE_LAUNCHERS[1]:
cmd = ["deepspeed", "--no_local_rank"]
cmd.extend(["--hostfile", str(args.deepspeed_hostfile), "--launcher", str(args.deepspeed_multinode_launcher)])
if args.deepspeed_exclusion_filter is not None:
cmd.extend(
[
"--exclude",
str(args.deepspeed_exclusion_filter),
]
)
elif args.deepspeed_inclusion_filter is not None:
cmd.extend(
[
"--include",
str(args.deepspeed_inclusion_filter),
]
)
else:
cmd.extend(["--num_gpus", str(args.num_processes // args.num_machines)])
if args.module and args.no_python:
raise ValueError("--module and --no_python cannot be used together")
elif args.module:
cmd.append("--module")
elif args.no_python:
cmd.append("--no_python")
cmd.append(args.training_script)
cmd.extend(args.training_script_args)
elif num_machines > 1 and args.deepspeed_multinode_launcher == DEEPSPEED_MULTINODE_LAUNCHERS[1]:
setattr(args, "nproc_per_node", str(num_processes // num_machines))
setattr(args, "nnodes", str(num_machines))
setattr(args, "node_rank", int(args.machine_rank))
if getattr(args, "same_network"):
setattr(args, "master_addr", str(main_process_ip))
setattr(args, "master_port", str(main_process_port))
else:
setattr(args, "rdzv_endpoint", f"{main_process_ip}:{main_process_port}")
else:
cmd.extend(["--num_gpus", str(args.num_processes)])
setattr(args, "nproc_per_node", str(num_processes))
if main_process_port is not None:
setattr(args, "master_port", str(main_process_port))
if args.module and args.no_python:
raise ValueError("--module and --no_python cannot be used together")
elif args.module:
cmd.append("--module")
setattr(args, "module", True)
elif args.no_python:
cmd.append("--no_python")
cmd.append(args.training_script)
cmd.extend(args.training_script_args)
setattr(args, "no_python", True)
current_env = os.environ.copy()
mixed_precision = args.mixed_precision.lower()
if mixed_precision not in ["no", "fp16", "bf16"]:
raise ValueError(f"Unknown mixed_precision mode: {mixed_precision}. Choose between 'no', 'fp16' and 'bf16'.")
gpu_ids = getattr(args, "gpu_ids")
if gpu_ids != "all":
current_env["CUDA_VISIBLE_DEVICES"] = gpu_ids
try:
mixed_precision = PrecisionType(args.mixed_precision.lower())
except ValueError:
raise ValueError(
f"Unknown mixed_precision mode: {args.mixed_precision.lower()}. Choose between {PrecisionType.list()}."
)
if args.fp16:
warnings.warn('--fp16 flag is deprecated. Use "--mixed_precision fp16" instead.', DeprecationWarning)
mixed_precision = "fp16"
current_env["PYTHONPATH"] = sys.executable
current_env["MIXED_PRECISION"] = str(mixed_precision)
current_env["USE_DEEPSPEED"] = "true"
current_env["DEEPSPEED_ZERO_STAGE"] = str(args.zero_stage)
current_env["GRADIENT_ACCUMULATION_STEPS"] = str(args.gradient_accumulation_steps)
current_env["DEEPSPEED_OFFLOAD_OPTIMIZER_DEVICE"] = str(args.offload_optimizer_device)
current_env["GRADIENT_CLIPPING"] = str(args.gradient_clipping).lower()
current_env["DEEPSPEED_OFFLOAD_OPTIMIZER_DEVICE"] = str(args.offload_optimizer_device).lower()
current_env["DEEPSPEED_OFFLOAD_PARAM_DEVICE"] = str(args.offload_param_device).lower()
current_env["DEEPSPEED_ZERO3_INIT"] = str(args.zero3_init_flag).lower()
current_env["DEEPSPEED_ZERO3_SAVE_16BIT_MODEL"] = str(args.zero3_save_16bit_model).lower()
if args.deepspeed_config_file is not None:
current_env["DEEPSPEED_CONFIG_FILE"] = str(args.deepspeed_config_file)
process = subprocess.Popen(cmd, env=current_env)
process.wait()
if process.returncode != 0:
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
if args.num_machines > 1 and args.deepspeed_multinode_launcher != DEEPSPEED_MULTINODE_LAUNCHERS[1]:
with open(".deepspeed_env", "a") as f:
for key, value in current_env.items():
if ";" in value or " " in value:
continue
f.write(f"{key}={value}\n")
process = subprocess.Popen(cmd, env=current_env)
process.wait()
if process.returncode != 0:
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
else:
if is_torch_version("<", "1.9.0"):
raise NotImplementedError("Multi-node training requires pytorch>=1.9.0")
debug = getattr(args, "debug", False)
args = _filter_args(args)
with patch_environment(**current_env):
try:
distrib_run.run(args)
except:
if debug:
console = get_console()
console.print("\n[bold red]Using --debug, `torch.distributed` Stack Trace:[/bold red]")
console.print_exception(suppress=[__file__], show_locals=False)
def tpu_launcher(args):
import torch_xla.distributed.xla_multiprocessing as xmp
current_env = {}
if args.no_python:
raise ValueError("--no_python cannot be used with TPU launcher")
if args.mixed_precision == "bf16":
if args.downcast_bf16:
current_env["XLA_USE_BF16"] = "0"
current_env["XLA_DOWNCAST_BF16"] = "1"
else:
current_env["XLA_USE_BF16"] = "1"
current_env["XLA_DOWNCAST_BF16"] = "0"
if args.module:
mod_name = args.training_script
else:
@ -305,7 +648,8 @@ def tpu_launcher(args):
sys.argv = [mod.__file__] + args.training_script_args
main_function = getattr(mod, args.main_training_function)
xmp.spawn(PrepareForLaunch(main_function), args=(), nprocs=args.num_processes)
with patch_environment(**current_env):
xmp.spawn(PrepareForLaunch(main_function), args=(), nprocs=args.num_processes)
def _convert_nargs_to_dict(nargs: List[str]) -> Dict[str, str]:
@ -388,29 +732,68 @@ def sagemaker_launcher(sagemaker_config: SageMakerConfig, args):
print("Converting Arguments to Hyperparameters")
hyperparameters = _convert_nargs_to_dict(args.training_script_args)
mixed_precision = args.mixed_precision.lower()
if mixed_precision not in ["no", "fp16", "bf16"]:
raise ValueError(f"Unknown mixed_precision mode: {mixed_precision}. Choose between 'no', 'fp16' and 'bf16'.")
try:
mixed_precision = PrecisionType(args.mixed_precision.lower())
except ValueError:
raise ValueError(
f"Unknown mixed_precision mode: {args.mixed_precision.lower()}. Choose between {PrecisionType.list()}."
)
if args.fp16:
warnings.warn('--fp16 flag is deprecated. Use "--mixed_precision fp16" instead.', DeprecationWarning)
mixed_precision = "fp16"
# Environment variables to be set for use during training job
environment = {"MIXED_PRECISION": str(mixed_precision)}
environment = {
"USE_SAGEMAKER": "true",
"MIXED_PRECISION": str(mixed_precision),
"SAGEMAKER_DISTRIBUTED_TYPE": sagemaker_config.distributed_type.value,
}
# configure distribution set up
distribution = None # TODO: not yet implemented
distribution = None
if sagemaker_config.distributed_type == SageMakerDistributedType.DATA_PARALLEL:
distribution = {"smdistributed": {"dataparallel": {"enabled": True}}}
# configure sagemaker inputs
sagemaker_inputs = None
if sagemaker_config.sagemaker_inputs_file is not None:
print(f"Loading SageMaker Inputs from {sagemaker_config.sagemaker_inputs_file} file")
sagemaker_inputs = {}
with open(sagemaker_config.sagemaker_inputs_file) as file:
for i, line in enumerate(file):
if i == 0:
continue
l = line.split("\t")
sagemaker_inputs[l[0]] = l[1].strip()
print(f"Loaded SageMaker Inputs: {sagemaker_inputs}")
# configure sagemaker metrics
sagemaker_metrics = None
if sagemaker_config.sagemaker_metrics_file is not None:
print(f"Loading SageMaker Metrics from {sagemaker_config.sagemaker_metrics_file} file")
sagemaker_metrics = []
with open(sagemaker_config.sagemaker_metrics_file) as file:
for i, line in enumerate(file):
if i == 0:
continue
l = line.split("\t")
metric_dict = {
"Name": l[0],
"Regex": l[1].strip(),
}
sagemaker_metrics.append(metric_dict)
print(f"Loaded SageMaker Metrics: {sagemaker_metrics}")
# configure session
print("Creating Estimator")
huggingface_estimator = HuggingFace(
image_uri=sagemaker_config.image_uri,
entry_point=entry_point,
source_dir=source_dir,
role=sagemaker_config.iam_role_name,
transformers_version="4.4",
pytorch_version="1.6",
py_version="py36",
transformers_version=sagemaker_config.transformers_version,
pytorch_version=sagemaker_config.pytorch_version,
py_version=sagemaker_config.py_version,
base_job_name=sagemaker_config.base_job_name,
instance_count=sagemaker_config.num_machines,
instance_type=sagemaker_config.ec2_instance_type,
@ -418,25 +801,43 @@ def sagemaker_launcher(sagemaker_config: SageMakerConfig, args):
distribution=distribution,
hyperparameters=hyperparameters,
environment=environment,
metric_definitions=sagemaker_metrics,
)
huggingface_estimator.fit()
huggingface_estimator.fit(inputs=sagemaker_inputs)
print(f"You can find your model data at: {huggingface_estimator.model_data}")
def launch_command(args):
# Sanity checks
if sum([args.multi_gpu, args.tpu, args.use_deepspeed]) > 1:
raise ValueError("You can only pick one between `--multi_gpu`, `--use_deepspeed`, `--tpu`.")
if sum([args.multi_gpu, args.tpu, args.use_deepspeed, args.use_fsdp]) > 1:
raise ValueError("You can only pick one between `--multi_gpu`, `--use_deepspeed`, `--tpu`, `--use_fsdp`.")
defaults = None
warned = []
# Get the default from the config file.
if args.config_file is not None or os.path.isfile(default_config_file) and not args.cpu:
defaults = load_config_from_file(args.config_file)
if not args.multi_gpu and not args.tpu and not args.use_deepspeed:
if (
not args.multi_gpu
and not args.tpu
and not args.use_deepspeed
and not args.use_fsdp
and not args.use_mps_device
):
args.use_deepspeed = defaults.distributed_type == DistributedType.DEEPSPEED
args.multi_gpu = defaults.distributed_type == DistributedType.MULTI_GPU
args.tpu = defaults.distributed_type == DistributedType.TPU
args.use_fsdp = defaults.distributed_type == DistributedType.FSDP
args.use_mps_device = defaults.distributed_type == DistributedType.MPS
if not args.use_mps_device:
if args.gpu_ids is None:
if defaults.gpu_ids is not None:
args.gpu_ids = defaults.gpu_ids
else:
args.gpu_ids = "all"
if len(args.gpu_ids.split(",")) < 2 and args.multi_gpu and (args.gpu_ids != "all"):
args.multi_gpu = False
if defaults.compute_environment == ComputeEnvironment.LOCAL_MACHINE:
# Update args with the defaults
for name, attr in defaults.__dict__.items():
@ -444,6 +845,11 @@ def launch_command(args):
for k in defaults.deepspeed_config:
if getattr(args, k) is None:
setattr(args, k, defaults.deepspeed_config[k])
for k in defaults.fsdp_config:
arg_to_set = k
if "fsdp" not in arg_to_set:
arg_to_set = "fsdp_" + arg_to_set
setattr(args, arg_to_set, defaults.fsdp_config[k])
continue
# Those args are handled separately
@ -452,7 +858,6 @@ def launch_command(args):
and getattr(args, name, None) is None
):
setattr(args, name, attr)
if not args.mixed_precision:
if args.fp16:
args.mixed_precision = "fp16"
@ -460,11 +865,41 @@ def launch_command(args):
args.mixed_precision = defaults.mixed_precision
else:
if args.num_processes is None:
args.num_processes = 1
args.num_processes = torch.cuda.device_count() if args.multi_gpu else 1
warned.append(f"\t`--num_processes` was set to a value of `{args.num_processes}`")
if args.num_machines is None:
warned.append("\t`--num_machines` was set to a value of `1`")
args.num_machines = 1
if args.mixed_precision is None:
warned.append("\t`--mixed_precision` was set to a value of `'no'`")
args.mixed_precision = "no"
if not hasattr(args, "use_cpu"):
args.use_cpu = args.cpu
if args.num_cpu_threads_per_process is None:
local_size = get_int_from_env(
["MPI_LOCALNRANKS", "OMPI_COMM_WORLD_LOCAL_SIZE", "MV2_COMM_WORLD_LOCAL_SIZE"], 1
)
args.num_cpu_threads_per_process = int(psutil.cpu_count(logical=False) / local_size)
if args.num_cpu_threads_per_process == 0:
args.num_cpu_threads_per_process = 1
warned.append(
f"\t`--num_cpu_threads_per_process` was set to `{args.num_cpu_threads_per_process}` to improve out-of-box performance"
)
if any(warned):
message = "The following values were not passed to `accelerate launch` and had defaults used instead:\n"
message += "\n".join(warned)
message += (
"\nTo avoid this warning pass in values for each of the problematic parameters or run `accelerate config`."
)
logger.warn(message)
# Use the proper launcher
if args.use_deepspeed and not args.cpu:
deepspeed_launcher(args)
elif args.use_fsdp and not args.cpu:
multi_gpu_launcher(args)
elif args.multi_gpu and not args.cpu:
multi_gpu_launcher(args)
elif args.tpu and not args.cpu:

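One detail from `launch_command` above worth calling out: when `--num_cpu_threads_per_process` is not passed, the default divides the machine's physical core count by the number of ranks on the node, and the launchers export it as `OMP_NUM_THREADS`. A standalone sketch of that heuristic (the helper name is hypothetical, not part of the CLI):

import psutil

def default_cpu_threads(local_ranks: int = 1) -> int:
    # e.g. 16 physical cores and 4 local ranks -> 4 OpenMP threads per process
    threads = int(psutil.cpu_count(logical=False) / max(local_ranks, 1))
    return max(threads, 1)  # mirrors the fallback to 1 when the division rounds down to 0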
View File

@ -43,7 +43,7 @@ def test_command_parser(subparsers=None):
def test_command(args):
script_name = os.path.sep.join(__file__.split(os.path.sep)[:-2] + ["test_utils", "test_script.py"])
script_name = os.path.sep.join(__file__.split(os.path.sep)[:-2] + ["test_utils", "scripts", "test_script.py"])
test_args = f"""
--config_file={args.config_file} {script_name}

View File

@ -18,9 +18,8 @@ from typing import List, Optional, Union
import torch
from torch.utils.data import BatchSampler, DataLoader, IterableDataset
from packaging import version
from .state import AcceleratorState, DistributedType, is_tpu_available
from .logging import get_logger
from .state import AcceleratorState, DistributedType, GradientState, is_tpu_available
from .utils import (
RNGType,
broadcast,
@ -29,15 +28,39 @@ from .utils import (
find_batch_size,
get_data_structure,
initialize_tensors,
is_torch_version,
send_to_device,
slice_tensors,
synchronize_rng_states,
)
if is_tpu_available():
import torch_xla.core.xla_model as xm
if is_tpu_available(check_device=False):
import torch_xla.distributed.parallel_loader as xpl
class MpDeviceLoaderWrapper(xpl.MpDeviceLoader):
"""
Wrapper for the xpl.MpDeviceLoader class that knows the total batch size.
**Available attributes:**
- **total_batch_size** (`int`) -- Total batch size of the dataloader across all processes.
Equal to the original batch size when `split_batches=True`; otherwise the original batch size * the total
number of processes
- **total_dataset_length** (`int`) -- Total length of the inner dataset across all processes.
"""
@property
def total_batch_size(self):
return self._loader.total_batch_size
@property
def total_dataset_length(self):
return self._loader.total_dataset_length
logger = get_logger(__name__)
# kwargs of the DataLoader in min version 1.4.0.
_PYTORCH_DATALOADER_KWARGS = {
@ -52,16 +75,16 @@ _PYTORCH_DATALOADER_KWARGS = {
"timeout": 0,
"worker_init_fn": None,
"multiprocessing_context": None,
"generator": None,
}
# kwargs added after by version
_PYTORCH_DATALOADER_ADDITIONAL_KWARGS = {
"1.6.0": {"generator": None},
"1.7.0": {"prefetch_factor": 2, "persistent_workers": False},
}
for v, additional_kwargs in _PYTORCH_DATALOADER_ADDITIONAL_KWARGS.items():
if version.parse(torch.__version__) >= version.parse(v):
if is_torch_version(">=", v):
_PYTORCH_DATALOADER_KWARGS.update(additional_kwargs)
@ -115,6 +138,10 @@ class BatchSamplerShard(BatchSampler):
self.batch_size = batch_sampler.batch_size
self.drop_last = batch_sampler.drop_last
@property
def total_length(self):
return len(self.batch_sampler)
def __len__(self):
if self.split_batches:
return len(self.batch_sampler)
@ -288,6 +315,14 @@ class DataLoaderShard(DataLoader):
A random number generator to keep synchronized across processes.
kwargs:
All other keyword arguments to pass to the regular `DataLoader` initialization.
**Available attributes:**
- **total_batch_size** (`int`) -- Total batch size of the dataloader across all processes.
Equal to the original batch size when `split_batches=True`; otherwise the original batch size * the total
number of processes
- **total_dataset_length** (`int`) -- Total length of the inner dataset across all processes.
"""
def __init__(self, dataset, device=None, rng_types=None, generator=None, **kwargs):
@ -295,123 +330,208 @@ class DataLoaderShard(DataLoader):
self.device = device
self.rng_types = rng_types
self.generator = generator
self.gradient_state = GradientState()
def __iter__(self):
if self.rng_types is not None:
synchronize_rng_states(self.rng_types, self.generator)
state = AcceleratorState()
for batch in super().__iter__():
if state.distributed_type == DistributedType.TPU:
xm.mark_step()
yield batch if self.device is None else send_to_device(batch, self.device)
self.gradient_state._set_end_of_dataloader(False)
try:
length = getattr(self.dataset, "total_dataset_length", len(self.dataset))
self.gradient_state._set_remainder(length % self.total_batch_size)
except Exception:
# We can safely pass because the default is -1
pass
dataloader_iter = super().__iter__()
# We iterate one batch ahead to check when we are at the end
try:
current_batch = next(dataloader_iter)
except StopIteration:
yield
while True:
try:
# But we still move it to the device so it is done before `StopIteration` is reached
if self.device is not None:
current_batch = send_to_device(current_batch, self.device)
next_batch = next(dataloader_iter)
yield current_batch
current_batch = next_batch
except StopIteration:
self.gradient_state._set_end_of_dataloader(True)
yield current_batch
break
@property
def total_batch_size(self):
batch_sampler = self.sampler if isinstance(self.sampler, BatchSampler) else self.batch_sampler
return (
batch_sampler.batch_size
if batch_sampler.split_batches
else (batch_sampler.batch_size * batch_sampler.num_processes)
)
@property
def total_dataset_length(self):
if hasattr(self.dataset, "total_length"):
return self.dataset.total_length
else:
return len(self.dataset)
class DataLoaderDispatcher(DataLoader):
"""
Args:
Subclass of a PyTorch `DataLoader` that will iterate and preprocess on process 0 only, then dispatch on each
process their part of the batch.
Args:
split_batches (`bool`, *optional*, defaults to `False`):
Whether the resulting `DataLoader` should split the batches of the original data loader across devices or
yield full batches (in which case it will yield batches starting at the `process_index`-th and advancing by
`num_processes` batches at each iteration).
`num_processes` batches at each iteration). Another way to see this is that the observed batch size will be
the same as the initial `dataloader` if this option is set to `True`, the batch size of the initial
`dataloader` multiplied by `num_processes` otherwise. Setting this option to `True` requires that the batch
size of the `dataloader` is a round multiple of `batch_size`.
Another way to see this is that the observed batch size will be the same as the initial `dataloader` if
this option is set to `True`, the batch size of the initial `dataloader` multiplied by `num_processes`
otherwise.
**Available attributes:**
Setting this option to `True` requires that the batch size of the `dataloader` is a round multiple of
`batch_size`.
- **total_batch_size** (`int`) -- Total batch size of the dataloader across all processes.
Equal to the original batch size when `split_batches=True`; otherwise the original batch size * the total
number of processes
- **total_dataset_length** (`int`) -- Total length of the inner dataset across all processes.
"""
def __init__(self, dataset, split_batches: bool = False, **kwargs):
def __init__(self, dataset, split_batches: bool = False, _drop_last: bool = False, **kwargs):
shuffle = False
if is_torch_version(">=", "1.11.0"):
from torch.utils.data.datapipes.iter.combinatorics import ShufflerIterDataPipe
# We need to save the shuffling state of the DataPipe
if isinstance(dataset, ShufflerIterDataPipe):
shuffle = dataset._shuffle_enabled
super().__init__(dataset, **kwargs)
self.split_batches = split_batches
if version.parse(torch.__version__) < version.parse("1.8.0"):
if is_torch_version("<", "1.8.0"):
raise ImportError(
"Using `DataLoaderDispatcher` requires PyTorch 1.8.0 minimum. You have {torch.__version__}."
f"Using `DataLoaderDispatcher` requires PyTorch 1.8.0 minimum. You have {torch.__version__}."
)
if shuffle:
torch.utils.data.graph_settings.apply_shuffle_settings(dataset, shuffle=shuffle)
self.gradient_state = GradientState()
self.state = AcceleratorState()
self._drop_last = _drop_last
try:
length = getattr(self.dataset, "total_dataset_length", len(self.dataset))
self.gradient_state._set_remainder(length % self.total_batch_size)
except Exception:
# We can safely pass because the default is -1
pass
def _fetch_batches(self, iterator):
batches, batch = None, None
# On process 0, we gather the batch to dispatch.
if self.state.process_index == 0:
try:
if self.split_batches:
# One batch of the main iterator is dispatched and split.
batch = next(iterator)
else:
# num_processes batches of the main iterator are concatenated then dispatched and split.
# We add the batches one by one so we have the remainder available when drop_last=False.
batches = []
for _ in range(self.state.num_processes):
batches.append(next(iterator))
batch = concatenate(batches, dim=0)
# In both cases, we need to get the structure of the batch that we will broadcast on other
# processes to initialize the tensors with the right shape.
# data_structure, stop_iteration
batch_info = [get_data_structure(batch), False]
except StopIteration:
batch_info = [None, True]
else:
batch_info = [None, self._stop_iteration]
# This is inplace, so after this instruction, every process has the same `batch_info` as process 0.
broadcast_object_list(batch_info)
self._stop_iteration = batch_info[1]
if self._stop_iteration:
# If drop_last is False and split_batches is False, we may have a remainder to take care of.
if not self.split_batches and not self._drop_last:
if self.state.process_index == 0 and len(batches) > 0:
batch = concatenate(batches, dim=0)
batch_info = [get_data_structure(batch), False]
else:
batch_info = [None, True]
broadcast_object_list(batch_info)
return batch, batch_info
def __iter__(self):
state = AcceleratorState()
if state.process_index == 0:
self.gradient_state._set_end_of_dataloader(False)
main_iterator = None
if self.state.process_index == 0:
# We only iterate through the DataLoader on process 0.
main_iterator = super().__iter__()
stop_iteration = False
self._stop_iteration = False
first_batch = None
next_batch, next_batch_info = self._fetch_batches(main_iterator)
while not stop_iteration:
# On process 0, we gather the batch to dispatch.
if state.process_index == 0:
try:
if self.split_batches:
# One batch of the main iterator is dispatched and split.
batch = next(main_iterator)
else:
# num_processes batches of the main iterator are concatenated then dispatched and split.
# We add the batches one by one so we have the remainder available when drop_last=False.
batches = []
for _ in range(state.num_processes):
batches.append(next(main_iterator))
batch = concatenate(batches, dim=0)
# In both cases, we need to get the structure of the batch that we will broadcast on other
# processes to initialize the tensors with the right shape.
# data_structure, stop_iteration
batch_info = [get_data_structure(batch), False]
except StopIteration:
batch_info = [None, True]
else:
batch_info = [None, stop_iteration]
batch, batch_info = next_batch, next_batch_info
# This is inplace, so after this instruction, every process has the same `batch_info` as process 0.
broadcast_object_list(batch_info)
stop_iteration = batch_info[1]
if stop_iteration:
# If drop_last is False and split_batches is False, we may have a remainder to take care of.
if not self.split_batches and not self.drop_last:
if state.process_index == 0 and len(batches) > 0:
batch = concatenate(batches, dim=0)
batch_info = [get_data_structure(batch), False]
else:
batch_info = [None, True]
broadcast_object_list(batch_info)
if batch_info[1]:
continue
else:
continue
if state.process_index != 0:
if self.state.process_index != 0:
# Initialize tensors on other processes than process 0.
batch = initialize_tensors(batch_info[0])
batch = send_to_device(batch, state.device)
batch = send_to_device(batch, self.state.device)
# Broadcast the batch before splitting it.
batch = broadcast(batch, from_process=0)
if not self.drop_last and first_batch is None:
if not self._drop_last and first_batch is None:
# We keep at least num processes elements of the first batch to be able to complete the last batch
first_batch = slice_tensors(batch, slice(0, state.num_processes))
first_batch = slice_tensors(batch, slice(0, self.state.num_processes))
observed_batch_size = find_batch_size(batch)
batch_size = observed_batch_size // state.num_processes
batch_size = observed_batch_size // self.state.num_processes
if not self.drop_last and stop_iteration and observed_batch_size % state.num_processes != 0:
stop_iteration = self._stop_iteration
if not stop_iteration:
# We may still be at the end of the dataloader without knowing it yet: if there is nothing left in
# the dataloader since the number of batches is a round multiple of the number of processes.
next_batch, next_batch_info = self._fetch_batches(main_iterator)
# next_batch_info[0] is None when there are no more batches, otherwise we still need to process them.
if self._stop_iteration and next_batch_info[0] is None:
stop_iteration = True
if not self._drop_last and stop_iteration and observed_batch_size % self.state.num_processes != 0:
# If the last batch is not complete, let's add the first batch to it.
batch = concatenate([batch, first_batch], dim=0)
# Batch size computation above is wrong, it's off by 1 so we fix it.
batch_size += 1
data_slice = slice(state.process_index * batch_size, (state.process_index + 1) * batch_size)
data_slice = slice(self.state.process_index * batch_size, (self.state.process_index + 1) * batch_size)
batch = slice_tensors(batch, data_slice)
if state.distributed_type == DistributedType.TPU:
xm.mark_step()
yield slice_tensors(batch, data_slice)
if stop_iteration:
self.gradient_state._set_remainder(observed_batch_size)
self.gradient_state._set_end_of_dataloader(True)
yield batch
def __len__(self):
state = AcceleratorState()
whole_length = super().__len__()
if self.drop_last:
return whole_length // state.num_processes
if self.split_batches:
return whole_length
elif self._drop_last:
return whole_length // self.state.num_processes
else:
return math.ceil(whole_length / state.num_processes)
return math.ceil(whole_length / self.state.num_processes)
@property
def total_batch_size(self):
return (
self.dataset.batch_size if self.split_batches else (self.dataset.batch_size * self.dataset.num_processes)
)
@property
def total_dataset_length(self):
return len(self.dataset)
def prepare_data_loader(
@ -478,7 +598,7 @@ def prepare_data_loader(
</Tip>"""
if dispatch_batches is None:
if version.parse(torch.__version__) < version.parse("1.8.0") or not put_on_device:
if is_torch_version("<", "1.8.0") or not put_on_device:
dispatch_batches = False
else:
dispatch_batches = isinstance(dataloader.dataset, IterableDataset)
@ -502,6 +622,7 @@ def prepare_data_loader(
new_dataset = dataloader.dataset
# Iterable dataset doesn't like batch_sampler, but data_loader creates a default one for it
new_batch_sampler = dataloader.batch_sampler if not isinstance(new_dataset, IterableDataset) else None
sampler_is_batch_sampler = False
generator = getattr(dataloader, "generator", None)
# No change if no multiprocess
if num_processes != 1 and not dispatch_batches:
@ -518,15 +639,20 @@ def prepare_data_loader(
)
else:
# New batch sampler for the current process.
if hasattr(dataloader.sampler, "generator"):
if dataloader.sampler.generator is None:
dataloader.sampler.generator = torch.Generator()
generator = dataloader.sampler.generator
generator.manual_seed(int(torch.empty((), dtype=torch.int64).random_().item()))
elif getattr(dataloader.batch_sampler, "generator", None) is not None:
generator = dataloader.batch_sampler.generator
sampler_is_batch_sampler = isinstance(dataloader.sampler, BatchSampler)
if sampler_is_batch_sampler:
sampler = dataloader.sampler.sampler
else:
sampler = dataloader.batch_sampler.sampler
if hasattr(sampler, "generator"):
if sampler.generator is None:
sampler.generator = torch.Generator()
generator = sampler.generator
generator.manual_seed(int(torch.empty((), dtype=torch.int64).random_().item()))
batch_sampler = dataloader.sampler if sampler_is_batch_sampler else dataloader.batch_sampler
new_batch_sampler = BatchSamplerShard(
dataloader.batch_sampler,
batch_sampler,
num_processes=num_processes,
process_index=process_index,
split_batches=split_batches,
@ -557,15 +683,33 @@ def prepare_data_loader(
kwargs["batch_size"] = dataloader.batch_size // num_processes if split_batches else dataloader.batch_size
if dispatch_batches:
return DataLoaderDispatcher(
new_dataset, split_batches=split_batches, batch_sampler=new_batch_sampler, **kwargs
dataloader = DataLoaderDispatcher(
new_dataset,
split_batches=split_batches,
batch_sampler=new_batch_sampler,
_drop_last=dataloader.drop_last,
**kwargs,
)
elif sampler_is_batch_sampler:
dataloader = DataLoaderShard(
new_dataset,
device=device if put_on_device and state.distributed_type != DistributedType.TPU else None,
sampler=new_batch_sampler,
batch_size=getattr(dataloader, "batch_size", _PYTORCH_DATALOADER_KWARGS["batch_size"]),
rng_types=rng_types,
generator=generator,
**kwargs,
)
else:
dataloader = DataLoaderShard(
new_dataset,
device=device if put_on_device and state.distributed_type != DistributedType.TPU else None,
batch_sampler=new_batch_sampler,
rng_types=rng_types,
generator=generator,
**kwargs,
)
return DataLoaderShard(
new_dataset,
device=device if put_on_device else None,
batch_sampler=new_batch_sampler,
rng_types=rng_types,
generator=generator,
**kwargs,
)
if state.distributed_type == DistributedType.TPU:
return MpDeviceLoaderWrapper(dataloader, device)
return dataloader
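The reworked `DataLoaderShard.__iter__` above always keeps one batch in flight so it can flag the final batch (via `GradientState`) before `StopIteration` is raised. A minimal standalone sketch of that look-ahead pattern over a generic iterable, not the actual class:

def iterate_with_end_flag(iterable):
    # Yields (item, is_last) pairs by fetching one item ahead of the one being yielded.
    iterator = iter(iterable)
    try:
        current = next(iterator)
    except StopIteration:
        return
    while True:
        try:
            upcoming = next(iterator)
        except StopIteration:
            yield current, True
            return
        yield current, False
        current = upcoming

for batch, is_last in iterate_with_end_flag(range(3)):
    print(batch, is_last)  # 0 False, 1 False, 2 True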

View File

@ -1,96 +0,0 @@
# Copyright 2021 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .optimizer import AcceleratedOptimizer
from .state import is_apex_available, is_deepspeed_available
if is_deepspeed_available():
from deepspeed import DeepSpeedEngine
if is_apex_available():
from apex import amp
class DeepSpeedEngineWrapper(DeepSpeedEngine):
"""
Wrapper over deepspeed.DeepSpeedEngine object
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
# overwriting micro_steps for user's gradient_accumulation
self.micro_steps = -1
def step(self, lr_kwargs=None):
"""DeepSpeedEngine.step() without `micro_steps` update & no profiling"""
if self.is_gradient_accumulation_boundary(): # it shouldn't matter whether we keep this line or not
if self.progressive_layer_drop:
self.progressive_layer_drop.update_state(self.global_steps)
self._take_model_step(lr_kwargs)
def backward(self, loss):
"""DeepSpeedEngine.backward() with no loss scaling; no profiling but with `micro_steps` update"""
if self.zero_optimization():
self.optimizer.is_gradient_accumulation_boundary = self.is_gradient_accumulation_boundary()
self.optimizer.backward(loss)
elif self.amp_enabled():
# AMP requires delaying unscale when inside gradient accumulation boundaries
# https://nvidia.github.io/apex/advanced.html#gradient-accumulation-across-iterations
delay_unscale = not self.is_gradient_accumulation_boundary()
with amp.scale_loss(loss, self.optimizer, delay_unscale=delay_unscale) as scaled_loss:
scaled_loss.backward()
elif self.fp16_enabled():
self.optimizer.backward(loss)
else:
loss.backward()
if self.enable_backward_allreduce:
self.allreduce_gradients()
# this will ensure deepspeed gradient_accumulation matches user's accumulation
self.micro_steps += 1
class DeepSpeedOptimizerWrapper(AcceleratedOptimizer):
"""
Internal wrapper around a deepspeed optimizer.
Args:
optimizer (`torch.optim.optimizer.Optimizer`):
The optimizer to wrap.
"""
def __init__(self, optimizer, model: DeepSpeedEngineWrapper):
super().__init__(optimizer, device_placement=False, scaler=None)
self.model = model
def zero_grad(self, set_to_none=None):
pass # `model.step()` is doing that automatically. Therefore, its implementation is not needed
def step(self):
"""This will handle optimizer.step() & optimizer.zero_grad() with gradient_accumulation"""
self.model.step()
@property
def is_overflow(self):
"""Whether or not the optimizer step was done, or skipped because of gradient overflow."""
overflow = False
if hasattr(self.optimizer, "overflow"):
overflow = self.optimizer.overflow
return overflow

src/accelerate/hooks.py (new file, 480 lines)
View File

@ -0,0 +1,480 @@
# Copyright 2022 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import functools
from typing import Dict, List, Mapping, Optional, Union
import torch
import torch.nn as nn
from .utils import PrefixedDataset, find_device, named_module_tensors, send_to_device, set_module_tensor_to_device
class ModelHook:
"""
A hook that contains callbacks to be executed just before and after the forward method of a model. The difference
from existing PyTorch hooks is that these callbacks get passed along the kwargs.
Class attribute:
- **no_grad** (`bool`, *optional*, defaults to `False`) -- Whether or not to execute the actual forward pass under
the `torch.no_grad()` context manager.
"""
no_grad = False
def init_hook(self, module):
"""
To be executed when the hook is attached to the module.
Args:
module (`torch.nn.Module`): The module attached to this hook.
"""
return module
def pre_forward(self, module, *args, **kwargs):
"""
To be executed just before the forward method of the model.
Args:
module (`torch.nn.Module`): The module whose forward pass will be executed just after this event.
args (`Tuple[Any]`): The positional arguments passed to the module.
kwargs (`Dict[Str, Any]`): The keyword arguments passed to the module.
Returns:
`Tuple[Tuple[Any], Dict[Str, Any]]`: A tuple with the treated `args` and `kwargs`.
"""
return args, kwargs
def post_forward(self, module, output):
"""
To be executed just after the forward method of the model.
Args:
module (`torch.nn.Module`): The module whose forward pass has been executed just before this event.
output (`Any`): The output of the module.
Returns:
`Any`: The processed `output`.
"""
return output
def detach_hook(self, module):
"""
To be executed when the hook is detached from a module.
Args:
module (`torch.nn.Module`): The module detached from this hook.
"""
return module
class SequentialHook(ModelHook):
"""
A hook that can contain several hooks and iterates through them at each event.
"""
def __init__(self, *hooks):
self.hooks = hooks
def init_hook(self, module):
for hook in self.hooks:
module = hook.init_hook(module)
return module
def pre_forward(self, module, *args, **kwargs):
for hook in self.hooks:
args, kwargs = hook.pre_forward(module, *args, **kwargs)
return args, kwargs
def post_forward(self, module, output):
for hook in self.hooks:
output = hook.post_forward(module, output)
return output
def detach_hook(self, module):
for hook in self.hooks:
module = hook.detach_hook(module)
return module
def add_hook_to_module(module: nn.Module, hook: ModelHook):
"""
Adds a hook to a given module. This will rewrite the `forward` method of the module to include the hook. To remove
this behavior and restore the original `forward` method, use `remove_hook_from_module`.
<Tip warning={true}>
If the module already contains a hook, this will replace it with the new hook passed. To chain two hooks together,
use the `SequentialHook` class.
</Tip>
Args:
module (`torch.nn.Module`): The module to attach a hook to.
hook (`ModelHook`): The hook to attach.
Returns:
`torch.nn.Module`: The same module, with the hook attached (the module is modified in place, so the result can
be discarded).
"""
if hasattr(module, "_hf_hook") and hasattr(module, "_old_forward"):
# If we already put some hook on this module, we replace it with the new one.
old_forward = module._old_forward
else:
old_forward = module.forward
module._old_forward = old_forward
module = hook.init_hook(module)
module._hf_hook = hook
@functools.wraps(old_forward)
def new_forward(*args, **kwargs):
args, kwargs = module._hf_hook.pre_forward(module, *args, **kwargs)
if module._hf_hook.no_grad:
with torch.no_grad():
output = old_forward(*args, **kwargs)
else:
output = old_forward(*args, **kwargs)
return module._hf_hook.post_forward(module, output)
module.forward = new_forward
return module
def remove_hook_from_module(module: nn.Module):
"""
Removes any hook attached to a module via `add_hook_to_module`.
Args:
module (`torch.nn.Module`): The module to detach the hook from.
Returns:
`torch.nn.Module`: The same module, with the hook detached (the module is modified in place, so the result can
be discarded).
"""
if hasattr(module, "_hf_hook"):
module._hf_hook.detach_hook(module)
delattr(module, "_hf_hook")
if hasattr(module, "_old_forward"):
module.forward = module._old_forward
delattr(module, "_old_forward")
return module
class AlignDevicesHook(ModelHook):
"""
A generic `ModelHook` that ensures inputs and model weights are on the same device for the forward pass of the
associated module, potentially offloading the weights after the forward pass.
Args:
execution_device (`torch.device`, *optional*):
The device on which inputs and model weights should be placed before the forward pass.
offload (`bool`, *optional*, defaults to `False`):
Whether or not the weights should be offloaded after the forward pass.
io_same_device (`bool`, *optional*, defaults to `False`):
Whether or not the output should be placed on the same device as the input was.
weights_map (`Mapping[str, torch.Tensor]`, *optional*):
When the model weights are offloaded, a (potentially lazy) map from param names to the tensor values.
offload_buffers (`bool`, *optional*, defaults to `False`):
Whether or not to include the associated module's buffers when offloading.
place_submodules (`bool`, *optional*, defaults to `False`):
Whether to place the submodules on `execution_device` during the `init_hook` event.
"""
def __init__(
self,
execution_device: Optional[Union[int, str, torch.device]] = None,
offload: bool = False,
io_same_device: bool = False,
weights_map: Optional[Mapping] = None,
offload_buffers: bool = False,
place_submodules: bool = False,
):
self.execution_device = execution_device
self.offload = offload
self.io_same_device = io_same_device
self.weights_map = weights_map
self.offload_buffers = offload_buffers
self.place_submodules = place_submodules
# Will contain the input device when `io_same_device=True`.
self.input_device = None
self.param_original_devices = {}
self.buffer_original_devices = {}
def init_hook(self, module):
if not self.offload and self.execution_device is not None:
for name, _ in named_module_tensors(module, recurse=self.place_submodules):
set_module_tensor_to_device(module, name, self.execution_device)
elif self.offload:
self.original_devices = {
name: param.device for name, param in named_module_tensors(module, recurse=self.place_submodules)
}
if self.weights_map is None:
self.weights_map = {
name: param.to("cpu")
for name, param in named_module_tensors(
module, include_buffers=self.offload_buffers, recurse=self.place_submodules
)
}
for name, _ in named_module_tensors(
module, include_buffers=self.offload_buffers, recurse=self.place_submodules
):
set_module_tensor_to_device(module, name, "meta")
if not self.offload_buffers and self.execution_device is not None:
for name, _ in module.named_buffers(recurse=self.place_submodules):
set_module_tensor_to_device(module, name, self.execution_device)
return module
def pre_forward(self, module, *args, **kwargs):
if self.io_same_device:
self.input_device = find_device([args, kwargs])
if self.offload:
for name, _ in named_module_tensors(
module, include_buffers=self.offload_buffers, recurse=self.place_submodules
):
set_module_tensor_to_device(module, name, self.execution_device, value=self.weights_map[name])
return send_to_device(args, self.execution_device), send_to_device(kwargs, self.execution_device)
def post_forward(self, module, output):
if self.offload:
for name, _ in named_module_tensors(
module, include_buffers=self.offload_buffers, recurse=self.place_submodules
):
set_module_tensor_to_device(module, name, "meta")
if self.io_same_device and self.input_device is not None:
output = send_to_device(output, self.input_device)
return output
def detach_hook(self, module):
if self.offload:
for name, device in self.original_devices.items():
if device != torch.device("meta"):
set_module_tensor_to_device(module, name, device, value=self.weights_map.get(name, None))
def attach_execution_device_hook(
module: torch.nn.Module,
execution_device: Union[int, str, torch.device],
preload_module_classes: Optional[List[str]] = None,
):
"""
Recursively attaches `AlignDevicesHook` to all submodules of a given model to make sure they have the right
execution device.
Args:
module (`torch.nn.Module`):
The module where we want to attach the hooks.
execution_device (`int`, `str` or `torch.device`):
The device on which inputs and model weights should be placed before the forward pass.
preload_module_classes (`List[str]`, *optional*):
A list of classes whose instances should load all their weights (even in the submodules) at the beginning
of the forward. This should only be used for classes that have submodules which are registered but not
called directly during the forward, for instance if a `dense` linear layer is registered, but at forward,
`dense.weight` and `dense.bias` are used in some operations instead of calling `dense` directly.
"""
if not hasattr(module, "_hf_hook") and len(module.state_dict()) > 0:
add_hook_to_module(module, AlignDevicesHook(execution_device))
# Break the recursion if we get to a preload module.
if preload_module_classes is not None and module.__class__.__name__ in preload_module_classes:
return
for child in module.children():
attach_execution_device_hook(child, execution_device)
def attach_align_device_hook(
module: torch.nn.Module,
execution_device: Optional[torch.device] = None,
offload: bool = False,
weights_map: Optional[Mapping] = None,
offload_buffers: bool = False,
module_name: str = "",
preload_module_classes: Optional[List[str]] = None,
):
"""
Recursively attaches `AlignDevicesHook` to all submodules of a given model that have direct parameters and/or
buffers.
Args:
module (`torch.nn.Module`):
The module where we want to attach the hooks.
execution_device (`torch.device`, *optional*):
The device on which inputs and model weights should be placed before the forward pass.
offload (`bool`, *optional*, defaults to `False`):
Whether or not the weights should be offloaded after the forward pass.
weights_map (`Mapping[str, torch.Tensor]`, *optional*):
When the model weights are offloaded, a (potentially lazy) map from param names to the tensor values.
offload_buffers (`bool`, *optional*, defaults to `False`):
Whether or not to include the associated module's buffers when offloading.
module_name (`str`, *optional*, defaults to `""`):
The name of the module.
preload_module_classes (`List[str]`, *optional*):
A list of classes whose instances should load all their weights (even in the submodules) at the beginning
of the forward. This should only be used for classes that have submodules which are registered but not
called directly during the forward, for instance if a `dense` linear layer is registered, but at forward,
`dense.weight` and `dense.bias` are used in some operations instead of calling `dense` directly.
"""
# Attach the hook on this module if it has any direct tensor.
directs = named_module_tensors(module)
full_offload = (
offload and preload_module_classes is not None and module.__class__.__name__ in preload_module_classes
)
if len(list(directs)) > 0 or full_offload:
if weights_map is not None:
prefix = f"{module_name}." if len(module_name) > 0 else ""
prefixed_weights_map = PrefixedDataset(weights_map, prefix)
else:
prefixed_weights_map = None
hook = AlignDevicesHook(
execution_device=execution_device,
offload=offload,
weights_map=prefixed_weights_map,
offload_buffers=offload_buffers,
place_submodules=full_offload,
)
add_hook_to_module(module, hook)
# We stop the recursion in case we hit the full offload.
if full_offload:
return
# Recurse on all children of the module.
for child_name, child in module.named_children():
child_name = f"{module_name}.{child_name}" if len(module_name) > 0 else child_name
attach_align_device_hook(
child,
execution_device=execution_device,
offload=offload,
weights_map=weights_map,
offload_buffers=offload_buffers,
module_name=child_name,
preload_module_classes=preload_module_classes,
)
def remove_hook_from_submodules(module: nn.Module):
"""
Recursively removes all hooks attached on the submodules of a given model.
Args:
module (`torch.nn.Module`): The module on which to remove all hooks.
"""
remove_hook_from_module(module)
for child in module.children():
remove_hook_from_submodules(child)
def attach_align_device_hook_on_blocks(
module: nn.Module,
execution_device: Optional[Union[torch.device, Dict[str, torch.device]]] = None,
offload: Union[bool, Dict[str, bool]] = False,
weights_map: Mapping = None,
offload_buffers: bool = False,
module_name: str = "",
preload_module_classes: Optional[List[str]] = None,
):
"""
Attaches `AlignDevicesHook` to all blocks of a given model as needed.
Args:
module (`torch.nn.Module`):
The module where we want to attach the hooks.
execution_device (`torch.device` or `Dict[str, torch.device]`, *optional*):
The device on which inputs and model weights should be placed before the forward pass. It can be one device
for the whole module, or a dictionary mapping module name to device.
offload (`bool`, *optional*, defaults to `False`):
Whether or not the weights should be offloaded after the forward pass. It can be one boolean for the whole
module, or a dictionary mapping module name to boolean.
weights_map (`Mapping[str, torch.Tensor]`, *optional*):
When the model weights are offloaded, a (potentially lazy) map from param names to the tensor values.
offload_buffers (`bool`, *optional*, defaults to `False`):
Whether or not to include the associated module's buffers when offloading.
module_name (`str`, *optional*, defaults to `""`):
The name of the module.
preload_module_classes (`List[str]`, *optional*):
A list of classes whose instances should load all their weights (even in the submodules) at the beginning
of the forward. This should only be used for classes that have submodules which are registered but not
called directly during the forward, for instance if a `dense` linear layer is registered, but at forward,
`dense.weight` and `dense.bias` are used in some operations instead of calling `dense` directly.
"""
# If one device and one offload, we've got one hook.
if not isinstance(execution_device, Mapping) and not isinstance(offload, dict):
if not offload:
hook = AlignDevicesHook(execution_device=execution_device, io_same_device=True, place_submodules=True)
add_hook_to_module(module, hook)
else:
attach_align_device_hook(
module,
execution_device=execution_device,
offload=True,
weights_map=weights_map,
offload_buffers=offload_buffers,
module_name=module_name,
)
return
if not isinstance(execution_device, Mapping):
execution_device = {key: execution_device for key in offload.keys()}
if not isinstance(offload, Mapping):
offload = {key: offload for key in execution_device.keys()}
if module_name in execution_device and not offload[module_name]:
hook = AlignDevicesHook(
execution_device=execution_device[module_name],
offload_buffers=offload_buffers,
io_same_device=(module_name == ""),
place_submodules=True,
)
add_hook_to_module(module, hook)
attach_execution_device_hook(module, execution_device[module_name])
elif module_name in execution_device:
attach_align_device_hook(
module,
execution_device=execution_device[module_name],
offload=True,
weights_map=weights_map,
offload_buffers=offload_buffers,
module_name=module_name,
preload_module_classes=preload_module_classes,
)
if not hasattr(module, "_hf_hook"):
hook = AlignDevicesHook(execution_device=execution_device[module_name], io_same_device=(module_name == ""))
add_hook_to_module(module, hook)
attach_execution_device_hook(
module, execution_device[module_name], preload_module_classes=preload_module_classes
)
elif module_name == "":
hook = AlignDevicesHook(io_same_device=True)
add_hook_to_module(module, hook)
for child_name, child in module.named_children():
child_name = f"{module_name}.{child_name}" if len(module_name) > 0 else child_name
attach_align_device_hook_on_blocks(
child,
execution_device=execution_device,
offload=offload,
weights_map=weights_map,
offload_buffers=offload_buffers,
module_name=child_name,
preload_module_classes=preload_module_classes,
)
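A quick illustration of the hook API defined in this file; the `LogDeviceHook` class is hypothetical and written only for this example.

```python
import torch
import torch.nn as nn
from accelerate.hooks import ModelHook, add_hook_to_module, remove_hook_from_module

class LogDeviceHook(ModelHook):
    """Hypothetical hook that records the device of the first tensor input."""

    def pre_forward(self, module, *args, **kwargs):
        self.last_input_device = args[0].device if args else None
        return args, kwargs

model = nn.Linear(4, 4)
add_hook_to_module(model, LogDeviceHook())
model(torch.randn(2, 4))
print(model._hf_hook.last_input_device)  # cpu
remove_hook_from_module(model)  # restores the original forward
```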


@@ -1,90 +0,0 @@
# Copyright 2021 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
from dataclasses import dataclass
from datetime import timedelta
from typing import Optional
class KwargsHandler:
"""
Internal mixin that implements a `to_kwargs()` method for a dataclass.
"""
def to_dict(self):
return copy.deepcopy(self.__dict__)
def to_kwargs(self):
"""
Returns a dictionary containing the attributes with values different from the default of this class.
"""
default_dict = self.__class__().to_dict()
this_dict = self.to_dict()
return {k: v for k, v in this_dict.items() if default_dict[k] != v}
@dataclass
class DistributedDataParallelKwargs(KwargsHandler):
"""
Use this object in your [`Accelerator`] to customize how your model is wrapped in a
`torch.nn.parallel.DistributedDataParallel`. Please refer to the documentation of this
[wrapper](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html) for more
information on each argument.
<Tip warning={true}>
`gradient_as_bucket_view` is only available in PyTorch 1.7.0 and later versions.
</Tip>"""
dim: int = 0
broadcast_buffers: bool = True
bucket_cap_mb: int = 25
find_unused_parameters: bool = False
check_reduction: bool = False
gradient_as_bucket_view: bool = False
@dataclass
class GradScalerKwargs(KwargsHandler):
"""
Use this object in your [`Accelerator`] to customize the behavior of mixed precision, specifically how the
`torch.cuda.amp.GradScaler` used is created. Please refer to the documentation of this
[scaler](https://pytorch.org/docs/stable/amp.html?highlight=gradscaler) for more information on each argument.
<Tip warning={true}>
`GradScaler` is only available in PyTorch 1.5.0 and later versions.
</Tip>"""
init_scale: float = 65536.0
growth_factor: float = 2.0
backoff_factor: float = 0.5
growth_interval: int = 2000
enabled: bool = True
@dataclass
class InitProcessGroupKwargs(KwargsHandler):
"""
Use this object in your [`Accelerator`] to customize the initialization of the distributed processes. Please refer
to the documentation of this
[method](https://pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group) for more
information on each argument.
"""
init_method: Optional[str] = None
timeout: timedelta = timedelta(seconds=1800)


@@ -19,10 +19,8 @@ import warnings
import torch
from packaging import version
from .state import AcceleratorState
from .utils import PrepareForLaunch, patch_environment
from .utils import PrecisionType, PrepareForLaunch, patch_environment
def notebook_launcher(function, args=(), num_processes=None, use_fp16=False, mixed_precision="no", use_port="29500"):
@@ -52,6 +50,13 @@ def notebook_launcher(function, args=(), num_processes=None, use_fp16=False, mix
else:
in_colab_or_kaggle = False
try:
mixed_precision = PrecisionType(mixed_precision.lower())
except ValueError:
raise ValueError(
f"Unknown mixed_precision mode: {args.mixed_precision.lower()}. Choose between {PrecisionType.list()}."
)
if in_colab_or_kaggle:
if os.environ.get("TPU_NAME", None) is not None:
# TPU launch
@@ -74,23 +79,17 @@ def notebook_launcher(function, args=(), num_processes=None, use_fp16=False, mix
if torch.cuda.is_available():
print("Launching training on one GPU.")
else:
print("Launching training on CPU.")
print("Launching training on one CPU.")
function(*args)
else:
if num_processes is None:
raise ValueError(
"You have to specify the number of GPUs you would like to use, add `num_process=...` to your call."
"You have to specify the number of GPUs you would like to use, add `num_processes=...` to your call."
)
if num_processes > 1:
# Multi-GPU launch
if version.parse(torch.__version__) < version.parse("1.5.0"):
raise ImportError(
"Using `notebook_launcher` for distributed training on GPUs require torch >= 1.5.0, got "
f"{torch.__version__}."
)
from torch.multiprocessing import start_processes
if len(AcceleratorState._shared_state) > 0:
@@ -107,12 +106,6 @@ def notebook_launcher(function, args=(), num_processes=None, use_fp16=False, mix
"function."
)
mixed_precision = mixed_precision.lower()
if mixed_precision not in ["no", "fp16", "bf16"]:
raise ValueError(
f"Unknown mixed_precision: {mixed_precision}. Choose between 'no', 'fp16' and 'bf16'."
)
if use_fp16:
warnings.warn('use_fp16=True is deprecated. Use mixed_precision="fp16" instead.', DeprecationWarning)
mixed_precision = "fp16"
@@ -128,12 +121,17 @@ def notebook_launcher(function, args=(), num_processes=None, use_fp16=False, mix
start_processes(launcher, args=args, nprocs=num_processes, start_method="fork")
else:
# No need for a distributed launch otherwise as it's either CPU or one GPU.
if torch.cuda.is_available():
# No need for a distributed launch otherwise as it's either CPU, GPU or MPS.
use_mps_device = "false"
if torch.backends.mps.is_available():
print("Launching training on MPS.")
use_mps_device = "true"
elif torch.cuda.is_available():
print("Launching training on one GPU.")
else:
print("Launching training on CPU.")
function(*args)
with patch_environment(use_mps_device=use_mps_device):
function(*args)
def debug_launcher(function, args=(), num_processes=2):
@@ -155,12 +153,6 @@ def debug_launcher(function, args=(), num_processes=2):
num_processes (`int`, *optional*, defaults to 2):
The number of processes to use for training.
"""
if version.parse(torch.__version__) < version.parse("1.5.0"):
raise ImportError(
"Using `debug_launcher` for distributed training on GPUs require torch >= 1.5.0, got "
f"{torch.__version__}."
)
from torch.multiprocessing import start_processes
with tempfile.NamedTemporaryFile() as tmp_file:

src/accelerate/logging.py (new file)

@@ -0,0 +1,68 @@
# Copyright 2022 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from .state import AcceleratorState
class MultiProcessAdapter(logging.LoggerAdapter):
"""
An adapter to assist with logging in multiprocess.
`log` takes an additional `main_process_only` kwarg, which dictates whether it should be called on all processes
or only on the main process. Defaults to `main_process_only=True`.
"""
@staticmethod
def _should_log(main_process_only):
"Check if log should be performed"
return not main_process_only or (main_process_only and AcceleratorState().local_process_index == 0)
def log(self, level, msg, *args, **kwargs):
"""
Delegates logger call after checking if we should log.
Accepts a new kwarg, `main_process_only`, which dictates whether the message is logged on all processes
or only on the main process. Defaults to `True` if not passed.
"""
main_process_only = kwargs.pop("main_process_only", True)
if self.isEnabledFor(level) and self._should_log(main_process_only):
msg, kwargs = self.process(msg, kwargs)
self.logger.log(level, msg, *args, **kwargs)
def get_logger(name: str):
"""
Returns a `logging.Logger` for `name` that can handle multiprocessing.
If a log should be called on all processes, pass `main_process_only=False`
Args:
name (`str`):
The name for the logger, such as `__file__`
Example:
```python
>>> from accelerate.logging import get_logger
>>> logger = get_logger(__name__)
>>> logger.info("My log", main_process_only=False)
>>> logger.debug("My log", main_process_only=True)
```
"""
logger = logging.getLogger(name)
return MultiProcessAdapter(logger, {})
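The adapter only delegates, so the standard `logging` configuration still applies. A short sketch with placeholder messages:

```python
import logging
from accelerate.logging import get_logger

logging.basicConfig(level=logging.INFO)  # the usual logging setup is still required
logger = get_logger(__name__)

logger.info("Seen once, on the local main process")            # main_process_only defaults to True
logger.info("Seen on every process", main_process_only=False)  # opt out per call
```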


@@ -0,0 +1,29 @@
# Copyright 2022 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# flake8: noqa
# There's no way to ignore "F401 '...' imported but unused" warnings in this
# module, but to preserve other warnings. So, don't check this module at all
import warnings
warnings.warn(
"memory_utils has been reorganized to utils.memory. Import `find_executable_batchsize` from the main `__init__`: "
"`from accelerate import find_executable_batch_size` to avoid this warning.",
FutureWarning,
)
from .utils.memory import find_executable_batch_size
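The recommended replacement import, with a minimal sketch of the decorator the warning points to; the loop body is a placeholder and `starting_batch_size` is assumed to keep its current meaning.

```python
from accelerate import find_executable_batch_size

@find_executable_batch_size(starting_batch_size=128)
def inner_training_loop(batch_size):
    # placeholder: run one training attempt at `batch_size`;
    # on CUDA out-of-memory the decorator halves batch_size and retries.
    ...

inner_training_loop()
```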


@@ -13,16 +13,15 @@
# limitations under the License.
import inspect
import warnings
import torch
from packaging import version
from .state import AcceleratorState, DistributedType, is_tpu_available
from .utils import honor_type
from .state import AcceleratorState, GradientState
from .utils import DistributedType, honor_type, is_torch_version, is_tpu_available
if is_tpu_available():
if is_tpu_available(check_device=False):
import torch_xla.core.xla_model as xm
@@ -40,6 +39,9 @@ class AcceleratedOptimizer(torch.optim.Optimizer):
"""
Internal wrapper around a torch optimizer.
Conditionally performs `step` and `zero_grad` only if gradients should be synchronized when performing gradient
accumulation.
Args:
optimizer (`torch.optim.optimizer.Optimizer`):
The optimizer to wrap.
@@ -54,6 +56,7 @@ class AcceleratedOptimizer(torch.optim.Optimizer):
self.optimizer = optimizer
self.scaler = scaler
self.accelerator_state = AcceleratorState()
self.gradient_state = GradientState()
self.device_placement = device_placement
self._is_overflow = False
@@ -102,37 +105,39 @@ class AcceleratedOptimizer(torch.optim.Optimizer):
return self.optimizer.state_dict()
def zero_grad(self, set_to_none=None):
if version.parse(torch.__version__) < version.parse("1.7.0"):
if set_to_none is not None:
raise ValueError(
"`set_to_none` for Optimizer.zero_grad` was introduced in PyTorch 1.7.0 and can't be used for "
f"earlier versions (found version {torch.__version__})."
)
self.optimizer.zero_grad()
else:
accept_arg = "set_to_none" in inspect.signature(self.optimizer.zero_grad).parameters
if accept_arg:
if set_to_none is None:
set_to_none = False
self.optimizer.zero_grad(set_to_none=set_to_none)
else:
if self.gradient_state.sync_gradients:
if is_torch_version("<", "1.7.0"):
if set_to_none is not None:
raise ValueError("`set_to_none` for Optimizer.zero_grad` is not supported by this optimizer.")
raise ValueError(
"`set_to_none` for Optimizer.zero_grad` was introduced in PyTorch 1.7.0 and can't be used for "
f"earlier versions (found version {torch.__version__})."
)
self.optimizer.zero_grad()
else:
accept_arg = "set_to_none" in inspect.signature(self.optimizer.zero_grad).parameters
if accept_arg:
if set_to_none is None:
set_to_none = False
self.optimizer.zero_grad(set_to_none=set_to_none)
else:
if set_to_none is not None:
raise ValueError("`set_to_none` for Optimizer.zero_grad` is not supported by this optimizer.")
self.optimizer.zero_grad()
def step(self, closure=None):
if self.accelerator_state.distributed_type == DistributedType.TPU:
optimizer_args = {"closure": closure} if closure is not None else {}
xm.optimizer_step(self.optimizer, optimizer_args=optimizer_args)
elif self.scaler is not None:
scale_before = self.scaler.get_scale()
self.scaler.step(self.optimizer, closure)
self.scaler.update()
scale_after = self.scaler.get_scale()
# If we reduced the loss scale, it means the optimizer step was skipped because of gradient overflow.
self._is_overflow = scale_after < scale_before
else:
self.optimizer.step(closure)
if self.gradient_state.sync_gradients:
if self.accelerator_state.distributed_type == DistributedType.TPU:
optimizer_args = {"closure": closure} if closure is not None else {}
xm.optimizer_step(self.optimizer, optimizer_args=optimizer_args)
elif self.scaler is not None:
scale_before = self.scaler.get_scale()
self.scaler.step(self.optimizer, closure)
self.scaler.update()
scale_after = self.scaler.get_scale()
# If we reduced the loss scale, it means the optimizer step was skipped because of gradient overflow.
self._is_overflow = scale_after < scale_before
else:
self.optimizer.step(closure)
def _switch_parameters(self, parameters_map):
for param_group in self.optimizer.param_groups:
@@ -141,4 +146,14 @@ class AcceleratedOptimizer(torch.optim.Optimizer):
@property
def is_overflow(self):
"""Whether or not the optimizer step was done, or skipped because of gradient overflow."""
warnings.warn(
"The `is_overflow` property is deprecated and will be removed in version 1.0 of Accelerate use "
"`optimizer.step_was_skipped` instead.",
FutureWarning,
)
return self._is_overflow
@property
def step_was_skipped(self):
"""Whether or not the optimizer step was skipped."""
return self._is_overflow
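With the new `GradientState` gate, `step` and `zero_grad` become no-ops on non-sync steps when accumulation is driven by the `Accelerator`, and overflow checks should move to `step_was_skipped`. A rough sketch, assuming `model`, `optimizer`, and `dataloader` are already built and the gradient-accumulation API introduced alongside `GradientState` is used:

```python
from accelerate import Accelerator

accelerator = Accelerator(gradient_accumulation_steps=4, mixed_precision="fp16")
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for batch in dataloader:
    with accelerator.accumulate(model):
        loss = model(**batch).loss  # placeholder loss computation
        accelerator.backward(loss)
        optimizer.step()       # internally skipped until sync_gradients is True
        optimizer.zero_grad()  # likewise gated by GradientState
    if optimizer.step_was_skipped:
        pass  # the GradScaler detected overflow and skipped this update
```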

Some files were not shown because too many files have changed in this diff.