Compare commits

...

1211 Commits

Author SHA1 Message Date
6d3324a1cf Release v0.32.0 2024-07-03 13:22:17 -04:00
8330b375d4 Fix get_backend bug and add clear_device_cache function (#2857)
* added clear_device_cache

* set lambda: 0 for mps and cpu
2024-07-03 06:59:10 -04:00
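
As a rough illustration of what a backend-dispatching `clear_device_cache` helper like the one added in #2857 can look like (a minimal sketch, not the actual accelerate implementation; the MPS/CPU handling is an assumption based on the bullets above):

```
import gc

import torch


def clear_device_cache():
    # Free cached allocator memory on whichever accelerator backend is present.
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
    elif torch.backends.mps.is_available():
        torch.mps.empty_cache()
    # CPU has no device cache, so nothing to clear there.
```
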
92404fbf5f fix load_state_dict for xpu and refine xpu safetensor version check (#2879)
* add fix

* update warning

* no and
2024-07-03 06:36:36 -04:00
3a02754915 add require_triton and enable test_dynamo work on xpu (#2878) 2024-07-03 04:52:09 -04:00
fec1170e35 fix mlu device longTensor bugs (#2887)
* Add Cambricon MLU accelerator support

* up mlu support for test

* fix mlu device MULTI_MLU

* Update src/accelerate/utils/imports.py

it's beautiful !

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* up mlu for quality check

* fix mlu device longTensor error

* fix mlu device tensor dtype check

* fix mlu device send_to_device with torch dynamo error

* Refactor AcceleratorState

* Should be near complete now

* Last missing piece

* Make my way to the acceleratorstate

* Include update to global var

* Don't use global

* gpu -> cuda

* Don't use update for dict, easier to read

* Fix tests

* stash

* Getting closer...

* Needed to spawn at the very end after env was setup

* Explain set_device before deepspeed

* Make docstring more accurate

* Early return instead


* Delineate blocks

* Make prepare_backend return state + backend for clarity/less magic

* fix mlu longtensor.to() bugs.

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-07-03 04:50:11 -04:00
eac206f063 make more cuda-only tests device-agnostic (#2876)
* enable 3 cases

* add tests

* add 2 more

* revert 1 back

* revert 1 more

* enable on xpu
2024-07-03 04:49:53 -04:00
6882ff2bea Added a MultiCPU SLURM example using Accelerate Launch and MPIRun (#2902)
* initial commit for slurm multicpu script

* changed output path

* Added multicpu example using accelerate + mpirun + slurm

* removed file

* rename file

* deleted file

* refactored for cleanliness

* updated docs

* fixed variable names

* quality update

* test fix

* addressed review comments

* fix typo for activateEnvironment.sh

* added ACCELERATE path

* Edit wording

Co-authored-by: Dina Suehiro Jones <dina.s.jones@intel.com>

* added back mistakenly deleted line

---------

Co-authored-by: Dina Suehiro Jones <dina.s.jones@intel.com>
2024-07-03 04:14:02 -04:00
57a4c7465e Add XLA Dynamo backends for training and inference (#2892) 2024-07-03 04:10:13 -04:00
YH
404510a5ec Make log_line_prefix_template Optional in Elastic Launcher for Backward Compatibility (#2888)
* Fix unexpected keyword argument err for elastic launch config

* Update torch version flow

* Del log prefix template from env vars
2024-07-03 04:06:08 -04:00
3086e26db9 Speed up imports and add a CI (#2845)
* Working test

* Timing cleanup

* Add CI

* Fix nits

* Mixup imports

* Clean

* tuna -> tuna-interpreter

* Refactor pippy imports

* Accelerator

* Fin

* Fin

* Keep specific ones for docs
2024-07-01 18:50:18 -04:00
YH
5d5d07abfc Add Profiler Support for Performance Analysis (#2883)
* Add torch profiler

* Add example

* Fix rank 0 saving

* Add docstring

* Add profile readme

* Fix minor

* Fix example path

* Add exp test code

* Rename profile dir

* Change readme

* Change save format

* Minor

* Enhance docstring example

* Add user guide

* Add memory profile guide

* Enhance error msg

* Fix type hinting

* Minor refactor

* Fix hf tag

* Fix copyright year

* Mv toctree

* Fix image path

* Fix license year

* Change profiler pattern name

* Update package reference

* Add slow decorator

* Check output value
2024-07-01 18:01:09 -04:00
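
The profiler integration in #2883 wraps PyTorch's built-in profiler; a minimal sketch of the underlying `torch.profiler` usage it builds on (the model and inputs are placeholders):

```
import torch
from torch.profiler import ProfilerActivity, profile

model = torch.nn.Linear(128, 128)
inputs = torch.randn(32, 128)

with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
    model(inputs)

print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```

In accelerate this appears to be exposed through a `ProfileKwargs` kwargs handler and an `Accelerator.profile()` context manager, per the user guide added in the PR.
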
5a0b7dc597 Support saving and loading of step while saving and loading state (#2765)
* Add feature to save step when saving state

* Update docstring for `load_accelerate_state`
2024-07-01 14:57:19 -04:00
c799c198e9 add xpu support (#2864) 2024-06-26 14:56:13 +02:00
1f7a79b428 Potentially fix tests (#2862)
* Potentially fix tests

* Try again with numpy sub 2
2024-06-18 11:38:30 +02:00
4cc3530b64 [tests] skip bnb-related tests instead of failing on xpu (#2860)
* fix requirement

* add one more

* add one more case

* remove files

* remove more file

* bug fix

* revert
2024-06-18 11:22:03 +02:00
5d4a3beb01 [tests] use torch_device instead of 0 for device check (#2861)
* bug fix

* fix one more case

* add more cases

* refine
2024-06-18 10:01:52 +02:00
0284f9a9f6 [tests] fix bug in test_tracking.ClearMLTest (#2863) 2024-06-17 21:40:45 +02:00
573d22d48f Default FSDP weights merge to safetensors (#2853) 2024-06-17 11:23:17 +02:00
13ca7dccb6 Drop torch re-imports in npu and mlu paths (#2856)
Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
2024-06-14 07:13:59 -04:00
3b5a00e048 xpu: support xpu backend from stock pytorch (>=2.4) (#2825)
Fixes: https://github.com/huggingface/transformers/issues/31237

XPU backend is available in the stock PyTorch starting from
version 2.4, see [1]. This commit extends huggingface accelerate
to support XPU from both IPEX and the stock pytorch. IPEX is being
tried first.

See: https://github.com/pytorch/pytorch/issues/114842

Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
2024-06-13 11:20:30 -04:00
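
A minimal sketch of the detection order described in that commit message — IPEX first, then the native `torch.xpu` backend from stock PyTorch >= 2.4 (simplified; not the exact accelerate.utils.imports code):

```
import importlib.util

import torch


def is_xpu_available() -> bool:
    # Prefer IPEX when it is installed; importing it registers the XPU backend.
    if importlib.util.find_spec("intel_extension_for_pytorch") is not None:
        import intel_extension_for_pytorch  # noqa: F401

        return torch.xpu.is_available()
    # Stock PyTorch ships torch.xpu starting with version 2.4.
    return hasattr(torch, "xpu") and torch.xpu.is_available()
```
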
3c4eaedd46 Refactor logging to use logger in dispatch_model (#2855) 2024-06-13 11:18:48 -04:00
YH
c0faec766c Add DDP Communication Hooks (#2841)
* Add ddp comm hook

* Fix dataclass order

* Merge ddp grad hook to ddp kwargs handler

* Reset ddp kwargs key

* Add test

* Fix test case

* Split ddp grad test

* Fix test case

* Enhance docstring

* Minor

* Use naive baseenum for ddp comm hook type

* Add by feature example

* Add multi device deco

* Add user guide

* Update examples/by_feature/ddp_comm_hook.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update examples/by_feature/ddp_comm_hook.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Add wrapper and state option details

* Update toctree

* Update docs/source/usage_guides/ddp_comm_hook.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/usage_guides/ddp_comm_hook.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/usage_guides/ddp_comm_hook.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/usage_guides/ddp_comm_hook.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/usage_guides/ddp_comm_hook.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/usage_guides/ddp_comm_hook.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/usage_guides/ddp_comm_hook.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Mv ddp comm hook index

* Fix ddp comm hook user guide

* Del empty line

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2024-06-13 10:34:20 -04:00
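
Based on the usage guide added in #2841, registering a communication hook goes through the DDP kwargs handler; a short sketch (treat the exact enum and argument names as assumptions drawn from the PR description):

```
from accelerate import Accelerator, DistributedDataParallelKwargs
from accelerate.utils import DDPCommunicationHookType

# Compress gradients to fp16 during all-reduce to cut communication volume.
ddp_kwargs = DistributedDataParallelKwargs(comm_hook=DDPCommunicationHookType.FP16)
accelerator = Accelerator(kwargs_handlers=[ddp_kwargs])
```
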
91a2599f93 Auto create dir when merging FSDP weights (#2854) 2024-06-13 05:32:22 -04:00
5f9235a731 Remove underlines between badges (#2851) 2024-06-12 15:30:28 -04:00
7a36a75c7c remove warning hook added during dispatch_model (#2843)
* remove-warning-hook

* add _accelerate_added_attributes

* add comments
2024-06-12 16:24:45 +02:00
f62854a281 Revert "Slight rename" (#2850)
This reverts commit a9869ea0dc49652e49607d5f111caed79ed5cb67.
2024-06-12 08:10:13 -04:00
a9869ea0dc Slight rename 2024-06-11 10:15:28 -04:00
6d59614603 doc: fix link (#2844) 2024-06-11 07:41:09 -04:00
2d74c0c077 fix(ci): remove unnecessary permissions (#2842) 2024-06-10 05:35:19 -04:00
40007b4e97 feat(ci): add trufflehog secrets detection (#2836) 2024-06-07 18:29:14 +02:00
7141881b1f Push new release version 2024-06-07 10:05:51 -04:00
f0049b2cfb Use shard saving from huggingface_hub (#2795)
* use shard saving from huggingface hub

* move import

* add shard_checkpoint back but with deprecation msg

* add shard_checkpoint back
2024-06-07 10:03:46 -04:00
83bad87559 fix fstr format (#2810)
* fix fstr format

* Quality pass
2024-06-07 08:46:21 -04:00
24d8b63fc3 Optimize the megatron plugin (#2822)
* Update megatron_lm.md

* Update accelerator.py

* Update dataclasses.py

* Update imports.py

* Update megatron_lm.py

* Update megatron_lm.py
2024-06-07 07:49:52 -04:00
4a83ee5382 monitor-interval, take 2 (#2833)
* monitor-interval

* Update defaults
2024-06-06 09:43:08 -04:00
05d240af95 Improve test speeds by up to 30% in multi-gpu settings (#2830) 2024-06-06 06:12:59 -04:00
bad2ce42ed Fix DeepSpeed config validation error by changing stage3_prefetch_bucket_size value to an integer (#2814) 2024-06-05 21:41:35 -04:00
30cb7ece76 Remove out-dated xpu device check code in get_balanced_memory (#2826)
* fix xpu device check

* simplify
2024-06-05 12:34:43 -04:00
b7fa2fa956 add cuda dep for a test (#2820)
* add cuda dep for a test

* hmmm
2024-06-03 08:37:44 -04:00
d5d378d64e State dictionary retrieval from offloaded modules (#2619)
* added get_state_dict_from_offloaded

* cleaned

* make style

* Update src/accelerate/utils/modeling.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* implemented suggestions, refactored, make style

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2024-06-03 14:16:07 +02:00
065e74d11a 4-bit quantization meta device bias loading bug (#2805)
* 4-bit quantization meta device bias loading bug: fixes #2742

* move condition

---------

Co-authored-by: mh <mh@mhs-Mac-mini.local>
2024-05-31 15:26:17 +02:00
86b6deaea1 Fix access error for torch.mps when using torch==1.13.1 on macOS (#2806)
* Fix access error for torch.mps when using torch==1.13.1

* Add missing parentheses

* add min_version

---------

Co-authored-by: Matthew Hoffman <matthew@protopia.ai>
2024-05-31 14:48:37 +02:00
b24a0ef5db New template (#2808) 2024-05-28 10:10:13 -04:00
e061edc6e7 fix comet test (#2804) 2024-05-28 13:45:24 +02:00
c3f422699a Fix type in accelerator.py (#2800)
* Fix type in accelerator.py

* Update accelerator.py
2024-05-24 19:38:43 -04:00
0553483638 Fix Wrong use of sync_gradients used to implement sync_each_batch (#2790)
* fix wrong use of sync_gradients to implement sync_each_batch as pointed out by @Nightmare-n

Signed-off-by: Yu Chin Fabian Lim <flim@sg.ibm.com>

* fix test

---------

Signed-off-by: Yu Chin Fabian Lim <flim@sg.ibm.com>
2024-05-23 10:55:52 -04:00
YH
415789d0e4 Add Elastic Launch Support to notebook_launcher (#2788)
* Support elastic launcher

* Update src/accelerate/launchers.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Typo

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-05-23 10:52:41 -04:00
hkz
ae472bac48 fix duplicate elements in split_between_processes (#2781)
* fix duplicate elements in split_between_processes

* add test

* use divmod

* fix apply_padding=True

* fix unused import
2024-05-23 10:51:49 -04:00
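
The "use divmod" bullet in #2781 refers to the standard trick for splitting a sequence without duplicating elements; a self-contained sketch of the idea (illustrative names, not the accelerate internals):

```
def split_for_process(inputs, num_processes, process_index):
    base, extras = divmod(len(inputs), num_processes)
    # The first `extras` processes each receive one extra element.
    start = process_index * base + min(process_index, extras)
    end = start + base + (1 if process_index < extras else 0)
    return inputs[start:end]


# Example: 10 items over 4 processes -> chunks of sizes 3, 3, 2, 2 with no overlap.
print([split_for_process(list(range(10)), 4, rank) for rank in range(4)])
```
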
4f2c2ba45c Fixup CLI test (#2796) 2024-05-23 09:06:14 -04:00
e26065a265 Upgrade huggingface's megatron to nvidia's megatron when use MegatronLMPlugin (#2501)
* nvidia-megatron

* Update megatron_lm.py

* Update megatron_lm.py

* ruff fix

* ruff format

* Update megatron_lm.py

* Update dataclasses.py

* Update megatron_lm.py

* Use the Megatron interface directly

---------

Co-authored-by: zhenwenqi <zhenwenqi_2022@qq.com>
2024-05-23 08:07:27 -04:00
1cb6fdcf7b FIX / FSDP : Guard fsdp utils for earlier PyTorch versions (#2794)
* guard fsdp utils

* Update src/accelerate/utils/fsdp_utils.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update src/accelerate/utils/fsdp_utils.py

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-05-21 19:29:30 -04:00
4ba436eccc Introduce shard-merging util for FSDP (#2772)
* Initial commit

* Now to test

* Store false

* Slight tweaks

* Fix naming

* Got it all working with tests

* Use not for safetensors arg

* rm change

* Add docs

* Adjust based on Marc's feedback

* Specify just weights

* Update tests to include CLI and swap namings

* Fin

* Rm unused

* Rm again
2024-05-16 13:49:50 -04:00
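
The shard-merging utility from #2772 is also callable from Python; a minimal sketch, assuming the `merge_fsdp_weights` helper and its arguments are as documented (the paths are placeholders):

```
from accelerate.utils import merge_fsdp_weights

# Merge sharded FSDP state-dict files into a single checkpoint, saved as
# safetensors by default (matching #2853 above).
merge_fsdp_weights(
    "ckpt/pytorch_model_fsdp_0",  # placeholder path to the sharded checkpoint
    "ckpt/merged",                # placeholder output directory
    safe_serialization=True,
)
```
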
91e8a3ced4 Skip tied weights disk offload test (#2782)
* skip

* fix

* quality

* fix comment
2024-05-16 14:09:58 +02:00
4ad4d28c49 Add arg from CLI to fix failing test (#2783) 2024-05-15 12:49:54 -04:00
befd87f043 Enable config for fsdp activation checkpointing (#2779)
* Enable config for fsdp activation checkpointing

* Fix ruff errors
2024-05-14 20:17:49 -04:00
abce3604f0 Skip deepspeed test (#2776)
* skip test

* style
2024-05-14 18:28:10 +02:00
27a607ea90 Fix small edge case in get_module_leaves (#2774)
* fix edge case

* fix
2024-05-14 11:52:51 +02:00
aa21174de9 fix minor typo (#2767) 2024-05-13 08:24:01 -04:00
6cf1cc0a39 optimize get_module_leaves speed (#2756)
* optimize get_module_leaves

* fix format

* Update modeling.py
2024-05-13 08:23:38 -04:00
bb465a9cf0 Sets default to PyTorch defaults based on backend (#2758)
* Amd

* Add timeout defaults to match pytorch

* forward contrib credits from discussions

* oop

---------

Co-authored-by: Julian Buchel <jubueche@users.noreply.github.com>
2024-05-13 05:41:15 -04:00
67308ca6ef Enable sharded cpu resume (#2762) 2024-05-10 11:39:37 -04:00
63772f6ac2 Revert "Simplify CLI args validation and ensure CLI args take precedence over config file." (#2763)
This reverts commit 724824abbe0aed8606661bbce5e057c0d2447794.
2024-05-10 11:22:56 -04:00
8798cf06ab fix cpu omp num threads set (#2755)
* fix cpu omp num threads set

* fix OMP_NUM_THREADS

* consider no-cpu usage

* fix style
2024-05-10 11:16:06 -04:00
47bb2dd53e Fix sagemaker config (#2753)
* Fix sagemaker

* Default to False

* Include fixes

* Nit

* Ignore launching
2024-05-10 09:09:36 -04:00
724824abbe Simplify CLI args validation and ensure CLI args take precedence over config file. (#2757)
* Remove unnecessary args.debug statement

* Add expected test failure for config sub-sections

* Remove redundancy in config file args parsing

* Make config file --cpu logic more explicit
2024-05-09 09:30:13 -04:00
YH
afc2c99e6a Fix duplicate environment variable check in multi-cpu condition (#2752)
* Del duplicted key

* Apply format
2024-05-07 14:27:29 -04:00
0fb95a2d3b Fix max_memory assignment (#2751) 2024-05-07 11:53:25 +02:00
7ac153f404 LOMO / FIX: Support multiple optimizers (#2745) 2024-05-06 08:28:14 -04:00
0f1b91bb74 Fix stacklevel in logging to log the actual user call site (instead of the call site inside the logger wrapper) of log functions (#2730)
* fix stacklevel in logging to log info about the actual user callsite

* Add two tests for stacklevel in logging

---------

Co-authored-by: luowyang <luowyang@github.com>
2024-05-06 08:21:19 -04:00
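
The `stacklevel` fix from #2730 is about making a thin logging wrapper report its caller rather than itself; a generic sketch of the pattern (not the accelerate logger itself):

```
import logging

logger = logging.getLogger(__name__)


def log_info(msg, *args, **kwargs):
    # stacklevel=2 skips this wrapper frame, so filename/lineno in the record
    # point at the user call site instead of this function.
    logger.info(msg, *args, stacklevel=2, **kwargs)
```
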
d1eb44c856 Fixed an incorrect conditional check when configuring enable_cpu_affinity (#2748) 2024-05-06 08:20:22 -04:00
11a363287a Update modeling.py by adding try-catch section to skip the unavailable devices (#2681)
* Update modeling.py to ignore the unavailable devices

* Update src/accelerate/utils/modeling.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

Update src/accelerate/utils/modeling.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

Update src/accelerate/utils/modeling.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

Update src/accelerate/utils/modeling.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2024-05-06 12:44:35 +02:00
LFu
5cfe409443 Add feature to allow redirecting std streams into log files when using torchrun as the launcher. (#2740)
* Add --log-dir/--log_dir to `distributed_args` to allow redirecting std
streams into log files when using torchrun as the launcher. Used with
--tee this will achieve a similar effect as running with `torchrun --tee X
--log-dir=logs`.

* Deleted the unnecessary "--log-dir" argument following suggestion from
@muellerzr, since it will be automatically generated from "--log_dir".
2024-05-04 15:03:05 -04:00
5b3a7f3892 Update setup.py + test falures found during release 2024-05-03 10:40:25 -04:00
060361fca3 Fix tests on main (#2739)
* Start

* Fixings
2024-05-03 10:18:20 -04:00
6ac27e2383 FEAT: Add LOMO optimizer (#2695)
* add v1 lomo

* final fixes

* fix

* Update src/accelerate/accelerator.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* add comment

* more comments

* fix

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-05-03 10:55:44 +02:00
YH
ba5f49219f Fix offload device type (#2717) 2024-05-02 17:07:24 +05:30
2c767338f2 Fix Documentation in FSDP and DeepSpeed Concept Guide (#2725)
* address part of stats comments

* automatically set sync_module_states if low_cpu_mem is set

* Apply suggestions from @stas00

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

* add links from fsdp and deepspeed docs. fix deepspeed imports

* replace raise in accelerate.launch

---------

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
2024-05-01 09:25:18 -04:00
234a85506d Docs: Fix build main documentation (#2729) 2024-05-01 08:18:52 -04:00
232ebd159a Fix sampler (#2728) 2024-05-01 12:20:26 +02:00
4d3d4bc88f fix sampler serialization (#2723)
* fix sampler serialization

* add getter and setter for sampler

* more maintainable
2024-04-30 11:19:05 +02:00
2b1e7bd462 Fixup free_memory to deal with garbage collection (#2716)
* Fixup cleanup

* Return

* Fixup test

* Fix test

* DeepSpeed

* More careful guard

* bring back as none

* passing

* bring forward
2024-04-30 03:28:57 -04:00
c7e5e41b8c Segment out a deepspeed docker image (#2707)
* Segment out a deepspeed docker image

* Update readme

* Keep pinned ds
2024-04-29 11:25:22 -04:00
9557598c45 Add Upcasting for FSDP in Mixed Precision. Add Concept Guide for FSPD and DeepSpeed. (#2674)
* draft fsdp vs ds

* reframe to migration doc

* updated functionality section

* cast to float32

* improvements to float32 casting

* some cleanup

* addressed @pacman100's comments

* Apply some of @muellerz suggestions

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* change to subsections

* changed the manner upcasting warnings are surfaced

* update document to discuss fsdp and ds plugins. minor fixes.

* @muellerzr's new suggestions

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* explain all-or-nothing

* add @pacman100's comments

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>

* minor fix

---------

Co-authored-by: Yu Chin Fabian Lim <flim@sg.ibm.com>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
2024-04-29 11:19:03 -04:00
156331aecd allow gather_for_metrics to be more flexible (#2710)
* allow gather_for_metrics to be more flexible

* style

* update doc

* fix

* style

* typo

* typo

* Update src/accelerate/accelerator.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* remove distributed

* clean

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-04-29 12:14:22 +02:00
cd7df4117d fix bnb multi gpu training (#2714)
* fix bnb multi gpu training

* style

* elif instead

* fix

* style

* fix
2024-04-26 15:52:15 +02:00
6af157ea93 Add diffusers to req (#2711) 2024-04-25 08:31:54 -04:00
83317b3081 add distributed examples (#2672)
* add distributed examples

* typo

* uncomment

* require multigpu

* add stable diffusion example

* style

* add copyright

* style

* remove tqdm

* Apply suggestions from code review

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* add comments

* remove print

* More comments

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-04-25 11:13:56 +02:00
e831bcb3b1 Change dataloader send_to_device calls to non-blocking (#2685)
* Change dataloader send_to_device calls to non-blocking

* add non_blocking to dataloader dataclass

* add dataloader non blocking option from dataclass

* add handling for non blocking to accelerator

* add notes on non-blocking transfers to quicktour

* link to dataloaderconfiguration in docs

* linting

* "requires" -> "recommended" on non-blocking setting

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

---------

Co-authored-by: drhead <a@a.a>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-04-24 15:45:57 -04:00
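
Per the bullets in #2685, the non-blocking transfers are switched on through the dataloader configuration; a minimal sketch, under the assumption that `non_blocking` is a `DataLoaderConfiguration` field as described above:

```
from accelerate import Accelerator
from accelerate.utils import DataLoaderConfiguration

# Non-blocking host-to-device copies are recommended (not required) together
# with pin_memory=True on the underlying DataLoader.
dataloader_config = DataLoaderConfiguration(non_blocking=True)
accelerator = Accelerator(dataloader_config=dataloader_config)
```
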
092c3af0c4 Add version checks for the import of DeepSpeed moe utils (#2705)
* fix import for moe utils

* Apply suggestions from code review

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-04-25 00:38:56 +05:30
3e944c5583 add cann version info to command accelerate env (#2689) 2024-04-24 09:17:09 -04:00
f67737363c Do a pip freeze during workflows (#2704)
* Do a pip freeze

* No need to do source activate on non-conda workflow
2024-04-24 08:46:13 -04:00
f7daaaa305 fix support (#2699) 2024-04-23 15:32:43 +02:00
3dc131cd8d Add source code for DataLoader Animation (#2696)
* dl animation

* oops

* Export
2024-04-23 04:28:28 -04:00
ef0f62c12a Simplify test logic (#2697)
* simplify test logic 😅

* 😅
2024-04-23 02:49:55 +05:30
baafaf4a6e Fix the rng states of sampler's generator to be synchronized for correct sharding of dataset across GPUs (#2694)
* Fix the rng states of sampler's generator to be synchronized for correct sharding of dataset across GPUs

* add tests
2024-04-22 13:50:04 -04:00
abc86c0e35 Enable BF16 autocast to everything during FP8 + some tweaks to enable FSDP (#2655)
* Basic autocasting stuff

* Delay fp8 autocast until after DDP wrapping

* More fixes

* Bookmark: without dtype change

* Bookmark: with dtype changes

* Different alternative, better results

* Didn't matter what order, same result

* Revert + maintain

* Fin

* Refactor based on feedback

* native_amp bool

* Final nits
2024-04-18 10:14:35 -04:00
4450cb3132 Deprecate tqdm args + slight logic tweaks (#2673)
* Deprecate + slight logic fix

* Maybe fix test?
2024-04-17 06:26:55 -04:00
fd0dcd1c45 fix backend check (#2670)
* fix backend check

* reformat backend check

* Update src/accelerate/state.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update src/accelerate/state.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* raise value error if backend mismatch

* Update src/accelerate/state.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-04-16 21:22:27 -04:00
f478201c28 Pin DS...again.. (#2679) 2024-04-16 12:07:59 -04:00
c7046845e7 Fix deepspeed moe test with version check (#2677) 2024-04-16 10:22:41 -04:00
701e24c539 Handle MoE models with DeepSpeed (#2662)
* Handle MoE models with DeepSpeed

* Update launch.py

* Update test_deepspeed.py

* Update test_deepspeed.py

* Update src/accelerate/utils/dataclasses.py

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* address comments

* Update deepspeed.md

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2024-04-16 16:11:49 +05:30
37da848e6c tqdm: *args should come ahead of main_process_only (#2654)
* Update tqdm.py

* add unit test

* add test to test_utils

* ruff changes
2024-04-15 12:30:28 -04:00
c470a1336a Revert "fix backend check (#2652)" (#2669)
This reverts commit 2fc48c7eeea67e747a39be2dec822b07a27bae71.
2024-04-15 04:30:33 -04:00
581a390e2f Megatron plugin can support NPU (#2667) 2024-04-15 03:02:13 -04:00
2fc48c7eee fix backend check (#2652)
* fix backend check

* fix ccl check
2024-04-15 02:59:29 -04:00
1024231133 Add MLU rng state setter (#2664) 2024-04-15 02:59:10 -04:00
5ca095a34f Fix test_from_pretrained_low_cpu_mem_usage_measured failure (#2644)
This test measures the change in memory usage from model loading when low_cpu_mem_usage is used, so the default device is cpu. However, checking whether other devices are available imports extra packages, which changes memory usage and interferes with the test results.

Signed-off-by: yuanwu <yuan.wu@intel.com>
2024-04-12 18:23:28 +02:00
b77c65398c Don't use deprecated Repository anymore (#2658)
* Don't use deprecated Repository anymore

* oops

* Update requirements.txt
2024-04-12 09:05:54 -04:00
YH
a91691463b Fix deepspeed plugin attr type (#2646) 2024-04-12 15:29:16 +05:30
5056d327f8 Allow "auto" for gradient clipping in YAML (#2649)
* Allow "auto" for gradient clipping in YAML

* Update src/accelerate/utils/dataclasses.py

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>

* Make style

---------

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
2024-04-12 13:44:39 +05:30
c0a37015e3 Typo fix in tracking.md (#2650) 2024-04-10 17:16:11 -04:00
e9b9c7d022 device agnostic testing for hooks&utils&big_modeling (#2602)
* device agnostic testing for hooks&utils&big_modeling

* fix failed test cased on cpu

* make style
2024-04-10 13:56:50 -04:00
6c09584f73 add strict arg to load_checkpoint_and_dispatch (#2641) 2024-04-10 11:20:07 +02:00
b8c8583953 add third-party device prefix to execution_device (#2612)
* add xpu device_map

* fix
2024-04-09 13:47:41 +02:00
df485ae1e3 Parenthesis on xpu_available (#2639) 2024-04-09 06:33:38 -04:00
6386f70103 Fix up state with xla + performance regression (#2634)
* Fix up state with xla

* use backend

* Change last time

* Comment

* Slight tweak to use dtype
2024-04-09 06:06:28 -04:00
6d92198ef4 Schedule free optimizer support (#2631)
* Schedule free optimizer support

* Fin

* Doc

* Add in eval

* Add to exclude

* Fix module issue
2024-04-08 11:28:27 -04:00
16488be9a4 Update version 2024-04-05 13:11:05 -04:00
685bd3a439 Clean 2024-04-05 13:05:05 -04:00
2e69948c1a Patchfix 2024-04-05 13:04:44 -04:00
7531e8c13e Unpin hub (#2625) 2024-04-04 10:33:49 -04:00
8e439de744 Link to bash in env reporting (#2623)
* link to bash in env reporting

* Not found

* Use check_output

* Support windows
2024-04-04 09:47:08 -04:00
d96a5aa730 Fix links in Quick Tour (#2617) 2024-04-03 12:47:31 -04:00
d7bcd85d4d fix llama example for pippy (#2616)
* fix llama example

* remove llama from tests
2024-04-03 08:22:16 -04:00
d927b8f3a2 Default false for trust_remote_code (#2607) 2024-04-02 10:58:24 -04:00
f579d9550d Pin hub for tests (#2608) 2024-04-02 10:58:17 -04:00
bbecad4e8e Allow for force unwrapping (#2595)
* Try new method

* Clean a bit more

* Use spmd

* reported typo

* Forward contrib credits

* Comment

* Comments

---------

Co-authored-by: Shubham Krishna <shubhamkrishna.ism@gmail.com>
2024-04-02 09:59:07 -04:00
b82999a84b Re-put in zero3 failure 2024-04-02 09:57:07 -04:00
11568e562c Refactor PartialState and AcceleratorState (#2576)
* Refactor AcceleratorState

* Should be near complete now

* Last missing piece

* Make my way to the acceleratorstate

* Include update to global var

* Don't use global

* gpu -> cuda

* Don't use update for dict, easier to read

* Fix tests

* stash

* Getting closer...

* Needed to spawn at the very end after env was setup

* Explain set_device before deepspeed

* Make docstring more accurate

* Early return instead

* Delineate blocks

* Make prepare_backend return state + backend for clarity/less magic

* Check if it's None and then return

* Use a dataclass

* Forgot one

* Clean

* Style

* Docstring fix?

* Fix deepspeed

* Move slighly

* Final fix

* Fix state for deepspeed

* rm comment
2024-04-02 09:55:34 -04:00
d9a1b8f975 Resolve ZeRO-3 Initialization Failure in Pre-Set Torch Distributed Environments (huggingface/transformers#28803) (#2578)
* Resolve ZeRO-3 Initialization Failure in Pre-Set Torch Distributed Environments (huggingface/transformers#28803)

* add unit test for deepspeed zero3 integration

* update test case to keep it Accelerate-specific
2024-04-01 10:46:08 +05:30
b634388ef1 Fix warning log for unused checkpoint keys (#2594)
As per title
2024-03-28 15:32:44 +01:00
4d415f2129 Allow notebook_launcher to launch to multiple GPUs from Colab (#2561)
* changed notebook_launcher to not ignore num_processes parameter on colab

* clarified documentation on notebook_launcher (that config file is ignored by notebook_launcher)

* simplified logic in launcher to retain prev elif, imported get_gpu_info from environment

* run quality and style fixes

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-03-26 22:49:14 -04:00
829171a9a4 [docs] Fix kwarg docstring (#2590)
* fix kwarg docstrings

* **
2024-03-26 13:24:23 -07:00
5a232de2fa Expound PartialState docstring (#2589)
* Expound docstring

* Reword

* Weird spacing

* Move example

* Move to solve formatting issues

* Link to the spec class

* Take 3

* Copy kwargs format to others

* Take 4...

* Special thingy
2024-03-26 13:41:23 -04:00
5f8048cd04 Guard stateful objects (#2572)
* Guard stateful objects

* Add test

* Add a test

* MOre tests

* Update AcceleratorState

* Decision: early return

* Test accelerator as well

* use right assert check

* Use getattr
2024-03-26 12:04:40 -04:00
4378b560e8 Fix load_checkpoint_in_model behavior when unexpected keys are in the checkpoint (#2588)
* fix load_checkpoint_in_model when unexpected keys are in the checkpoint

* fix test

* style
2024-03-26 23:36:00 +08:00
8644e23b71 Refactor and improve model estimator tool (#2581)
* Start

* Stash

* Mark

* Better mixed precision

* Can confirm transformerengine

* Finish refactor

* Update training usage

* Slight tweak

* Fin

* Fixup test

* Add comment about FP8
2024-03-26 10:33:14 -04:00
b2fc3a3b0e Refactor affinity and make it stateful (#2579)
* Move under initialized check

* One more

* Numa affinity

* Docs

* Import

* Add verbosity

* Apply suggestions from code review

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Improve import err

* Test + fix bug

* Update src/accelerate/utils/environment.py

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

* Clean

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
2024-03-26 09:51:37 -04:00
UNI
290446d446 Update data_loader.py to Ensure Reproducibility in Multi-Process Environments with Dataloader Shuffle (#2584)
* Update data_loader.py

* fix reformatting bug

* add unit test

* add Accelerator initialization in unit test

* move unit test of seedable sampler to test_script.py

* reformatted
2024-03-25 15:04:05 -04:00
85a75d4c3d [docs] Missing functions from API (#2580) 2024-03-22 13:40:21 -04:00
f94f0ff912 Allow for custom deepspeed env files (#2566)
* Allow for any .env file

* Messed up merge conflicts
2024-03-22 08:20:43 -04:00
1b2e634970 Rm uv install (#2577) 2024-03-22 07:59:18 -04:00
dd62fc90ce Unpin deepspeed (#2570) 2024-03-21 09:42:03 -04:00
10b418495e Allow for setting deterministic algorithms (#2569)
* Allow for setting deterministic algorithms

* Expound doc

* English fails me again
2024-03-21 09:12:02 -04:00
c2f193a25c Improve deepspeed env gen (#2565)
* Improve .deepspeed_env generation

Co-authored-by: Rick Lamers <ricklamers@gmail.com>

* Leave for a latter date

---------

Co-authored-by: Rick Lamers <ricklamers@gmail.com>
2024-03-20 14:29:27 -04:00
1812152392 Add log message for RTX 4000 series when performing multi-gpu inference with device_map (#2557)
* add log message for RTX 4000 series when using device_map multi-gpu

* style

* style

* switch to warning

* Update src/accelerate/big_modeling.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-03-20 12:30:41 -04:00
b8b353b7a7 Add NUMA affinity control for NVIDIA GPUs (#2535)
* Beta test, could break!

* Cleanup and get rid of unneded files

* Work on integration

* Add numa affinity to config

* Add to config command

* Fix some of Stas' notes

* Use raw os to make things easier

* Update questionnaire

* Use CPU_AFFINITY instead

* Change doc

* Update test

* Fix numa, I submit

* include ref to original

* Fix

---------

Co-authored-by: zach.mueller@huggingface.co <muellerzr@ip-26-0-160-100.ec2.internal>
2024-03-20 11:12:30 -04:00
f2778d6502 Add Cambricon MLU accelerator support (#2552)
* Add Cambricon MLU accelerator support

* up mlu support for test

* fix mlu device MULTI_MLU

* Update src/accelerate/utils/imports.py

it's beautiful !

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* up mlu for quality check

* fix mlu device longTensor error

* fix mlu device tensor dtype check

* fix mlu device send_to_device with torch dynamo error

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-03-20 10:59:00 -04:00
2ad42e77c3 🚨🚨🚨Move to using tags rather than latest for docker images and consolidate image repos 🚨 🚨🚨 (#2554)
* Move to using tags

* Add readme

* Include hf repo description in auto-build

* Test

* Even with an a...

* Rm readme things

* Symlink README for docker repo

* Include readme

* Fin

* Try now?

* Finally got symlink working

* Let's try this

* Forgot runs-on

* Still perm issues, revert
2024-03-18 09:35:32 -04:00
e8aaee5d9b Include working driver check (#2558)
* Include working driver

* Style
2024-03-15 10:12:22 -04:00
910c1b6a8f split_between_processes for Dataset (#2433)
* split_between_processes for Dataset

* Update state.py

* remove param datasets.Dataset from split_between_processes, add note to function doc

* is_datasets_available is a function not a var

* reformat to make ruff happy

* isinstance(inputs, Dataset) only if is_datasets_available()

* add test_split_between_processes_dataset

* split_between_processes for Dataset: pad if apply_padding

* removed trailing whitespace

* complete test_split_between_processes_dataset

* fix test_split_between_processes_dataset for single GPU
2024-03-14 17:39:47 -04:00
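
For the `datasets.Dataset` support added in #2433, usage mirrors the existing list/tensor case; a minimal sketch (the dataset name is a placeholder):

```
from accelerate import PartialState
from datasets import load_dataset

state = PartialState()
dataset = load_dataset("imdb", split="train")  # placeholder dataset

# Each process receives a contiguous shard; apply_padding=True would pad the
# last shards so every process gets the same number of rows.
with state.split_between_processes(dataset, apply_padding=False) as shard:
    print(f"process {state.process_index}: {len(shard)} rows")
```
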
92d3240bb5 Add mapping main_process_ip and ip-master_addr when not using standard as deepspeed launcher (#2495)
Co-authored-by: 정수현 <soohyun.jung@ten1010.io>
2024-03-14 16:43:55 +05:30
02a8a9a3a7 Fix test_script.py on TPU v2/v3 (#2542)
* fix replication

* Set generator on each thread. The test passed.

* remove comments

* fix up

* fix format

* fix comment

* not setting the dataloader.batch_sampler
2024-03-13 13:20:16 -04:00
ee163b66fb Update version 2024-03-12 11:55:22 -04:00
354db5b5f7 Use uv instead of pip install for github CI (#2546)
* Test uv

* Workflow dispatch

* Modify

* Setuptools...apparently?

* No need for -y

* Rm cache

* Rm workflow dispatch

* Trainer tests

* Might need to be -e

* Try keeping it at absolute home

* Undo integration
2024-03-12 08:06:27 -04:00
92b1ad01f3 Update FSDP mixed precision setter to enable fsdp+qlora (#2544)
* update FSDP mp setter to enable fsdp+qlora

* fixes

* Update test_fsdp.py
2024-03-12 16:17:29 +05:30
60bfdaa934 Allow Gradients to be Synced Each Data Batch While Performing Gradient Accumulation (#2531)
* add force flag in _do_sync class method and add sync_each_batch in GradientAccumulationPlugin

* modify test_sync to consider sync_each_batch. fix old tests involving optimizer

* run style checker

* minor refactoring based on @muellerzr's comments.

* update docs: gradient_synchronization.md

* Apply @muellerzr's documentation suggestions.

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Apply suggestions from @BenjaminBossan

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

---------

Co-authored-by: Yu Chin Fabian Lim <flim@sg.ibm.com>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2024-03-11 11:13:31 -04:00
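
Following the bullets in #2531, `sync_each_batch` lives on the gradient accumulation plugin; a short sketch (assuming the field name matches the PR description):

```
from accelerate import Accelerator
from accelerate.utils import GradientAccumulationPlugin

# Sync gradients on every batch instead of only on the final accumulation step,
# trading extra communication for lower peak memory in some setups.
plugin = GradientAccumulationPlugin(num_steps=4, sync_each_batch=True)
accelerator = Accelerator(gradient_accumulation_plugin=plugin)
```
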
16eb6d76bf Remove extra double-dash in error message (#2541)
Error messages should read `--main_process_port`, not `----main_process_port`. Users who copy and paste the message as-is will get this error:
```
Accelerate CLI tool: error: unrecognized arguments: ----main_process_port
```
2024-03-10 08:59:44 -04:00
c8acfa700b [docs] Troubleshoot (#2538)
* reorg and light edits

* fix hfoption

* move doc

* move
2024-03-08 13:28:42 -05:00
e70e3c87de Overdue email change... (#2534) 2024-03-08 12:55:42 -05:00
bc8dfe3caf init (#2438) 2024-03-08 11:36:10 -05:00
e3d324240f Check if the buffers fit GPU memory after device map auto inferred (#2412)
* Check if the buffers fit GPU memory after device map auto inferred

  * Some models, like TheBloke/WizardCoder-33B-V1.1-GPTQ, contain a huge buffer,
    which may cause OOM on GPU memory if offload_buffers is not used. This
    commit adds a check for such a case.

* Minor refactors.

* Add missing assertions
2024-03-08 11:05:38 -05:00
10882eeddd Update link to dynamo/compile doc (#2533) 2024-03-07 09:36:43 -05:00
145a98fc12 Update the default behavior of zero_grad(set_to_none=None) (#2472)
The wrapped optimizer now clears gradients by setting them to None by default when `set_to_none=None` is passed. This aligns with `torch.optim.Optimizer` and saves memory.
2024-03-07 09:31:21 -05:00
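
The torch semantics being matched by #2472, shown as a tiny self-contained example:

```
import torch

model = torch.nn.Linear(4, 4)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

model(torch.randn(2, 4)).sum().backward()
optimizer.zero_grad()                   # default: gradients are set to None (saves memory)

model(torch.randn(2, 4)).sum().backward()
optimizer.zero_grad(set_to_none=False)  # legacy behaviour: zero-filled gradient tensors
```
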
64ae9ea3fe Enable using dash or underscore for CLI args (#2527)
* New approach

* New version, good

* Complete rewrite, and works for testing

* More nits

* Simplify option_string filtering

* More suggestions from codereview

* Add test

* Fix broken tests
2024-03-07 07:22:34 -05:00
8aa72b9748 Launch mpirun from accelerate launch for multi-CPU training (#2493)
* Update accelerate config and launch to abstract out mpirun

* Fix var

* Documentation updates, updating the launch script to work with other MPI programs, and fixing the nlp example when using IPEX

* Style fixes

* Add a test

* Style fixes

* Formatting fix

* Updates based on review feedback.

* Remove model.train()

* Doc update

* Update doc regarding the accelerate config with the old method of mpirun and accelerate

* Fix typo in comment

* Quality and test updates

* Updates based on review feedback

* Quality fix

* Fix mock patch path

* Updates based on review feedback

* Quality fixes
2024-03-06 13:52:08 -05:00
97d115a266 Remove unnecessary env=os.environ.copy()s (#2449) 2024-03-06 06:36:56 -05:00
63cfd9efdc qbitstensor compatibility (#2526) 2024-03-04 17:55:28 -05:00
6cf8221a09 Don't manage PYTORCH_NVML_BASED_CUDA_CHECK when calling accelerate.utils.imports.is_cuda_available() (#2524)
* Don't manage PYTORCH_NVML_BASED_CUDA_CHECK

PYTORCH_NVML_BASED_CUDA_CHECK will use an NVML-based check when
determining how many devices are available. That's useful for preventing
CUDA initialization when doing that check (or calling
`torch.cuda.is_available()`). Instead of manipulating that env var, one
can call the torch utility `_device_count_nvml` directly, leaving the
env var untouched.

* Uses env var instead of private torch function

* Fixes flake8 check
2024-03-04 14:18:17 -05:00
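
The `PYTORCH_NVML_BASED_CUDA_CHECK` variable mentioned in #2524 is a standard PyTorch switch; a minimal sketch of opting into it yourself (the point of the fix is that accelerate no longer toggles it for you):

```
import os

# Ask PyTorch to count devices via NVML so that the availability check does
# not initialize the CUDA context in this process.
os.environ["PYTORCH_NVML_BASED_CUDA_CHECK"] = "1"

import torch  # noqa: E402

print(torch.cuda.is_available())
```
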
7a2feecad4 Add copyright + some ruff lint things (#2523)
* Copyright and ruff stuff

* lol
2024-03-04 09:14:31 -05:00
ee004674b9 fix typo in launch.py (#2516) 2024-03-03 04:51:57 -05:00
65544d8fe9 [docs] Fix typos (#2490)
* fix typos

* fix typos

* fix typo

* fix typos

* fix typos

* fix typos

* fix typo

* fix typo

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-03-01 12:19:05 -05:00
5fce525f90 Fix edge case in infer_auto_device_map when dealing with buffers (#2511)
* fix buffer

* style
2024-03-01 10:32:31 -05:00
ca37b0e471 Fixed 0MiB bug in convert_file_size_to_int (#2507) 2024-02-29 09:32:59 -05:00
82a1258ffc Remove offline stuff (#2509)
* Better check

* Fully remove

* Trail
2024-02-29 09:17:37 -05:00
21b225e8d5 Check if hub down (#2506)
* Let's try it out

* Let's try this out

* Some more cases

* String

* Require hub online for estimator

* Add CI checker to alert on hub status

* Format

* Oops death by ctrl z

* Fix import
2024-02-28 18:56:37 -05:00
25ee6ab3b7 [docs] Quicktour (#2456)
* first draft

* fix callouts

* save, load, training features

* fix hfoption tag

* execution, tpu

* fix toctree

* move from accelerator api

* feedback
2024-02-28 15:45:41 -08:00
2d3e822d11 quanto compatibility for cpu/disk offload (#2481)
* quanto compatibility

* fix
2024-02-28 18:05:14 -05:00
811dc1e464 add custom dtype INT2 (#2505)
* add-custom-dtype

* style
2024-02-28 18:05:02 -05:00
c59c6c9bff [docs] Divide training and inference (#2466)
* divide training and inference

* nest
2024-02-28 09:01:25 -08:00
422bd23f3f Docstring fixup (#2504)
* Docstring fixup

* Tense
2024-02-28 11:56:52 -05:00
c0b16b684f [docs] Accelerator API (#2465)
* update

* make style

* align toctree title

* feedback
2024-02-28 08:55:36 -08:00
78b15561a1 fix link typo (#2503) 2024-02-28 10:48:34 -05:00
8f9673f509 hotfix test 2024-02-27 13:30:37 -05:00
9c071103f0 Remove all cases of torchrun in tests and centralize as accelerate launch (#2498)
* Migrate torchrun to a full helper for tests

* keep old namings

* Metrics too

* Fix examples

* Bronked tests

* Refactor

* No need for setup
2024-02-27 13:09:05 -05:00
1127e670ca Fix CI tests due to pathlib issues (#2491)
* Fix tests

* Fixup tests

* Fix test

* Actually cast to string!

* Fixup deepspeed

* fsdp and deepspeed fix

* Since we're doing this, may as well get it all

* Stragglers

* Split only if we require config_file

* Make list

* Only convert if it's a path

* type

* Other func

* rm parenth
2024-02-27 10:39:31 -05:00
fa83efc33e [FIX] allow Accelerator to detect distributed type from the "LOCAL_RANK" env variable for XPU (#2473)
* add LOCAL_RANK

* style
2024-02-27 09:41:51 -05:00
4aa71049c3 Free mps memory (#2483) 2024-02-26 15:14:19 -05:00
c0b441f6be Fix TPU with new XLA device type (#2467)
* Fix TPU after new `XLA` device type

* use `torch_xla.runtime.device_type`

* format
2024-02-26 14:50:21 -05:00
34fdddd7df Context manager fixes (#2450)
* Ban use of `os.*env`

* Fix `clear_environment` to actually clear environment variables

Assigning to `os.environ` does not clear the environment (Ruff B003)

* Have environment context managers restore state even if the block raises

* Add tests for environment CMs
2024-02-26 14:35:06 -05:00
3fb9a3a231 DOC: Fixes to Accelerator docstring (#2443)
* DOC Fixes to Accelerator docstring

- Add more links to accelerator classes where applicable
- Fix a typo: KwargHandler => KwargsHandler

* Fix syntax issues

Not sure how to add a link when the type is `list[SomeType]`, so just
removed it for now.

* Fixing link for KwargsHandler

* Add KwargsHandler to API docs

* Also add doc entry to kwargs.md
2024-02-26 14:11:36 -05:00
065d88729b Replace os.path.sep.join path manipulations with a helper (#2446)
* Replace `os.path.sep.join` path manipulations with a helper

* Fix `base_cmd` being modified in CLI tests
2024-02-26 14:10:23 -05:00
67e698cf4d Add pre-commit configuration (#2451) 2024-02-26 14:05:24 -05:00
46ac6c9bba Use grad-accum on TPU (#2453)
* Use grad-accum on TPU

* Better logic
2024-02-26 14:03:57 -05:00
9b24f56e42 Fix wrong is_namedtuple implementation (#2475)
* fix

* add test
2024-02-26 12:11:03 +01:00
f20445d4ac Fix the pytest version to be less than 8.0.1 (#2461)
* Fix the pytest version to be less than 8.0.0

We're getting errors such as:

> /opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/transformers/testing_utils.py:129: in <module>
>     from _pytest.doctest import (
> E   ImportError: cannot import name 'import_path' from '_pytest.doctest' (/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/_pytest/doctest.py)

* Update setup.py

Co-authored-by: fxmarty <9808326+fxmarty@users.noreply.github.com>

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
Co-authored-by: fxmarty <9808326+fxmarty@users.noreply.github.com>
2024-02-23 16:03:29 -05:00
97d2168e59 Check for None (#2452) 2024-02-15 10:38:54 -05:00
79016eb163 Fix test 2024-02-14 14:38:01 -05:00
164193fa7e [Big deprecation] Introduces a DataLoaderConfig (#2441)
* Deprecate and introduce dataloader_config

* Update docs

* Doc nits

* More tests, adjust based on PR review

* Fixup tests

* Nits

* Update docs/source/quicktour.md

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Clean

* Actually create one

* Forgot to change one

* Use pytest

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2024-02-14 13:26:02 -05:00
482a9f9fa4 Point to right file 2024-02-14 12:52:49 -05:00
d7de8d1794 Include pippy_file_path (#2444) 2024-02-14 11:24:07 -05:00
b443be70fb Make torch xla available on GPU (#2176)
* Make torch xla available on GPU

* format code

* fix documentation build error

* update according to the comments

* Replace DistributedType.TPU with DistributedType.XLA

* make all ut pass

* format code

* update comments

* skip test

* format code

* skip FSDPPluginIntegration for torchxla

* bring back custom_sampler_check

* fix ut

* format code

* format code

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-02-14 10:19:25 -05:00
613ad7089a Fix warning when dispatching model (#2442)
* Fix warning when moving the model

* oups
2024-02-14 09:06:14 -05:00
13e79ccfab Enable more Ruff lints & fix issues (#2419)
* Remove antiquated flake8 and isort configuration

* Bump to Ruff 0.2.1

* Explain ruff options

* Autofix Ruff B010 (static `setattr`)

* Autofix Ruff B009 (static `getattr`)

* Enable Ruff UP (not UP007); auto-fix

* Fix remaining Ruff UP complaints

* Fix a couple more format calls
2024-02-14 08:59:42 -05:00
aba3b8c72f Prefer is_torch_tensor over hasattr for torch.compile. (#2387)
* Prefer `is_torch_tensor` over `hasattr` for `torch.compile`.

`torch.compile` breaks when using `hasattr` but succeeds when using `isinstance(torch.Tensor)`.  This commit short-circuits the `hasattr` call for `torch.Tensor`s if possible.

Note: `is_npu_available` is also not torch.compile compatible due to (1) lru_cache and (2) importlib checks, so I've moved it into the try block, catching the AssertionError instead.

* Fix torch.device("npu").

This is not available in non-npu pytorch. Note that
torch.device automatically assigns an index when created as torch.device("npu"), so overwriting device with `"npu:0"` is only required if device is a string "npu".

* Remove unittest.main execution.

* Fix style broken by merge save.

* Import operations functions directly.

* fix style

* Fix imports attempt 2.

* Re-raise error if no NPU available.
2024-02-14 08:59:28 -05:00
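
A compact sketch of the short-circuit described in #2387 (an illustrative helper, not the exact accelerate code): check `isinstance(..., torch.Tensor)` first and only fall back to `hasattr` for non-tensors, since `hasattr` on tensors breaks under `torch.compile`:

```
import torch


def has_to_method(obj) -> bool:
    if isinstance(obj, torch.Tensor):
        return True  # fast path that torch.compile can trace
    return hasattr(obj, "to")
```
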
70cdf5fe52 Make test assertions more idiomatic (#2420)
* Codemod `unittest` assertions into native assertions

With https://github.com/akx/codemod-unittest-to-pytest-asserts

* Use plain asserts instead of `assertDict` and `assertList`

Done with

```
ast-grep run --pattern 'self.assertDictEqual($A, $B)' --rewrite 'assert $A == $B' -l python -i
ast-grep run --pattern 'self.assertListEqual($A, $B)' --rewrite 'assert $A == $B' -l python -i
```

* DRY some Deepspeed tests
2024-02-13 14:23:18 -05:00
b38590a28a fix tied_pointers_to_remove (#2439) 2024-02-13 16:07:06 +01:00
5318bc7733 Dev version 2024-02-13 10:04:34 -05:00
ef68b4655c Fix seedable sampler logic and expound docs (#2434)
* Fix and add more docs

* Add tests + ensure working

* Fixup all tests!
2024-02-13 09:19:42 -05:00
ecebfa19c9 3.9 image (#2436) 2024-02-12 15:02:32 -05:00
5a39359fb2 Fix test (#2435) 2024-02-12 14:23:36 -05:00
b3d2111708 Version 0.28.0.dev 2024-02-09 10:51:07 -05:00
f75c6245ba [Fix] make all tests pass on XPU (#2427)
* fix tests

* style
2024-02-09 10:11:41 -05:00
9c1d5bac15 bug fix (#2426) 2024-02-09 10:11:08 -05:00
b0b867da85 Fix fp8 things (#2403)
* Fix fp8 things

* if
2024-02-09 10:03:29 -05:00
433d693b70 [FIX] fix the wrong nproc_per_node in the multi gpu test (#2422)
* bug fix

* style fix
2024-02-09 10:02:28 -05:00
c3aec59b12 Migrate pippy examples over and run tests (#2424)
* Migrate examples over

* Finish updating doc

* torchpippy

* Readme review nits

* Mention gather op in examples
2024-02-09 10:01:56 -05:00
9467a62744 Make output end up on all GPUs at the end (#2423)
* Make output end up on the cpu at the end

* Rework a bit

* Remove the CPU part

* Update to include a new util to copy tensors across devices

* Update test

* Update doc

* Update docstring

* Make False by default and change if community feedback says yes

* Apply suggestions from code review

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update default to False in doc and make a tip

* Update typing

* Defaults

* Explain

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2024-02-09 10:01:00 -05:00
86228e321d Update FSDP docs (#2430)
* Update fsdp.md

* address comments
2024-02-09 20:29:02 +05:30
06b138d845 Try again 2024-02-06 13:10:43 -05:00
0867c09318 torch-native pipeline parallelism for big models (#2345)
* Broken version

* Timing I would expect

* Working version!

* Use MethodType

* working test

* Tests

* Use no split module classes explicitly

* Put split_points in pipeline

* Store split points in hf_split_points

* fix case num_process=1

* Allow for dynamic batch padding (#2352)

* Allow for dynamic batch padding

* Fix test

* Update src/accelerate/inference.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Break early after the first valid bs is found

* Less slicy-dicy

* Test cv model

* Start, need to test

* Use dataloader-like logic

* Refactor to utils

* With tests

* Update the source

* Clean

* bs=1 case

* Add test

* add some failing test

* Almost working version

* Much cleaner implementation

* Use pad_input_tensor

* All tests passing!

* Do it at tracing too

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
Co-authored-by: Marc Sun <marc@huggingface.co>

* Rm literal

* Allow users to pass in max_memory

* Note about recursion

* Document, document, document

* Right import check

* Fix bug, add tests to multigpu runners

* Change default to None

* Start of docs

* Try again?

* Try again x2

* Trailing comma

* Move import

* Clean

* typehint

* typo

* From code review

* Use num_chunks

* Update tests/test_utils.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Bad copy/paste

* hf_split_points

---------

Co-authored-by: Marc Sun <marc@huggingface.co>
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2024-02-06 13:00:40 -05:00
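
The entry point added by #2345 is `prepare_pippy` in `accelerate.inference`; a minimal sketch of splitting a model across the available GPUs (treat the keyword names and the model choice as assumptions based on the examples migrated in #2424 above):

```
import torch
from transformers import AutoModelForSequenceClassification

from accelerate.inference import prepare_pippy

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
example_inputs = torch.randint(0, 30000, (2, 128))

# Trace and split the model into pipeline stages, one per available device,
# then run micro-batches through the stages.
model = prepare_pippy(model, split_points="auto", example_args=(example_inputs,))

with torch.no_grad():
    output = model(example_inputs)
```
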
0e1ee4b92d Use Ruff for formatting too (#2400)
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-02-06 08:18:18 -05:00
d8a64cb79d Unpin (#2418) 2024-02-06 08:00:33 -05:00
b703efdcc3 Adding Local SGD support for NPU (#2415) 2024-02-05 10:26:48 -05:00
68f54720dc Fix the size of int and bool type when computing module size (#2411)
* According to the code in set_module_tensor_to_device, uint, int and bool types
  won't be converted, so keep their original sizes; otherwise the module size
  will be underestimated.
2024-02-02 12:15:50 -05:00
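
An illustrative sketch of the sizing rule from #2411 (not the actual accelerate code): integer and bool tensors keep their native byte size instead of being counted at the target floating-point dtype:

```
import torch


def dtype_byte_size(dtype: torch.dtype) -> int:
    return torch.empty((), dtype=dtype).element_size()


def tensor_storage_bytes(tensor: torch.Tensor, target_dtype: torch.dtype) -> int:
    # uint/int/bool tensors are never cast by set_module_tensor_to_device,
    # so count them at their real dtype to avoid under-estimating module size.
    if not tensor.is_floating_point() and not tensor.is_complex():
        return tensor.numel() * tensor.element_size()
    return tensor.numel() * dtype_byte_size(target_dtype)
```
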
46f1391b79 Fix XPU inference (#2383)
It will complain that "Device xpu is not recognized, available devices are integers (for GPU/XPU),
'mps', 'cpu' and 'disk'", but you cannot simply pass 0 as the device either: 0 is treated as a CUDA
device, and torch then complains that it is not compiled with CUDA enabled.

You will need safetensors >= 0.4.2 if using safetensors files.
2024-02-02 11:08:22 -05:00
cd7ff5e137 Added activateEnviroment.sh to readme (#2409)
Clarification of the activateEnviroment.sh script in the examples when working on a cluster with Slurm & Environment Modules
2024-02-01 14:21:55 -05:00
f4b411f84b Fix CI due to pytest (#2408)
* New makefile

* Big modeling, oops
2024-02-01 12:28:10 -05:00
7ba64e632c Revert "[don't merge yet] unpin torch (#2406)" (#2407)
This reverts commit 8b770a7dabd957ae54f1abb028d1ce53db6cf4d4.
2024-02-01 10:13:15 -05:00
8b770a7dab [don't merge yet] unpin torch (#2406)
* unpin torch

* unpin torch

---------

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2024-02-01 09:56:16 -05:00
3d8b998fbb Address PIP-632 deprecation of distutils (#2388) 2024-01-31 05:54:23 -05:00
03365a3d17 Pin torch version (#2394) 2024-01-30 19:15:33 +00:00
7aafa25673 Fix batch_size sanity check logic for split_batches (#2344)
* fix

* lets raise an error

* Update error message

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

* fix error message style

---------

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
2024-01-27 19:33:48 +01:00
f88661b5d9 device agnostic cli/data_loader/grad_sync/kwargs_handlers/memory_utils testing (#2356)
* test_cli

* test_data_loader

* test_grad_sync

* test_kwargs_handlers

* test_memory_utils

* test_data_loader

* style check
2024-01-26 09:26:40 +01:00
581fabba48 Add adapter_only option to save_fsdp_model and load_fsdp_model to only save/load PEFT weights (#2321)
* Add adapter_only option to save_fsdp_model and load_fsdp_model

* Gate with adapter_only

* Black format

* Change unwrapping behavior

* Use extract_model_from_parallel for model unwrapping

* Fix quality

* Move functions to utils files

* Fix quality
2024-01-26 08:58:40 +01:00
e909eb34e2 modified big_modeling.py (#2376)
Co-authored-by: Andrei Panferov <blacksamorez@yandex-team.ru>
2024-01-25 14:16:52 +01:00
7644a02e6b add_hook_to_module and remove_hook_from_module compatibility with fx.GraphModule (#2369)
* fix add & remove hook with torch fx

* comment test
2024-01-25 10:53:53 +01:00
162a82164e device agnosic optimizer testing (#2363) 2024-01-23 10:12:22 +01:00
0d6a5fa8ee remove init_hook_kwargs (#2365) 2024-01-22 13:05:29 +01:00
53845d2596 Fix deepspeed issue (#2366) 2024-01-22 11:47:01 +01:00
5ec00da2be bugfix that doesnt let fp8recipekwarg use TE or MSAMP (#2355)
Signed-off-by: Sudhakar Singh <sudhakars@nvidia.com>
2024-01-19 09:24:51 -05:00
649e65b542 fix test (#2354)
Co-authored-by: Ubuntu <ubuntu@ip-172-31-18-207.ec2.internal>
2024-01-18 15:33:34 -05:00
14d7c3fca6 Fix block_size picking in megatron_lm_gpt_pretraining.py (#2342)
Only cap `block_size` to 1024 if `tokenizer.model_max_length` is actually greater than 1024.
2024-01-18 13:04:23 -05:00
c7d11d7e40 Fix mpi4py/failing deepspeed test issues (#2353)
* Try deepspeed after installing mpi4py

* Try again

* Just GPU needed

* Run slow deepspeed

* Fin

* Uncomment

* Uncomment x2
2024-01-18 13:01:44 -05:00
ec4f01a099 device agnostic test_accelerator/test_multigpu (#2343) 2024-01-18 09:03:20 -05:00
f5c01eeb63 FIX: add oneCCL environment variable for non-MPI launcher (accelerate launch) (#2339)
* add ccl env

* add local world size

* set env vars for deepspeed path

* adapt style
2024-01-18 09:01:34 -05:00
20ff458d80 Show DeepSpeed option when multi-XPU is selected in accelerate config (#2346)
* add XPU

* adapt style
2024-01-18 06:32:03 -05:00
6719cb6db3 Avoid duplicating memory for tied weights in dispatch_model, and in forward with offloading (#2330)
* wip

* fix

* add test

* cleanup

* style

* style & tests pass

* fix offload, submodules

* cleanup

* Update tests/test_big_modeling.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update tests/test_big_modeling.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* disk offloading do not reload tied parameters in memory

* remove outdated comment

---------

Co-authored-by: Your Name <you@example.com>
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2024-01-17 10:58:05 +01:00
31fd2b1ad6 Just 40* (#2332) 2024-01-12 15:34:50 -05:00
fce61a99ec Fixed typos in readme files of docs folder. (#2329) 2024-01-12 05:44:28 -05:00
6ec92cf06b Fix model memory issue (#2327)
* Potential fix

* REmove config part?
2024-01-11 13:47:59 -05:00
2a4037322f convert it back to dict (#2326) 2024-01-11 13:29:21 -05:00
f823404f69 Raise error when using batches of different sizes with dispatch_batches=True (#2325)
* raise err

* typo

* Apply suggestions from code review

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* remove from e

* fix

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-01-11 10:13:07 -05:00
ef2fe912c5 Update versions to dev 2024-01-10 14:43:29 -05:00
e3e9b87592 Fix infer_auto_device_map when tied weights share the same prefix name (#2324)
* fix auto device map with tied weights sharing a prefix name

Co-authored-by: Giuseppe Franco <giuseppefranco4@gmail.com>
Co-authored-by: Nick Fraser <icanlosh@gmail.com>

* precise comment

---------

Co-authored-by: Giuseppe Franco <giuseppefranco4@gmail.com>
Co-authored-by: Nick Fraser <icanlosh@gmail.com>
2024-01-10 15:57:37 +01:00
456afd92ce Params4bit added to bnb classes in set_module_tensor_to_device() (#2315) 2024-01-10 09:25:01 -05:00
0d2280dadc fix sanity check (#2310) 2024-01-09 14:11:51 -05:00
55d4a496dd Bring old seed technique back (#2319)
* Redo stage 1

* Fix rest of tests

* Expand doc

* Expand x2

* Expand x2
2024-01-09 14:10:57 -05:00
2a8829d9a5 Update test_deepspeed.py (#2323) 2024-01-10 00:15:19 +05:30
3969731ce8 Fix DeepSpeed related regression (#2304)
* Update accelerator.py

* Update test_performance.py

* add test
2024-01-09 15:08:12 +05:30
411aa58a77 DeepSpeed refactoring (#2313)
* DeepSpeed refactoring

Co-Authored-By: Stas Bekman <stas00@users.noreply.github.com>

* add tests

* Update test_deepspeed.py

* Update test_deepspeed.py

---------

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
2024-01-09 15:07:27 +05:30
4420ec641d Update accelerator.py (#2295) 2024-01-09 10:23:03 +05:30
2241725ad6 Update docs: Add warning for device_map=None for load_checkpoint_and_dispatch (#2308)
* Update docs: Add warning for device_map=None for load_checkpoint_and_dispatch

* Fix style errors.
2024-01-08 19:24:11 -05:00
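The warning matters because load_checkpoint_and_dispatch only places weights on devices when a device_map is given; a hedged sketch of the intended usage (the checkpoint path is a placeholder):

```python
from torch import nn
from accelerate import init_empty_weights, load_checkpoint_and_dispatch

with init_empty_weights():
    model = nn.Sequential(nn.Linear(1024, 1024), nn.Linear(1024, 1024))

# With device_map=None the checkpoint is loaded but nothing is dispatched,
# which is what the new warning points out; "auto" lets Accelerate decide.
model = load_checkpoint_and_dispatch(
    model,
    checkpoint="path/to/checkpoint.safetensors",  # placeholder
    device_map="auto",
)
```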
5cac878984 Add more missing items (#2309)
* Add more missing items

* Update docs/source/package_reference/utilities.md

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2024-01-08 14:58:23 -05:00
5d31423308 [deepspeed] documentation (#2296)
* Update dataclasses.py

* expand docs
2024-01-08 13:38:12 +05:30
2721387b98 make test_state_checkpointing device agnostic (#2290) 2024-01-05 12:47:58 -05:00
2cfa88bdf1 Fix breakpoint API in test_script.py on TPU. (#2263)
* Fix breakpoint API in test_script.py on TPU.

* only call set_trigger on the main process

* The test passed.

* add a comment

* Call mark_step after all_reduce so torch_xla runs the collective op like the torch.distributed path below, rather than waiting until the tensor is referenced again to run the pending operations.
2024-01-05 12:47:30 -05:00
102caf4fab bugfix in swapping init module weights (#2305)
Signed-off-by: Sudhakar Singh <sudhakars@nvidia.com>
2024-01-05 12:45:21 -05:00
07df5d268f add back dvclive to tests (#2280)
* add back dvclive

* dvclive tracker: handle and test step increments

* fix python<3.9 compatibility
2024-01-05 12:22:22 -05:00
68b3dbf666 Bump tj-actions/changed-files from 22.2 to 41 in /.github/workflows (#2300)
Bumps [tj-actions/changed-files](https://github.com/tj-actions/changed-files) from 22.2 to 41.
- [Release notes](https://github.com/tj-actions/changed-files/releases)
- [Changelog](https://github.com/tj-actions/changed-files/blob/main/HISTORY.md)
- [Commits](https://github.com/tj-actions/changed-files/compare/v22.2...v41)

---
updated-dependencies:
- dependency-name: tj-actions/changed-files
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-05 09:52:22 -05:00
403c0714d1 Update dataclasses.py (#2292) 2023-12-28 23:59:26 +05:30
848ed800fa Improve FSDP config usability (#2288)
* Improve FSDP config usability

* quality

* Update tests

* fix cmd arg

* fix

* update docs

* address comments
2023-12-27 20:41:29 +05:30
ad957ce556 Update deepspeed.md (#2286) 2023-12-27 15:05:42 +05:30
3db088f5d6 [doc] FSDP improvements (#2274)
* Update fsdp.md

* fix typo

* fix readability

* resolve the "static models" ambiguity

* rewrite section

* typo
2023-12-27 15:04:55 +05:30
d1abd59114 fix (#2218) 2023-12-26 14:21:08 +01:00
ceb7c699bc typo fix (#2276)
* typo

* style
2023-12-22 14:10:22 -05:00
c5baa055c0 Rm DVCLive as latest version causes failures (#2279) 2023-12-22 11:47:04 -05:00
349be97ccb Uninstall DVC in the Trainer tests (#2271)
* Test using my branch

* Uninstall DVCLive only
2023-12-22 08:04:16 -05:00
b60061dfd2 Solve CUDA issues (#2272)
* Solve CUDA issues

* import
2023-12-22 08:03:59 -05:00
b565a6c58a device agnostic deepspeed&fsdp testing (#2235)
* device agnostic deepspeed testing

* device agnostic fsdp testing

* fix failing deepspeed test

* make style

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-12-20 10:47:39 -05:00
a03c361ffb refactor deepspeed dataloader prepare logic (#2238)
* refactor deepspeed dataloader prepare logic

Co-Authored-By: Stas Bekman <stas00@users.noreply.github.com>

* address comments and fix issues

Co-Authored-By: Stas Bekman <stas00@users.noreply.github.com>

* further refactor

* add test

* rename test

---------

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
2023-12-19 12:45:14 +05:30
b0528392c8 Integrate MS-AMP Support for FP8 as a separate backend (#2232)
* Redo with new version

* Store

* Working version

* Separate for now

* Min diff

* check if available

* Better docstring

* Check for multiple models and optimizers

* Check for TE and MSAMP args separately

* String clarity

* Better docstring and types

* Quality

* Simplify a bunch for fp8

* Convert literals to type alias

* Better err

* Docs

* toc typo

* Apply suggestions from code review

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: Maria Khalusova <kafooster@gmail.com>

* Address doc nits

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
Co-authored-by: Maria Khalusova <kafooster@gmail.com>
2023-12-15 13:07:55 -05:00
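A hedged sketch of selecting the new backend, assuming the FP8RecipeKwargs handler with the backend/opt_level fields this PR describes:

```python
from accelerate import Accelerator
from accelerate.utils import FP8RecipeKwargs

# Ask for FP8 mixed precision and route it through MS-AMP instead of
# TransformerEngine; "O2" is one of MS-AMP's optimization levels.
handlers = [FP8RecipeKwargs(backend="msamp", opt_level="O2")]
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=handlers)

# model and optimizer are assumed to be defined elsewhere
model, optimizer = accelerator.prepare(model, optimizer)
```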
060678415a Support log_images for aim tracker (#2257)
* support `log_images` for aim tracker

* fix the potential kwargs issue for aim tracker's `log_images`

* remove ambiguous import statement

* use `aim` directly to avoid potential conflict
2023-12-15 11:25:53 -05:00
6b2d968897 [Big-Modeling] Harmonize device check to handle corner cases (#2254)
* harmonize device check

* make style

* oops

* oops again
2023-12-14 09:55:31 -05:00
ad3a5bc920 Fix MpDeviceLoaderWrapper not having attribute batch_sampler (#2242)
* Fix MpDeviceLoaderWrapper not having attribute batch_sampler

* fix style
2023-12-13 12:31:51 -05:00
eafcea07f6 fix "BFloat16 is not supported on MPS" (#2226) (#2227)
* fix "BFloat16 is not supported on MPS" (#2226)

* fix style

* add comments
2023-12-11 22:27:07 -05:00
eff30e2130 Fix nb tests (#2230)
* Fix nb tests

* Include bnb import

* pprint

* Try this time

* greater than zero

* Fix test

* bnb

* Clean
2023-12-11 09:58:12 -05:00
694f2e2c12 fix the failing test (#2237) 2023-12-11 16:15:23 +05:30
9964f90fd7 Add npu support to big model inference (#2222)
* Add npu support to big model inference

* make style

* add warning when using npu

* fix typo

* replace `.to(<num>)` with `.to("npu:<num>") when using `torch_npu`

* empty_cache

* fix
2023-12-08 11:58:32 -05:00
f86876d56d Make cleaning optional for device map (#2233)
* Make cleaning optional for device map

* Apply suggestions from code review

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Change order

* Nit

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2023-12-08 11:55:03 -05:00
0a37e2042e device agnostic testing (#2123)
* device agnostic testing

* initialize accelerate state before using the logging utility

* apply review suggestion

* apply review suggestion

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* use `hardware accelerator` to disambiguate

* remove redundant guard code

* rename variable name for consistency

* remove the redundant code

* fix ci-error

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-12-08 07:29:25 -05:00
54d670be41 [Docs] Add doc for cpu/disk offload (#2231)
* Add doc offload

* fix

* Update docs/source/concept_guides/big_model_inference.md

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-12-07 12:02:06 -05:00
339854a9a4 Update the 'Frameworks using Accelerate' section to include Amphion (#2225)
* Extend the frameworks using accelerate to include Amphion

* Update integration examples to include Amphion

* fix some typos
2023-12-07 11:28:41 -05:00
5296419df4 [data_loader] expand the error message (#2221)
* Update data_loader.py

* style
2023-12-07 10:38:39 -05:00
6a4857fec2 fix tqdm wrapper to print when job id ==0 (#2223) 2023-12-06 08:45:31 -05:00
9569150174 Fix dtype bug when offload_state_dict=True and dtype is specified (#2116)
* fix bug when using offload_state_dict

* fix wrong docstring & type hint

* fix & add test

* style

* fix device_map

* Update tests/test_modeling_utils.py

* fix style
2023-12-06 02:04:26 +09:00
8f871f41f1 Check notebook launcher for 3090+ (#2212)
* Include dist launch

* Better way

* Clean

* Just do it always

* Account for notebook launcher

* Use better gpu check

* Clean output

* Set logic
2023-12-05 11:21:44 -05:00
47e6c36155 Add allgather check for xpu (#2199)
* add  allgather check for xpu

* style fix

* fix test

* fix test and review
2023-12-05 11:21:07 -05:00
47c144570c Update docker images (#2213) 2023-12-05 11:07:18 -05:00
6a54d0781b MNT Delete the delete doc workflows (#2217)
They are failing because the corresponding GH action no longer exists.
Docs are now cleaned up automatically.

See discussion in #open-source-internal
2023-12-05 08:35:35 -05:00
0482548363 Update accelerator.py (#2206) 2023-12-02 00:09:59 -05:00
0e48b2358d allow deepspeed without distributed launcher (#2204) 2023-12-01 09:09:36 -05:00
3499cf25aa Assemble state dictionary for offloaded models (#2156)
* changed meta alignment device to cpu

* reverted alignment device and init weight map

* trace on values

* trace on values

* trace on values

* added offload model state dict save and test

* removed hook traces

* removed n

* Update src/accelerate/accelerator.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update src/accelerate/utils/modeling.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update src/accelerate/utils/modeling.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update src/accelerate/utils/modeling.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* suggestions and make style

* fixed circular import and make style

* debugged test

* Update src/accelerate/utils/modeling.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update src/accelerate/utils/modeling.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* function level import and make style

* Update src/accelerate/utils/modeling.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update tests/test_accelerator.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update tests/test_accelerator.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update src/accelerate/utils/modeling.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update src/accelerate/utils/modeling.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Update src/accelerate/utils/modeling.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* make style

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-11-30 09:18:28 -05:00
68d63ee15f unpins dvc (#2200) 2023-11-29 13:45:02 -05:00
151637920d Better error when device mismatches when calling gather() on CUDA (#2180)
* Better err

* Update src/accelerate/utils/operations.py

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2023-11-29 12:11:52 -05:00
0ba3e9bb50 Explicitly disable P2P using launch, and pick up in state if a user will face issues. (#2195)
* Disable P2P automatically

* Clean

* Right check

* Set better

* Check if just cuda

* Spacing

* replace str int for int as str
2023-11-29 12:10:01 -05:00
b04d36c75f Apply DVC warning to Accelerate (#2197)
* Use logger warn instead

* Warn

* Right import

* Clean up logs

* Apply suggestions from code review

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2023-11-28 15:02:20 -05:00
5fc1b230d3 Pin DVC (#2196)
* Remove dvc

* Pin instead
2023-11-28 13:34:11 -05:00
244122c736 fsdp refactoring (#2177)
* remove the redundant code post the torch 2.1 release

* make `use_orig_params=True` by default.

* fix `save_state` optimizer saving for fsdp and update the fsdp example

* quality

* fixing the utils and tests. Updating the docs

* bump up the minimum version for FSDP support.

* address comment

* rename fsdp model checkpointing variables
2023-11-24 09:31:57 +05:30
d25efa71ce Don't install comet 2023-11-21 09:54:33 -05:00
1aeb1e8997 Don't make integration tests wait 2023-11-21 08:41:57 -05:00
0e51680994 Right URL 2023-11-20 14:03:49 -05:00
7d430cf8de skorch 2023-11-20 13:30:23 -05:00
b8ca803f98 Don't make it wait 2023-11-20 13:11:08 -05:00
1243191ecb [Working again] New CI (#2173)
* Try merge tests

* Fix

* Checkout branch

* Fix pip install

* rebase

* Colons

* right one

* use master

* Rm

* Add needs

* Better clean

* always

* Forgot other

* test on AWS

* update all labels

* fix multi-gpu working directory

* limit to 2 GPU

* force run on kube

* move build docker image to new ci

* test build on CPU instance

* move build docker image release to new ci

* move scheduled slow tests to new ci

* move integration test to new ci

* Comments

* Right CPU tags

* Right machines

* PR comments

* Fix issues

* Some trailers

---------

Co-authored-by: Guillaume LEGENDRE <glegendre01@gmail.com>
2023-11-20 13:01:12 -05:00
2b25b8b3c5 Revert "New CI Runners (#2087)" (#2172)
This reverts commit ca300c0a04f843da2c5c8559e7d728926f7e8bf2.
2023-11-20 12:06:33 -05:00
ca300c0a04 New CI Runners (#2087)
* Try merge tests

* Fix

* Checkout branch

* Fix pip install

* rebase

* Colons

* right one

* use master

* Rm

* Add needs

* Better clean

* always

* Forgot other

* test on AWS

* update all labels

* fix multi-gpu working directory

* limit to 2 GPU

* force run on kube

* move build docker image to new ci

* test build on CPU instance

* move build docker image release to new ci

* move scheduled slow tests to new ci

* move integration test to new ci

* Comments

* Right CPU tags

* Right machines

* PR comments

---------

Co-authored-by: Guillaume LEGENDRE <glegendre01@gmail.com>
2023-11-20 11:41:57 -05:00
427ef8bd00 Updated torchrun instructions (#2096)
* Updated torchrun instructions

* Update examples/README.md

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Update examples/README.md

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Update examples/README.md

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Update examples/README.md

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Update README.md for torchrun instructions

* Added SLURM scripts and updated README

* Update examples/Slurm/submit-multinode.sh

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update examples/Slurm/submit-multiGPU.sh

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update examples/README.md

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update examples/README.md

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* final details

* modified argument parser

* modified slurm multigpu script

* modified multinode slurm script

* Added accelerate multinode issue

* Update examples/README.md

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* fixed readme command

* added --main_process_port specification to readme

* Revert "modified argument parser"

This reverts commit c3bef5cdd11a8a120602b5b7ce158f7400881d7f.

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-11-20 10:42:49 -05:00
35b0206353 Fix non-persistent buffer dispatch (#1941)
* offload only persistent buffers

* add tests and fix naming

* remove_non_persistant=True by default

* style

* style again

* fix hooks

* fix logic
2023-11-20 09:49:50 -05:00
fbe00d7897 Update dataclasses.py (#2168)
Bug fix: recompute_activation -> recompute_activations
2023-11-20 07:53:10 -05:00
62af737219 Add ZeRO++ to DeepSpeed usage docs (#2166)
* added zeropp to deepspeed doc file

* minor edit to clarify hpz size
2023-11-20 17:54:30 +05:30
cd51581248 Add warning for problematic libraries (#2151)
* Test bnb and fix nb launcher skip

* Fin

* Rm comment

* PR Review comments

* Just star
2023-11-17 09:24:20 -05:00
a5a7c039a0 Do not attempt to pad nested tensors (#2041) 2023-11-17 09:01:35 -05:00
cf745c936d check port availability only in main deepspeed/torchrun launcher (#2078)
* check port availability only in main deepspeed launcher

* check port availability only in main launcher for deepspeed/torchrun

* Update launch.py

add comments

---------

Co-authored-by: 聂靖入 <niejingru@bytedance.com>
2023-11-17 09:00:55 -05:00
99877f56d6 Adds dvclive tracker (#2139)
* dvclive tracker

* add dvclive to test_trackers

* fix dvclive tests

* add dvclive example and respond to other feedback

* fix dvclive tests

* fix quality
2023-11-17 08:49:13 -05:00
0f2686c8d3 Disable pypi for merge workflows + fix trainer tests (#2153)
* Disable workflows for PR + merge

* skorch

* Fix transformers tests too
2023-11-15 11:29:39 -05:00
a912b2ee09 Add examples to tests (#2131)
* Add examples to tests

* Try now

* Right name

* Right path

* Fin

* Too slow, just test on runner
2023-11-14 15:03:41 -05:00
e9fd72a613 Deprecated stuff (#2152) 2023-11-14 14:42:01 -05:00
8dedb140ef Add note about GradientState being in-sync with the dataloader by default (#2134)
* NOte about sync

* PR review comments
2023-11-14 11:53:57 -05:00
b55855a3d4 fix initial typos (#2150) 2023-11-14 09:44:30 -05:00
2b53a9089c [docs] troubleshooting guide (#2133)
* first take at troubleshooting guide

* logging moved to the troubleshooting guide

* TOC updates and gudie edits

* minor edits

* moved to tutorials

* feedback addressed

* batch size clarifications

* typo

* kernel, early stopping hanging, feedback
2023-11-13 17:58:56 -05:00
39d255b3d0 fixed a couple of broken links (#2147) 2023-11-13 12:26:10 -05:00
99dff1a167 Fix more tests (#2146)
* Fix some tests

* Contiguous

* Leave Marc alone ;)
2023-11-13 10:42:35 -05:00
a0a16e118a fix (#2145) 2023-11-13 10:32:15 -05:00
15458c5737 specify config file path on README (#2140)
* specify config file path

* set the path of generated config file for configuring and executing commands
2023-11-13 09:37:00 -05:00
fc0a43c3c1 Deal with shared memory scenarios (#2136)
* Deal with duplicates

* refactor

* Keep false for save

* Clean

* Better test for logs
2023-11-10 10:49:22 -05:00
8256a9c2d4 fix retie (#2137) 2023-11-10 10:12:23 -05:00
6727ac4394 Leave native save as False (#2138)
* Custom objects are not saved using safetensors

* Leave save as false
2023-11-09 13:39:11 -05:00
9674b40580 For testing transformers CI 2023-11-09 11:39:38 -05:00
0b0d9215a9 Raise error when saving with param on meta device (#2132)
* add error

* style

* Update src/accelerate/accelerator.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* style

* move before creating the directory

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-11-08 10:37:27 -05:00
e638b1e21a Make safetensors the default (#2120)
* Make safetensors default

* Rm location

* Actually flip flags

* Tests + update checkpointing

* Add to setup

* Start of tests with both safetensors and without

* Update tests to use both

* Remove from load state

* Explicit tip

* With suggestions

* Simplify, don't abstract. Need to bring back to deepspeed however

* Refactor to use consts

* Keep how it was

* Typo fix
2023-11-08 09:07:22 -05:00
76de60dbdc Fix import error when torch>=2.0.1 and torch.distributed is disabled (#2121) 2023-11-08 17:38:32 +05:30
JQ
217e1a248c Sync states for npu fsdp (#2113) 2023-11-08 14:13:54 +05:30
5e0eb0d750 add DeepSpeed support for NPU (#2054) 2023-11-08 13:01:30 +05:30
183c9dd3ce Allow for ACCELERATE_SEED env var (#2126)
* Manual seeds

* None

* Add to docs

* Document

* Use torch seed for simplicity

* Rm from doc

* Better version
2023-11-07 12:05:42 -05:00
4f100318f4 Add explicit error if empty batch received (#2115)
* Add explicit error if empty batch received

* Move error check to cover all empty iterables
2023-11-03 14:06:12 -04:00
fa6f43033c Update README.md (#2119) 2023-11-03 12:57:46 -04:00
820fc4ca7a Make SeedableRandomSampler the default always (#2117)
* Fix tests

* Simplify logic a ~lot~
2023-11-03 08:28:42 -04:00
bd72a5f1a8 Revert "Always use SeedableRandomSampler (#2110)"
This reverts commit d8e12854098988d2162948c9a853081fcf00b73f.
2023-11-01 15:20:25 -04:00
55088a2cf5 Revert "Fix issue with tests (#2111)"
This reverts commit c2d8e245e9fa603b29986cb3b677cb0d44b41f6a.
2023-11-01 15:20:21 -04:00
c2d8e245e9 Fix issue with tests (#2111) 2023-11-01 15:03:59 -04:00
d8e1285409 Always use SeedableRandomSampler (#2110)
* Fix tests fully

* Change comment

* Further comments

* Clean

* CPU specific

* Just use device

* Rewrite differently

* Rewrite
2023-11-01 13:39:53 -04:00
5b3f3b99d6 fix warning (#2105) 2023-10-31 15:10:06 -04:00
2935057606 Fix memory leak in fp8 causing OOM (and potentially 3x vRAM usage) (#2089)
* Fix memory leak

* Change when model is moved to cuda

* Add from PR

* Remove link

* Undo original forward link
2023-10-31 09:34:53 -04:00
bb6759d634 fixed ip address typo (#2099) 2023-10-31 09:10:11 -04:00
55747318a0 Fix batch sampler (#2097)
* Fix batch sampler

* Clean

* Fix tests

* Fix

* Better comment

* Base case
2023-10-30 09:57:28 -04:00
217faafe08 Fix flag typo (#2090) 2023-10-27 08:46:13 -04:00
5440387529 CRITICAL: fix failing ci (#2088) 2023-10-26 16:12:58 -04:00
e1fab05ce7 Add ClearML tracker (#2034)
* add clearml tracker

* fix style in tracking.py

* run ruff --fix

* run ruff fix on src/accelerate/utils/__init__.py as well

* properly run make style

* add tests

* modify code based on code review

* changes based on code review

* quote data_frame

* fix docs

* remove pandas req in log_table

* style changes

* add tracker to docs
2023-10-26 12:13:28 -04:00
c3ec7ff5a9 Add logs offloading (#2075)
* add logs

* fix comm

* rework comment
2023-10-24 16:05:23 -04:00
d8535921ad v0.25.0.dev 2023-10-24 13:12:40 -04:00
eb8c535c17 Fix (#2080) 2023-10-24 12:55:06 -04:00
b7686ccb44 Warn when kernel version is too low on Linux (#2077)
* Warn when kernel version is too low on Linux

See #1929

On Linux with kernel version < 5.5, issues with hanging processes have
been reported. It is not clear how to fix the issue, so instead we warn
the user that they may encounter problems.

Notes

As logging requires an initialized PartialState, the actual check
happens at the end of Accelerator.__init__.

In a similar vein, the docstring of get_logger has been adjusted to
first initialize the Accelerator, as it is not working as currently
shown.

* Reviewer comment: small change to docstring
2023-10-24 12:43:55 -04:00
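The condition being warned about can be reproduced by hand; an illustrative stand-alone check (not Accelerate's actual implementation):

```python
import platform
import warnings

def warn_if_old_linux_kernel(minimum=(5, 5)):
    # Mirror the idea of the new warning: hanging processes have been
    # reported on Linux kernels older than 5.5.
    if platform.system() != "Linux":
        return
    release = platform.release()  # e.g. "5.4.0-162-generic"
    version = tuple(int(part) for part in release.split("-")[0].split(".")[:2])
    if version < minimum:
        warnings.warn(
            f"Linux kernel {release} is below {minimum[0]}.{minimum[1]}; "
            "you may encounter hanging processes."
        )

warn_if_old_linux_kernel()
```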
f3229872bc fix docstring typo (#2072) 2023-10-24 12:42:59 -04:00
7843286f2e Allow for samplers to be seedable and reproducible (#2057)
* bookmark

* Works!

* Working!

* Fully working now

* Cover dataset

* Needed for dispatch

* Check both

* Bring back pop, fix hang

* Fully working

* Change back to epoch

* Adjust for new methods

* Clean

* Fix tests

* Avoid circular import

* Clean

* Fix test

* Comment

* Add a comment

* Comment

* Use yield from instead
2023-10-24 06:41:06 -04:00
11e2e99cfc Let iterable dataset shard have a len (#2066) 2023-10-23 08:12:26 -04:00
07e745f1c4 DOC: Fix broken link to designing a device map (#2073)
There is a typo in the link.
2023-10-23 07:42:24 -04:00
c7c99a30ea fix: remove useless token (#2069) 2023-10-19 14:29:55 +02:00
8f45a2eae8 remove unused constants (#2045) 2023-10-18 14:24:01 -07:00
9fd64b7ea9 Fix the error when the "train_batch_size" is absent in DeepSpeed config (#2060)
* Update dataclasses.py

* Update src/accelerate/utils/dataclasses.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-10-16 15:13:20 -07:00
5be16ad90b Add space to docs (#2055)
* Add space to docs

* Phrasing
2023-10-16 06:33:12 -07:00
dab62832de Reset state to pass failing test 2023-10-13 13:13:41 -04:00
caa9f9bcbb Fix stalebot (#2052) 2023-10-13 12:20:37 -04:00
943efedb88 fix docstring (#2053) 2023-10-13 07:42:26 -04:00
50acb0c2ec Let drop_last modify gather_for_metrics (#2048)
* Drop last

* Test

* Uncomment out tests

* Update src/accelerate/test_utils/scripts/external_deps/test_metrics.py

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Document better

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2023-10-12 14:27:06 -04:00
e6d96e5f70 Make fsdp ram efficient loading optional (#2037)
* make fsdp ram efficient loading optional

* Add documentation

* address comments

* address comments

* address comments

* nit
2023-10-12 20:44:09 +05:30
1dfb6e9304 Fix integration CI (#2047)
* Different method

* Should fix version
2023-10-12 07:40:11 -04:00
4bef6bc511 Safely end training even if trackers weren't initialized (#1994)
* Update accelerator.py

* init trackers on class init

* dont need getattr because trackers exists
2023-10-11 08:24:04 -04:00
73640d0463 Reduce memory by using all_gather_into_tensor (#1968)
* all_gather_into_tensor

* Cleanup

* Reduce memory on non-gloo

* Fin

* Check for backend too on cpu

* CPU comment

* Change scope for performance

* Bring back zeros after remembering why

* Add comment

* Add comment

* Use empty

* Comment
2023-10-10 10:10:32 -04:00
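The memory saving comes from gathering into one preallocated buffer instead of a Python list of per-rank copies; a minimal sketch assuming an initialized NCCL/GPU process group (gloo does not support this collective, hence the backend checks above):

```python
import torch
import torch.distributed as dist

def gather_all(tensor: torch.Tensor) -> torch.Tensor:
    """Gather `tensor` from every rank into a single flat buffer."""
    world_size = dist.get_world_size()
    output = torch.empty(
        world_size * tensor.numel(), dtype=tensor.dtype, device=tensor.device
    )
    dist.all_gather_into_tensor(output, tensor.contiguous())
    return output.reshape(world_size, *tensor.shape)
```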
7a1159143e Unpin transformers (#2044) 2023-10-10 05:33:22 -04:00
cbb0b82fa2 Fix DeepSpeed version to <0.11 (#2043)
This is a temporary fix to prevent a DeepSpeed installation error that
was introduced in DeepSpeed 0.11.0.
2023-10-09 10:47:33 -04:00
5ae6111180 Allow FSDP to use with torch.autocast for bfloat16 mixed precision (#2033)
* Ignore native_amp when FSDP is used

* Rollback condition

* Fix mixed precision of bfloat16 for FSDP
2023-10-06 18:26:04 +05:30
230a5f541b Fix save on each node (#2036) 2023-10-06 05:18:02 -04:00
956114ac92 Enable shared file system with save and save_state via ProjectConfiguration (#1953)
* Support shared storage, start

* Pass use_local_node_storage

* Reverse and different namings

* Not global only

* Addres comments

* Clean

* Apply suggestions from code review

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>

* Save on each node as explicit arg

* More explicit

---------

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
2023-10-03 12:04:01 -04:00
76ee7f211d update fsdp docs (#2026) 2023-10-03 17:40:23 +05:30
420743af22 Sync states for xpu fsdp (#2005)
* sync states for xpu fsdp

* Update src/accelerate/utils/dataclasses.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-10-02 17:16:36 -04:00
206ab491ed update torch_dynamo backends (#1992)
* update torch_dynamo choice

* fix test
2023-10-02 14:31:44 -04:00
936d2f4f5c Add basic documentation for multi node training (#1988)
* initial commit for adding multinode training doc

* removed stray changes

* fix formatting issue and switch to bulleted list

* Update docs/source/basic_tutorials/launch.md

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update docs/source/basic_tutorials/launch.md

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* added link to new blog post

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-10-02 14:19:59 -04:00
da98d601b5 [docs] Quick tour refactor (#2008)
* quick tour refactor, moved internal mechanism into a conceptual guide

* Apply suggestions from code review

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Apply suggestions from code review

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-10-02 13:19:41 -04:00
658492fb41 fix resuming from checkpoint (#2001) 2023-09-29 13:12:41 +05:30
80da9cfb09 FIX Automatic checkpoint path inference issue (#1989)
Resolves #1983

Fixes an issue where the checkpoint directory would be incorrectly set while
loading when using relative paths.
2023-09-19 14:20:51 +02:00
03deec2a01 Fix model copy after dispatch_model (#1971)
* Fix model copy after dispatch_model

* Minor hook update to fix failing test

* address reviewer comments
2023-09-19 06:05:30 -04:00
629d02c844 Update big_modeling.md (#1976) 2023-09-18 10:11:57 -04:00
a87c95da9e Dev version 2023-09-14 15:24:15 -04:00
bbcdbbaffc Remove checkpoints only on main process (#1974)
* Remove checkpoints only on main process

shutil.rmtree might throw errors if called from multiple processes, so only call it on the main process.

* Apply style
2023-09-14 08:31:55 -04:00
ce53708e0e fix for xpu (#1972) 2023-09-14 08:18:20 -04:00
53209ce6d8 update FSDP and DeepSpeed docs (#1973) 2023-09-14 08:18:11 -04:00
bd083ae1bf Add force_hooks to dispatch_model (#1969)
* Add force_hooks to dispatch_model

* Minor documentation rephrasing
2023-09-14 07:57:19 -04:00
e5452a618d fix torch compile with FSDP (#1919)
* fix torch compile with FSDP

* Update accelerator.py

* fixes

* resolve comments

* fix bug

* address comments

* addressing comments

* address comments
2023-09-14 13:19:59 +05:30
40a73e0ae0 Introduce breakpoint API (#1940)
* early stopping

* Fix tests

* Works on multi-gpu, uncomment

* Rm reset

* Check for >=1

* equal

* Trigger

* Fix test

* Update docs/source/concept_guides/deferring_execution.md

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Explicit example loop

* Set to zero, not None

* rename test

* Check again to ensure it's been reset

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2023-09-13 12:42:38 -04:00
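In short, one process flips a flag and every process checks it, so they all leave the loop on the same step; a hedged sketch of the API added here:

```python
from accelerate import Accelerator

accelerator = Accelerator()

for step in range(1_000):
    # ... run a training step here ...
    stop_now = accelerator.is_main_process and step == 10  # illustrative condition
    if stop_now:
        accelerator.set_trigger()    # only this process knows about the condition
    if accelerator.check_trigger():  # but every process sees the trigger and stops
        break
```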
937e08ce75 add bf16 mixed precision support for NPU (#1949)
* add bf16 mixed precision support for NPU

* Explicitly register the NPU backend to PyTorch via `import torch_npu`

---------

Co-authored-by: statelesshz <jihuazhong1@huawei.com>
2023-09-13 09:56:24 -04:00
5d558f21e2 [WIP] Implementing gather_for_metrics with dedup for non tensor objects (#1937)
* [feat] implementing gather_for_metrics for objects

* [lint] make style result

* [docs] improve fn docs gather for metrics

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* [docs] update args description gather for metrics

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* [refactor] gather for metrics for non tensor obj

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* [fix] renaming tensor to data (was not defined and it is not just a tensor)

* [fix] else state

* [test] gather for metrics with non tensor objects

* [lint] make style result

* Update src/accelerate/accelerator.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update src/accelerate/accelerator.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* [test] removing useless assertion

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* [test] add running on main

* [lint] style autoformat

---------

Co-authored-by: Lorenzobattistela <lorenzobattistela@gmail.com>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-09-12 12:17:43 -04:00
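For reference, the usual pattern around gather_for_metrics, which after this PR also handles non-tensor objects; a hedged sketch assuming model, eval_dataloader, and accelerator have already been prepared and that the model returns logits:

```python
import torch

all_preds, all_labels = [], []
for batch in eval_dataloader:
    with torch.no_grad():
        logits = model(**batch).logits
    preds = logits.argmax(dim=-1)
    # gather_for_metrics gathers across processes and drops the samples that
    # were duplicated to pad the last batch.
    preds, labels = accelerator.gather_for_metrics((preds, batch["labels"]))
    all_preds.append(preds)
    all_labels.append(labels)
```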
d9b5ce60b3 Rm strtobool (#1964)
* Rm strtobool

* Reorganize

* c/p

* Signature
2023-09-12 11:21:09 -04:00
61a87ab946 finish all todos (#1957) 2023-09-12 17:13:00 +02:00
5dec654aae Better guards for slow imports (#1963)
* Start

* Deepspeed

* Clean
2023-09-12 10:54:19 -04:00
b2a950205e FIX: patch_environment restores pre-existing environment variables when finished (#1960)
Resolves #1832

This fixes a bug in patch_environment that currently leads to
pre-existing items being deleted completely from the environment
variables, when they were temporarily modified by patch_environment,
once the context has finished. Now, the env vars are restored to their
previous values.
2023-09-12 15:39:54 +02:00
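The fixed contract is easiest to see in a tiny example (variable names are placeholders):

```python
import os

from accelerate.utils import patch_environment

os.environ["MY_FLAG"] = "original"

with patch_environment(MY_FLAG="temporary", EXTRA_VAR="1"):
    assert os.environ["MY_FLAG"] == "temporary"
    assert os.environ["EXTRA_VAR"] == "1"

assert os.environ["MY_FLAG"] == "original"   # restored instead of deleted (the fix)
assert "EXTRA_VAR" not in os.environ         # variables added by the patch are removed
```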
ca7b853abc fix safetensor saving (#1954)
* fix safetensor saving

* fix test

* fix

* better save

* modify as keyword arg
2023-09-12 09:14:41 -04:00
6832aa51a6 move tensorflow dep (#1959) 2023-09-12 06:19:26 -04:00
4a1d5b1fb6 Fix docs (#1951)
Signed-off-by: Peng Gao <peng.gao.dut@gmail.com>
2023-09-11 10:40:14 -04:00
82369c8314 fix the fsdp docs (#1947) 2023-09-11 15:30:09 +05:30
cdb001ca5f Enhance multi-node notebook launching (#1913)
* Introduce new arguments: master_addr, node_rank, and num_nodes.
  Relocate these arguments to the end of the notebook_launcher
  function for compatibility.

* Set defaults for NPROC and NODE_RANK environment variables in the
  PrepareForLaunch function to ensure compatibility.

* Thoroughly document the process and usage guidelines for
  multi-node launching.
2023-09-08 07:53:21 -04:00
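A hedged sketch of a multi-node launch using the arguments introduced here; the address, ranks, and process counts are placeholders:

```python
from accelerate import notebook_launcher

def training_function():
    ...  # build the Accelerator, model and dataloaders, then train

notebook_launcher(
    training_function,
    num_processes=8,           # placeholder process count
    num_nodes=2,               # placeholder
    node_rank=0,               # 0 on the main node, 1..num_nodes-1 elsewhere
    master_addr="10.0.0.1",    # placeholder address of the main node
)
```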
c72e22419b Bring back pypi to runners (#1939)
* Bring back pypi

* Flipflop
2023-09-08 07:51:17 -04:00
c872c3086f clean num devices (#1936) 2023-09-07 10:14:52 -04:00
cec5ae8e4d Check for invalid keys (#1935)
* Check for invalid keys

* Revert else

* Better error

* Weird space
2023-09-06 12:22:22 -04:00
cd570b2e2a reduce gradient first for XLA when unscaling the gradients in mixed precision training with AMP. (#1926)
* reduce gradient first for XLA when unscaling the gradients in mixed
precision training with AMP.

* Apply suggestions from code review

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* update accelerator.reduce and accelerate.utils.operations.reduce

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-09-06 11:00:24 -04:00
727d624322 Add support for deepspeed optimizer and custom scheduler (#1909)
* support for deepspeed optimizer and custom scheduler

* don't throw the error

* Add tests

* fix the tests

* fix the code quality

* Update tests/deepspeed/test_deepspeed.py

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* fix the docstrings

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2023-09-05 22:30:46 +05:30
afed2f75f8 Expose auto in dataclass (#1914)
* Auto

* Update str
2023-09-05 09:23:10 -04:00
739b135f83 More CI fun - run all test parts always (#1916)
* Run always

* Populate
2023-08-31 12:32:28 -04:00
4a9dd1cd82 support logging with mlflow in case of mlflow-skinny installed (#1874)
* - support the case where mlflow-skinny is installed and log_with is set to mlflow.

* code beautification.
2023-08-31 12:11:02 -04:00
feab09908d improve help info when run accelerate config on npu (#1895) 2023-08-31 12:02:59 -04:00
e0baaa8df0 fix: add debug argument to sagemaker configuration (#1904)
* fix: add debug argument to sagemaker configuration #1903

* ignore:  address quality style

Signed-off-by: maximegmd <672982+maximegmd@users.noreply.github.com>

* tweak: ask if user wants debug information in SageMaker distributed operations

---------

Signed-off-by: maximegmd <672982+maximegmd@users.noreply.github.com>
2023-08-31 11:52:46 -04:00
1b998f1695 Use hosted CI runners for building docker images (#1915)
* New technique

* needs

* explicit all

* Volume prune not going

* Skip volume

* versions

* Avoid checkout perhaps?

* Working dir

* Don't include dot-slash?

* Accelerate prefix?

* Working directory?

* Context?

* other workingdir

* Faster iteration

* Right tag

* Full

* Release

* GPU
2023-08-31 11:28:48 -04:00
7befe580c2 Fix docker images (#1910)
* With driver

* Remove deps

* No bitsandbytes

* Try with raw push

* We can keep old docker images

* Also include release

* Skorch uses master

* Right tag
2023-08-31 07:14:38 -04:00
cd3d3a37f9 Skip pypi transformers until release (#1911)
* Skip release

* TODO comment
2023-08-31 07:14:06 -04:00
81fffe51fd deepspeed grad_acc_steps fixes (#1901)
* deepspeed grad_acc_steps fixes

* fix tests
2023-08-31 16:33:34 +05:30
0b5ac0253e Add PR template (#1906)
* Add PR template

* Sourab is not a fashion company
2023-08-30 03:19:15 -04:00
a16b843a1b deepspeed for ccl xpu (#1827) 2023-08-29 17:36:29 +05:30
bc86a9379f Solve at least one failing test (#1898) 2023-08-29 10:57:56 +05:30
87a096f95e Add FSDP activation checkpointing feature (#1891)
* add FSDP activation checkpointing feature

* fix formatting issue

* fix code formatting issue
2023-08-29 10:56:08 +05:30
44adf1e14f Fix nb launcher test (#1899)
* Try with raw subprocess

* Skip test for now

* Clean
2023-08-28 14:44:18 -04:00
ce870e1ce1 Final nits on model util (#1896)
* Nits

* Annoying markdown tables

* Try with one

* I give, try raw md

* Moot

* W/o code tick

* Markdown
2023-08-28 09:47:44 -04:00
1ace672d3e Update dataclasses.py (#1894) 2023-08-28 17:40:14 +05:30
e2ae254008 Add hub as core dep (#1885)
* Add hub as dep

* Missing refs
2023-08-25 10:05:58 -04:00
0fa291e707 Add doc on model memory usage (#1887)
* Doc

* Note on meta

* Phrase

* Apply suggestions from code review

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Clarity nit

* Nits

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2023-08-25 10:03:39 -04:00
ba6f11ec3e Enable a token to be used (#1886)
* Enable based on passing the token

* Doc more
2023-08-24 15:43:37 -04:00
430ee9df6b Update with new url (#1884) 2023-08-24 12:52:09 -04:00
409a9df0a4 Introduce model memory estimator (#1876)
* Estimator

* Right err

* Fixup tests

* trust remote code

* Print output for debugging purposes

* trust_remote_code

* Address some comments

* change doc to req arg

* Properly check for _no_split_modules in transformer models

* Note on transformer models

* Check/handle pentabytes

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Tests are passing locally again, better handle for no_split

* Adjust setup?

* Let's see if the cleaner version works

* Refactor and clean up for testing

* Specify in comments

* Better error handling

* A million tests later

* More tests + err handling

* Require hub

* More with remote code

* Clean up

* Add a test for no_split

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Docstring

* Address some comments

* rm einops

* Let it err out

* Adjust errs

* Tests

* Reduce test repeats

* Clean up borders

* Tip on 20%

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-08-24 12:12:01 -04:00
acad5bae5c Enable power users to bypass device_map="auto" training block (#1881)
* Enable TP greedy env var

* Right env setting

* Use true, not false

* Design nit

* ACCELERATE_BYPASS_DEVICE_MAP
2023-08-24 10:27:59 -04:00
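A hedged sketch of the escape hatch named in this PR; set the variable before building the Accelerator, and only if you know why the device_map="auto" training block exists:

```python
import os

# Opt out of the block that normally prevents training a model dispatched
# with device_map="auto" under data parallelism.
os.environ["ACCELERATE_BYPASS_DEVICE_MAP"] = "true"

from accelerate import Accelerator

accelerator = Accelerator()
```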
81b19c4094 fix detach_hook (#1880) 2023-08-23 15:15:27 -04:00
3e97a9172b Update release instructions (#1877)
* Update release instructions

* Update setup.py

Co-authored-by: Lysandre Debut <lysandre.debut@reseau.eseo.fr>

---------

Co-authored-by: Lysandre Debut <lysandre.debut@reseau.eseo.fr>
2023-08-23 16:04:09 +02:00
812719644d v0.23.0.dev0 2023-08-23 02:25:56 -04:00
16e5113f8a Improve big model inference docs (#1872)
* Start of rework

* Refactor doc

* Got too used to quarto

* They're top level

* md link

* phrasing

* Remove indent
2023-08-22 07:11:12 -04:00
3122a6164d Include a note to the forums in the bug report (#1871)
* gs

* New version
2023-08-21 11:48:39 -04:00
c8682ae74c support custom slice function in DataLoaderDispatcher (#1846)
* save progress

* work on suggestions

* work on some suggestions

* last suggestion

* oops, mini bug
2023-08-21 17:16:43 +02:00
0768905f77 remove casting to FP32 when saving state dict (#1868)
* remove casting to FP32 when saving state dict

* update docs.
2023-08-21 19:08:29 +05:30
d087be0156 add env variable for init_on_device (#1852) 2023-08-18 23:20:50 +02:00
41caaa56e1 Update fsdp_with_peak_mem_tracking.py (#1856) 2023-08-18 13:34:31 +05:30
21d127334e fix dispatch (#1855) 2023-08-17 12:23:50 -04:00
3cf7dee576 Loading logic safetensors (#1853)
* add logic in loading for safetensors

* fix style
2023-08-17 10:46:49 -04:00
64c586f5eb support for ram efficient loading of model with FSDP (#1777)
* support for ram efficient loading of model with FSDP

* with default behaviour of efficient loading when using FSDP, `sync_module_states` needs to be `True`

* fixes

* Update accelerator.py

* Update dataclasses.py
2023-08-16 15:23:20 +05:30
0e714f5ba4 Fix the noneffective parameter: gpu_ids (#1850)
Co-authored-by: Devymex <wangyumeng02@megvii.com>
2023-08-16 09:27:13 +02:00
92f23e123d Change CUDA check (#1833)
* Move into check-device

* Use proper solutiona nd write test

* Move test

* Avoid circular import

* Remove patchenv alltogether

* New version

* Better way, run a verification test

* Final working version

* Debug mode

* doc

* Just debug

* Doc

* print
2023-08-16 03:21:30 -04:00
f67e11afd7 Fix verify_device_map (#1842)
* make verify_device_map return True only if the device map has more than 1 element

* Fix style and comment

* fix style
2023-08-14 11:44:41 -04:00
6458058559 FIX: Bug with unwrap_model and keep_fp32_wrapper=False (#1838)
Using accelerator.unwrap_model(model, keep_fp32_wrapper=False) results
in a defective forward method. This bug was (probably) introduced in
PR #872.

Wrapping the method in MethodType (as elsewhere in code) resolves the
issue.
2023-08-14 10:50:38 +02:00
4d13e4e474 fix bug in dev properties for ipex (#1834) 2023-08-11 09:15:15 +02:00
058a3546ea use device as context manager for init_on_device (#1826) 2023-08-10 09:35:00 +02:00
98ecab2083 Minor idiomatic change. (#1829) 2023-08-10 09:06:26 +02:00
b30a349078 Better test (#1825)
* Better test

* Test

* Comment
2023-08-09 02:22:31 -04:00
7cb19ae613 Expose a bit of args/docstring fixup (#1824)
* Expose a bit

* docstring
2023-08-08 11:26:50 -04:00
39897a0662 Update docs and docstrings to match load_and_quantize_model arg (#1822)
* Update quantization.md with correct bnb_quantization_config args

* Update load_and_quantize_model docstring to match bnb_quantization_config arg
2023-08-08 10:20:03 -04:00
aa71bb815a Fix bnb import (#1813)
* Fix import

* Fix bnb

* Comment
2023-08-08 10:17:27 -04:00
f43a08a9c5 add warning when using to and cuda (#1790)
* add warning when using to and cuda

* more warning

* style

* change warning msg

* fix typo

* better check

* Update src/accelerate/big_modeling.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-08-08 10:08:50 -04:00
b42c65b729 Improve docs on grad accumulation (#1817)
* Improve docs on grad accumulation

* Update docs/source/usage_guides/gradient_accumulation.md

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* fix

* address feedback

* Update docs/source/usage_guides/gradient_accumulation.md

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-08-07 17:28:01 +02:00
7bad726935 Bibtex (#1820) 2023-08-07 11:21:40 -04:00
29ff7c3911 Expand device-map warning (#1819)
* Propagate to general prepare

* Move test to general tester

* Keep in model

* Keep in multi-gpu

* Apply suggestions from code review

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2023-08-07 11:04:29 -04:00
30eff605df Typo fix (#1812) 2023-08-04 11:18:14 -04:00
fc95663e03 Detect device map auto and raise a helpful error (#1810) 2023-08-04 10:02:27 -04:00
49cb83a423 More specific logging in gather_for_metrics (#1784)
* Start on testing behavior

* Add test to capture current behavior

* Cleanup test; add length to DummyIterableDataset

* Remove wip test from test_dataloader.py

* Only check on remainder state if we're at the end of a dataloader

* Cleanup

* Fix style

* Move test to test_metrics

* Remove 2 num_process assertion so that we test on single-GPU as well,
why not

* Use `isinstance()` instead of `type()` in test_metrics

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-08-03 12:38:58 -04:00
d2b159ea1a Fix pytest import (#1808)
* pytest

* Fully rm pytest

* Doc

* Works
2023-08-03 11:00:16 -04:00
40056c69d1 Add FSDP for NPU (#1806)
* Add FSDP for NPU

* enable fsdp's test case for npu&xpu
2023-08-03 11:35:29 +02:00
505b5be044 Add FSDP for XPU (#1803)
* fsdp for xpu

* add fsdp xpu
2023-08-02 15:34:55 -04:00
a6333f2e7c Changed allow_val_change param (#1796) 2023-08-02 13:42:11 -04:00
YQ
0dec477985 add support of float memory size in convert_file_size_to_int (#1799)
* support float memory size

* add unit test for
2023-07-31 15:43:19 -04:00
YQ
a24189db35 reserve 10% GPU in get_balanced_memory to avoid OOM (#1798)
* reserve 10% GPU to avoid OOM

* update warning message

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* use logger.info

* clean up comment

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-07-31 15:42:55 -04:00
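For context, get_balanced_memory produces the per-device budget that infer_auto_device_map consumes, and after this change it keeps headroom free on each GPU; a minimal sketch with an illustrative model:

```python
from torch import nn
from accelerate import infer_auto_device_map, init_empty_weights
from accelerate.utils import get_balanced_memory

with init_empty_weights():
    model = nn.Sequential(*[nn.Linear(2048, 2048) for _ in range(8)])

# On a multi-GPU machine this returns something like {0: ..., 1: ..., "cpu": ...},
# now with roughly 10% reserved on each GPU to avoid OOM during inference.
max_memory = get_balanced_memory(model)
device_map = infer_auto_device_map(model, max_memory=max_memory)
```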
a9aee447ee Fix import error when torch>=2.0.1 and torch.distributed is disabled (#1800) 2023-07-31 11:27:45 -04:00
d5894ab499 Set ipex default (#1776) 2023-07-26 12:20:13 -04:00
6f14928e28 simplify and correct the deepspeed example (#1775)
* simplify and correct the deepspeed example

* Update deepspeed_with_config_support.py

* 🐛 fix
2023-07-26 17:59:13 +05:30
777334a803 [FSDP] Fix load_fsdp_optimizer (#1755) 2023-07-26 14:23:01 +05:30
c3d82d24e2 Contiguous on gather (#1771)
* For testing

* Contiguous
2023-07-25 13:44:08 -04:00
6e70e79e4e Support wrapping multiple models in Accelerator.accumulate() (#1708)
* Support wrapping multiple models in Accelerator.accumulate()

* Fix style.

* Rename variable

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update doc.

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update variable name.

---------

Co-authored-by: YU Xinyuan <yuxinyuan02@corp.netease.com>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-07-25 12:22:36 -04:00
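A hedged sketch of the new usage, assuming two already-prepared models (e.g. a GAN-style setup), a prepared optimizer and dataloader, and a user-defined compute_loss:

```python
for batch in dataloader:
    # Passing every model involved lets Accelerate skip gradient syncing for
    # all of them on the non-final micro-steps of an accumulation window.
    with accelerator.accumulate(generator, discriminator):
        loss = compute_loss(generator, discriminator, batch)
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
```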
b3fc3c9067 Introduce an experimental distributed operations framework (#1756)
* First version

* As decorator

* Better err

* Limit

* Partial state

* More work

* Tests + config

* Debug mode

* Flag

* Rm references to debug mode, debug

* Tests

* Docs

* Nit

* Disable debug in config

* Support dict
2023-07-25 11:39:31 -04:00
a9d79163e5 Change is_aim_available() function to not match aim >= 4.0.0 (#1769)
* Change is_aim_available() function to not match aim >= 4.0.0

* Use compare_versions utility function in is_aim_available
2023-07-25 09:07:06 -04:00
0b36ca6e64 Fix offload on disk when executing on CPU (#1762)
* Fix offload on disk when executing on CPU

* Actually refine the error instead
2023-07-24 11:09:29 -04:00
f3b7f9cf25 Fix error when max_memory argument is in unexpected order (#1759)
* sort the user-provided max_memory keys in gpu-cpu-disk order

* fixed the bug by adding disk to main devices

* add checking for max_memory argument

* Update src/accelerate/utils/modeling.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/accelerate/utils/modeling.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/accelerate/utils/modeling.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* fix typo

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* fix typos

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-07-24 09:23:04 -04:00
b909bfacb9 Fix check failure in Accelerator.save_state using multi-gpu (#1760) 2023-07-24 09:03:45 -04:00
a2d8f540c3 FSDP enhancements and fixes (#1753)
* if the model is already an FSDP instance, remove the warning and prep overhead

* allow usage of `_no_split_modules` to simplify UX when using FSDP

* Update other.py

* fixes
2023-07-21 17:52:37 +05:30
e8ed10ae62 Fix FSDP related issues (#1745)
* Update fsdp_utils.py

* other FSDP fixes

* revert as this is resulting in more vram usage

* revert

* Update fsdp_utils.py
2023-07-21 12:16:45 +05:30
a6291e43b0 Expose autocast kwargs and simplify autocast wrapper (#1740)
* kwarg handler

* Proper default

* Enabled

* Rework

* Clean

* Ref autocast properly
2023-07-20 12:49:30 -04:00
2a289f6108 Rework new constant for operations (#1748)
* Rework new constant

* Naming for clarity

* Rm _cpu

* clean
2023-07-20 11:26:35 -04:00
cafc7f785f Remove unused constant (#1749)
* Rm unused

* Clean
2023-07-19 17:12:00 -04:00
39889c7304 Check for misconfiguration of single node & single GPU (#1746)
* Check for misuse

* Right area

* Space
2023-07-19 17:11:53 -04:00
12d5a2d0da fix typo (#1747) 2023-07-19 13:25:35 -04:00
243288627d fix KwargsHandler.to_kwargs not working with os.environ initialization in __post_init__ (#1738)
* fix KwargsHandler.to_kwargs not working with os.environ initialization in __post_init__

* fix test_torch_dynamo_plugin such that it wouldn't change os.environ permanently

* move clear_os_environ func to utils/other and rename it

* reformat code in order to pass ci quality check

* modify the comment of utils.other.clear_environment
2023-07-19 12:00:53 -04:00
efc1fa8376 Let load_state automatically grab the latest save (#1741)
* Automatic load state

* docstring

* Quality
2023-07-18 14:56:20 -04:00
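A hedged sketch of the new behaviour, assuming automatic checkpoint naming is enabled (the project directory is a placeholder):

```python
from accelerate import Accelerator
from accelerate.utils import ProjectConfiguration

accelerator = Accelerator(
    project_config=ProjectConfiguration(
        project_dir="runs/experiment",    # placeholder
        automatic_checkpoint_naming=True,
    )
)
# ... prepare objects, train, and call accelerator.save_state() periodically ...
accelerator.load_state()  # no path given: the latest checkpoint is picked up
```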
18e3012489 Fixed the bug that split dict incorrectly (#1742)
* Fixed the bug that split dict incorrectly

* fix list out of index and test script
2023-07-18 14:54:25 -04:00
daa1952f47 Update docs (#1736)
* Still in works

* Utils to check

* More references

* Fin

* add utils

* toctree
2023-07-18 07:28:01 -04:00
653ba110d3 Fixed typo in repr of AlignDevicesHook (#1735)
Changed class name in the repr from AlignDeviceHook to AlignDevicesHook
2023-07-17 10:50:22 -04:00
f518b0ab03 make balanced memory able to work with non-contiguous GPU ids (#1734) 2023-07-17 10:49:08 -04:00
3a05e0cf70 Fix errors when optimizer is not a Pytorch optimizer. (#1733)
* Fix errors when optimizer is not a Pytorch optimizer.

* update

---------

Co-authored-by: YU Xinyuan <yuxinyuan02@corp.netease.com>
2023-07-17 07:11:02 -04:00
299f3ef8ab Adding a shape check for set_module_tensor_to_device. (#1731)
* Fixing set_module_tensor_to_device.

* Adding a shape check for `set_module_tensor_to_device`.

* Update src/accelerate/utils/modeling.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update error msg.

* Style.

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-07-14 17:46:52 +02:00
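The added check is easiest to show on a toy module; a minimal sketch (the exact error message is the implementation's, not quoted here):

```python
import torch
from torch import nn
from accelerate.utils import set_module_tensor_to_device

model = nn.Linear(4, 4)

# Matching shape: the parameter is replaced and moved to the target device.
set_module_tensor_to_device(model, "weight", "cpu", value=torch.randn(4, 4))

# Mismatched shape: with this PR the call raises instead of silently
# corrupting the module.
try:
    set_module_tensor_to_device(model, "weight", "cpu", value=torch.randn(2, 2))
except ValueError as err:
    print(err)
```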
925a13eb04 fix the bug in npu (#1728)
* enable test_sync for npu

* fix the bug in get_cluster_input for npu

* fix the bug in broadcast for npu
2023-07-14 09:31:04 -04:00
4170f395d1 Get rid of calling get_scale() by patching the step method of optimizer. (#1720)
* Get rid of calling get_scale() by patching the step method of optimizer.

* Fix when step() is already patched by other parties.

* support pickle

* Minor updates.

* Change _accelerate_num_step_called to _accelerate_step_called

---------

Co-authored-by: YU Xinyuan <yuxinyuan02@corp.netease.com>
2023-07-14 07:56:45 -04:00
bb47344c77 Better control over DDP's no_sync (#1726)
* add `ddp_trigger_sync_in_bwd` to accelerator with test

* add example to `ddp_trigger_sync_in_bwd`

* support case of non-DDP model

* style

* make style

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* model_ddp -> model

* .

* .

* .

* Update src/accelerate/accelerator.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* add comment

* style

* style

* Update src/accelerate/accelerator.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-07-13 18:29:02 +02:00
243cd82409 fix failing test on 8GPU (#1724) 2023-07-13 11:45:45 -04:00
51f5e829a8 v0.22.0.dev0 2023-07-13 11:20:38 -04:00
5b9c5881b6 add compatibility with peft (#1725)
* add compatibility with peft

* update docs
2023-07-13 10:33:44 -04:00
0209606364 add Comfy-UI (#1723) 2023-07-13 19:02:50 +05:30
5909c1a514 Fix typo 2023-07-13 09:27:30 -04:00
e7150b0b15 New tactic (#1719) 2023-07-12 18:50:17 -04:00
e8c64f598b Remove duplicate code (#1717) 2023-07-12 14:22:07 -04:00
a14081ccc5 Optimize get_scale to reduce async calls (#1718)
* Optimize

* Comment
2023-07-12 14:00:28 -04:00
d895809613 Keep old behavior (#1716) 2023-07-12 13:24:31 -04:00
02015eb25c fix version (#1701) 2023-07-12 11:48:48 -04:00
19bcd43e14 Modify loading checkpoint behavior (#1715)
* Add check for the whole state dict

* fix style
2023-07-12 11:48:06 -04:00
59f2fff3cf add multi_gpu decorator (#1712) 2023-07-12 11:17:07 -04:00
c33adecc9f Add Ascend NPU accelerator support (#1676)
* add Ascend NPU accelerator support

* fix code  styles

* enable accelerate test on npu

* fix typo&code styles

---------

Co-authored-by: jihuazhong <jihuazhong1@huawei.com>
2023-07-12 08:43:02 -04:00
518c206a2a Fix the bug where DataLoaderDispatcher gets stuck in an infinite wait when the dataset is an IterDataPipe during multi-process training. (#1709)
Co-authored-by: YU Xinyuan <yuxinyuan02@corp.netease.com>
2023-07-12 07:44:36 -04:00
65b5c2cfad Fixes for issue #1683: failed to run accelerate config in colab (#1692)
* Fixes for issue #1683: failed to run accelerate config in colab

Fixes for issue #1683: failed to run accelerate config in colab

* Fixes for issue #1683: failed to run accelerate config in colab, change input2 to a formal variable name

change input2 to a formal variable name

* Fixes for issue #1683: failed to run accelerate config in colab

removed unnecessary spaces

* Fix for #1683 failed to run accelerate config in colab 

fixed reformatting issue, during the quality check

* Fixes for issue #1683: failed to run accelerate config in colab

refactor the code, passed black, ruff, doc-builder test; modified the prompt in colab.

* Fixes for issue #1683: failed to run accelerate config in colab

fixed black, ruff, doc-builder, modified prompt during choice input

* Fixes for issue #1683: failed to run accelerate config in colab

use utils.imports _is_package_available() method instead, to be consistent with the rest of the library code.

* Fixes for issue #1683: failed to run accelerate config in colab

add default choice, wrap up import check with try catch, passed quality check, style check and test cases
2023-07-12 07:15:02 -04:00
7954a28a71 Fix launcher validation (#1705)
* unstash

* fix validation of launcher args

* bug fix

* cond for tpu
2023-07-11 14:30:44 -04:00
3bdb35abfa Skip tests when bnb isn't available (#1706)
* bnb is available

* Some more
2023-07-11 14:29:17 -04:00
d58aac2e1e Update tracking.md (#1702) 2023-07-11 14:15:59 -04:00
a4c2654f50 Deepcopy on Accelerator to return self (#1694)
* Deepcopy

* Clean

* deepcopy
2023-07-11 14:14:15 -04:00
27d29087b2 Add offload for 8-bit model (#1699)
* Add offload for 8-bit model

* fix saved 8bit model offload and add tests

* Update src/accelerate/utils/modeling.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/accelerate/utils/modeling.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* add doc on how offload works

* remove enable_offload

* make style doc

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-07-11 13:46:15 -04:00
c7698834fc Move mixed precision wrapping ahead of DDP/FSDP wrapping (#1682)
* Update accelerator.py

* Update accelerator.py

* Update accelerator.py

* Update accelerator.py

* Update accelerator.py

* Update test_script.py

* Update test_script.py

* Update test_script.py

* Update test_script.py

* Update test_script.py
2023-07-11 10:35:13 -04:00
64d7b58c44 Improve quality errors (#1698)
* Purposefully fail

* Step summary

* Right bash

* Take 2

* Post to job summary

* Extra space
2023-07-11 09:09:02 -04:00
e3aae2ac65 Fixup docs (#1697) 2023-07-11 08:36:37 -04:00
d0a7991b65 Fix nightly tests (#1696)
* Debug start

* Fix

* Workflow
2023-07-11 08:36:23 -04:00
180ef7c415 update readme in examples (#1678) 2023-07-10 12:19:27 -04:00
95bffdec43 remove duplicate class (#1691) 2023-07-07 10:29:00 -04:00
c74c28c6d1 Fix workflow CI (#1690)
* Try again

* Accelerate only

* Try pushing again
2023-07-07 09:46:00 -04:00
e0f5e03009 fix bnb tests (#1679)
* fix tests

* Fix 8bit serialization tests
2023-07-05 10:13:20 -04:00
dfbfbdfea8 Add docs for saving Transformers models (#1671)
* add section to package_reference/accelerator.md explaining saving for Transformers models

* rename `model` to `unwrapped_model`

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-07-03 10:34:30 -04:00
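A minimal sketch of the saving pattern that doc section describes, assuming a standard Transformers model prepared with Accelerate (model name and output directory are placeholders):

```python
from accelerate import Accelerator
from transformers import AutoModelForSequenceClassification

accelerator = Accelerator()
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased")
model = accelerator.prepare(model)

# Unwrap first so the checkpoint carries no DDP/FSDP wrappers, then save with
# Accelerate-aware arguments so only the main process writes to disk.
unwrapped_model = accelerator.unwrap_model(model)
unwrapped_model.save_pretrained(
    "my-checkpoint",
    is_main_process=accelerator.is_main_process,
    save_function=accelerator.save,
)
```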
24ae624d96 Doc big model inference (#1670)
* change example

* fix spaces

* add link to transformers

* Fix style
2023-06-30 18:00:52 -04:00
40f822a1e3 replace save funct in doc (#1672) 2023-06-30 17:03:19 -04:00
a0bfe2140c Bnb quantization (#1626)
* Add get_quantized_model func

* Add tests for 4bit and 8bit quantization

* Add tests

* Fix style

* Add offload tests

* Fix style

* Fix

* Fix conflict

* fix generate quality test

* fix style

* add check for bnb layers and fix .to(cpu)

* Fix 8bit serialization and memory issue

* add import

* Change quantize_model to load_and_quantize_model

* Add tests for saving 8bit model

* Fix bnb dataclass

* fix style

* fix tests

* fix style

* remove dependency on tie_weights

* remove dependency on base_model_prefix

* remove dependency on device

* fix style

* Add doc about quantization

* fix import

* Fix text

* fix func name

* fix arg in dataclass

* Update docs/source/usage_guides/quantization.md

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* fix funct name

* Add real model

* Fix doc

* put bash tag

* Update src/accelerate/utils/bnb.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-06-30 10:59:04 -04:00
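A rough sketch of the quantization flow this PR documents; the model id, checkpoint path, and keyword names follow the usage guide added here and should be read as illustrative rather than definitive:

```python
from accelerate import init_empty_weights
from accelerate.utils import BnbQuantizationConfig, load_and_quantize_model
from transformers import AutoConfig, AutoModelForCausalLM

# Build an empty (meta) model, then load and quantize its weights in one step.
config = AutoConfig.from_pretrained("facebook/opt-350m")
with init_empty_weights():
    empty_model = AutoModelForCausalLM.from_config(config)

bnb_config = BnbQuantizationConfig(load_in_8bit=True)  # or load_in_4bit=True
model = load_and_quantize_model(
    empty_model,
    weights_location="path/to/opt-350m-checkpoint",  # placeholder path
    bnb_quantization_config=bnb_config,
    device_map="auto",
)
```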
c6443f8bd4 Update broken Runhouse link in examples/README.md (#1668) 2023-06-30 08:51:28 -05:00
3cd02e9340 change the import place to avoid import error (#1653) 2023-06-30 11:55:30 +05:30
17ec2ede11 remove safetensor dep on shard_checkpoint (#1664)
* remove safetensor dep on shard_checkpoint

* fix style

* group function
2023-06-29 11:23:13 -04:00
e30938700a 🚨🚨🚨 Spring cleaning: PyTorch 1.10 🚨🚨🚨 (#1662)
* Bookmark

* Bump torch v

* More stuff

* Remove never called else
2023-06-29 09:26:15 -04:00
b864946606 🚨🚨🚨 Spring cleaning: Python 3.8 🚨🚨🚨 (#1661)
* Py 3.8

* Rm typed dict

* Workflows
2023-06-29 08:46:19 -04:00
bc234c040c [BigModeling] Final fix for dispatch int8 and fp4 models (#1660)
* final fix for dispatch int8 and fp4 models

* Update src/accelerate/big_modeling.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2023-06-28 11:16:13 -04:00
662a7dd905 docker cpu py version (#1659) 2023-06-28 10:37:29 -04:00
d3db2d4fe5 TIL (#1657) 2023-06-28 10:36:49 -04:00
96f926a25e Bump integration (#1658) 2023-06-28 10:32:43 -04:00
a9d43cda80 [BigModeling] Add missing check for quantized models (#1652)
* add missing check

* better check

* better check

* much better check
2023-06-28 16:07:30 +02:00
effccbdc84 Check for port usage before launch (#1656)
* Check for port usage

* Just comm

* Right flag in err

* Better err, happy now
2023-06-28 09:10:01 -04:00
d141b4ce79 Fix device_map (#1651) 2023-06-27 21:36:00 -04:00
bc49d0f9b3 Doc save model (#1650)
* add doc for save_model func

* fix doc

* fix path issue

* add load_checkpoint_in_model doc in utilities

* oops

* Update docs/source/package_reference/utilities.md

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-06-27 16:08:56 -04:00
5ea7c81277 Change dispatch_model when we have only one device (#1648)
* Change dispatch_model when we have only one device

* Fix style

* add else statement

* fix style

* Fix error message

* Fix style
2023-06-27 14:58:11 -04:00
efe4481a28 add save model (#1641)
* add save model

* Fix duplicates function and remove args

* Fix style

* fix description

* add save_model to Accelerator object

* Revert "fix potential OOM when resuming with multi-GPU training (#1444)"

This reverts commit 3a381bfa48dfb082c1f8e892a9a07ca5717bf0df.

* Fix style

* Fix description

* Replace state_dict() by accelerator get_state_dict

* Fix state dict

* clean comment
2023-06-27 11:10:42 -04:00
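A small sketch of the new helper in use, assuming a prepared model (the toy model and directory name are placeholders):

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()
model = accelerator.prepare(torch.nn.Linear(8, 2))

# Writes the model weights to the given directory, unwrapping any training wrappers.
accelerator.save_model(model, "saved_model")
```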
df215cc243 Add skorch to runners (#1646)
* Skorch tests

* Take 2

* runs-on

* Take 2

* Rm needs

* Needs testing deps

* dep

* Only use all GPUs

* Add skorch tests

* rm

* nl
2023-06-27 10:08:22 -04:00
5791d949ff fix modeling low zero (#1634)
* fix modeling low zero

* low zero logic change
2023-06-26 13:19:48 -04:00
b76409ba05 fix autocasting bug (#1637)
* fix autocasting bug

* refactor and resolve comment
2023-06-26 20:18:36 +05:30
a25c4eacae Swap disable rich (#1640) 2023-06-26 09:59:10 -04:00
d8437ae096 Fix nightly 2023-06-26 09:20:01 -04:00
2fa22f3342 deepspeed z2/z1 state_dict bloating fix (#1638)
* deepspeed z2/z1 state_dict bloating fix

* fix
2023-06-26 17:44:36 +05:30
a2ecb58132 fix: Megatron is not installed. please build it from source. (#1636)
The megatron package name is mismatched with the dist directory name.

Signed-off-by: yuanwu <yuan.wu@intel.com>
2023-06-26 08:13:28 -04:00
73cc944067 fixes offload dtype (#1631)
* Fix offload dtype

* Set dtype on meta device

* fix style
2023-06-22 17:38:09 -04:00
b16916f447 Fix transformers sync bug with accumulate (#1624)
* Fix transformers sync

* Docs + expose

* Right arg

* bool
2023-06-22 04:42:54 -04:00
36f8e48747 Fix workflow (#1625)
* Fix steps

* Right runs-on

* Fix directory

* Just integration

* Fix check

* Disable wandb

* Fin

* Diff
2023-06-21 16:04:55 -04:00
790cb8b461 Fix tb issue (#1623) 2023-06-21 13:48:41 -04:00
7b4d12623a Doc to md (#1618)
* Convert doc files to MD

* Convert doc files to Markdown
2023-06-20 18:12:19 -04:00
956c6baf71 Fix failing multinode tests (#1616)
* Should fix multinode test

* For testing, remove after

* try this

* Try disabling

* Try again

* move more

* Fix multinode tests

* New check

* Fix err

* Fix test
2023-06-20 15:32:13 -04:00
485e8c8cb4 Ignore low_zero option when only one device is available (#1617) 2023-06-20 12:28:56 -04:00
aaf38c2f35 fix for arc gpus (#1615) 2023-06-20 11:09:11 -04:00
f433457244 reset end_of_dataloader for dataloader_dispatcher (#1609)
* reset end_of_dataloader for dataloader_dispatcher

* add ruff fixes
2023-06-20 08:41:11 -04:00
535b52cef2 Remove GPU safetensors env variable (#1603) 2023-06-16 10:59:41 -04:00
e60a424398 Remove asking xpu plugin for non xpu devices (#1594)
* remove asking xpu plugin for non xpu devices

* style
2023-06-15 13:11:24 -04:00
32f85ce524 Add triggers for CI workflow (#1597)
* Trigger

* Space
2023-06-15 09:12:41 -04:00
0983a9b9b4 Integration tests (#1593)
* Integration tests

* Typofix

* Clean up python version

* Trainer typo

* Clean env

* rm cache
2023-06-15 02:42:34 -04:00
e5d0df44f0 Update modeling.py (#1595) 2023-06-14 17:59:28 -04:00
50eabe5b1d FSDP updates (#1576)
* FSDP updates

* quality and import fixes

* bug fix and adding contributors

Co-Authored-By: Vik Paruchuri <github@vikas.sh>
Co-Authored-By: raghavanone <115454562+raghavanone@users.noreply.github.com>

* fix 🐛

* update docs and example

* quality

* fixes and updates

* use logger

* fix circular dependency issue

* quality

* refactor

* quality

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* address comment

---------

Co-authored-by: Vik Paruchuri <github@vikas.sh>
Co-authored-by: raghavanone <115454562+raghavanone@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-06-13 20:36:32 +05:30
f2d1047059 Update checkpoint.mdx (#1587) 2023-06-13 09:57:52 -04:00
3e68f1da63 Fix test (#1586) 2023-06-13 09:03:47 -04:00
f8b0696076 fix logger level (#1579) 2023-06-13 08:55:10 -04:00
51a2ca5d88 Return false if CUDA available (#1581) 2023-06-13 08:44:31 -04:00
51de46e368 Update training_tpu.mdx (#1582) 2023-06-13 07:52:59 -04:00
e2b0224ec4 improve oob performance when using mpirun to start DDP finetuning without accelerate launch (#1575)
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2023-06-13 07:52:26 -04:00
db11bd5035 Get Torch version using importlib instead of pkg_resources (#1585)
This fixes the following warning:
> pkg_resources is deprecated as an API
2023-06-13 07:50:12 -04:00
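In practice the change boils down to something like this:

```python
# Read the installed torch version without the deprecated pkg_resources API.
import importlib.metadata

torch_version = importlib.metadata.version("torch")
print(torch_version)  # e.g. "2.0.1"
```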
543c59af22 Expand prepare() doc (#1580)
* Expand device_placement

* Expand doc

* Update src/accelerate/accelerator.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update accelerator.py

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-06-12 14:37:43 -04:00
81765e6e00 Make sure that we only set is_accelerator_prepared on items accelerate actually prepares (#1578)
* Other items

* Better test and check

* Align test

* Clean
2023-06-12 12:09:31 -04:00
a4ebc14fab fix the bug in xpu (#1508)
* fix bug in is_xpu_available

* fix device configure bug for DDP with ccl backend

* enable accelerate launch for DistributedType.MULTI_XPU

* fix the bug in wait_for_everyone for xpu

* fix the bug in rng_sync_check for xpu

* refactoring code according to muellerzr's suggestion

* define RegressionModel4XPU for xpu to avoid ccl bug

* make MULTI_XPU independent on env var 'CCL_WORKER_COUNT'
2023-06-12 11:34:21 -04:00
058f6f70f5 Permanent solution (#1577) 2023-06-12 11:29:36 -04:00
665d5180fc Check for bak and expand docs on directory structure (#1571)
* Check for bak and expand doc

* Better regex

* Update docstring

* Use exclusion at beginning and simplify check for digit
2023-06-09 13:10:53 -04:00
d1ea9ab40c Introduce listify, fix tensorboard silently failing (#1570)
* Introduce untensorify, fix logging with tensor

* Clean imports and make note

* untensorify -> listify
2023-06-09 12:50:28 -04:00
632dce67ab Raise error instead of warn (#1568) 2023-06-09 12:18:26 -04:00
e41864ce9d Update mixed precision integrations in README (#1569) 2023-06-09 11:26:33 -04:00
979991aa78 Update gradient sync docs to reflect importance of optimizer.step() (#1565)
Before this commit, this documentation suggested that model parameters
are updated when `accelerator.backward()` is called (which in turn calls
`loss.backward()`). This isn't the case - parameter updates happen when
`optimizer.step()` is called.

This commit:
1. Updates this documentation to reflect this within the discussion of
   gradient accumulation.
2. Adds calls to `optimizer.step()` as that's key to gradient
   accumulation.
3. Adds optimizer.zero_grad() for consistency with `accelerator.accumulate()`'s docs
4. Does some related word-smithing

To make sure I was thinking about gradient accumulation correctly, I'm
using `huggingface/transformer`'s performance guide for a working
definition of gradient accumulation, which this diff is consistent with:

> The idea behind gradient accumulation is to instead of calculating the
gradients for the whole batch at once to do it in smaller steps. The way
we do that is to calculate the gradients iteratively in smaller batches
by doing a forward and backward pass through the model and accumulating
the gradients in the process. *When enough gradients are accumulated we
run the model’s optimization step*. This way we can easily increase the
overall batch size to numbers that would never fit into the GPU’s
memory. In turn, however, the added forward and backward passes can slow
down the training a bit.

(https://huggingface.co/docs/transformers/perf_train_gpu_one#gradient-accumulation)

Another huggingface example of gradient accumulation that is consistent
with this change: [run_glue_no_trainer.py][0]

[0]: https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue_no_trainer.py#L518-L532
2023-06-09 09:30:43 -04:00
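For reference, a self-contained sketch of the loop shape the updated docs describe: gradients accumulate through `accelerator.backward()`, but parameters only change when `optimizer.step()` runs.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator(gradient_accumulation_steps=4)
model = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
dataloader = DataLoader(TensorDataset(torch.randn(64, 10), torch.randn(64, 1)), batch_size=8)

model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for inputs, targets in dataloader:
    with accelerator.accumulate(model):
        loss = torch.nn.functional.mse_loss(model(inputs), targets)
        accelerator.backward(loss)  # accumulates gradients; no parameter update by itself
        optimizer.step()            # parameters only change here
        optimizer.zero_grad()
```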
7fc1e438d1 [bnb] Fix failing int8 tests (#1567)
* fix int8 tests

* replace with `replace_8bit_linear`
2023-06-09 14:53:07 +02:00
040f178569 Update big_modeling.mdx (#1564) 2023-06-08 15:52:05 -04:00
87c81315a1 Reset dataloader end_of_datalaoder at each iter (#1562) 2023-06-08 12:08:17 -04:00
f1e84decc9 [core] Fix possibility to passNoneType objects in prepare (#1561)
* add possibility to pass nonetype objects

* adds nice test
2023-06-08 14:56:22 +02:00
eafddf02e3 fix the typo when setting the "_accelerator_prepared" attribute (#1560)
* fix the typo when setting the "_accelerator_prepared" attribute

* use the name "_is_accelerate_prepared" instead
2023-06-07 18:18:08 -04:00
f0029d6f60 Fix tests not being ran on multi-GPU nightly (#1558)
* Fix tests not being ran

* More tests
2023-06-07 15:14:02 -04:00
3147de9010 Fix load_state_dict when there is one device and disk (#1557) 2023-06-07 14:57:20 -04:00
d448ebaf90 Update README.md (#1556) 2023-06-07 14:44:27 -04:00
65dd4f2039 Avoid double wrapping of all accelerate.prepare objects (#1555)
* Add step reset to free memory

* Check if not Accelerated Optimizer

* Continue

* Another try

* Check the rest

* Try with just check on init

* Change logic based on review

* Update

* Oops very big logic issue!
2023-06-07 13:37:19 -04:00
7ee2c79da9 Update launch.mdx (#1553) 2023-06-07 13:35:51 -04:00
bbe2e30901 [doc build] Use secrets (#1551) 2023-06-07 18:42:09 +02:00
0ab72613a7 v0.21.0.dev0 2023-06-07 10:12:36 -04:00
6f14e619b2 Update migration.mdx (#1549) 2023-06-07 09:50:09 -04:00
90e9703d99 Eval mode (#1540) 2023-06-07 09:27:05 -04:00
5f21cde3c7 [documentation] grammar fixes in gradient_synchronization.mdx (#1547)
* Update deferring_execution.mdx

* [documentation] grammar fixes in gradient_synchronization.mdx

These changes are grammatical and do not affect the ideas communicated in the file.
2023-06-06 17:06:03 -04:00
76ccfae682 Add mps support to big inference modeling (#1545)
* Add mps support

* make style

* Fix syntax

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Fix condition

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-06-06 16:31:02 -04:00
62357f218f Apply deprecations (#1537)
* MPS

* Update examples

* Fix env var

* device type

* Fix test
2023-06-06 13:04:45 -04:00
be1b76e97a Update deferring_execution.mdx (#1544) 2023-06-06 11:59:30 -04:00
3f2b5da094 Update performance.mdx (#1543) 2023-06-06 09:54:25 -04:00
3f1cb09e7b Update deepspeed.mdx (#1541) 2023-06-06 09:54:03 -04:00
7a39d928f5 Prevent using extra VRAM for static device_map (#1536) 2023-06-06 09:31:41 -04:00
961fe728d9 remove ipexplugin, let ACCELERATE_USE_IPEX/ACCELERATE_USE_XPU control the ipex and xpu (#1503)
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2023-06-06 09:27:31 -04:00
ef0c4bf277 Officially support naive PP for quantized models + PEFT (#1523)
* officially support naive PP

- relax check
- add test

* Apply suggestions from code review

* more tests

* Update src/accelerate/accelerator.py
2023-06-06 14:41:59 +02:00
de855b3247 Raise ValueError on iterable dataset if we've hit the end and attempt to go beyond it (#1531)
* Raise ValueError on iterable

* Clean
2023-06-06 07:51:22 -04:00
b9628f13c2 Check tied parameters (#1529)
* Check that parameters are tied correctly

* Fix style

* Fix condition

* Fix failing test

* Fix check_tied_parameters function

* Fix condition

* Fix arg

* Apply suggestions from code review

Fix log

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Fix tests and comments

Fix comments and tests

Fix description

* Remove dep

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-06-05 15:17:49 -04:00
16ca01feea Refactor mp into its own wrapper (#1527)
* Better, clean version

* Diff

* oops need return

* Make adjustments

* Docstring
2023-06-05 12:00:51 -04:00
4cbbde8945 Fixup deepspeed/cli tests (#1526) 2023-06-05 11:35:21 -04:00
eba6eb79dc Fix a bug when parameters tied belong to the same module (#1514)
* Fix a bug when parameters tied belong to the same module

* Address review comments

* Add tests
2023-06-02 17:07:39 -04:00
109f3272f5 Swap env vars for XPU and IPEX + CLI (#1513)
* Swap env vars

* Clean up CLI

* use_xpu

* Add CLI docs

* Ipex only

* Nit

* Check

* Capitalize

* Make changes from review
2023-06-02 13:30:16 -04:00
85901cdcf9 should set correct dtype to ipex optimize and use amp logic in native… (#1511)
* set the correct dtype for ipex optimize and reuse the native_amp logic in prepare_model

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

* remove the mixed precision setting in ipex and use it directly from the accelerate state

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

* raise import error if ipex is not valid in prepare ipex

* Update src/accelerate/accelerator.py

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

---------

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2023-06-02 10:45:17 -04:00
5e74d932b9 NVME path support for deepspeed (#1484)
* NVME path support for deepspeed

* modify stage 3 ds test

* review commit and fixes

* review commits
2023-06-02 09:55:17 -04:00
090c65cd9d Add assertion when calling prepare with deepspeed config. (#1468) 2023-06-02 09:55:04 -04:00
b7d5d9072a adjust overriding of model's forward function (#1492)
* adjust overriding of model's forward function

* bug fix

* extend solution to all model.forward overrides

* leave fp8 section alone

* make style

---------

Co-authored-by: root <root@orttrainingdev8.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
Co-authored-by: Prathik Rao <prathikrao@microsoft.com@orttrainingdev8.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
2023-06-02 07:52:56 -04:00
d4262021d5 Fix 4bit model on multiple devices (#1506)
* Add 4bit case and fix device index

* Fix style
2023-06-01 15:10:51 -04:00
8ae56dc51d [bnb] Add fp4 support for dispatch (#1505)
* add fp4 support for dispatch

* add tests

* refactor
2023-06-01 20:41:03 +02:00
c9fbb71e37 fix crash when ipex is installed and torch has no xpu (#1502)
Also, when the cpu flag is set, cpu should be used instead of XPU.
2023-06-01 11:48:55 -04:00
4d583ad6a1 Allow key skipping in big model inference (#1491)
* Allow key skipping in big model inference

* Add a repr
2023-05-31 15:04:52 -04:00
70d999ee4a Use empty like when we only need to create buffers (#1497)
* Use empty like

* Make
2023-05-31 11:53:17 -04:00
3913fa4dd0 Let gather_for_metrics always run (#1496) 2023-05-31 10:59:31 -04:00
f9b2e6769b Update README.md (#1493) 2023-05-31 09:25:29 -04:00
d3f8c52f4c Only use IPEX if available (#1495)
* Only use IPEX if available

* Check first, then make plugin
2023-05-31 08:18:13 -04:00
af12e7b023 Add rdzv-backend (#1490)
* Add rdzv

* rm print

* Doc

* Better help
2023-05-31 08:06:55 -04:00
68376babd8 Fix gradient state bugs in multiple dataloader (#1483)
* Fix gradient state bugs in multiple dataloader

* Fix style issue

* Update src/accelerate/data_loader.py

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

* Add docstring

* Fix style

---------

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2023-05-30 10:56:42 -04:00
7d24bdefb5 Move to device (#1478) 2023-05-26 15:01:02 -04:00
bb296348e1 Split tensors as part of split_between_processes (#1477)
* Try with this

* Remove import to be late

* Apply padding properly for tensors

* Pad across tensors

* Check to see if this works

* Use -1

* Properly send the first item as what's to be padded

* Update docstring

* Add tests

* Fix test

* Update typehints and docstrings
2023-05-26 14:23:07 -04:00
0226f75025 Improve sagemaker (#1470)
* Should fix everything now:

* Simplify logic
2023-05-24 15:50:31 -04:00
419c9ce22a Update gradient accumulation docs, and remove redundant example (#1461) 2023-05-24 10:43:42 -04:00
2249fbde0d update register_empty_buffer to match torch args (#1465) 2023-05-24 08:32:38 -04:00
e0ffea5bc3 Check for xpu specifically (#1472) 2023-05-23 12:42:12 -04:00
9a86a49f72 update conversion of layers to retain original data type. (#1467)
* add dtype to retain original dtype of layers in convert_model

* updated params_dtype

* ran make style,quality:
2023-05-23 05:19:57 -04:00
70920895e8 Fix skip first batch being perminant (#1466)
* Better version of fix

* Failing diff test

* Special str
2023-05-22 14:18:16 -04:00
bf3cd30a66 4-bit QLoRA via bitsandbytes (4-bit base model + LoRA) (#1458)
* Added change for FP4.

* fix suggestion

* better check

---------

Co-authored-by: younesbelkada <younesbelkada@gmail.com>
2023-05-22 11:35:14 -04:00
bfa74e51d2 Document how to use commands with python module instead of argparse (#1457)
* Include other commands

* Add another paragraph

* Reverse order

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

---------

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
2023-05-19 12:32:54 -04:00
e6699e6aba Refactor and simplify xpu device in state (#1456)
* Refactor and simplify xpu device in state

* review commit
2023-05-19 10:43:24 -04:00
0871e93a74 fix error for CPU DDP using trainer api. (#1455)
init_process_group() got multiple values for argument 'backend'

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2023-05-19 06:32:11 -04:00
86720fdb11 Adds in_order argument that defaults to False, to log in order. (#1262)
* Adds `in_order` argument that defaults to False, to log in order.

Adds `in_order` argument that defaults to `False`, to log in order.
It really helps with readability. Defaults to `False` so as not to break backwards compatibility.

* fixed formatting

* Update src/accelerate/logging.py

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

* Fixed quality & suggestions

---------

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2023-05-18 15:01:26 -04:00
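A hedged sketch of how the new flag combines with the existing `main_process_only` argument:

```python
from accelerate import Accelerator
from accelerate.logging import get_logger

accelerator = Accelerator()
logger = get_logger(__name__, log_level="INFO")

# Let every rank log, serialized in rank order instead of interleaved.
logger.info(
    f"hello from rank {accelerator.process_index}",
    main_process_only=False,
    in_order=True,
)
```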
1deab71e3c Update with cli instructions (#1453)
* Update with cli instructions

* Also update basic tut
2023-05-18 11:32:26 -04:00
5d1cee3d81 Auto multigpu logic (#1452) 2023-05-18 11:12:58 -04:00
5904f56c45 [docs] Replace state.rank -> process_index (#1450)
I couldn't find a rank property in `PartialState`.
2023-05-18 07:13:39 -04:00
99d790dc34 split_between_processes (#1449) 2023-05-17 15:35:36 -04:00
1760d2dc8c Add to (#1448) 2023-05-17 14:52:25 -04:00
b93bfac16d Distributed prompting/inference utility (#1410)
* Splitter

* Rename and fix

* Change value

* Add plus 1?

* mvp

* Nested processes

* Start of implementation

* Fin

* Introduce util

* Return non-nested for now

* Future annotation

* Fix

* Fix failing tests, make it fully nested

* Fin

* Start doc

* Fixup tests

* Add is_torch_version

* Should work now with padding

* Include padding

* Docstrings

* toctree

* Dash

* Note on when padding is needed

* Apply typo fixes from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Try quicklink

* Use dash

* URL

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-05-17 14:41:25 -04:00
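A short sketch of the utility introduced here; the prompt list is arbitrary, and padding across ranks stays opt-in:

```python
from accelerate import PartialState

state = PartialState()
prompts = ["a cat", "a dog", "a bird", "a fish", "a frog"]

# Each process works only on its own slice; with 2 processes, rank 0 gets the
# first 3 prompts and rank 1 the remaining 2.
with state.split_between_processes(prompts) as subset:
    print(f"rank {state.process_index}: {subset}")
```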
981c6fb8d6 Fix ci (#1447) 2023-05-17 13:49:56 -04:00
6413f25ba9 Raise error when logging improperly (#1446)
* Raise error when logging

* Update src/accelerate/logging.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-05-17 11:16:35 -04:00
39e20d3e55 Fixes in infer_auto_device_map (#1441) 2023-05-17 10:54:42 -04:00
3a381bfa48 fix potential OOM when resuming with multi-GPU training (#1444)
* load `optimizers`, `schedulers`, `scalers` and `states` on different devices

* only apply to the optimizer state
2023-05-17 10:53:17 -04:00
bc82d18821 fixed: ZeroDivisionError: division by zero (#1436)
* Update modeling.py

fixed: ZeroDivisionError: division by zero

* fixed style

* code optimize

---------

Co-authored-by: xingwei <xingwei@i-click.com>
2023-05-17 08:59:12 -04:00
330d60b817 Make sure torch compiled model can also be unwrapped (#1437)
* Make sure torch compiled model can also be unwrapped

* Apply suggestions from code review

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

* add tests

* fix double import

---------

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2023-05-16 19:03:36 +01:00
612ecef7b8 Fix XPU (#1440) 2023-05-16 13:03:22 -04:00
9493d7276b [core] Introducing CustomDtype enum for custom dtypes (#1434)
* working v1 - draft

* format

* more comments
2023-05-16 16:24:17 +02:00
40c6e0ca41 Ensure that it gets installed (#1439) 2023-05-16 09:50:53 -04:00
a28491bc24 Let quality yell at the user if it's a version difference (#1438)
* Let quality yell at the user if it's a version difference

* Also include in style
2023-05-16 09:30:08 -04:00
435079aafb Improve Slack Updater (#1433)
* Update log_reports to send to slack

* REVERT this change, just for testing!

* Add slack_sdk dep

* Second one

* Try now?

* Remove len

* Need secret

* Try with new version

* Right boldface

* Fix import

* New format, use tabulate

* Add tabulate to yml

* Quality

* Purposefully fail

* Working updater, now to test

* Int

* Print payload

* Append

* Change maxcolwidth

* Offset

* More offset

* Context

* No max width

* gh format

* max-col-width'

* Reduce max

* Non-working tables

* Rm md report

* Try now

* Try with just count

* Use table

* New version

* Use table

* Try with thread

* Should be working now

* Clean

* Fixup test reports fully

* Revert workflow

* Keep tabulate in workflow ci

* Update other workflows

* Use blocks for better formatting

* One more test

* Works as expected
2023-05-16 09:08:10 -04:00
dcde1e93d0 Fix bug on ipex for diffusers (#1426) 2023-05-12 23:32:01 +02:00
ab379793d4 Intel GPU support initialization (#1118)
* Intel GPU support initialization

* rng state for xpu, accel backend

* add xpu variable and clean code

* checkpointing, hooks, colls & megatronlm porting

* fix runtime errors

* test utils and xpu runtime checks

* fix unknown import in constant

* Resolve amp and cuda/xpu tensor placement

* add ipex for state and hooks

* add mingxiao's ipex changes and source code rebase changes

* add ipex binding in cluster

* resolve megatron lm issues and modelling memory

* indent fix and syntax

* versioning and sanity checks

* use kwargs and add upstream

* revert megatron lm xpu changes

* cleanups and test npr

* fix merge conflict

* fix merge conflict

* Fix merge conflict

* review commits

* make style, ruff code styling

* hf doc builder code style

* Review commits and code style

* remove xpu plugin and use only ipex by default if cpu/xpu present

* review commits and fix tests on state

* fix test in state

* add xpu condition in optimizer and code style/testing

* fix test add warn for ipex

* fix test

* fix test

* fix test and condition

* fix amp test prod, cli, core

* fix minimum torch tests

* refine accelerator and modelling for tests

* refine modeling and merge

* Fix slow cuda tests

* doc and retrigger test
2023-05-11 09:03:24 -04:00
b50e75f85d Make mlflow logging dir optional (#1413) 2023-05-11 12:03:13 +02:00
f95067bfbf fix deepspeed failing tests (#1411)
* changes required for DS integration

* changing the default value of `zero_force_ds_cpu_optimizer` to True to fix the failing tests
2023-05-11 10:35:46 +05:30
d07fd959cc changes required for DS integration (#1406) 2023-05-11 00:47:32 +05:30
873b39b85b use existing mlflow experiment if exists (#1403)
Co-authored-by: Rustem Galiullin <rustem.galiullin@bayanat.ai>
2023-05-10 11:51:21 +02:00
da39665055 Adding support for local SGD. (#1378)
* Adding support for local SGD.

* Update src/accelerate/local_sgd.py

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

* Update src/accelerate/local_sgd.py

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

* Update src/accelerate/local_sgd.py

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

* fixing reduction + adding a test.

* style fix.

* Update docs/source/usage_guides/local_sgd.mdx

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/accelerate/local_sgd.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update examples/by_feature/local_sgd.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-05-09 10:52:03 -04:00
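A rough, self-contained sketch of the helper added here; the argument names follow the usage guide in this PR and the toy model is a placeholder:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator
from accelerate.local_sgd import LocalSGD

accelerator = Accelerator()
model = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
dataloader = DataLoader(TensorDataset(torch.randn(64, 10), torch.randn(64, 1)), batch_size=8)
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

with LocalSGD(accelerator=accelerator, model=model, local_sgd_steps=8, enabled=True) as local_sgd:
    for inputs, targets in dataloader:
        loss = torch.nn.functional.mse_loss(model(inputs), targets)
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
        local_sgd.step()  # parameters are averaged across workers every `local_sgd_steps` steps
```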
d95d68ec46 Support TPU v2 and v3 on new PyTorch/XLA TPU runtime (#1385)
* Use numpy Generator instead of global seed

* Implement SharedDict descriptor

* Formatting and comments

* Remove `GlobalSharedDict`

* Formatting

* Formatting with `doc-builder` installed correctly
2023-05-09 09:12:43 -04:00
fafadc5323 Add in a section on papers using Accelerate (#1399)
* Start of papers

* Add back in PickScore

* Rm non-urld

* Test

* Remove space
2023-05-09 15:00:50 +02:00
145fca5a09 Support TPU v4 with new PyTorch/XLA TPU runtime (#1393)
* Fix `XLA_USE_BF16` when not using mixed precision

* Fix RNG sync during data loading

* Fix hanging during checkpointing

* Remove extra _mp_fn

* Use all_gather to implement _tpu_gather

* Use collective_broadcast for torch RNG state

* Formatting and comments.

* Fix formatting with `make style`
2023-05-08 13:53:43 -04:00
9fe690706d v0.20.0.dev0 2023-05-08 08:37:42 -04:00
6e81938282 Update training_zoo.mdx (#1397) 2023-05-07 19:00:46 -04:00
e965d590cd Fix gather_obj (#1391)
* Fix gather_obj

* Fix cpu test

* Requires torch 1.7

* Set torch version
2023-05-05 17:55:51 +02:00
6dfcf5b8ef Bump torch v (#1392) 2023-05-05 17:55:21 +02:00
e4ea4ed4de Log Images and other types to wandb (#962)
* add image logging

* add table logging

* add artifact logging capabilities

* fix black

* remove log_images on base class

* fix docstring

* quality

* remove the artifact code

* add main proc decorator

* add main process to log_images in ternsorboard

* quality

---------

Co-authored-by: Thomas Capelle <thomas.capelle@steady-sun.com>
2023-05-05 16:11:16 +02:00
fa8e1cff91 fix config bug for 'mixed_precision' from 'yaml.safe_load()' (#1386)
* fix config bug for 'mixed_precision' from 'yaml.safe_load()'

* Update src/accelerate/commands/config/config_args.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-05-05 07:37:09 -04:00
60856787ac Fix flakey thread issue (#1387)
* Fix thread issue?

* Fix bool

* \<2

* Below 2.0 fully

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-05-04 14:41:53 -04:00
995563fec9 delete textfile after tests are done (#1381) 2023-05-02 09:58:06 -04:00
2d62bd1570 Separate out contextmanager generation (#1379)
* Separate out contextmanager generation

* Move over to modeling

* Switch import
2023-05-02 09:54:53 -04:00
f8169eaded Improve accelerate env reporting (#1376)
* Have env state GPU kind

* Include system RAM

* CLean
2023-05-01 11:08:26 -04:00
75ab711993 Special transformers case from args (#1364)
* Special transformers case

* Reduce to single line

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Revert

* Clean

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-05-01 09:44:44 -04:00
f489a86573 Fix default FSDP_MIN_NUM_PARAMS (#1367)
FSDP_MIN_NUM_PARAMS default changed from 1e8 to 100000000 (no floats allowed)
2023-04-28 12:35:07 -04:00
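The reason in one line: the value travels through an environment variable and is parsed as an integer, so scientific notation is rejected.

```python
print(int("100000000"))  # 100000000
try:
    int("1e8")
except ValueError as err:
    print(err)  # invalid literal for int() with base 10: '1e8'
```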
2708c1ae31 fix: typing issues, and replace deprecated python typing (Optional, Union) to | (#1363) 2023-04-27 10:50:53 -04:00
e30034ed07 Better check for packages availability (#1356)
* Better check for packages availability

* lint
2023-04-26 08:46:16 -04:00
78bf8bcb21 fix bnb slow test (#1355) 2023-04-25 13:30:37 +02:00
57f2cf5fa7 using deepspeed.comm for distrbiuted init (#1352) 2023-04-25 09:37:16 +05:30
e06e7b35e7 Support FP8 mixed precision training for Ada Lovelace GPUs (#1348)
* Support FP8 mixed training for Ada Lovelace GPUs

* Black format

* Updating error message
2023-04-24 13:01:12 -04:00
5651521833 Pop more backend options (#1342)
* Fixup more args

* Consistency
2023-04-20 11:41:24 -04:00
ba0ee8a54d only update progress bar when done with tensor (#1341) 2023-04-20 08:57:44 -04:00
c2a162932a Fix nested context manager for main_process_first() (#1304)
* Fix nested context manager for main_process_first()

* Fix test for main_process_first()

* Improve test for main_process_first()

* Fix formatting

* Fix test with single process
2023-04-20 06:38:12 -04:00
c29c3c5e70 Rm unused amp check (#1340) 2023-04-19 14:33:37 -04:00
945085edb3 Temp skip test (#1339) 2023-04-19 14:25:58 -04:00
70388fa44e Verbosity, Progress Bar for Loading (#1329)
* added progress bar to tensor loader, and allocation info when verbose

* align coding style with norms
2023-04-19 09:21:02 -04:00
2fee0c15fd v0.19.0.dev0 2023-04-18 11:00:52 -04:00
c05ed13fc9 Fix clearning of memory (#1332) 2023-04-18 10:53:32 -04:00
5e6351502a Remove repetitive devices in load_state_dict() (#1321)
Previously devices() was a list containing duplicate entries. This
changes it into a set.

This significantly speeds safetensors loading when the device map is
long, as the safetensors loop loads each weight entry for each device
entry.

Co-authored-by: John Doe <john.doe@example.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-04-17 15:57:07 -04:00
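The gist of the change, sketched on a toy device map:

```python
device_map = {"encoder": 0, "decoder": 0, "lm_head": "cpu"}

devices_as_list = list(device_map.values())  # [0, 0, 'cpu'] -> the loading loop visits device 0 twice
devices_as_set = set(device_map.values())    # {0, 'cpu'}    -> each device is visited exactly once
```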
ee0c587182 ensure module prefixes only match that module (#1319)
Co-authored-by: John Doe <john.doe@example.com>
2023-04-17 15:52:35 -04:00
43e7229a1a Add test flag and import check for dynamo (#1322)
* Add is_dynamo_available + marker

* Use min_torch_version instead
2023-04-17 13:58:53 -04:00
8b96515ed2 Upgrade torch version on main tests (#1323)
* Upgrade torch version on main tests'

* Also in docker
2023-04-17 13:52:20 -04:00
9d9ea62785 Ensure that dynamo is compatible with mixed precision (#1318)
* Fixed

* Use args kwargs
2023-04-17 13:10:39 -04:00
2106e87d58 offload the previous model hook before the current module is moved to the execution device (#1315) 2023-04-14 21:24:59 -04:00
40980e8fe8 Default to nccl (#1314) 2023-04-14 10:18:37 -04:00
f2f810c536 Allow xpu backend (#1313)
* Allow xpu set

* Use in dataclass
2023-04-13 15:23:48 -04:00
0a9403f308 Bug fix in setattr (#1312) 2023-04-13 07:09:27 -04:00
75a693c9b4 Simplify MPS implementation (#1308)
* Simplify MPS implementation

* Quality

* Update src/accelerate/state.py

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

---------

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2023-04-12 08:54:44 -04:00
55691b14c2 add usage guide for ipex plugin (#1270)
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2023-04-07 08:23:12 -04:00
b757b62325 Set the state device dependant to Accelerator on multigpu (#1220)
* Set the state device dependant to Accelerator on multigpu
2023-04-06 13:59:59 -04:00
15dbf9722b fix for load_checkpoint_and_dispatch(device_map=None) (#1297)
The `load_checkpoint_and_dispatch` method has `device_map: Optional[Union[str, Dict[str, Union[int, str, torch.device]]]] = None,`

But if you pass `device_map=None` you get an error:

```
accelerate/big_modeling.py", line 477, in load_checkpoint_and_dispatch
    if offload_state_dict is None and "disk" in device_map.values():
AttributeError: 'NoneType' object has no attribute 'values'
```
2023-04-06 12:55:37 -04:00
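After the fix, a call like the following (model and checkpoint path are placeholders) should work with the default `device_map=None`:

```python
from accelerate import init_empty_weights, load_checkpoint_and_dispatch
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("facebook/opt-350m")
with init_empty_weights():
    model = AutoModelForCausalLM.from_config(config)

model = load_checkpoint_and_dispatch(model, "path/to/checkpoint", device_map=None)
```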
419ecf38af Make note about grad accum and prec (#1296) 2023-04-06 11:55:19 -04:00
3cb9d5fd9c Raise better error on notebook_launcher (#1293)
* Raise better error

* Better err

* Move import
2023-04-04 14:42:29 -04:00
f1298b143e fix bnb slow test (#1292) 2023-04-04 20:02:03 +02:00
07ad358f2d Check for dtype attr (#1288) 2023-04-03 16:57:46 -04:00
211707857d Expound error on recursively_apply (#1286)
* Expound

* Adjust test
2023-04-03 14:07:32 -04:00
e57d5d0eae Raise more explicit error when transformer_engine isn't installed (#1287)
* Raise err for unsupported fp8

* Change hardware spec

* Rm hardware part since we don't check it

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Style

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-04-03 13:40:28 -04:00
92d072043e Fix TypeError bug in honor_type (#1285)
* Use is_namedtuple
2023-04-03 12:23:12 -04:00
3d1a0f7e98 fix attribute error in DataloaderShared (#1278)
When running on a single GPU, the `batch_sampler` of `DataLoaderShared` is a `torch.utils.data.sampler.BatchSampler` object instead of a `DataSamplerShared` object, so it does not contain the attributes needed to calculate `total_batch_size`.
2023-04-03 09:44:59 -04:00
8b3e30887a Minor fix whitespace colon (#1272)
More readability
2023-04-03 09:42:56 -04:00
3e304c4a1a Update quicktour.mdx (#1273) 2023-04-03 09:42:48 -04:00
1c102f23cc Missing fp8 (#1284) 2023-04-03 09:42:21 -04:00
4c0d5a46ba Raise import err (#1283) 2023-04-03 09:37:17 -04:00
d0c17d707f Fix reduce operation (#1268)
Co-authored-by: amax <amax@admin.cluster.local>
2023-03-31 09:24:36 -04:00
b41d8d8228 Change error raised to ValueError (#1267) 2023-03-30 10:37:08 -04:00
3a6db664c7 Update bug-report.yml (#1264) 2023-03-30 09:17:58 -04:00
166520feea ipex intel extension for pytorch integration (#1255)
* ipex intel extension for pytorch integration

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>

Co-authored-by: jianan-gu <jianan.gu@intel.com>

Co-authored-by: Wang, Yi A <yi.a.wang@intel.com>

* fix test error

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

* fix the review comment and add testcase

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

---------

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2023-03-30 09:08:17 -04:00
663f5120c2 Check attribute 'overflow' exists in optimizer. (#1259)
* Check attribute 'overflow' exists in optimizer.

* Fix code formatting. ;)
2023-03-28 09:26:17 -04:00
23ac55fcab [core] Add Quantization support for dispatch_model (#1237)
* add quantization support for `dispatch_model`

* fix multi-gpu

* more checks

* fix bias issue

* Update src/accelerate/utils/modeling.py

Co-authored-by: Andrei Panferov <andrei@BlackSamorez.ru>

* make style

* add tests

* left some todos

---------

Co-authored-by: Andrei Panferov <andrei@BlackSamorez.ru>
2023-03-27 15:33:52 -04:00
93951ce516 handle missing deepspeed config (#1251) 2023-03-24 16:10:12 -04:00
ae86a00be0 raise error when a dataloader has None as batch_size when using DS (#1250) 2023-03-24 21:15:23 +05:30
532da3e342 Fix pypi image (#1249) 2023-03-24 11:34:36 -04:00
a826e4441d Handle multiple tied parameters (#1241)
* Handle multiple tied parameters

* Add tests

* Ensure backward compatibility with Transformers

* Update src/accelerate/utils/modeling.py

Co-authored-by: Lysandre Debut <lysandre.debut@reseau.eseo.fr>

* Gate test requiring Transformers

---------

Co-authored-by: Lysandre Debut <lysandre.debut@reseau.eseo.fr>
2023-03-24 09:53:29 -04:00
1fe27e7c95 Hardware Auto-Setup Example/Tutorial for Distributed Launch (#1227)
* add self hosted hardware example

add multi gpu launch script

add auto setup hardware docs

remove an example

tiny fixes

* add colab link

* style

* update readme, remove docs page
2023-03-24 09:46:29 -04:00
c1a6c209df Change multinode to multigpu (#1247) 2023-03-24 09:40:21 -04:00
8ebd6ab2ee backfill ds plugin attributes when using ds_config (#1235)
* backfill ds plugin attributes when using ds_config

* add test

* refactoring code
2023-03-23 21:28:02 +05:30
ea9b85477d remove empty dicts while saving accelerate config (#1236) 2023-03-23 19:14:21 +05:30
420ff21c3b extensions has been removed and replaced by customizations (#1075)
Co-authored-by: Dennis Bappert <bappert@outlook.com>
2023-03-23 09:15:23 -04:00
b1b3312749 Make grad accum steps mutable on the Accelerator object (#1233)
* Make grad accum steps mutable

* Reset state
2023-03-22 17:44:31 -04:00
6e4e870203 add additional check before deleting env variable (#1229) 2023-03-22 15:03:18 -04:00
a3065e1842 Silence dynamo_backend (#1226) 2023-03-22 11:34:08 -04:00
4eaf36e1c4 docs: add finetuner to ppl who use accelerate (#1224) 2023-03-22 09:08:21 -04:00
e7bb060c0e Fix get_logger kwarg documentation issue (#1222) 2023-03-22 08:05:00 -04:00
a15d307426 Fix bug in loading launch config (#1218)
* Fix bug in loading launch config
2023-03-20 10:20:09 -04:00
7e7f3445aa FIx TPU gradient state (#1219) 2023-03-20 09:56:07 -04:00
10c674633d ds offload optim fix to use CPUAdam (#1208)
* ds offload optim fix to use CPUAdam

* fix
2023-03-20 19:21:39 +05:30
82c2665cd6 Fix example in accumulate method (#1211) 2023-03-18 21:00:11 -04:00
2930cac698 Fix typo in TPU config (#1202) 2023-03-18 09:42:56 -04:00
901ab69a16 Better error message when using multi-GPU and Accelerate on torch <1.9.1 (#1203)
* Better err

* Split
2023-03-16 11:45:09 -04:00
780e4aa32a Fix tied weights load (#1204)
* Retie weight after loading checkpoint

* Adapt doc
2023-03-16 11:29:11 -04:00
e4620984f8 Make the Scheduler adjust the steps taken relative to the gradient accumulation steps (#1187)
* Make scheduler actually adjust the length
2023-03-15 12:16:12 -04:00
017a98c0e9 Fixup --fsdp (#1198) 2023-03-15 10:34:13 -04:00
d1aa558119 [Accelerator] We should not call to on modules that wrap accelerate-loaded models (#1172)
* add v1

* fix docstring
2023-03-15 08:28:28 +01:00
41479fe483 Set drop last to ensure modulo16 restriction for fp8 (#1189)
* set drop last to ensure modulo16 restriction for fp8

* fix quality

* Use all eval samples for non-FP8 case
2023-03-14 14:35:02 -04:00
eac5d13c7b Only convert linear layers with weights multiple of 16 (#1188)
* Only convert linear layers with weights multiple of 16

* Simpler test
2023-03-13 17:03:29 -04:00
b228136cae add use_orig_params to FullyShardedDataParallelPlugin (#1184)
* add `use_orig_params` to FullyShardedDataParallelPlugin

* fix 🐛
2023-03-14 00:20:30 +05:30
90deb748c6 Add documentation about PyTorch FSDP state dict behavior (#1181) 2023-03-13 10:53:56 -04:00
d942708745 Support special mapping of dtypes when preparing device map (#1179) 2023-03-13 10:48:31 -04:00
3783180844 fixed typo in launch.py tpu_pod_launcher (#1180) 2023-03-10 18:36:52 -05:00
ea836f3057 Add repr to AlignHook for easier debugging. (#1177) 2023-03-10 14:35:11 -05:00
a4c9476204 Run accelerate_test in cli (#1176)
* Run accelerate_test in cli

* Make it run on more than one process for gather check
2023-03-10 10:28:42 -05:00
3ca8c9a997 Fix CPU error always being raised (#1175)
* Save state

* Revert to old behavior

* Fix failing test/update

* Remove duplicate test
2023-03-10 10:22:26 -05:00
2f83b1afef Fix accelerate test with new config_file errors (#1169) 2023-03-09 11:56:42 -05:00
b0591c665c Fix backward compatibility in configs wrt dynamo backend (#1168) 2023-03-09 11:39:22 -05:00
d9871c0f87 v0.18.0.dev0 2023-03-09 11:18:26 -05:00
abc2beb423 Remove outdated command directions and use in tests (#1166)
* Get rid of launch in docs

* Run instead of Launch

* Proper ddp prefix

* Include note about older torch versions
2023-03-08 14:37:46 -05:00
8749b4ece4 Fix what files get deleted through total_limit (#1165)
* Use lambda func to sort the keys

* Use inner instead

* With more explicit regex

* Regression check

* Better check that uses multiple numbers
2023-03-08 12:34:22 -05:00
4a3eaee6be Document skip_first_batches in the checkpoint usage guides (#1164)
* Include skip_first_batches

* Repeated statements

* Middle of an epoch
2023-03-08 12:17:30 -05:00
3533e2b0b1 [Accelerator] Fix issue with 8bit models (#1155)
* fix 8bit models on `accelerate`

* add bnb as dependency

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* fix

* skip a test

* make style

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-03-08 14:51:25 +01:00
3e0ceac79f Attempt to fix import error when PyTorch is build without torch.distributed module (#1108)
* Attempt to fix importing invalid `torch.distributed.ReduceOp` when torch is built without distributed support.

* Style.

* Move `torch.distributed` logic detection to `imports.py` according to @muellerzr comments

* Style.

* Update wording

* Remove raising exceptions in the case of a non-distributed setup, simply don't import the ReduceOp in this case.
2023-03-08 08:49:45 -05:00
03b617b674 Let GradientState know active dataloaders and reset the remainder (#1162) 2023-03-07 14:46:05 -05:00
840bb1aeda update support for torch dynamo compile (#1150)
* update support for torch dynamo compile

* fix tests and backward compatibility

* fix tests

* Update config_args.py

* Update config_args.py

* fix 🐛

* fix 🐛

* fix bug

* fix 🐛

* bug fix

* 😅

* Update config_utils.py

* 😅

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/accelerate/accelerator.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* resolving comments

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-03-07 22:05:14 +05:30
1bfde6b963 Fp8 integration (#1086)
* Draft of FP8 support

* Missing import

* Fix names

* Conversion is inplace

* Enable fp8 in examples

* Customization point for Recipe

* Auto-enable FP8 depending on compute capability

* Fix typo

* Put back mixed precision arg

* Add debug script

* Add more tests in debug

* Add more stuff to debug

* Don't forget train

* Put the train in the right place

* Add options for selective conversion

* Fix typo

* Properly recurse

* Add more debug utils

* Typo and init

* Last choice

* More fixes

* More options in example

* Remove debug scripts

* Clean up debug and new names

* Add torch.no_grad for conversion

* Optimizer is deconnected from model?

* Re-attach model parameters to optimizer

* Fix extract

* Style

* Cleanup post-rebase

* Deal with padding

* fix examples

* Update src/accelerate/accelerator.py

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>

* Address comments

---------

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
2023-03-07 09:10:10 -05:00
3482495bb5 📝 add a couple more trackers to the docs (#1158) 2023-03-06 19:06:56 -05:00
947b2a88a9 Load custom state to cpu (#1156)
The current implementation loads custom states to GPUs, leading to OOM. I add `map_location="cpu"` to the `torch.load` function, which is similar to the strategy in `load_accelerator_state`.
2023-03-06 13:15:21 -05:00
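The fix in miniature; the file name follows Accelerate's custom-state naming and is illustrative:

```python
import torch

# Load onto CPU first so rank-local custom states do not all pile up on GPU 0.
state = torch.load("custom_checkpoint_0.pkl", map_location="cpu")
```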
cac1ed41eb Solve arrow keys being environment dependant for accelerate config 2023-03-06 10:09:24 -05:00
9dc5b349ea [Safetensors] Relax missing metadata constraint (#1151)
* [Safetensors] Relax missing metadata constraint

* correcct

* char limit
2023-03-06 16:01:35 +01:00
0aae1e93f4 Include a note in the gradient synchronization docs on "what can go wrong" and show the timings (#1153)
* Include timing results

* Don't include tilde for accelerator

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-03-06 10:00:43 -05:00
78151f87a4 Fixed typos in notebook (#1146)
* Bad cut for the eval_split

* Fixed typo.
2023-03-03 14:30:53 -05:00
853823d0ae FSDP enhancements and fixes (#1145)
* fsdp version update

* fsdp fixes

* update accelerate config
2023-03-03 19:19:48 +05:30
77ae51a050 fix partial state (#1144)
* fix partial state

* fix failing tests
2023-03-03 19:03:24 +05:30
ad9cf788b1 Fix notebook_launcher (#1141)
* Fix initialization on decorator for the Accelerator
2023-03-02 12:08:32 -05:00
5f9cea4ce9 fsdp bf16 enable autocast (#1125) 2023-03-02 18:59:19 +05:30
96ffd349f3 fix lr scheduler issue (#1140)
* fix lr scheduler issue

* Update src/accelerate/accelerator.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-03-02 18:41:46 +05:30
d88bbbd0e2 fix ds dist init kwargs issue (#1138)
* fix ds dist init kwargs issue

* fix
2023-03-02 18:35:16 +05:30
075b5d615d deepspeed dataloader prepare fix (#1126) 2023-03-02 18:34:35 +05:30
9b5877d1b6 Fix multinode with GPU ids when each node has 1 (#1127)
* Fix multinode

* Assert

* Reverse logic

* Use <= and not "not"

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* All on a single statement

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-03-01 14:02:17 -05:00
586941d107 Expand warning and grab all GPUs available by default (#1134)
* Use all GPUs by default

* Warn and include multi_gpu pull by default
2023-03-01 13:50:27 -05:00
e1b84bf503 Add tee and role to launch (#1132) 2023-03-01 12:37:16 -05:00
b2ea1c7b4f [Big model loading] Correct GPU only loading (#1121)
* [Big model loading] Correct GPU only loading

* Update src/accelerate/utils/modeling.py

* make style

* Update src/accelerate/utils/modeling.py

* make style 2

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-03-01 16:22:06 +01:00
bdd93cd933 Refactor launch for greater extensibility (#1123)
* Refactor `launch` for greater extensibility

Signed-off-by: Antoni Baum <antoni.baum@protonmail.com>

* Fix

Signed-off-by: Antoni Baum <antoni.baum@protonmail.com>

* Fix import

Signed-off-by: Antoni Baum <antoni.baum@protonmail.com>

---------

Signed-off-by: Antoni Baum <antoni.baum@protonmail.com>
2023-03-01 05:43:32 -05:00
639c1da8df Move dynamo.optimize to the end of model preparation (#1128) 2023-02-28 14:11:38 -05:00
fdb1402c7d Deep merge SageMaker additional_args, allowing more flexible configuration and env variable support (#1113)
* deep merge additional args

* added trailing line

* `make style`
2023-02-28 09:55:03 -05:00
0b3f219881 Add test for ops and fix reduce (#1122)
* Add test for ops and fix reduce

* Adjust testers

* Try w/o shape checK

* Passthrough?

* Make into float

* Clean

* Undo all_gather for now
2023-02-28 09:18:09 -05:00
ade4f1db92 Actually raise if exception (#1124) 2023-02-28 07:54:32 -05:00
907a86d145 TensorBoardTracker: wrong arg def (#1111) 2023-02-25 00:57:49 -08:00
f054799e7f Attempt to unwrap tracker. (#1109) 2023-02-24 15:47:54 +01:00
d4f5fd694e Update performance.mdx (#1107)
Correct import location
2023-02-23 09:05:21 -05:00
38fd30e764 Tracker rewrite and lazy process checker (#1079)
* Refactor implementation to use PartialState and adjust deprecation tests

* Utilize multi-process in Accelerator

* Use state

* Lazy PartialState

* Name, plus keep on_main_process for accelerator

* Handle if the tracker was made on main-process-only properly

* Missing variable names, oops

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>

* Clean

* Logs

* Main process

* Clean

---------

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
2023-02-22 07:48:55 -05:00
03754c1e02 Update README.md (#1100) 2023-02-21 21:21:18 -05:00
ea36b7dceb add multi_cpu support to reduce (#1094) 2023-02-20 09:25:55 +01:00
bc9153e465 adds missing "lfs" in pull (#1091) 2023-02-17 17:40:20 +01:00
89b7e36bf6 Fix config (#1090)
* Fix config

* Proper fix
2023-02-17 10:42:24 -05:00
b34db0b987 Added SageMaker local mode config section (#1084) 2023-02-15 14:18:43 -05:00
9875714610 Update complete_cv_example.py (#1082)
minimal typo :)
2023-02-15 13:36:18 -05:00
4b47f190a9 Fix tpu_cluster arg (#1081) 2023-02-15 10:43:04 -05:00
17bc8a1103 Allow custom SageMaker Estimator arguments (#1080)
* Added additional_args to SageMaker Config

* temporary fix #1078

* temporary fix #1078 properly

* Extended SageMaker config

* Revert " temporary fix #1078 properly"

This reverts commit 81c683711d5a94ba9327686563bb55d3e8801555.

* Revert "temporary fix #1078"

This reverts commit c8a4b0973aee6ffd4612a69bb1ccd079b3dbb9ce.

* Extended documentation to reflect manual configuration changes.

* Fixed a small typo
2023-02-15 10:39:08 -05:00
279475307a SageMaker image_uri is now optional (#1077) 2023-02-15 09:31:47 -05:00
9c2e704791 Add error if passed --config_file does not exist (#1074) 2023-02-15 09:10:20 -05:00
4e1816d7ec Refactor state and make PartialState first class citizen (#1071)
* Refactor into State and expose

* Make PartialState mainstream!
2023-02-14 14:50:06 -05:00
5a2cb3b5e3 Fix/implement process-execution decorators on the Accelerator (#1070) 2023-02-14 13:36:33 -05:00
04103090cc update fsdp docs and removing deepspeed version pinning (#1059)
* update fsdp docs and removing deepspeed version pinning

* address comments
2023-02-14 16:39:47 +05:30
ca615f879f Swap utils over to use PartialState (#1065) 2023-02-13 16:08:56 -05:00
2694a6c63a Update integrations (#1063) 2023-02-13 13:28:55 -05:00
b4388b45dc Try with this (#1062) 2023-02-13 10:58:24 -05:00
69e4c3c54d Flag for deprecation (#1061) 2023-02-13 10:38:33 -05:00
68d809256c Introduce PartialState (#1055)
* Try again

* Try off multi-gpu

* This is a test

* Finished now

* PartialState

* Update logger to use new API

* backend

* Working tests

* Working again!

* Raise err instead

* Better error

* Update src/accelerate/state.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-02-13 10:23:39 -05:00
bd091a605b deepspeed hidden_size auto value default fixes (#1060) 2023-02-13 20:23:40 +05:30
cb993d7d8c Fix args by adding in the defaults (#1053) 2023-02-09 15:00:57 -05:00
028b5816c8 Use create_task (#1052) 2023-02-09 14:44:09 -05:00
8951195a15 Introduce TPU Pod launching to accelerate launch (#1049)
* Working version -- run one more test

* commands

* Undo commands

* cli

* Undo config args

* cluster

* Command

* use_alpha

* Fully working now!

* Fix log

* Wrong alpha storing
2023-02-09 13:02:14 -05:00
60460ae1af Fix cpu_offload_with_hook code snippet (#1047)
* Fix cpu_offload_with_hook code snippet

* Make model explicit for clarity.
2023-02-08 09:23:13 -05:00
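The snippet these commits touch follows roughly this pattern, with two placeholder models chained through their hooks (a CUDA device is assumed, and the hook behavior in the comments is per the feature's docs):

```python
import torch
from accelerate import cpu_offload_with_hook

model_1 = torch.nn.Linear(16, 16)
model_2 = torch.nn.Linear(16, 16)

# Each model is moved to the GPU only for its forward pass; chaining the hooks
# offloads model_1 back to CPU once model_2 starts running.
model_1, hook_1 = cpu_offload_with_hook(model_1, execution_device="cuda:0")
model_2, hook_2 = cpu_offload_with_hook(model_2, execution_device="cuda:0", prev_module_hook=hook_1)

out = model_2(model_1(torch.randn(4, 16)))
hook_2.offload()  # move everything back to CPU when done
```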
978dfc38ea Load tensors directly on device (#1028)
* Load tensors directly on device

* Update src/accelerate/utils/modeling.py

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

---------

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2023-02-07 13:48:28 -05:00
5002e56704 Update quality tools to 2023 (#1046)
* Setup 2023 tooling for quality

* Result of styling

* Simplify inits and remove isort and flake8 from doc

* Puts back isort skip flag
2023-02-07 13:34:05 -05:00
71e81bab00 Add cpu_offload_with_hook (#1045)
* Add cpu offload with hook

* Style

* add to init

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Add documentation

* Add tests

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-02-07 13:09:27 -05:00
76c41f0df7 Make sure direct parameters are properly set on device (#1043) 2023-02-06 13:36:18 -05:00
2b981c0942 Add daily slack notifier for nightlies (#1042)
* Update log_reports to send to slack
2023-02-06 10:44:58 -05:00
a60640d4fa Refactor process executors to be in AcceleratorState (#1039)
* Start of refactor

* Fix yield

* Print

* Add test
2023-02-06 10:44:33 -05:00
4be70838e7 Pass keywords arguments of backward function deeper to DeepSpeed (#1037) 2023-02-03 10:39:19 -05:00
e89131c92d do not scale gradient in bf16 mode (#1036) 2023-02-02 14:01:57 -05:00
4e5cc0c6b9 fix: links to gradient synchronization (#1035) 2023-02-02 11:12:30 -05:00
587eea9bb5 enabling mps device by default and removing related config (#1030)
* enabling `mps` device by default and removing related config

* address comments

* fix tests
2023-02-01 23:27:15 +05:30
57cbcab45b Deepspeed param check (#1015)
* Deepspeed param check

On line 146, in set_module_tensor_to_device(), adding a check for DeepSpeed parameters in the kwargs object and not passing them on solved the error I was receiving about the DeepSpeed parameters not being recognized by torch.nn.Parameter.__new__(). With my admittedly limited knowledge, it seemed that the kwargs do not need to be passed when using DeepSpeed + Accelerate, and this bears out: the model loaded fine with ZeRO-3 CPU parameter and buffer offload on a single-GPU machine, and produced perfectly comprehensible inference outputs (slowly) on the GPU.

The error, in my case, was occurring here as called from accelerator's dispatch_model().

Please let me know if my thinking on this is in any way wrong! This fix worked for me.

 `transformers` version: 4.26.0
- Platform: Linux-5.15.83.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes and no (zero-3 on single machine)

* 146-150 check for Int8 arguments

146-150 check for Int8 arguments. If found, send the args as well as the value.

* Used make style on branch

* Used make style with correct versions of black and flake8 on branch
2023-02-01 11:19:01 -05:00
c0caa068ba v0.17.0.dev0 2023-01-31 12:15:08 -05:00
b51b78ffb7 It was 0.16.0.dev0 all along... 2023-01-31 11:07:26 -05:00
67dbae52be sagemaker launcher fixes (#1031)
* sagemaker launcher fixes

* fixes

* addressing comments
2023-01-31 21:17:16 +05:30
d0df263b09 With example (#1027) 2023-01-30 12:57:24 -05:00
a5026706a7 More improvements to docstrings + examples (#1010)
* Start of examples
2023-01-30 12:34:26 -05:00
20e4973903 Start of adding examples (#1001)
* Start of examples

* Missing >

* Fix docstring nit

* Add comment on main_process_first

* Make comment on randomness

* first

* Backprop issues with examples into here
2023-01-30 12:33:47 -05:00
1d9bcdd39d Efficiently skip batches in a dataloader (#1002)
* Efficiently skip batches in a dataloader

* Add method in Accelerator and example

* Apply suggestions from code review

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

* Rename point of access

* Add point of access to init

* Add tests

* Don't forget to include fixes silly!

* Adapt examples

* Fix quality

* Forgot one

* fix method name

* Fix DataLoaderShard reinstantiation

* Fix for epoch checkpointing

---------

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2023-01-30 11:56:59 -05:00
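A small sketch of resuming mid-epoch with the batch-skipping helper from #1002 above, assuming the access point added to `Accelerator` is `skip_first_batches(dataloader, num_batches)`.
```python
# Sketch: skip the batches already consumed before a checkpoint was taken.
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()
dataset = TensorDataset(torch.arange(32).float().unsqueeze(1))
dataloader = accelerator.prepare(DataLoader(dataset, batch_size=4))

resumed = accelerator.skip_first_batches(dataloader, num_batches=3)
for step, (batch,) in enumerate(resumed):
    print(step, batch.flatten().tolist())  # iteration starts at the 4th batch
```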
ba856524f6 Fix slow test by keeping tied weights on the same GPU (#1026) 2023-01-30 11:13:39 -05:00
332326c833 Change default for keep_fp32_wrapper (#1025)
* Change default

* Fix tests
2023-01-30 10:18:40 -05:00
e6d5776ad8 Light vs dark theme based on pick (#1023) 2023-01-30 09:35:37 -05:00
fe709a2490 Fix env var (#1024) 2023-01-30 09:33:19 -05:00
ac970148cd Include steppage in performance docs (#1013)
* Include steppage in performance docs

* New explanation
2023-01-27 12:02:47 -05:00
f0f348921d Don't force mixed precision as no in examples (#1018) 2023-01-27 10:12:27 -05:00
b37680bd66 Fix import of LrScheduler (#1017) 2023-01-27 08:50:33 -05:00
5286d843c8 Add in code exploration tool to docs (#1014)
* Add in code exploration tool to docs

* Update index to hotlink over to the explore

* With 100%

* Just do 750 for now

* Safe height

* Let's try with this

* Comment out original

* Revert

* Add in a note on the docs and remove a secondary code snippet

* Use 1550 for now so it fully fits

* 1600*
2023-01-27 07:32:34 -05:00
22bf677ceb Allow the torch device to be set with an env var (#1009)
* Allow the torch device to be set with an env var

Signed-off-by: Antoni Baum <antoni.baum@protonmail.com>

* Fix

Signed-off-by: Antoni Baum <antoni.baum@protonmail.com>

* Refactor

Signed-off-by: Antoni Baum <antoni.baum@protonmail.com>

* Use self.device

Signed-off-by: Antoni Baum <antoni.baum@protonmail.com>

* Refactor

Signed-off-by: Antoni Baum <antoni.baum@protonmail.com>

* Add test

* Add test

* Fix test

* Tweak comment

* Fix test

Signed-off-by: Antoni Baum <antoni.baum@protonmail.com>
2023-01-26 16:01:36 -05:00
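A hypothetical illustration of #1009: overriding the device Accelerate selects through the environment. The variable name `ACCELERATE_TORCH_DEVICE` is an assumption based on the library's `ACCELERATE_` prefix convention; verify it against your version.
```python
# Assumption: the env var introduced in #1009 is named ACCELERATE_TORCH_DEVICE.
import os

os.environ["ACCELERATE_TORCH_DEVICE"] = "cpu"  # e.g. "cuda:1" on a multi-GPU box

from accelerate import Accelerator

accelerator = Accelerator()
print(accelerator.device)  # expected to honor the env var above
```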
bd82bec78e Fix test introduced in PR and introduce AcceleratorTestCase (#1016)
* Fix test, missing reset

* tearDown

* Refactor and inherit to avoid future errors
2023-01-26 15:35:21 -05:00
3825e478b2 Saving and loading state hooks (#991)
* [RFC] Possible design for loading and saving state hooks design

* fix bug

* add tests & docstring

* improve docs

* make style

* Update src/accelerate/accelerator.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-01-26 20:07:21 +01:00
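A hedged sketch of the state hooks from #991: the method names `register_save_state_pre_hook` / `register_load_state_pre_hook` and the hook signatures below are assumptions drawn from downstream usage, so adjust them to your accelerate version.
```python
# Sketch: customize what save_state()/load_state() do via pre-hooks (assumed names).
import torch
from accelerate import Accelerator

accelerator = Accelerator()
model = accelerator.prepare(torch.nn.Linear(4, 4))

def save_model_hook(models, weights, output_dir):
    print(f"saving {len(models)} model(s) to {output_dir}")

def load_model_hook(models, input_dir):
    print(f"loading {len(models)} model(s) from {input_dir}")

accelerator.register_save_state_pre_hook(save_model_hook)
accelerator.register_load_state_pre_hook(load_model_hook)

accelerator.save_state("ckpt")
accelerator.load_state("ckpt")
```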
6c3f6792e9 Maintain accumulation steps (#1011) 2023-01-26 06:33:50 -05:00
5858ac62b4 Add styleguide (#1007)
* Add styleguide

* Uniformity

* Accelerate specific
2023-01-25 14:28:24 -05:00
5b0a03d1fb Update toctree (#1008) 2023-01-25 13:52:25 -05:00
c3ea690d48 improve deepspeed notes (#1003)
* improve deepspeed notes

* style
2023-01-23 20:45:45 -08:00
ae8c4875dc Fix parameters tying in dispatch_model (#1000)
* Fix parameters tying in dispatch_model

* Add test
2023-01-23 13:10:30 -05:00
55a528487d Fix scheduler incorrect steps when gradient accumulation enabled (#999)
* add additional check for optimizer step

* rewrite scheduler w/ grad accumulation test
2023-01-23 13:06:45 -05:00
bd1d5fad2f adding support for kwargs in load_state (#989)
* adding support for kwargs in `load_state`

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* quality 

* addressing comments

1. renaming variable to make it explicit
2. adding kwargs to `save_state` for parity

Co-Authored-By: Zachary Mueller <7831895+muellerzr@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Zachary Mueller <7831895+muellerzr@users.noreply.github.com>
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
2023-01-23 20:27:35 +05:30
b22f088ff6 Add new release_memory util (#990)
* Add new release_memory util

* Req cuda
2023-01-19 13:01:24 -05:00
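A short sketch of the `release_memory` utility from #990: drop references and clear the device cache in one call; reassigning from the return value is assumed to be the intended pattern.
```python
# Sketch: free large objects and empty the accelerator cache in one call.
import torch
from accelerate.utils import release_memory

model = torch.nn.Linear(512, 512)
optimizer = torch.optim.AdamW(model.parameters())

model, optimizer = release_memory(model, optimizer)
print(model, optimizer)  # both names now point to None
```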
f3f2f9e4b5 in sync with trfs, removing style_doc utils and using doc-builder instead (#988) 2023-01-19 19:24:44 +05:30
7e4136164e Fix test for converting tensor to proper dtype (#983)
* Fix test for converting tensor to proper dtype

* Adds a test
2023-01-18 11:21:45 -05:00
5dd631e2cd Skip wandb test for now (#984) 2023-01-18 10:57:38 -05:00
0a16f37ba1 Ensure that last batch doesn't get dropped if perfectly even in gather_for_metrics (#982)
* Add test_last_batch

* Fix gather bug
2023-01-18 10:30:34 -05:00
aaa2637a5e Fix type error on line 36 (#981)
Fix type error on line 36
2023-01-18 09:38:05 -05:00
7573a8cd55 Fix tied parameters test in big model inference (#979) 2023-01-17 14:52:52 -05:00
126550126d Raise minimum version for distrib launch (#978) 2023-01-17 12:24:36 -05:00
733755c94c Update README.md (#968)
When using DeepSpeed, we must import from the accelerate package.
2023-01-12 03:18:56 +01:00
741d23301f Allowing encoded configuration for DeepSpeed (#895)
* allow-encoded-ds-config

* fix style
2023-01-11 14:32:03 +01:00
9b7ef9679f support master port when using ds multi-node launcher (#959)
* support master port when using ds multi-node launcher

* 😅
2023-01-09 23:52:00 +04:00
30a6a3435f Typo fix in src/accelerate/utils/modeling.py (#955)
Simple typo fix I happened to notice and figured I should just fix while I'm looking at it.
2023-01-07 09:58:05 +01:00
f7427c86ee Don't automatically offload buffers when loading checkpoints (#951)
* Don't automatically offload buffers when loading checkpoints

* Add test
2023-01-04 09:01:24 -05:00
d0bf459c7f Fix DeepSpeed tests (#950)
* Fix deepspeed tests

* Reset state

* With manual reset?
2023-01-03 12:49:51 -05:00
bf8fe0347b Add is_initialized method and refactor (#949)
* Add is_initialized method and refactor

* As module method
2023-01-03 10:13:44 -05:00
e60f3cab7a raise error for duplicate accelerate config values when using deepspeed_config_file (#941)
* ds config vs accelerate config checks

* add mp assertion checks and refactoring

* 😅

* minor fix

* address comments

* address comments and making doc and help clear

* 😅

* fixes

* error msg fix

* more details in error msg

* 

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* address comment

* address comment by changing cluster config

* 😅

* Update src/accelerate/utils/dataclasses.py

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

* use `accelerate launch` cmd args for `auto` filling

So far, `accelerate launch` cmd args were used for filling deepspeed plugin fields and not for setting `auto` values. This PR enables that too.

It also raises assertions when ambiguous values are passed in accelerate config file when using `deepspeed_config_file`

* fixes

* fixes and adding tests

* quality

* 😅

* refactor

* fix

* add documentation wrt improvements of DeepSpeed config

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* address comment

* address comment

* refactor

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
2022-12-31 13:42:57 +05:30
07e2e712ca Fix offload when weights are on the GPU (#945) 2022-12-28 02:43:29 -05:00
63f09f63b8 Fix tracker (#942) 2022-12-23 12:07:56 -05:00
50b8d8e8a8 fix mp related test fails (#943) 2022-12-23 22:17:13 +05:30
0ec1f24c17 fix batch size in prepare_dataloader for iterable datasets (#937)
* fix batch size

* black
2022-12-23 02:52:52 -05:00
3c5c0f9c99 add mixed_precision_type property to AcceleratorState (#935)
* add `mixed_precision_type` property to `AcceleratorState`

* address comments
2022-12-23 12:02:20 +05:30
53b8ed1e8e Fix silly typo (#939) 2022-12-22 23:14:03 +05:30
49bbf2390d ds zero-3 init context manager (#932)
* ds zero-3 init context manager

* address comment

* renaming `set_zero3_init` to `zero3_init_context_manager`
2022-12-21 10:49:35 +05:30
aa533277f6 Honor model dtype in load_checkpoint (#920)
* Honor model dtype in

* Move dtype logic to set_module_tensor_to_device
2022-12-20 02:48:18 -05:00
ca6505a6a8 ds-z3-init and prepending ds env variables with ACCELERATE_ (#928)
* ds-z3-init and prepending ds env variables with `ACCELERATE_`

* quality

* rerun checks
2022-12-17 00:48:21 +05:30
bb6ee0b7bc Support init_on_device (#926)
* Support init_on_device

* Support mps backend as well in testing
2022-12-16 13:07:39 +01:00
7889ba6b6d Specify inference (#921) 2022-12-14 09:02:13 -05:00
f002ce2ae9 Introduce project_dir and limit the number of saved checkpoints (#916)
* Working save limit

* Centralize to project_dir

* Update docs

* Fix up tests

* Maintain old version, should fix tests

* Revert logging behavior

* Fix failing test

* Automatic checkpoint naming flag

* Logging -> Logger

* Fix naming

* Remove args and make a SaveConfiguration

* logger -> logging

* save_configuration to save_config

* Good to go now, just need to update docs

* Update all the docs

* Deprecate logging_dir param

* ProjectConfiguration

* Project_config

* Fix test

* Finish renaming

* Docfix

* Clean

* Update docs/source/usage_guides/tracking.mdx

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
2022-12-13 08:29:58 -05:00
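A hedged sketch of #916: centralizing output paths and capping how many checkpoints `save_state()` keeps. The field names below (`automatic_checkpoint_naming`, `total_limit`) are assumptions based on the PR description above.
```python
# Sketch: project-level save configuration with a checkpoint retention limit.
import torch
from accelerate import Accelerator
from accelerate.utils import ProjectConfiguration

project_config = ProjectConfiguration(
    project_dir="runs/experiment_1",   # replaces the deprecated logging_dir argument
    automatic_checkpoint_naming=True,  # checkpoints/checkpoint_0, checkpoint_1, ...
    total_limit=3,                     # keep at most 3 checkpoints on disk
)
accelerator = Accelerator(project_config=project_config)
model = accelerator.prepare(torch.nn.Linear(4, 4))

for _ in range(5):
    accelerator.save_state()  # older checkpoints are pruned once the limit is hit
```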
7fd0635d46 fix accelerate test failure with cpu config (#909)
* failure occurs when testing FP16
* autocast fails to work for CPU bf16 on some GPU+CPU platforms;
no need to use the is_bf16_available logic, because native_amp already contains such logic.
2022-12-13 08:29:15 -05:00
235fdf1096 🚨🚨🚨 Act on deprecations 🚨🚨🚨 (#917)
* Act on deprecations

* Act on deprecations

* Resume from checkpoint

* Finish deprecations
2022-12-12 16:09:52 -05:00
351f89758a Fix typos accelerate -> accelerator (#915) 2022-12-12 11:11:05 -05:00
7f5e94d33b fsdp enhancements (#911)
* fsdp enhancements

* fix

* fix
2022-12-09 22:23:45 +05:30
74a8ed9e48 fix issue that amp bf16 does not work for cpu in env with cuda. (#906)
and num_cpu_threads_per_process is not reset for better performance in the CPU-only case

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2022-12-08 09:05:34 -05:00
6bd28790c2 Fix conditional (#907)
* Fix conditional

* Into one if statement
2022-12-07 09:34:58 -05:00
2359af1870 Expand sanity checks (#905)
* Expand sanity checks

* multi_cpu to cpu
2022-12-06 15:46:47 -05:00
e6b61da7ca Add usage examples (#904) 2022-12-06 15:12:43 -05:00
344bfe2713 Flag to silence subprocess.CalledProcessError in launch (#902)
* add an option to silence subprocess.CalledProcessError when running accelerate launch

* for black

* for real this time

* Add suggestion

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

* Update cli.mdx

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2022-12-06 08:47:31 -05:00
e9d15e5973 Adds a utility function to install correct version of torch XLA (#896)
* Add utility to install torch xla wheels

* Fix formatting

* Update docs and fix lint issues
2022-12-01 15:11:41 -05:00
5315290b55 Support bfloat16 in load_offloaded_weight (#892)
* Support bfloat16 in load_offloaded_weight

* Quality
2022-11-29 13:32:31 -05:00
f4eee1cf86 Better description for improper kwargs (#894)
* Better flag

* an
2022-11-29 13:24:41 -05:00
b12f503f6d Fix windows cli selector (#893)
* Still need to test on windows

* Move imports

* Somewhat working

* More if

* undo

* Try with unicode

* All done
2022-11-29 11:36:22 -05:00
58be9901b6 fix prefix issues in tests (#891)
* fix prefix issues in tests

* fix
2022-11-29 18:57:58 +05:30
13ef1c83f9 Prefix all accelerate env vars with ACCELERATE (#890)
* Rename all env vars to prefix with accelerate

* Rich

* Undo fork launch

* Fork launched

* Fix patch env

* Finish rich
2022-11-28 14:45:14 -05:00
62e5cfcbbd fixing lr scheduler for pytorch nightly (#884) 2022-11-28 21:46:20 +05:30
762ce7cc80 Allow safetensors offload (#873)
* Allow safetensors offload

* Address review comments + auto-enable fast GPU load

* Quality
2022-11-28 10:03:50 -05:00
4a447d85be fix a bug (#887) 2022-11-28 17:48:31 +05:30
e4e5611e5d Update deprecated logging warn (#881)
Use `logging.warning()` instead of the deprecated `logging.warn()`.
2022-11-22 15:14:18 -05:00
79b712559a fix fsdp state_dict_config because of PyTorch changes (#877)
* fix fsdp state_dict_config because of PyTorch changes

* fix fsdp test

* fixes and addressing comments

Co-Authored-By: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-11-21 21:22:03 +05:30
eaf7899850 fixing lr_scheduler prepare issue when using pytorch nightly (#878) 2022-11-21 21:20:31 +05:30
d2e804f69d Spring cleaning (#865)
* CLean cluster and big model

* Spring cleaning :)

* Undo much!

* Bring back the fstring!

* Parenthesis for readability
2022-11-21 09:40:59 -05:00
2df1a9328a Solve pickling issues (#872)
* Raise a pickling error if tried to save w/o unwrap
2022-11-21 09:24:41 -05:00
8bf40e5870 Even more log level refined, leave alone if not explicitly set (#871)
* Even more refined, leave alone if not explicitly set

* Leave as setLevel

* Even more explicit
2022-11-18 11:33:47 -05:00
b0165a0f77 fix failing deepspeed test (#868)
* update deepspeed error message wrt `batch_size`

Co-Authored-By: Stas Bekman <stas00@users.noreply.github.com>

* 

* fix failing deepspeed test

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
2022-11-18 19:41:04 +05:30
8a96b0bfb8 update deepspeed error message wrt batch_size (#861)
* update deepspeed error message wrt `batch_size`

Co-Authored-By: Stas Bekman <stas00@users.noreply.github.com>

* 

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
2022-11-17 20:53:19 +05:30
0efabe485e Remove mixed precision hook as part of the unwrap_model (#860)
* Mixed precision hook

* Rename

* Rm comment, need to move

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Fix doc

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-11-16 16:12:53 -05:00
75c7d935fd Switch default log to warn (#859)
* Switch default log to warn

* Fix deprecation
2022-11-16 14:17:10 -05:00
bea1e75182 Revert "Update pr docs actions (#827)" (#857)
This reverts commit 56308da519db06b830dafcda917c65a1a443c55a.
2022-11-16 12:06:01 +01:00
dd8f2054d8 Clean up, add update command (#853)
* Clean up, add update command

* Use args for all but default_config

* Call explicitly with args

* Update CLI docs
2022-11-15 17:04:49 -05:00
71660af123 Refactor Accelerate config and introduce a multi-argument CLI interface (#851)
* Improve CLI to have independent names
2022-11-15 09:33:09 -05:00
5f4ba04628 Fix complete_cv example (#848) 2022-11-15 08:56:43 -05:00
39e4a5a0f3 Fix if/else (#849) 2022-11-14 12:07:51 -05:00
0d0f2cd5a7 Fix log error and add log level to get_logger (#842)
* Fix log error and add log level

* Example in docs

* Docstring fix

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Fixes

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-11-14 09:01:29 -05:00
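A small sketch of the `get_logger` signature extended in #842: an optional `log_level` plus per-call control over which processes emit a record.
```python
# Sketch: process-aware logging with an explicit log level.
from accelerate import Accelerator
from accelerate.logging import get_logger

logger = get_logger(__name__, log_level="DEBUG")
accelerator = Accelerator()

logger.info("printed once, on the main process only")
logger.debug("printed on every process", main_process_only=False)
```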
e8e3709765 Introduce default-config command (#840)
* Add new default config command

* Include docs

* Rm arg
2022-11-11 11:16:01 -05:00
074d8d5a5a Add join_uneven_inputs context manager to Accelerator (#820)
* Add test for join context manager

* Add join_uneven_inputs context manager

* Format

* add conditional import for join

* Replace bare yield with nullcontext

* Update accelerator to maintain references to dataloaders

* add override option to join context manager

* format

* Add minimal docstring

* updates based on initial feedback

* remove launcher used for local testing from test script

* fix quality issues

* DEBUG: try resetting accelerator state to fix test

* Revert "DEBUG: try resetting accelerator state to fix test"

This reverts commit a13a56ea8e084cad72317cd451a176a2d3fa5dff.

* Reset state after accelerator tests

* Update src/accelerate/accelerator.py

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

* Warn if at least one iterable dataset seen

* remove launcher used for local test running

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2022-11-10 13:09:07 -05:00
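A hedged sketch of the `join_uneven_inputs` context manager from #820: letting DDP ranks finish an epoch with different numbers of batches without hanging on collectives. The `even_batches=False` override is assumed to be accepted by the context manager as described in the PR above.
```python
# Sketch: tolerate uneven final batches across processes during training.
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()
model = torch.nn.Linear(2, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
dataloader = DataLoader(TensorDataset(torch.randn(10, 2)), batch_size=3)
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

with accelerator.join_uneven_inputs([model], even_batches=False):
    for (batch,) in dataloader:
        loss = model(batch).sum()
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
```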
b17fb69dd6 Highlight selection with pretty colors (#839)
* Highlight with pretty colors

* Rm comment
2022-11-10 10:35:18 -05:00
ccdc2252f7 Deepspeed example should use gather_for_metrics (#821)
* Deepspeed example should use gather_for_metrics

I believe this example should be using gather_for_metrics here instead of gather.

* Update deepspeed_with_config_support.py
2022-11-10 09:41:15 -05:00
f9317f253c fix 🐛 (#836) 2022-11-10 19:38:32 +05:30
08f64896a0 Small questionnaire CLI (#830)
* Working CLI questionnaire

* Forgot space

* Finish the rest

* Rename and make all funcs/options public

* Include Brian Chao in copyright

* Working number inputs

* Fix num

* Linebreak to ease viewing

* Finish sagemaker

* Clean

* Fix mixed precision
2022-11-09 14:51:16 -05:00
74642aac95 Add support for torch dynamo (#829)
* Add torch dynamo optimizations

* More work

* Fix enum values

* Add to basic config

* fix more tests

* Apply suggestions from code review

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
2022-11-09 11:30:30 -05:00
ceffd47cdd v0.15.0.dev0 2022-11-08 14:26:26 -05:00
4ed46648e7 Isolate distrib_run (#828) 2022-11-08 11:00:08 -05:00
56308da519 Update pr docs actions (#827) 2022-11-08 10:49:25 -05:00
4855405041 adding support to return logits and generate for Megatron-LM GPT models (#819)
* adding support to return logits and generate for Megatron-LM GPT models

* addressing issue

* fix 🐛

* fixing many 🐛 and adding documentation

* remove warning

* address comments

* add docs and utilities for megatron-lm gpt generate and logits
2022-11-08 19:44:11 +05:30
cea6aaa116 Rename (#824) 2022-11-07 15:18:23 -05:00
91f8fb018b rename sklearn to proper dep (#825) 2022-11-07 15:17:26 -05:00
05d58c835f Update docs (#823) 2022-11-07 11:14:53 -05:00
874c4967d9 Rename pod-config to tpu-config + docs (#818)
* Refactor and docs

* Move file

* tests
2022-11-03 08:53:53 -04:00
dc9966df93 Update CLI docs and use mps rather than mps_device (#814)
* Update docs and use mps

* A few more deprecation warnings

* Clean

* Newlines
2022-11-02 15:34:33 -04:00
e2cd36b6cc Mlflow-tracker-v2 🔥 (#794)
* mlflow tracker class

* is_mlflow_available

* is_mlflow_available

* include mlflow dataclass

* Update src/accelerate/tracking.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/accelerate/tracking.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/accelerate/tracking.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/accelerate/tracking.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/accelerate/tracking.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/accelerate/tracking.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/accelerate/tracking.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/accelerate/tracking.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* eliminate confusing variables

* make style, quality

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-11-02 08:38:33 -07:00
6a0082de30 Act on deprecations (#813)
* Deprecations

* fp16 related warnings

* version num

* Last one

* Keep consistent with old
2022-11-02 10:38:17 -04:00
102cf00ded add recurse argument in remove_hook_from_module (#812)
* add `recurse` argument in `remove_hook_from_module`

* correct docstring

* Update src/accelerate/hooks.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/accelerate/hooks.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-11-02 10:32:28 -04:00
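A short sketch of the `recurse` flag added to `remove_hook_from_module` in #812: strip accelerate hooks from a module and, optionally, from all of its submodules.
```python
# Sketch: attach a hook to a submodule, then remove hooks recursively from the parent.
import torch
from accelerate.hooks import ModelHook, add_hook_to_module, remove_hook_from_module

model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.ReLU())
add_hook_to_module(model[0], ModelHook())

remove_hook_from_module(model, recurse=True)  # also cleans the hook on model[0]
```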
359bd1bc5f adding support to pickle and unpickle AcceleratedOptimizer (#811)
* adding support to pickle and unpickle `AcceleratedOptimizer`

* address comment

Co-Authored-By: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* add test

* fixing test

* 😅

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2022-11-02 19:43:37 +05:30
0de1644126 Refactor CLI to improve readability (#810)
* Rewrite CLI

* Comments

* remove rich

* Fix all issue

* Check better for accelerate launch and accelerate-launch

* rm aws

* Resource then paradigm

* Naming nits + make public
2022-11-02 10:04:19 -04:00
b816e258a9 Introduce a pod-config command (#802)
* Add in ability to configure pod and start CLI commands

* Further tests, add a help

* Added tests and cleaned up!

* Fix weird missing parts

* MOre tests + install accelerate with flag

* Unused pod_config_file

* Test with multiple commands

* Update src/accelerate/commands/config/cluster.py

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>

* Clarity during printing

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Make public names for readability

* Fix test expected outputs and refactor response

* Fix ref errors

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-11-01 10:00:48 -04:00
c4c444a158 Deal with optimizer.differentiable in PyTorch 1.13.0 (#803)
* Update accelerator.py

* Update src/accelerate/accelerator.py

* Update src/accelerate/accelerator.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-10-31 19:52:56 -04:00
f3129d1130 fix: add pdsh as default launcher (#800) 2022-10-31 16:02:23 -04:00
8c928057c6 Fix extraction of state dict in offload (#795) 2022-10-31 12:29:02 -04:00
8c0505d760 Fix device_map="auto" on CPU-only envs (#797) 2022-10-31 12:28:52 -04:00
16d548c358 Add even_batches keyword to Accelerator (#781)
* Add even_batches argument to prepare dataloader

* Add even_batches argument to accelerator

* Add e2e tests for even_batches

* Fix double import

* Fix variable name bug in test script

* Refactor test script to pytest format

* Apply documentation suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update BatchSampler warnings

* Fix typo

* Remove comment

* Add main driver method to even_batches tests

* Fix tests

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2022-10-31 12:16:03 -04:00
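A minimal sketch of the `even_batches` switch from #781, assuming it is exposed as an `Accelerator` constructor keyword in this version: when `False`, the last batches are not padded to even out the ranks, so the final batch may be shorter.
```python
# Sketch: opt out of duplicating samples to even out the last batch (assumed kwarg).
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator(even_batches=False)
dataloader = accelerator.prepare(DataLoader(TensorDataset(torch.arange(10.0)), batch_size=4))
for (batch,) in dataloader:
    print(batch.shape)  # the trailing batch is allowed to be smaller
```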
415b73853a Consider top-level buffers when computing infer_auto_device_map (#792)
* add `buffers` support when computing `infer_auto_device_map`

* should fix broken test

* fix broken test

* simpler solution

- use `model.named_buffers(recurse=False)` instead
Co-authored-by: Sylvain Gugger <sgugger@users.noreply.github.com>

* forward contrib credits from suggestion

Co-authored-by: sgugger <sgugger@users.noreply.github.com>
2022-10-27 23:14:17 +02:00
a5525406fc separate dataloader generator from sampler generator (#789)
* separate dataloader and sampler generator

* resolving comments

Co-Authored-By: YouJiacheng <1503679330@qq.com>
Co-Authored-By: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* minor comment resolution

Co-authored-by: YouJiacheng <1503679330@qq.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-10-26 02:08:54 +05:30
37b2aa0173 Add Dev Container configuration (#782)
* Add devcontainer

* Add dev container info to CONTRIBUTING.md

* Make cpu image the dev container default

* Fix comment typo
2022-10-21 10:05:49 -04:00
4df576efe8 Work in kaggle! (#783) 2022-10-20 15:39:01 -04:00
87a7e0783f fix transformers tests (#777) 2022-10-19 21:32:11 +02:00
5c8f181ab0 Add same_network + docs (#780) 2022-10-19 13:26:08 -04:00
6f7fa4f48e Make rich toggleable and separate out a new environment utility file (#779)
* Toggleable rich

* Refactor into environment utils
2022-10-19 12:15:12 -04:00
15a854e2cd Allow BatchSamplerShard to not even out batches (#776)
* Allow BatchSamplerShard to not even out batches

* Update src/accelerate/data_loader.py

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

* Add early error

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2022-10-19 11:46:25 -04:00
63d0653647 Add defaults for launchers (#778)
* Add defaults

* DeepSpeed
2022-10-19 10:19:04 -04:00
21b7f15c96 Fix flakey wandb test (#775)
* Fix flakey wandb
2022-10-18 16:47:31 -04:00
49cd8d37e6 Fix all github actions issues + deprecations (#773)
* Fix all github actions issues + deprecations
2022-10-18 12:27:05 -04:00
1eafa55b80 Fix number of devices in get_balanced_memory (#774)
* Fix number of devices in get_balanced_memory

* Add test
2022-10-18 11:57:52 -04:00
9114fb09d5 Regression cli tests (#772)
* New cli tests

* Add CLI testing

* Makefile + tests

* Segment out CLI in makefile better
2022-10-18 11:07:36 -04:00
5e8ab12c3d Move io_same_device hook to before attach_align_device hook on cpu_offload and disk_offload. (#768)
* Move io_same_device hook to before attach_align_device hook on cpu_offload and disk_offload.

That way we can keep the changes on forward method for the whole module without deleting the hook we want to keep: the one with execution device and configurations on how to move the tensors between devices.

* add append flag to add hook to enable usage of sequential hooks

* add tests to append hooks

* add docstring to append flag

* address review comments

* move io_same_device hook to top on cpu_offload and disk_offload

* trigger ci
2022-10-18 10:13:52 -04:00
a63511107b updating docs to use fork of megatron-lm and minor example/docs fix (#766)
* updating docs to use fork of megatron-lm and minor example fix

* Update megatron_lm_gpt_pretraining.py

* minor example fixes to have logs in sync with config and args

* Update megatron_lm_gpt_pretraining.py
2022-10-17 21:58:59 +05:30
Sam
a7334df955 Only wrap modules in DDP if they require grad (#761) 2022-10-17 10:14:42 -04:00
4a7268df9c update docs (#759)
* addressing comments

* minor doc updates

* Update training_zoo.mdx

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-10-15 08:22:49 +05:30
148f6dcaaa refactor (#758) 2022-10-15 08:05:06 +05:30
Sam
693d46826e Return unclipped gradient from grad_clip_norm_ (#756) 2022-10-14 10:04:43 -04:00
dfba92adcd ensure megatron is 2.2.0+ (#755)
* ensure megatron is 2.2.0+

* address comment

* formatting
2022-10-14 09:49:12 +05:30
4dc5049927 Change num_cpu_threads_per_process default (#753)
* Change num_cpu_threads_per_process

* Adjust based on Sylvain's feedback

* Explicit checking for None

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-10-13 07:26:27 +10:00
e3ebf176b8 Megatron-LM integration (#667)
* Megatron-LM integration

* add code and resolve comment

Co-Authored-By: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* add code

* add code

* fix many 🐛

* add code

* add code and reverting tracker processes

* updating logging utilities, fixing Pipeline Parallelism and dataset/dataloader 🐛 s

1. Fixing bugs related to Pipeline Parallelism
2. Fixing bugs related to dataloaders/datasets.
3. Fixing logging utilities so that all logging and tracking happens on last process when using Megatron.

* addressing comments

* resolving comments

* update code

* refactoring and adding code to support custom implementation of`AbstractTrainStep` class

* minor change

* Many fixes for supporting custom TrainStep and Megatron Indexed Datasets

* Add code, 🐛 fixes and a initial doc file with headings

* fixing a big 🐛 related to loading checkpoints

* adding doc and an example

* example test CI

* docs

* more docs

* more doc changes

* more doc changes

* docs

* more docs

* doc fixing

* checking whether we can directly import megatron-lm utils

* doc fixing and throwing error if megatron isn't available.

* resolving comments

* fixes to bert and t5 and more docs

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-10-13 00:34:08 +05:30
2697bebeb4 Add gpu_ids to SageMakerConfig though it should never be set (#751) 2022-10-12 05:48:47 +10:00
1f25825211 Use HTML relative paths for tiles (#749) 2022-10-11 21:08:18 +02:00
b04776159e [Device map] nn.Parameter don't have children (#747)
* [Device map] nn.Parameter don't have children

* Update src/accelerate/utils/modeling.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-10-10 15:13:08 +02:00
9179e6bf85 Fix num_processes is not defined (#746)
* Fix num_processes is not defined

* Also reorganize questions

Co-authored-by: Sylvain Gugger <Sylvain.gugger@gmail.com>
2022-10-07 11:53:05 -04:00
ba88a710eb [ds launcher] un-hijack PYTHONPATH (#741)
* [ds launcher] un-hijack PYTHONPATH

* move to utils

* improve doc, arg names

* fix

* Update src/accelerate/commands/launch.py

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>

* style

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
2022-10-06 21:56:51 +05:30
66edfe103a Add non_blocking kwarg to send_to_device() (#607) 2022-10-05 20:51:59 +02:00
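A short sketch of the `non_blocking` flag added to `send_to_device()` in #607: move a nested batch structure to the target device without forcing a synchronization.
```python
# Sketch: asynchronous host-to-device transfer of a nested batch.
import torch
from accelerate.utils import send_to_device

device = "cuda" if torch.cuda.is_available() else "cpu"
batch = {"input_ids": torch.ones(2, 8, dtype=torch.long), "labels": torch.zeros(2)}

batch = send_to_device(batch, device, non_blocking=True)
print({k: v.device for k, v in batch.items()})
```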
ec183666b6 v0.14.0.dev0 2022-10-05 14:28:39 -04:00
a54cd0abd8 Release: v0.13.0 2022-10-05 14:24:25 -04:00
5fff81bac8 Auto grad accum example (#742)
* Auto grad accum example

* Include auto grad accum in exclusion list

* Typo fix calculate -> calculate

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-10-05 11:42:08 -04:00
a75a56f1c2 Include examples for CI (#740) 2022-10-04 15:55:46 -04:00
b437b8b893 Fix memory leak (#739)
* Fix memory example

* Include update to docs

* batch size
2022-10-04 15:55:40 -04:00
ffca93b4a9 trlx (#738) 2022-10-04 10:23:01 -04:00
e5c9b4f2ce Add an example zoo to the documentation (#737)
* Training zoo

* Reword
2022-10-03 14:44:55 -04:00
9eb9aeefaf Add a tutorial on proper benchmarking (#734)
* Performance tut

* toc

* Apply suggestions from code review - Tips will be the death of me

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-10-03 13:55:13 -04:00
6ab88253cc Remove auto-bug label in issue template (#735) 2022-10-03 12:53:27 -04:00
870a7badc4 Allow for GPU-ID specification on CLI (#732)
* Specify GPU ids on CLI

* Configurable gpu-ids

* Expand to deepspeed

* all

* Fix nit

* Fix typo in docs

* further tweaks

* Further tweaks

* Change for mps specifically
2022-09-30 15:35:54 -04:00
9e4fe78b95 Fix issue with one-cycle logic (#728)
* Fixed!

* Fix and write tests
2022-09-28 16:35:36 -04:00
f3c39b4c9c Fix old naming (#727) 2022-09-28 12:00:22 -04:00
2088172c9f Make running tests more efficient (#611)
* Restructure actions and make running tests more efficient

* Try with source code adjustment

* First make sure they work

* Don't move

* Local workflows reference

* Keep it as a step

* Try changing a line

* Try not using tertiary

* Fix test

* Make tests wait

* Remove linechange

* Include and run based on new setup

* Try with removing workflow

* Re-add in, it works!

* Rename for clarity
2022-09-28 11:53:14 -04:00
68fad169e6 Build and Release docker images on a release (#725)
* Docker on release

* Releases

* FOR TESTING, REVERT ONCE DONE

* With checkout

* Revert, works!

* published

* Accidental regression
2022-09-28 06:58:00 -04:00
d21c213318 Fix default for num processes (#726)
* Fix default for num processes

* Apply suggestions from code review

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2022-09-27 17:09:51 -04:00
40bd4aa5ce Fix regression issue (#724) 2022-09-27 12:47:48 -04:00
6d038e19a1 Specify gradients in model preparation (#722)
* Specify when a model doesn't need to be prepared more
2022-09-26 14:19:29 -04:00
b67b760f66 Allow custom device placements for different objects (#716)
* Allow custom device placements for different objects

* Update src/accelerate/accelerator.py

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

* Style

* Make doc-builder happy

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2022-09-23 11:31:15 -04:00
56ce94dc29 More docstring nits (#715)
* More docstring examples + nits

* Just use module since everything is wrapped

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-09-23 10:20:59 -04:00
8b16276a41 refactor(accelerate): readability improvements (#713)
* refactor(accelerate): readability improvements

Signed-off-by: Ryan Russell <git@ryanrussell.org>

* docs: `all` fixup

Signed-off-by: Ryan Russell <git@ryanrussell.org>

* Style

Signed-off-by: Ryan Russell <git@ryanrussell.org>
Co-authored-by: Sylvain Gugger <Sylvain.gugger@gmail.com>
2022-09-22 09:36:05 -04:00
6a39d010d7 sagemaker fixes and improvements (#708)
* adding aws sagemaker examples to examples readme

* refactoring and correcting documentation
2022-09-22 10:56:46 +05:30
82a7afdde2 docs: hooks readability improvements (#712)
Signed-off-by: Ryan Russell <git@ryanrussell.org>

Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-09-21 16:49:41 -04:00
a5d0278055 refactor(test_tracking): key_occurrence readability fixup (#710)
Signed-off-by: Ryan Russell <git@ryanrussell.org>

Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-09-21 16:26:35 -04:00
9ba82f9ca4 docs: utils readability fixups (#711)
Signed-off-by: Ryan Russell <git@ryanrussell.org>

Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-09-21 16:26:05 -04:00
293a17b4f7 docs: examples readability improvements (#709)
Signed-off-by: Ryan Russell <git@ryanrussell.org>

Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-09-21 15:57:36 -04:00
efb33d67ea Update runners with report structure, adjust env variable (#704)
* Fixup rest of the runners

* Install pytest-reportlog

* Use more explicit env var

* Fixup
2022-09-20 10:10:58 -04:00
6dc429f6f7 Add in report generation for test failures and make fail-fast false (#703)
* Add logging
2022-09-19 17:24:46 -04:00
9dfc6da9ad [doc] Fix 404'd link in memory usage guides (#702)
* Fix 404'd link in memory usage guides

* Add a dot to the final sentence
2022-09-16 07:34:17 -04:00
1044c30cb1 override DeepSpeed grad_acc_steps from accelerator obj (#698)
* override DeepSpeed `grad_acc_steps` from `accelerator` obj

* resolving comment
2022-09-15 00:37:03 +05:30
4f0a1102d1 Improve init_empty_weights to override tensor constructor (#699)
* Prevent the module constructor from building tensors on CPU and then moving them to meta

* Patch torch.load

* Maybe the hack to override torch.load is too dangerous?

* Make style

* No need to override torch.load as one can just load from config instead

* Not sure why there's an include_buffers argument, but we need to override the tensor constructor only when include_buffers is True
2022-09-14 18:14:51 +02:00
8d275977c3 fixing rng sync when using custom sampler and batch_sampler (#696)
* fixing rng sync when using custom sampler and batch_sampler

* addressing comments

* 

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-09-12 20:16:06 +05:30
84444658a6 fixing support for Apple Silicon GPU in notebook_launcher (#695) 2022-09-12 17:18:49 +05:30
bc70074350 Fix DataLoader with samplers that are batch samplers (#687) 2022-09-09 11:49:19 -04:00
293757d2ae rng state sync for FSDP (#688) 2022-09-09 17:34:52 +05:30
98823de572 Clean up DispatchDataloader a bit more (#686) 2022-09-07 13:13:15 -04:00
2b08b27bed Fix skip in dispatch dataloaders (#682)
* Fix skip in dispatch dataloaders

* Remove skip altogether

* Fix last occurence
2022-09-07 07:44:36 -04:00
c69659ce39 🐛 fix (#683) 2022-09-06 21:36:00 +05:30
4274a419ef adding torchrun elastic params (#680) 2022-09-06 20:24:16 +05:30
4400eb90b2 DeepSpeed launcher related changes (#626)
* launcher related changes + minor fixes

* removing minor fixes

* remove minor change

* deepspeed multinode standard launcher

* undo

* fixing the multi-node standard launcher
2022-09-06 17:36:19 +05:30
200546c5d3 deepspeed enhancements and fixes (#676)
* deepspeed enhancement and fixes

* refactor code

* 🐛 fix

* 😅
2022-09-06 17:30:35 +05:30
60d6807c36 Test for min torch version + fix all issues (#638)
* Test for min torch

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-09-02 16:56:35 -04:00
3ab46514c9 Specify local network on multinode (#674)
* Specify local

* Update src/accelerate/commands/config/cluster.py

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
2022-09-02 16:48:23 -04:00
c9a88a8e06 Add aim tracker for accelerate (#649)
* Add aim tracker for accelerate

* Use close and name arg specifically


* Fix nits
2022-09-02 16:39:38 -04:00
a2a369e026 Make rich an optional dep (#673)
* Make rich an optional dep

* lagging import fix
2022-09-02 15:43:18 -04:00
44be28fbef Fix multi-node issues from launch (#672)
* Use different bits based on cloud vs non

* rdvz_backend fix
2022-09-02 15:04:49 -04:00
cf1e8dce75 Manim animation of big model inference (#671)
* Manim animation of big model inference

* Make into big section, not small

* Revert back to old style of headers
2022-09-02 10:34:46 -04:00
52c2b1c244 Cache torch_tpu check (#670) 2022-09-01 10:38:38 -04:00
efa8e7f89b accelerate bibtex (#660) 2022-09-01 08:19:57 +05:30
5e5148852b Improve docstrings more (#666) 2022-08-31 21:54:18 -04:00
00f47d035e Use debug for loggers (#655) 2022-08-31 11:29:35 -04:00
cb54e1023e Saving hyperparams in yaml file for Tensorboard for #521 (#657)
* Saving hyperparams in yaml file for Tensorboard

* Saving yaml file in logging dir

* Changing hardcoded path

* Adding try/catch, cleaning path name

* Raise error

* Updating path name

* Path create
2022-08-29 11:14:44 -04:00
d0f5f4a630 Small nits to grad accum docs (#656)
* Small nits to docs

* Be explicit on one vs other

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
2022-08-26 06:22:07 -04:00
469b61e0bf Add static_graph arg to DistributedDataParallelKwargs. (#637)
* Add static_graph arg to DistributedDataParallelKwargs.

supported by https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html

This is particularly useful when using gradient checkpointing

See https://discuss.pytorch.org/t/ddp-and-gradient-checkpointing/132244/3 for more details

* Add 1.11 warning for static graph argument.
2022-08-20 15:30:59 -04:00
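A small sketch of #637: forwarding DDP's `static_graph` option through the kwargs handler, which the PR notes is useful when combining DDP with gradient checkpointing (PyTorch 1.11+).
```python
# Sketch: pass static_graph through to torch.nn.parallel.DistributedDataParallel.
from accelerate import Accelerator, DistributedDataParallelKwargs

ddp_kwargs = DistributedDataParallelKwargs(static_graph=True)
accelerator = Accelerator(kwargs_handlers=[ddp_kwargs])
```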
4484438626 fix link (#645) 2022-08-20 14:15:56 +02:00
36420f53f3 remove check for main process for trackers initialization (#643)
* remove check for main process for trackers initialization

* removed is_main_process check for trackers initialization
2022-08-20 07:08:22 -04:00
a3d94916a8 make init_trackers to launch on main process (#642) 2022-08-19 09:20:33 -04:00
b0f8189d34 Put back in place the guard (#634) 2022-08-12 15:21:55 -04:00
55907ef1fb Use torchrun for multinode (#631)
* Distrib launch with config

* Add param for rdvz
2022-08-12 13:06:22 -04:00
e31d8ecaf1 minor tracker fixes for complete* examples (#630)
* minor tracker fixes for complete* examples

* state repr minor fix
2022-08-12 21:32:22 +05:30
cd46dc2f4f update MPS support docs (#629) 2022-08-12 08:49:18 -04:00
5020788db8 Integrate Rich into Accelerate (#613)
Pretty error logs are here 🤗
2022-08-11 12:59:55 -04:00
010aa93cbc Fix multi-node issues and simplify param logic (#627)
* Less hacky version for args, fix multinode param
2022-08-11 12:56:33 -04:00
92341b6233 M1 mps fixes (#625)
* M1 mps fixes

* Update src/accelerate/state.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-08-11 21:03:57 +05:30
9fd08d79f9 Fully remove subprocess from the multi-gpu launcher (#623)
* Remove one of the subprocesses!
2022-08-10 11:00:46 -04:00
2656ca619f Update README.md (#622) 2022-08-09 15:00:14 -04:00
4df9010b70 Fix example (#620) 2022-08-09 12:32:50 -04:00
94b8c17b4a Added GANs example to examples (#619)
* Added link to example of Accelerator with GANs

* Update README.md

* Update examples/README.md

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-08-09 11:59:19 -04:00
35e1cd3978 Trigger doc build 2022-08-09 08:38:05 -04:00
a08779f603 Fix DeepSpeed CI (#612)
* Try with integration on makefile
2022-08-08 14:54:56 -04:00
efc7aeb064 Fix typo in docs/index.mdx (#610) 2022-08-08 18:34:08 +02:00
080f4bd7c1 v.0.13.0.dev0 2022-08-04 09:04:05 -04:00
9a660e082f fixing deepspeed slow tests issue (#604)
* fixing deepspeed slow tests issue

* skip `checkpointing` test as it leads to RAM overuse

* disabling fsdp cpu offload mem test
2022-08-04 17:59:54 +05:30
0bb808276a add more conditions on casting (#606) 2022-08-04 08:22:16 -04:00
67d68b8adf Remove redundant .run in WandBTracker. (#605) 2022-08-04 07:23:22 -04:00
24c28a1adc Fix some typos + wordings (#603)
* Fix all typos + wordings

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-08-03 11:19:20 -04:00
afa7490ff4 M1 GPU mps device integration (#596)
* fixing metric computation

* refactoring

* Mac M1 GPU `mps` device support

* Update state.py

* reverting the `nlp_example.py` changes from the copied branch

* resolve comments

Co-Authored-By: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* docs quality

* Update docs/source/usage_guides/mps.mdx

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

* resolving comments

* resolving comments

Co-Authored-By: Zachary Mueller <7831895+muellerzr@users.noreply.github.com>

* resolving comments

* resolving comments

* resolving comments on docs

Co-Authored-By: Zachary Mueller <7831895+muellerzr@users.noreply.github.com>

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
Co-authored-by: Zachary Mueller <7831895+muellerzr@users.noreply.github.com>
2022-08-03 18:55:57 +05:30
b10fd818f9 reorg of test scripts and minor changes to tests (#602)
* reorg of test scripts and minor changes to tests

* adding the recent fix of deepspeed
2022-08-03 18:03:43 +05:30
8944975a3c Reenable Gather for Metrics (#590)
* Clean and finish

Co-authored-by: Sylvain Gugger <Sylvain.gugger@gmail.com>
2022-08-02 13:45:17 -04:00
15a8c6c7be Move warning (#598) 2022-08-02 13:42:08 -04:00
b52b793ea8 Shorthand way to grab a tracker (#594)
* Enable grabbing the underlying tracker
2022-08-02 09:12:32 -04:00
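A hedged sketch of the tracker shorthand from #594, assuming `wandb` is installed and configured, and that `get_tracker("wandb")` hands back the wandb tracker so you can log things `accelerator.log()` does not cover.
```python
# Hedged sketch: grab a single tracker by name for custom logging (assumes wandb).
from accelerate import Accelerator

accelerator = Accelerator(log_with="wandb")
accelerator.init_trackers("my_project")

wandb_tracker = accelerator.get_tracker("wandb")
wandb_tracker.log({"custom_metric": 0.5}, step=10)

accelerator.end_training()
```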
5dd4eaf6fa Pin deepspeed (#595) 2022-08-02 09:11:34 -04:00
29a222a261 Improve docstring (#591) 2022-08-01 17:41:27 -04:00
217dd69682 TESTS! (#589) 2022-08-01 15:56:02 -04:00
7a5a96b7b2 Fix DispatchDataloader (#588)
* Fix DispatchDataloader

* Fix last bug

* Revert part of the test fixes
2022-08-01 15:55:35 -04:00
447ad0e635 Complete revamp of the docs (#495)
Completely revamp the entirety of the Accelerate documentation
2022-08-01 10:09:14 -04:00
d5a0fc2d62 Small fixed for balanced device maps (#583) 2022-07-28 15:27:27 -04:00
7f5c60c182 Use main_process_first in the examples (#581) 2022-07-28 12:11:07 -04:00
503057132d Skip and raise NotImplementedError for gather_for_metrics for now (#580)
* Skip and raise NotImplementedError for now
2022-07-28 11:56:00 -04:00
c826b51a82 minor FSDP launcher fix (#579) 2022-07-28 20:38:21 +05:30
e0212893ea Fix gather_for_metrics (#578)
* Fix gather_for_metrics
2022-07-27 14:20:52 -04:00
e809268580 Refine test in set_module_tensor_to_device (#577) 2022-07-27 11:36:48 -04:00
f438a813ff Fix set_module_tensor_to_device (#576)
* Fix

* Refine test

* Fix test
2022-07-27 09:46:12 -04:00
75053e45c3 Add 8 bit support - chapter II (#539)
* Meta init/tensor_to_device logic for Int8 Parameters.

* add 8 bit support

* add special modules support

Co-authored-by: timdettmers <timdettmers@users.noreply.github.com>

* bad formatting

* bad formatting

* restoring the poor lines that were alone!

* small hack

- replaced parameter replacement logic

* add int8 support - v1

* replace cpu by device

* better refactoring

* put to buffer

* add else statement to avoid breaking changes

* styling

Co-authored-by: Tim Dettmers <tim.dettmers@gmail.com>
Co-authored-by: timdettmers <timdettmers@users.noreply.github.com>
2022-07-27 07:12:49 -04:00
015f228c5e Fix tests, add wandb to gitignore (#573)
* Fix tests, add wandb to gitignore

* Clean
2022-07-26 16:08:35 -04:00
1486fa35b1 Fix step (#572) 2022-07-26 12:29:05 -04:00
7a49418e51 Speed up main CI (#571)
* Speed up ci by reducing training epochs
2022-07-26 11:35:18 -04:00
d26478b95d ccl version check and import different module according to version (#567)
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2022-07-26 10:11:05 -04:00
bf0017f0a8 set default num_cpu_threads_per_process to improve oob performance (#562)
* set default num_cpu_threads_per_process to improve oob performance

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

* fix log info

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2022-07-26 10:10:51 -04:00
e3642a469f Add a tqdm helper (#564)
* tqdm helper
2022-07-26 10:00:00 -04:00
d6b7536750 Rename actions to be a bit more accurate (#568)
* Run slow + rename

* Name message more accurately
2022-07-26 09:42:21 -04:00
5e25edd3b6 Fix clean (#569) 2022-07-26 09:26:05 -04:00
0c6bdc2c23 enhancements and fixes for FSDP and DeepSpeed (#532)
* checkpointing enhancements and fixes for FSDP and DeepSpeed

* resolving comments

1. Adding deprecation args and warnings in launcher for FSDP
2. Handling old configs to work with new launcher args wrt FSDP.
3. Reverting changes to public methods in `checkpointing.py` and handling it in `Accelerator`
4. Explicitly writing the defaults of various FSDP options in `dataclasses` for readability.

* fixes

1. FSDP wrapped model being added to the `_models`.
2. Not passing the env variables when args are None.

* resolving comments

* adding FSDP for all the collective operations

* adding deepspeed and fsdp tests

1. Removes mrpc datafiles and directly relies on HF datasets as it was throwing `file not found` error when running from within `tests` folder. Updating `moke_dataloaders` as a result.
2. adding `test_performance.py`, `test_memory.py` and `test_checkpointing.py` for multi-gpu FSDP and DeepSpeed tests

* reverting `mocked_dataloader` changes

* adding FSDP tests

* data files revert

* excluding fsdp tests from `tests_core`

* try 2

* adding a time delay to avoid `torchrun` crashing at times, which was causing flaky behaviour

* reducing the time of tests

* fixes

* fix

* fixes and reduce time further

* reduce time further and minor fixes

* adding a deepspeed basic e2e test for single gpu setup
2022-07-26 18:14:29 +05:30
91ff425bb0 fix: saving model weights (#556)
* fix: saving model weights

checkpointing was not saving model weights when calling `accelerator.prepare_model` instead of `accelerator.prepare`;
resolves issue: https://github.com/huggingface/accelerate/issues/555

* fix: saving model weights for optimizer and scheduler
2022-07-26 08:44:09 -04:00
cc1007163b Fix wrong indentation 2022-07-26 07:47:40 -04:00
7d97e9c641 add on_main_process decorators (#488)
* add some useful decorators

* make on_(local_)main_process member of Accelerator

* update examples

* add on_process and on_local_process

* fixes wrong name for `on_local_process`

* Update src/accelerate/accelerator.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/accelerate/accelerator.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/accelerate/accelerator.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-07-26 13:14:35 +02:00
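A short sketch of the decorators added in #488: run a function only on the chosen process instead of wrapping its body in `if accelerator.is_main_process:` checks.
```python
# Sketch: process-scoped functions via Accelerator decorators.
from accelerate import Accelerator

accelerator = Accelerator()

@accelerator.on_main_process
def log_once(message):
    print(message)

@accelerator.on_local_main_process
def log_once_per_node(message):
    print(message)

log_once("only the global main process prints this")
log_once_per_node("one process per node prints this")
```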
Kim
f90ec5255b Update imports.py (#554)
torch_ccl rename
2022-07-26 13:07:37 +02:00
5391412d64 unpin datasets (#563) 2022-07-25 16:56:08 +02:00
6c4edc362f Create good defaults in accelerate launch (#553)
* Support not passing in args to launch
2022-07-22 09:40:59 -04:00
b08ae9730e Fix a few minor issues with example code in docs (#551)
* Fix a few minor issues with example code in docs

- enumerate is not actually used
- variable name "labels" does not match
- prepare method should be called

* Apply style
2022-07-22 14:39:15 +02:00
e98dc22a37 deepspeed version 0.6.7 fix (#544)
* deepspeed version hotfix

* Update setup.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* resolving the issue! yay 🤗

* resolving circular dependency issue 😅

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-07-22 11:51:06 +05:30
27d8d45817 Rename test extras to testing (#545)
* Extras test to testing

* Fix naming
2022-07-21 15:09:38 -04:00
fdf471519c Add production testing + fix failing CI (#547)
* Add production testing

* Fix CI failure on transformers
2022-07-21 14:32:27 -04:00
164943c7d7 Add a gather_for_metrics capability (#540)
* Add test and full implementation
2022-07-21 07:40:37 -04:00
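A small sketch of `gather_for_metrics` from #540: gather predictions across processes while dropping the duplicated samples added to even out the last batch, so metrics cover exactly `len(dataset)` examples.
```python
# Sketch: distributed evaluation without double-counting padded samples.
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()
model = torch.nn.Linear(4, 2)
dataset = TensorDataset(torch.randn(10, 4), torch.randint(0, 2, (10,)))
model, eval_dataloader = accelerator.prepare(model, DataLoader(dataset, batch_size=4))

all_preds, all_labels = [], []
model.eval()
with torch.no_grad():
    for inputs, labels in eval_dataloader:
        preds = model(inputs).argmax(dim=-1)
        preds, labels = accelerator.gather_for_metrics((preds, labels))
        all_preds.append(preds)
        all_labels.append(labels)

accuracy = (torch.cat(all_preds) == torch.cat(all_labels)).float().mean()
print(f"accuracy: {accuracy:.3f}")
```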
9c1e68849e Allow for kwargs to be passed to trackers (#542)
* Allow for kwarg passing to trackers

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
2022-07-21 07:30:55 -04:00
d6c72bdff6 Add balanced option for auto device map creation (#534)
* Add balanced option for auto device map creation

* More options

* Add low0 option

* Add documentation

* Add tests

* Fix tests

* Update docs/source/big_modeling.mdx

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2022-07-20 17:39:52 +02:00
158acdd22c Add support for downcasting bf16 on TPUs (#523)
* Allow for downcast
2022-07-20 05:50:08 -04:00
f6df405b5c Add more documentation for device maps computations (#530)
* Add more documentation

* Unbreak navbar

* Apply suggestions from code review

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

* Address review comments

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
2022-07-20 09:11:54 +02:00
7cf13b229f Restyle prepare one (#531) 2022-07-18 11:53:56 -04:00
e965b56bb3 Pick a better default for offload_state_dict (#529) 2022-07-18 16:55:59 +02:00
ddedeb4062 fix some parameter setting does not work for CPU DDP and bf16 fail in… (#527)
* fix some parameter settings that do not work for CPU DDP, and bf16 failing in the DDP path

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

* if number_machine > 1, get the ip and port accelerate config set

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

* if main_process_ip and port are set by the user, use them; else use the default "127.0.0.1" when DDP is used on one machine

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2022-07-18 15:19:52 +02:00
0ee319b39b Really v0.12.0.dev0 2022-07-18 09:14:15 -04:00
ae5ca34f13 v0.12.0.dev0 2022-07-18 08:55:50 -04:00
eebeb59a36 Fix accelerate tests command (#528) 2022-07-18 14:47:34 +02:00
be4b74f42f Release: v0.11.0 2022-07-18 08:27:58 -04:00
c93b3eb5d7 FSDP integration enhancements and fixes (#522)
* FSDP integration enhancements and fixes

* bug fixes

1. fix circular dependency
2. Add model print statement in FSDP example
3. minor fixes

* removing `always_wrap` as it is rarely useful

* removing comment

* resolving comments

* fsdp fp16 mp uses ShardedGradScaler

* fix import

* fix check

* add exception when class to wrap not found in model

* adding `FSDP_BACKWARD_PREFETCH`

* fix
2022-07-18 17:45:58 +05:30
3eea8ceee0 Warn user if no trackers are installed (#524) 2022-07-15 18:16:00 +02:00
7abc708be2 Fixup all example CI tests and properly fail (#517)
* Clean and make all tests pass
2022-07-15 18:15:45 +02:00
bb78b04cce fixing deepspeed multi-node launcher (#514)
* fixing deepspeed multi-node launcher

* minor fixes

* handling env variables for accelerate to correctly work

* resolving comments
2022-07-14 18:40:48 +05:30
7e6593756f Add special Parameters modules support (#519)
* Meta init/tensor_to_device logic for Int8 Parameters.

* add 8 bit support

* add special modules support

Co-authored-by: timdettmers <timdettmers@users.noreply.github.com>

* bad formatting

* bad formatting

* restoring the poor lines that were alone!

Co-authored-by: Tim Dettmers <tim.dettmers@gmail.com>
Co-authored-by: timdettmers <timdettmers@users.noreply.github.com>
2022-07-13 12:46:36 -04:00
960fd9d86a Don't unwrap in save_state() (#489) 2022-07-13 12:46:21 -04:00
70ca65a9a1 Fix a bug when reduce a tensor. (#513)
* return reduced result

* update doc for Accelerator.reduce

* update doc in Accelerator.reduce

* fix reduce behavior for gpu devices
2022-07-13 09:19:01 -04:00
ea0d5368bd Add benchmarks (#506)
* Add benchmarks

* Oops! Forgot one file

* Update benchmarks/README.md

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2022-07-12 15:16:45 -04:00
78357f44b3 Add gradient accumulation doc (#511)
* Gradient accumulation doc

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-07-12 17:36:45 +02:00
c7526e9483 Make gradient accumulation work with dispatched dataloaders (#510)
* Make grad accum work with dispatch dl

* Split print over multiple lines
2022-07-12 17:12:39 +02:00
f5ef120e77 Fix DispatchDataLoader length when split_batches=True (#509) 2022-07-12 10:35:35 -04:00
3c1f97c386 SageMaker enhancements to allow custom docker image, input channels referring to s3/remote data locations and metrics logging (#504)
* SageMaker DP and MP Support

* fix 😅

* removing SageMaker MP option

* adding support for custom image_uri, data location and metrics
2022-07-12 13:25:52 +05:30
a0514dd809 SageMaker DP Support (#494)
* SageMaker DP and MP Support

* fix 😅

* removing SageMaker MP option
2022-07-09 00:14:57 +05:30
b20f90ab17 Fix scheduler in gradient accumulation example (#500)
* Fix scheduler in gradient accumulation example

* Phrase better how the scheduler is stepped during grad accum

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-07-08 13:41:43 -04:00
cfb2a3e239 update dataloader wrappers to have total_batch_size attribute (#493)
* update dataloader wrappers to have `total_batch_size` attribute

* fix

* resolving comments

* Update src/accelerate/data_loader.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* quality

* add docstrings

* Update src/accelerate/data_loader.py

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

* docstrings iter 2 + quality + minor change in doc

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2022-07-08 21:16:31 +05:30
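A small sketch of what the new attribute exposes on a prepared dataloader (the dataset below is a toy stand-in):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()
dataset = TensorDataset(torch.randn(64, 4))  # toy data, purely illustrative
dataloader = accelerator.prepare(DataLoader(dataset, batch_size=8))

# Per-process batch size multiplied by the number of processes
# (or the plain batch size when split_batches is enabled).
print(dataloader.total_batch_size)
```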
86ce737d7f Introduce automatic gradient accumulation wrapper + fix a few test issues (#484)
* Have accelerator handle gradient accumulation

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-07-05 15:49:36 -04:00
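A minimal sketch of how the new wrapper is used, following the `Accelerator.accumulate` API as it exists today (model, optimizer and data are toy stand-ins):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator(gradient_accumulation_steps=4)

model = torch.nn.Linear(4, 1)  # toy model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
data = DataLoader(TensorDataset(torch.randn(32, 4), torch.randn(32, 1)), batch_size=8)
model, optimizer, data = accelerator.prepare(model, optimizer, data)

for x, y in data:
    # Gradients are synchronized and the optimizer step applied only every fourth
    # batch; the context manager handles the bookkeeping that used to be manual.
    with accelerator.accumulate(model):
        loss = torch.nn.functional.mse_loss(model(x), y)
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
```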
deffaba8d6 add use_distributed property (#487)
* add distributed property in accelerate_state

* ensure num_process > 1
2022-07-05 09:19:44 -04:00
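In user code the new property reads roughly as follows (a trivial sketch):

```python
from accelerate import Accelerator

accelerator = Accelerator()
# True only when more than one process is involved (DDP, DeepSpeed, FSDP, ...).
if accelerator.use_distributed:
    print("running distributed on", accelerator.num_processes, "processes")
else:
    print("running on a single process")
```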
6ebddcd5e0 fixing fsdp autowrap functionality (#475)
* fixing fsdp autowrap functionality

* updating version requirements

* update version to latest torch stable version

* quality
2022-07-01 10:00:47 +05:30
4a7bc3bcb7 Use datasets 2.2.0 for now (#481) 2022-06-28 12:31:41 -04:00
1f96f3cf85 Rm gradient accumulation on TPU (#479)
* Rm gradient accumulation on TPU for now
2022-06-28 12:29:58 -04:00
bbca2700c7 Revert "Pin datasets for now (#477)" (#478)
This reverts commit a8eca60d57e8294e666b765b5331770aa0c58893.
2022-06-28 10:09:11 -04:00
a8eca60d57 Pin datasets for now (#477) 2022-06-28 09:47:39 -04:00
329209871f Some typos and cosmetic fixes (#472) 2022-06-27 05:40:07 -07:00
619ef04f09 Dev version 2022-06-24 16:41:09 -04:00
9d8ed50f7b Fix when TPU device check is ran (#469) 2022-06-24 12:07:38 -04:00
196856f357 Refactor Utility Documentation (#467)
* Add a utilities doc
2022-06-23 16:34:01 -04:00
3a5490b066 Add docbuilder to quality (#468) 2022-06-23 14:36:16 -04:00
24be733d84 Expose some is_*_available utils in docs (#466) 2022-06-23 10:34:45 -04:00
775bc790e7 Cleanup CI Warnings (#465)
* Fix named tuple warning

* Use torch AdamW instead of transformers

* Make regex string instead of literal
2022-06-23 10:06:19 -04:00
799fa935e9 Link CI slow runners to the commit (#464)
* Tweak trigger logic to link actions together
2022-06-23 08:56:01 -04:00
3ccbd9f7a0 Fix subtle bug in BF16 (#463)
* mixed precision bugfix

* Use is_tpu_available
2022-06-23 08:55:13 -04:00
f13c59f91e Include bf16 support for TPUs and CPUs, and a better check for if a CUDA device supports BF16 (#462)
* Support bf16 on TPU, CPU, and GPU in Accelerator directly
2022-06-22 17:53:42 -04:00
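In practice this surfaces through the `mixed_precision` argument; a minimal sketch, assuming hardware that actually supports bf16 and the present-day `autocast` helper:

```python
import torch
from accelerate import Accelerator

# bf16 autocast on CPU, GPU or TPU, provided the hardware supports it.
accelerator = Accelerator(mixed_precision="bf16")

model = accelerator.prepare(torch.nn.Linear(4, 4))  # toy model
with accelerator.autocast():  # ops run in bfloat16 where appropriate
    out = model(torch.randn(2, 4, device=accelerator.device))
```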
d39c57c11f Handle bfloat16 weights in disk offload without adding memory overhead (#460) (#461) 2022-06-22 09:13:23 -04:00
e2a968c66d Handle bfloat16 weights in disk offload (#460)
* Handle bfloat16 weights in disk offload

* Address review comments
2022-06-21 18:06:57 -04:00
dc243c0db1 Raise a clear warning if a user tries to modify the AcceleratorState (#458)
* Reinitialize warning
2022-06-21 16:42:35 -04:00
97f4c9de61 Right step point (#459) 2022-06-21 15:11:03 -04:00
73a596593e Better checks for if a TPU device exists (#456)
* Check if a TPU device actually exists
2022-06-21 12:12:00 -04:00
eeaba598f4 Offload and modules with unused submodules (#442)
* Offload and modules with unused submodules

* Renaming

* Update src/accelerate/hooks.py

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>

* Address review comment

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
2022-06-17 20:04:39 -04:00
3d92caa241 Release: v0.10.0 2022-06-15 13:58:22 -04:00
fa17f207b5 Fix docstring (#447) 2022-06-15 13:54:04 -04:00
873dcc63a4 Migrate HFDeepSpeedConfig from trfrs to accelerate (#432)
* Migrate HFDeepSpeedConfig from trfrs to accelerate

* update state.py to resolve comments

1. Adds static method to have a simple API for integrating deepspeed config in transformers trainer.

* reverting changes and addressing comments

* Marking DepSpeed and FSDP as experimental in accelerate
2022-06-15 20:56:39 +05:30
40b6fe1784 Add psutil as dependency (#445) 2022-06-15 11:03:52 -04:00
29eef234c9 Revamp TPU internals to be more efficient + enable mixed precision types (#441)
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-06-14 17:41:20 -04:00
3f0876ac03 fix fsdp torch version dependency (#437) 2022-06-11 00:36:44 +05:30
450d51ce01 Create Gradient Accumulation Example (#431)
* Gradient accumulation example
2022-06-08 14:46:04 -04:00
1b2da6c6a5 init (#429) 2022-06-08 14:07:10 -04:00
1424a8e00d Introduce no_sync context wrapper + clean up some more warnings for DDP (#428) 2022-06-08 12:56:21 -04:00
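A short sketch of the context manager this introduces, assuming the current `Accelerator.no_sync` API (toy model and data; on a single process it is simply a no-op):

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()
model = accelerator.prepare(torch.nn.Linear(4, 1))  # toy model

x = torch.randn(8, 4, device=accelerator.device)
y = torch.randn(8, 1, device=accelerator.device)
with accelerator.no_sync(model):  # skip DDP gradient synchronization for this step
    loss = torch.nn.functional.mse_loss(model(x), y)
    accelerator.backward(loss)
```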
b2afd4e8da updating tests to resolve runner failures wrt deepspeed revamp (#427)
* deepspeed revamp

* Update dataclasses.py

* Update deepspeed.py

* quality

* fixing code

* quality

* FIx imports

* saving 16bit model in zero stage 3

1. Saving 16bit model in zero stage 3
2. zero init in stage 3 support using HFDeepSpeedConfig

* quality

* adding test and fixing bugs

* update makefile for deepspeed tests

* Update test.yml

* adding `deepspeed` as requirement for tests

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* quality

* addressing comments

* add example and minor updates

1. Add example to show the usage of config file with revamped deepspeed support.
2. update required deepspeed version to 0.6.5
3. reverting `reinit` change as it is not required
4. raising Exception when using `clip_grad_value` with DeepSpeed/FSDP.

* Documentation and Zero-3 Inference Support

1. Changes to support ZeRO Stage-3 inference.
2. minor bug fixes.
3. Documentation.

* doc fix

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* addressing comments

* update doc to address comments and bug fixes

1. update tests and add new one testing autofill functionality of `prepare` method.
2. fix bug related to zero-3 init related to HFDeepSpeedConfig
3. Update documentation addressing comments.

* removing image and hosting it on `documentation-images` dataset

* check for hidden_size for zero_opt heuristics

* updating tests to resolve runner failures

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-06-07 16:21:26 +05:30
2130205626 Fix secrets in Docker workflow (#426)
* Fix secrets
2022-06-07 06:47:09 -04:00
1703b79a79 DeepSpeed Revamp (#405)
* deepspeed revamp

* Update dataclasses.py

* Update deepspeed.py

* quality

* fixing code

* quality

* FIx imports

* saving 16bit model in zero stage 3

1. Saving 16bit model in zero stage 3
2. zero init in stage 3 support using HFDeepSpeedConfig

* quality

* adding test and fixing bugs

* update makefile for deepspeed tests

* Update test.yml

* adding `deepspeed` as requirement for tests

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* quality

* addressing comments

* add example and minor updates

1. Add example to show the usage of config file with revamped deepspeed support.
2. update required deepspeed version to 0.6.5
3. reverting `reinit` change as it is not required
4. raising Exception when using `clip_grad_value` with DeepSpeed/FSDP.

* Documentation and Zero-3 Inference Support

1. Changes to support ZeRO Stage-3 inference.
2. minor bug fixes.
3. Documentation.

* doc fix

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* addressing comments

* update doc to address comments and bug fixes

1. update tests and add new one testing autofill functionality of `prepare` method.
2. fix bug related to zero-3 init related to HFDeepSpeedConfig
3. Update documentation addressing comments.

* removing image and hosting it on `documentation-images` dataset

* check for hidden_size for zero_opt heuristics

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-06-07 00:52:18 +05:30
05c641bc0c Introduce a Dependency Checker to trigger new Docker Builds on main (#424)
* Introduce warning + auto build

* Trigger only on merge to main
2022-06-06 07:30:39 -04:00
da78e296ba Enable slow tests nightly (#421) 2022-06-01 20:28:31 -04:00
9e0fff9291 Push out python 3.6 + fix all tests related to the upgrade (#420)
* Update Docker for py 3.7

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-06-01 16:49:27 -04:00
938b8f358d Speedup main CI (#419)
* Speed up workflow
2022-06-01 10:59:01 -04:00
d04e8e2baa Switch to evaluate for metrics (#417)
* Switch to evaluate for metrics

* Why the heck?

* Fix syntax error

* Install from github

* Is this the culprit?

* Upgrade Python

* Protobuf 💩

* Install from git not necessary now

* Sneaky last tensorboard

* Let's try this way

* Forgot to add all files :-/
2022-06-01 09:57:57 -04:00
8db128498c Create an issue template for Accelerate (#415) 2022-06-01 09:15:23 -04:00
114707449b Introduce post-merge runners (#416)
* Introduce post-merge runners
2022-05-31 15:11:29 -04:00
3b51d6e9ad Fix debug_launcher issues (#413)
* change to require_cpu only
2022-05-31 14:59:28 -04:00
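For reference, `debug_launcher` spins up a small CPU-only multi-process run, which is what these tests rely on; a minimal sketch assuming the current signature:

```python
from accelerate import Accelerator, debug_launcher

def training_check():
    accelerator = Accelerator()
    accelerator.print(f"hello from process {accelerator.process_index}")

if __name__ == "__main__":
    # Launches `training_check` in two CPU processes, handy for tests without GPUs.
    debug_launcher(training_check, num_processes=2)
```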
174eb3af1d Use main egg (#414) 2022-05-31 14:58:38 -04:00
d176b552c9 Introduce nightly runners (#410)
* Introduce nightly builds
* Fixup docker images slightly
* Make device-count specific test use `torch.cuda.device_count()` rather than `Accelerator.num_processes` to avoid bug.
2022-05-31 14:14:02 -04:00
95d1edbf8d Update requirements to pin tensorboard and include psutil (#408)
* Update test requirements to include psutil, tensorboard, and the right tensorflow version
2022-05-31 09:52:16 -04:00
a91575f1bb Fix CUDA examples tests (#407)
* Fix CUDA tests

* Use num_processes to keep everything under one test
2022-05-31 09:51:21 -04:00
146ce3df48 Move datasets and transformers to under func (#411) 2022-05-31 08:47:16 -04:00
94d88fb50d Fix CUDA Dockerfile (#409)
* Install git

* Fix CPU image as well
2022-05-31 08:47:08 -04:00
b515800947 Hotfix all failing GPU tests (#401)
* Fix up makefile
2022-05-26 14:13:19 -04:00
d1f7f99684 improve metrics logged in examples (#399) 2022-05-26 17:29:49 +05:30
00ee34d9a6 Refactor offload_state_dict and fix in offload_weight (#398) 2022-05-25 16:09:25 -04:00
f6ec2660f0 Refactor version checking into a utility (#395)
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-05-25 14:07:39 -04:00
b3e21686de Include fastai in frameworks (#396) 2022-05-25 13:42:09 -04:00
f12ef1416e Add packaging to requirements (#394)
* Add packaging to requirements
2022-05-25 11:33:14 -04:00
18085fa250 Better dispatch for submodules (#392) 2022-05-25 10:51:18 -04:00
6be221f15e Build Docker Images nightly (#391) 2022-05-24 15:02:08 -04:00
3c4308e8cd Revert "Better dispatch for modules"
This reverts commit 17046bfaf8b805ebbc8ac4695f731b58c61004ed.
2022-05-24 13:48:19 -04:00
17046bfaf8 Better dispatch for modules 2022-05-24 13:47:39 -04:00
07ed7e92b5 Small bugfix for the stalebot workflow (#390)
* Bugfix dispatch
2022-05-24 11:58:26 -04:00
5a679d08d3 Introduce stalebot (#387)
* Add stalebot
2022-05-23 17:10:14 -04:00
5a00ece500 Create Dockerfiles for Accelerate (#377) 2022-05-23 17:09:56 -04:00
f62ae86cfb Mix precision -> Mixed precision (#388) 2022-05-23 15:02:29 -04:00
f9de557037 Fix OneCycle step length when in multiprocess (#385)
* Special onecycle fix
2022-05-23 12:28:44 -04:00
517cbf408b V0.10.0.dev0 2022-05-20 13:51:21 -04:00
f626d87eb7 Release: v0.9.0 2022-05-20 13:46:17 -04:00
8b8c5345cd Refactor some parts in utils (#380) 2022-05-20 12:23:54 -04:00
41427c594a Better check for deepspeed availability (#379)
* Better check for deepspeed availability

* Address comment

* Simplify a bit
2022-05-20 11:05:18 -04:00
3c45b6f760 fix shuffling for ShufflerIterDataPipe instances (#376)
* fix shuffling for ShufflerIterDataPipe instances

* add versioning test for Pytorch

* fix minimum Pytorch version

Co-authored-by: Loubna ben allal <loubnabenallal@gmail.com>
2022-05-20 08:55:03 -04:00
b922c63322 fix zero stage-1 (#378) 2022-05-20 17:18:17 +05:30
23c0341262 Refactor tests to use accelerate launch (#373)
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-05-19 11:48:12 -04:00
6163e20b14 deepspeed save model temp fix (#374)
* fix deepspeed model saving

* fix deepspeed zero stage-3 model save

fixes #369

Co-Authored-By: Kovvuri Satyanarayana Reddy <54667784+KOVVURISATYANARAYANAREDDY@users.noreply.github.com>

Co-authored-by: Kovvuri Satyanarayana Reddy <54667784+KOVVURISATYANARAYANAREDDY@users.noreply.github.com>
2022-05-19 18:01:53 +05:30
d33dc39a32 fix deepspeed model saving (#370) 2022-05-19 00:07:20 +05:30
043d2ec52d Add a utility for writing a barebones config file (#371)
* Create a basic_config function
2022-05-18 13:39:19 -04:00
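In today's library this utility is exposed as `accelerate.utils.write_basic_config` (the name at the time of this commit may have differed slightly); a minimal sketch:

```python
from accelerate.utils import write_basic_config

# Writes a barebones default config based on the detected hardware, so that
# `accelerate launch` can be used without running the interactive `accelerate config`.
write_basic_config(mixed_precision="no")
```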
64e41a4995 Remove tensor call (#365) 2022-05-13 10:51:14 -04:00
4736c754bf fix tracking (#361)
* fixing trackers

* quality

* bug fix

* bug fix

* addressing comments and fixing tests

* Fixing script diff test
2022-05-13 17:20:27 +05:30
28edac2c4c Update launchers.py (#363) 2022-05-13 07:25:44 -04:00
1700716760 Handle deprecation errors in launch (#360)
* Adjust based on deprecation
2022-05-12 11:13:50 -04:00
aa9b614967 v0.9.0.dev0 2022-05-12 11:02:19 -04:00
2943172b8f v0.8.0 Release 2022-05-12 10:52:54 -04:00
f56f4441b3 Big model inference (#345)
* Big model inference

* Reorganize port cleanup

* Last cleanup

* Test fix

* Quality

* Update src/accelerate/big_modeling.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Fix bug in default mem

* Check device map is complete

* More tests

* Make load function more general

* Apply suggestions from code review

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

* Quality

* Address more review comments

* Check generation results for gpt2

* Add main wrapper around everything

* Tests for final API

* Clean infer_auto_device

* Type annotations

* Apply suggestions from code review

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
Co-authored-by: Lysandre Debut <lysandre.debut@reseau.eseo.fr>

* Address review comments

* Last review comment for now

* Fix bug in clean_device_map

* Add doc

* Style

* Fixes + dtype support

* Fix test

* Add option to offload CPU state_dict

* Indent typo

* Final tweaks

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
Co-authored-by: Lysandre Debut <lysandre.debut@reseau.eseo.fr>
2022-05-12 10:09:28 -04:00
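For orientation, a hedged sketch of the user-facing API this PR introduces (the checkpoint path is a hypothetical placeholder and the model a toy stand-in; argument names follow the current big-modeling API):

```python
import torch
from accelerate import init_empty_weights, load_checkpoint_and_dispatch

# 1. Instantiate the model skeleton without allocating real weights.
with init_empty_weights():
    model = torch.nn.Sequential(torch.nn.Linear(1024, 1024), torch.nn.Linear(1024, 1024))

# 2. Load a (hypothetical) checkpoint and dispatch layers across the available
#    devices, spilling to CPU or disk when the GPUs are too small.
model = load_checkpoint_and_dispatch(
    model,
    checkpoint="path/to/sharded_checkpoint",  # placeholder path
    device_map="auto",
)
```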
45359a73ff DeepSpeed and FSDP plugin support through script (#356)
* DeepSpeed and FSDP plugin support through script

Setting env variables when DeepSpeed/FSDP plugins are provided directly through the script without using accelerate launch.

* quality
2022-05-11 19:37:49 +05:30
b5b68fbb4d Fixing metric eval in distributed setup (#355) 2022-05-10 17:17:22 +05:30
d190ed7e41 Fix sample calculation in examples (#352)
* Fix metric calculation across examples
2022-05-09 15:44:49 -04:00
b923e134e7 Fix prompt for num_processes (#347)
* Fix prompt for num_processes

* Fix prompting

Handling FSDP and DeepSpeed num_processes while prompting.

* quality
2022-05-06 17:42:23 +05:30
b2956acbe9 Better prompt for number of training devices (#344)
* TPU specific
2022-05-05 13:12:32 -04:00
be0f7ce44f Handle Manual Wrapping in FSDP. Minor fix of fsdp example. (#342)
* Handle manual wrapping in FSDP. Fix fsdp example.
2022-05-05 21:15:53 +05:30
603a53f056 Improve num_processes question in CLI (#343)
* Rephrase num_processes question
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-05-05 11:07:23 -04:00
02e2ed567b Refactor utils into its own module (#340)
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-05-05 10:48:07 -04:00
8abd274a7f Introduce multiprocess logger (#337) 2022-05-02 09:45:10 -04:00
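A minimal sketch of the logger, assuming the current `accelerate.logging.get_logger` API:

```python
import logging
from accelerate.logging import get_logger

logging.basicConfig(level=logging.DEBUG)
logger = get_logger(__name__)

logger.info("logged once, from the main process only")
logger.debug("logged on every process", main_process_only=False)
```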
b05d483944 Fixed a typo to enable running accelerate correctly (#339) 2022-05-02 07:54:57 -04:00
a74c7c9538 Create peak_memory_uasge_tracker.py (#336)
* Create peak_memory_uasge_tracker.py

Adding a by-feature example for tracking peak GPU memory usage. One use case is tracking the peak memory reduction when using FSDP.

* fixing the typo in the file name

* reformatting

* exclude peak_memory_usage_tracker.py from tests

* renaming and highlighting proper usage

* Update test_examples.py

😅
2022-04-29 22:38:34 +05:30
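The example added here wraps the measurement in a small context manager; the core idea can be sketched with plain PyTorch as follows (a simplified stand-in, not the file added by this commit):

```python
import torch

def report_peak_gpu_memory(fn):
    """Run `fn` and report the peak GPU memory it allocated (simplified sketch)."""
    if not torch.cuda.is_available():
        print("no CUDA device available; skipping measurement")
        return fn()
    torch.cuda.empty_cache()
    torch.cuda.reset_peak_memory_stats()
    result = fn()
    peak_mib = torch.cuda.max_memory_allocated() / 2**20
    print(f"peak GPU memory: {peak_mib:.1f} MiB")
    return result

# Usage: report_peak_gpu_memory(lambda: model(batch)) with a real model and batch.
```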
a60640d7e2 Patchfix infinite loop (#335) 2022-04-29 08:34:37 -04:00
611546f12d Add guards for batch size finder (#334)
* Fix zero reached

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-04-28 16:34:07 -04:00
7d2a259e3d Fix fsdp config in cluster (#331)
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-04-28 16:01:28 -04:00
e5c17f36a8 Clean up tests + fix import (#330) 2022-04-28 13:37:02 -04:00
20de3fc959 v0.8.0.dev0 with setup 2022-04-28 11:27:50 -04:00
f84cb0c1fa v0.8.0.dev0 2022-04-28 11:27:39 -04:00
136437e3e8 Fix default config dicts (#329)
* Fix default config dicts

* style
2022-04-28 11:23:44 -04:00
272 changed files with 50416 additions and 5556 deletions

@@ -0,0 +1,29 @@
// File only needed for VSCode users to have proper Docker based interpreters
{
"name": "accelerate_dev_environment",
"build": {
// ACTION NEEDED: comment/uncomment the relevant line depending on whether you are in a CPU/GPU environment
"dockerfile": "../docker/accelerate-cpu/Dockerfile"
// "dockerfile": "../docker/accelerate-gpu/Dockerfile"
},
"runArgs": [
// ACTION NEEDED: uncomment the next line if your local machine has GPUs available
// "--gpus", "all",
// Enable the docker container to access system resources
"--ipc", "host"
],
"remoteEnv": {
"PYTHONPATH": "${containerEnv:PATH}:${containerWorkspaceFolder}"
},
"customizations": {
"vscode": {
"extensions": [
// Ensure we have IntelliSense in VSCode when running inside container
"ms-python.python"
]
}
},
"workspaceFolder": "/workspaces/accelerate",
// Need git for VSCode to color code modifications. Only runs when building environment.
"onCreateCommand": "apt-get update && apt-get install -y git && pip install -e '.[dev]'"
}

.github/ISSUE_TEMPLATE/bug-report.yml
@@ -0,0 +1,63 @@
name: "\U0001F41B Bug Report"
description: Submit a bug report to help us improve Accelerate
body:
- type: markdown
attributes:
value: |
Thanks for taking the time to submit a bug report! 🐛
If this is not a bug related to the Accelerate library directly, but instead a general question about your code or the library specifically please use the [forums](https://discuss.huggingface.co/c/accelerate/18).
- type: textarea
id: system-info
attributes:
label: System Info
description: Please share your accelerate configuration with us. You can run the command `accelerate env` and copy-paste its outputs below
render: Shell
placeholder: accelerate version, OS, python version, numpy version, torch version, and accelerate's configuration
validations:
required: true
- type: checkboxes
id: information-scripts-examples
attributes:
label: Information
description: 'The problem arises when using:'
options:
- label: "The official example scripts"
- label: "My own modified scripts"
- type: checkboxes
id: information-tasks
attributes:
label: Tasks
description: "The tasks I am working on are:"
options:
- label: "One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)"
- label: "My own task or dataset (give details below)"
- type: textarea
id: reproduction
validations:
required: true
attributes:
label: Reproduction
description: |
Please provide a code sample that reproduces the problem you ran into. It can be a Colab link or just a code snippet.
If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
placeholder: |
Steps to reproduce the behavior:
1.
2.
3.
- type: textarea
id: expected-behavior
validations:
required: true
attributes:
label: Expected behavior
description: "A clear and concise description of what you would expect to happen."

.github/PULL_REQUEST_TEMPLATE.md
@@ -0,0 +1,47 @@
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/accelerate/blob/main/CONTRIBUTING.md#submitting-a-pull-request-pr),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/accelerate/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/accelerate/tree/main/docs#writing-documentation---specification).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
- Big modeling: @SunMarc
- Fully-Sharded Data Parallism: @muellerzr
- DeepSpeed: @muellerzr
- Command Line Interface: @muellerzr
- Documentation: @muellerzr
- Core parts of the library: @muellerzr @BenjaminBossan @SunMarc
- Maintained examples: @muellerzr or @SunMarc
-->

@@ -0,0 +1,81 @@
name: Build Docker images (releases)
on:
workflow_dispatch:
release:
types: [published]
concurrency:
group: docker-image-builds
cancel-in-progress: false
jobs:
get-version:
runs-on: ubuntu-latest
outputs:
version: ${{ steps.step1.outputs.version }}
steps:
- uses: actions/checkout@v3.1.0
- id: step1
run: echo "version=$(python setup.py --version)" >> $GITHUB_OUTPUT
version-cpu:
name: "Latest Accelerate CPU [version]"
runs-on: [self-hosted, intel-cpu, 8-cpu, ci]
needs: get-version
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Build and Push CPU
uses: docker/build-push-action@v4
with:
file: docker/accelerate-cpu/Dockerfile
push: true
tags: huggingface/accelerate:cpu-release-${{ needs.get-version.outputs.version }}
version-cuda:
name: "Latest Accelerate GPU [version]"
runs-on: [self-hosted, single-gpu, nvidia-gpu, t4, ci]
needs: get-version
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Build and Push GPU
uses: docker/build-push-action@v4
with:
file: docker/accelerate-gpu/Dockerfile
push: true
tags: huggingface/accelerate:gpu-release-${{needs.get-version.outputs.version}}
version-cuda-deepspeed:
name: "Latest Accelerate GPU DeepSpeed [version]"
runs-on: [self-hosted, single-gpu, nvidia-gpu, t4, ci]
needs: get-version
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Build and Push GPU
uses: docker/build-push-action@v4
with:
file: docker/accelerate-gpu-deepspeed/Dockerfile
push: true
tags: huggingface/accelerate:gpu-deepspeed-release-${{needs.get-version.outputs.version}}

@@ -0,0 +1,50 @@
name: Trigger docker images and run tests
on:
push:
branches:
- main
workflow_dispatch:
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
jobs:
check-for-source:
runs-on: ubuntu-latest
name: Check if setup was changed
outputs:
changed: ${{ steps.was_changed.outputs.changed }}
steps:
- uses: actions/checkout@v3.1.0
with:
fetch-depth: "2"
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v41
- name: Was setup changed
id: was_changed
run: |
for file in ${{ steps.changed-files.outputs.all_changed_files }}; do
if [ `basename "${file}"` == "setup.py" ]; then
echo "changed=1" >> $GITHUB_OUTPUT
fi
done
build-docker-containers:
needs: check-for-source
if: (github.event_name == 'push') && (needs.check-for-source.outputs.changed == '1')
uses: ./.github/workflows/build_docker_images.yml
secrets: inherit
run-merge-tests:
needs: build-docker-containers
if: always()
uses: ./.github/workflows/run_merge_tests.yml
run-integration-tests:
needs: build-docker-containers
if: always()
uses: ./.github/workflows/self_hosted_integration_tests.yml

@@ -0,0 +1,85 @@
name: Build Docker images (scheduled)
on:
workflow_dispatch:
workflow_call:
schedule:
- cron: "0 1 * * *"
concurrency:
group: docker-image-builds
cancel-in-progress: false
jobs:
latest-cpu:
name: "Latest Accelerate CPU [dev]"
runs-on: [self-hosted, intel-cpu, 8-cpu, ci]
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Get current date
id: date
run: |
echo "date=$(date '+%Y-%m-%d')" >> $GITHUB_ENV
- name: Build and Push CPU
uses: docker/build-push-action@v4
with:
file: docker/accelerate-cpu/Dockerfile
push: true
tags: |
huggingface/accelerate:cpu-nightly
huggingface/accelerate:cpu-nightly-${{ env.date }}
latest-cuda:
name: "Latest Accelerate GPU [dev]"
runs-on: [self-hosted, nvidia-gpu, t4, ci]
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Get current date
id: date
run: |
echo "date=$(date '+%Y-%m-%d')" >> $GITHUB_ENV
- name: Build and Push GPU
uses: docker/build-push-action@v4
with:
file: docker/accelerate-gpu/Dockerfile
push: true
tags: |
huggingface/accelerate:gpu-nightly
huggingface/accelerate:gpu-nightly-${{ env.date }}
latest-cuda-deepspeed:
name: "Latest Accelerate GPU DeepSpeed [dev]"
runs-on: [self-hosted, nvidia-gpu, t4, ci]
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Get current date
id: date
run: |
echo "date=$(date '+%Y-%m-%d')" >> $GITHUB_ENV
- name: Build and Push GPU
uses: docker/build-push-action@v4
with:
file: docker/accelerate-gpu-deepspeed/Dockerfile
push: true
tags: |
huggingface/accelerate:gpu-deepspeed-nightly
huggingface/accelerate:gpu-deepspeed-nightly-${{ env.date }}

@@ -13,5 +13,6 @@ jobs:
with:
commit_sha: ${{ github.sha }}
package: accelerate
custom_container: huggingface/transformers-doc-builder
secrets:
token: ${{ secrets.HUGGINGFACE_PUSH }}
hf_token: ${{ secrets.HF_DOC_BUILD_PUSH }}

@@ -14,3 +14,4 @@ jobs:
commit_sha: ${{ github.event.pull_request.head.sha }}
pr_number: ${{ github.event.number }}
package: accelerate
custom_container: huggingface/transformers-doc-builder

@@ -1,13 +0,0 @@
name: Delete dev documentation
on:
pull_request:
types: [ closed ]
jobs:
delete:
uses: huggingface/doc-builder/.github/workflows/delete_doc_comment.yml@main
with:
pr_number: ${{ github.event.number }}
package: accelerate

.github/workflows/integration_tests.yml
@@ -0,0 +1,56 @@
# CI for specifically ensuring integrations work fine (`transformers` mainly)
# Useful tips:
# - New integrations to test should have its own job, and follow a strategy method where we check both
# the pypi and github versions.
# - When checking the latest release of the integration, use
# git checkout $(git describe --tags `git rev-list --tags --max-count=1`) to get the latest release.
name: Integration Tests
on:
pull_request:
paths:
- "src/**"
- "tests/**"
- ".github/**"
- "examples/**"
- "setup.py"
types: [opened, synchronize, reopened]
env:
HF_HOME: ~/hf_cache
jobs:
run-trainer-tests:
runs-on: ubuntu-latest
strategy:
fail-fast: false
steps:
- uses: actions/checkout@v3.1.0
- name: Set up python 3.8
uses: actions/setup-python@v3
with:
python-version: 3.8
- name: Install Accelerate from source
run: |
pip install --upgrade pip
pip install -e .
- name: Clone and install transformers
run: |
cd ..
git clone https://github.com/huggingface/transformers
cd transformers
pip install .[torch,testing]
- name: Show installed libraries
run: |
pip freeze
- name: Run Trainer tests
env:
WANDB_DISABLED: true
run: |
cd ../transformers
pytest -sv tests/trainer

.github/workflows/nightly.yml
@@ -0,0 +1,229 @@
name: Self-hosted runner with slow tests (scheduled)
on:
workflow_dispatch:
schedule:
- cron: "0 2 * * *"
env:
RUN_SLOW: "yes"
IS_GITHUB_CI: "1"
SLACK_API_TOKEN: ${{ secrets.SLACK_API_TOKEN }}
jobs:
run_core_tests_single_gpu:
runs-on: [self-hosted, single-gpu, nvidia-gpu, t4, ci]
env:
CUDA_VISIBLE_DEVICES: "0"
TEST_TYPE: "single_gpu"
container:
image: huggingface/accelerate:gpu-nightly
options: --gpus all --shm-size "16gb"
defaults:
run:
shell: bash
steps:
- name: Update clone & pip install
run: |
source activate accelerate
git clone https://github.com/huggingface/accelerate;
cd accelerate;
git checkout ${{ github.sha }};
pip install -e . --no-deps
pip install pytest-reportlog tabulate
- name: Show installed libraries
run: |
source activate accelerate;
pip freeze
- name: Run test on GPUs
working-directory: accelerate
run: |
source activate accelerate
make test
- name: Run examples on GPUs
working-directory: accelerate
if: always()
run: |
source activate accelerate
pip uninstall comet_ml -y
make test_examples
- name: Generate Report
working-directory: accelerate
if: always()
run: |
pip install slack_sdk tabulate
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
run_deepspeed_tests_single_gpu:
runs-on: [self-hosted, single-gpu, nvidia-gpu, t4, ci]
env:
CUDA_VISIBLE_DEVICES: "0"
TEST_TYPE: "single_gpu_deepspeed"
container:
image: huggingface/accelerate:gpu-deepspeed-nightly
options: --gpus all --shm-size "16gb"
defaults:
run:
shell: bash
steps:
- name: Update clone & pip install
run: |
source activate accelerate
git clone https://github.com/huggingface/accelerate;
cd accelerate;
git checkout ${{ github.sha }};
pip install -e . --no-deps
pip install pytest-reportlog tabulate
- name: Show installed libraries
run: |
source activate accelerate;
pip freeze
- name: Run test on GPUs
working-directory: accelerate
run: |
source activate accelerate
make test_deepspeed
- name: Run Integration tests on GPUs
working-directory: accelerate
if: always()
run: |
source activate accelerate
make test_integrations
- name: Run examples on GPUs
working-directory: accelerate
if: always()
run: |
source activate accelerate
pip uninstall comet_ml -y
make test_examples
- name: Generate Report
working-directory: accelerate
if: always()
run: |
pip install slack_sdk tabulate
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
run_core_tests_multi_gpu:
runs-on: [self-hosted, multi-gpu, nvidia-gpu, t4, ci]
env:
CUDA_VISIBLE_DEVICES: "0,1"
TEST_TYPE: "multi_gpu"
container:
image: huggingface/accelerate:gpu-nightly
options: --gpus all --shm-size "16gb"
defaults:
run:
shell: bash
steps:
- name: Update clone
run: |
source activate accelerate
git clone https://github.com/huggingface/accelerate;
cd accelerate;
git checkout ${{ github.sha }};
pip install -e . --no-deps
pip install pytest-reportlog tabulate
- name: Show installed libraries
run: |
source activate accelerate;
pip freeze
- name: Run core and big modeling tests on GPUs
working-directory: accelerate
run: |
source activate accelerate
make test_core
make test_big_modeling
make test_cli
- name: Run Integration tests on GPUs
working-directory: accelerate
if: always()
run: |
source activate accelerate
make test_integrations
- name: Run examples on GPUs
working-directory: accelerate
if: always()
run: |
source activate accelerate
pip uninstall comet_ml -y
make test_examples
- name: Generate Report
working-directory: accelerate
if: always()
run: |
pip install slack_sdk tabulate
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
run_deepspeed_tests_multi_gpu:
runs-on: [self-hosted, multi-gpu, nvidia-gpu, t4, ci]
env:
CUDA_VISIBLE_DEVICES: "0,1"
TEST_TYPE: "multi_gpu_deepspeed"
container:
image: huggingface/accelerate:gpu-deepspeed-nightly
options: --gpus all --shm-size "16gb"
defaults:
run:
shell: bash
steps:
- name: Update clone
run: |
source activate accelerate
git clone https://github.com/huggingface/accelerate;
cd accelerate;
git checkout ${{ github.sha }};
pip install -e . --no-deps
pip install pytest-reportlog tabulate
- name: Show installed libraries
run: |
source activate accelerate;
pip freeze
- name: Run DeepSpeed tests
working-directory: accelerate
run: |
source activate accelerate
make test_deepspeed
- name: Run Integration tests on GPUs
working-directory: accelerate
if: always()
run: |
source activate accelerate
make test_integrations
- name: Run examples on GPUs
working-directory: accelerate
if: always()
run: |
source activate accelerate
pip uninstall comet_ml -y
make test_examples
- name: Generate Report
working-directory: accelerate
if: always()
run: |
pip install slack_sdk tabulate
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
run-integration-tests:
if: always()
uses: ./.github/workflows/self_hosted_integration_tests.yml

@@ -6,12 +6,17 @@ jobs:
quality:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Python 3.6
uses: actions/setup-python@v2
- uses: actions/checkout@v3.1.0
- name: Set up Python 3.8
uses: actions/setup-python@v3
with:
python-version: 3.6
python-version: 3.8
- name: Install Python dependencies
run: pip install -e .[quality]
- name: Run Quality check
run: make quality
run: make quality
- name: Check if failure
if: ${{ failure() }}
run: |
echo "Quality check failed. Please ensure the right dependency versions are installed with 'pip install -e .[quality]' and rerun 'make style; make quality;'" >> $GITHUB_STEP_SUMMARY

.github/workflows/run_merge_tests.yml
@@ -0,0 +1,184 @@
name: Self-hosted runner tests (push to "main")
on:
workflow_call:
workflow_dispatch:
env:
TESTING_MOCKED_DATALOADERS: "1"
IS_GITHUB_CI: "1"
jobs:
run_core_tests_single_gpu:
runs-on: [self-hosted, single-gpu, nvidia-gpu, t4, ci]
env:
CUDA_VISIBLE_DEVICES: "0"
container:
image: huggingface/accelerate:gpu-nightly
options: --gpus all --shm-size "16gb"
defaults:
run:
shell: bash
steps:
- name: Install accelerate
run: |
source activate accelerate;
git clone https://github.com/huggingface/accelerate;
cd accelerate;
git checkout ${{ github.sha }};
pip install -e .[testing,test_trackers] -U;
pip install pytest-reportlog tabulate ;
- name: Show installed libraries
run: |
source activate accelerate;
pip freeze
- name: Run CLI tests (use make cli)
working-directory: accelerate
run: |
source activate accelerate;
make test_cli
- name: Run test on GPUs
working-directory: accelerate
if: always()
run: |
source activate accelerate;
make test
- name: Run examples on GPUs
working-directory: accelerate
if: always()
run: |
source activate accelerate;
pip uninstall comet_ml -y;
make test_examples
- name: Generate Report
working-directory: accelerate
if: always()
run: |
pip install tabulate;
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
run_deepspeed_tests_single_gpu:
runs-on: [self-hosted, single-gpu, nvidia-gpu, t4, ci]
env:
CUDA_VISIBLE_DEVICES: "0"
container:
image: huggingface/accelerate:gpu-deepspeed-nightly
options: --gpus all --shm-size "16gb"
defaults:
run:
shell: bash
steps:
- name: Install accelerate
run: |
source activate accelerate;
git clone https://github.com/huggingface/accelerate;
cd accelerate;
git checkout ${{ github.sha }};
pip install -e .[testing,test_trackers] -U;
pip install pytest-reportlog tabulate ;
- name: Show installed libraries
run: |
source activate accelerate;
pip freeze
- name: Run test on GPUs
working-directory: accelerate
if: always()
run: |
source activate accelerate;
make test_deepspeed
- name: Generate Report
working-directory: accelerate
if: always()
run: |
pip install tabulate;
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
run_core_tests_multi_gpu:
runs-on: [self-hosted, multi-gpu, nvidia-gpu, t4, ci]
env:
CUDA_VISIBLE_DEVICES: 0,1
container:
image: huggingface/accelerate:gpu-nightly
options: --gpus all --shm-size "16gb"
defaults:
run:
shell: bash
steps:
- name: Update clone
run: |
source activate accelerate;
git clone https://github.com/huggingface/accelerate;
cd accelerate;
git checkout ${{ github.sha }};
pip install -e .[testing,test_trackers] -U;
pip install pytest-reportlog tabulate
- name: Show installed libraries
run: |
source activate accelerate;
pip freeze
- name: Run test on GPUs
working-directory: accelerate
run: |
source activate accelerate;
make test
- name: Run examples on GPUs
working-directory: accelerate
if: always()
run: |
source activate accelerate;
pip uninstall comet_ml -y;
make test_examples
- name: Generate Report
working-directory: accelerate
if: always()
run: |
source activate accelerate;
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
run_deepspeed_tests_multi_gpu:
runs-on: [self-hosted, multi-gpu, nvidia-gpu, t4, ci]
container:
image: huggingface/accelerate:gpu-deepspeed-nightly
options: --gpus all --shm-size "16gb"
defaults:
run:
shell: bash
steps:
- name: Install accelerate
run: |
source activate accelerate;
git clone https://github.com/huggingface/accelerate;
cd accelerate;
git checkout ${{ github.sha }};
pip install -e .[testing,test_trackers] -U;
pip install pytest-reportlog tabulate ;
- name: Show installed libraries
run: |
source activate accelerate;
pip freeze
- name: Run test on GPUs
working-directory: accelerate
if: always()
run: |
source activate accelerate;
make test_deepspeed
- name: Generate Report
working-directory: accelerate
if: always()
run: |
pip install tabulate;
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY

@@ -0,0 +1,125 @@
# CI for specifically ensuring integrations work fine (`transformers` mainly) on GPUs
# Useful tips:
# - `working-directory` should be set to the root of the repo, which is cloned on the actual CI runner.
# It follows the directory structure of `actions-runner/_work/{repo_name}/{repo_name}/{cloned_repo} on
# prem, but in Actions setting `working-directory` looks just in the `{repo_name}` level.
# - New integrations to test should have its own job, and follow a strategy method where we check both
# the pypi and github versions.
# - Workflow call lets this be called from `build_and_run_tests.yml`
# - When using a docker container, it's recommended to set `--shm-size`, we use 16gb.
name: Integration Tests (push to "main")
on:
workflow_call:
workflow_dispatch:
env:
HF_HOME: ~/hf_cache
defaults:
run:
shell: bash
jobs:
run-trainer-tests:
container:
image: huggingface/accelerate:gpu-deepspeed-nightly
options: --gpus all --shm-size "16gb"
runs-on: [self-hosted, multi-gpu, nvidia-gpu, t4, ci]
strategy:
fail-fast: false
matrix:
cuda_visible_devices: [
"0",
"0,1"
]
steps:
- name: Install transformers
run: |
source activate accelerate;
git clone https://github.com/huggingface/transformers --depth 1;
cd transformers;
pip install .[torch,deepspeed-testing];
cd ..;
- name: Install accelerate
run: |
source activate accelerate;
git clone https://github.com/huggingface/accelerate;
cd accelerate;
git checkout ${{ github.sha }} ;
pip install -e .[testing];
pip uninstall comet_ml wandb dvclive -y
cd ..;
- name: Show installed libraries
run: |
source activate accelerate;
pip freeze
- name: Run trainer tests
working-directory: transformers/
env:
CUDA_VISIBLE_DEVICES: ${{ matrix.cuda_visible_devices }}
WANDB_DISABLED: true
run: |
source activate accelerate;
pytest -sv tests/trainer
- name: Run deepspeed tests
working-directory: transformers/
env:
CUDA_VISIBLE_DEVICES: ${{ matrix.cuda_visible_devices }}
WANDB_DISABLED: true
if: always()
run: |
source activate accelerate;
pytest -sv tests/deepspeed
- name: Run transformers examples tests
working-directory: transformers/
env:
CUDA_VISIBLE_DEVICES: ${{ matrix.cuda_visible_devices }}
WANDB_DISABLED: true
run: |
source activate accelerate
pip install -r examples/pytorch/_tests_requirements.txt
pytest -sv examples/pytorch/test_accelerate_examples.py examples/pytorch/test_pytorch_examples.py
run-skorch-tests:
container:
image: huggingface/accelerate:gpu-nightly
options: --gpus all --shm-size "16gb"
runs-on: [self-hosted, multi-gpu, nvidia-gpu, t4, ci]
strategy:
fail-fast: false
steps:
- name: Install accelerate
run:
source activate accelerate;
git clone https://github.com/huggingface/accelerate;
cd accelerate;
git checkout ${{ github.sha }};
pip install -e .[testing];
cd ..
- name: Install skorch
run: |
source activate accelerate
git clone https://github.com/skorch-dev/skorch;
cd skorch;
git config --global --add safe.directory '*'
git checkout master && git pull
pip install .[testing]
pip install flaky
- name: Show installed libraries
run: |
source activate accelerate;
pip freeze
- name: Run skorch tests
working-directory: skorch/
run: |
source activate accelerate;
pytest -sv -k TestAccelerate

.github/workflows/stale.yml
@@ -0,0 +1,28 @@
name: Stale Bot
on:
schedule:
- cron: "0 15 * * *"
workflow_dispatch:
jobs:
close_stale_issues:
name: Close Stale Issues
if: github.repository == 'huggingface/accelerate'
runs-on: ubuntu-latest
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
steps:
- uses: actions/checkout@v3.1.0
- name: Setup Python
uses: actions/setup-python@v3
with:
python-version: 3.8
- name: Install requirements
run: |
pip install PyGithub
- name: Close stale issues
run: |
python utils/stale.py

@@ -1,30 +1,68 @@
name: Run Tests
on: [pull_request]
on:
pull_request:
paths:
- "src/**"
- "tests/**"
- ".github/**"
- "examples/**"
- "setup.py"
types: [opened, synchronize, reopened]
env:
HF_HOME: ~/hf_cache
TESTING_MOCKED_DATALOADERS: "1"
IS_GITHUB_CI: "1"
jobs:
test:
run-tests:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
pytorch-version: [
latest,
minimum,
]
test-kind: [
test_prod,
test_core,
test_cli,
test_big_modeling,
test_deepspeed,
test_fsdp,
test_example_differences,
test_checkpoint_step,
test_checkpoint_epoch,
test_rest
]
steps:
- uses: actions/checkout@v2
- name: Set up Python 3.6
uses: actions/setup-python@v2
- uses: actions/checkout@v3.1.0
- name: Set up python 3.8
uses: actions/setup-python@v3
with:
python-version: 3.6
- name: Install Python dependencies
run: pip install setuptools==59.5.0; pip install -e .[test,test_trackers]
python-version: 3.8
- name: Install the library
run: |
if [[ ${{ matrix.test-kind }} = test_prod ]]; then pip install -e .[test_prod]; fi
if [[ ${{ matrix.test-kind }} != test_prod ]]; then pip install -e .[testing,test_trackers]; fi
if [[ ${{ matrix.test-kind }} = test_rest ]]; then pip uninstall comet_ml -y; fi
if [[ ${{ matrix.test-kind }} = minimum ]]; then pip install torch==1.10.0; fi
pip install pytest-reportlog tabulate setuptools
- name: Show installed libraries
run: |
pip freeze
- name: Run Tests
run: make test
test_examples:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Python 3.6
uses: actions/setup-python@v2
with:
python-version: 3.6
- name: Install Python dependencies
run: pip install setuptools==59.5.0; pip install -e .[test] tensorboard
- name: Run Tests
run: make test_examples
env:
PYTORCH_VERSION: ${{ matrix.pytorch-version }}
run: |
make ${{ matrix.test-kind }}
- name: Generate Report
if: always()
run: |
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY

.github/workflows/test_imports.yml
@@ -0,0 +1,53 @@
name: Run Import Tests
on:
pull_request:
paths:
- "src/**"
- "tests/**"
- ".github/**"
- "examples/**"
- "setup.py"
types: [opened, synchronize, reopened]
env:
HF_HOME: ~/hf_cache
TESTING_MOCKED_DATALOADERS: "1"
IS_GITHUB_CI: "1"
jobs:
run-tests:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
pytorch-version: [
latest,
minimum,
]
steps:
- uses: actions/checkout@v3.1.0
- name: Set up python 3.8
uses: actions/setup-python@v3
with:
python-version: 3.8
- name: Install the library
run: |
pip install -e .
pip install pytest-reportlog tabulate setuptools git+https://github.com/muellerzr/import-timer
- name: Show installed libraries
run: |
pip freeze
- name: Run Import Tests
env:
PYTORCH_VERSION: ${{ matrix.pytorch-version }}
run: |
pytest -sv tests/test_imports.py
- name: Generate Report
if: always()
run: |
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY

.github/workflows/trufflehog.yml
@@ -0,0 +1,15 @@
on:
push:
name: Secret Leaks
jobs:
trufflehog:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Secret Scanning
uses: trufflesecurity/trufflehog@main

@@ -0,0 +1,16 @@
name: Upload PR Documentation
on:
workflow_run:
workflows: ["Build PR Documentation"]
types:
- completed
jobs:
build:
uses: huggingface/doc-builder/.github/workflows/upload_pr_documentation.yml@main
with:
package_name: accelerate
secrets:
hf_token: ${{ secrets.HF_DOC_BUILD_PUSH }}
comment_bot_token: ${{ secrets.COMMENT_BOT_TOKEN }}

.gitignore
@@ -135,4 +135,10 @@ dmypy.json
.idea
# Mac .DS_Store
.DS_Store
.DS_Store
# More test things
wandb
# ruff
.ruff_cache

.pre-commit-config.yaml
@@ -0,0 +1,13 @@
repos:
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.2.1
hooks:
- id: ruff
args:
- --fix
- id: ruff-format
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.5.0
hooks:
- id: check-merge-conflict
- id: check-yaml

@@ -130,6 +130,9 @@ Follow these steps to start contributing:
it with `pip uninstall accelerate` before reinstalling it in editable
mode with the `-e` flag.)
Alternatively, if you are using [Visual Studio Code](https://code.visualstudio.com/Download), the fastest way to get set up is by using
the provided Dev Container. Documentation on how to get started with dev containers is available [here](https://code.visualstudio.com/docs/remote/containers).
5. Develop the features on your branch.
As you work on the features, you should make sure that the test suite
@@ -149,7 +152,7 @@ Follow these steps to start contributing:
$ make test
```
`accelerate` relies on `black` and `isort` to format its source code
`accelerate` relies on `ruff` to format its source code
consistently. After you make changes, apply automatic style corrections and code verifications
that can't be automated in one go with:
@@ -162,13 +165,21 @@ Follow these steps to start contributing:
$ make style
```
`accelerate` also uses `flake8` and a few custom scripts to check for coding mistakes. Quality
`accelerate` also uses a few custom scripts to check for coding mistakes. Quality
control runs in CI, however you can also run the same checks with:
```bash
$ make quality
```
You can also set up [`pre-commit`](https://pre-commit.com/) to run these checks
automatically as Git commit hooks.
```bash
$ pip install pre-commit
$ pre-commit install
```
Once you're happy with your changes, add changed files using `git add` and
make a commit with `git commit` to record your changes locally:
@@ -232,4 +243,4 @@ $ python -m pytest -sv ./tests
In fact, that's how `make test` is implemented (sans the `pip install` line)!
You can specify a smaller set of tests in order to test only the feature
you're working on.
you're working on.

@@ -1,6 +1,6 @@
.PHONY: quality style test docs
.PHONY: quality style test docs utils
check_dirs := tests src examples
check_dirs := .
# Check that source code meets quality standards
@@ -8,24 +8,65 @@ extra_quality_checks:
python utils/check_copies.py
python utils/check_dummies.py
python utils/check_repo.py
python utils/style_doc.py src/accelerate docs/source --max_len 119
doc-builder style src/accelerate docs/source --max_len 119
# this target runs checks on all files
quality:
black --check $(check_dirs)
isort --check-only $(check_dirs)
flake8 $(check_dirs)
python utils/style_doc.py src/accelerate docs/source --max_len 119 --check_only
ruff check $(check_dirs)
ruff format --check $(check_dirs)
doc-builder style src/accelerate docs/source --max_len 119 --check_only
# Format source code automatically and check is there are any problems left that need manual fixing
style:
black $(check_dirs)
isort $(check_dirs)
python utils/style_doc.py src/accelerate docs/source --max_len 119
ruff check $(check_dirs) --fix
ruff format $(check_dirs)
doc-builder style src/accelerate docs/source --max_len 119
# Run tests for the library
test_big_modeling:
python -m pytest -s -v ./tests/test_big_modeling.py ./tests/test_modeling_utils.py $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_big_modeling.log",)
test_core:
python -m pytest -s -v ./tests/ --ignore=./tests/test_examples.py --ignore=./tests/deepspeed --ignore=./tests/test_big_modeling.py \
--ignore=./tests/fsdp --ignore=./tests/test_cli.py $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_core.log",)
test_cli:
python -m pytest -s -v ./tests/test_cli.py $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_cli.log",)
test_deepspeed:
python -m pytest -s -v ./tests/deepspeed $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_deepspeed.log",)
test_fsdp:
python -m pytest -s -v ./tests/fsdp $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_fsdp.log",)
# Since the new version of pytest will *change* how things are collected, we need `deepspeed` to
# run after test_core and test_cli
test:
python -m pytest -n auto --dist=loadfile -s -v ./tests/ --ignore=./tests/test_examples.py
$(MAKE) test_core
$(MAKE) test_cli
$(MAKE) test_big_modeling
$(MAKE) test_deepspeed
$(MAKE) test_fsdp
test_examples:
python -m pytest -n auto --dist=loadfile -s -v ./tests/test_examples.py
python -m pytest -s -v ./tests/test_examples.py $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_examples.log",)
# Broken down example tests for the CI runners
test_integrations:
python -m pytest -s -v ./tests/deepspeed ./tests/fsdp $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_integrations.log",)
test_example_differences:
python -m pytest -s -v ./tests/test_examples.py::ExampleDifferenceTests $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_example_diff.log",)
test_checkpoint_epoch:
python -m pytest -s -v ./tests/test_examples.py::FeatureExamplesTests -k "by_epoch" $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_checkpoint_epoch.log",)
test_checkpoint_step:
python -m pytest -s -v ./tests/test_examples.py::FeatureExamplesTests -k "by_step" $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_checkpoint_step.log",)
# Same as test but used to install only the base dependencies
test_prod:
$(MAKE) test_core
test_rest:
python -m pytest -s -v ./tests/test_examples.py::FeatureExamplesTests -k "not by_step and not by_epoch" $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_rest.log",)

README.md
@@ -16,28 +16,18 @@ limitations under the License.
<p align="center">
<br>
<img src="docs/source/imgs/accelerate_logo.png" width="400"/>
<img src="https://raw.githubusercontent.com/huggingface/accelerate/main/docs/source/imgs/accelerate_logo.png" width="400"/>
<br>
<p>
<p align="center">
<!-- Uncomment when CircleCI is setup
<a href="https://circleci.com/gh/huggingface/accelerate">
<img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/master">
</a>
<!-- Uncomment when CircleCI is set up
<a href="https://circleci.com/gh/huggingface/accelerate"><img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/master"></a>
-->
<a href="https://github.com/huggingface/accelerate/blob/main/LICENSE">
<img alt="License" src="https://img.shields.io/github/license/huggingface/accelerate.svg?color=blue">
</a>
<a href="https://huggingface.co/docs/accelerate/index.html">
<img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/accelerate/index.html.svg?down_color=red&down_message=offline&up_message=online">
</a>
<a href="https://github.com/huggingface/accelerate/releases">
<img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/accelerate.svg">
</a>
<a href="https://github.com/huggingface/accelerate/blob/main/CODE_OF_CONDUCT.md">
<img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
</a>
<a href="https://github.com/huggingface/accelerate/blob/main/LICENSE"><img alt="License" src="https://img.shields.io/github/license/huggingface/accelerate.svg?color=blue"></a>
<a href="https://huggingface.co/docs/accelerate/index.html"><img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/accelerate/index.html.svg?down_color=red&down_message=offline&up_message=online"></a>
<a href="https://github.com/huggingface/accelerate/releases"><img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/accelerate.svg"></a>
<a href="https://github.com/huggingface/accelerate/blob/main/CODE_OF_CONDUCT.md"><img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"></a>
</p>
<h3 align="center">
@@ -91,7 +81,7 @@ Here is an example:
optimizer.step()
```
As you can see in this example, by adding 5-lines to any standard PyTorch training script you can now run on any kind of single or distributed node setting (single CPU, single GPU, multi-GPUs and TPUs) as well as with or without mixed precision (fp16).
As you can see in this example, by adding 5-lines to any standard PyTorch training script you can now run on any kind of single or distributed node setting (single CPU, single GPU, multi-GPUs and TPUs) as well as with or without mixed precision (fp8, fp16, bf16).
In particular, the same code can then be run without modification on your local machine for debugging or your training environment.
@@ -132,11 +122,11 @@ In particular, the same code can then be run without modification on your local
optimizer.step()
```
Want to learn more? Check out the [documentation](https://huggingface.co/docs/accelerate) or have look at our [examples](https://github.com/huggingface/accelerate/tree/main/examples).
Want to learn more? Check out the [documentation](https://huggingface.co/docs/accelerate) or have a look at our [examples](https://github.com/huggingface/accelerate/tree/main/examples).
## Launching script
🤗 Accelerate also provides an optional CLI tool that allows you to quickly configure and test your training environment before launching the scripts. No need to remember how to use `torch.distributed.launch` or to write a specific launcher for TPU training!
🤗 Accelerate also provides an optional CLI tool that allows you to quickly configure and test your training environment before launching the scripts. No need to remember how to use `torch.distributed.run` or to write a specific launcher for TPU training!
On your machine(s) just run:
```bash
@@ -155,28 +145,46 @@ For instance, here is how you would run the GLUE example on the MRPC task (from
accelerate launch examples/nlp_example.py
```
This CLI tool is **optional**, and you can still use `python my_script.py` or `python -m torch.distributed.launch my_script.py` at your convenance.
This CLI tool is **optional**, and you can still use `python my_script.py` or `python -m torch.distributed.run my_script.py` at your convenience.
You can also directly pass in the arguments you would to `torchrun` as arguments to `accelerate launch` if you wish to not run `accelerate config`.
For example, here is how to launch on two GPUs:
```bash
accelerate launch --multi_gpu --num_processes 2 examples/nlp_example.py
```
To learn more, check the CLI documentation available [here](https://huggingface.co/docs/accelerate/package_reference/cli).
## Launching multi-CPU run using MPI
🤗 Here is another way to launch a multi-CPU run using MPI. You can learn how to install Open MPI on [this page](https://www.open-mpi.org/faq/?category=building#easy-build). You can use Intel MPI or MVAPICH as well.
Once you have MPI set up on your cluster, just run:
```bash
accelerate config
```
Answer the questions that are asked, selecting to run using multi-CPU, and answer "yes" when asked if you want accelerate to launch mpirun.
Then, use `accelerate launch` with your script like:
```bash
accelerate launch examples/nlp_example.py
```
Alternatively, you can use mpirun directly, without using the CLI, like:
```bash
mpirun -np 2 python examples/nlp_example.py
```
## Launching training using DeepSpeed
🤗 Accelerate supports training on single/multiple GPUs using DeepSpeed. To use it, you don't need to change anything in your training code; you can set everything using just `accelerate config`. However, if you desire to tweak your DeepSpeed related args from your python script, we provide you the `DeepSpeedPlugin`.
🤗 Accelerate supports training on single/multiple GPUs using DeepSpeed. To use it, you don't need to change anything in your training code; you can set everything using just `accelerate config`. However, if you desire to tweak your DeepSpeed related args from your Python script, we provide you the `DeepSpeedPlugin`.
```python
from accelerator import Accelerator, DeepSpeedPlugin
from accelerate import Accelerator, DeepSpeedPlugin
# deepspeed needs to know your gradient accumulation steps before hand, so don't forget to pass it
# deepspeed needs to know your gradient accumulation steps beforehand, so don't forget to pass it
# Remember you still need to do gradient accumulation by yourself, just like you would have done without deepspeed
deepspeed_plugin = DeepSpeedPlugin(zero_stage=2, gradient_accumulation_steps=2)
accelerator = Accelerator(fp16=True, deepspeed_plugin=deepspeed_plugin)
accelerator = Accelerator(mixed_precision='fp16', deepspeed_plugin=deepspeed_plugin)
# How to save your 🤗 Transformer?
accelerator.wait_for_everyone()
@@ -196,11 +204,11 @@ from accelerate import notebook_launcher
notebook_launcher(training_function)
```
An example can be found in [this notebook](https://github.com/huggingface/notebooks/blob/master/examples/accelerate/simple_nlp_example.ipynb). [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/accelerate/simple_nlp_example.ipynb)
An example can be found in [this notebook](https://github.com/huggingface/notebooks/blob/main/examples/accelerate_examples/simple_nlp_example.ipynb). [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/accelerate_examples/simple_nlp_example.ipynb)
## Why should I use 🤗 Accelerate?
You should use 🤗 Accelerate when you want to easily run your training scripts in a distributed environment without having to renounce full control over your training loop. This is not a high-level framework above PyTorch, just a thin wrapper so you don't have to learn a new library, In fact the whole API of 🤗 Accelerate is in one class, the `Accelerator` object.
You should use 🤗 Accelerate when you want to easily run your training scripts in a distributed environment without having to renounce full control over your training loop. This is not a high-level framework above PyTorch, just a thin wrapper so you don't have to learn a new library. In fact, the whole API of 🤗 Accelerate is in one class, the `Accelerator` object.
## Why shouldn't I use 🤗 Accelerate?
@@ -208,17 +216,25 @@ You shouldn't use 🤗 Accelerate if you don't want to write a training loop you
## Frameworks using 🤗 Accelerate
If you like the simplicity of 🤗 Accelerate but would prefer a higher-level abstraction around your training loop, some frameworks that are built on top of 🤗 Accelerate are listed below:
If you like the simplicity of 🤗 Accelerate but would prefer a higher-level abstraction around its capabilities, some frameworks and libraries that are built on top of 🤗 Accelerate are listed below:
* [Amphion](https://github.com/open-mmlab/Amphion) is a toolkit for Audio, Music, and Speech Generation. Its purpose is to support reproducible research and help junior researchers and engineers get started in the field of audio, music, and speech generation research and development.
* [Animus](https://github.com/Scitator/animus) is a minimalistic framework to run machine learning experiments. Animus highlights common "breakpoints" in ML experiments and provides a unified interface for them within [IExperiment](https://github.com/Scitator/animus/blob/main/animus/core.py#L76).
* [Catalyst](https://github.com/catalyst-team/catalyst#getting-started) is a PyTorch framework for Deep Learning Research and Development. It focuses on reproducibility, rapid experimentation, and codebase reuse so you can create something new rather than write yet another train loop. Catalyst provides a [Runner](https://catalyst-team.github.io/catalyst/api/core.html#runner) to connect all parts of the experiment: hardware backend, data transformations, model train, and inference logic.
* [Catalyst](https://github.com/catalyst-team/catalyst#getting-started) is a PyTorch framework for Deep Learning Research and Development. It focuses on reproducibility, rapid experimentation, and codebase reuse so you can create something new rather than write yet another train loop. Catalyst provides a [Runner](https://catalyst-team.github.io/catalyst/api/core.html#runner) to connect all parts of the experiment: hardware backend, data transformations, model training, and inference logic.
* [fastai](https://github.com/fastai/fastai#installing) is a PyTorch framework for Deep Learning that simplifies training fast and accurate neural nets using modern best practices. fastai provides a [Learner](https://docs.fast.ai/learner.html#Learner) to handle the training, fine-tuning, and inference of deep learning algorithms.
* [Finetuner](https://github.com/jina-ai/finetuner) is a service that enables models to create higher-quality embeddings for semantic search, visual similarity search, cross-modal text<->image search, recommendation systems, clustering, duplication detection, anomaly detection, or other uses.
* [InvokeAI](https://github.com/invoke-ai/InvokeAI) is a creative engine for Stable Diffusion models, offering industry-leading WebUI, terminal usage support, and serves as the foundation for many commercial products.
* [Kornia](https://kornia.readthedocs.io/en/latest/get-started/introduction.html) is a differentiable library that allows classical computer vision to be integrated into deep learning models. Kornia provides a [Trainer](https://kornia.readthedocs.io/en/latest/x.html#kornia.x.Trainer) with the specific purpose to train and fine-tune the supported deep learning algorithms within the library.
* [pytorch-accelerated](https://github.com/Chris-hughes10/pytorch-accelerated) is a lightweight training library, with a streamlined feature set centred around a general-purpose [Trainer](https://pytorch-accelerated.readthedocs.io/en/latest/trainer.html), that places a huge emphasis on simplicity and transparency; enabling users to understand exactly what is going on under the hood, but without having to write and maintain the boilerplate themselves!
* [Open Assistant](https://projects.laion.ai/Open-Assistant/) is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
* [pytorch-accelerated](https://github.com/Chris-hughes10/pytorch-accelerated) is a lightweight training library, with a streamlined feature set centered around a general-purpose [Trainer](https://pytorch-accelerated.readthedocs.io/en/latest/trainer.html), that places a huge emphasis on simplicity and transparency; enabling users to understand exactly what is going on under the hood, but without having to write and maintain the boilerplate themselves!
* [Stable Diffusion web UI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) is an open-source browser-based easy-to-use interface based on the Gradio library for Stable Diffusion.
* [torchkeras](https://github.com/lyhue1991/torchkeras) is a simple tool for training PyTorch models in a Keras style; a dynamic and beautiful plot is provided in the notebook to monitor your loss or metric.
* [transformers](https://github.com/huggingface/transformers) is a tool for helping train state-of-the-art machine learning models in PyTorch, TensorFlow, and JAX (Accelerate is the backend for the PyTorch side).
## Installation
This repository is tested on Python 3.6+ and PyTorch 1.4.0+
This repository is tested on Python 3.8+ and PyTorch 1.10.0+
You should install 🤗 Accelerate in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're unfamiliar with Python virtual environments, check out the [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
@@ -239,5 +255,21 @@ pip install accelerate
- multi-GPU on one node (machine)
- multi-GPU on several nodes (machines)
- TPU
- FP16 with native AMP (apex on the roadmap)
- DeepSpeed support (experimental)
- FP16/BFloat16 mixed precision
- FP8 mixed precision with [Transformer Engine](https://github.com/NVIDIA/TransformerEngine)
- DeepSpeed support (Experimental)
- PyTorch Fully Sharded Data Parallel (FSDP) support (Experimental)
- Megatron-LM support (Experimental)
## Citing 🤗 Accelerate
If you use 🤗 Accelerate in your publication, please cite it by using the following BibTeX entry.
```bibtex
@Misc{accelerate,
title = {Accelerate: Training and inference at scale made simple, efficient and adaptable.},
author = {Sylvain Gugger and Lysandre Debut and Thomas Wolf and Philipp Schmid and Zachary Mueller and Sourab Mangrulkar and Marc Sun and Benjamin Bossan},
howpublished = {\url{https://github.com/huggingface/accelerate}},
year = {2022}
}
```

benchmarks/README.md

@@ -0,0 +1,46 @@
# Big model inference benchmarks
Running inference with Accelerate on big models.
## Setup
These benchmarks use the `transformers` library:
```bash
pip install transformers
```
To reproduce or test a new setup, run
```bash
python inference_acc.py model_name
```
This script supports `gpt-j-6b`, `gpt-neox`, `opt` (30B version) and `T0pp` out of the box, but you can specify any valid checkpoint for `model_name`.
To force a different `torch_dtype` than the one in the config: `--torch_dtype xxx`.
If you get an error linked to disk offload, you need to add the option `--disk_offload`.
## Results
On a setup with two Titan RTX GPUs (24GB of memory each) and 32GB of system RAM, we get the following benchmarks (T0pp does not run in float16, which is why it's not included).
| Model | Model load time | Generation time | dtype | GPU 0 use | GPU 1 use | CPU use | Disk offload |
|:-----:|:---------------:|:---------------:|:-----:|:---------:|:---------:|:-------:|:------------:|
| GPT-J-6B | 8.7s | 0.05s per token | float16 | 11.7GB | 0GB | 0GB | no |
| GPT-J-6B | 12.4s | 0.06s per token | float32 | 21.9GB | 1.5GB | 0GB | no |
| GPT-Neo-X-20B | 30.9s | 0.08s per token | float16 | 21.5GB | 18GB | 0GB | no |
| GPT-Neo-X-20B | 78.2s | 10.72s per token | float32 | 20.3GB | 22.7GB | 24.4GB | yes |
| T0pp (11B) | 29.4s | 0.05s per token | float32 | 21.1GB | 21.3GB | 0GB | no |
| OPT-30B | 34.5s | 2.37s per token | float16 | 20.7GB | 22.3GB | 14.1GB | no |
| OPT-30B | 112.3s | 33.9s per token | float32 | 20.2GB | 21.2GB | 23.5GB | yes |
Note on the results:
- using two GPUs instead of one does not slow down generation
- using CPU offload slows down a bit (see OPT-30b)
- using disk offload slows down a lot (need to implement prefetching)
You will also note that Accelerate does not use any more GPU and CPU RAM than necessary:
- peak GPU memory is exactly the size of the model put on a given GPU
- peak CPU memory is either the size of the biggest checkpoint shard or the part of the model offloaded on CPU, whichever is bigger.


@@ -0,0 +1,143 @@
# Copyright 2022 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import time
import torch
import transformers
from measures_util import end_measure, log_measures, start_measure
from transformers import AutoConfig, AutoModelForCausalLM, AutoModelForSeq2SeqLM, AutoTokenizer
from accelerate.utils import compute_module_sizes
DEFAULT_MODELS = {
"gpt-j-6b": {"is_causal": True, "model": "sgugger/sharded-gpt-j-6B", "tokenizer": "EleutherAI/gpt-j-6B"},
"gpt-neox": {"is_causal": True, "model": "EleutherAI/gpt-neox-20b"},
"opt": {"is_causal": True, "model": "facebook/opt-30b"},
"T0pp": {"is_causal": False, "model": "bigscience/T0pp", "model_revision": "sharded"},
}
PROMPTS = [
"Hello, my name is",
"Are unicorns real? Unicorns are",
"For the first time in several years,",
"My name is Julien and I am",
"The goal of life is",
"Whenever I'm sad, I like to",
]
def parse_args():
parser = argparse.ArgumentParser(description="Run and time generations on a big model using Accelerate.")
parser.add_argument("model_name", type=str, default=None, help="The name of the model to try.")
parser.add_argument(
"--tokenizer_name", type=str, default=None, help="The name of the tokenizer (if different from the model."
)
parser.add_argument("--is_causal", type=bool, default=None, help="Whether or not the model is causal.")
parser.add_argument(
"--model_revision", type=str, default=None, help="The revision to use for the model checkpoint."
)
parser.add_argument("--torch_dtype", type=str, default=None, help="The dtype for the model.")
parser.add_argument("--disk_offload", action="store_true")
args = parser.parse_args()
# Sanitize args
if args.model_name in DEFAULT_MODELS:
defaults = DEFAULT_MODELS[args.model_name]
args.model_name = defaults["model"]
if args.tokenizer_name is None:
args.tokenizer_name = defaults.get("tokenizer", args.model_name)
if args.is_causal is None:
args.is_causal = defaults["is_causal"]
if args.model_revision is None:
args.model_revision = defaults.get("model_revision", "main")
if args.is_causal is None:
raise ValueError("Could not infer the default for `--is_causal`, pass either True or False for it.")
if args.tokenizer_name is None:
args.tokenizer_name = args.model_name
if args.model_revision is None:
args.model_revision = "main"
return args
def main():
transformers.utils.logging.set_verbosity_error()
args = parse_args()
if args.torch_dtype is None:
config = AutoConfig.from_pretrained(args.model_name)
torch_dtype = getattr(config, "torch_dtype", torch.float32)
else:
torch_dtype = getattr(torch, args.torch_dtype)
model_cls = AutoModelForCausalLM if args.is_causal else AutoModelForSeq2SeqLM
kwargs = {
"torch_dtype": torch_dtype,
"revision": args.model_revision,
}
if args.disk_offload:
kwargs["offload_folder"] = "tmp_offload"
kwargs["offload_state_dict"] = True
start_measures = start_measure()
model = model_cls.from_pretrained(args.model_name, device_map="auto", **kwargs)
end_measures = end_measure(start_measures)
log_measures(end_measures, "Model loading")
module_sizes = compute_module_sizes(model)
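# Sum the per-module parameter sizes into a per-device total, following the device map produced by `device_map="auto"`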
device_size = {v: 0 for v in model.hf_device_map.values()}
for module, device in model.hf_device_map.items():
device_size[device] += module_sizes[module]
message = "\n".join([f"- {device}: {size // 2**20}MiB" for device, size in device_size.items()])
print(f"\nTheoretical use:\n{message}")
tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_name)
start_measures = start_measure()
generation_times = []
gen_tokens = []
texts_outs = []
for prompt in PROMPTS:
inputs = tokenizer(prompt, return_tensors="pt").to(0)
tokens = inputs["input_ids"][0].tolist()
before_generate = time.time()
outputs = model.generate(inputs["input_ids"])
after_generate = time.time()
outputs = outputs[0].tolist()
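# Count newly generated tokens: if the output does not echo the prompt tokens, count all of it; otherwise subtract the prompt length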
num_gen_tokens = len(outputs) if outputs[: len(tokens)] != tokens else len(outputs) - len(tokens)
generation_time = after_generate - before_generate
text_out = tokenizer.decode(outputs, skip_special_tokens=True)
texts_outs.append(text_out)
generation_times.append(generation_time)
gen_tokens.append(num_gen_tokens)
print(f"Prompt: {prompt}\nGeneration {text_out}\nIn {generation_time:.2f}s for {num_gen_tokens} tokens\n")
end_measures = end_measure(start_measures)
log_measures(end_measures, "Model generation")
generation_times_per_token = [gen / tok for gen, tok in zip(generation_times, gen_tokens)]
avg_gen = sum(generation_times_per_token) / len(generation_times)
print(f"Average time of generation per token: {avg_gen:.2f}s")
print(f"First generation (avg time per token): {generation_times_per_token[0]:.2f}s")
avg_gen = sum(generation_times_per_token[1:]) / (len(generation_times_per_token) - 1)
print(f"Average time of generation per token (excluding the first): {avg_gen:.2f}s")
if __name__ == "__main__":
main()


@@ -0,0 +1,98 @@
# Copyright 2023 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import gc
import threading
import time
import psutil
import torch
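# Tracks the peak resident set size (RSS) of the current process by polling from a background thread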
class PeakCPUMemory:
def __init__(self):
self.process = psutil.Process()
self.peak_monitoring = False
def peak_monitor(self):
self.cpu_memory_peak = -1
while True:
self.cpu_memory_peak = max(self.process.memory_info().rss, self.cpu_memory_peak)
# can't sleep or will not catch the peak right (this comment is here on purpose)
if not self.peak_monitoring:
break
def start(self):
self.peak_monitoring = True
self.thread = threading.Thread(target=self.peak_monitor)
self.thread.daemon = True
self.thread.start()
def stop(self):
self.peak_monitoring = False
self.thread.join()
return self.cpu_memory_peak
cpu_peak_tracker = PeakCPUMemory()
def start_measure():
# Time
measures = {"time": time.time()}
gc.collect()
torch.cuda.empty_cache()
# CPU mem
measures["cpu"] = psutil.Process().memory_info().rss
cpu_peak_tracker.start()
# GPU mem
for i in range(torch.cuda.device_count()):
measures[str(i)] = torch.cuda.memory_allocated(i)
torch.cuda.reset_peak_memory_stats()
return measures
def end_measure(start_measures):
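# Measure elapsed time (s) and the CPU/GPU memory deltas and peaks (MiB) relative to start_measures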
# Time
measures = {"time": time.time() - start_measures["time"]}
gc.collect()
torch.cuda.empty_cache()
# CPU mem
measures["cpu"] = (psutil.Process().memory_info().rss - start_measures["cpu"]) / 2**20
measures["cpu-peak"] = (cpu_peak_tracker.stop() - start_measures["cpu"]) / 2**20
# GPU mem
for i in range(torch.cuda.device_count()):
measures[str(i)] = (torch.cuda.memory_allocated(i) - start_measures[str(i)]) / 2**20
measures[f"{i}-peak"] = (torch.cuda.max_memory_allocated(i) - start_measures[str(i)]) / 2**20
return measures
def log_measures(measures, description):
print(f"{description}:")
print(f"- Time: {measures['time']:.2f}s")
for i in range(torch.cuda.device_count()):
print(f"- GPU {i} allocated: {measures[str(i)]:.2f}MiB")
peak = measures[f"{i}-peak"]
print(f"- GPU {i} peak: {peak:.2f}MiB")
print(f"- CPU RAM allocated: {measures['cpu']:.2f}MiB")
print(f"- CPU RAM peak: {measures['cpu-peak']:.2f}MiB")

docker/README.md

@ -0,0 +1,73 @@
<!---
Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
# Official Hugging Face Accelerate Docker Images
Accelerate publishes a variety of Docker images as part of our CI that users can also use. These are stable images that Accelerate can run from, each shipping with a different setup configuration, all of which are officially hosted on [Docker Hub](https://hub.docker.com/r/huggingface/accelerate).
A breakdown of each is given below
## Naming Conventions
Accelerate docker images follow a tagging convention of:
```bash
huggingface/accelerate:{accelerator}-{nightly,release}
```
`accelerator` in this instance is one of the applicable pre-configured backends:
* `gpu`: Comes compiled off of the `nvidia/cuda` image and includes core parts like `bitsandbytes`. Runs on Python 3.9.
* `cpu`: Comes compiled off of `python:3.9-slim` and is designed for non-CUDA based workloads.
* More to come soon
* `gpu-deepspeed`: Comes compiled off of the `nvidia/cuda` image and includes core parts like `bitsandbytes` as well as the latest `deepspeed` version. Runs on Python 3.10.
## Nightlies vs Releases
With each release, a new build is pushed with the version number included in the name. For a GPU-supported image of version 0.28.0, for instance, it would look like the following:
```bash
huggingface/accelerate:gpu-release-0.28.0
```
Nightlies contain two different image tags. There is a general `nightly` tag which is built each night, and a `nightly-YYYY-MM-DD` which corresponds to a build from a particular date.
For instance, here is an example nightly CPU image from 3/14/2024
```bash
huggingface/accelerate:cpu-nightly-2024-03-14
```
## Running the images
Each image comes compiled with `conda` and an `accelerate` environment that contains all of the installed dependencies.
To pull down the latest nightly, run:
```bash
docker pull huggingface/accelerate:gpu-nightly
```
To then run it in interactive mode with GPU memory available, run:
```bash
docker container run --gpus all -it huggingface/accelerate:gpu-nightly
```
## DEPRECATED IMAGES
CPU and GPU docker images were hosted at `huggingface/accelerate-gpu` and `huggingface/accelerate-cpu`. These builds are now outdated and will not receive updates.
The builds at the corresponding `huggingface/accelerate:{gpu,cpu}` contain the same `Dockerfile`, so it's as simple as changing the docker image to the desired ones from above. We will not be deleting these images for posterity, but they will not be receiving updates going forward.


@@ -0,0 +1,35 @@
# Builds CPU-only Docker image of PyTorch
# Uses multi-staged approach to reduce size
# Stage 1
FROM python:3.8-slim as compile-image
ARG DEBIAN_FRONTEND=noninteractive
RUN apt update
RUN apt-get install -y --no-install-recommends \
build-essential \
git \
gcc
# Setup virtual environment for Docker
ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m venv ${VIRTUAL_ENV}
# Make sure we use the virtualenv
ENV PATH="${VIRTUAL_ENV}/bin:$PATH"
WORKDIR /workspace
# Install specific CPU torch wheel to save on space
RUN python3 -m pip install --upgrade --no-cache-dir pip
RUN python3 -m pip install --no-cache-dir \
jupyter \
git+https://github.com/huggingface/accelerate#egg=accelerate[testing,test_trackers] \
--extra-index-url https://download.pytorch.org/whl/cpu
# Stage 2
FROM python:3.8-slim AS build-image
COPY --from=compile-image /opt/venv /opt/venv
RUN useradd -ms /bin/bash user
USER user
# Make sure we use the virtualenv
ENV PATH="/opt/venv/bin:$PATH"
CMD ["/bin/bash"]


@@ -0,0 +1,46 @@
# Builds GPU docker image of PyTorch specifically
# Uses multi-staged approach to reduce size
# Stage 1
# Use base conda image to reduce time
FROM continuumio/miniconda3:latest AS compile-image
# Specify py version
# Note: DeepSpeed beyond v0.12.6 requires py 3.10
ENV PYTHON_VERSION=3.10
# Install apt libs
RUN apt-get update && \
apt-get install -y curl git wget && \
apt-get clean && \
rm -rf /var/lib/apt/lists*
# Create our conda env
RUN conda create --name accelerate python=${PYTHON_VERSION} ipython jupyter pip
# We don't install pytorch here yet since CUDA isn't available
# instead we use the direct torch wheel
ENV PATH /opt/conda/envs/accelerate/bin:$PATH
# Activate our bash shell
RUN chsh -s /bin/bash
SHELL ["/bin/bash", "-c"]
# Activate the conda env, install mpi4py, and install torch + accelerate
RUN source activate accelerate && conda install -c conda-forge mpi4py
RUN source activate accelerate && \
python3 -m pip install --no-cache-dir \
git+https://github.com/huggingface/accelerate#egg=accelerate[testing,test_trackers,deepspeed] \
--extra-index-url https://download.pytorch.org/whl/cu117
RUN python3 -m pip install --no-cache-dir bitsandbytes
# Stage 2
FROM nvidia/cuda:12.1.0-cudnn8-devel-ubuntu20.04 AS build-image
COPY --from=compile-image /opt/conda /opt/conda
ENV PATH /opt/conda/bin:$PATH
# Install apt libs
RUN apt-get update && \
apt-get install -y curl git wget && \
apt-get clean && \
rm -rf /var/lib/apt/lists*
RUN echo "source activate accelerate" >> ~/.profile
# Activate the conda environment
CMD ["/bin/bash"]


@@ -0,0 +1,45 @@
# Builds GPU docker image of PyTorch specifically
# Uses multi-staged approach to reduce size
# Stage 1
# Use base conda image to reduce time
FROM continuumio/miniconda3:latest AS compile-image
# Specify py version
ENV PYTHON_VERSION=3.9
# Install apt libs
RUN apt-get update && \
apt-get install -y curl git wget && \
apt-get clean && \
rm -rf /var/lib/apt/lists*
# Create our conda env
RUN conda create --name accelerate python=${PYTHON_VERSION} ipython jupyter pip
# We don't install pytorch here yet since CUDA isn't available
# instead we use the direct torch wheel
ENV PATH /opt/conda/envs/accelerate/bin:$PATH
# Activate our bash shell
RUN chsh -s /bin/bash
SHELL ["/bin/bash", "-c"]
# Activate the conda env, install mpi4py, and install torch + accelerate
RUN source activate accelerate && conda install -c conda-forge mpi4py
RUN source activate accelerate && \
python3 -m pip install --no-cache-dir \
git+https://github.com/huggingface/accelerate#egg=accelerate[testing,test_trackers] \
--extra-index-url https://download.pytorch.org/whl/cu117
RUN python3 -m pip install --no-cache-dir bitsandbytes
# Stage 2
FROM nvidia/cuda:12.1.0-cudnn8-devel-ubuntu20.04 AS build-image
COPY --from=compile-image /opt/conda /opt/conda
ENV PATH /opt/conda/bin:$PATH
# Install apt libs
RUN apt-get update && \
apt-get install -y curl git wget && \
apt-get clean && \
rm -rf /var/lib/apt/lists*
RUN echo "source activate accelerate" >> ~/.profile
# Activate the conda environment
CMD ["/bin/bash"]

docs/README.md

@@ -0,0 +1,267 @@
<!---
Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
# Generating the documentation
To generate the documentation, you first have to build it. Several packages are necessary to build the doc;
you can install them with the following command, at the root of the code repository:
```bash
pip install -e ".[docs]"
```
Then you need to install our special tool that builds the documentation:
```bash
pip install git+https://github.com/huggingface/doc-builder
```
---
**NOTE**
You only need to generate the documentation to inspect it locally (if you're planning changes and want to
check how they look before committing for instance). You don't have to commit the built documentation.
---
## Building the documentation
Once you have set up the `doc-builder` and additional packages, you can generate the documentation by
typing the following command:
```bash
doc-builder build accelerate docs/source/ --build_dir ~/tmp/test-build
```
You can adapt the `--build_dir` to set any temporary folder that you prefer. This command will create it and generate
the MDX files that will be rendered as the documentation on the main website. You can inspect them in your favorite
Markdown editor.
## Previewing the documentation
To preview the docs, first install the `watchdog` module with:
```bash
pip install watchdog
```
Then run the following command:
```bash
doc-builder preview {package_name} {path_to_docs}
```
For example:
```bash
doc-builder preview accelerate docs/source/
```
The docs will be viewable at [http://localhost:3000](http://localhost:3000). You can also preview the docs once you have opened a PR. You will see a bot add a comment with a link to where the documentation with your changes lives.
---
**NOTE**
The `preview` command only works with existing doc files. When you add a completely new file, you need to update `_toctree.yml` & restart the `preview` command (`ctrl-c` to stop it & call `doc-builder preview ...` again).
---
## Adding a new element to the navigation bar
Accepted files are Markdown (.md).
Create a file with its extension and put it in the source directory. You can then link it to the toc-tree by putting
the filename without the extension in the [`_toctree.yml`](https://github.com/huggingface/accelerate/blob/main/docs/source/_toctree.yml) file.
## Renaming section headers and moving sections
It helps to keep the old links working when renaming a section header and/or moving sections from one document to another. This is because the old links are likely to be used in Issues, Forums, and social media, and it'd make for a much better user experience if users reading those months later could still easily navigate to the originally intended information.
Therefore, we simply keep a little map of moved sections at the end of the document where the original section was. The key is to preserve the original anchor.
So if you renamed a section from: "Section A" to "Section B", then you can add at the end of the file:
```
Sections that were moved:
[ <a href="#section-b">Section A</a><a id="section-a"></a> ]
```
and of course, if you moved it to another file, then:
```
Sections that were moved:
[ <a href="../new-file#section-b">Section A</a><a id="section-a"></a> ]
```
Use the relative style to link to the new file so that the versioned docs continue to work.
## Writing Documentation - Specification
The `huggingface/accelerate` documentation follows the
[Google documentation](https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html) style for docstrings,
although we can write them directly in Markdown.
### Adding a new tutorial
Adding a new tutorial or section is done in two steps:
- Add a new file under `./source`. This file should be in Markdown (.md).
- Link that file in `./source/_toctree.yml` on the correct toc-tree.
Make sure to put your new file under the proper section. It's unlikely to go in the first section (*Get Started*), so
depending on the intended targets (beginners, more advanced users, or researchers) it should go in sections two, three, or
four.
### Writing source documentation
Values that should be put in `code` should either be surrounded by backticks: \`like so\`. Note that argument names
and objects like True, None, or any strings should usually be put in `code`.
When mentioning a class, function, or method, it is recommended to use our syntax for internal links so that our tool
adds a link to its documentation with this syntax: \[\`XXXClass\`\] or \[\`function\`\]. This requires the class or
function to be in the main package.
If you want to create a link to some internal class or function, you need to
provide its path. For instance: \[\`utils.gather\`\]. This will be converted into a link with
`utils.gather` in the description. To get rid of the path and only keep the name of the object you are
linking to in the description, add a ~: \[\`~utils.gather\`\] will generate a link with `gather` in the description.
The same works for methods, so you can either use \[\`XXXClass.method\`\] or \[\`~XXXClass.method\`\].
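For example, here is a short, hypothetical docstring (the function name is made up purely for illustration) combining these link styles:

```python
def log_gathered_metrics(accelerator):
    """
    Gathers metrics across processes with [`~Accelerator.gather`] (rendered as `gather`),
    and also links the full path form [`utils.gather`] and the class [`Accelerator`] itself.
    """
```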
#### Defining arguments in a method
Arguments should be defined with the `Args:` (or `Arguments:` or `Parameters:`) prefix, followed by a line return and
an indentation. The argument should be followed by its type, with its shape if it is a tensor, a colon, and its
description:
```
Args:
n_layers (`int`): The number of layers of the model.
```
If the description is too long to fit in one line (more than 119 characters in total), another indentation is necessary
before writing the description after the argument.
Finally, to maintain uniformity, if any *one* description is too long to fit on one line, the
rest of the parameters should follow suit and have an indentation before their description.
Here's an example showcasing everything so far:
```
Args:
gradient_accumulation_steps (`int`, *optional*, defaults to 1):
The number of steps that should pass before gradients are accumulated. A number > 1 should be combined with `Accelerator.accumulate`.
cpu (`bool`, *optional*):
Whether or not to force the script to execute on CPU. Will ignore GPU available if set to `True` and force the execution on one process only.
```
For optional arguments or arguments with defaults we follow the following syntax: imagine we have a function with the
following signature:
```
def my_function(x: str = None, a: float = 1):
```
then its documentation should look like this:
```
Args:
x (`str`, *optional*):
This argument controls ... and has a description longer than 119 chars.
a (`float`, *optional*, defaults to 1):
This argument is used to ... and has a description longer than 119 chars.
```
Note that we always omit the "defaults to \`None\`" when None is the default for any argument. Also note that even
if the first line describing your argument type and its default gets long, you can't break it across several lines. You can,
however, write as many lines as you want in the indented description (see the example above).
#### Writing a multi-line code block
Multi-line code blocks can be useful for displaying examples. They are done between two lines of three backticks as usual in Markdown:
````
```python
# first line of code
# second line
# etc
```
````
#### Writing a return block
The return block should be introduced with the `Returns:` prefix, followed by a line return and an indentation.
The first line should be the type of the return, followed by a line return. No need to indent further for the elements
building the return.
Here's an example of a single value return:
```
Returns:
`List[int]`: A list of integers in the range [0, 1] --- 1 for a special token, 0 for a sequence token.
```
Here's an example of a tuple return, comprising several objects:
```
Returns:
`tuple(torch.FloatTensor)` comprising various elements depending on the configuration ([`BertConfig`]) and inputs:
- **loss** (*optional*, returned when `masked_lm_labels` is provided) `torch.FloatTensor` of shape `(1,)` --
Total loss is the sum of the masked language modeling loss and the next sequence prediction (classification) loss.
- **prediction_scores** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`) --
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
```
## Styling the docstring
We have an automatic script running with the `make style` command that will make sure that:
- the docstrings fully take advantage of the line width
- all code examples are formatted using black, like the code of the Transformers library
This script may have some weird failures if you made a syntax mistake or if you uncover a bug. Therefore, it's
recommended to commit your changes before running `make style`, so you can revert the changes done by that script
easily.
## Writing documentation examples
The syntax for Example docstrings can look as follows:
```
Example:
```python
>>> import time
>>> from accelerate import Accelerator
>>> accelerator = Accelerator()
>>> if accelerator.is_main_process:
... time.sleep(2)
... else:
... print("I'm waiting for the main process to finish its sleep...")
>>> accelerator.wait_for_everyone()
>>> # Should print on every process at the same time
>>> print("Everyone is here")
```
```
The docstring should give a minimal, clear example of how the respective function
is to be used in inference and also include the expected (ideally sensible)
output.
Often, readers will try out the example before even going through the function
or class definitions. Therefore, it is of utmost importance that the example
works as expected.


@@ -1,30 +1,121 @@
- sections:
- sections:
- local: index
title: 🤗 Accelerate
- local: quicktour
title: Quick tour
- local: installation
- local: basic_tutorials/install
title: Installation
title: Get started
- local: quicktour
title: Quicktour
title: Getting started
- sections:
- local: sagemaker
title: Amazon SageMaker
title: Guides
- local: basic_tutorials/overview
title: Overview
- local: basic_tutorials/migration
title: Add Accelerate to your code
- local: basic_tutorials/execution
title: Execution process
- local: basic_tutorials/tpu
title: TPU training
- local: basic_tutorials/launch
title: Launching distributed code
- local: basic_tutorials/notebook
title: Launching distributed training from Jupyter Notebooks
title: Tutorials
- sections:
- local: accelerator
- isExpanded: true
sections:
- local: usage_guides/explore
title: Start Here!
- local: usage_guides/model_size_estimator
title: Model memory estimator
- local: usage_guides/quantization
title: Model quantization
- local: usage_guides/tracking
title: Experiment trackers
- local: usage_guides/profiler
title: Profiler
- local: usage_guides/checkpoint
title: Save and load training states
- local: basic_tutorials/troubleshooting
title: Troubleshoot
- local: usage_guides/training_zoo
title: Example Zoo
title: Accelerate
- isExpanded: true
sections:
- local: usage_guides/gradient_accumulation
title: Gradient accumulation
- local: usage_guides/local_sgd
title: Local SGD
- local: usage_guides/low_precision_training
title: Low precision (FP8) training
- local: usage_guides/deepspeed
title: DeepSpeed
- local: usage_guides/ddp_comm_hook
title: DDP Communication Hooks
- local: usage_guides/fsdp
title: Fully Sharded Data Parallelism
- local: usage_guides/megatron_lm
title: Megatron-LM
- local: usage_guides/sagemaker
title: Amazon SageMaker
- local: usage_guides/mps
title: Apple M1 GPUs
- local: usage_guides/ipex
title: IPEX training with CPU
title: Training
- isExpanded: true
sections:
- local: usage_guides/big_modeling
title: Big Model Inference
- local: usage_guides/distributed_inference
title: Distributed inference
title: Inference
title: How to guides
- sections:
- local: concept_guides/internal_mechanism
title: 🤗 Accelerate's internal mechanism
- local: concept_guides/big_model_inference
title: Loading big models into memory
- local: concept_guides/performance
title: Comparing performance across distributed setups
- local: concept_guides/deferring_execution
title: Executing and deferring jobs
- local: concept_guides/gradient_synchronization
title: Gradient synchronization
- local: concept_guides/fsdp_and_deepspeed
title: FSDP vs DeepSpeed
- local: concept_guides/low_precision_training
title: How training in low-precision environments is possible (FP8)
- local: concept_guides/training_tpu
title: TPU best practices
title: Concepts and fundamentals
- sections:
- local: package_reference/accelerator
title: Accelerator
- local: launcher
title: Notebook Launcher
- local: kwargs
title: Kwargs Handlers
- local: internal
title: Internals
- local: checkpoint
title: Checkpointing
- local: tracking
title: Experiment Tracking
- local: fsdp
title: Fully Sharded Data Parallel
- local: memory
title: Memory Utilities
title: API Reference
- local: package_reference/state
title: Stateful configuration classes
- local: package_reference/cli
title: The Command Line
- local: package_reference/torch_wrappers
title: Torch wrapper classes
- local: package_reference/tracking
title: Experiment trackers
- local: package_reference/launchers
title: Distributed launchers
- local: package_reference/deepspeed
title: DeepSpeed utilities
- local: package_reference/logging
title: Logging
- local: package_reference/big_modeling
title: Working with large models
- local: package_reference/inference
title: Distributed inference with big models
- local: package_reference/kwargs
title: Kwargs handlers
- local: package_reference/utilities
title: Utility functions and classes
- local: package_reference/megatron_lm
title: Megatron-LM Utilities
- local: package_reference/fsdp
title: Fully Sharded Data Parallelism Utilities
title: "Reference"


@@ -1,41 +0,0 @@
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Accelerator
The [`Accelerator`] is the main class provided by 🤗 Accelerate. It serves as the main entry point for
the API. To quickly adapt your script to work on any kind of setup with 🤗 Accelerate, just follow the steps below (a minimal sketch is shown after the list):
1. Initialize an [`Accelerator`] object (that we will call `accelerator` in the rest of this
page) as early as possible in your script.
2. Pass along your model(s), optimizer(s), dataloader(s) to the [`~Accelerator.prepare`] method.
3. (Optional but best practice) Remove all the `.cuda()` or `.to(device)` in your code and let the
`accelerator` handle device placement for you.
4. Replace the `loss.backward()` in your code by `accelerator.backward(loss)`.
5. (Optional, when using distributed evaluation) Gather your predictions and labels before storing them or using them
for metric computation using [`~Accelerator.gather`].
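Putting steps 1 to 4 together, a minimal sketch of an adapted training loop (assuming `model`, `optimizer`, `dataloader`, and `loss_function` are already defined) looks like this:

```python
from accelerate import Accelerator

accelerator = Accelerator()  # 1. initialize as early as possible

# 2. hand your objects to `prepare` (3. no manual `.cuda()` / `.to(device)` calls needed)
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for batch in dataloader:
    optimizer.zero_grad()
    inputs, targets = batch
    outputs = model(inputs)
    loss = loss_function(outputs, targets)
    accelerator.backward(loss)  # 4. replace `loss.backward()`
    optimizer.step()
```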
This is all that is needed in most cases. For more advanced cases or a nicer experience, here are the functions you
should search for and replace with the corresponding methods of your `accelerator`:
- `print` statements should be replaced by [`~Accelerator.print`] to be only printed once per
process.
- Use [`~Accelerator.is_local_main_process`] for statements that should be executed once per server.
- Use [`~Accelerator.is_main_process`] for statements that should be executed once only.
- Use [`~Accelerator.wait_for_everyone`] to make sure all processes join that point before continuing
(useful before a model save for instance).
- Use [`~Accelerator.unwrap_model`] to unwrap your model before saving it.
- Use [`~Accelerator.save`] instead of `torch.save`.
- Use [`~Accelerator.clip_grad_norm_`] instead of `torch.nn.utils.clip_grad_norm_` and
[`~Accelerator.clip_grad_value_`] instead of `torch.nn.utils.clip_grad_value_`.
[[autodoc]] Accelerator


@@ -0,0 +1,128 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Execution process
When working with distributed training systems, it is important to manage how and when processes are executed across GPUs. Some processes are completed faster than others, and some processes shouldn't begin if others haven't finished yet. Accelerate provides tools for orchestrating when processes are executed to ensure everything remains synchronized across all devices.
This tutorial will teach you how to execute a process on only one machine and how to delay execution until all processes have reached a certain point.
## Execute on one process
Certain code only needs to be run once on a given machine, such as printing a log statement or only displaying one progress bar on the local main process.
<hfoptions id="local-execution">
<hfoption id="statements">
You should use `accelerator.is_local_main_process` to indicate code that should only be executed once.
```py
from tqdm.auto import tqdm
progress_bar = tqdm(range(args.max_train_steps), disable=not accelerator.is_local_main_process)
```
You could also wrap a statement with `accelerator.is_local_main_process`.
> [!TIP]
> For standalone `print` statements that aren't wrapped in `accelerator.is_local_main_process`, replace `print` with Accelerate's [`~Accelerator.print`] method to only print once per process.
```py
if accelerator.is_local_main_process:
print("Accelerate is the best")
```
</hfoption>
<hfoption id="function">
For a function that should only be executed once, use [`~Accelerator.on_local_main_process`].
```py
@accelerator.on_local_main_process
def do_my_thing():
"Something done once per server"
do_thing_once_per_server()
```
</hfoption>
</hfoptions>
You could also direct Accelerate to execute code once across *all processes* regardless of the number of machines. This is useful if you're uploading a final model to the Hub.
<hfoptions id="main-execution">
<hfoption id="statement">
You should use `accelerator.is_main_process` to indicate code that should only be executed once across all processes.
```py
if accelerator.is_main_process:
repo.push_to_hub()
```
</hfoption>
<hfoption id="function">
For a function that should only be executed once across all processes, use [`~Accelerator.on_main_process`].
```py
@accelerator.on_main_process
def do_my_thing():
"Something done once per server"
do_thing_once()
```
</hfoption>
</hfoptions>
## Execute on a specific process
Accelerate can also help you execute functions that should only be executed on a specific process or a local process index.
<hfoptions id="specific-execution">
<hfoption id="specific process">
Use the [`~Accelerator.on_process`] method and specify the process index to execute a function on.
```py
@accelerator.on_process(process_index=0)
def do_my_thing():
"Something done on process index 0"
do_thing_on_index_zero()
```
</hfoption>
<hfoption id="local process">
Use the [`~Accelerator.on_local_process`] method and specify the local process index to execute a function on.
```py
@accelerator.on_local_process(local_process_idx=0)
def do_my_thing():
"Something done on process index 0 on each server"
do_thing_on_index_zero_on_each_server()
```
</hfoption>
</hfoptions>
## Defer execution
When you run your script on several GPUs at the same time, some code may be executed faster than others. You might need to wait for all processes to reach a certain point before executing the next set of instructions. For instance, you shouldn't save a model before making sure every process is done with training.
To do this, add [`~Accelerator.wait_for_everyone`] in your code. This blocks all processes that have finished first from continuing until all remaining processes have reached the same point (this has no effect if you're running on a single GPU or CPU).
```py
accelerator.wait_for_everyone()
```


@@ -0,0 +1,102 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Installation and Configuration
Before you start, you will need to set up your environment, install the appropriate packages, and configure 🤗 Accelerate. 🤗 Accelerate is tested on **Python 3.8+**.
## Installing 🤗 Accelerate
🤗 Accelerate is available on PyPI and conda, as well as on GitHub. Details to install from each are below:
### pip
To install 🤗 Accelerate from PyPI, run:
```bash
pip install accelerate
```
### conda
🤗 Accelerate can also be installed with conda with:
```bash
conda install -c conda-forge accelerate
```
### Source
New features are added every day that haven't been released yet. To try them out yourself, install
from the GitHub repository:
```bash
pip install git+https://github.com/huggingface/accelerate
```
If you're working on contributing to the library or wish to play with the source code and see live
results as you run the code, an editable version can be installed from a locally-cloned version of the
repository:
```bash
git clone https://github.com/huggingface/accelerate
cd accelerate
pip install -e .
```
## Configuring 🤗 Accelerate
After installing, you need to configure 🤗 Accelerate for how the current system is set up for training.
To do so run the following and answer the questions prompted to you:
```bash
accelerate config
```
To write a barebones configuration that doesn't include options such as DeepSpeed configuration or running on TPUs, you can quickly run:
```bash
python -c "from accelerate.utils import write_basic_config; write_basic_config(mixed_precision='fp16')"
```
🤗 Accelerate will automatically utilize the maximum number of GPUs available and set the mixed precision mode.
To check that your configuration looks fine, run:
```bash
accelerate env
```
An example output is shown below, which describes two GPUs on a single machine with no mixed precision being used:
```bash
- `Accelerate` version: 0.11.0.dev0
- Platform: Linux-5.10.0-15-cloud-amd64-x86_64-with-debian-11.3
- Python version: 3.7.12
- Numpy version: 1.19.5
- PyTorch version (GPU?): 1.12.0+cu102 (True)
- `Accelerate` default config:
- compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_GPU
- mixed_precision: no
- use_cpu: False
- num_processes: 2
- machine_rank: 0
- num_machines: 1
- main_process_ip: None
- main_process_port: None
- main_training_function: main
- deepspeed_config: {}
- fsdp_config: {}
```


@@ -0,0 +1,232 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Launching your 🤗 Accelerate scripts
In the previous tutorial, you were introduced to how to modify your current training script to use 🤗 Accelerate.
The final version of that code is shown below:
```python
from accelerate import Accelerator
accelerator = Accelerator()
model, optimizer, training_dataloader, scheduler = accelerator.prepare(
model, optimizer, training_dataloader, scheduler
)
for batch in training_dataloader:
optimizer.zero_grad()
inputs, targets = batch
outputs = model(inputs)
loss = loss_function(outputs, targets)
accelerator.backward(loss)
optimizer.step()
scheduler.step()
```
But how do you run this code and have it utilize the special hardware available to it?
First, you should rewrite the above code into a function, and make it callable as a script. For example:
```diff
from accelerate import Accelerator
+ def main():
accelerator = Accelerator()
model, optimizer, training_dataloader, scheduler = accelerator.prepare(
model, optimizer, training_dataloader, scheduler
)
for batch in training_dataloader:
optimizer.zero_grad()
inputs, targets = batch
outputs = model(inputs)
loss = loss_function(outputs, targets)
accelerator.backward(loss)
optimizer.step()
scheduler.step()
+ if __name__ == "__main__":
+ main()
```
Next, you need to launch it with `accelerate launch`.
<Tip warning={true}>
It's recommended you run `accelerate config` before using `accelerate launch` to configure your environment to your liking.
Otherwise 🤗 Accelerate will use very basic defaults depending on your system setup.
</Tip>
## Using accelerate launch
🤗 Accelerate has a special CLI command to help you launch your code in your system through `accelerate launch`.
This command wraps around all of the different commands needed to launch your script on various platforms, without you having to remember what each of them is.
<Tip>
If you are familiar with launching scripts in PyTorch yourself such as with `torchrun`, you can still do this. It is not required to use `accelerate launch`.
</Tip>
You can launch your script quickly by using:
```bash
accelerate launch {script_name.py} --arg1 --arg2 ...
```
Just put `accelerate launch` at the start of your command, and pass in additional arguments and parameters to your script afterward like normal!
Since this runs the various torch spawn methods, all of the expected environment variables can be modified here as well.
For example, here is how to use `accelerate launch` with a single GPU:
```bash
CUDA_VISIBLE_DEVICES="0" accelerate launch {script_name.py} --arg1 --arg2 ...
```
You can also use `accelerate launch` without performing `accelerate config` first, but you may need to manually pass in the right configuration parameters.
In this case, 🤗 Accelerate will make some hyperparameter decisions for you, e.g., if GPUs are available, it will use all of them by default without mixed precision.
Here is how you would use all GPUs and train with mixed precision disabled:
```bash
accelerate launch --multi_gpu {script_name.py} {--arg1} {--arg2} ...
```
Or by specifying a number of GPUs to use:
```bash
accelerate launch --num_processes=2 {script_name.py} {--arg1} {--arg2} ...
```
To get more specific you should pass in the needed parameters yourself. For instance, here is how you
would also launch that same script on two GPUs using mixed precision while avoiding all of the warnings:
```bash
accelerate launch --multi_gpu --mixed_precision=fp16 --num_processes=2 {script_name.py} {--arg1} {--arg2} ...
```
For a complete list of parameters you can pass in, run:
```bash
accelerate launch -h
```
<Tip>
Even if you are not using 🤗 Accelerate in your code, you can still use the launcher for starting your scripts!
</Tip>
To visualize the difference, the earlier multi-GPU `accelerate launch` command would look something like this with `torchrun`:
```bash
ACCELERATE_MIXED_PRECISION="fp16" torchrun --nproc_per_node=2 --nnodes=1 {script_name.py} {--arg1} {--arg2} ...
```
You can also invoke the launcher as a Python module itself, which lets you pass in other Python-specific
launching behaviors. To do so, use `accelerate.commands.launch` instead of `accelerate launch`:
```bash
python -m accelerate.commands.launch --num_processes=2 {script_name.py} {--arg1} {--arg2}
```
If you want to execute the script with any other Python flags, you can pass them in just like `-m` is passed above. For example,
the command below enables unbuffered stdout and stderr:
```bash
python -u -m accelerate.commands.launch --num_processes=2 {script_name.py} {--arg1} {--arg2}
```
<Tip>
You can run your code on CPU as well! This is helpful for debugging and testing purposes on toy models and datasets.
```bash
accelerate launch --cpu {script_name.py} {--arg1} {--arg2}
```
</Tip>
## Why you should always use `accelerate config`
Why is it useful to the point that you should **always** run `accelerate config`?
Remember the earlier calls to `accelerate launch` and `torchrun`?
Once configured, running that script with all the needed settings only requires `accelerate launch` on its own, without passing anything else in:
```bash
accelerate launch {script_name.py} {--arg1} {--arg2} ...
```
## Custom Configurations
As briefly mentioned earlier, `accelerate launch` is mostly meant to be used in combination with a configuration created by the `accelerate config` command. These configs are saved to a `default_config.yaml` file in your cache folder for 🤗 Accelerate.
This cache folder is located at (in decreasing order of priority; a small lookup sketch follows the list):
- The content of your environment variable `HF_HOME` suffixed with `accelerate`.
- If it does not exist, the content of your environment variable `XDG_CACHE_HOME` suffixed with
`huggingface/accelerate`.
- If this does not exist either, the folder `~/.cache/huggingface/accelerate`.
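For reference, a minimal sketch (not Accelerate's internal code) of that lookup order in Python could look like:
```python
import os

def default_accelerate_config_file() -> str:
    # In decreasing order of priority, mirroring the list above (a sketch, not Accelerate's internal implementation)
    if "HF_HOME" in os.environ:
        cache_dir = os.path.join(os.environ["HF_HOME"], "accelerate")
    elif "XDG_CACHE_HOME" in os.environ:
        cache_dir = os.path.join(os.environ["XDG_CACHE_HOME"], "huggingface", "accelerate")
    else:
        cache_dir = os.path.expanduser("~/.cache/huggingface/accelerate")
    return os.path.join(cache_dir, "default_config.yaml")

print(default_accelerate_config_file())
```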
To have multiple configurations, the flag `--config_file` can be passed to the `accelerate launch` command paired
with the location of the custom yaml.
An example yaml may look something like the following for two GPUs on a single machine using `fp16` for mixed precision:
```yaml
compute_environment: LOCAL_MACHINE
deepspeed_config: {}
distributed_type: MULTI_GPU
fsdp_config: {}
machine_rank: 0
main_process_ip: null
main_process_port: null
main_training_function: main
mixed_precision: fp16
num_machines: 1
num_processes: 2
use_cpu: false
```
Launching a script from the location of that custom yaml file looks like the following:
```bash
accelerate launch --config_file {path/to/config/my_config_file.yaml} {script_name.py} {--arg1} {--arg2} ...
```
## Multi-node training
Multi-node training with 🤗 Accelerate is similar to [multi-node training with torchrun](https://pytorch.org/tutorials/intermediate/ddp_series_multinode.html). The simplest way to launch a multi-node training run is to do the following:
- Copy your codebase and data to all nodes (or place them on a shared filesystem).
- Set up your Python packages on all nodes.
- Run `accelerate config` on the main node first. After specifying the number of nodes, you will be asked for the rank of each node (0 for the main/master node), along with the IP address and port of the main process; this is required so the worker nodes can communicate with the main process. Afterwards, copy or send this config file to all of your nodes, changing `machine_rank` to 1, 2, 3, etc. for each one, so you don't have to rerun the command (or follow the equivalent directions for launching with `torchrun` directly).
Once you have done this, you can start your multi-node training run by running `accelerate launch` (or `torchrun`) on all nodes.
<Tip>
It is required that the command be run on all nodes for everything to start, not just from the main node. You can use something like SLURM or a different process executor to wrap around this requirement and call everything from a single command.
</Tip>
<Tip>
It is recommended to use the intranet IP of your main node over the public IP for better latency. This is the `192.168.x.x` or the `172.x.x.x` address you see when you run `hostname -I` on the main node.
</Tip>
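If you prefer not to ship a config file to every node, the same run can be described entirely on the command line. As a rough sketch (the IP address, port, and process counts are placeholders for your own setup), launching on the first of two 8-GPU nodes could look like the following, with only `--machine_rank` changing on the other node:
```bash
accelerate launch --multi_gpu --num_machines=2 --num_processes=16 \
  --machine_rank=0 --main_process_ip=192.168.1.2 --main_process_port=29500 \
  {script_name.py} {--arg1} {--arg2} ...
```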
To get a better idea about multi-node training, check out our example for [multi-node training with FSDP](https://huggingface.co/blog/ram-efficient-pytorch-fsdp).

@@ -0,0 +1,221 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Add Accelerate to your code
Each distributed training framework has its own way of doing things, which can require writing a lot of custom code to adapt it to your PyTorch training code and training environment. Accelerate offers a friendly way to interface with these distributed training frameworks without having to learn the specific details of each one. Accelerate takes care of those details for you, so you can focus on the training code and scale it to any distributed training environment.
In this tutorial, you'll learn how to adapt your existing PyTorch code with Accelerate and get on your way toward training on distributed systems with ease! You'll start with a basic PyTorch training loop (it assumes all the training objects like `model` and `optimizer` have been set up already) and progressively integrate Accelerate into it.
```python
device = "cuda"
model.to(device)

for batch in training_dataloader:
    optimizer.zero_grad()

    inputs, targets = batch
    inputs = inputs.to(device)
    targets = targets.to(device)

    outputs = model(inputs)
    loss = loss_function(outputs, targets)
    loss.backward()

    optimizer.step()
    scheduler.step()
```
## Accelerator
The [`Accelerator`] is the main class for adapting your code to work with Accelerate. It knows about the distributed setup you're using such as the number of different processes and your hardware type. This class also provides access to many of the necessary methods for enabling your PyTorch code to work in any distributed training environment and for managing and executing processes across devices.
That's why you should always start by importing and creating an [`Accelerator`] instance in your script.
```python
from accelerate import Accelerator
accelerator = Accelerator()
```
The [`Accelerator`] also knows which device to move your PyTorch objects to, so it is recommended to let Accelerate handle this for you.
```diff
- device = "cuda"
+ device = accelerator.device
model.to(device)
```
## Prepare PyTorch objects
Next, you need to prepare your PyTorch objects (model, optimizer, scheduler, etc.) for distributed training. The [`~Accelerator.prepare`] method takes care of placing your model in the appropriate container (like single GPU or multi-GPU) for your training setup, adapting the optimizer and scheduler to use Accelerate's [`~optimizer.AcceleratedOptimizer`] and [`~scheduler.AcceleratedScheduler`], and creating a new dataloader that can be sharded across processes.
> [!TIP]
> Accelerate only prepares objects that inherit from their respective PyTorch classes such as `torch.optim.Optimizer`.
The PyTorch objects are returned in the same order they're sent.
```py
model, optimizer, training_dataloader, scheduler = accelerator.prepare(
    model, optimizer, training_dataloader, scheduler
)
```
## Training loop
Finally, remove the `to(device)` calls to the inputs and targets in the training loop because Accelerate's DataLoader classes automatically place them on the right device. You should also replace the usual `backward()` pass with Accelerate's [`~Accelerator.backward`] method, which scales the gradients for you and uses the appropriate `backward()` method depending on your distributed setup (for example, DeepSpeed or Megatron).
```diff
- inputs = inputs.to(device)
- targets = targets.to(device)
outputs = model(inputs)
loss = loss_function(outputs, targets)
- loss.backward()
+ accelerator.backward(loss)
```
Put everything together and your new Accelerate training loop should now look like this!
```python
from accelerate import Accelerator

accelerator = Accelerator()
device = accelerator.device

model, optimizer, training_dataloader, scheduler = accelerator.prepare(
    model, optimizer, training_dataloader, scheduler
)

for batch in training_dataloader:
    optimizer.zero_grad()

    inputs, targets = batch
    outputs = model(inputs)
    loss = loss_function(outputs, targets)

    accelerator.backward(loss)
    optimizer.step()
    scheduler.step()
```
## Training features
Accelerate offers additional features - like gradient accumulation, gradient clipping, mixed precision training and more - you can add to your script to improve your training run. Let's explore these three features.
### Gradient accumulation
Gradient accumulation enables you to train on larger batch sizes by accumulating the gradients over multiple batches before updating the weights. This can be useful for getting around memory limitations. To enable this feature in Accelerate, specify the `gradient_accumulation_steps` parameter in the [`Accelerator`] class and add the [`~Accelerator.accumulate`] context manager to your script.
```diff
+ accelerator = Accelerator(gradient_accumulation_steps=2)
  model, optimizer, training_dataloader = accelerator.prepare(model, optimizer, training_dataloader)

  for input, label in training_dataloader:
+     with accelerator.accumulate(model):
          predictions = model(input)
          loss = loss_function(predictions, label)
          accelerator.backward(loss)
          optimizer.step()
          scheduler.step()
          optimizer.zero_grad()
```
### Gradient clipping
Gradient clipping is a technique to prevent "exploding gradients", and Accelerate offers two methods for it (a usage sketch follows the list):
* [`~Accelerator.clip_grad_value_`] to clip gradients to a minimum and maximum value
* [`~Accelerator.clip_grad_norm_`] to clip the overall gradient norm to a maximum value
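For example, a minimal sketch of clipping the gradient norm inside a training loop (the `max_norm=1.0` value is just an illustrative choice) could look like:
```diff
  for input, label in training_dataloader:
      predictions = model(input)
      loss = loss_function(predictions, label)
      accelerator.backward(loss)
+     accelerator.clip_grad_norm_(model.parameters(), max_norm=1.0)
      optimizer.step()
      scheduler.step()
      optimizer.zero_grad()
```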
### Mixed precision
Mixed precision accelerates training by using a lower precision data type like fp16 (half-precision) to calculate the gradients. For the best performance with Accelerate, the loss should be computed inside your model (like in Transformers models) because computations outside of the model are computed in full precision.
Set the mixed precision type to use in the [`Accelerator`], and then use the [`~Accelerator.autocast`] context manager to automatically cast the values to the specified data type.
> [!WARNING]
> Accelerate enables automatic mixed precision, so [`~Accelerator.autocast`] is only needed if there are other mixed precision operations besides those performed on loss by [`~Accelerator.backward`] which already handles the scaling.
```diff
+ accelerator = Accelerator(mixed_precision="fp16")
+ with accelerator.autocast():
      loss = complex_loss_function(outputs, target)
```
## Save and load
Accelerate can save and load a *model* once training is complete, or it can save the model and optimizer *state*, which is useful for resuming training.
### Model
Once all processes are complete, unwrap the model with the [`~Accelerator.unwrap_model`] method before saving it because the [`~Accelerator.prepare`] method wrapped your model into the proper interface for distributed training. If you don't unwrap the model, saving the model state dictionary also saves any potential extra layers from the larger model and you won't be able to load the weights back into your base model.
You should use the [`~Accelerator.save_model`] method to unwrap and save the model state dictionary. This method can also save a model into sharded checkpoints or into the [safetensors](https://hf.co/docs/safetensors/index) format.
<hfoptions id="save">
<hfoption id="single checkpoint">
```py
accelerator.wait_for_everyone()
accelerator.save_model(model, save_directory)
```
<Tip>
For models from the [Transformers](https://hf.co/docs/transformers/index) library, save the model with the [`~transformers.PreTrainedModel.save_pretrained`] method so that it can be reloaded with the [`~transformers.PreTrainedModel.from_pretrained`] method.
```py
from transformers import AutoModel
unwrapped_model = accelerator.unwrap_model(model)
unwrapped_model.save_pretrained(
    "path/to/my_model_directory",
    is_main_process=accelerator.is_main_process,
    save_function=accelerator.save,
)
model = AutoModel.from_pretrained("path/to/my_model_directory")
```
</Tip>
To load your weights, use the [`~Accelerator.unwrap_model`] method to unwrap the model first before loading the weights. All model parameters are references to tensors, so this loads your weights inside `model`.
```py
unwrapped_model = accelerator.unwrap_model(model)
path_to_checkpoint = os.path.join(save_directory, "pytorch_model.bin")
unwrapped_model.load_state_dict(torch.load(path_to_checkpoint))
```
</hfoption>
<hfoption id="sharded checkpoint">
Set `safe_serialization=True` to save the model in the safetensors format.
```py
accelerator.wait_for_everyone()
accelerator.save_model(model, save_directory, max_shard_size="1GB", safe_serialization=True)
```
To load a sharded checkpoint or a safetensors-formatted checkpoint, use the [`~accelerate.load_checkpoint_in_model`] method. This method allows you to load a checkpoint onto a specific device.
```py
load_checkpoint_in_model(unwrapped_model, save_directory, device_map={"":device})
```
</hfoption>
</hfoptions>
### State
During training, you may want to save the current state of the model, optimizer, random generators, and potentially learning rate schedulers so they can be restored in the *same script*. You should add the [`~Accelerator.save_state`] and [`~Accelerator.load_state`] methods to your script to save and load states.
To further customize where and how states are saved through [`~Accelerator.save_state`], use the [`~utils.ProjectConfiguration`] class. For example, if `automatic_checkpoint_naming` is enabled, each saved checkpoint is stored at `Accelerator.project_dir/checkpoints/checkpoint_{checkpoint_number}`.
Any other stateful items to be stored should be registered with the [`~Accelerator.register_for_checkpointing`] method so they can be saved and loaded. Every object passed to this method to be stored must have a `load_state_dict` and `state_dict` function.
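As a minimal sketch (the `StepCounter` object and directory names are made up for illustration), saving and restoring a training state together with a registered custom object could look like:
```py
from accelerate import Accelerator
from accelerate.utils import ProjectConfiguration

# With automatic_checkpoint_naming enabled, checkpoints are written to
# {project_dir}/checkpoints/checkpoint_{checkpoint_number}
project_config = ProjectConfiguration(project_dir="my_project", automatic_checkpoint_naming=True)
accelerator = Accelerator(project_config=project_config)

class StepCounter:
    """Hypothetical custom stateful object; anything registered must define `state_dict` and `load_state_dict`."""
    def __init__(self):
        self.step = 0
    def state_dict(self):
        return {"step": self.step}
    def load_state_dict(self, state):
        self.step = state["step"]

counter = StepCounter()
accelerator.register_for_checkpointing(counter)

accelerator.save_state()                                       # saves RNG states and registered objects (plus any prepared model/optimizer)
accelerator.load_state("my_project/checkpoints/checkpoint_0")  # restore them later in the same script
```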

@@ -0,0 +1,476 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Launching Multi-GPU Training from a Jupyter Environment
This tutorial teaches you how to fine-tune a computer vision model with 🤗 Accelerate from a Jupyter Notebook on a distributed system.
You will also learn how to set up a few requirements for ensuring your environment is configured properly, your data has been prepared properly, and finally how to launch training.
<Tip>
This tutorial is also available as a Jupyter Notebook [here](https://github.com/huggingface/notebooks/blob/main/examples/accelerate_examples/simple_cv_example.ipynb)
</Tip>
## Configuring the Environment
Before any training can be performed, a 🤗 Accelerate config file must exist in the system. Usually this can be done by running the following in a terminal and answering the prompts:
```bash
accelerate config
```
However, if general defaults are fine and you are *not* running on a TPU, 🤗 Accelerate has a utility to quickly write your GPU configuration into a config file via [`utils.write_basic_config`].
The following code will restart Jupyter after writing the configuration, as CUDA code was called to perform this.
<Tip warning={true}>
CUDA can't be initialized more than once on a multi-GPU system. It's fine to debug in the notebook and have calls to CUDA, but in order to finally train a full cleanup and restart will need to be performed.
</Tip>
```python
import os
from accelerate.utils import write_basic_config
write_basic_config() # Write a config file
os._exit(00) # Restart the notebook
```
## Preparing the Dataset and Model
Next you should prepare your dataset. As mentioned earlier, great care should be taken when preparing the `DataLoaders` and model to make sure that **nothing** is put on *any* GPU.
If you do, it is recommended to put that specific code into a function and call that from within the notebook launcher interface, which will be shown later.
Make sure the dataset is downloaded based on the directions [here](https://github.com/huggingface/accelerate/tree/main/examples#simple-vision-example).
```python
import os, re, torch, PIL
import numpy as np
from torch.optim.lr_scheduler import OneCycleLR
from torch.utils.data import DataLoader, Dataset
from torchvision.transforms import Compose, RandomResizedCrop, Resize, ToTensor
from accelerate import Accelerator
from accelerate.utils import set_seed
from timm import create_model
```
First you need to create a function to extract the class name based on a filename:
```python
import os
data_dir = "../../images"
fnames = os.listdir(data_dir)
fname = fnames[0]
print(fname)
```
```python out
beagle_32.jpg
```
In the case here, the label is `beagle`. Using regex you can extract the label from the filename:
```python
import re
def extract_label(fname):
    stem = fname.split(os.path.sep)[-1]
    return re.search(r"^(.*)_\d+\.jpg$", stem).groups()[0]
```
```python
extract_label(fname)
```
And you can see it properly returned the right name for our file:
```python out
"beagle"
```
Next a `Dataset` class should be made to handle grabbing the image and the label:
```python
class PetsDataset(Dataset):
    def __init__(self, file_names, image_transform=None, label_to_id=None):
        self.file_names = file_names
        self.image_transform = image_transform
        self.label_to_id = label_to_id

    def __len__(self):
        return len(self.file_names)

    def __getitem__(self, idx):
        fname = self.file_names[idx]
        raw_image = PIL.Image.open(fname)
        image = raw_image.convert("RGB")
        if self.image_transform is not None:
            image = self.image_transform(image)
        label = extract_label(fname)
        if self.label_to_id is not None:
            label = self.label_to_id[label]
        return {"image": image, "label": label}
```
Now to build the dataset. Outside the training function you can find and declare all the filenames and labels and use them as references inside the
launched function:
```python
fnames = [os.path.join("../../images", fname) for fname in fnames if fname.endswith(".jpg")]
```
Next gather all the labels:
```python
all_labels = [extract_label(fname) for fname in fnames]
id_to_label = list(set(all_labels))
id_to_label.sort()
label_to_id = {lbl: i for i, lbl in enumerate(id_to_label)}
```
Next, you should make a `get_dataloaders` function that will return your built dataloaders for you. As mentioned earlier, if data is automatically
sent to the GPU or a TPU device when building your `DataLoaders`, they must be built using this method.
```python
def get_dataloaders(batch_size: int = 64):
    "Builds a set of dataloaders with a batch_size"
    random_perm = np.random.permutation(len(fnames))
    cut = int(0.8 * len(fnames))
    train_split = random_perm[:cut]
    eval_split = random_perm[cut:]

    # For training a simple RandomResizedCrop will be used
    train_tfm = Compose([RandomResizedCrop((224, 224), scale=(0.5, 1.0)), ToTensor()])
    train_dataset = PetsDataset([fnames[i] for i in train_split], image_transform=train_tfm, label_to_id=label_to_id)

    # For evaluation a deterministic Resize will be used
    eval_tfm = Compose([Resize((224, 224)), ToTensor()])
    eval_dataset = PetsDataset([fnames[i] for i in eval_split], image_transform=eval_tfm, label_to_id=label_to_id)

    # Instantiate dataloaders
    train_dataloader = DataLoader(train_dataset, shuffle=True, batch_size=batch_size, num_workers=4)
    eval_dataloader = DataLoader(eval_dataset, shuffle=False, batch_size=batch_size * 2, num_workers=4)

    return train_dataloader, eval_dataloader
```
Finally, you should import the scheduler to be used later:
```python
from torch.optim.lr_scheduler import CosineAnnealingLR
```
## Writing the Training Function
Now you can build the training loop. [`notebook_launcher`] works by passing in a function to call that will be run across the distributed system.
Here is a basic training loop for the animal classification problem:
<Tip>
The code has been split up to allow for explanations of each section. A full version that can be copied and pasted is available at the end.
</Tip>
```python
def training_loop(mixed_precision="fp16", seed: int = 42, batch_size: int = 64):
    set_seed(seed)
    accelerator = Accelerator(mixed_precision=mixed_precision)
```
First you should set the seed and create an [`Accelerator`] object as early in the training loop as possible.
<Tip warning={true}>
If training on the TPU, your training loop should take the model in as a parameter and it should be instantiated
outside of the training loop function. See the [TPU best practices](../concept_guides/training_tpu)
to learn why.
</Tip>
Next you should build your dataloaders and create your model:
```python
train_dataloader, eval_dataloader = get_dataloaders(batch_size)
model = create_model("resnet50d", pretrained=True, num_classes=len(label_to_id))
```
<Tip>
You build the model here so that the seed also controls the new weight initialization
</Tip>
As you are performing transfer learning in this example, the encoder of the model starts out frozen so that only the head of the model is
trained initially:
```python
for param in model.parameters():
    param.requires_grad = False

for param in model.get_classifier().parameters():
    param.requires_grad = True
```
Normalizing the batches of images will make training a little faster:
```python
mean = torch.tensor(model.default_cfg["mean"])[None, :, None, None]
std = torch.tensor(model.default_cfg["std"])[None, :, None, None]
```
To make these constants available on the active device, move them to the Accelerator's device:
```python
mean = mean.to(accelerator.device)
std = std.to(accelerator.device)
```
Next instantiate the rest of the PyTorch classes used for training:
```python
optimizer = torch.optim.Adam(params=model.parameters(), lr=3e-2 / 25)
lr_scheduler = OneCycleLR(optimizer=optimizer, max_lr=3e-2, epochs=5, steps_per_epoch=len(train_dataloader))
```
Then pass everything to [`~Accelerator.prepare`].
<Tip>
There is no specific order to remember, you just need to unpack the objects in the same order you gave them to the prepare method.
</Tip>
```python
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
    model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
)
```
Now train the model:
```python
for epoch in range(5):
    model.train()
    for batch in train_dataloader:
        inputs = (batch["image"] - mean) / std
        outputs = model(inputs)
        loss = torch.nn.functional.cross_entropy(outputs, batch["label"])
        accelerator.backward(loss)
        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()
```
The evaluation loop will look slightly different compared to the training loop. The number of elements seen and the number of
correct predictions in each batch will be accumulated in two counters:
```python
model.eval()
accurate = 0
num_elems = 0
```
Next you have the rest of your standard PyTorch loop:
```python
for batch in eval_dataloader:
    inputs = (batch["image"] - mean) / std
    with torch.no_grad():
        outputs = model(inputs)
    predictions = outputs.argmax(dim=-1)
```
Finally comes the last major difference: when performing distributed evaluation, the predictions and labels need to be passed through
[`~Accelerator.gather`] so that all of the data is available on every process and a properly calculated metric can be achieved:
```python
accurate_preds = accelerator.gather(predictions) == accelerator.gather(batch["label"])
num_elems += accurate_preds.shape[0]
accurate += accurate_preds.long().sum()
```
Now you just need to calculate the actual metric for this problem, and you can print it on the main process using [`~Accelerator.print`]:
```python
eval_metric = accurate.item() / num_elems
accelerator.print(f"epoch {epoch}: {100 * eval_metric:.2f}")
```
A full version of this training loop is available below:
```python
def training_loop(mixed_precision="fp16", seed: int = 42, batch_size: int = 64):
    set_seed(seed)
    # Initialize accelerator
    accelerator = Accelerator(mixed_precision=mixed_precision)

    # Build dataloaders
    train_dataloader, eval_dataloader = get_dataloaders(batch_size)

    # Instantiate the model (you build the model here so that the seed also controls new weight initializations)
    model = create_model("resnet50d", pretrained=True, num_classes=len(label_to_id))

    # Freeze the base model
    for param in model.parameters():
        param.requires_grad = False
    for param in model.get_classifier().parameters():
        param.requires_grad = True

    # You can normalize the batches of images to be a bit faster
    mean = torch.tensor(model.default_cfg["mean"])[None, :, None, None]
    std = torch.tensor(model.default_cfg["std"])[None, :, None, None]

    # To make these constants available on the active device, move them to the accelerator device
    mean = mean.to(accelerator.device)
    std = std.to(accelerator.device)

    # Instantiate the optimizer
    optimizer = torch.optim.Adam(params=model.parameters(), lr=3e-2 / 25)

    # Instantiate the learning rate scheduler
    lr_scheduler = OneCycleLR(optimizer=optimizer, max_lr=3e-2, epochs=5, steps_per_epoch=len(train_dataloader))

    # Prepare everything
    # There is no specific order to remember, you just need to unpack the objects in the same order you gave them to the
    # prepare method.
    model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
        model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
    )

    # Now you train the model
    for epoch in range(5):
        model.train()
        for batch in train_dataloader:
            inputs = (batch["image"] - mean) / std
            outputs = model(inputs)
            loss = torch.nn.functional.cross_entropy(outputs, batch["label"])
            accelerator.backward(loss)
            optimizer.step()
            lr_scheduler.step()
            optimizer.zero_grad()

        model.eval()
        accurate = 0
        num_elems = 0
        for batch in eval_dataloader:
            inputs = (batch["image"] - mean) / std
            with torch.no_grad():
                outputs = model(inputs)
            predictions = outputs.argmax(dim=-1)
            accurate_preds = accelerator.gather(predictions) == accelerator.gather(batch["label"])
            num_elems += accurate_preds.shape[0]
            accurate += accurate_preds.long().sum()

        eval_metric = accurate.item() / num_elems
        # Use accelerator.print to print only on the main process.
        accelerator.print(f"epoch {epoch}: {100 * eval_metric:.2f}")
```
## Using the notebook_launcher
All that's left is to use the [`notebook_launcher`].
You pass in the function, the arguments (as a tuple), and the number of processes to train on. (See the [documentation](../package_reference/launchers) for more information)
```python
from accelerate import notebook_launcher
```
```python
args = ("fp16", 42, 64)
notebook_launcher(training_loop, args, num_processes=2)
```
In the case of running on multiple nodes, you need to set up a Jupyter session at each node and run the launching cell at the same time.
For an environment containing 2 nodes (computers) with 8 GPUs each and the main computer with an IP address of "172.31.43.8", it would look like so:
```python
notebook_launcher(training_loop, args, master_addr="172.31.43.8", node_rank=0, num_nodes=2, num_processes=8)
```
And in the second Jupyter session on the other machine:
<Tip>
Notice how the `node_rank` has changed
</Tip>
```python
notebook_launcher(training_loop, args, master_addr="172.31.43.8", node_rank=1, num_nodes=2, num_processes=8)
```
In the case of running on the TPU, it would look like so:
```python
model = create_model("resnet50d", pretrained=True, num_classes=len(label_to_id))
args = (model, "fp16", 42, 64)
notebook_launcher(training_loop, args, num_processes=8)
```
To launch the training process with elasticity, enabling fault tolerance, you can use the `elastic_launch` feature provided by PyTorch. This requires setting additional parameters such as `rdzv_backend` and `max_restarts`. Here is an example of how to use `notebook_launcher` with elastic capabilities:
```python
notebook_launcher(
    training_loop,
    args,
    num_processes=2,
    max_restarts=3,
)
```
As it's running, it will print the progress as well as state how many devices you ran on. This tutorial was run with two GPUs:
```python out
Launching training on 2 GPUs.
epoch 0: 88.12
epoch 1: 91.73
epoch 2: 92.58
epoch 3: 93.90
epoch 4: 94.71
```
And that's it!
Please note that [`notebook_launcher`] ignores the 🤗 Accelerate config file. To launch based on the config, use:
```bash
accelerate launch
```
## Debugging
A common issue when running the `notebook_launcher` is receiving a "CUDA has already been initialized" error. This usually stems
from an import or prior code in the notebook that makes a call to the PyTorch `torch.cuda` sublibrary. To help narrow down what went wrong,
you can launch the `notebook_launcher` with `ACCELERATE_DEBUG_MODE=yes` in your environment and an additional check
will be made when spawning that a regular process can be created and utilize CUDA without issue. (Your CUDA code can still be run afterwards.)
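For example, a minimal sketch of enabling this check from inside the notebook before launching could look like:
```python
import os

# Ask Accelerate to verify at spawn time that a fresh process can initialize CUDA
os.environ["ACCELERATE_DEBUG_MODE"] = "yes"

notebook_launcher(training_loop, args, num_processes=2)
```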
## Conclusion
This notebook showed how to perform distributed training from inside of a Jupyter Notebook. Some key notes to remember:
- Make sure to save any code that uses CUDA (or CUDA imports) for the function passed to [`notebook_launcher`]
- Set the `num_processes` to be the number of devices used for training (such as number of GPUs, CPUs, TPUs, etc)
- If using the TPU, declare your model outside the training loop function

@@ -0,0 +1,24 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Overview
Welcome to the 🤗 Accelerate tutorials! These introductory guides will help catch you up to speed on working with 🤗 Accelerate.
You'll learn how to modify your code to have it work with the API seamlessly, how to launch your script properly,
and more!
These tutorials assume some basic knowledge of Python and familiarity with the PyTorch framework.
If you have any questions about 🤗 Accelerate, feel free to join and ask the community on our [forum](https://discuss.huggingface.co/c/accelerate/18).

@@ -0,0 +1,38 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# TPU training
A [TPU (Tensor Processing Unit)](https://cloud.google.com/tpu/docs/intro-to-tpu) is a type of hardware specifically designed for training models efficiently. Accelerate supports TPU training, but there are a few things you should be aware of, namely graph compilation. This tutorial briefly discusses compilation, and for more details, take a look at the [Training on TPUs with Accelerate](../concept_guides/training_tpu) guide.
## Compilation
A TPU creates a graph of all the operations in the training step such as the forward pass, backward pass and optimizer step. This is why the first training step always takes a while because building and compiling this graph takes time. But once compilation is complete, it is cached and all subsequent steps are much faster.
The key is to avoid compiling your code again or else training is super slow. This means all your operations must be exactly the same:
* all tensors in your batches must have the same length (for example, no dynamic padding for NLP tasks; see the padding sketch after this list)
* your code must be static (for example, no layers with for loops that have different lengths depending on the input, such as an LSTM)
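For instance, a minimal sketch using the Transformers tokenizer API (the model name, sample texts, and `max_length=128` are illustrative assumptions) that pads every batch to the same fixed length could look like:
```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
texts = ["a short example", "a slightly longer example sentence"]

# Pad every batch to the same fixed length so tensor shapes never change between steps
batch = tokenizer(texts, padding="max_length", max_length=128, truncation=True, return_tensors="pt")
```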
## Weight tying
A common language model design is to tie the weights of the embedding and softmax layers. However, moving the model to a TPU (either yourself or passing it to the [`~Accelerator.prepare`] method) breaks the weight tying and you'll need to retie the weights.
To add special behavior (like weight tying) in your script for TPUs, first check whether [`~Accelerator.distributed_type`] is `DistributedType.TPU`. Then you can use the [`~transformers.PreTrainedModel.tie_weights`] method to retie the weights.
```py
if accelerator.distributed_type == DistributedType.TPU:
    model.tie_weights()
```

@@ -0,0 +1,211 @@
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Troubleshoot
This guide provides solutions to some issues you might encounter when using Accelerate. Not all errors are covered because Accelerate is an active library that is continuously evolving and there are many different use cases and distributed training setups. If the solutions described here don't help with your specific error, please take a look at the [Ask for help](#ask-for-help) section to learn where and how to get help.
## Logging
Logging can help you identify where an error is coming from. In a distributed setup with multiple processes, logging can be a challenge, but Accelerate provides the [`~accelerate.logging`] utility to ensure logs are synchronized.
To troubleshoot an issue, use [`~accelerate.logging`] instead of the standard Python [`logging`](https://docs.python.org/3/library/logging.html#module-logging) module. Set the verbosity level (`INFO`, `DEBUG`, `WARNING`, `ERROR`, `CRITICAL`) with the `log_level` parameter, and then you can either:
1. Export the `log_level` as the `ACCELERATE_LOG_LEVEL` environment variable (an example export is shown right after this list).
2. Pass the `log_level` directly to `get_logger`.
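For the first option, a minimal sketch of exporting the variable before launching (using the same `{my_script.py}` placeholder as elsewhere in this guide) could look like:
```bash
export ACCELERATE_LOG_LEVEL=INFO
accelerate launch {my_script.py} --arg1 --arg2
```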
For example, to set `log_level="INFO"`:
```py
from accelerate.logging import get_logger
logger = get_logger(__name__, log_level="INFO")
```
By default, the log is called on the main process only. To call it on all processes, pass `main_process_only=False`.
If a log should be called on all processes and in order, also pass `in_order=True`.
```py
from accelerate.logging import get_logger
logger = get_logger(__name__, log_level="DEBUG")
# log all processes
logger.debug("thing_to_log", main_process_only=False)
# log all processes in order
logger.debug("thing_to_log", main_process_only=False, in_order=True)
```
## Hanging code and timeout errors
There can be many reasons why your code is hanging. Let's take a look at how to solve some of the most common issues that can cause your code to hang.
### Mismatched tensor shapes
Mismatched tensor shapes is a common issue that can cause your code to hang for a significant amount of time on a distributed setup.
When running scripts in a distributed setup, functions such as [`Accelerator.gather`] and [`Accelerator.reduce`] are necessary to grab tensors across devices to collectively perform operations on them. These (and other) functions rely on `torch.distributed` to perform a `gather` operation, which requires tensors to have the **exact same shape** across all processes. When the tensor shapes don't match, your code hangs and you'll eventually hit a timeout exception.
You can use Accelerate's operational debug mode to immediately catch this issue. We recommend enabling this mode during the `accelerate config` setup, but you can also enable it from the CLI, as an environment variable, or by manually editing the `config.yaml` file.
<hfoptions id="mismatch">
<hfoption id="CLI">
```bash
accelerate launch --debug {my_script.py} --arg1 --arg2
```
</hfoption>
<hfoption id="environment variable">
If enabling debug mode as an environment variable, you don't need to call `accelerate launch`.
```bash
ACCELERATE_DEBUG_MODE="1" torchrun {my_script.py} --arg1 --arg2
```
</hfoption>
<hfoption id="config.yaml">
Add `debug: true` to your `config.yaml` file.
```yaml
compute_environment: LOCAL_MACHINE
debug: true
```
</hfoption>
</hfoptions>
Once you enable debug mode, you should get a traceback that points to the tensor shape mismatch issue.
```py
Traceback (most recent call last):
  File "/home/zach_mueller_huggingface_co/test.py", line 18, in <module>
    main()
  File "/home/zach_mueller_huggingface_co/test.py", line 15, in main
    broadcast_tensor = broadcast(tensor)
  File "/home/zach_mueller_huggingface_co/accelerate/src/accelerate/utils/operations.py", line 303, in wrapper
accelerate.utils.operations.DistributedOperationException:

Cannot apply desired operation due to shape mismatches. All shapes across devices must be valid.

Operation: `accelerate.utils.operations.broadcast`
Input shapes:
  - Process 0: [1, 5]
  - Process 1: [1, 2, 5]
```
### Early stopping
For early stopping in distributed training, if each process has a specific stopping condition (e.g. validation loss), it may not be synchronized across all processes. As a result, a break can happen on process 0 but not on process 1 which will cause your code to hang indefinitely until a timeout occurs.
If you have early stopping conditionals, use the `set_breakpoint` and `check_breakpoint` methods to make sure all the processes
are ended correctly.
```py
# Assume `should_do_breakpoint` is a custom defined function that returns a conditional,
# and that conditional might be true only on process 1
if should_do_breakpoint(loss):
    accelerator.set_breakpoint()

# Later in the training script when we need to check for the breakpoint
if accelerator.check_breakpoint():
    break
```
### Low kernel versions on Linux
On Linux with kernel version < 5.5, hanging processes have been reported. To avoid this problem, upgrade your system to a later kernel version.
### MPI
If your distributed CPU training job using MPI is hanging, ensure that you have
[passwordless SSH](https://www.open-mpi.org/faq/?category=rsh#ssh-keys) set up (using keys) between the nodes. This means
that for all nodes in your hostfile, you should be able to SSH from one node to another without being prompted for a password.
Next, try to run the `mpirun` command as a sanity check. For example, the command below should print out the
hostnames for each of the nodes.
```bash
mpirun -f hostfile -n {number of nodes} -ppn 1 hostname
```
## CUDA Out-of-Memory
One of the most frustrating errors when it comes to running training scripts is hitting "CUDA Out-of-Memory". The entire script needs to be restarted and any progress is lost.
To address this problem, Accelerate provides the [`find_executable_batch_size`] utility that is heavily based on [toma](https://github.com/BlackHC/toma).
This utility retries code that fails due to OOM (out-of-memory) conditions and automatically lowers batch sizes. For each OOM condition, the algorithm decreases the batch size by half and retries the code until it succeeds.
To use [`find_executable_batch_size`], restructure your training function to include an inner function with `find_executable_batch_size` and build your dataloaders inside it. At a minimum, this only takes 4 new lines of code.
<Tip warning={true}>
The inner function **must** take the batch size as the first parameter, but we do not pass one to it when called. The wrapper handles this for you. Any object (models, optimizers) that consumes CUDA memory and is passed to the [`Accelerator`] also **must** be declared inside the inner function.
</Tip>
```diff
def training_function(args):
    accelerator = Accelerator()

+   @find_executable_batch_size(starting_batch_size=args.batch_size)
+   def inner_training_loop(batch_size):
+       nonlocal accelerator # Ensure they can be used in our context
+       accelerator.free_memory() # Free all lingering references
        model = get_model()
        model.to(accelerator.device)
        optimizer = get_optimizer()
        train_dataloader, eval_dataloader = get_dataloaders(accelerator, batch_size)
        lr_scheduler = get_scheduler(
            optimizer,
            num_training_steps=len(train_dataloader) * num_epochs
        )
        model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
            model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
        )
        train(model, optimizer, train_dataloader, lr_scheduler)
        validate(model, eval_dataloader)
+   inner_training_loop()
```
## Non-reproducible results between device setups
If you changed the device setup and observe different model performance, it is likely you didn't update your script when moving from one setup to another. Even if you're using the same script with the same batch size, the results will still be different on a TPU, multi-GPU, and single GPU.
For example, if you were training on a single GPU with a batch size of 16 and you move to a dual GPU setup, you need to change the batch size to 8 to have the same effective batch size. This is because when training with Accelerate, the batch size passed to the dataloader is the **batch size per GPU**.
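As a quick sanity check (assuming no gradient accumulation), the effective batch size is the per-device batch size multiplied by the number of processes:
```py
per_device_batch_size = 8   # batch size passed to the dataloader on each GPU
num_processes = 2           # e.g. a dual-GPU setup
effective_batch_size = per_device_batch_size * num_processes
print(effective_batch_size)  # 16, matching the original single-GPU batch size
```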
To make sure you can reproduce the results between the setups, make sure to use the same seed, adjust the batch size accordingly, and consider scaling the learning rate.
For more details and a quick reference for batch sizes, check out the [Comparing performance between different device setups](../concept_guides/performance) guide.
## Performance issues on different GPUs
If your multi-GPU setup consists of different GPUs, you may encounter some performance issues:
- There may be an imbalance in GPU memory between the GPUs. In this case, the GPU with the smaller memory will limit the batch size or the size of the model that can be loaded onto the GPUs.
- If you are using GPUs with different performance profiles, the performance will be driven by the slowest GPU you are using because the other GPUs will have to wait for it to complete its workload.
Vastly different GPUs within the same setup can lead to performance bottlenecks.
## Ask for help
If none of the solutions and advice here helped resolve your issue, you can always reach out to the community and Accelerate team for help.
- Ask for help on the Hugging Face forums by posting your question in the [🤗 Accelerate category](https://discuss.huggingface.co/c/accelerate/18). Make sure to write a descriptive post with relevant context about your setup and reproducible code to maximize the likelihood that your problem is solved!
- Post a question on [Discord](http://hf.co/join/discord), and let the team and the community help you.
- Create an Issue on the 🤗 Accelerate [GitHub repository](https://github.com/huggingface/accelerate/issues) if you think you've found a bug related to the library. Include context regarding the bug and details about your distributed setup to help us better figure out what's wrong and how we can fix it.

@@ -0,0 +1,341 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Handling big models for inference
When loading a pre-trained model in PyTorch, the usual workflow looks like this:
```py
import torch
my_model = ModelClass(...)
state_dict = torch.load(checkpoint_file)
my_model.load_state_dict(state_dict)
```
In plain English, those steps are:
1. Create the model with randomly initialized weights
2. Load the model weights (in a dictionary usually called a state dict) from the disk
3. Load those weights inside the model
While this works very well for regularly sized models, this workflow has some clear limitations when we deal with a huge model: in step 1, we load a full version of the model in RAM, and spend some time randomly initializing the weights (which will be discarded in step 3). In step 2, we load another full version of the model in RAM, with the pre-trained weights. If you're loading a model with 6 billion parameters, this means you will need 24GB of RAM for each copy of the model, so 48GB in total (half of it to load the model in FP16).
<Tip warning={true}>
This API is quite new and still in its experimental stage. While we strive to provide a stable API, it's possible some small parts of the public API will change in the future.
</Tip>
## How the Process Works: A Quick Overview
<Youtube id="MWCSGj9jEAo" />
## How the Process Works: Working with Code
### Instantiating an empty model
The first tool 🤗 Accelerate introduces to help with big models is a context manager [`init_empty_weights`] that helps you initialize a model without using any RAM so that step 1 can be done on models of any size. Here is how it works:
```py
from accelerate import init_empty_weights
with init_empty_weights():
    my_model = ModelClass(...)
```
For instance:
```py
with init_empty_weights():
    model = nn.Sequential(*[nn.Linear(10000, 10000) for _ in range(1000)])
```
initializes an empty model with a bit more than 100B parameters. Behind the scenes, this relies on the meta device introduced in PyTorch 1.9. During the initialization under the context manager, each time a parameter is created, it is instantly moved to that device.
<Tip warning={true}>
You can't move a model initialized like this on CPU or another device directly, since it doesn't have any data. It's also very likely that a forward pass with that empty model will fail, as not all operations are supported on the meta device.
</Tip>
### Sharded checkpoints
It's possible your model is so big that even a single copy won't fit in RAM. That doesn't mean it can't be loaded: if you have one or several GPUs, that is more memory available to store your model. In this case, it's better if your checkpoint is split into several smaller files that we call checkpoint shards.
🤗 Accelerate will handle sharded checkpoints as long as they follow this format: your checkpoint should be in a folder, with several files containing the partial state dicts, and there should be a JSON-format index that contains a dictionary mapping parameter names to the file containing their weights. You can easily shard your model with [`~Accelerator.save_model`]. For instance, we could have a folder containing:
```bash
first_state_dict.bin
index.json
second_state_dict.bin
```
with index.json being the following file:
```
{
  "linear1.weight": "first_state_dict.bin",
  "linear1.bias": "first_state_dict.bin",
  "linear2.weight": "second_state_dict.bin",
  "linear2.bias": "second_state_dict.bin"
}
```
and `first_state_dict.bin` containing the weights for `"linear1.weight"` and `"linear1.bias"`, and `second_state_dict.bin` the ones for `"linear2.weight"` and `"linear2.bias"`.
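For reference, producing such a sharded checkpoint could be as simple as the following sketch (the `max_shard_size` value is illustrative, and [`~Accelerator.save_model`] chooses its own shard file names):
```py
accelerator.save_model(model, save_directory, max_shard_size="1GB")
```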
### Loading weights
The second tool 🤗 Accelerate introduces is a function [`load_checkpoint_and_dispatch`], that will allow you to load a checkpoint inside your empty model. This supports full checkpoints (a single file containing the whole state dict) as well as sharded checkpoints. It will also automatically dispatch those weights across the devices you have available (GPUs, CPU RAM), so if you are loading a sharded checkpoint, the maximum RAM usage will be the size of the biggest shard.
If you want to use big model inference with 🤗 Transformers models, check out this [documentation](https://huggingface.co/docs/transformers/main/en/main_classes/model#large-model-loading).
Here is how we can use this to load the [GPT2-1.5B](https://huggingface.co/marcsun13/gpt2-xl-linear-sharded) model.
Let's download the sharded version of this model.
```bash
pip install huggingface_hub
```
```py
from huggingface_hub import snapshot_download
checkpoint = "marcsun13/gpt2-xl-linear-sharded"
weights_location = snapshot_download(repo_id=checkpoint)
```
In order to initialize the model, we will use the library minGPT.
```bash
git clone https://github.com/karpathy/minGPT.git
pip install minGPT/
```
```py
from accelerate import init_empty_weights
from mingpt.model import GPT
model_config = GPT.get_default_config()
model_config.model_type = 'gpt2-xl'
model_config.vocab_size = 50257
model_config.block_size = 1024
with init_empty_weights():
    model = GPT(model_config)
```
Then, load the checkpoint we just downloaded with:
```py
from accelerate import load_checkpoint_and_dispatch
model = load_checkpoint_and_dispatch(
    model, checkpoint=weights_location, device_map="auto", no_split_module_classes=['Block']
)
```
By passing `device_map="auto"`, we tell 🤗 Accelerate to determine automatically where to put each layer of the model depending on the available resources:
- first, we use the maximum space available on the GPU(s)
- if we still need space, we store the remaining weights on the CPU
- if there is not enough RAM, we store the remaining weights on the hard drive as memory-mapped tensors
#### `no_split_module_classes`
This parameter will indicate that some of the modules with the name `"Block"` should not be split across different devices. You should set here all blocks that
include a residual connection of some kind.
#### The `device_map`
You can see the `device_map` that 🤗 Accelerate picked by accessing the `hf_device_map` attribute of your model:
```py
model.hf_device_map
```
```python out
{'transformer.wte': 0,
'transformer.wpe': 0,
'transformer.drop': 0,
'transformer.h.0': 0,
...
'transformer.h.21': 0,
'transformer.h.22': 1,
'transformer.h.23': 1,
'transformer.h.24': 1,
...
'transformer.h.47': 1,
'transformer.ln_f': 1,
'lm_head': 1}
```
It's fully possible to create your own device map for the layers to use as well, specifying the GPU device to use (a number), `"cpu"`, or `"disk"` and pass this in:
```python
device_map = {
    "transformer.wte": "cpu",
    "transformer.wpe": 0,
    "transformer.drop": "cpu",
    "transformer.h.0": "disk"
}

model = load_checkpoint_and_dispatch(
    model, checkpoint=weights_location, device_map=device_map
)
```
### Run the model
Now that we have done this, our model lies across several devices, and maybe the hard drive. But it can still be used as a regular PyTorch model:
```py
from mingpt.bpe import BPETokenizer
tokenizer = BPETokenizer()
inputs = tokenizer("Hello, my name is").to(0)
outputs = model.generate(inputs, max_new_tokens=10, do_sample=False)[0]
tokenizer.decode(outputs.cpu().squeeze())
```
Behind the scenes, 🤗 Accelerate added hooks to the model, so that:
- at each layer, the inputs are put on the right device (so even if your model is spread across several GPUs, it works)
- for the weights offloaded on the CPU, they are put on a GPU just before the forward pass and cleaned up just after
- for the weights offloaded on the hard drive, they are loaded in RAM then put on a GPU just before the forward pass and cleaned up just after
This way, your model can run for inference even if it doesn't fit on one of the GPUs or the CPU RAM!
<Tip warning={true}>
This only supports the inference of your model, not training. Most of the computation happens behind `torch.no_grad()` context managers to avoid spending some GPU memory with intermediate activations.
</Tip>
### Designing a device map
You can let 🤗 Accelerate handle the device map computation by setting `device_map` to one of the supported options (`"auto"`, `"balanced"`, `"balanced_low_0"`, `"sequential"`) or create one yourself if you want more control over where each layer should go.
<Tip>
You can derive all sizes of the model (and thus compute a `device_map`) on a model that is on the meta device.
</Tip>
All the options will produce the same result when you don't have enough GPU memory to accommodate the whole model (which is to fit everything that can on the GPU, then offload weights on the CPU or even on the disk if there is not enough RAM).
When you have more GPU memory available than the model size, here is the difference between each option:
- `"auto"` and `"balanced"` evenly split the model on all available GPUs, making it possible for you to use a batch size greater than 1.
- `"balanced_low_0"` evenly splits the model on all GPUs except the first one, and only puts on GPU 0 what does not fit on the others. This option is great when you need to use GPU 0 for some processing of the outputs, like when using the `generate` function for Transformers models
- `"sequential"` will fit what it can on GPU 0, then move on GPU 1 and so forth (so won't use the last GPUs if it doesn't need to).
<Tip>
The options `"auto"` and `"balanced"` produce the same results for now, but the behavior of `"auto"` might change in the future if we find a strategy that makes more sense, while `"balanced"` will stay stable.
</Tip>
First note that you can limit the memory used on each GPU by using the `max_memory` argument (available in [`infer_auto_device_map`] and in all functions using it). When setting `max_memory`, you should pass along a dictionary containing the GPU identifiers (for instance `0`, `1` etc.) and the `"cpu"` key for the maximum RAM you want to use for CPU offload. The values can either be an integer (in bytes) or a string representing a number with its unit, such as `"10GiB"` or `"10GB"`.
Here is an example where we don't want to use more than 10GiB on each of the two GPUs and no more than 30GiB of CPU RAM for the model weights:
```python
from accelerate import infer_auto_device_map
device_map = infer_auto_device_map(my_model, max_memory={0: "10GiB", 1: "10GiB", "cpu": "30GiB"})
```
<Tip warning={true}>
When a first allocation happens in PyTorch, it loads CUDA kernels which take about 1-2GB of memory depending on the GPU. Therefore you always have less usable memory than the actual size of the GPU. To see how much memory is actually used do `torch.ones(1).cuda()` and look at the memory usage.
When you create memory maps with `max_memory`, make sure to adjust the available memory accordingly to avoid out-of-memory errors.
</Tip>
Additionally, if you do some additional operations with your outputs without placing them back on the CPU (for instance inside the `generate` method of Transformers) and if you placed your inputs on a GPU, that GPU will consume more memory than the others (Accelerate always places the output back on the device of the input). Therefore, if you would like to optimize the maximum batch size and you have many GPUs, give the first GPU less memory. For example, with BLOOM-176B on an 8x80GB A100 setup, the close-to-ideal map is:
```python
max_memory = {0: "30GiB", 1: "46GiB", 2: "46GiB", 3: "46GiB", 4: "46GiB", 5: "46GiB", 6: "46GiB", 7: "46GiB"}
```
As you can see, we gave the remaining 7 GPUs ~50% more memory than GPU 0.
If you opt to fully design the `device_map` yourself, it should be a dictionary with keys being module names of your model and values being a valid device identifier (for instance an integer for the GPUs), `"cpu"` for CPU offload, or `"disk"` for disk offload. The keys need to cover the whole model; you can then define your device map as you wish. For instance, if your model has two blocks (let's say `block1` and `block2`) which each contain three linear layers (let's say `linear1`, `linear2` and `linear3`), a valid device map can be:
```python
device_map = {"block1": 0, "block2": 1}
```
another one that is valid could be:
```python
device_map = {"block1": 0, "block2.linear1": 0, "block2.linear2": 1, "block2.linear3": 1}
```
On the other hand, this one is not valid as it does not cover every parameter of the model:
```python
device_map = {"block1": 0, "block2.linear1": 1, "block2.linear2": 1}
```
<Tip>
To be the most efficient, make sure your device map puts the parameters on the GPUs in a sequential manner (e.g. don't put one of the first weights on GPU 0, then weights on GPU 1 and the last weight back to GPU 0) to avoid making many transfers of data between the GPUs.
</Tip>
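If you want more control than the automatic options but don't want to write the full map by hand, one possible workflow (a sketch; the model class and the tweaked key below are placeholders for your own module names) is to start from [`infer_auto_device_map`] and adjust the result before dispatching:
```python
from accelerate import infer_auto_device_map, init_empty_weights, load_checkpoint_and_dispatch

# Build the model skeleton on the meta device so deriving sizes costs no memory
with init_empty_weights():
    model = MyModel()  # hypothetical model class

# Compute a starting map under explicit memory limits...
device_map = infer_auto_device_map(model, max_memory={0: "10GiB", 1: "10GiB", "cpu": "30GiB"})
# ...then pin a specific submodule before dispatching (illustrative module name)
device_map["block2.linear3"] = 0

model = load_checkpoint_and_dispatch(model, checkpoint=weights_location, device_map=device_map)
```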
## CPU offload only
If you want to offload your model to the CPU, you can use [`cpu_offload`]. As a result, all parameters of the model will be offloaded and only one copy of the state dict of the model will be kept. During the forward pass, parameters will be extracted from that state dict, put on the execution device as they are needed, and then offloaded again.
```python
from accelerate import cpu_offload

cpu_offload(model, execution_device)
```
You can also use [`cpu_offload_with_hook`]. This function offloads a model to the CPU and puts it back on the execution device when executed. The difference with [`cpu_offload`] is that the model stays on the execution device after the forward pass and is only offloaded again when the `offload` method of the returned `hook` is called. Furthermore, [`cpu_offload_with_hook`] is more performant but less memory-saving. It is useful for pipelines running a model in a loop:
```python
from accelerate import cpu_offload_with_hook

model_1, hook_1 = cpu_offload_with_hook(model_1, execution_device)
model_2, hook_2 = cpu_offload_with_hook(model_2, execution_device, prev_module_hook=hook_1)
model_3, hook_3 = cpu_offload_with_hook(model_3, execution_device, prev_module_hook=hook_2)

hid_1 = model_1(input)
for i in range(50):
    # model_1 is offloaded to the CPU at the first iteration, model_2 stays on the GPU for this whole loop.
    hid_2 = model_2(hid_1)
    # model_2 is offloaded to the CPU just before this forward.
    hid_3 = model_3(hid_2)
    # For model_3, you need to manually call the hook offload method.
    hook_3.offload()
```
## Disk offload only
To perform disk offload, you can use [`disk_offload`]. As a result, all parameters of the model will be offloaded as memory-mapped arrays in a given folder. During the forward pass, parameters will be accessed from that folder and put on the execution device as they are needed, then offloaded again.
```python
from accelerate import disk_offload

disk_offload(model, offload_dir, execution_device)
```
## Limits and further development
We are aware of the current limitations in the API:
- [`infer_auto_device_map`] (or `device_map="auto"` in [`load_checkpoint_and_dispatch`]) tries to maximize GPU and CPU RAM it sees available when you execute it. While PyTorch is very good at managing GPU RAM efficiently (and giving it back when not needed), it's not entirely true with Python and CPU RAM. Therefore, an automatically computed device map might be too intense on the CPU. Move a few modules to the disk device if you get crashes due to a lack of RAM.
- [`infer_auto_device_map`] (or `device_map="auto"` in [`load_checkpoint_and_dispatch`]) attributes devices sequentially (to avoid moving things back and forth) so if your first layer is bigger than the size of the GPU you have, it will end up with everything on the CPU/Disk.
- [`load_checkpoint_and_dispatch`] and [`load_checkpoint_in_model`] do not perform any check on the correctness of your state dict compared to your model at the moment (this will be fixed in a future version), so you may get some weird errors if trying to load a checkpoint with mismatched or missing keys.
- The model parallelism used when your model is split on several GPUs is naive and not optimized, meaning that only one GPU works at a given time while the others sit idle.
- When weights are offloaded on the CPU/hard drive, there is no pre-fetching (yet, we will work on this for future versions) which means the weights are put on the GPU when they are needed and not before.
- Hard-drive offloading might be very slow if the hardware you run on does not have fast communication between disk and CPU (like NVMes).


@ -0,0 +1,130 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Deferring Executions
When you run your usual script, instructions are executed in order. Using 🤗 Accelerate to deploy your script on several
GPUs at the same time introduces a complication: while each process executes all instructions in order, some may be
faster than others.
You might need to wait for all processes to have reached a certain point before executing a given instruction. For
instance, you shouldn't save a model before being sure every process is done with training, and you wouldn't want to
continue training before all the model weights have been loaded in. To do this, just write the following line in your code:
```
accelerator.wait_for_everyone()
```
This instruction will block all the processes that arrive first until all the other processes have reached that
point (if you run your script on just one GPU or CPU, this won't do anything).
A few example cases of when to use this utility are listed below:
<Tip>
Some of these are utilized with the [`~Accelerator.main_process_first`] context manager, which utilizes [`~Accelerator.wait_for_everyone`] to
run a particular set of code on the main process before triggering and launching the other processes.
</Tip>
## Downloading a Dataset
When downloading a dataset, you should download it first on the main process and then load the cached dataset afterward
<Tip>
`load_dataset` will perform a lock under the hood to stop multiple downloads from happening at once, but if you are downloading something
not using this library you should use this method.
</Tip>
```python
with accelerator.main_process_first():
    datasets = load_dataset("glue", "mrpc")
```
Under the hood this is the same as calling:
```python
# First do something on the main process
if accelerator.is_main_process:
    datasets = load_dataset("glue", "mrpc")
else:
    accelerator.wait_for_everyone()

# And then send it to the rest of them
if not accelerator.is_main_process:
    datasets = load_dataset("glue", "mrpc")
else:
    accelerator.wait_for_everyone()
```
## Saving the `state_dict`
When saving the `state_dict` of the model, since you would normally save one file on just the main process,
you should specify that:
```python
if accelerator.is_main_process:
    model = accelerator.unwrap_model(model)
    torch.save(model.state_dict(), "weights.pth")
```
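Since this guide is about deferring execution, you will often want to combine this with [`~Accelerator.wait_for_everyone`] so the main process only writes the file once every process has finished its work. A minimal sketch of that pattern:
```python
# Block until every process is done, then save a single copy from the main process
accelerator.wait_for_everyone()
if accelerator.is_main_process:
    unwrapped_model = accelerator.unwrap_model(model)
    torch.save(unwrapped_model.state_dict(), "weights.pth")
```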
## Loading in the `state_dict`
When loading in the `state_dict` to a model, optimizer, or scheduler, you should wait
for all workers to have the weights loaded in before moving on to training
```python
with accelerator.main_process_first():
    state = torch.load("weights.pth")
    model.load_state_dict(state)
```
## Applying a multi-worker CPU operation
Applying a `map()` operation on multiple workers, such as tokenizing, should be done on the
main process first and then propagated to each one.
```python
datasets = load_dataset("glue", "mrpc")

with accelerator.main_process_first():
    tokenized_datasets = datasets.map(
        tokenize_function,
        batched=True,
        remove_columns=["idx", "sentence1", "sentence2"],
    )
```
## Applying checks such as Early Stopping
To have a check that works with a flag set by a particular process, the `set_trigger` and `check_trigger` API should be used. Useful examples
for doing so can include situations such as using early stopping and monitoring the loss (as each loss slightly differs on each process).
Call [`Accelerator.set_trigger`] when your condition has been met, and [`Accelerator.check_trigger`] when checking if that condition has been met in any process:
```python
for (x, y) in data_loader:
    logits = model(x)
    loss = loss_func(logits, y)
    # Assume `should_do_early_stopping` is a custom defined function that returns a conditional
    if should_do_early_stopping(loss):
        accelerator.set_trigger()

    # Later in the training script when we need to check for the breakpoint
    if accelerator.check_trigger():
        break
```


@ -0,0 +1,192 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Moving between FSDP And DeepSpeed
🤗 Accelerate offers flexibility in training frameworks by integrating two extremely powerful tools for distributed training, namely [Pytorch FSDP](../usage_guides/fsdp) and [Microsoft DeepSpeed](../usage_guides/deepspeed). The aim of this tutorial is to draw parallels, as well as to outline potential differences, to empower the user to switch seamlessly between these two frameworks.
<Tip>
To switch between the frameworks, we recommend launching code with 🤗 `accelerate launch`, passing in the correct config file with `--config_file`, or passing in the respective arguments directly for [FSDP and DeepSpeed](../package_reference/cli#accelerate-launch).
Example 🤗 Accelerate configurations can be found here for [DeepSpeed](../usage_guides/deepspeed#accelerate-deepspeed-plugin) and [FSDP](../usage_guides/fsdp#how-it-works-out-of-the-box), or in the [example zoo under "Launch Configurations"](../usage_guides/explore).
</Tip>
<Tip warning={true}>
This tutorial is for single-node, multi-GPU scenarios only.
</Tip>
## Configuring Functionalities
Model tensors are split into different GPUs in an attempt to scale up model sizes; this is termed *sharding* in FSDP, and *partitioning* in DeepSpeed. FSDP sharding and DeepSpeed ZeRO (partitioning) stages are configured by `--fsdp_sharding_strategy`, and `--zero_stage`, respectively. In particular, FSDP `FULL_SHARD` maps to DeepSpeed ZeRO stage `3`; see this [comprehensive mapping between FSDP sharding and DeepSpeed ZeRO settings](../usage_guides/fsdp#mapping-between-fsdp-sharding-strategies-and-deepspeed-zero-stages). The below table summarizes and groups similar settings:
Group | Framework | Configuration | Example | Restrictions (if any)
--|--|--|--|--
sharding / partitioning | FSDP<br>DeepSpeed | `--fsdp_sharding_strategy`<br>`--zero_stage` | `1` (`FULL_SHARD`) <br>`3` |
offload | FSDP<br>DeepSpeed | `--fsdp_offload_params`<br>`--offload_param_device`<br>`--offload_optimizer_device` | `true`<br>`cpu`<br>`cpu` | all or nothing <br><br>
model loading | FSDP<br>DeepSpeed | <span style="white-space:nowrap;">`--fsdp_cpu_ram_efficient_loading`</span><br>`--zero3_init_flag` | `true`<br>`true` | <br>only ZeRO 3
efficient checkpointing | FSDP<br>DeepSpeed | `--fsdp_state_dict_type`<br>`--zero3_save_16bit_model` | `SHARDED_STATE_DICT`<br>`true` | <br>only ZeRO 3
weights prefetching | FSDP<br><br>DeepSpeed | `--fsdp_forward_prefetch`<br>`--fsdp_backward_prefetch`<br>None | `true`<br>`BACKWARD_PRE` | <br><br>
model | FSDP<br><br>DeepSpeed | `--fsdp_auto_wrap_policy`<br><span style="white-space:nowrap;">`--fsdp_transformer_layer_cls_to_wrap`</span><br>None | `TRANSFORMER_BASED_WRAP`<br><Layer Class> |<br>Usually not needed <br>Transparent to user.
parameters summoning | FSDP<br>DeepSpeed | `--fsdp_use_orig_params`<br>None | `true` | required for `torch.compile`<br>Transparent to user
parameters syncing | FSDP<br>DeepSpeed | `--fsdp_sync_module_states`<br>None | `true` |
training | FSDP<br>DeepSpeed | None<br>`--gradient_accumulation_steps`<br>`--gradient_clipping` | <br>`auto`<br>`auto` | Transparent to user
For detailed descriptions of the above, refer to [🤗 `Accelerate` launch documentation](../package_reference/cli#accelerate-launch).
<Tip>
To access other DeepSpeed configurations, such as mixed precision settings,
you need to pass in a `--deepspeed_config_file`, see the [documentation](../usage_guides/deepspeed#deepspeed-config-file).
DeepSpeed can also be configured via [`DeepSpeedPlugin`], e.g., `DeepSpeedPlugin.zero_stage` is the equivalent of `--zero_stage`, and `DeepSpeedPlugin.hf_ds_config` can be used to pass `--deepspeed_config_file`.
</Tip>
<Tip>
FSDP can also be configured via [`FullyShardedDataParallelPlugin`], e.g., `FullyShardedDataParallelPlugin.sharding_strategy` is the equivalent of `--fsdp_sharding_strategy`.
</Tip>
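As a rough sketch of the plugin route (not a full training setup; the stage and sharding strategy values are only examples), either plugin can be passed directly to the [`Accelerator`]:
```python
from accelerate import Accelerator
from accelerate.utils import DeepSpeedPlugin, FullyShardedDataParallelPlugin

# DeepSpeed ZeRO stage 3, roughly the programmatic equivalent of `--zero_stage 3`
deepspeed_plugin = DeepSpeedPlugin(zero_stage=3, gradient_accumulation_steps=1)
accelerator = Accelerator(deepspeed_plugin=deepspeed_plugin)

# Or FSDP instead, roughly the equivalent of `--fsdp_sharding_strategy FULL_SHARD`
# from torch.distributed.fsdp import ShardingStrategy
# fsdp_plugin = FullyShardedDataParallelPlugin(sharding_strategy=ShardingStrategy.FULL_SHARD)
# accelerator = Accelerator(fsdp_plugin=fsdp_plugin)
```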
### Checkpointing
Do note that FSDP can be configured via `--fsdp_state_dict_type` to save either full or sharded checkpoints.
<Tip>
For DeepSpeed Zero3, one could pass a `--zero3_save_16bit_model true`, which conveniently consolidates the model to a single rank and saves; this is the FSDP equivalent of `fsdp_state_dict_type: FULL_STATE_DICT`.
</Tip>
<Tip warning={true}>
For large models, consolidating the model to a single rank can be very slow.
</Tip>
<Tip>
For quicker checkpointing, for FSDP use `fsdp_state_dict_type: SHARDED_STATE_DICT`, and for DeepSpeed Zero3 [use the `zero_to_fp32.py` script to post-convert sharded checkpoints](https://www.deepspeed.ai/tutorials/zero/#extracting-weights).
</Tip>
### Offloading
FSDP only allows *all-or-nothing* offload (i.e., either offload parameters, gradients, and optimizer, or keep them all in GPU), but DeepSpeed can offload parameters and optimizer differently. Furthermore, DeepSpeed also supports [offloading to NVME](https://www.deepspeed.ai/docs/config-json/#parameter-offloading).
### Prefetching
FSDP allows two prefetching configurations `--fsdp_forward_prefetch` and `--fsdp_backward_prefetch` to improve overlap of comms / computation at a cost of extra memory, see [FSDP documentation](https://pytorch.org/docs/stable/fsdp.html).
For DeepSpeed, prefetching is turned on when needed, depending on certain hyper-params like `stage3_param_persistence_threshold`, `stage3_max_reuse_distance`, etc., [that can be configured for ZeRO-3](https://www.deepspeed.ai/docs/config-json/#parameter-offloading); 🤗 `accelerate` may set these hyper-params automatically if you don't set them explicitly in the DeepSpeed config file.
<Tip>
For FSDP set `fsdp_backward_prefetch: BACKWARD_PRE` for improved throughputs if memory allows.
</Tip>
### Model Loading
While FSDP requires an explicit `--fsdp_cpu_ram_efficient_loading true` to activate efficient model loading, 🤗 `transformers` will activate a similar feature whenever DeepSpeed ZeRO-3 is used.
<Tip>
For FSDP, whenever setting `--fsdp_cpu_ram_efficient_loading true`, 🤗 `accelerate` will automatically set `sync_module_states` to true.
For RAM-efficient loading, the weights will be loaded only on a single rank, and thus `sync_module_states` is required to broadcast the weights to the other ranks.
</Tip>
### Model
FSDP requires an explicit `--fsdp_auto_wrap_policy` for the algorithm to decide how to schedule the all-gather and reduce-scatter operations. But for DeepSpeed this is transparent to the user.
<Tip>
For FSDP, simply set `fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP`. With the latest [`transformers`] versions, we try our best to figure out the suitable `fsdp_transformer_layer_cls_to_wrap` for HF transformers models. However, if you get an error regarding it, please specify this.
</Tip>
### Parameters Summoning
FSDP requires an explicit `--fsdp_use_orig_params` flag if using `torch.compile`, see [the PyTorch documentation](https://pytorch.org/docs/stable/fsdp.html#module-torch.distributed.fsdp). For DeepSpeed this is transparent to the user.
<Tip>
For FSDP, when using `torch.compile` please set `fsdp_use_orig_params: True`.
</Tip>
## Training
DeepSpeed requires explicit `--gradient_accumulation_steps` and `--gradient_clipping` flags. For FSDP this is transparent to the user.
<Tip>
When using DeepSpeed, set `gradient_accumulation_steps: "auto"` and `gradient_clipping: "auto"` to automatically pick up values set in the [`Accelerator`] or [`TrainingArguments`] (if using `transformers`).
</Tip>
## On Differences in Data Precision Handling
To discuss how data precision is handled in both FSDP and DeepSpeed, it is instructive to first give an overview of how model parameters are handled in these frameworks. Before the model / optimizer parameters are distributed across GPUs, parameter preparation is involved to first "flatten" them to one-dimensional [`torch.Tensor`](https://pytorch.org/docs/stable/tensors.html#torch-tensor). The implementation of FSDP / DeepSpeed varies with respect to the `dtype` in which these "flattened" parameters are stored, and there are ramifications with regard to how [`torch.Optimizer`](https://pytorch.org/docs/stable/optim.html#module-torch.optim) allocates its `dtype`s. The table below outlines the processes for both frameworks; the "Local" column indicates the process occurring at a per-GPU level, therefore any memory overheads from upcasting should be understood to be amortized by the number of GPUs used.
<Tip>
As a rule of thumb, for stable training with automatic mixed precision, all the trainable parameters have to be in `torch.float32`.
</Tip>
Process | Local | Framework | Details
--|--|--|--
Loading, i.e., [`AutoModel.from_pretrained(..., torch_dtype=torch_dtype)`] |
Preparation, i.e., creation of "flat params" | ✅ | FSDP<br>DeepSpeed | created in `torch_dtype`.<br> disregards `torch_dtype`, created in `float32`.
Optimizer initialization | ✅ | FSDP<br>DeepSpeed | creates parameters in `torch_dtype`<br> creates parameters in `float32`
Training Step, i.e., forward, backward, reduction | | FSDP<br>DeepSpeed | follows [`MixedPrecision`](https://pytorch.org/docs/stable/fsdp.html#torch.distributed.fsdp.MixedPrecision)<br> follows `deepspeed_config_file` mixed precision settings.
Optimizer (Pre-Step) | ✅ | FSDP<br>DeepSpeed | upcasting (if any) to `torch_dtype`<br>upcasted to `float32`
Optimizer (Actual Step) | ✅ | FSDP<br>DeepSpeed | occurs in `torch_dtype` <br> occurs in `float32`.
<Tip warning={true}>
Therefore, when using DeepSpeed with a small number of GPUs, be aware of potentially significant memory overheads due to the upcasting during preparation.
</Tip>
<Tip>
With FSDP, in the absence of mixed precision, it is possible to operate the [`torch.Optimizer`](https://pytorch.org/docs/stable/optim.html#module-torch.optim) in low-precision `torch_dtype`, which may be helpful when using a small number of GPUs.
</Tip>
<Tip warning={true}>
With mixed precision, FSDP and DeepSpeed will upcast in the model preparation step (c.f. table above). But do note that FSDP will then save checkpoints in the upcasted precision; Deepspeed may still save low precision checkpoints if `--zero3_save_16bit_model` is specified.
</Tip>
To clarify the above table, consider the concrete examples below; the optimizer pre- and actual steps are combined for brevity. With FSDP it is possible to operate in the two modes shown below, but DeepSpeed can only operate in one.
Framework | Model Loading (`torch_dtype`) | Mixed Precision | Preparation (Local) | Training | Optimizer (Local)
--|--|--|--|--|--
FSDP | bf16 | default (none) | bf16 | bf16 | bf16
FSDP | bf16 | bf16 | fp32 | bf16 | fp32
DeepSpeed | bf16 | bf16 | fp32 | bf16 | fp32
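As a hedged sketch of the two knobs discussed above, the `torch_dtype` is chosen at load time while the mixed precision policy is chosen on the [`Accelerator`]; the checkpoint name below is only an example:
```python
import torch
from accelerate import Accelerator
from transformers import AutoModelForCausalLM

accelerator = Accelerator(mixed_precision="bf16")
# Weights are loaded in bf16; the flat params / partitions are then created during `prepare`
model = AutoModelForCausalLM.from_pretrained("gpt2", torch_dtype=torch.bfloat16)
model = accelerator.prepare(model)
```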


@ -0,0 +1,184 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Gradient Synchronization
PyTorch's distributed module operates by communicating back and forth between all of the GPUs in your system.
This communication takes time, and ensuring all processes know the states of each other happens at particular triggerpoints
when using the `ddp` module.
These triggerpoints are added to the PyTorch model, specifically its `forward()` and `backward()` methods.
This happens when the model is wrapped with `DistributedDataParallel`:
```python
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel
model = nn.Linear(10, 10)
ddp_model = DistributedDataParallel(model)
```
In 🤗 Accelerate this conversion happens automatically when calling [`~Accelerator.prepare`] and passing in your model.
```diff
+ from accelerate import Accelerator
+ accelerator = Accelerator()
import torch.nn as nn
- from torch.nn.parallel import DistributedDataParallel
model = nn.Linear(10,10)
+ model = accelerator.prepare(model)
```
## The slowdown in gradient accumulation
You now understand that PyTorch adds hooks to the `forward` and `backward` method of your PyTorch model when
training in a distributed setup. But how does this risk slowing down your code?
In DDP (distributed data parallel), processes are expected to perform certain operations in a specific order at specific
points, and these must also occur at roughly the same time before moving on.
The most direct example is when you update model parameters through
`optimizer.step()`.
Without gradient accumulation, all instances of the model need to have updated
their gradients computed, collated, and updated before moving on to the next
batch of data.
When performing gradient accumulation, you accumulate `n` loss gradients and
skip `optimizer.step()` until `n` batches have been reached. Since all training
processes only need to synchronize by the time `optimizer.step()` is called,
synchronizing the gradients on every `backward()` without any modification to your
training step adds needless inter-process communication that can cause a significant slowdown.
How can you avoid this overhead?
## Solving the slowdown problem
Since you are skipping model parameter updates when training on these batches, their gradients do not need to be synchronized until the point where `optimizer.step()` is actually called.
PyTorch cannot automagically tell when you need to do this, but they do provide a tool to help through the [`no_sync`](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel.no_sync) context manager
that is added to your model after converting it to DDP.
Under this context manager, PyTorch will skip synchronizing the gradients when
`.backward()` is called, and the first call to `.backward()` outside this
context manager will trigger the synchronization. See an example below:
```python
ddp_model, dataloader, optimizer = accelerator.prepare(model, dataloader, optimizer)

for index, batch in enumerate(dataloader):
    inputs, targets = batch
    # Trigger gradient synchronization on the last batch
    if index != (len(dataloader) - 1):
        with ddp_model.no_sync():
            # Gradients only accumulate
            outputs = ddp_model(inputs)
            loss = loss_func(outputs, targets)
            accelerator.backward(loss)
    else:
        # Gradients finally sync
        outputs = ddp_model(inputs)
        loss = loss_func(outputs, targets)
        accelerator.backward(loss)
        optimizer.step()
```
In 🤗 Accelerate, to make this an API that can be called no matter the training device (though it may not do anything if you are not in a distributed system!),
`ddp_model.no_sync` gets replaced with [`~Accelerator.no_sync`] and operates the same way:
```diff
ddp_model, dataloader, optimizer = accelerator.prepare(model, dataloader, optimizer)

for index, batch in enumerate(dataloader):
    inputs, targets = batch
    # Trigger gradient synchronization on the last batch
    if index != (len(dataloader) - 1):
-       with ddp_model.no_sync():
+       with accelerator.no_sync(model):
            # Gradients only accumulate
            outputs = ddp_model(inputs)
            loss = loss_func(outputs, targets)
            accelerator.backward(loss)
    else:
        # Gradients finally sync
        outputs = ddp_model(inputs)
        loss = loss_func(outputs, targets)
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
```
As you may expect, the [`~Accelerator.accumulate`] function wraps around this conditional check by keeping track of the current batch number, leaving you with the final
gradient accumulation API:
```python
ddp_model, dataloader, optimizer = accelerator.prepare(model, dataloader, optimizer)

for batch in dataloader:
    with accelerator.accumulate(model):
        optimizer.zero_grad()
        inputs, targets = batch
        outputs = model(inputs)
        loss = loss_function(outputs, targets)
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
```
As a result, when it comes to API choice, you should use either `accelerator.accumulate` or `accelerator.no_sync`.
## Just how much of a slowdown is there, and easy mistakes you can make
To set up a realistic example, consider the following setup:
* Two single-GPU T4 nodes and one node with two GPUs
* Each GPU is a T4, and are hosted on GCP
* The script used is a modification of the [NLP Example](https://github.com/muellerzr/timing_experiments/blob/main/baseline.py) script
* Batch size per GPU is 16, and gradients are accumulated every 4 steps
All scripts are available in [this repository](https://github.com/muellerzr/timing_experiments).
If not careful about gradient synchronization and GPU communication, a *large* amount of time can be wasted
from when these GPUs communicate to each other during unnecessary periods.
By how much?
Reference:
- Baseline: uses no synchronization practices discussed here
- `no_sync` improperly: `no_sync` only around the `backward` call, not the `forward`
- `no_sync`: using the `no_sync` pattern properly
- `accumulate`: using [`~Accelerator.accumulate`] properly
Below are the average seconds per batch iterating over 29 batches of data for each setup on both a single node and on the dual-node setup:
| | Baseline | `no_sync` improperly | `no_sync` | `accumulate`|
| :---------: | :-------: | :------------------: | :-------: | :---------: |
| Multi-Node | 2±0.01s | 2.13±0.08s | **0.91±0.11s** | **0.91±0.11s** |
| Single Node | 0.50±0.01s | 0.50±0.01s | **0.41±0.015s** | **0.41±0.015s** |
As you can see, if you are not careful about how you set up your gradient synchronization, you can get more than a 2x slowdown during training!
If you are worried about making sure everything is done properly, we highly recommend utilizing the [`~Accelerator.accumulate`] function and passing in
`gradient_accumulation_steps` or `gradient_accumulation_plugin` to the [`Accelerator`] object so Accelerate can handle this for you.
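For example (a minimal sketch), declaring the accumulation steps up front is enough for [`~Accelerator.accumulate`] to decide when gradients should actually synchronize:
```python
from accelerate import Accelerator

# Gradients will only be synchronized (and `optimizer.step()` actually applied) every 4 batches
accelerator = Accelerator(gradient_accumulation_steps=4)
```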
### `no_sync` requires additional GPU memory when using FSDP
Be aware that not syncing gradients can have adverse effects while performing FSDP training. As it has been warned in `torch`, the [`no_sync` context manager for FSDP](https://pytorch.org/docs/stable/fsdp.html#torch.distributed.fsdp.FullyShardedDataParallel.no_sync) will require additional memory.
Therefore, in memory-intensive situations while using FSDP, we recommend setting `sync_each_batch` to `True` in the [`~utils.GradientAccumulationPlugin`] to disable `no_sync`.
See the example below where we fine-tune Mixtral (47B parameters) on 8 A100-80GB GPUs. We see that even for a modest `gradient_accumulation_steps=2` we quickly go out-of-memory (OOM) if `no_sync` is enabled. Again, this is due to additional memory overheads due to FSDP's `no_sync`. However, if `no_sync` is disabled via `sync_each_batch=True`, then the memory consumption for `gradient_accumulation_steps=16` reverts to that of `gradient_accumulation_steps=1`.
| Model | `no_sync` (accum=1) | `no_sync` (accum=2) | `no_sync` disabled (accum=16) |
| :-------------: | :-----------------: | :-----------------: | :---------------------------: |
| mixtral 8x7B | 69G | OOM | 69G |
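A minimal sketch of how to opt into this behavior (assuming 16 accumulation steps, as in the table above):
```python
from accelerate import Accelerator
from accelerate.utils import GradientAccumulationPlugin

# `sync_each_batch=True` disables `no_sync`, trading some speed for FSDP's lower memory footprint
plugin = GradientAccumulationPlugin(num_steps=16, sync_each_batch=True)
accelerator = Accelerator(gradient_accumulation_plugin=plugin)
```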
> [!WARNING]
> Disabling `no_sync` means there _will be a slowdown_ due to the extra data syncs, as explained in the earlier sections of this guide.


@ -0,0 +1,72 @@
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# 🤗 Accelerate's internal mechanisms
Internally, 🤗 Accelerate works by first analyzing the environment in which the script is launched to determine which
kind of distributed setup is used, how many different processes there are and which one the current script is in. All
that information is stored in the [`~AcceleratorState`].
This class is initialized the first time you instantiate an [`~Accelerator`] and performs any
specific initialization your distributed setup needs. Its state is then uniquely shared through all instances of
[`~state.AcceleratorState`]. (The same can also be done with the [`PartialState`], a more barebones version it inherits.)
Then, when calling [`~Accelerator.prepare`], the library:
- wraps your model(s) in the container adapted for the distributed setup,
- wraps your optimizer(s) in an [`~optimizer.AcceleratedOptimizer`],
- wraps your scheduler(s) in an [`~scheduler.AcceleratedScheduler`]
- creates a new version of your dataloader(s) in a [`~data_loader.DataLoaderShard`] or [`~data_loader.DataLoaderDispatcher`]
While the model(s), optimizer(s), and scheduler(s) are just put in simple wrappers, the dataloader(s) are re-created. This is mostly
because PyTorch does not let the user change the `batch_sampler` of a dataloader once it's been created and the
library handles the sharding of your data between processes by changing that `batch_sampler` to yield every other
`num_processes` batches (if enabled).
The [`~data_loader.DataLoaderShard`] subclasses `DataLoader` to add the following functionality:
- it synchronizes the appropriate random number generator of all processes at each new iteration, to ensure any
randomization (like shuffling) is done the exact same way across processes.
- it puts the batches on the proper device before yielding them (unless you have opted out of
`device_placement=True`).
The [`~data_loader.DataLoaderDispatcher`] subclass differs from the [`~data_loader.DataLoaderShard`] in that when iterating through the `DataLoader`, the data all starts from process 0 and is *then* split and sent off to each process, rather than this happening at the dataset level.
The random number generator synchronization will by default synchronize:
- the `generator` attribute of a given sampler (like the PyTorch `RandomSampler`) for PyTorch >= 1.6
- the main random number generator in PyTorch <=1.5.1
You can choose which random number generator(s) to synchronize with the `rng_types` argument of the main
[`Accelerator`]. In PyTorch >= 1.6, it is recommended to rely on a local `generator` to avoid
setting the same seed in the main random number generator in all processes.
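For instance (a small sketch), to only synchronize the sampler's local `generator` and leave the global torch RNG untouched:
```python
from accelerate import Accelerator

# Only the `generator` attribute of samplers will be synchronized across processes
accelerator = Accelerator(rng_types=["generator"])
```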
<Tip warning={true}>
Synchronization of the main torch (or CUDA or XLA) random number generator will affect any other potential random
artifacts you could have in your dataset (like random data augmentation) in the sense that all processes will get
the same random numbers from the torch random modules (so will apply the same random data augmentation if it's
controlled by torch).
</Tip>
<Tip>
The randomization part of your custom sampler, batch sampler or iterable dataset should be done using a local
`torch.Generator` object (in PyTorch >= 1.6), see the traditional `RandomSampler`, as an example.
</Tip>
For more details about the internals, see the [Internals page](package_reference/torch_wrappers).


@ -0,0 +1,74 @@
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Low Precision Training Methods
The release of new kinds of hardware led to the emergence of new training paradigms that better utilize them. Currently, this is in the form of training
in 8-bit precision using packages such as [TransformersEngine](https://github.com/NVIDIA/TransformerEngine) (TE) or [MS-AMP](https://github.com/Azure/MS-AMP/tree/main).
For an introduction to the topics discussed today, we recommend reviewing the [low-precision usage guide](../usage_guides/low_precision_training.md) as this documentation will reference it regularly.
## A Quick Chart
Below is a quick chart from the MS-AMP documentation showing the different bit-precisions for each solution during training:
Optimization Level | Computation(GEMM) | Comm | Weight | Master Weight | Weight Gradient | Optimizer States
-- | -- | -- | -- | -- | -- | --
FP16 AMP | FP16 | FP32 | FP32 | N/A | FP32 | FP32+FP32
Nvidia TE | FP8 | FP32 | FP32 | N/A | FP32 | FP32+FP32
MS-AMP O1 | FP8 | FP8 | FP16 | N/A | FP8 | FP32+FP32
MS-AMP O2 | FP8 | FP8 | FP16 | N/A | FP8 | FP8+FP16
MS-AMP O3 | FP8 | FP8 | FP8 | FP16 | FP8 | FP8+FP16
## `TransformersEngine`
`TransformersEngine` is the first solution for trying to train in 8-bit floating point. It works by using drop-in replacements for certain layers in a model that utilize its FP8 engine to reduce the number of bits (such as 32 to 8) without degrading the final accuracy of the model.
Specifically, 🤗 Accelerate will find and replace the following layers with `TransformersEngine` versions:
* `nn.LayerNorm` for `te.LayerNorm`
* `nn.Linear` for `te.Linear`
As a result we wind up with a model that has most of its layers in BF16, while some layers are in FP8 reducing some of the memory.
Anecdotally, we have noticed that performance gains don't really start showing when using `TransformerEngine` until a large majority of the layers
in the model are made up of those two replaceable layers. As a result, only larger models have shown performance improvements, typically when the number of parameters is around or upwards of a few billion.
The `TransformerEngine` can receive many different arguments that customize how it performs FP8 calculations and what they do. A full list of the arguments is available below:
* `margin`: The margin to use for the gradient scaling.
* `interval`: The interval to use for how often the scaling factor is recomputed.
* `fp8_format`: The format to use for the FP8 recipe. Must be one of `E4M3` or `HYBRID`.
* `amax_history_len`: The length of the history to use for the scaling factor computation
* `amax_compute_algo`: The algorithm to use for the scaling factor computation. Must be one of `max` or `most_recent`.
* `override_linear_precision`: Whether or not to execute `fprop`, `dgrad`, and `wgrad` GEMMS in higher precision.
You can customize each of these as part of [`utils.FP8RecipeKwargs`] to help optimize performance of your models.
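As a hedged sketch of how these arguments are passed (the values below are only illustrative), they go through [`utils.FP8RecipeKwargs`] as a kwargs handler on the [`Accelerator`]:
```python
from accelerate import Accelerator
from accelerate.utils import FP8RecipeKwargs

# Request the TransformerEngine backend with a customized FP8 recipe
fp8_kwargs = FP8RecipeKwargs(backend="te", fp8_format="HYBRID", amax_history_len=32, amax_compute_algo="max")
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=[fp8_kwargs])
```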
As the chart mentioned earlier shows, TE simply casts the computation layers into FP8, while everything else is in FP32. As a result, this winds up utilizing the most memory, but does so with the benefit of guaranteeing the least amount of loss in end accuracy during training.
## `MS-AMP`
MS-AMP takes a different approach from `TransformersEngine` by providing three different optimization levels to convert more operations to FP8 or FP16.
* The base optimization level (`O1`), passes communications of the weights (such as in DDP) in FP8, stores the weights of the model in FP16, and leaves the optimizer states in FP32. The main benefit of this optimization level is that we can reduce the communication bandwidth by essentially half. Additionally, more GPU memory is saved due to 1/2 of everything being cast in FP8, and the weights being cast to FP16. Notably, both the optimizer states remain in FP32.
* The second optimization level (`O2`) improves upon this by also reducing the precision of the optimizer states. One is in FP8 while the other is in FP16. Generally, this has been shown to provide a net gain of no degraded end accuracy, increased training speed, and reduced memory, as now every state is either in FP16 or FP8.
* Finally, MS-AMP has a third optimization level (`O3`) which helps during DDP scenarios such as DeepSpeed. The weights of the model in memory are fully cast to FP8, and the master weights are now stored in FP16. This reduces memory by the highest factor, as now almost everything is in FP8 and only two states are left in FP16. Currently, only DeepSpeed versions up through 0.9.2 are supported, so this capability is not included in the 🤗 Accelerate integration.
## Combining the two
More experiments need to be performed but it's been noted that combining both MS-AMP and TransformersEngine can lead to the highest throughput by relying on NVIDIA's optimized FP8 operators and utilizing how MS-AMP reduces the memory overhead.


@ -0,0 +1,103 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Comparing performance between different device setups
Evaluating and comparing the performance from different setups can be quite tricky if you don't know what to look for.
For example, you cannot run the same script with the same batch size across TPU, multi-GPU, and single-GPU with Accelerate
and expect your results to line up.
But why?
There are three reasons for this that this tutorial will cover:
1. **Setting the right seeds**
2. **Observed Batch Sizes**
3. **Learning Rates**
## Setting the Seed
While this issue has not come up as much, make sure to use [`utils.set_seed`] to fully set the seed in all distributed cases so training will be reproducible:
```python
from accelerate.utils import set_seed
set_seed(42)
```
Why is this important? Under the hood this will set **5** different seed settings:
```python
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
# ^^ safe to call this function even if cuda is not available
if is_torch_xla_available():
    xm.set_rng_state(seed)
```
These are the Python `random` state, NumPy's state, torch's state, torch's CUDA state, and, if TPUs are available, torch_xla's state.
## Observed Batch Sizes
When training with Accelerate, the batch size passed to the dataloader is the **batch size per GPU**. What this entails is
that a batch size of 64 on two GPUs is truly a batch size of 128. As a result, this needs to be accounted for when testing on a single GPU,
and similarly for TPUs.
The below table can be used as a quick reference to try out different batch sizes:
<Tip>
In this example, there are two GPUs for "Multi-GPU" and a TPU pod with 8 workers
</Tip>
| Single GPU Batch Size | Multi-GPU Equivalent Batch Size | TPU Equivalent Batch Size |
|-----------------------|---------------------------------|---------------------------|
| 256 | 128 | 32 |
| 128 | 64 | 16 |
| 64 | 32 | 8 |
| 32 | 16 | 4 |
## Learning Rates
As noted in multiple sources[[1](https://aws.amazon.com/blogs/machine-learning/scalable-multi-node-deep-learning-training-using-gpus-in-the-aws-cloud/)][[2](https://docs.nvidia.com/clara/clara-train-sdk/pt/model.html#classification-models-multi-gpu-training)], the learning rate should be scaled *linearly* based on the number of devices present. The below
snippet shows doing so with Accelerate:
<Tip>
Since users can have their own learning rate schedulers defined, we leave this up to the user to decide if they wish to scale their
learning rate or not.
</Tip>
```python
learning_rate = 1e-3
accelerator = Accelerator()
learning_rate *= accelerator.num_processes
optimizer = AdamW(params=model.parameters(), lr=learning_rate)
```
You will also find that `accelerate` will step the learning rate based on the number of processes being trained on. This is because
of the observed batch size noted earlier. So in the case of 2 GPUs, the learning rate will be stepped twice as often as a single GPU
to account for the batch size being twice as large (if no changes to the batch size on the single GPU instance are made).
## Gradient Accumulation and Mixed Precision
When using gradient accumulation and mixed precision, due to how gradient averaging works (accumulation) and the precision loss (mixed precision),
some degradation in performance is expected. This will be explicitly seen when comparing the batch-wise loss between different compute
setups. However, the overall loss, metric, and general performance at the end of training should be _roughly_ the same.


@ -0,0 +1,167 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Training on TPUs with 🤗 Accelerate
Training on TPUs can be slightly different from training on multi-gpu, even with 🤗 Accelerate. This guide aims to show you
where you should be careful and why, as well as the best practices in general.
## Training in a Notebook
The main thing to be careful about when training on TPUs comes from the [`notebook_launcher`]. As mentioned in the [notebook tutorial](../usage_guides/notebook), you need to
restructure your training code into a function that can get passed to the [`notebook_launcher`] function and be careful about not declaring any tensors on the GPU.
While on a TPU that last part is not as important, a critical part to understand is that when you launch code from a notebook you do so through a process called **forking**.
When launching from the command-line, you perform **spawning**, where a python process is not currently running and you *spawn* a new process in. Since your Jupyter notebook is already
utilizing a python process, you need to *fork* a new process from it to launch your code.
Where this becomes important is in regard to declaring your model. On forked TPU processes, it is recommended that you instantiate your model *once* and pass this into your
training function. This is different than training on GPUs where you create `n` models that have their gradients synced and back-propagated at certain moments. Instead, one
model instance is shared between all the nodes and it is passed back and forth. This is important especially when training on low-resource TPUs such as those provided in Kaggle kernels or
on Google Colaboratory.
Below is an example of a training function passed to the [`notebook_launcher`] if training on CPUs or GPUs:
<Tip>
This code snippet is based off the one from the `simple_nlp_example` notebook found [here](https://github.com/huggingface/notebooks/blob/main/examples/accelerate_examples/simple_nlp_example.ipynb) with slight
modifications for the sake of simplicity
</Tip>
```python
def training_function():
    # Initialize accelerator
    accelerator = Accelerator()
    model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
    train_dataloader, eval_dataloader = create_dataloaders(
        train_batch_size=hyperparameters["train_batch_size"], eval_batch_size=hyperparameters["eval_batch_size"]
    )
    # Instantiate optimizer
    optimizer = AdamW(params=model.parameters(), lr=hyperparameters["learning_rate"])
    # Prepare everything
    # There is no specific order to remember, we just need to unpack the objects in the same order we gave them to the
    # prepare method.
    model, optimizer, train_dataloader, eval_dataloader = accelerator.prepare(
        model, optimizer, train_dataloader, eval_dataloader
    )
    num_epochs = hyperparameters["num_epochs"]
    # Now we train the model
    for epoch in range(num_epochs):
        model.train()
        for step, batch in enumerate(train_dataloader):
            outputs = model(**batch)
            loss = outputs.loss
            accelerator.backward(loss)
            optimizer.step()
            optimizer.zero_grad()
```
```python
from accelerate import notebook_launcher
notebook_launcher(training_function)
```
<Tip>
The `notebook_launcher` will default to 8 processes if 🤗 Accelerate has been configured for a TPU
</Tip>
If you use this example and declare the model *inside* the training loop, then on a low-resource system you will potentially see an error
like:
```
ProcessExitedException: process 0 terminated with signal SIGSEGV
```
This error is *extremely* cryptic but the basic explanation is you ran out of system RAM. You can avoid this entirely by reconfiguring the training function to
accept a single `model` argument, and declare it in an outside cell:
```python
# In another Jupyter cell
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
```
```diff
+ def training_function(model):
      # Initialize accelerator
      accelerator = Accelerator()
-     model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
      train_dataloader, eval_dataloader = create_dataloaders(
          train_batch_size=hyperparameters["train_batch_size"], eval_batch_size=hyperparameters["eval_batch_size"]
      )
      ...
```
And finally calling the training function with:
```diff
from accelerate import notebook_launcher
- notebook_launcher(training_function)
+ notebook_launcher(training_function, (model,))
```
<Tip>
The above workaround is only needed when launching a TPU instance from a Jupyter Notebook on a low-resource server such as Google Colaboratory or Kaggle. If
using a script or launching on a much beefier server, declaring the model beforehand is not needed.
</Tip>
## Mixed Precision and Global Variables
As mentioned in the [mixed precision tutorial](../usage_guides/mixed_precision), 🤗 Accelerate supports fp16 and bf16, both of which can be used on TPUs.
That being said, ideally `bf16` should be utilized as it is extremely efficient to use.
There are two "layers" when using `bf16` and 🤗 Accelerate on TPUs, at the base level and at the operation level.
At the base level, this is enabled when passing `mixed_precision="bf16"` to `Accelerator`, such as:
```python
accelerator = Accelerator(mixed_precision="bf16")
```
By default, this will cast `torch.float` and `torch.double` to `bfloat16` on TPUs.
The specific configuration being set is that the `XLA_USE_BF16` environment variable is set to `1`.
There is a further configuration you can perform, which is setting the `XLA_DOWNCAST_BF16` environment variable. If set to `1`, then
`torch.float` is `bfloat16` and `torch.double` is `float32`.
This is performed in the `Accelerator` object when passing `downcast_bf16=True`:
```python
accelerator = Accelerator(mixed_precision="bf16", downcast_bf16=True)
```
Using downcasting instead of bf16 everywhere is good for when you are trying to calculate metrics, log values, and more where raw bf16 tensors would be unusable.
## Training Times on TPUs
As you launch your script, you may notice that training seems exceptionally slow at first. This is because TPUs
first run through a few batches of data to see how much memory to allocate before finally utilizing this configured
memory allocation extremely efficiently.
If you notice that your evaluation code to calculate the metrics of your model takes longer due to a larger batch size being used,
it is recommended to keep the batch size the same as the training data if it is too slow. Otherwise the memory will reallocate to this
new batch size after the first few iterations.
<Tip>
Just because the memory is allocated does not mean it will be used or that the batch size will increase when going back to your training dataloader.
</Tip>


@ -1,120 +0,0 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Fully Sharded Data Parallel
To accelerate training huge models on larger batch sizes, we can use a fully sharded data parallel model.
This type of data parallel paradigm enables fitting more data and larger models by sharding the optimizer states, gradients and parameters.
To read more about it and the benefits, check out the [Fully Sharded Data Parallel blog](https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/).
We have integrated the latest PyTorch's Fully Sharded Data Parallel (FSDP) training feature.
All you need to do is enable it through the config.
## How it works out of the box
On your machine(s) just run:
```bash
accelerate config
```
and answer the questions asked. This will generate a config file that will be used automatically to properly set the
default options when doing
```bash
accelerate launch my_script.py --args_to_my_script
```
For instance, here is how you would run the NLP example (from the root of the repo) with FSDP enabled:
```bash
compute_environment: LOCAL_MACHINE
deepspeed_config: {}
distributed_type: FSDP
fsdp_config:
min_num_params: 2000
offload_params: false
sharding_strategy: 1
machine_rank: 0
main_process_ip: null
main_process_port: null
main_training_function: main
mixed_precision: 'no'
num_machines: 1
num_processes: 2
use_cpu: false
```
```bash
accelerate launch examples/nlp_example.py
```
Currently, `Accelerate` supports the following config through the CLI:
```bash
`Sharding Strategy`: [1] FULL_SHARD, [2] SHARD_GRAD_OP
`Min Num Params`: FSDP\'s minimum number of parameters for Default Auto Wrapping.
`Offload Params`: Decides Whether to offload parameters and gradients to CPU.
```
## Few caveats to be aware of
- PyTorch FSDP auto wraps sub-modules, flattens the parameters and shards the parameters in place.
Due to this, any optimizer created before model wrapping gets broken and occupies more memory.
Hence, it is highly recommended and efficient to prepare the model before creating the optimizer.
`Accelerate` will automatically wrap the model and create an optimizer for you in the case of a single model, with a warning message.
> FSDP Warning: When using FSDP, it is efficient and recommended to call prepare for the model before creating the optimizer
However, below is the recommended way to prepare model and optimizer while using FSDP:
```diff
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", return_dict=True)
+ model = accelerator.prepare(model)
optimizer = torch.optim.AdamW(params=model.parameters(), lr=lr)
- model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(model,
- optimizer, train_dataloader, eval_dataloader, lr_scheduler
- )
+ optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
+ optimizer, train_dataloader, eval_dataloader, lr_scheduler
+ )
```
- In the case of a single model, if you have created the optimizer with multiple parameter groups and called prepare with them together,
then the parameter groups will be lost and the following warning is displayed:
> FSDP Warning: When using FSDP, several parameter groups will be conflated into
> a single one due to nested module wrapping and parameter flattening.
This is because parameter groups created before wrapping will have no meaning post wrapping, due to parameter flattening of nested FSDP modules into 1D arrays (which can consume many layers).
For instance, below are the named parameters of the FSDP model on GPU 0 (when using 2 GPUs, around 55M (110M/2) params are in 1D arrays, as this rank holds the 1st shard of the parameters).
Here, if one has applied no weight decay to the [bias, LayerNorm.weight] named parameters of the unwrapped BERT model,
it can't be applied to the below FSDP-wrapped model, as there are no named parameters with either of those strings and
the parameters of those layers are concatenated with the parameters of various other layers.
```
{
'_fsdp_wrapped_module.flat_param': torch.Size([494209]),
'_fsdp_wrapped_module._fpw_module.bert.embeddings.word_embeddings._fsdp_wrapped_module.flat_param': torch.Size([11720448]),
'_fsdp_wrapped_module._fpw_module.bert.encoder._fsdp_wrapped_module.flat_param': torch.Size([42527232])
}
```
- In the case of multiple models, it is necessary to prepare the models before creating the optimizers, otherwise an error will be thrown (see the sketch after this list).
- Mixed precision is currently not supported with FSDP.
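A minimal sketch of that multi-model flow (illustrative names; as in the snippets above, `accelerator`, the models, `torch`, and `lr` are assumed to already exist in scope):
```python
# Sketch only: prepare the models first, then build and prepare their optimizers.
model_a = accelerator.prepare(model_a)
model_b = accelerator.prepare(model_b)

optimizer_a = torch.optim.AdamW(params=model_a.parameters(), lr=lr)
optimizer_b = torch.optim.AdamW(params=model_b.parameters(), lr=lr)
optimizer_a, optimizer_b = accelerator.prepare(optimizer_a, optimizer_b)
```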
For more control, users can leverage the `FullyShardedDataParallelPlugin` wherein they can specify `auto_wrap_policy`, `backward_prefetch` and `ignored_modules`.
After creating an instance of this class, users can pass it to the Accelerator class instantiation.
For more information on these options, please refer to the PyTorch [FullyShardedDataParallel](https://github.com/pytorch/pytorch/blob/0df2e863fbd5993a7b9e652910792bd21a516ff3/torch/distributed/fsdp/fully_sharded_data_parallel.py#L236) code.
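A minimal sketch of that pattern (the keyword values below are placeholders and would normally be replaced with your own wrap policy, prefetch setting, or modules to ignore):
```python
from accelerate import Accelerator, FullyShardedDataParallelPlugin

fsdp_plugin = FullyShardedDataParallelPlugin(
    auto_wrap_policy=None,   # e.g. a size- or transformer-based auto wrap policy
    backward_prefetch=None,  # e.g. torch.distributed.fsdp.BackwardPrefetch.BACKWARD_PRE
    ignored_modules=None,
)
accelerator = Accelerator(fsdp_plugin=fsdp_plugin)
```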
[[autodoc]] utils.FullyShardedDataParallelPlugin

docs/source/index.md Normal file

@ -0,0 +1,74 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Accelerate
🤗 Accelerate is a library that enables the same PyTorch code to be run across any distributed configuration by adding just four lines of code! In short, training and inference at scale made simple, efficient and adaptable.
```diff
+ from accelerate import Accelerator
+ accelerator = Accelerator()
+ model, optimizer, training_dataloader, scheduler = accelerator.prepare(
+ model, optimizer, training_dataloader, scheduler
+ )
for batch in training_dataloader:
optimizer.zero_grad()
inputs, targets = batch
inputs = inputs.to(device)
targets = targets.to(device)
outputs = model(inputs)
loss = loss_function(outputs, targets)
+ accelerator.backward(loss)
optimizer.step()
scheduler.step()
```
Built on `torch_xla` and `torch.distributed`, 🤗 Accelerate takes care of the heavy lifting, so you don't have to write any custom code to adapt to these platforms.
Convert existing codebases to utilize [DeepSpeed](usage_guides/deepspeed), perform [fully sharded data parallelism](usage_guides/fsdp), and have automatic support for mixed-precision training!
<Tip>
To get a better idea of this process, make sure to check out the [Tutorials](basic_tutorials/overview)!
</Tip>
This code can then be launched on any system through Accelerate's CLI interface:
```bash
accelerate launch {my_script.py}
```
<div class="mt-10">
<div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5">
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./basic_tutorials/overview"
><div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Tutorials</div>
<p class="text-gray-700">Learn the basics and become familiar with using 🤗 Accelerate. Start here if you are using 🤗 Accelerate for the first time!</p>
</a>
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./usage_guides/explore"
><div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">How-to guides</div>
<p class="text-gray-700">Practical guides to help you achieve a specific goal. Take a look at these guides to learn how to use 🤗 Accelerate to solve real-world problems.</p>
</a>
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./concept_guides/gradient_synchronization"
><div class="w-full text-center bg-gradient-to-br from-pink-400 to-pink-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Conceptual guides</div>
<p class="text-gray-700">High-level explanations for building a better understanding of important topics such as avoiding subtle nuances and pitfalls in distributed training and DeepSpeed.</p>
</a>
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./package_reference/accelerator"
><div class="w-full text-center bg-gradient-to-br from-purple-400 to-purple-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Reference</div>
<p class="text-gray-700">Technical descriptions of how 🤗 Accelerate classes and methods work.</p>
</a>
</div>
</div>


@ -1,132 +0,0 @@
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Accelerate
Run your *raw* PyTorch training script on any kind of device
## Features
- 🤗 Accelerate provides an easy API to make your scripts run with mixed precision and on any kind of distributed
setting (multi-GPU, TPU, etc.) while still letting you write your own training loop. The same code can then run
seamlessly on your local machine for debugging or your training environment.
- 🤗 Accelerate also provides a CLI tool that allows you to quickly configure and test your training environment then
launch the scripts.
## Easy to integrate
A traditional training loop in PyTorch looks like this:
```python
my_model.to(device)
for batch in my_training_dataloader:
my_optimizer.zero_grad()
inputs, targets = batch
inputs = inputs.to(device)
targets = targets.to(device)
outputs = my_model(inputs)
loss = my_loss_function(outputs, targets)
loss.backward()
my_optimizer.step()
```
Changing it to work with accelerate is really easy and only adds a few lines of code:
```diff
+ from accelerate import Accelerator
+ accelerator = Accelerator()
# Use the device given by the *accelerator* object.
+ device = accelerator.device
my_model.to(device)
# Pass every important object (model, optimizer, dataloader) to *accelerator.prepare*
+ my_model, my_optimizer, my_training_dataloader = accelerator.prepare(
+ my_model, my_optimizer, my_training_dataloader
+ )
for batch in my_training_dataloader:
my_optimizer.zero_grad()
inputs, targets = batch
inputs = inputs.to(device)
targets = targets.to(device)
outputs = my_model(inputs)
loss = my_loss_function(outputs, targets)
# Just a small change for the backward instruction
- loss.backward()
+ accelerator.backward(loss)
my_optimizer.step()
```
and with this, your script can now run in a distributed environment (multi-GPU, TPU).
You can even simplify your script a bit by letting 🤗 Accelerate handle the device placement for you (which is safer,
especially for TPU training):
```diff
+ from accelerate import Accelerator
+ accelerator = Accelerator()
- my_model.to(device)
# Pass every important object (model, optimizer, dataloader) to *accelerator.prepare*
+ my_model, my_optimizer, my_training_dataloader = accelerator.prepare(
+ my_model, my_optimizer, my_training_dataloader
+ )
for batch in my_training_dataloader:
my_optimizer.zero_grad()
inputs, targets = batch
- inputs = inputs.to(device)
- targets = targets.to(device)
outputs = my_model(inputs)
loss = my_loss_function(outputs, targets)
# Just a small change for the backward instruction
- loss.backward()
+ accelerator.backward(loss)
my_optimizer.step()
```
## Script launcher
No need to remember how to use `torch.distributed.launch` or to write a specific launcher for TPU training! 🤗
Accelerate comes with a CLI tool that will make your life easier when launching distributed scripts.
On your machine(s) just run:
```bash
accelerate config
```
and answer the questions asked. This will generate a config file that will be used automatically to properly set the
default options when doing
```bash
accelerate launch my_script.py --args_to_my_script
```
For instance, here is how you would run the NLP example (from the root of the repo):
```bash
accelerate launch examples/nlp_example.py
```
## Supported integrations
- CPU only
- single GPU
- multi-GPU on one node (machine)
- multi-GPU on several nodes (machines)
- TPU
- FP16 with native AMP (apex on the roadmap)
- DeepSpeed (experimental support)


@ -1,96 +0,0 @@
<!---
Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
# Installation
🤗 Accelerate is tested on Python 3.6+ and PyTorch 1.6.0+.
You should install 🤗 Accelerate in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're
unfamiliar with Python virtual environments, check out the [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/). Create a virtual environment with the version of Python you're going
to use and activate it.
Now, if you want to use 🤗 Accelerate, you can install it with pip.
## Installation with pip
First you need to install PyTorch. Please refer to the
[PyTorch installation page](https://pytorch.org/get-started/locally/#start-locally) regarding the specific install command for your platform.
When PyTorch has been installed, 🤗 Accelerate can be installed using pip as follows:
```bash
pip install accelerate
```
Alternatively, for CPU support only, you can install 🤗 Accelerate and PyTorch in one line with:
```bash
pip install accelerate[torch]
```
To check 🤗 Accelerate is properly installed, run the following command:
```bash
python -c "TODO write"
```
## Installing from source
Here is how to quickly install `accelerate` from source:
```bash
pip install git+https://github.com/huggingface/accelerate
```
Note that this will not install the latest released version, but the bleeding edge `main` version, which you may want to use in case a bug has been fixed since the last official release and a new release hasn't yet been rolled out.
While we strive to keep `main` operational at all times, if you notice some issues, they usually get fixed within a few hours or a day, and you're more than welcome to help us detect any problems by opening an [Issue](https://github.com/huggingface/accelerate/issues); this way, things will get fixed even sooner.
Again, you can run:
```bash
python -c "TODO write"
```
to check 🤗 Accelerate is properly installed.
## Editable install
If you want to constantly use the bleeding edge `main` version of the source code, or if you want to contribute to the library and need to test the changes in the code you're making, you will need an editable install. This is done by cloning the repository and installing with the following commands:
```bash
git clone https://github.com/huggingface/accelerate.git
cd accelerate
pip install -e .
```
This command performs a magical link between the folder you cloned the repository to and your python library paths, and it'll look inside this folder in addition to the normal library-wide paths. So if normally your python packages get installed into:
```
~/anaconda3/envs/main/lib/python3.7/site-packages/
```
now this editable install will reside where you clone the folder to, e.g. `~/accelerate/` and python will search it too.
Do note that you have to keep that `accelerate` folder around and not delete it to continue using the 🤗 Accelerate library.
Now, let's get to the real benefit of this installation approach. Say you saw a new feature that was just committed into `main`. If you have already performed all the steps above, to update your accelerate repo to include all the latest commits, all you need to do is `cd` into the cloned repository folder and update the clone to the latest version:
```bash
cd ~/accelerate/
git pull
```
There is nothing else to do. Your python environment will find the bleeding edge version of 🤗 Accelerate on the next run.


@ -1,28 +0,0 @@
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Notebook Launcher
Launch your training function inside a notebook. Currently supports launching training with TPUs on [Google
Colab](https://colab.research.google.com/) and [Kaggle kernels](https://www.kaggle.com/code), as well as training on
several GPUs (if the machine on which you are running your notebook has them).
An example can be found in [this notebook](https://github.com/huggingface/notebooks/blob/master/examples/accelerate/simple_nlp_example.ipynb).
<Tip warning={true}>
Your `Accelerator` object should only be defined inside the training function. This is because the
initialization should be done inside the launcher only.
</Tip>
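A minimal sketch of how the launcher is typically called (the training function body and its arguments are placeholders):
```python
from accelerate import notebook_launcher

def training_function(mixed_precision):
    # The Accelerator (and the rest of the training loop) is created in here.
    ...

# Forward args to training_function and spawn 2 processes (e.g. 2 GPUs).
notebook_launcher(training_function, args=("fp16",), num_processes=2)
```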
[[autodoc]] notebook_launcher


@ -1,51 +0,0 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Memory Utilities
One of the most frustrating errors when it comes to running training scripts is hitting "CUDA Out-of-Memory",
as the entire script needs to be restarted, progress is lost, and typically a developer would want to simply
start their script and let it run.
`Accelerate` provides a utility heavily based on [toma](https://github.com/BlackHC/toma) to give this capability.
## find_executable_batch_size
This algorithm operates with exponential decay, halving the batch size after each failed run of the training function. To use it, restructure your training function to include an inner function wrapped with this decorator,
and build your dataloaders inside it. At a minimum, this requires about 4 new lines of code.
> Note: The inner function *must* take the batch size as its first parameter, but we do not pass one to it when it is called. The wrapper handles this for us.
```diff
def training_function(args):
accelerator = Accelerator()
model = get_model()
model.to(accelerator.device)
optimizer = get_optimizer()
+ @find_executable_batch_size(starting_batch_size=args.batch_size)
+ def inner_training_loop(batch_size):
+ nonlocal model, optimizer # Ensure they can be used in our context
train_dataloader, eval_dataloader = get_dataloaders(accelerator, batch_size)
lr_scheduler = get_scheduler(
optimizer,
num_training_steps=len(train_dataloader)*num_epochs
)
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
)
train(model, optimizer, train_dataloader, lr_scheduler)
validate(model, eval_dataloader)
+ inner_training_loop()
```
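To make the halving behaviour concrete, here is a tiny self-contained sketch; the raised `RuntimeError` merely simulates an out-of-memory failure, and the import path assumes the utility is exposed under `accelerate.utils` (older releases exposed it as `accelerate.memory_utils`):
```python
from accelerate.utils import find_executable_batch_size

@find_executable_batch_size(starting_batch_size=128)
def run(batch_size):
    # Pretend anything above 16 samples does not fit in memory.
    if batch_size > 16:
        raise RuntimeError("CUDA out of memory.")
    print(f"Training succeeded with batch_size={batch_size}")

run()  # tries 128, 64, 32, then succeeds at 16
```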
[[autodoc]] memory_utils.find_executable_batch_size


@ -0,0 +1,26 @@
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Accelerator
The [`Accelerator`] is the main class for enabling distributed training on any type of training setup. Read the [Add Accelerator to your code](../basic_tutorials/migration) tutorial to learn more about how to add the [`Accelerator`] to your script.
## Accelerator[[api]]
[[autodoc]] Accelerator
## Utilities
[[autodoc]] accelerate.utils.gather_object
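A quick sketch of the utility listed above, as it would behave under a distributed launch:
```python
from accelerate import Accelerator
from accelerate.utils import gather_object

accelerator = Accelerator()

# Each process contributes its own list; every process receives the concatenation.
answers = gather_object([f"answer from rank {accelerator.process_index}"])
if accelerator.is_main_process:
    print(answers)
```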


@ -0,0 +1,47 @@
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Working with large models
## Dispatching and Offloading Models
[[autodoc]] big_modeling.init_empty_weights
[[autodoc]] big_modeling.cpu_offload
[[autodoc]] big_modeling.cpu_offload_with_hook
[[autodoc]] big_modeling.disk_offload
[[autodoc]] big_modeling.dispatch_model
[[autodoc]] big_modeling.load_checkpoint_and_dispatch
[[autodoc]] big_modeling.load_checkpoint_in_model
[[autodoc]] utils.infer_auto_device_map
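A typical flow combines these utilities: instantiate the model skeleton without allocating weights, then load and dispatch a checkpoint across the available devices. This is only a sketch; the model class, config, and checkpoint path are placeholders.
```python
from accelerate import init_empty_weights, load_checkpoint_and_dispatch

with init_empty_weights():
    model = MyModel(config)  # placeholder model class and config

model = load_checkpoint_and_dispatch(
    model,
    checkpoint="path/to/checkpoint",  # placeholder path to saved weights
    device_map="auto",
)
```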
## Model Hooks
### Hook Classes
[[autodoc]] hooks.ModelHook
[[autodoc]] hooks.AlignDevicesHook
[[autodoc]] hooks.SequentialHook
### Adding Hooks
[[autodoc]] hooks.add_hook_to_module
[[autodoc]] hooks.attach_execution_device_hook
[[autodoc]] hooks.attach_align_device_hook
[[autodoc]] hooks.attach_align_device_hook_on_blocks
### Removing Hooks
[[autodoc]] hooks.remove_hook_from_module
[[autodoc]] hooks.remove_hook_from_submodules


@ -0,0 +1,312 @@
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# The Command Line
Below is a list of all the available commands in 🤗 Accelerate with their parameters.
## accelerate config
**Command**:
`accelerate config` or `accelerate-config`
Launches a series of prompts to create and save a `default_config.yml` configuration file for your training system. Should
always be run first on your machine.
**Usage**:
```bash
accelerate config [arguments]
```
**Optional Arguments**:
* `--config_file CONFIG_FILE` (`str`) -- The path to use to store the config file. Will default to a file named default_config.yaml in the cache location, which is the content
of the environment `HF_HOME` suffixed with 'accelerate', or if you don't have such an environment variable, your cache directory
(`~/.cache` or the content of `XDG_CACHE_HOME`) suffixed with `huggingface`.
* `-h`, `--help` (`bool`) -- Show a help message and exit
## accelerate config default
**Command**:
`accelerate config default` or `accelerate-config default`
Create a default config file for Accelerate with only a few flags set.
**Usage**:
```bash
accelerate config default [arguments]
```
**Optional Arguments**:
* `--config_file CONFIG_FILE` (`str`) -- The path to use to store the config file. Will default to a file named default_config.yaml in the cache location, which is the content
of the environment `HF_HOME` suffixed with 'accelerate', or if you don't have such an environment variable, your cache directory
(`~/.cache` or the content of `XDG_CACHE_HOME`) suffixed with `huggingface`.
* `-h`, `--help` (`bool`) -- Show a help message and exit
* `--mixed_precision {no,fp16,bf16}` (`str`) -- Whether or not to use mixed precision training. Choose between FP16 and BF16 (bfloat16) training. BF16 training is only supported on Nvidia Ampere GPUs and PyTorch 1.10 or later.
## accelerate config update
**Command**:
`accelerate config update` or `accelerate-config update`
Update an existing config file with the latest defaults while maintaining the old configuration.
**Usage**:
```bash
accelerate config update [arguments]
```
**Optional Arguments**:
* `--config_file CONFIG_FILE` (`str`) -- The path to the config file to update. Will default to a file named default_config.yaml in the cache location, which is the content
of the environment `HF_HOME` suffixed with 'accelerate', or if you don't have such an environment variable, your cache directory
(`~/.cache` or the content of `XDG_CACHE_HOME`) suffixed with `huggingface`.
* `-h`, `--help` (`bool`) -- Show a help message and exit
## accelerate env
**Command**:
`accelerate env` or `accelerate-env` or `python -m accelerate.commands.env`
Lists the contents of the passed 🤗 Accelerate configuration file. Should always be used when opening an issue on the [GitHub repository](https://github.com/huggingface/accelerate).
**Usage**:
```bash
accelerate env [arguments]
```
**Optional Arguments**:
* `--config_file CONFIG_FILE` (`str`) -- The path to use to store the config file. Will default to a file named default_config.yaml in the cache location, which is the content
of the environment `HF_HOME` suffixed with 'accelerate', or if you don't have such an environment variable, your cache directory
(`~/.cache` or the content of `XDG_CACHE_HOME`) suffixed with `huggingface`.
* `-h`, `--help` (`bool`) -- Show a help message and exit
## accelerate launch
**Command**:
`accelerate launch` or `accelerate-launch` or `python -m accelerate.commands.launch`
Launches a specified script on a distributed system with the right parameters.
**Usage**:
```bash
accelerate launch [arguments] {training_script} --{training_script-argument-1} --{training_script-argument-2} ...
```
**Positional Arguments**:
- `{training_script}` -- The full path to the script to be launched in parallel
- `--{training_script-argument-1}` -- Arguments of the training script
**Optional Arguments**:
* `-h`, `--help` (`bool`) -- Show a help message and exit
* `--config_file CONFIG_FILE` (`str`)-- The config file to use for the default values in the launching script.
* `-m`, `--module` (`bool`) -- Change each process to interpret the launch script as a Python module, executing with the same behavior as 'python -m'.
* `--no_python` (`bool`) -- Skip prepending the training script with 'python' - just execute it directly. Useful when the script is not a Python script.
* `--debug` (`bool`) -- Whether to print out the torch.distributed stack trace when something fails.
* `-q`, `--quiet` (`bool`) -- Silence subprocess errors from the launch stack trace to only show the relevant tracebacks. (Only applicable to DeepSpeed and single-process configurations).
The rest of these arguments are configured through `accelerate config` and are read in from the specified `--config_file` (or default configuration) for their
values. They can also be passed in manually.
**Hardware Selection Arguments**:
* `--cpu` (`bool`) -- Whether or not to force the training on the CPU.
* `--multi_gpu` (`bool`) -- Whether or not this should launch a distributed GPU training.
* `--tpu` (`bool`) -- Whether or not this should launch a TPU training.
* `--ipex` (`bool`) -- Whether or not this should launch an Intel Pytorch Extension (IPEX) training.
**Resource Selection Arguments**:
The following arguments are useful for fine-tuning how available hardware should be used
* `--mixed_precision {no,fp16,bf16}` (`str`) -- Whether or not to use mixed precision training. Choose between FP16 and BF16 (bfloat16) training. BF16 training is only supported on Nvidia Ampere GPUs and PyTorch 1.10 or later.
* `--num_processes NUM_PROCESSES` (`int`) -- The total number of processes to be launched in parallel.
* `--num_machines NUM_MACHINES` (`int`) -- The total number of machines used in this training.
* `--num_cpu_threads_per_process NUM_CPU_THREADS_PER_PROCESS` (`int`) -- The number of CPU threads per process. Can be tuned for optimal performance.
**Training Paradigm Arguments**:
The following arguments are useful for selecting which training paradigm to use.
* `--use_deepspeed` (`bool`) -- Whether or not to use DeepSpeed for training.
* `--use_fsdp` (`bool`) -- Whether or not to use FullyShardedDataParallel for training.
* `--use_megatron_lm` (`bool`) -- Whether or not to use Megatron-LM for training.
* `--use_xpu` (`bool`) -- Whether to use IPEX plugin to speed up training on XPU specifically.
**Distributed GPU Arguments**:
The following arguments are only useful when `multi_gpu` is passed or multi-gpu training is configured through `accelerate config`:
* `--gpu_ids` (`str`) -- What GPUs (by id) should be used for training on this machine as a comma-separated list
* `--same_network` (`bool`) -- Whether all machines used for multinode training exist on the same local network.
* `--machine_rank MACHINE_RANK` (`int`) -- The rank of the machine on which this script is launched.
* `--main_process_ip MAIN_PROCESS_IP` (`str`) -- The IP address of the machine of rank 0.
* `--main_process_port MAIN_PROCESS_PORT` (`int`) -- The port to use to communicate with the machine of rank 0.
* `--rdzv_backend` (`str`) -- The rendezvous method to use, such as "static" or "c10d"
* `--rdzv_conf` (`str`) -- Additional rendezvous configuration (<key1>=<value1>,<key2>=<value2>,...).
* `--max_restarts` (`int`) -- Maximum number of worker group restarts before failing.
* `--monitor_interval` (`float`) -- Interval, in seconds, to monitor the state of workers.
**TPU Arguments**:
The following arguments are only useful when `tpu` is passed or TPU training is configured through `accelerate config`:
* `--main_training_function MAIN_TRAINING_FUNCTION` (`str`) -- The name of the main function to be executed in your script.
* `--downcast_bf16` (`bool`) -- Whether, when using bf16 precision on TPUs, both float and double tensors are cast to bfloat16, or double tensors remain as float32.
**DeepSpeed Arguments**:
The following arguments are only useful when `use_deepspeed` is passed or `deepspeed` is configured through `accelerate config`:
* `--deepspeed_config_file` (`str`) -- DeepSpeed config file.
* `--zero_stage` (`int`) -- DeepSpeed's ZeRO optimization stage.
* `--offload_optimizer_device` (`str`) -- Decides where (none|cpu|nvme) to offload optimizer states.
* `--offload_param_device` (`str`) -- Decides where (none|cpu|nvme) to offload parameters.
* `--gradient_accumulation_steps` (`int`) -- Number of gradient accumulation steps used in your training script.
* `--gradient_clipping` (`float`) -- Gradient clipping value used in your training script.
* `--zero3_init_flag` (`str`) -- Decides whether (true|false) to enable `deepspeed.zero.Init` for constructing massive models. Only applicable with DeepSpeed ZeRO Stage-3.
* `--zero3_save_16bit_model` (`str`) -- Decides whether (true|false) to save 16-bit model weights when using ZeRO Stage-3. Only applicable with DeepSpeed ZeRO Stage-3.
* `--deepspeed_hostfile` (`str`) -- DeepSpeed hostfile for configuring multi-node compute resources.
* `--deepspeed_exclusion_filter` (`str`) -- DeepSpeed exclusion filter string when using a multi-node setup.
* `--deepspeed_inclusion_filter` (`str`) -- DeepSpeed inclusion filter string when using a multi-node setup.
* `--deepspeed_multinode_launcher` (`str`) -- DeepSpeed multi-node launcher to use.
**Fully Sharded Data Parallelism Arguments**:
The following arguments are only useful when `use_fsdp` is passed or Fully Sharded Data Parallelism is configured through `accelerate config`:
* `--fsdp_offload_params` (`str`) -- Decides whether (true|false) to offload parameters and gradients to CPU.
* `--fsdp_min_num_params` (`int`) -- FSDP's minimum number of parameters for Default Auto Wrapping.
* `--fsdp_sharding_strategy` (`int`) -- FSDP's Sharding Strategy.
* `--fsdp_auto_wrap_policy` (`str`) -- FSDP's auto wrap policy.
* `--fsdp_transformer_layer_cls_to_wrap` (`str`) -- Transformer layer class name (case-sensitive) to wrap, e.g., `BertLayer`, `GPTJBlock`, `T5Block` ...
* `--fsdp_backward_prefetch_policy` (`str`) -- FSDP's backward prefetch policy.
* `--fsdp_state_dict_type` (`str`) -- FSDP's state dict type.
* `--fsdp_forward_prefetch` (`str`) -- FSDP forward prefetch.
* `--fsdp_use_orig_params` (`str`) -- If True, allows non-uniform `requires_grad` to be mixed within an FSDP unit.
* `--fsdp_cpu_ram_efficient_loading` (`str`) -- If true, only the first process loads the pretrained model checkpoint while all other processes have empty weights. When using this, `--fsdp_sync_module_states` needs to be True.
* `--fsdp_sync_module_states` (`str`) -- If true, each individually wrapped FSDP unit will broadcast module parameters from rank 0.
**Megatron-LM Arguments**:
The following arguments are only useful when `use_megatron_lm` is passed or Megatron-LM is configured through `accelerate config`:
* `--megatron_lm_tp_degree` (``) -- Megatron-LM's Tensor Parallelism (TP) degree.
* `--megatron_lm_pp_degree` (``) -- Megatron-LM's Pipeline Parallelism (PP) degree.
* `--megatron_lm_num_micro_batches` (``) -- Megatron-LM's number of micro batches when PP degree > 1.
* `--megatron_lm_sequence_parallelism` (``) -- Decides whether (true|false) to enable Sequence Parallelism when TP degree > 1.
* `--megatron_lm_recompute_activations` (``) -- Decides whether (true|false) to enable Selective Activation Recomputation.
* `--megatron_lm_use_distributed_optimizer` (``) -- Decides whether (true|false) to use the distributed optimizer, which shards optimizer state and gradients across Data Parallel (DP) ranks.
* `--megatron_lm_gradient_clipping` (``) -- Megatron-LM's gradient clipping value based on global L2 Norm (0 to disable).
**AWS SageMaker Arguments**:
The following arguments are only useful when training in SageMaker
* `--aws_access_key_id AWS_ACCESS_KEY_ID` (`str`) -- The AWS_ACCESS_KEY_ID used to launch the Amazon SageMaker training job
* `--aws_secret_access_key AWS_SECRET_ACCESS_KEY` (`str`) -- The AWS_SECRET_ACCESS_KEY used to launch the Amazon SageMaker training job
## accelerate estimate-memory
**Command**:
`accelerate estimate-memory` or `accelerate-estimate-memory` or `python -m accelerate.commands.estimate`
Estimates the total vRAM needed to load a particular model hosted on the Hub, with an estimate for training. Requires that `huggingface_hub` be installed.
<Tip>
When performing inference, typically add ≤20% to the result as overall allocation [as referenced here](https://blog.eleuther.ai/transformer-math/). We will have more extensive estimations in the future that will automatically be included in the calculation.
</Tip>
**Usage**:
```bash
accelerate estimate-memory {MODEL_NAME} --library_name {LIBRARY_NAME} --dtypes {dtype_1} {dtype_2} ...
```
**Required Arguments**:
* `MODEL_NAME` (`str`)-- The model name on the Hugging Face Hub
**Optional Arguments**:
* `--library_name {timm,transformers}` (`str`) -- The library the model has an integration with, such as `transformers`, needed only if this information is not stored on the Hub
* `--dtypes {float32,float16,int8,int4}` (`[{float32,float16,int8,int4} ...]`) -- The dtypes to use for the model, must be one (or many) of `float32`, `float16`, `int8`, and `int4`
* `--trust_remote_code` (`bool`) -- Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be passed for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
## accelerate tpu-config
`accelerate tpu-config`
**Usage**:
```bash
accelerate tpu-config [arguments]
```
**Optional Arguments**:
* `-h`, `--help` (`bool`) -- Show a help message and exit
**Config Arguments**:
Arguments that can be configured through `accelerate config`.
* `--config_file` (`str`) -- Path to the config file to use for accelerate.
* `--tpu_name` (`str`) -- The name of the TPU to use. If not specified, will use the TPU specified in the config file.
* `--tpu_zone` (`str`) -- The zone of the TPU to use. If not specified, will use the zone specified in the config file.
**TPU Arguments**:
Arguments for options run inside the TPU.
* `--command_file` (`str`) -- The path to the file containing the commands to run on the pod on startup.
* `--command` (`str`) -- A command to run on the pod. Can be passed multiple times.
* `--install_accelerate` (`bool`) -- Whether to install accelerate on the pod. Defaults to False.
* `--accelerate_version` (`str`) -- The version of accelerate to install on the pod. If not specified, will use the latest pypi version. Specify 'dev' to install from GitHub.
* `--debug` (`bool`) -- If set, will print the command that would be run instead of running it.
## accelerate test
`accelerate test` or `accelerate-test`
Runs `accelerate/test_utils/test_script.py` to verify that 🤗 Accelerate has been properly configured on your system and runs.
**Usage**:
```bash
accelerate test [arguments]
```
**Optional Arguments**:
* `--config_file CONFIG_FILE` (`str`) -- The path to use to store the config file. Will default to a file named default_config.yaml in the cache location, which is the content
of the environment `HF_HOME` suffixed with 'accelerate', or if you don't have such an environment variable, your cache directory
(`~/.cache` or the content of `XDG_CACHE_HOME`) suffixed with `huggingface`.
* `-h`, `--help` (`bool`) -- Show a help message and exit


@ -0,0 +1,28 @@
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Utilities for DeepSpeed
[[autodoc]] utils.DeepSpeedPlugin
[[autodoc]] utils.deepspeed.DummyOptim
[[autodoc]] utils.deepspeed.DummyScheduler
[[autodoc]] utils.deepspeed.DeepSpeedEngineWrapper
[[autodoc]] utils.deepspeed.DeepSpeedOptimizerWrapper
[[autodoc]] utils.deepspeed.DeepSpeedSchedulerWrapper


@ -0,0 +1,20 @@
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Utilities for Fully Sharded Data Parallelism
[[autodoc]] utils.merge_fsdp_weights
[[autodoc]] utils.FullyShardedDataParallelPlugin
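For example, sharded FSDP checkpoints saved with `SHARDED_STATE_DICT` can be merged back into a single checkpoint. A sketch with placeholder paths:
```python
from accelerate.utils import merge_fsdp_weights

# Placeholder paths: the directory holding the sharded weights and the output location.
merge_fsdp_weights("outputs/pytorch_model_fsdp_0", "outputs/merged")
```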


@ -0,0 +1,20 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# The inference API
These docs refer to the [PiPPy](https://github.com/PyTorch/PiPPy) integration.
[[autodoc]] inference.prepare_pippy


@ -8,6 +8,9 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Kwargs Handlers
@ -15,11 +18,22 @@ specific language governing permissions and limitations under the License.
The following objects can be passed to the main [`Accelerator`] to customize how some PyTorch objects
related to distributed training or mixed precision are created.
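For example, a handler can be passed through the `kwargs_handlers` argument to tweak how the underlying `DistributedDataParallel` wrapper is created (a minimal sketch):
```python
from accelerate import Accelerator, DistributedDataParallelKwargs

ddp_kwargs = DistributedDataParallelKwargs(find_unused_parameters=True)
accelerator = Accelerator(kwargs_handlers=[ddp_kwargs])
```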
## AutocastKwargs
[[autodoc]] AutocastKwargs
## DistributedDataParallelKwargs
[[autodoc]] DistributedDataParallelKwargs
## FP8RecipeKwargs
[[autodoc]] utils.FP8RecipeKwargs
## ProfileKwargs
[[autodoc]] utils.ProfileKwargs
## GradScalerKwargs
[[autodoc]] GradScalerKwargs
@ -27,3 +41,7 @@ related to distributed training or mixed precision are created.
## InitProcessGroupKwargs
[[autodoc]] InitProcessGroupKwargs
## KwargsHandler
[[autodoc]] utils.KwargsHandler


@ -0,0 +1,22 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Launchers
Functions for launching training on distributed processes.
[[autodoc]] accelerate.notebook_launcher
[[autodoc]] accelerate.debug_launcher


@ -0,0 +1,21 @@
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Logging with Accelerate
Refer to the [Troubleshooting guide](../usage_guides/troubleshooting#logging) or to the example below to learn
how to use 🤗 Accelerate's logger.
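A short sketch of typical usage (by default, records are only emitted on the main process):
```python
from accelerate.logging import get_logger

logger = get_logger(__name__)

logger.info("My log", main_process_only=True)            # logged once, on the main process
logger.debug("My second log", main_process_only=False)   # logged on every process
```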
[[autodoc]] logging.get_logger


@ -0,0 +1,32 @@
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Utilities for Megatron-LM
[[autodoc]] utils.MegatronLMPlugin
[[autodoc]] utils.MegatronLMDummyScheduler
[[autodoc]] utils.MegatronLMDummyDataLoader
[[autodoc]] utils.AbstractTrainStep
[[autodoc]] utils.GPTTrainStep
[[autodoc]] utils.BertTrainStep
[[autodoc]] utils.T5TrainStep
[[autodoc]] utils.avg_losses_across_data_parallel_group


@ -0,0 +1,28 @@
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Stateful Classes
Below are variations of a [singleton class](https://en.wikipedia.org/wiki/Singleton_pattern) in the sense that all
instances share the same state, which is initialized on the first instantiation.
These classes are immutable and store information about certain configurations or
states.
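For example, a sketch using [`PartialState`] to branch on process information:
```python
from accelerate import PartialState

state = PartialState()
print(f"Process {state.process_index} of {state.num_processes} on {state.device}")

if state.is_main_process:
    # Work that should only happen once, e.g. writing logs or saving files.
    ...
```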
[[autodoc]] state.PartialState
[[autodoc]] state.AcceleratorState
[[autodoc]] state.GradientState


@ -8,62 +8,30 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Internals
# Wrapper classes for torch Dataloaders, Optimizers, and Schedulers
## Optimizer
The internal classes Accelerate uses to prepare objects for distributed training
when calling [`~Accelerator.prepare`].
## Datasets and DataLoaders
[[autodoc]] data_loader.prepare_data_loader
[[autodoc]] data_loader.skip_first_batches
[[autodoc]] data_loader.BatchSamplerShard
[[autodoc]] data_loader.IterableDatasetShard
[[autodoc]] data_loader.DataLoaderShard
[[autodoc]] data_loader.DataLoaderDispatcher
## Optimizers
[[autodoc]] optimizer.AcceleratedOptimizer
## DataLoader
## Schedulers
The main work on your PyTorch `DataLoader` is done by the following function:
[[autodoc]] data_loader.prepare_data_loader
### DataLoaderShard
[[autodoc]] data_loader.DataLoaderShard
### BatchSamplerShard
[[autodoc]] data_loader.BatchSamplerShard
### IterableDatasetShard
[[autodoc]] data_loader.IterableDatasetShard
## Scheduler
[[autodoc]] scheduler.AcceleratedScheduler
## Distributed Config
### AcceleratorState
[[autodoc]] state.AcceleratorState
### DistributedType
[[autodoc]] state.DistributedType
## Tracking
[[autodoc]] tracking.GeneralTracker
## Utilities
[[autodoc]] utils.extract_model_from_parallel
[[autodoc]] utils.gather
[[autodoc]] utils.send_to_device
[[autodoc]] utils.set_seed
[[autodoc]] utils.synchronize_rng_state
[[autodoc]] utils.synchronize_rng_states
[[autodoc]] utils.wait_for_everyone
[[autodoc]] scheduler.AcceleratedScheduler


@ -0,0 +1,35 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Experiment Tracking
## The Base Tracker Class
[[autodoc]] tracking.GeneralTracker
## Integrated Trackers
[[autodoc]] tracking.TensorBoardTracker
- __init__
[[autodoc]] tracking.WandBTracker
- __init__
[[autodoc]] tracking.CometMLTracker
- __init__
[[autodoc]] tracking.AimTracker
- __init__
[[autodoc]] tracking.MLflowTracker
- __init__
[[autodoc]] tracking.ClearMLTracker
- __init__
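A sketch of how these trackers are usually driven through the [`Accelerator`] (which tracker is available depends on the integrations installed; `tensorboard` is just an example here):
```python
from accelerate import Accelerator

accelerator = Accelerator(log_with="tensorboard", project_dir="runs")
accelerator.init_trackers("my_project", config={"learning_rate": 3e-5})

accelerator.log({"train_loss": 0.42}, step=1)

accelerator.end_training()  # flushes and closes the trackers
```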


@ -0,0 +1,246 @@
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Helpful Utilities
Below are a variety of utility functions that 🤗 Accelerate provides, broken down by use-case.
## Constants
Constants used throughout 🤗 Accelerate for reference
The following are constants used when utilizing [`Accelerator.save_state`]
`utils.MODEL_NAME`: `"pytorch_model"`
`utils.OPTIMIZER_NAME`: `"optimizer"`
`utils.RNG_STATE_NAME`: `"random_states"`
`utils.SCALER_NAME`: `"scaler.pt"`
`utils.SCHEDULER_NAME`: `"scheduler"`
The following are constants used when utilizing [`Accelerator.save_model`]
`utils.WEIGHTS_NAME`: `"pytorch_model.bin"`
`utils.SAFE_WEIGHTS_NAME`: `"model.safetensors"`
`utils.WEIGHTS_INDEX_NAME`: `"pytorch_model.bin.index.json"`
`utils.SAFE_WEIGHTS_INDEX_NAME`: `"model.safetensors.index.json"`
## Data Classes
These are basic dataclasses used throughout 🤗 Accelerate and they can be passed in as parameters.
### Standalone
These are standalone dataclasses used for checks, such as the type of distributed system being used
[[autodoc]] utils.ComputeEnvironment
[[autodoc]] utils.DistributedType
[[autodoc]] utils.DynamoBackend
[[autodoc]] utils.LoggerType
[[autodoc]] utils.PrecisionType
[[autodoc]] utils.RNGType
[[autodoc]] utils.SageMakerDistributedType
### Kwargs
These are configurable arguments for specific interactions throughout the PyTorch ecosystem that Accelerate handles under the hood.
[[autodoc]] utils.AutocastKwargs
[[autodoc]] utils.DistributedDataParallelKwargs
[[autodoc]] utils.FP8RecipeKwargs
[[autodoc]] utils.GradScalerKwargs
[[autodoc]] utils.InitProcessGroupKwargs
[[autodoc]] utils.KwargsHandler
## Plugins
These are plugins that can be passed to the [`Accelerator`] object. While they are defined elsewhere in the documentation,
for convenience all of them are available to see here:
[[autodoc]] utils.DeepSpeedPlugin
[[autodoc]] utils.FullyShardedDataParallelPlugin
[[autodoc]] utils.GradientAccumulationPlugin
[[autodoc]] utils.MegatronLMPlugin
[[autodoc]] utils.TorchDynamoPlugin
## Configurations
These are classes which can be configured and passed through to the appropriate integration
[[autodoc]] utils.BnbQuantizationConfig
[[autodoc]] utils.DataLoaderConfiguration
[[autodoc]] utils.ProjectConfiguration
## Environmental Variables
These are environmental variables that can be enabled for different use cases
* `ACCELERATE_DEBUG_MODE` (`str`): Whether to run accelerate in debug mode. More info available [here](../usage_guides/debug.md).
## Data Manipulation and Operations
These include data operations that mimic the same `torch` ops but can be used on distributed processes.
[[autodoc]] utils.broadcast
[[autodoc]] utils.broadcast_object_list
[[autodoc]] utils.concatenate
[[autodoc]] utils.convert_outputs_to_fp32
[[autodoc]] utils.convert_to_fp32
[[autodoc]] utils.gather
[[autodoc]] utils.gather_object
[[autodoc]] utils.listify
[[autodoc]] utils.pad_across_processes
[[autodoc]] utils.recursively_apply
[[autodoc]] utils.reduce
[[autodoc]] utils.send_to_device
[[autodoc]] utils.slice_tensors
## Environment Checks
These functionalities check the state of the current working environment including information about the operating system itself, what it can support, and if particular dependencies are installed.
[[autodoc]] utils.is_bf16_available
[[autodoc]] utils.is_ipex_available
[[autodoc]] utils.is_mps_available
[[autodoc]] utils.is_npu_available
[[autodoc]] utils.is_torch_version
[[autodoc]] utils.is_torch_xla_available
[[autodoc]] utils.is_xpu_available
## Environment Manipulation
[[autodoc]] utils.patch_environment
[[autodoc]] utils.clear_environment
[[autodoc]] utils.write_basic_config
When setting up 🤗 Accelerate for the first time, rather than running `accelerate config`, [`~utils.write_basic_config`] can be used as an alternative for quick configuration.
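A sketch of that quick-setup path:
```python
from accelerate.utils import write_basic_config

# Writes a default config file; mixed precision shown here is just an example choice.
write_basic_config(mixed_precision="fp16")
```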
[[autodoc]] utils.set_numa_affinity
[[autodoc]] utils.environment.override_numa_affinity
## Memory
[[autodoc]] utils.find_executable_batch_size
## Modeling
These utilities relate to interacting with PyTorch models
[[autodoc]] utils.calculate_maximum_sizes
[[autodoc]] utils.compute_module_sizes
[[autodoc]] utils.extract_model_from_parallel
[[autodoc]] utils.get_balanced_memory
[[autodoc]] utils.get_max_layer_size
[[autodoc]] utils.infer_auto_device_map
[[autodoc]] utils.load_checkpoint_in_model
[[autodoc]] utils.load_offloaded_weights
[[autodoc]] utils.load_state_dict
[[autodoc]] utils.offload_state_dict
[[autodoc]] utils.retie_parameters
[[autodoc]] utils.set_module_tensor_to_device
[[autodoc]] utils.shard_checkpoint
## Parallel
These include general utilities that should be used when working in parallel.
[[autodoc]] utils.extract_model_from_parallel
[[autodoc]] utils.save
[[autodoc]] utils.wait_for_everyone
## Random
These utilities relate to setting and synchronizing of all the random states.
[[autodoc]] utils.set_seed
[[autodoc]] utils.synchronize_rng_state
[[autodoc]] utils.synchronize_rng_states
## PyTorch XLA
These include utilities that are useful while using PyTorch with XLA.
[[autodoc]] utils.install_xla
## Loading model weights
These include utilities that are useful to load checkpoints.
[[autodoc]] utils.load_checkpoint_in_model
## Quantization
These include utilities that are useful for quantizing a model.
[[autodoc]] utils.load_and_quantize_model

docs/source/quicktour.md Normal file

@ -0,0 +1,186 @@
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Quicktour
There are many ways to launch and run your code depending on your training environment ([torchrun](https://pytorch.org/docs/stable/elastic/run.html), [DeepSpeed](https://www.deepspeed.ai/), etc.) and available hardware. Accelerate offers a unified interface for launching and training on different distributed setups, allowing you to focus on your PyTorch training code instead of the intricacies of adapting your code to these different setups. This allows you to easily scale your PyTorch code for training and inference on distributed setups with hardware like GPUs and TPUs. Accelerate also provides Big Model Inference to make loading and running inference with really large models that usually don't fit in memory more accessible.
This quicktour introduces the three main features of Accelerate:
* a unified command line launching interface for distributed training scripts
* a training library for adapting PyTorch training code to run on different distributed setups
* Big Model Inference
## Unified launch interface
Accelerate automatically selects the appropriate configuration values for any given distributed training framework (DeepSpeed, FSDP, etc.) through a unified configuration file generated from the [`accelerate config`](package_reference/cli#accelerate-config) command. You could also pass the configuration values explicitly to the command line, which is helpful in certain situations, such as when you're using SLURM.
But in most cases, you should always run [`accelerate config`](package_reference/cli#accelerate-config) first to help Accelerate learn about your training setup.
```bash
accelerate config
```
The [`accelerate config`](package_reference/cli#accelerate-config) command creates and saves a default_config.yaml file in Accelerate's cache folder. This file stores the configuration for your training environment, which helps Accelerate correctly launch your training script based on your machine.
After you've configured your environment, you can test your setup with [`accelerate test`](package_reference/cli#accelerate-test), which launches a short script to test the distributed environment.
```bash
accelerate test
```
> [!TIP]
> Add `--config_file` to the `accelerate test` or `accelerate launch` command to specify the location of the configuration file if it is saved in a non-default location like the cache.
Once your environment is set up, launch your training script with [`accelerate launch`](package_reference/cli#accelerate-launch)!
```bash
accelerate launch path_to_script.py --args_for_the_script
```
To learn more, check out the [Launch distributed code](basic_tutorials/launch) tutorial for more information about launching your scripts.
## Adapt training code
The next main feature of Accelerate is the [`Accelerator`] class which adapts your PyTorch code to run on different distributed setups.
You only need to add a few lines of code to your training script to enable it to run on multiple GPUs or TPUs.
```diff
+ from accelerate import Accelerator
+ accelerator = Accelerator()
+ device = accelerator.device
+ model, optimizer, training_dataloader, scheduler = accelerator.prepare(
+ model, optimizer, training_dataloader, scheduler
+ )
for batch in training_dataloader:
optimizer.zero_grad()
inputs, targets = batch
- inputs = inputs.to(device)
- targets = targets.to(device)
outputs = model(inputs)
loss = loss_function(outputs, targets)
+ accelerator.backward(loss)
optimizer.step()
scheduler.step()
```
1. Import and instantiate the [`Accelerator`] class at the beginning of your training script. The [`Accelerator`] class initializes everything necessary for distributed training, and it automatically detects your training environment (a single machine with a GPU, a machine with several GPUs, several machines with multiple GPUs or a TPU, etc.) based on how the code was launched.
```python
from accelerate import Accelerator
accelerator = Accelerator()
```
2. Remove calls like `.cuda()` on your model and input data. The [`Accelerator`] class automatically places these objects on the appropriate device for you.
> [!WARNING]
> This step is *optional* but it is considered best practice to allow Accelerate to handle device placement. You could also deactivate automatic device placement by passing `device_placement=False` when initializing the [`Accelerator`]. If you want to explicitly place objects on a device with `.to(device)`, make sure you use `accelerator.device` instead. For example, if you create an optimizer before placing a model on `accelerator.device`, training fails on a TPU.
> [!WARNING]
> Accelerate does not use non-blocking transfers by default for its automatic device placement, which can result in potentially unwanted CUDA synchronizations. You can enable non-blocking transfers by passing a [`~utils.dataclasses.DataLoaderConfiguration`] with `non_blocking=True` set as the `dataloader_config` when initializing the [`Accelerator`]. As usual, non-blocking transfers will only work if the dataloader also has `pin_memory=True` set. Be wary that using non-blocking transfers from GPU to CPU may cause incorrect results if it results in CPU operations being performed on non-ready tensors.
```py
device = accelerator.device
```
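As a minimal sketch of the non-blocking option mentioned in the warning above (the dataset is a stand-in; only the `pin_memory` and `dataloader_config` pieces matter):
```py
import torch
from torch.utils.data import DataLoader, TensorDataset

from accelerate import Accelerator
from accelerate.utils import DataLoaderConfiguration

# Stand-in dataset purely for illustration
dataset = TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,)))

# pin_memory=True is required for non-blocking host-to-device transfers to take effect
dataloader = DataLoader(dataset, batch_size=16, shuffle=True, pin_memory=True)

accelerator = Accelerator(dataloader_config=DataLoaderConfiguration(non_blocking=True))
dataloader = accelerator.prepare(dataloader)
```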
3. Pass all relevant PyTorch objects for training (optimizer, model, dataloader(s), learning rate scheduler) to the [`~Accelerator.prepare`] method as soon as they're created. This method wraps the model in a container optimized for your distributed setup, uses Accelerate's version of the optimizer and scheduler, and creates a sharded version of your dataloader for distribution across GPUs or TPUs.
```python
model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
model, optimizer, train_dataloader, lr_scheduler
)
```
4. Replace `loss.backward()` with [`~Accelerator.backward`] to use the correct `backward()` method for your training setup.
```py
accelerator.backward(loss)
```
Read the [Accelerate's internal mechanisms](concept_guides/internal_mechanism) guide to learn more about how Accelerate adapts your code.
### Distributed evaluation
To perform distributed evaluation, pass your validation dataloader to the [`~Accelerator.prepare`] method:
```python
validation_dataloader = accelerator.prepare(validation_dataloader)
```
Each device in your distributed setup only receives a part of the evaluation data, which means you should group your predictions together with the [`~Accelerator.gather_for_metrics`] method. This method requires all tensors to be the same size on each process, so if your tensors have different sizes on each process (for instance when dynamically padding to the maximum length in a batch), you should use the [`~Accelerator.pad_across_processes`] method to pad your tensors to the largest size across processes. Note that the tensors need to be 1D and that they are concatenated along the first dimension.
```python
for inputs, targets in validation_dataloader:
predictions = model(inputs)
# Gather all predictions and targets
all_predictions, all_targets = accelerator.gather_for_metrics((predictions, targets))
# Example of use with a *Datasets.Metric*
metric.add_batch(all_predictions, all_targets)
```
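For instance, a minimal sketch of padding variable-length predictions before gathering (the `dim` and `pad_index` values are illustrative and depend on your data):
```python
for inputs, targets in validation_dataloader:
    predictions = model(inputs)
    # Pad along the sequence dimension so every process contributes tensors of the same shape
    predictions = accelerator.pad_across_processes(predictions, dim=1, pad_index=0)
    targets = accelerator.pad_across_processes(targets, dim=1, pad_index=0)
    all_predictions, all_targets = accelerator.gather_for_metrics((predictions, targets))
```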
For more complex cases (e.g. 2D tensors, cases where you don't want to concatenate tensors, or dicts of 3D tensors), you can pass `use_gather_object=True` to `gather_for_metrics`. This returns the list of objects after gathering. Note that using it with GPU tensors is not well supported and is inefficient.
> [!TIP]
> Data at the end of a dataset may be duplicated so the batch can be equally divided among all workers. The [`~Accelerator.gather_for_metrics`] method automatically removes the duplicated data to calculate a more accurate metric.
## Big Model Inference
Accelerate's Big Model Inference has two main features, [`~accelerate.init_empty_weights`] and [`~accelerate.load_checkpoint_and_dispatch`], to load large models for inference that typically don't fit into memory.
> [!TIP]
> Take a look at the [Handling big models for inference](concept_guides/big_model_inference) guide for a better understanding of how Big Model Inference works under the hood.
### Empty weights initialization
The [`~accelerate.init_empty_weights`] context manager initializes models of any size by creating a *model skeleton* and placing parameters on PyTorch's [**meta**](https://pytorch.org/docs/main/meta.html) device each time they're created. This way, not all weights are immediately loaded and only a small part of the model is loaded into memory at a time.
For example, loading an empty [Mixtral-8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) model takes significantly less memory than fully loading the models and weights on the CPU.
```py
from accelerate import init_empty_weights
from transformers import AutoConfig, AutoModelForCausalLM
config = AutoConfig.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")
with init_empty_weights():
model = AutoModelForCausalLM.from_config(config)
```
### Load and dispatch weights
The [`~accelerate.load_checkpoint_and_dispatch`] function loads full or sharded checkpoints into the empty model, and automatically distributes weights across all available devices.
The `device_map` parameter determines where to place each model layer, and specifying `"auto"` places them on the GPU first, then the CPU, and finally the hard drive as memory-mapped tensors if there's still not enough memory. Use the `no_split_module_classes` parameter to indicate which modules shouldn't be split across devices (typically those with a residual connection).
```py
from accelerate import load_checkpoint_and_dispatch
model = load_checkpoint_and_dispatch(
model, checkpoint="mistralai/Mixtral-8x7B-Instruct-v0.1", device_map="auto", no_split_module_classes=['Block']
)
```
## Next steps
Now that you've been introduced to the main Accelerate features, your next steps could include:
* Check out the [tutorials](basic_tutorials/overview) for a gentle walkthrough of Accelerate. This is especially useful if you're new to distributed training and the library.
* Dive into the [guides](usage_guides/explore) to see how to use Accelerate for specific use-cases.
* Deepen your conceptual understanding of how Accelerate works internally by reading the [concept guides](concept_guides/internal_mechanism).
* Look up classes and commands in the [API reference](package_reference/accelerator) to see what parameters and options are available.


@ -1,460 +0,0 @@
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Quick tour
Let's have a look at the main 🤗 Accelerate features and traps to avoid.
## Main use
To use 🤗 Accelerate in your own script, you have to change four things:
1. Import the [`Accelerator`] main class and instantiate one in an `accelerator` object:
```python
from accelerate import Accelerator
accelerator = Accelerator()
```
This should happen as early as possible in your training script as it will initialize everything necessary for
distributed training. You don't need to indicate the kind of environment you are in (just one machine with a GPU, one
machine with several GPUs, several machines with multiple GPUs or a TPU); the library will detect this automatically.
2. Remove the call `.to(device)` or `.cuda()` for your model and input data. The `accelerator` object
will handle this for you and place all those objects on the right device for you. If you know what you're doing, you
can leave those `.to(device)` calls but you should use the device provided by the `accelerator` object:
`accelerator.device`.
To fully deactivate the automatic device placement, pass along `device_placement=False` when initializing your
[`Accelerator`].
<Tip warning={true}>
If you place your objects manually on the proper device, be careful to create your optimizer after putting your
model on `accelerator.device` or your training will fail on TPU.
</Tip>
3. Pass all objects relevant to training (optimizer, model, training dataloader, learning rate scheduler) to the
[`~Accelerator.prepare`] method. This will make sure everything is ready for training.
```python
model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
model, optimizer, train_dataloader, lr_scheduler
)
```
In particular, your training dataloader will be sharded across all GPUs/TPU cores available so that each one sees a
different portion of the training dataset. Also, the random states of all processes will be synchronized at the
beginning of each iteration through your dataloader, to make sure the data is shuffled the same way (if you decided to
use `shuffle=True` or any kind of random sampler).
<Tip>
The actual batch size for your training will be the number of devices used multiplied by the batch size you set in
your script: for instance training on 4 GPUs with a batch size of 16 set when creating the training dataloader will
train at an actual batch size of 64.
</Tip>
Alternatively, you can use the option `split_batches=True` when initializing your
[`Accelerator`], in which case the batch size will always stay the same, whether you run your
script on 1, 2, 4 or 64 GPUs.
You should execute this instruction as soon as all objects for training are created, before starting your actual
training loop.
<Tip warning={true}>
You should only pass the learning rate scheduler to [`~Accelerator.prepare`] when the scheduler needs to be stepped
at each optimizer step.
</Tip>
<Tip warning={true}>
Your training dataloader may change length when going through this method: if you run on X GPUs, it will have its
length divided by X (since your actual batch size will be multiplied by X), unless you set
`split_batches=True`.
</Tip>
Any instruction using your training dataloader length (for instance if you want to log the number of total training
steps) should go after the call to [`~Accelerator.prepare`].
You can perfectly send your dataloader to [`~Accelerator.prepare`] on its own, but it's best to send the
model and optimizer to [`~Accelerator.prepare`] together.
You may or may not want to send your validation dataloader to [`~Accelerator.prepare`], depending on
whether you want to run distributed evaluation or not (see below).
4. Replace the line `loss.backward()` by `accelerator.backward(loss)`.
And you're all set! With all these changes, your script will run on your local machine as well as on multiple GPUs or a
TPU! You can either use your favorite tool to launch the distributed training, or you can use the 🤗 Accelerate
launcher.
## Distributed evaluation
You can perform regular evaluation in your training script, if you leave your validation dataloader out of the
[`~Accelerator.prepare`] method. In this case, you will need to put the input data on the
`accelerator.device` manually.
To perform distributed evaluation, send along your validation dataloader to the [`~Accelerator.prepare`]
method:
```python
validation_dataloader = accelerator.prepare(validation_dataloader)
```
Like for your training dataloader, it will mean that (should you run your script on multiple devices) each device will
only see part of the evaluation data. This means you will need to group your predictions together. This is very easy to
do with the [`~Accelerator.gather`] method.
```python
for inputs, targets in validation_dataloader:
predictions = model(inputs)
# Gather all predictions and targets
all_predictions = accelerator.gather(predictions)
all_targets = accelerator.gather(targets)
# Example of use with a *Datasets.Metric*
metric.add_batch(all_predictions, all_targets)
```
<Tip warning={true}>
Like for the training dataloader, passing your validation dataloader through
[`~Accelerator.prepare`] may change its length: if you run on X GPUs, it will have its length divided by X
(since your actual batch size will be multiplied by X), unless you set `split_batches=True`.
Any instruction using your training dataloader length (for instance if you need the number of total training steps
to create a learning rate scheduler) should go after the call to [`~Accelerator.prepare`].
</Tip>
<Tip warning={true}>
The [`~Accelerator.gather`] method requires the tensors to be all the same size on each process. If
you have tensors of different sizes on each process (for instance when dynamically padding to the maximum length in
a batch), you should use the [`~Accelerator.pad_across_processes`] method to pad your tensors to the
biggest size across processes.
</Tip>
## Launching your distributed script
You can use the regular commands to launch your distributed training (like `torch.distributed.launch` for
PyTorch), they are fully compatible with 🤗 Accelerate. The only caveat here is that 🤗 Accelerate uses the environment
to determine all useful information, so `torch.distributed.launch` should be used with the flag `--use_env`.
🤗 Accelerate also provides a CLI tool that unifies all launcher, so you only have to remember one command. To use it,
just run
```bash
accelerate config
```
on your machine and reply to the questions asked. This will save a *default_config.yaml* file in your cache folder for
🤗 Accelerate. That cache folder is (with decreasing order of priority):
- The content of your environment variable `HF_HOME` suffixed with *accelerate*.
- If it does not exist, the content of your environment variable `XDG_CACHE_HOME` suffixed with
*huggingface/accelerate*.
- If this does not exist either, the folder *~/.cache/huggingface/accelerate*
You can also specify with the flag `--config_file` the location of the file you want to save.
Once this is done, you can test everything is going well on your setup by running
```bash
accelerate test
```
This will launch a short script that will test the distributed environment. If it runs fine, you are ready for the next
step!
Note that if you specified a location for the config file in the previous step, you need to pass it here as well:
```bash
accelerate test --config_file path_to_config.yaml
```
Now that this is done, you can run your script with the following command:
```bash
accelerate launch path_to_script.py --args_for_the_script
```
If you stored the config file in a non-default location, you can indicate it to the launcher like this:
```bash
accelerate launch --config_file path_to_config.yaml path_to_script.py --args_for_the_script
```
You can also override any of the arguments determined by your config file, see TODO: insert ref here.
## Launching training from a notebook
In Accelerate 0.3.0, a new [`notebook_launcher`] has been introduced to help you launch your training
function from a notebook. This launcher supports launching a training with TPUs on Colab or Kaggle, as well as training
on several GPUs (if the machine on which you are running your notebook has them).
Just define a function responsible for your whole training and/or evaluation in a cell of the notebook, then execute a
cell with the following code:
```python
from accelerate import notebook_launcher
notebook_launcher(training_function)
```
<Tip warning={true}>
Your `Accelerator` object should only be defined inside the training function. This is because the
initialization should be done inside the launcher only.
</Tip>
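For training on multiple GPUs, here is a hedged sketch that passes the number of processes explicitly (the value 2 is just an example and should match your GPU count):
```python
from accelerate import notebook_launcher

# args is forwarded to training_function; num_processes should match the number of GPUs
notebook_launcher(training_function, args=(), num_processes=2)
```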
## Training on TPU
If you want to launch your script on TPUs, there are a few caveats you should be aware of. Behind the scenes, the TPUs
will create a graph of all the operations happening in your training step (forward pass, backward pass and optimizer
step). This is why your first step of training will always be very long as building and compiling this graph for
optimizations takes some time.
The good news is that this compilation will be cached so the second step and all the following will be much faster. The
bad news is that it only applies if all of your steps do exactly the same operations, which implies:
- having all tensors of the same length in all your batches
- having static code (i.e., not a for loop of length that could change from step to step)
Having any of the things above change between two steps will trigger a new compilation which will, once again, take a
lot of time. In practice, that means you must take special care to have all your tensors in your inputs of the same
shape (so no dynamic padding for instance if you are in an NLP problem) and should not use layers with for loops that
have different lengths depending on the inputs (such as an LSTM) or the training will be excruciatingly slow.
To introduce special behavior in your script for TPUs you can check the `distributed_type` of your
`accelerator`:
```python docstyle-ignore
from accelerate import DistributedType
if accelerator.distributed_type == DistributedType.TPU:
# do something of static shape
else:
# go crazy and be dynamic
```
The [NLP example](https://github.com/huggingface/accelerate/blob/main/examples/nlp_example.py) shows an example in
situation with dynamic padding.
One last thing to pay close attention to: if your model has tied weights (such as language models which tie the weights
of the embedding matrix with the weights of the decoder), moving this model to the TPU (either yourself or after you
passed your model to [`~Accelerator.prepare`]) will break the tying. You will need to retie the weights
after. You can find an example of this in the [run_clm_no_trainer](https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_clm.py) script in
the Transformers repository.
## Other caveats
We list here all smaller issues you could have in your script conversion and how to resolve them.
### Execute a statement only on one process
Some of your instructions only need to run for one process on a given server: for instance a data download or a log
statement. To do this, wrap the statement in a test like this:
```python docstyle-ignore
if accelerator.is_local_main_process:
# Is executed once per server
```
Another example is progress bars: to avoid having multiple progress bars in your output, you should only display one on
the local main process:
```python
from tqdm.auto import tqdm
progress_bar = tqdm(range(args.max_train_steps), disable=not accelerator.is_local_main_process)
```
The *local* means per machine: if you are running your training on two servers with several GPUs, the instruction will
be executed once on each of those servers. If you need to execute something only once for all processes (and not per
machine) for instance, uploading the final model to the 🤗 model hub, wrap it in a test like this:
```python docstyle-ignore
if accelerator.is_main_process:
# Is executed once only
```
For printing statements you only want executed once per machine, you can just replace the `print` function by
`accelerator.print`.
### Defer execution
When you run your usual script, instructions are executed in order. Using 🤗 Accelerate to deploy your script on several
GPUs at the same time introduces a complication: while each process executes all instructions in order, some may be
faster than others.
You might need to wait for all processes to have reached a certain point before executing a given instruction. For
instance, you shouldn't save a model before being sure every process is done with training. To do this, just write the
following line in your code:
```
accelerator.wait_for_everyone()
```
This instruction will block all the processes that arrive there first until all the other processes have reached that
point (if you run your script on just one GPU or CPU, this won't do anything).
### Saving/loading a model
Saving the model you trained might need a bit of adjustment: first you should wait for all processes to reach that
point in the script as shown above, and then, you should unwrap your model before saving it. This is because when going
through the [`~Accelerator.prepare`] method, your model may have been placed inside a bigger model,
which deals with the distributed training. This in turn means that saving your model state dictionary without taking
any precaution will take that potential extra layer into account, and you will end up with weights you can't load back
in your base model.
This is why it's recommended to *unwrap* your model first. Here is an example:
```
accelerator.wait_for_everyone()
unwrapped_model = accelerator.unwrap_model(model)
accelerator.save(unwrapped_model.state_dict(), filename)
```
If your script contains logic to load a checkpoint, we also recommend you load your weights in the unwrapped model
(this is only useful if you use the load function after making your model go through
[`~Accelerator.prepare`]). Here is an example:
```
unwrapped_model = accelerator.unwrap_model(model)
unwrapped_model.load_state_dict(torch.load(filename))
```
Note that since all the model parameters are references to tensors, this will load your weights inside `model`.
## Saving/loading entire states
When training your model, you may want to save the current state of the model, optimizer, random generators, and potentially LR schedulers to be restored in the _same script_.
You can use `accelerator.save_state` and `accelerator.load_state` respectively to do so, simply by passing in a save location.
If you have registered any other stateful items to be stored through `accelerator.register_for_checkpointing` they will also be saved and/or loaded.
<Tip>
Every object passed to `register_for_checkpointing` must have a `load_state_dict` and `state_dict` function to be stored
</Tip>
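A minimal sketch (the folder path is just an example):
```python
# Save the full training state (model, optimizer, RNG states, registered objects) to a folder
accelerator.save_state("my/save/path")

# ... later, in the same script/setup, restore it
accelerator.load_state("my/save/path")
```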
### Gradient clipping
If you are using gradient clipping in your script, you should replace the calls to
`torch.nn.utils.clip_grad_norm_` or `torch.nn.utils.clip_grad_value_` with `accelerator.clip_grad_norm_`
and `accelerator.clip_grad_value_` respectively.
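For example, a brief sketch inside the training loop (assuming `model`, `optimizer`, and `loss` from your usual loop; the max norm of 1.0 is arbitrary):
```python
accelerator.backward(loss)
# Instead of torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0):
accelerator.clip_grad_norm_(model.parameters(), 1.0)
optimizer.step()
```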
### Mixed Precision training
If you are running your training in Mixed Precision with Accelerate, you will get the best result with your loss being
computed inside your model (like in Transformer models for instance). Every computation outside of the model will be
executed in full precision (which is generally what you want for loss computation, especially if it involves a
softmax). However you might want to put your loss computation inside the *accelerator.autocast* context manager:
```
with accelerator.autocast():
    loss = complex_loss_function(outputs, target)
```
Another caveat with Mixed Precision training is that the gradient will skip a few updates at the beginning and
sometimes during training: because of the dynamic loss scaling strategy, there are points during training where the
gradients have overflown, and the loss scaling factor is reduced to avoid this happening again at the next step.
This means that you may update your learning rate scheduler when there was no update, which is fine in general, but may
have an impact when you have very little training data, or if the first learning rate values of your scheduler are very
important. In this case, you can skip the learning rate scheduler updates when the optimizer step was not done like
this:
```
if not accelerator.optimizer_step_was_skipped:
lr_scheduler.step()
```
### DeepSpeed
DeepSpeed support is experimental, so the underlying API will evolve in the near future and may have some slight
breaking changes. In particular, 🤗 Accelerate does not yet support a DeepSpeed config you have written yourself; this
will be added in a future version.
<Tip warning={true}>
The [`notebook_launcher`] does not support the DeepSpeed integration yet.
</Tip>
## Internal mechanism
Internally, the library works by first analyzing the environment in which the script is launched to determine which
kind of distributed setup is used, how many different processes there are and which one the current script is in. All
that information is stored in the [`~AcceleratorState`].
This class is initialized the first time you instantiate an [`Accelerator`] and performs any
specific initialization your distributed setup needs. Its state is then uniquely shared across all instances of
[`~state.AcceleratorState`].
Then, when calling [`~Accelerator.prepare`], the library:
- wraps your model(s) in the container adapted for the distributed setup,
- wraps your optimizer(s) in a [`~optimizer.AcceleratedOptimizer`],
- creates a new version of your dataloader(s) in a [`~data_loader.DataLoaderShard`].
While the model(s) and optimizer(s) are just put in simple wrappers, the dataloader(s) are re-created. This is mostly
because PyTorch does not let the user change the `batch_sampler` of a dataloader once it's been created and the
library handles the sharding of your data between processes by changing that `batch_sampler` to yield every other
`num_processes` batches.
The [`~data_loader.DataLoaderShard`] subclasses `DataLoader` to add the following functionality:
- it synchronizes the appropriate random number generator of all processes at each new iteration, to ensure any
randomization (like shuffling) is done the exact same way across processes.
- it puts the batches on the proper device before yielding them (unless you have opted out of
`device_placement=True`).
The random number generator synchronization will by default synchronize:
- the `generator` attribute of a given sampler (like the PyTorch `RandomSampler`) for PyTorch >= 1.6
- the main random number generator in PyTorch <=1.5.1
You can choose which random number generator(s) to synchronize with the `rng_types` argument of the main
[`Accelerator`]. In PyTorch >= 1.6, it is recommended to rely on local `generator` to avoid
setting the same seed in the main random number generator in all processes.
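For instance, a short sketch of restricting synchronization to the samplers' local generators:
```python
from accelerate import Accelerator

accelerator = Accelerator(rng_types=["generator"])
```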
<Tip warning={true}>
Synchronizing the main torch (or CUDA or XLA) random number generator will affect any other potential random
artifacts you could have in your dataset (like random data augmentation), in the sense that all processes will get the
same random numbers from the torch random modules (so will apply the same random data augmentation if it's
controlled by torch).
</Tip>
<Tip>
The randomization part of your custom sampler, batch sampler or iterable dataset should be done using a local
`torch.Generator` object (in PyTorch >= 1.6), see the traditional `RandomSampler`, as an example.
</Tip>
See more details about the internal in the [Internals page](internal).


@ -1,163 +0,0 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Tracking
There are a large number of experiment tracking APIs available, however getting them all to work in a multi-processing environment can oftentimes be complex.
Accelerate provides a general tracking API that can be used to log useful items during your script through [`~Accelerator.log`]
## Integrated Trackers
Currently `Accelerate` supports three trackers out-of-the-box:
[[autodoc]] tracking.TensorBoardTracker
[[autodoc]] tracking.WandBTracker
[[autodoc]] tracking.CometMLTracker
To use any of them, pass in the selected type(s) to the `log_with` parameter in [`Accelerator`]:
```python
from accelerate import Accelerator
from accelerate.utils import LoggerType
accelerator = Accelerator(log_with="all") # For all available trackers in the environment
accelerator = Accelerator(log_with="wandb")
accelerator = Accelerator(log_with=["wandb", LoggerType.TENSORBOARD])
```
At the start of your experiment [`~Accelerator.init_trackers`] should be used to setup your project, and potentially add any experiment hyperparameters to be logged:
```python
hps = {"num_iterations": 5, "learning_rate": 1e-2}
accelerator.init_trackers("my_project", config=hps)
```
When you are ready to log any data, [`~Accelerator.log`] should be used.
A `step` can also be passed in to correlate the data with a particular step in the training loop.
```python
accelerator.log({"train_loss": 1.12, "valid_loss": 0.8}, step=1)
```
Once you've finished training, make sure to run [`~Accelerator.end_training`] so that all the trackers can run their finish functionalities if they have any.
```python
accelerator.end_training()
```
A full example is below:
```python
from accelerate import Accelerator
accelerator = Accelerator(log_with="all")
config = {
"num_iterations": 5,
"learning_rate": 1e-2,
"loss_function": str(my_loss_function),
}
accelerator.init_trackers("example_project", config=config)
my_model, my_optimizer, my_training_dataloader = accelerator.prepare(my_model, my_optimizer, my_training_dataloader)
device = accelerator.device
my_model.to(device)
for iteration in range(config["num_iterations"]):
    for step, batch in enumerate(my_training_dataloader):
        my_optimizer.zero_grad()
        inputs, targets = batch
        inputs = inputs.to(device)
        targets = targets.to(device)
        outputs = my_model(inputs)
        loss = my_loss_function(outputs, targets)
        accelerator.backward(loss)
        my_optimizer.step()
        accelerator.log({"training_loss": loss}, step=step)
accelerator.end_training()
```
## Implementing Custom Trackers
To implement a new tracker to be used in `Accelerator`, a new one can be made through implementing the [`~GeneralTracker`] class.
Every tracker must implement three functions:
- `__init__`:
- Should store a `run_name` and initialize the tracker API of the integrated library.
- If a tracker stores their data locally (such as TensorBoard), a `logging_dir` parameter can be added.
- `store_init_configuration`:
- Should take in a `values` dictionary and store them as a one-time experiment configuration
- `log`:
- Should take in a `values` dictionary and a `step`, and should log them to the run
A brief example can be seen below with an integration with Weights and Biases, containing only the relevant information:
```python
from accelerate.tracking import GeneralTracker
from typing import Optional
import wandb
class MyCustomTracker(GeneralTracker):
def __init__(self, run_name: str):
self.run_name = run_name
wandb.init(self.run_name)
def store_init_configuration(self, values: dict):
wandb.config(values)
def log(self, values: dict, step: Optional[int] = None):
wandb.log(values, step=step)
```
When you are ready to build your `Accelerator` object, pass in an **instance** of your tracker to [`~Accelerator.log_with`] to have it automatically
be used with the API:
```python
tracker = MyCustomTracker("some_run_name")
accelerator = Accelerator(log_with=tracker)
```
These also can be mixed with existing trackers, including with `"all"`:
```python
tracker = MyCustomTracker("some_run_name")
accelerator = Accelerator(log_with=[tracker, "all"])
```
## When a wrapper cannot work
If a library has an API that does not follow a strict `.log` with an overall dictionary such as Neptune.AI, logging can be done manually under an `if accelerator.is_main_process` statement:
```diff
from accelerate import Accelerator
+ import neptune.new as neptune
accelerator = Accelerator()
+ run = neptune.init(...)
my_model, my_optimizer, my_training_dataloader = accelerator.prepare(my_model, my_optimizer, my_training_dataloader)
device = accelerator.device
my_model.to(device)
for iteration in range(config["num_iterations"]):
for batch in my_training_dataloader:
my_optimizer.zero_grad()
inputs, targets = batch
inputs = inputs.to(device)
targets = targets.to(device)
outputs = my_model(inputs)
loss = my_loss_function(outputs, targets)
total_loss += loss
accelerator.backward(loss)
my_optimizer.step()
+ if accelerator.is_main_process:
+ run["logs/training/batch/loss"].log(loss)
```


@ -0,0 +1,150 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Handling big models for inference
One of the biggest advancements 🤗 Accelerate provides is the concept of [large model inference](../concept_guides/big_model_inference) wherein you can perform *inference* on models that cannot fully fit on your graphics card.
This tutorial will be broken down into two parts showcasing how to use both 🤗 Accelerate and 🤗 Transformers (a higher API-level) to make use of this idea.
## Using 🤗 Accelerate
For these tutorials, we'll assume a typical workflow for loading your model in such that:
```py
import torch
my_model = ModelClass(...)
state_dict = torch.load(checkpoint_file)
my_model.load_state_dict(state_dict)
```
Note that here we assume that `ModelClass` is a model that takes up more video-card memory than what can fit on your device (be it `mps` or `cuda`).
The first step is to init an empty skeleton of the model which won't take up any RAM using the [`init_empty_weights`] context manager:
```py
from accelerate import init_empty_weights
with init_empty_weights():
my_model = ModelClass(...)
```
With this, `my_model` is currently "parameterless", hence leaving a smaller footprint than what one would normally get by loading it onto the CPU directly.
Next we need to load in the weights to our model so we can perform inference.
For this we will use [`load_checkpoint_and_dispatch`], which as the name implies will load a checkpoint inside your empty model and dispatch the weights for each layer across all the devices you have available (GPU/MPS and CPU RAM).
To determine how this `dispatch` can be performed, generally specifying `device_map="auto"` will be good enough as 🤗 Accelerate
will attempt to fill all the space in your GPU(s) first, then offload to CPU RAM, and finally to disk (the absolute slowest option) if there is still not enough memory.
<Tip>
For more details on designing your own device map, see this section of the [concept guide](../concept_guides/big_model_inference#designing-a-device-map)
</Tip>
See an example below:
```py
from accelerate import load_checkpoint_and_dispatch
model = load_checkpoint_and_dispatch(
model, checkpoint=checkpoint_file, device_map="auto"
)
```
<Tip>
If there are certain "chunks" of layers that shouldn't be split, you can pass them in as `no_split_module_classes`. Read more about it [here](../concept_guides/big_model_inference#loading-weights)
</Tip>
<Tip>
Also to save on memory (such as if the `state_dict` will not fit in RAM), a model's weights can be divided and split into multiple checkpoint files. Read more about it [here](../concept_guides/big_model_inference#sharded-checkpoints)
</Tip>
Now that the model is dispatched fully, you can perform inference as normal with the model:
```py
input = torch.randn(2,3)
input = input.to("cuda")
output = model(input)
```
What will happen now is each time the input gets passed through a layer, it will be sent from the CPU to the GPU (or disk to CPU to GPU), the output is calculated, and then the layer is pulled back off the GPU going back down the line. While this adds some overhead to the inference being performed, through this method it is possible to run **any size model** on your system, as long as the largest layer is capable of fitting on your GPU.
<Tip>
Multiple GPUs can be utilized, however this is considered "model parallelism" and as a result only one GPU will be active at a given moment, waiting for the prior one to send it the output. You should launch your script normally with `python`
and not need `torchrun`, `accelerate launch`, etc.
</Tip>
For a visual representation of this, check out the animation below:
<Youtube id="MWCSGj9jEAo" />
### Complete Example
Below is the full example showcasing what we performed above:
```py
import torch
from accelerate import init_empty_weights, load_checkpoint_and_dispatch
with init_empty_weights():
model = MyModel(...)
model = load_checkpoint_and_dispatch(
model, checkpoint=checkpoint_file, device_map="auto"
)
input = torch.randn(2,3)
input = input.to("cuda")
output = model(input)
```
## Using 🤗 Transformers, 🤗 Diffusers, and other 🤗 Open Source Libraries
Libraries that support 🤗 Accelerate big model inference include all of the earlier logic in their `from_pretrained` constructors.
These operate by specifying a string representing the model to download from the [🤗 Hub](https://hf.co/models) and then denoting `device_map="auto"` along with a few extra parameters.
As a brief example, we will look at using `transformers` and loading in Big Science's T0pp model.
```py
from transformers import AutoModelForSeq2SeqLM
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", device_map="auto")
```
After loading the model in, the initial steps from before to prepare a model have all been done and the model is fully
ready to make use of all the resources in your machine. Through these constructors, you can also save *more* memory by
specifying the precision the model is loaded into as well, through the `torch_dtype` parameter, such as:
```py
from transformers import AutoModelForSeq2SeqLM
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", device_map="auto", torch_dtype=torch.float16)
```
To learn more about this, check out the 🤗 Transformers documentation available [here](https://huggingface.co/docs/transformers/main/en/main_classes/model#large-model-loading).
## Where to go from here
For a much more detailed look at big model inference, be sure to check out the [Conceptual Guide on it](../concept_guides/big_model_inference)


@ -8,36 +8,43 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Checkpointing
When training a PyTorch model with Accelerate, you may often want to save and continue a state of training. Doing so requires
saving and loading the model, optimizer, RNG generators, and the GradScaler. Inside Accelerate are two convience functions to achieve this quickly:
When training a PyTorch model with 🤗 Accelerate, you may often want to save and continue a state of training. Doing so requires
saving and loading the model, optimizer, RNG generators, and the GradScaler. Inside 🤗 Accelerate are two convenience functions to achieve this quickly:
- Use [`~Accelerator.save_state`] for saving everything mentioned above to a folder location
- Use [`~Accelerator.load_state`] for loading everything stored from an earlier `save_state`
To further customize where and how states are saved through [`~Accelerator.save_state`] the [`~utils.ProjectConfiguration`] class can be used. For example
if `automatic_checkpoint_naming` is enabled, each saved checkpoint will then be located at `Accelerator.project_dir/checkpoints/checkpoint_{checkpoint_number}`.
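A brief sketch of that configuration (the folder name is just an example):
```python
from accelerate import Accelerator
from accelerate.utils import ProjectConfiguration

# Checkpoints will be saved under my/save/path/checkpoints/checkpoint_{n}
project_config = ProjectConfiguration(project_dir="my/save/path", automatic_checkpoint_naming=True)
accelerator = Accelerator(project_config=project_config)
```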
It should be noted that the expectation is that those states come from the same training script, they should not be from two separate scripts.
- By using [`~Accelerator.register_for_checkpointing`], you can register custom objects to be automatically stored or loaded from the two prior functions,
so long as the object has a `state_dict` **and** a `load_state_dict` functionality. This could include objects such as a learning rate scheduler.
Below is a brief example using checkpointing to save and reload a state during training:
```python
from accelerate import Accelerator
import torch
accelerator = Accelerator()
accelerator = Accelerator(project_dir="my/save/path")
my_scheduler = torch.optim.lr_scheduler.StepLR(my_optimizer, step_size=1, gamma=0.99)
my_model, my_optimizer, my_training_dataloader = accelerate.prepare(my_model, my_optimizer, my_training_dataloader)
my_model, my_optimizer, my_training_dataloader = accelerator.prepare(my_model, my_optimizer, my_training_dataloader)
# Register the LR scheduler
accelerate.register_for_checkpointing(my_scheduler)
accelerator.register_for_checkpointing(my_scheduler)
# Save the starting state
accelerate.save_state("my/save/path")
accelerator.save_state()
device = accelerator.device
my_model.to(device)
@ -55,6 +62,35 @@ for epoch in range(num_epochs):
my_optimizer.step()
my_scheduler.step()
# Restore previous state
accelerate.load_state("my/save/path")
```
# Restore the previous state
accelerator.load_state("my/save/path/checkpointing/checkpoint_0")
```
## Restoring the state of the DataLoader
After resuming from a checkpoint, it may also be desirable to resume from a particular point in the active `DataLoader` if
the state was saved during the middle of an epoch. You can use [`~Accelerator.skip_first_batches`] to do so.
```python
from accelerate import Accelerator
accelerator = Accelerator(project_dir="my/save/path")
train_dataloader = accelerator.prepare(train_dataloader)
accelerator.load_state("my_state")
# Assume the checkpoint was saved 100 steps into the epoch
skipped_dataloader = accelerator.skip_first_batches(train_dataloader, 100)
# After the first iteration, go back to `train_dataloader`
# First epoch
for batch in skipped_dataloader:
# Do something
pass
# Second epoch
for batch in train_dataloader:
# Do something
pass
```


@ -0,0 +1,325 @@
<!--
Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# DDP Communication Hooks
Distributed Data Parallel (DDP) communication hooks provide a generic interface to control how gradients are communicated across workers by overriding the vanilla allreduce in `DistributedDataParallel`. A few built-in communication hooks are provided, and users can easily apply any of these hooks to optimize communication.
- **FP16 Compression Hook**: Compresses gradients by casting them to half-precision floating-point format (`torch.float16`), reducing communication overhead.
- **BF16 Compression Hook**: Similar to FP16, but uses the Brain Floating Point format (`torch.bfloat16`), which can be more efficient on certain hardware.
- **PowerSGD Hook**: An advanced gradient compression algorithm that provides high compression rates and can accelerate bandwidth-bound distributed training.
In this tutorial, you will see how to quickly set up DDP communication hooks and perform training with the utilities provided in 🤗 Accelerate, which can be as simple as adding just one new line of code! This demonstrates how to use DDP communication hooks to optimize gradient communication in distributed training with the 🤗 Accelerate library.
## FP16 Compression Hook
<hfoptions id="fp16">
<hfoption id="PyTorch">
```python
import torch
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.distributed.algorithms.ddp_comm_hooks import default_hooks
class MyModel(torch.nn.Module):
def __init__(self):
super().__init__()
self.layer = torch.nn.Linear(10, 10)
def forward(self, x):
return self.layer(x)
model = MyModel()
model = DDP(model, device_ids=[torch.cuda.current_device()])
model.register_comm_hook(state=None, hook=default_hooks.fp16_compress_hook)
# Training loop
for data, targets in data_loader:
outputs = model(data)
loss = criterion(outputs, targets)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```
</hfoption>
<hfoption id="Accelerate">
```python
from accelerate import Accelerator, DDPCommunicationHookType, DistributedDataParallelKwargs
import torch
class MyModel(torch.nn.Module):
def __init__(self):
super().__init__()
self.layer = torch.nn.Linear(10, 10)
def forward(self, x):
return self.layer(x)
# DDP Communication Hook setup
ddp_kwargs = DistributedDataParallelKwargs(comm_hook=DDPCommunicationHookType.FP16)
accelerator = Accelerator(kwargs_handlers=[ddp_kwargs])
model = MyModel()
optimizer = torch.optim.Adam(model.parameters())
data_loader = DataLoader(dataset, batch_size=16)
model, optimizer, data_loader = accelerator.prepare(model, optimizer, data_loader)
# Training loop
for data, targets in data_loader:
outputs = model(data)
loss = criterion(outputs, targets)
accelerator.backward(loss)
optimizer.step()
optimizer.zero_grad()
```
</hfoption>
</hfoptions>
## BF16 Compression Hook
<Tip warning={true}>
BF16 Compression Hook API is experimental, and it requires NCCL version later than 2.9.6.
</Tip>
<hfoptions id="bf16">
<hfoption id="PyTorch">
```python
import torch
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.distributed.algorithms.ddp_comm_hooks import default_hooks
class MyModel(torch.nn.Module):
def __init__(self):
super().__init__()
self.layer = torch.nn.Linear(10, 10)
def forward(self, x):
return self.layer(x)
model = MyModel()
model = DDP(model, device_ids=[torch.cuda.current_device()])
model.register_comm_hook(state=None, hook=default_hooks.bf16_compress_hook)
# Training loop
for data, targets in data_loader:
outputs = model(data)
loss = criterion(outputs, targets)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```
</hfoption>
<hfoption id="Accelerate">
```python
from accelerate import Accelerator, DDPCommunicationHookType, DistributedDataParallelKwargs
import torch
class MyModel(torch.nn.Module):
def __init__(self):
super().__init__()
self.layer = torch.nn.Linear(10, 10)
def forward(self, x):
return self.layer(x)
# DDP Communication Hook setup
ddp_kwargs = DistributedDataParallelKwargs(comm_hook=DDPCommunicationHookType.BF16)
accelerator = Accelerator(kwargs_handlers=[ddp_kwargs])
model = MyModel()
optimizer = torch.optim.Adam(model.parameters())
data_loader = DataLoader(dataset, batch_size=16)
model, optimizer, data_loader = accelerator.prepare(model, optimizer, data_loader)
# Training loop
for data, targets in data_loader:
outputs = model(data)
loss = criterion(outputs, targets)
accelerator.backward(loss)
optimizer.step()
optimizer.zero_grad()
```
</hfoption>
</hfoptions>
## PowerSGD Hook
<Tip warning={true}>
PowerSGD typically requires extra memory of the same size as the model's gradients to enable error feedback, which can compensate for biased compressed communication and improve accuracy.
</Tip>
<hfoptions id="powerSGD">
<hfoption id="PyTorch">
```python
import torch
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.distributed.algorithms.ddp_comm_hooks import powerSGD_hook
class MyModel(torch.nn.Module):
def __init__(self):
super().__init__()
self.layer = torch.nn.Linear(10, 10)
def forward(self, x):
return self.layer(x)
model = MyModel()
model = DDP(model, device_ids=[torch.cuda.current_device()])
state = powerSGD_hook.PowerSGDState(process_group=None)
model.register_comm_hook(state=state, hook=powerSGD_hook.powerSGD_hook)
# Training loop
for data, targets in data_loader:
outputs = model(data)
loss = criterion(outputs, targets)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```
</hfoption>
<hfoption id="Accelerate">
```python
from accelerate import Accelerator, DDPCommunicationHookType, DistributedDataParallelKwargs
import torch
class MyModel(torch.nn.Module):
def __init__(self):
super().__init__()
self.layer = torch.nn.Linear(10, 10)
def forward(self, x):
return self.layer(x)
# DDP Communication Hook setup
ddp_kwargs = DistributedDataParallelKwargs(comm_hook=DDPCommunicationHookType.POWER_SGD)
accelerator = Accelerator(kwargs_handlers=[ddp_kwargs])
model = MyModel()
optimizer = torch.optim.Adam(model.parameters())
data_loader = DataLoader(dataset, batch_size=16)
model, optimizer, data_loader = accelerator.prepare(model, optimizer, data_loader)
# Training loop
for data, targets in data_loader:
outputs = model(data)
loss = criterion(outputs, targets)
accelerator.backward(loss)
optimizer.step()
optimizer.zero_grad()
```
</hfoption>
</hfoptions>
## DDP Communication Hooks utilities
There are two additional utilities for supporting optional functionalities with the communication hooks.
### comm_wrapper
`comm_wrapper` is an option to wrap a communication hook with additional functionality. For example, it can be used to combine FP16 compression with other communication strategies. Currently supported wrappers are `no`, `fp16`, and `bf16`.
```python
from accelerate import Accelerator, DDPCommunicationHookType, DistributedDataParallelKwargs
import torch
class MyModel(torch.nn.Module):
def __init__(self):
super().__init__()
self.layer = torch.nn.Linear(10, 10)
def forward(self, x):
return self.layer(x)
# DDP Communication Hook setup
ddp_kwargs = DistributedDataParallelKwargs(
comm_hook=DDPCommunicationHookType.POWER_SGD,
comm_wrapper=DDPCommunicationHookType.FP16
)
accelerator = Accelerator(kwargs_handlers=[ddp_kwargs])
model = MyModel()
optimizer = torch.optim.Adam(model.parameters())
data_loader = DataLoader(dataset, batch_size=16)
model, optimizer, data_loader = accelerator.prepare(model, optimizer, data_loader)
# Training loop
for data, targets in data_loader:
outputs = model(data)
loss = criterion(outputs, targets)
accelerator.backward(loss)
optimizer.step()
optimizer.zero_grad()
```
### comm_state_option
`comm_state_option` allows you to pass additional state information required by certain communication hooks. This is particularly useful for stateful hooks like `PowerSGD`, which require maintaining hyperparameters and internal states across training steps. Below is an example showcasing the use of `comm_state_option` with the `PowerSGD` hook.
```python
from accelerate import Accelerator, DDPCommunicationHookType, DistributedDataParallelKwargs
import torch
class MyModel(torch.nn.Module):
def __init__(self):
super().__init__()
self.layer = torch.nn.Linear(10, 10)
def forward(self, x):
return self.layer(x)
# DDP Communication Hook setup
ddp_kwargs = DistributedDataParallelKwargs(
comm_hook=DDPCommunicationHookType.POWER_SGD,
comm_state_option={"matrix_approximation_rank": 2}
)
accelerator = Accelerator(kwargs_handlers=[ddp_kwargs])
model = MyModel()
optimizer = torch.optim.Adam(model.parameters())
data_loader = DataLoader(dataset, batch_size=16)
model, optimizer, data_loader = accelerator.prepare(model, optimizer, data_loader)
# Training loop
for data, targets in data_loader:
outputs = model(data)
loss = criterion(outputs, targets)
accelerator.backward(loss)
optimizer.step()
optimizer.zero_grad()
```
For more advanced usage and additional hooks, refer to the [PyTorch DDP Communication Hooks documentation](https://pytorch.org/docs/stable/ddp_comm_hooks.html).


@ -0,0 +1,738 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# DeepSpeed
[DeepSpeed](https://github.com/microsoft/DeepSpeed) implements everything described in the [ZeRO paper](https://arxiv.org/abs/1910.02054). Some of the salient optimizations are:
1. Optimizer state partitioning (ZeRO stage 1)
2. Gradient partitioning (ZeRO stage 2)
3. Parameter partitioning (ZeRO stage 3)
4. Custom mixed precision training handling
5. A range of fast CUDA-extension-based optimizers
6. ZeRO-Offload to CPU and Disk/NVMe
7. Hierarchical partitioning of model parameters (ZeRO++)
ZeRO-Offload has its own dedicated paper: [ZeRO-Offload: Democratizing Billion-Scale Model Training](https://arxiv.org/abs/2101.06840). And NVMe-support is described in the paper [ZeRO-Infinity: Breaking the GPU
Memory Wall for Extreme Scale Deep Learning](https://arxiv.org/abs/2104.07857).
DeepSpeed ZeRO-2 is primarily used only for training, as its features are of no use to inference.
DeepSpeed ZeRO-3 can be used for inference as well since it allows huge models to be loaded on multiple GPUs, which
won't be possible on a single GPU.
🤗 Accelerate integrates [DeepSpeed](https://github.com/microsoft/DeepSpeed) via 2 options:
1. Integration of the DeepSpeed features via a `deepspeed config file` specification in `accelerate config`. You just supply your custom config file or use our template. Most of
this document is focused on this feature. This supports all the core features of DeepSpeed and gives the user a lot of flexibility.
The user may have to change a few lines of code depending on the config.
2. Integration via `deepspeed_plugin`. This supports a subset of the DeepSpeed features and uses default options for the rest of the configurations.
The user need not change any code and is good for those who are fine with most of the default settings of DeepSpeed (see the brief sketch below).
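For illustration, here is a hedged sketch of driving the plugin directly from code (the ZeRO stage and gradient accumulation values are arbitrary):
```python
from accelerate import Accelerator, DeepSpeedPlugin

# Use the DeepSpeed plugin with mostly-default settings; remaining options keep their defaults
deepspeed_plugin = DeepSpeedPlugin(zero_stage=2, gradient_accumulation_steps=2)
accelerator = Accelerator(mixed_precision="fp16", deepspeed_plugin=deepspeed_plugin)
```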
## What is integrated?
Training:
1. 🤗 Accelerate integrates all features of DeepSpeed ZeRO. This includes all the ZeRO stages 1, 2 and 3 as well as ZeRO-Offload, ZeRO-Infinity (which can offload to disk/NVMe) and ZeRO++.
Below is a short description of Data Parallelism using ZeRO - Zero Redundancy Optimizer along with diagram from this [blog post](https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/)
![ZeRO Data Parallelism](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-zero.png)
(Source: [link](https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/))
a. **Stage 1** : Shards optimizer states across data parallel workers/GPUs
b. **Stage 2** : Shards optimizer states + gradients across data parallel workers/GPUs
c. **Stage 3**: Shards optimizer states + gradients + model parameters across data parallel workers/GPUs
d. **Optimizer Offload**: Offloads the gradients + optimizer states to CPU/Disk building on top of ZERO Stage 2
e. **Param Offload**: Offloads the model parameters to CPU/Disk building on top of ZERO Stage 3
f. **Hierarchical Partitioning**: Enables efficient multi-node training with data-parallel training across nodes and ZeRO-3 sharding within a node, built on top of ZeRO Stage 3.
<u>Note</u>: With respect to Disk Offload, the disk should be an NVMe drive for decent speed, but it technically works on any disk.
Inference:
1. DeepSpeed ZeRO Inference supports ZeRO stage 3 with ZeRO-Infinity. It uses the same ZeRO protocol as training, but
it doesn't use an optimizer or a learning rate scheduler; only stage 3 is relevant. For more details see:
[deepspeed-zero-inference](#deepspeed-zero-inference).
## How it works
**Pre-Requisites**: Install DeepSpeed version >=0.6.5. Please refer to the [DeepSpeed Installation details](https://github.com/microsoft/DeepSpeed#installation)
for more information.
We will first look at the easy-to-use integration via `accelerate config`,
followed by the more flexible and feature-rich `deepspeed config file` integration.
### Accelerate DeepSpeed Plugin
On your machine(s) just run:
```bash
accelerate config
```
and answer the questions asked. It will ask whether you want to use a config file for DeepSpeed to which you should answer no. Then answer the following questions to generate a basic DeepSpeed config.
This will generate a config file that will be used automatically to properly set the
default options when doing
```bash
accelerate launch my_script.py --args_to_my_script
```
For instance, here is how you would run the NLP example `examples/nlp_example.py` (from the root of the repo) with DeepSpeed Plugin:
**ZeRO Stage-2 DeepSpeed Plugin Example**
```yaml
compute_environment: LOCAL_MACHINE
deepspeed_config:
gradient_accumulation_steps: 1
gradient_clipping: 1.0
offload_optimizer_device: none
offload_param_device: none
zero3_init_flag: true
zero_stage: 2
distributed_type: DEEPSPEED
fsdp_config: {}
machine_rank: 0
main_process_ip: null
main_process_port: null
main_training_function: main
mixed_precision: fp16
num_machines: 1
num_processes: 2
use_cpu: false
```
```bash
accelerate launch examples/nlp_example.py --mixed_precision fp16
```
**ZeRO Stage-3 with CPU Offload DeepSpeed Plugin Example**
```yaml
compute_environment: LOCAL_MACHINE
deepspeed_config:
gradient_accumulation_steps: 1
gradient_clipping: 1.0
offload_optimizer_device: cpu
offload_param_device: cpu
zero3_init_flag: true
zero3_save_16bit_model: true
zero_stage: 3
distributed_type: DEEPSPEED
fsdp_config: {}
machine_rank: 0
main_process_ip: null
main_process_port: null
main_training_function: main
mixed_precision: fp16
num_machines: 1
num_processes: 2
use_cpu: false
```
```bash
accelerate launch examples/nlp_example.py --mixed_precision fp16
```
Currently, `Accelerate` supports the following config options through the CLI:
```bash
`zero_stage`: [0] Disabled, [1] optimizer state partitioning, [2] optimizer+gradient state partitioning and [3] optimizer+gradient+parameter partitioning
`gradient_accumulation_steps`: Number of training steps to accumulate gradients before averaging and applying them.
`gradient_clipping`: Enable gradient clipping with value.
`offload_optimizer_device`: [none] Disable optimizer offloading, [cpu] offload optimizer to CPU, [nvme] offload optimizer to NVMe SSD. Only applicable with ZeRO >= Stage-2.
`offload_optimizer_nvme_path`: Decides the NVMe path to offload optimizer states to. If unspecified, will default to 'none'.
`offload_param_device`: [none] Disable parameter offloading, [cpu] offload parameters to CPU, [nvme] offload parameters to NVMe SSD. Only applicable with ZeRO Stage-3.
`offload_param_nvme_path`: Decides the NVMe path to offload parameters to. If unspecified, will default to 'none'.
`zero3_init_flag`: Decides whether to enable `deepspeed.zero.Init` for constructing massive models. Only applicable with ZeRO Stage-3.
`zero3_save_16bit_model`: Decides whether to save 16-bit model weights when using ZeRO Stage-3.
`mixed_precision`: `no` for FP32 training, `fp16` for FP16 mixed-precision training and `bf16` for BF16 mixed-precision training.
`deepspeed_moe_layer_cls_names`: Comma-separated list of transformer Mixture-of-Experts (MoE) layer class names (case-sensitive) to wrap, e.g., `MixtralSparseMoeBlock`, `Qwen2MoeSparseMoeBlock`, `JetMoEAttention,JetMoEBlock` ...
`deepspeed_hostfile`: DeepSpeed hostfile for configuring multi-node compute resources.
`deepspeed_exclusion_filter`: DeepSpeed exclusion filter string when using a multi-node setup.
`deepspeed_inclusion_filter`: DeepSpeed inclusion filter string when using a multi-node setup.
`deepspeed_multinode_launcher`: DeepSpeed multi-node launcher to use. If unspecified, will default to `pdsh`.
`deepspeed_config_file`: path to the DeepSpeed config file in `json` format. See the next section for more details on this.
```
To be able to tweak more options, you will need to use a DeepSpeed config file.
### DeepSpeed Config File
On your machine(s) just run:
```bash
accelerate config
```
and answer the questions asked. It will ask whether you want to use a config file for DeepSpeed, to which you should answer yes
and provide the path to the DeepSpeed config file.
This will generate a config file that will be used automatically to properly set the
default options when doing
```bash
accelerate launch my_script.py --args_to_my_script
```
For instance, here is how you would run the NLP example `examples/by_feature/deepspeed_with_config_support.py` (from the root of the repo) with DeepSpeed Config File:
**ZeRO Stage-2 DeepSpeed Config File Example**
```yaml
compute_environment: LOCAL_MACHINE
deepspeed_config:
deepspeed_config_file: /home/ubuntu/accelerate/examples/configs/deepspeed_config_templates/zero_stage2_config.json
zero3_init_flag: true
distributed_type: DEEPSPEED
fsdp_config: {}
machine_rank: 0
main_process_ip: null
main_process_port: null
main_training_function: main
mixed_precision: fp16
num_machines: 1
num_processes: 2
use_cpu: false
```
with the contents of `zero_stage2_config.json` being:
```json
{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"weight_decay": "auto",
"torch_adam": true,
"adam_w_mode": true
}
},
"scheduler": {
"type": "WarmupDecayLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto",
"total_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 2,
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": "auto",
"contiguous_gradients": true
},
"gradient_accumulation_steps": 1,
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
```bash
accelerate launch examples/by_feature/deepspeed_with_config_support.py \
--config_name "gpt2-large" \
--tokenizer_name "gpt2-large" \
--dataset_name "wikitext" \
--dataset_config_name "wikitext-2-raw-v1" \
--block_size 128 \
--output_dir "./clm/clm_deepspeed_stage2_accelerate" \
--learning_rate 5e-4 \
--per_device_train_batch_size 24 \
--per_device_eval_batch_size 24 \
--num_train_epochs 3 \
--with_tracking \
--report_to "wandb"
```
**ZeRO Stage-3 with CPU offload DeepSpeed Config File Example**
```yaml
compute_environment: LOCAL_MACHINE
deepspeed_config:
deepspeed_config_file: /home/ubuntu/accelerate/examples/configs/deepspeed_config_templates/zero_stage3_offload_config.json
zero3_init_flag: true
distributed_type: DEEPSPEED
fsdp_config: {}
machine_rank: 0
main_process_ip: null
main_process_port: null
main_training_function: main
mixed_precision: fp16
num_machines: 1
num_processes: 2
use_cpu: false
```
with the contents of `zero_stage3_offload_config.json` being:
```json
{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupDecayLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto",
"total_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"sub_group_size": 1e9,
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": "auto"
},
"gradient_accumulation_steps": 1,
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
```bash
accelerate launch examples/by_feature/deepspeed_with_config_support.py \
--config_name "gpt2-large" \
--tokenizer_name "gpt2-large" \
--dataset_name "wikitext" \
--dataset_config_name "wikitext-2-raw-v1" \
--block_size 128 \
--output_dir "./clm/clm_deepspeed_stage3_offload_accelerate" \
--learning_rate 5e-4 \
--per_device_train_batch_size 32 \
--per_device_eval_batch_size 32 \
--num_train_epochs 3 \
--with_tracking \
--report_to "wandb"
```
**ZeRO++ Config Example**
You can use the features of ZeRO++ by using the appropriate config parameters. Note that ZeRO++ is an extension for ZeRO Stage 3. Here is how the config file can be modified, from [DeepSpeed's ZeRO++ tutorial](https://www.deepspeed.ai/tutorials/zeropp/):
```json
{
"zero_optimization": {
"stage": 3,
"reduce_bucket_size": "auto",
"zero_quantized_weights": true,
"zero_hpz_partition_size": 8,
"zero_quantized_gradients": true,
"contiguous_gradients": true,
"overlap_comm": true
}
}
```
For hierarchical partitioning, the partition size `zero_hpz_partition_size` should ideally be set to the number of GPUs per node. (For example, the above config file assumes 8 GPUs per node)
**Important code changes when using DeepSpeed Config File**
1. DeepSpeed Optimizers and Schedulers. For more information on these,
see the [DeepSpeed Optimizers](https://deepspeed.readthedocs.io/en/latest/optimizers.html) and [DeepSpeed Schedulers](https://deepspeed.readthedocs.io/en/latest/schedulers.html) documentation.
We will look at the changes needed in the code when using these.
a. DS Optim + DS Scheduler: The case when both `optimizer` and `scheduler` keys are present in the DeepSpeed config file.
In this situation, those will be used and the user has to use `accelerate.utils.DummyOptim` and `accelerate.utils.DummyScheduler` to replace the PyTorch/Custom optimizers and schedulers in their code.
Below is the snippet from `examples/by_feature/deepspeed_with_config_support.py` showing this:
```python
# Creates Dummy Optimizer if `optimizer` was specified in the config file else creates Adam Optimizer
optimizer_cls = (
torch.optim.AdamW
if accelerator.state.deepspeed_plugin is None
or "optimizer" not in accelerator.state.deepspeed_plugin.deepspeed_config
else DummyOptim
)
optimizer = optimizer_cls(optimizer_grouped_parameters, lr=args.learning_rate)
# Creates Dummy Scheduler if `scheduler` was specified in the config file else creates `args.lr_scheduler_type` Scheduler
if (
accelerator.state.deepspeed_plugin is None
or "scheduler" not in accelerator.state.deepspeed_plugin.deepspeed_config
):
lr_scheduler = get_scheduler(
name=args.lr_scheduler_type,
optimizer=optimizer,
num_warmup_steps=args.num_warmup_steps,
num_training_steps=args.max_train_steps,
)
else:
lr_scheduler = DummyScheduler(
optimizer, total_num_steps=args.max_train_steps, warmup_num_steps=args.num_warmup_steps
)
```
b. Custom Optim + Custom Scheduler: The case when both `optimizer` and `scheduler` keys are absent in the DeepSpeed config file.
In this situation, no code changes are needed from the user and this is the case when using integration via DeepSpeed Plugin.
In the above example we can see that the code remains unchanged if the `optimizer` and `scheduler` keys are absent in the DeepSpeed config file.
c. Custom Optim + DS Scheduler: The case when only `scheduler` key is present in the DeepSpeed config file.
In this situation, the user has to use `accelerate.utils.DummyScheduler` to replace the PyTorch/Custom scheduler in their code.
d. DS Optim + Custom Scheduler: The case when only `optimizer` key is present in the DeepSpeed config file.
This will result in an error because you can only use DS Scheduler when using DS Optim.
2. Notice the `auto` values in the above example DeepSpeed config files. These are automatically handled by the `prepare` method
based on the model, dataloaders, dummy optimizer and dummy scheduler provided to it.
Only the `auto` fields specified in the above examples are handled by the `prepare` method; the rest have to be explicitly specified by the user.
The `auto` values are calculated as:
- `reduce_bucket_size`: `hidden_size * hidden_size`
- `stage3_prefetch_bucket_size`: `int(0.9 * hidden_size * hidden_size)`
- `stage3_param_persistence_threshold`: `10 * hidden_size`
For the `auto` feature to work for these 3 config entries, Accelerate will use `model.config.hidden_size` or `max(model.config.hidden_sizes)` as `hidden_size`. If neither of these is available, the launch will fail and you will have to set these 3 config entries manually. Remember that the first 2 config entries are communication buffers: the larger they are, the more efficient the communication will be, but the more GPU memory they will consume, so it's a tunable performance trade-off.
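For instance, for a hypothetical model with `hidden_size = 1024`, the arithmetic above resolves to the following values (a quick sketch of the calculation, not defaults recommended by DeepSpeed):
```python
hidden_size = 1024  # e.g. taken from model.config.hidden_size

reduce_bucket_size = hidden_size * hidden_size                        # 1_048_576
stage3_prefetch_bucket_size = int(0.9 * hidden_size * hidden_size)    # 943_718
stage3_param_persistence_threshold = 10 * hidden_size                 # 10_240
print(reduce_bucket_size, stage3_prefetch_bucket_size, stage3_param_persistence_threshold)
```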
**Things to note when using DeepSpeed Config File**
Below is a sample script using `deepspeed_config_file` in different scenarios.
Code `test.py`:
```python
from accelerate import Accelerator
from accelerate.state import AcceleratorState
def main():
accelerator = Accelerator()
accelerator.print(f"{AcceleratorState()}")
if __name__ == "__main__":
main()
```
**Scenario 1**: A manually tampered accelerate config file that has `deepspeed_config_file` along with other DeepSpeed config entries.
1. Content of the `accelerate` config:
```yaml
command_file: null
commands: null
compute_environment: LOCAL_MACHINE
deepspeed_config:
gradient_accumulation_steps: 1
gradient_clipping: 1.0
offload_optimizer_device: 'cpu'
offload_param_device: 'cpu'
zero3_init_flag: true
zero3_save_16bit_model: true
zero_stage: 3
deepspeed_config_file: 'ds_config.json'
distributed_type: DEEPSPEED
downcast_bf16: 'no'
dynamo_backend: 'NO'
fsdp_config: {}
gpu_ids: null
machine_rank: 0
main_process_ip: null
main_process_port: null
main_training_function: main
megatron_lm_config: {}
num_machines: 1
num_processes: 2
rdzv_backend: static
same_network: true
tpu_name: null
tpu_zone: null
use_cpu: false
```
2. `ds_config.json`:
```json
{
"bf16": {
"enabled": true
},
"zero_optimization": {
"stage": 3,
"stage3_gather_16bit_weights_on_model_save": false,
"offload_optimizer": {
"device": "none"
},
"offload_param": {
"device": "none"
}
},
"gradient_clipping": 1.0,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"gradient_accumulation_steps": 10,
"steps_per_print": 2000000
}
```
3. Output of `accelerate launch test.py`:
```bash
ValueError: When using `deepspeed_config_file`, the following accelerate config variables will be ignored:
['gradient_accumulation_steps', 'gradient_clipping', 'zero_stage', 'offload_optimizer_device', 'offload_param_device',
'zero3_save_16bit_model', 'mixed_precision'].
Please specify them appropriately in the DeepSpeed config file.
If you are using an accelerate config file, remove other config variables mentioned in the above specified list.
The easiest method is to create a new config following the questionnaire via `accelerate config`.
It will only ask for the necessary config variables when using `deepspeed_config_file`.
```
**Scenario 2**: Use the solution of the error to create new accelerate config and check that no ambiguity error is now thrown.
1. Run `accelerate config`:
```bash
$ accelerate config
-------------------------------------------------------------------------------------------------------------------------------
In which compute environment are you running?
This machine
-------------------------------------------------------------------------------------------------------------------------------
Which type of machine are you using?
multi-GPU
How many different machines will you use (use more than 1 for multi-node training)? [1]:
Do you wish to optimize your script with torch dynamo?[yes/NO]:
Do you want to use DeepSpeed? [yes/NO]: yes
Do you want to specify a json file to a DeepSpeed config? [yes/NO]: yes
Please enter the path to the json DeepSpeed config file: ds_config.json
Do you want to enable `deepspeed.zero.Init` when using ZeRO Stage-3 for constructing massive models? [yes/NO]: yes
How many GPU(s) should be used for distributed training? [1]:4
accelerate configuration saved at ds_config_sample.yaml
```
2. Content of the `accelerate` config:
```yaml
compute_environment: LOCAL_MACHINE
deepspeed_config:
deepspeed_config_file: ds_config.json
zero3_init_flag: true
distributed_type: DEEPSPEED
downcast_bf16: 'no'
dynamo_backend: 'NO'
fsdp_config: {}
machine_rank: 0
main_training_function: main
megatron_lm_config: {}
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
use_cpu: false
```
3. Output of `accelerate launch test.py`:
```bash
Distributed environment: DEEPSPEED Backend: nccl
Num processes: 4
Process index: 0
Local process index: 0
Device: cuda:0
Mixed precision type: bf16
ds_config: {'bf16': {'enabled': True}, 'zero_optimization': {'stage': 3, 'stage3_gather_16bit_weights_on_model_save': False, 'offload_optimizer': {'device': 'none'}, 'offload_param': {'device': 'none'}}, 'gradient_clipping': 1.0, 'train_batch_size': 'auto', 'train_micro_batch_size_per_gpu': 'auto', 'gradient_accumulation_steps': 10, 'steps_per_print': inf, 'fp16': {'enabled': False}}
```
**Scenario 3**: Setting the `accelerate launch` command arguments related to DeepSpeed as `"auto"` in the DeepSpeed configuration file and checking that things work as expected.
1. New `ds_config.json` with `"auto"` for the `accelerate launch` DeepSpeed command arguments:
```json
{
"bf16": {
"enabled": "auto"
},
"zero_optimization": {
"stage": "auto",
"stage3_gather_16bit_weights_on_model_save": "auto",
"offload_optimizer": {
"device": "auto"
},
"offload_param": {
"device": "auto"
}
},
"gradient_clipping": "auto",
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"gradient_accumulation_steps": "auto",
"steps_per_print": 2000000
}
```
2. Output of `accelerate launch --mixed_precision="fp16" --zero_stage=3 --gradient_accumulation_steps=5 --gradient_clipping=1.0 --offload_param_device="cpu" --offload_optimizer_device="nvme" --zero3_save_16bit_model="true" test.py`:
```bash
Distributed environment: DEEPSPEED Backend: nccl
Num processes: 4
Process index: 0
Local process index: 0
Device: cuda:0
Mixed precision type: fp16
ds_config: {'bf16': {'enabled': False}, 'zero_optimization': {'stage': 3, 'stage3_gather_16bit_weights_on_model_save': True, 'offload_optimizer': {'device': 'nvme'}, 'offload_param': {'device': 'cpu'}}, 'gradient_clipping': 1.0, 'train_batch_size': 'auto', 'train_micro_batch_size_per_gpu': 'auto', 'gradient_accumulation_steps': 5, 'steps_per_print': inf, 'fp16': {'enabled': True, 'auto_cast': True}}
```
**Note**:
1. Remaining `"auto"` values are handled in the `accelerator.prepare()` call, as explained in point 2 of
`Important code changes when using DeepSpeed Config File`.
2. Only when `gradient_accumulation_steps` is `auto` will the value passed while creating the `Accelerator` object via `Accelerator(gradient_accumulation_steps=k)` be used, as sketched below. When using the DeepSpeed Plugin, the value from it will be used and it will overwrite the value passed while creating the Accelerator object.
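A minimal sketch of this interaction, assuming the DeepSpeed config file sets `"gradient_accumulation_steps": "auto"` (the value 4 below is illustrative):
```python
from accelerate import Accelerator

# This value is used only because the DeepSpeed config file sets
# "gradient_accumulation_steps": "auto"; otherwise the value from the
# config file (or the DeepSpeed plugin) takes precedence.
accelerator = Accelerator(gradient_accumulation_steps=4)
```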
## Saving and loading
1. Saving and loading of models is unchanged for ZeRO Stage-1 and Stage-2.
2. Under ZeRO Stage-3, `state_dict` contains just the placeholders since the model weights are partitioned across multiple GPUs.
ZeRO Stage-3 has 2 options:
a. Saving the entire 16bit model weights to directly load later on using `model.load_state_dict(torch.load("pytorch_model.bin"))`.
For this, either set `zero_optimization.stage3_gather_16bit_weights_on_model_save` to True in the DeepSpeed config file or set
`zero3_save_16bit_model` to True in the DeepSpeed Plugin.
**Note that this option requires consolidation of the weights on one GPU; it can be slow and memory demanding, so only use this feature when needed.**
Below is the snippet from `examples/by_feature/deepspeed_with_config_support.py` showing this:
```python
unwrapped_model = accelerator.unwrap_model(model)
# New Code #
# Saves the whole/unpartitioned fp16 model when in ZeRO Stage-3 to the output directory if
# `stage3_gather_16bit_weights_on_model_save` is True in DeepSpeed Config file or
# `zero3_save_16bit_model` is True in DeepSpeed Plugin.
# For Zero Stages 1 and 2, models are saved as usual in the output directory.
# The model name saved is `pytorch_model.bin`
unwrapped_model.save_pretrained(
args.output_dir,
is_main_process=accelerator.is_main_process,
save_function=accelerator.save,
state_dict=accelerator.get_state_dict(model),
)
```
b. To get 32bit weights, first save the model using `model.save_checkpoint()`.
Below is the snippet from `examples/by_feature/deepspeed_with_config_support.py` showing this:
```python
success = model.save_checkpoint(PATH, ckpt_id, checkpoint_state_dict)
status_msg = f"checkpointing: PATH={PATH}, ckpt_id={ckpt_id}"
if success:
logging.info(f"Success {status_msg}")
else:
logging.warning(f"Failure {status_msg}")
```
This will create ZeRO model and optimizer partitions along with a `zero_to_fp32.py` script in the checkpoint directory.
You can use this script to do offline consolidation.
It requires no configuration files or GPUs. Here is an example of its usage:
```bash
$ cd /path/to/checkpoint_dir
$ ./zero_to_fp32.py . pytorch_model.bin
Processing zero checkpoint at global_step1
Detected checkpoint of type zero stage 3, world_size: 2
Saving fp32 state dict to pytorch_model.bin (total_numel=60506624)
```
To get the 32-bit model for saving/inference, you can do the following:
```python
from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint
unwrapped_model = accelerator.unwrap_model(model)
fp32_model = load_state_dict_from_zero_checkpoint(unwrapped_model, checkpoint_dir)
```
If you are only interested in the `state_dict`, you can do the following:
```python
from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint
state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir)
```
Note that all these functions require roughly 2x the size of the final checkpoint in memory (general RAM).
## ZeRO Inference
DeepSpeed ZeRO Inference supports ZeRO stage 3 with ZeRO-Infinity.
It uses the same ZeRO protocol as training, but it doesn't use an optimizer or a learning rate scheduler; only stage 3 is relevant.
With accelerate integration, you just need to prepare the model and dataloader as shown below:
```python
model, eval_dataloader = accelerator.prepare(model, eval_dataloader)
```
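Putting it together, a minimal sketch of an evaluation loop under ZeRO-3 inference might look like the following. Here `model`, `eval_dataloader` and the batch format are placeholders for your own objects, and the DeepSpeed settings are assumed to come from your accelerate/DeepSpeed config:
```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()  # DeepSpeed ZeRO-3 settings come from the accelerate/DeepSpeed config
model, eval_dataloader = accelerator.prepare(model, eval_dataloader)  # no optimizer or scheduler for inference

model.eval()
with torch.no_grad():
    for batch in eval_dataloader:
        outputs = model(**batch)  # placeholder forward pass; post-process as needed
```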
## A few caveats to be aware of
1. The current integration doesn't support Pipeline Parallelism of DeepSpeed.
2. The current integration doesn't support `mpu`, limiting the tensor parallelism which is supported in Megatron-LM.
3. The current integration doesn't support multiple models.
## DeepSpeed Resources
The documentation for the internals related to DeepSpeed can be found [here](../package_reference/deepspeed).
- [Project's github](https://github.com/microsoft/deepspeed)
- [Usage docs](https://www.deepspeed.ai/getting-started/)
- [API docs](https://deepspeed.readthedocs.io/en/latest/index.html)
- [Blog posts](https://www.microsoft.com/en-us/research/search/?q=deepspeed)
Papers:
- [ZeRO: Memory Optimizations Toward Training Trillion Parameter Models](https://arxiv.org/abs/1910.02054)
- [ZeRO-Offload: Democratizing Billion-Scale Model Training](https://arxiv.org/abs/2101.06840)
- [ZeRO-Infinity: Breaking the GPU Memory Wall for Extreme Scale Deep Learning](https://arxiv.org/abs/2104.07857)
- [ZeRO++: Extremely Efficient Collective Communication for Giant Model Training](https://arxiv.org/abs/2306.10209)
Finally, please remember that 🤗 `Accelerate` only integrates DeepSpeed; therefore, if you
have any problems or questions regarding DeepSpeed usage, please file an issue on the [DeepSpeed GitHub](https://github.com/microsoft/DeepSpeed/issues).
<Tip>
For those interested in the similarities and differences between FSDP and DeepSpeed, please check out the [concept guide here](../concept_guides/fsdp_and_deepspeed.md)!
</Tip>


@ -0,0 +1,237 @@
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Distributed Inference with 🤗 Accelerate
Distributed inference can fall into three brackets:
1. Loading an entire model onto each GPU and sending chunks of a batch through each GPU's model copy at a time
2. Loading parts of a model onto each GPU and processing a single input at one time
3. Loading parts of a model onto each GPU and using what is called scheduled Pipeline Parallelism to combine the two prior techniques.
We're going to go through the first and the last bracket, showcasing how to do each, as they are the more realistic scenarios.
## Sending chunks of a batch automatically to each loaded model
This is the most memory-intensive solution, as it requires each GPU to keep a full copy of the model in memory at a given time.
Normally when doing this, users send the model to a specific device to load it from the CPU, and then move each prompt to a different device.
A basic pipeline using the `diffusers` library might look something like so:
```python
import torch
import torch.distributed as dist
from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
```
This is then followed by performing inference based on a rank-specific prompt:
```python
def run_inference(rank, world_size):
dist.init_process_group("nccl", rank=rank, world_size=world_size)
pipe.to(rank)
if torch.distributed.get_rank() == 0:
prompt = "a dog"
elif torch.distributed.get_rank() == 1:
prompt = "a cat"
result = pipe(prompt).images[0]
result.save(f"result_{rank}.png")
```
One will notice how we have to check the rank to know what prompt to send, which can be a bit tedious.
A user might then also think that with 🤗 Accelerate, using the `Accelerator` to prepare a dataloader for such a task might also be
a simple way to manage this. (To learn more, check out the relevant section in the [Quick Tour](../quicktour#distributed-evaluation))
Can it manage it? Yes. Does it add unneeded extra code, however? Also yes.
With 🤗 Accelerate, we can simplify this process by using the [`Accelerator.split_between_processes`] context manager (which also exists in `PartialState` and `AcceleratorState`).
This function will automatically split whatever data you pass to it (be it a prompt, a set of tensors, a dictionary of the prior data, etc.) across all the processes (with a potential
to be padded) for you to use right away.
Let's rewrite the above example using this context manager:
```python
import torch
from accelerate import PartialState  # Can also be Accelerator or AcceleratorState
from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
distributed_state = PartialState()
pipe.to(distributed_state.device)
# Assume two processes
with distributed_state.split_between_processes(["a dog", "a cat"]) as prompt:
result = pipe(prompt).images[0]
result.save(f"result_{distributed_state.process_index}.png")
```
And then to launch the code, we can use 🤗 Accelerate.
If you have generated a config file with `accelerate config`:
```bash
accelerate launch distributed_inference.py
```
If you have a specific config file you want to use:
```bash
accelerate launch --config_file my_config.json distributed_inference.py
```
Or if you don't want to make any config files and want to launch on two GPUs:
> Note: You will get some warnings about values being guessed based on your system. To remove these you can do `accelerate config default` or go through `accelerate config` to create a config file.
```bash
accelerate launch --num_processes 2 distributed_inference.py
```
We've now reduced the boilerplate needed to split this data down to a few lines of code.
But what if we have an odd distribution of prompts to GPUs? For example, what if we have 3 prompts, but only 2 GPUs?
Under the context manager, the first GPU would receive the first two prompts and the second GPU the third, ensuring that
all prompts are split and no overhead is needed.
*However*, what if we then wanted to do something with the results of *all the GPUs*? (Say gather them all and perform some kind of post processing)
You can pass in `apply_padding=True` to ensure that the lists of prompts are padded to the same length, with extra data being taken
from the last sample. This way all GPUs will have the same number of prompts, and you can then gather the results.
<Tip>
This is only needed when trying to perform an action such as gathering the results, where the data on each device
needs to be the same length. Basic inference does not require this.
</Tip>
For instance:
```python
import torch
from accelerate import PartialState  # Can also be Accelerator or AcceleratorState
from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
distributed_state = PartialState()
pipe.to(distributed_state.device)
# Assume two processes
with distributed_state.split_between_processes(["a dog", "a cat", "a chicken"], apply_padding=True) as prompt:
result = pipe(prompt).images
```
On the first GPU, the prompts will be `["a dog", "a cat"]`, and on the second GPU it will be `["a chicken", "a chicken"]`.
Make sure to drop the final sample, as it will be a duplicate of the previous one.
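If you then want every process to see all of the results, here is a hedged sketch of gathering them with `gather_object`, continuing the three-prompt example above (so we slice back down to 3 at the end to drop the padding duplicate):
```python
from accelerate.utils import gather_object

# `result` is the per-process list of images produced from the padded split above.
all_images = gather_object(result)
# Drop the duplicated final sample introduced by apply_padding (we started with 3 prompts).
all_images = all_images[:3]
```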
You can find more complex examples [here](https://github.com/huggingface/accelerate/tree/main/examples/inference/distributed) such as how to use it with LLMs.
## Memory-efficient pipeline parallelism (experimental)
This next part will discuss using *pipeline parallelism*. This is an **experimental** API utilizing the [PiPPy library by PyTorch](https://github.com/pytorch/PiPPy/) as a native solution.
The general idea with pipeline parallelism is: say you have 4 GPUs and a model big enough that it can be *split* across four GPUs using `device_map="auto"`. With this method you can send in 4 inputs at a time (for example here, any amount works) and each model chunk will work on an input, then receive the next input once the prior chunk has finished, making it *much* more efficient **and faster** than the method described earlier. Here's a visual taken from the PyTorch repository:
![PiPPy example](https://camo.githubusercontent.com/681d7f415d6142face9dd1b837bdb2e340e5e01a58c3a4b119dea6c0d99e2ce0/68747470733a2f2f692e696d6775722e636f6d2f657955633934372e706e67)
To illustrate how you can use this with Accelerate, we have created an [example zoo](https://github.com/huggingface/accelerate/tree/main/examples/inference) showcasing a number of different models and situations. In this tutorial, we'll show this method for GPT2 across two GPUs.
Before you proceed, please make sure you have the latest PiPPy installed by running the following:
```bash
pip install torchpippy
```
We require at least version 0.2.0. To confirm that you have the correct version, run `pip show torchpippy`.
Start by creating the model on the CPU:
```{python}
from transformers import GPT2ForSequenceClassification, GPT2Config
config = GPT2Config()
model = GPT2ForSequenceClassification(config)
model.eval()
```
Next you'll need to create some example inputs to use. These help PiPPy trace the model.
<Tip warning={true}>
However you make this example input will determine the relative batch size that will be used/passed
through the model at a given time, so make sure to remember how many items there are!
</Tip>
```{python}
import torch

input = torch.randint(
low=0,
high=config.vocab_size,
size=(2, 1024), # bs x seq_len
device="cpu",
dtype=torch.int64,
requires_grad=False,
)
```
Next we need to actually perform the tracing and get the model ready. To do so, use the [`inference.prepare_pippy`] function and it will fully wrap the model for pipeline parallelism automatically:
```{python}
from accelerate.inference import prepare_pippy
example_inputs = {"input_ids": input}
model = prepare_pippy(model, example_args=(input,))
```
<Tip>
There are a variety of parameters you can pass through to `prepare_pippy`:
* `split_points` lets you determine what layers to split the model at. By default we use wherever `device_map="auto"` declares, such as `fc` or `conv1`.
* `num_chunks` determines how the batch will be split and sent to the model itself (so `num_chunks=1` with four split points/four GPUs will have a naive MP where a single input gets passed between the four layer split points)
</Tip>
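As a hedged illustration of passing these options explicitly, here is a sketch continuing the GPT2 example above. The split point name and chunk count are assumptions that depend on the actual model, not values prescribed by the library:
```{python}
# Illustrative only: one explicit split point gives two stages for two GPUs,
# and num_chunks=2 matches the batch size of the example input above.
model = prepare_pippy(
    model,
    example_args=(input,),
    split_points=["transformer.h.6"],  # assumption: a valid submodule name for this model
    num_chunks=2,
    gather_output=True,                # broadcast the final output to every process
)
```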
From here, all that's left is to actually perform the distributed inference!
<Tip warning={true}>
When passing inputs, we highly recommend passing them in as a tuple of arguments. Using `kwargs` is supported; however, this approach is experimental.
</Tip>
```{python}
args = some_more_arguments
with torch.no_grad():
output = model(*args)
```
When finished, all the data will be on the last process only:
```{python}
from accelerate import PartialState
if PartialState().is_last_process:
print(output)
```
<Tip>
If you pass in `gather_output=True` to [`inference.prepare_pippy`], the output will be sent
across to all the GPUs afterwards without needing the `is_last_process` check. This is
`False` by default as it incurs a communication call.
</Tip>
And that's it! To explore more, please check out the inference examples in the [Accelerate repo](https://github.com/huggingface/accelerate/tree/main/examples/inference/pippy) and our [documentation](../package_reference/inference) as we work to improve this integration.


@ -0,0 +1,51 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Learning how to incorporate 🤗 Accelerate features quickly!
Please use the interactive tool below to help you get started with learning about a particular
feature of 🤗 Accelerate and how to utilize it! It will provide you with a code diff, an explanation
towards what is going on, as well as provide you with some useful links to explore more within
the documentation!
Most code examples start from the following Python code before integrating 🤗 Accelerate in some way:
```python
for batch in dataloader:
optimizer.zero_grad()
inputs, targets = batch
inputs = inputs.to(device)
targets = targets.to(device)
outputs = model(inputs)
loss = loss_function(outputs, targets)
loss.backward()
optimizer.step()
scheduler.step()
```
<div class="block dark:hidden">
<iframe
src="https://hf-accelerate-accelerate-examples.hf.space?__theme=light"
width="850"
height="1600"
></iframe>
</div>
<div class="hidden dark:block">
<iframe
src="https://hf-accelerate-accelerate-examples.hf.space?__theme=dark"
width="850"
height="1600"
></iframe>
</div>


@ -0,0 +1,200 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Fully Sharded Data Parallel
To accelerate training huge models on larger batch sizes, we can use a fully sharded data parallel model.
This type of data parallel paradigm enables fitting more data and larger models by sharding the optimizer states, gradients and parameters.
To read more about it and the benefits, check out the [Fully Sharded Data Parallel blog](https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/).
We have integrated PyTorch's latest Fully Sharded Data Parallel (FSDP) training feature.
All you need to do is enable it through the config.
## How it works out of the box
On your machine(s) just run:
```bash
accelerate config
```
and answer the questions asked. This will generate a config file that will be used automatically to properly set the
default options when doing
```bash
accelerate launch my_script.py --args_to_my_script
```
For instance, here is how you would run `examples/nlp_example.py` (from the root of the repo) with FSDP enabled:
```yaml
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: FSDP
downcast_bf16: 'no'
fsdp_config:
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_backward_prefetch_policy: BACKWARD_PRE
fsdp_forward_prefetch: false
fsdp_cpu_ram_efficient_loading: true
fsdp_offload_params: false
fsdp_sharding_strategy: FULL_SHARD
fsdp_state_dict_type: SHARDED_STATE_DICT
fsdp_sync_module_states: true
fsdp_transformer_layer_cls_to_wrap: BertLayer
fsdp_use_orig_params: true
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 2
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
```bash
accelerate launch examples/nlp_example.py
```
Currently, `Accelerate` supports the following config through the CLI:
`fsdp_sharding_strategy`: [1] FULL_SHARD (shards optimizer states, gradients and parameters), [2] SHARD_GRAD_OP (shards optimizer states and gradients), [3] NO_SHARD (DDP), [4] HYBRID_SHARD (shards optimizer states, gradients and parameters within each node while each node has full copy), [5] HYBRID_SHARD_ZERO2 (shards optimizer states and gradients within each node while each node has full copy). For more information, please refer the official [PyTorch docs](https://pytorch.org/docs/stable/fsdp.html#torch.distributed.fsdp.ShardingStrategy).
`fsdp_offload_params`: Decides whether to offload parameters and gradients to CPU.
`fsdp_auto_wrap_policy`: [1] TRANSFORMER_BASED_WRAP, [2] SIZE_BASED_WRAP, [3] NO_WRAP
`fsdp_transformer_layer_cls_to_wrap`: Only applicable for 🤗 Transformers. When using `fsdp_auto_wrap_policy=TRANSFORMER_BASED_WRAP`, a user may provide a comma-separated string of transformer layer class names (case-sensitive) to wrap, e.g., `BertLayer`, `GPTJBlock`, `T5Block`, `BertLayer,BertEmbeddings,BertSelfOutput`. This is important because submodules that share weights (e.g., embedding layers) should not end up in different FSDP wrapped units. Using this policy, wrapping happens for each block containing Multi-Head Attention followed by a couple of MLP layers. Remaining layers, including the shared embeddings, are conveniently wrapped in the same outermost FSDP unit. Therefore, use this for transformer-based models. You can use `model._no_split_modules` for 🤗 Transformers models by answering `yes` to `Do you want to use the model's _no_split_modules to wrap`; it will try to use `model._no_split_modules` when possible.
`fsdp_min_num_params`: Minimum number of parameters a module must have to be wrapped when using `fsdp_auto_wrap_policy=SIZE_BASED_WRAP`.
`fsdp_backward_prefetch_policy`: [1] BACKWARD_PRE, [2] BACKWARD_POST, [3] NO_PREFETCH
`fsdp_forward_prefetch`: If True, FSDP explicitly prefetches the next upcoming all-gather while executing in the forward pass. This should only be used for static-graph models, since the prefetching follows the first iteration's execution order, i.e., if the sub-modules' order changes dynamically during the model's execution, do not enable this feature.
`fsdp_state_dict_type`: [1] FULL_STATE_DICT, [2] LOCAL_STATE_DICT, [3] SHARDED_STATE_DICT
`fsdp_use_orig_params`: If True, allows non-uniform `requires_grad` during init, which means support for interspersed frozen and trainable parameters. This setting is useful in cases such as parameter-efficient fine-tuning as discussed in [this post](https://dev-discuss.pytorch.org/t/rethinking-pytorch-fully-sharded-data-parallel-fsdp-from-first-principles/1019). This option also allows one to have multiple optimizer param groups. This should be `True` when creating an optimizer before preparing/wrapping the model with FSDP.
`fsdp_cpu_ram_efficient_loading`: Only applicable for 🤗 Transformers models. If True, only the first process loads the pretrained model checkpoint while all other processes have empty weights. This should be set to False if you experience errors when loading the pretrained 🤗 Transformers model via the `from_pretrained` method. When this setting is True, `fsdp_sync_module_states` must also be True, otherwise all the processes except the main process would have random weights, leading to unexpected behaviour during training. For this to work, make sure the distributed process group is initialized before calling the Transformers `from_pretrained` method. When using the 🤗 Trainer API, the distributed process group is initialized when you create an instance of the `TrainingArguments` class.
`fsdp_sync_module_states`: If True, each individually wrapped FSDP unit will broadcast module parameters from rank 0.
For additional and more nuanced control, you can specify other FSDP parameters via `FullyShardedDataParallelPlugin`.
When creating the `FullyShardedDataParallelPlugin` object, pass it the parameters that weren't part of the accelerate config or that you want to override.
The FSDP parameters will be picked based on the accelerate config file or launch command arguments, and the other parameters that you pass directly through the `FullyShardedDataParallelPlugin` object will set/override those.
Below is an example:
```py
from accelerate import Accelerator, FullyShardedDataParallelPlugin
from torch.distributed.fsdp.fully_sharded_data_parallel import FullOptimStateDictConfig, FullStateDictConfig
fsdp_plugin = FullyShardedDataParallelPlugin(
state_dict_config=FullStateDictConfig(offload_to_cpu=False, rank0_only=False),
optim_state_dict_config=FullOptimStateDictConfig(offload_to_cpu=False, rank0_only=False),
)
accelerator = Accelerator(fsdp_plugin=fsdp_plugin)
```
## Saving and loading
The new recommended way of checkpointing when using FSDP models is to use `SHARDED_STATE_DICT` as `StateDictType` when setting up the accelerate config.
Below is the code snippet to save using `save_state` utility of accelerate.
```py
accelerator.save_state("ckpt")
```
Inspect the checkpoint folder to see model and optimizer as shards per process:
```
ls ckpt
# optimizer_0 pytorch_model_0 random_states_0.pkl random_states_1.pkl scheduler.bin
cd ckpt
ls optimizer_0
# __0_0.distcp __1_0.distcp
ls pytorch_model_0
# __0_0.distcp __1_0.distcp
```
To load them back for resuming the training, use the `load_state` utility of Accelerate:
```py
accelerator.load_state("ckpt")
```
When using transformers `save_pretrained`, pass `state_dict=accelerator.get_state_dict(model)` to save the model state dict.
Below is an example:
```diff
unwrapped_model.save_pretrained(
args.output_dir,
is_main_process=accelerator.is_main_process,
save_function=accelerator.save,
+ state_dict=accelerator.get_state_dict(model),
)
```
### State Dict
`accelerator.get_state_dict` will call the underlying `model.state_dict` implementation using the `FullStateDictConfig(offload_to_cpu=True, rank0_only=True)` context manager to get the state dict only for rank 0, and it will be offloaded to CPU.
You can then pass `state_dict` into the `save_pretrained` method. There are several modes for `StateDictType` and `FullStateDictConfig` that you can use to control the behavior of `state_dict`. For more information, see the [PyTorch documentation](https://pytorch.org/docs/stable/fsdp.html).
If you choose to use `StateDictType.SHARDED_STATE_DICT`, the weights of the model during `Accelerator.save_state` will be split into `n` files for each sub-split on the model. To merge them back into
a single dictionary to load back into the model later after training you can use the `merge_weights` utility:
```py
from accelerate.utils import merge_fsdp_weights
# Our weights are saved usually in a `pytorch_model_fsdp_{model_number}` folder
merge_fsdp_weights("pytorch_model_fsdp_0", "output_path", safe_serialization=True)
```
The final output will then either be saved to `model.safetensors` or `pytorch_model.bin` (if `safe_serialization=False` is passed).
This can also be called using the CLI:
```bash
accelerate merge-weights pytorch_model_fsdp_0/ output_path
```
## Mapping between FSDP sharding strategies and DeepSpeed ZeRO Stages
* `FULL_SHARD` maps to the DeepSpeed `ZeRO Stage-3`. Shards optimizer states, gradients and parameters.
* `SHARD_GRAD_OP` maps to the DeepSpeed `ZeRO Stage-2`. Shards optimizer states and gradients.
* `NO_SHARD` maps to `ZeRO Stage-0`. No sharding, wherein each GPU has a full copy of the model, optimizer states and gradients.
* `HYBRID_SHARD` maps to `ZeRO++ Stage-3` wherein `zero_hpz_partition_size=<num_gpus_per_node>`. Here, this will shard optimizer states, gradients and parameters within each node while each node has a full copy.
## A few caveats to be aware of
- In case of multiple models, pass the optimizers to the prepare call in the same order as the corresponding models, otherwise `accelerator.save_state()` and `accelerator.load_state()` will result in wrong/unexpected behaviour.
- This feature is incompatible with `--predict_with_generate` in the `run_translation.py` script of 🤗 `Transformers` library.
For more control, users can leverage the `FullyShardedDataParallelPlugin`. After creating an instance of this class, users can pass it to the Accelerator class instantiation.
For more information on these options, please refer to the PyTorch [FullyShardedDataParallel](https://github.com/pytorch/pytorch/blob/0df2e863fbd5993a7b9e652910792bd21a516ff3/torch/distributed/fsdp/fully_sharded_data_parallel.py#L236) code.
<Tip>
For those interested in the similarities and differences between FSDP and DeepSpeed, please check out the [concept guide here](../concept_guides/fsdp_and_deepspeed.md)!
</Tip>


@ -0,0 +1,232 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Performing gradient accumulation with 🤗 Accelerate
Gradient accumulation is a technique where you can train on bigger batch sizes than
your machine would normally be able to fit into memory. This is done by accumulating gradients over
several batches, and only stepping the optimizer after a certain number of batches have been performed.
While technically standard gradient accumulation code would work fine in a distributed setup, it is not the most efficient
method for doing so and you may experience considerable slowdowns!
In this tutorial you will see how to quickly set up gradient accumulation and perform it with the utilities provided in 🤗 Accelerate,
which can amount to adding just one new line of code!
This example will use a very simplistic PyTorch training loop that performs gradient accumulation every two batches:
```python
device = "cuda"
model.to(device)
gradient_accumulation_steps = 2
for index, batch in enumerate(training_dataloader):
inputs, targets = batch
inputs = inputs.to(device)
targets = targets.to(device)
outputs = model(inputs)
loss = loss_function(outputs, targets)
loss = loss / gradient_accumulation_steps
loss.backward()
if (index + 1) % gradient_accumulation_steps == 0:
optimizer.step()
scheduler.step()
optimizer.zero_grad()
```
## Converting it to 🤗 Accelerate
First the code shown earlier will be converted to utilize 🤗 Accelerate without the special gradient accumulation helper:
```diff
+ from accelerate import Accelerator
+ accelerator = Accelerator()
+ model, optimizer, training_dataloader, scheduler = accelerator.prepare(
+ model, optimizer, training_dataloader, scheduler
+ )
for index, batch in enumerate(training_dataloader):
inputs, targets = batch
- inputs = inputs.to(device)
- targets = targets.to(device)
outputs = model(inputs)
loss = loss_function(outputs, targets)
loss = loss / gradient_accumulation_steps
+ accelerator.backward(loss)
if (index+1) % gradient_accumulation_steps == 0:
optimizer.step()
scheduler.step()
optimizer.zero_grad()
```
<Tip warning={true}>
In its current state, this code is not going to perform gradient accumulation efficiently due to a process called gradient synchronization. Read more about that in the [Concepts tutorial](../concept_guides/gradient_synchronization)!
</Tip>
## Letting 🤗 Accelerate handle gradient accumulation
All that is left now is to let 🤗 Accelerate handle the gradient accumulation for us. To do so you should pass in a `gradient_accumulation_steps` parameter to [`Accelerator`], dictating the number
of steps to perform before each call to `step()` and how to automatically adjust the loss during the call to [`~Accelerator.backward`]:
```diff
from accelerate import Accelerator
- accelerator = Accelerator()
+ accelerator = Accelerator(gradient_accumulation_steps=2)
```
Alternatively, you can pass in a `gradient_accumulation_plugin` parameter to the [`Accelerator`] object's `__init__`, which will allow you to further customize the gradient accumulation behavior.
Read more about that in the [GradientAccumulationPlugin](../package_reference/accelerator#accelerate.utils.GradientAccumulationPlugin) docs.
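For instance, here is a minimal sketch of configuring the plugin directly; the values shown are illustrative, not recommendations:
```python
from accelerate import Accelerator
from accelerate.utils import GradientAccumulationPlugin

# num_steps plays the same role as gradient_accumulation_steps;
# adjust_scheduler controls whether scheduler steps are scaled to match.
plugin = GradientAccumulationPlugin(num_steps=2, adjust_scheduler=True)
accelerator = Accelerator(gradient_accumulation_plugin=plugin)
```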
From here you can use the [`~Accelerator.accumulate`] context manager from inside your training loop to automatically perform the gradient accumulation for you!
You just wrap it around the entire training part of our code:
```diff
- for index, batch in enumerate(training_dataloader):
+ for batch in training_dataloader:
+ with accelerator.accumulate(model):
inputs, targets = batch
outputs = model(inputs)
```
You can remove all the special checks for the step number and the loss adjustment:
```diff
- loss = loss / gradient_accumulation_steps
accelerator.backward(loss)
- if (index+1) % gradient_accumulation_steps == 0:
optimizer.step()
scheduler.step()
optimizer.zero_grad()
```
As you can see, the [`Accelerator`] is able to keep track of the batch number you are on, and it will automatically know whether to step through the prepared optimizer and how to adjust the loss.
<Tip>
Typically with gradient accumulation, you would need to adjust the number of steps to reflect the change in total batches you are
training on. 🤗 Accelerate automagically does this for you by default. Behind the scenes we instantiate a [`GradientAccumulationPlugin`] configured to do this.
</Tip>
<Tip warning={true}>
The [`state.GradientState`] is sync'd with the active dataloader being iterated upon. As such it assumes naively that when we have reached the end of the dataloader everything will sync and a step will be performed. To disable this, set `sync_with_dataloader` to be `False` in the [`GradientAccumulationPlugin`]:
```{python}
from accelerate import Accelerator
from accelerate.utils import GradientAccumulationPlugin
plugin = GradientAccumulationPlugin(sync_with_dataloader=False)
accelerator = Accelerator(..., gradient_accumulation_plugin=plugin)
```
</Tip>
## The finished code
Below is the finished implementation for performing gradient accumulation with 🤗 Accelerate
```python
from accelerate import Accelerator
accelerator = Accelerator(gradient_accumulation_steps=2)
model, optimizer, training_dataloader, scheduler = accelerator.prepare(
model, optimizer, training_dataloader, scheduler
)
for batch in training_dataloader:
with accelerator.accumulate(model):
inputs, targets = batch
outputs = model(inputs)
loss = loss_function(outputs, targets)
accelerator.backward(loss)
optimizer.step()
scheduler.step()
optimizer.zero_grad()
```
<Tip warning={true}>
It's important that **only one forward/backward** should be done inside the context manager `with accelerator.accumulate(model)`.
</Tip>
To learn more about what magic this wraps around, read the [Gradient Synchronization concept guide](../concept_guides/gradient_synchronization)
## Self-contained example
Here is a self-contained example that you can run to see gradient accumulation in action with 🤗 Accelerate:
```python
import torch
import copy
from accelerate import Accelerator
from accelerate.utils import set_seed
from torch.utils.data import TensorDataset, DataLoader
# seed
set_seed(0)
# define toy inputs and labels
x = torch.tensor([1., 2., 3., 4., 5., 6., 7., 8.])
y = torch.tensor([2., 4., 6., 8., 10., 12., 14., 16.])
gradient_accumulation_steps = 4
batch_size = len(x) // gradient_accumulation_steps
# define dataset and dataloader
dataset = TensorDataset(x, y)
dataloader = DataLoader(dataset, batch_size=batch_size)
# define model, optimizer and loss function
model = torch.zeros((1, 1), requires_grad=True)
model_clone = copy.deepcopy(model)
criterion = torch.nn.MSELoss()
model_optimizer = torch.optim.SGD([model], lr=0.02)
accelerator = Accelerator(gradient_accumulation_steps=gradient_accumulation_steps)
model, model_optimizer, dataloader = accelerator.prepare(model, model_optimizer, dataloader)
model_clone_optimizer = torch.optim.SGD([model_clone], lr=0.02)
print(f"initial model weight is {model.mean().item():.5f}")
print(f"initial model weight is {model_clone.mean().item():.5f}")
for i, (inputs, labels) in enumerate(dataloader):
with accelerator.accumulate(model):
inputs = inputs.view(-1, 1)
print(i, inputs.flatten())
labels = labels.view(-1, 1)
outputs = inputs @ model
loss = criterion(outputs, labels)
accelerator.backward(loss)
model_optimizer.step()
model_optimizer.zero_grad()
loss = criterion(x.view(-1, 1) @ model_clone, y.view(-1, 1))
model_clone_optimizer.zero_grad()
loss.backward()
model_clone_optimizer.step()
print(f"w/ accumulation, the final model weight is {model.mean().item():.5f}")
print(f"w/o accumulation, the final model weight is {model_clone.mean().item():.5f}")
```
```
initial model weight is 0.00000
initial model weight is 0.00000
0 tensor([1., 2.])
1 tensor([3., 4.])
2 tensor([5., 6.])
3 tensor([7., 8.])
w/ accumulation, the final model weight is 2.04000
w/o accumulation, the final model weight is 2.04000
```


@ -0,0 +1,192 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Intel® Extension for PyTorch
[IPEX](https://github.com/intel/intel-extension-for-pytorch) is optimized for CPUs with AVX-512 or above, and functionally works for CPUs with only AVX2. It is expected to bring a performance benefit on Intel CPU generations with AVX-512 or above, while CPUs with only AVX2 (e.g., AMD CPUs or older Intel CPUs) may see some benefit under IPEX, but this is not guaranteed. IPEX provides performance optimizations for CPU training with both Float32 and BFloat16. The usage of BFloat16 is the main focus of the following sections.
Low precision data type BFloat16 has been natively supported on the 3rd Generation Xeon® Scalable Processors (aka Cooper Lake) with AVX512 instruction set and will be supported on the next generation of Intel® Xeon® Scalable Processors with Intel® Advanced Matrix Extensions (Intel® AMX) instruction set with further boosted performance. The Auto Mixed Precision for CPU backend has been enabled since PyTorch-1.10. At the same time, the support of Auto Mixed Precision with BFloat16 for CPU and BFloat16 optimization of operators has been massively enabled in Intel® Extension for PyTorch, and partially upstreamed to PyTorch master branch. Users can get better performance and user experience with IPEX Auto Mixed Precision.
## IPEX installation
IPEX releases follow PyTorch releases. To install via pip:
| PyTorch Version | IPEX version |
| :---------------: | :----------: |
| 2.0 | 2.0.0 |
| 1.13 | 1.13.0 |
| 1.12 | 1.12.300 |
| 1.11 | 1.11.200 |
| 1.10 | 1.10.100 |
```
pip install intel_extension_for_pytorch==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu
```
Check more approaches for [IPEX installation](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/installation.html).
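After installation, a quick sanity check (standard imports; nothing Accelerate-specific) confirms that the installed IPEX build matches your PyTorch version:
```python
# Verify that IPEX imports correctly and report the paired versions.
import torch
import intel_extension_for_pytorch as ipex

print(f"PyTorch: {torch.__version__}, IPEX: {ipex.__version__}")
```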
## How It Works For Training Optimization on CPU
🤗 Accelerate has integrated [IPEX](https://github.com/intel/intel-extension-for-pytorch); all you need to do is enable it through the config.
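Under the hood, enabling IPEX roughly corresponds to running your model (and optimizer) through `ipex.optimize` before training, which is what you would do when using IPEX without Accelerate. A minimal sketch of that standalone usage (the toy model and the BF16 dtype are illustrative assumptions):
```python
# Standalone IPEX usage for BF16 CPU training, outside of Accelerate.
import torch
import intel_extension_for_pytorch as ipex

model = torch.nn.Linear(128, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
model.train()
# Returns a model/optimizer pair optimized for Intel CPUs.
model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)
```
With Accelerate, you don't need to call this yourself once IPEX is enabled in the config.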
**Scenario 1**: Acceleration of non-distributed CPU training
Run <u>accelerate config</u> on your machine:
```bash
$ accelerate config
-----------------------------------------------------------------------------------------------------------------------------------------------------------
In which compute environment are you running?
This machine
-----------------------------------------------------------------------------------------------------------------------------------------------------------
Which type of machine are you using?
No distributed training
Do you want to run your training on CPU only (even if a GPU / Apple Silicon device is available)? [yes/NO]:yes
Do you want to use Intel PyTorch Extension (IPEX) to speed up training on CPU? [yes/NO]:yes
Do you wish to optimize your script with torch dynamo?[yes/NO]:NO
Do you want to use DeepSpeed? [yes/NO]: NO
-----------------------------------------------------------------------------------------------------------------------------------------------------------
Do you wish to use FP16 or BF16 (mixed precision)?
bf16
```
This will generate a config file that will be used automatically to properly set the
default options when doing
```bash
accelerate launch my_script.py --args_to_my_script
```
For instance, here is how you would run the NLP example `examples/nlp_example.py` (from the root of the repo) with IPEX enabled.
The `default_config.yaml` generated by `accelerate config`:
```yaml
compute_environment: LOCAL_MACHINE
distributed_type: 'NO'
downcast_bf16: 'no'
ipex_config:
ipex: true
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 1
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: true
```
```bash
accelerate launch examples/nlp_example.py
```
**Scenario 2**: Acceleration of distributed CPU training
We use Intel oneCCL for communication, combined with the Intel® MPI library to deliver flexible, efficient, scalable cluster messaging on Intel® architecture. You can refer to the installation guide [here](https://huggingface.co/docs/transformers/perf_train_cpu_many).
Run <u>accelerate config</u> on your machine(node0):
```bash
$ accelerate config
-----------------------------------------------------------------------------------------------------------------------------------------------------------
In which compute environment are you running?
This machine
-----------------------------------------------------------------------------------------------------------------------------------------------------------
Which type of machine are you using?
multi-CPU
How many different machines will you use (use more than 1 for multi-node training)? [1]: 4
-----------------------------------------------------------------------------------------------------------------------------------------------------------
What is the rank of this machine?
0
What is the IP address of the machine that will host the main process? 36.112.23.24
What is the port you will use to communicate with the main process? 29500
Are all the machines on the same local network? Answer `no` if nodes are on the cloud and/or on different network hosts [YES/no]: yes
Do you want to use Intel PyTorch Extension (IPEX) to speed up training on CPU? [yes/NO]:yes
Do you want accelerate to launch mpirun? [yes/NO]: yes
Please enter the path to the hostfile to use with mpirun [~/hostfile]: ~/hostfile
Enter the number of oneCCL worker threads [1]: 1
Do you wish to optimize your script with torch dynamo?[yes/NO]:NO
How many processes should be used for distributed training? [1]:16
-----------------------------------------------------------------------------------------------------------------------------------------------------------
Do you wish to use FP16 or BF16 (mixed precision)?
bf16
```
For instance, here is how you would run the NLP example `examples/nlp_example.py` (from the root of the repo) with IPEX enabled for distributed CPU training.
The `default_config.yaml` generated by `accelerate config`:
```yaml
compute_environment: LOCAL_MACHINE
distributed_type: MULTI_CPU
downcast_bf16: 'no'
ipex_config:
ipex: true
machine_rank: 0
main_process_ip: 36.112.23.24
main_process_port: 29500
main_training_function: main
mixed_precision: bf16
mpirun_config:
mpirun_ccl: '1'
mpirun_hostfile: /home/user/hostfile
num_machines: 4
num_processes: 16
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: true
```
Set the following environment variables and use Intel MPI to launch the training.
On node0, you need to create a configuration file that contains the IP addresses of each node (for example, `hostfile`) and pass that configuration file path as an argument.
If you selected to have Accelerate launch `mpirun`, ensure that the location of your hostfile matches the path in the config.
```bash
$ cat hostfile
xxx.xxx.xxx.xxx #node0 ip
xxx.xxx.xxx.xxx #node1 ip
xxx.xxx.xxx.xxx #node2 ip
xxx.xxx.xxx.xxx #node3 ip
```
When Accelerate is launching `mpirun`, source the oneCCL bindings setvars.sh to get your Intel MPI environment, and then
run your script using `accelerate launch`. Note that the python script and environment need to exist on all of the
machines being used for multi-CPU training.
```bash
oneccl_bindings_for_pytorch_path=$(python -c "from oneccl_bindings_for_pytorch import cwd; print(cwd)")
source $oneccl_bindings_for_pytorch_path/env/setvars.sh
accelerate launch examples/nlp_example.py
```
Otherwise, if you selected not to have Accelerate launch `mpirun`, run the following command on node0, and **16DDP** will
be enabled on node0, node1, node2, and node3 with BF16 mixed precision. When using this method, the python script, python
environment, and accelerate config file need to be present on all of the machines used for multi-CPU training.
```bash
oneccl_bindings_for_pytorch_path=$(python -c "from oneccl_bindings_for_pytorch import cwd; print(cwd)")
source $oneccl_bindings_for_pytorch_path/env/setvars.sh
export CCL_WORKER_COUNT=1
export MASTER_ADDR=xxx.xxx.xxx.xxx #node0 ip
export CCL_ATL_TRANSPORT=ofi
mpirun -f hostfile -n 16 -ppn 4 accelerate launch examples/nlp_example.py
```
## Related Resources
- [Project's github](https://github.com/intel/intel-extension-for-pytorch)
- [API docs](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/api_doc.html)
- [Tuning guide](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/performance_tuning/tuning_guide.html)
- [Blogs & Publications](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/blogs_publications.html)

<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Using Local SGD with 🤗 Accelerate
Local SGD is a technique for distributed training where gradients are not synchronized every step. Thus, each process updates its own version of the model weights and, after a given number of steps, these weights are synchronized by averaging across all processes. This improves communication efficiency and can lead to a substantial training speed-up, especially when a computer lacks a fast interconnect such as NVLink.
Unlike gradient accumulation (where improving communication efficiency requires increasing the effective batch size), Local SGD does not require changing a batch size or a learning rate / schedule. However, if necessary, Local SGD can be combined with gradient accumulation as well.
In this tutorial you will see how to quickly set up Local SGD with 🤗 Accelerate. Compared to a standard Accelerate setup, this requires only two extra lines of code.
This example will use a very simplistic PyTorch training loop that performs gradient accumulation every two batches:
```python
device = "cuda"
model.to(device)
gradient_accumulation_steps = 2
for index, batch in enumerate(training_dataloader):
inputs, targets = batch
inputs = inputs.to(device)
targets = targets.to(device)
outputs = model(inputs)
loss = loss_function(outputs, targets)
loss = loss / gradient_accumulation_steps
loss.backward()
if (index + 1) % gradient_accumulation_steps == 0:
optimizer.step()
scheduler.step()
optimizer.zero_grad()
```
## Converting it to 🤗 Accelerate
First, the code shown earlier will be converted to use 🤗 Accelerate with neither a Local SGD nor a gradient accumulation helper:
```diff
+ from accelerate import Accelerator
+ accelerator = Accelerator()
+ model, optimizer, training_dataloader, scheduler = accelerator.prepare(
+ model, optimizer, training_dataloader, scheduler
+ )
for index, batch in enumerate(training_dataloader):
inputs, targets = batch
- inputs = inputs.to(device)
- targets = targets.to(device)
outputs = model(inputs)
loss = loss_function(outputs, targets)
loss = loss / gradient_accumulation_steps
+ accelerator.backward(loss)
if (index+1) % gradient_accumulation_steps == 0:
optimizer.step()
scheduler.step()
```
## Letting 🤗 Accelerate handle model synchronization
All that is left now is to let 🤗 Accelerate handle model parameter synchronization **and** the gradient accumulation for us. For simplicity let us assume we need to synchronize every 8 steps. This is
achieved by adding one `with LocalSGD` statement and one call to `local_sgd.step()` after every optimizer step:
```diff
+from accelerate.local_sgd import LocalSGD
+local_sgd_steps=8
+with LocalSGD(accelerator=accelerator, model=model, local_sgd_steps=8, enabled=True) as local_sgd:
for batch in training_dataloader:
with accelerator.accumulate(model):
inputs, targets = batch
outputs = model(inputs)
loss = loss_function(outputs, targets)
accelerator.backward(loss)
optimizer.step()
scheduler.step()
optimizer.zero_grad()
+ local_sgd.step()
```
Under the hood, the Local SGD code **disables** automatic gradient synchronization (but accumulation still works as expected!). Instead it averages model parameters every `local_sgd_steps` steps (as well as at the end of the training loop).
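Putting it all together, below is a minimal self-contained sketch (the toy model, data, and hyperparameters are illustrative assumptions) meant to be run with `accelerate launch` on a multi-GPU or multi-CPU setup; on a single process the `LocalSGD` wrapper should effectively be a no-op:
```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator
from accelerate.local_sgd import LocalSGD

accelerator = Accelerator(gradient_accumulation_steps=2)
model = torch.nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
dataloader = DataLoader(TensorDataset(torch.randn(64, 4), torch.randn(64, 1)), batch_size=8)
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

with LocalSGD(accelerator=accelerator, model=model, local_sgd_steps=8, enabled=True) as local_sgd:
    for inputs, targets in dataloader:
        with accelerator.accumulate(model):
            loss = torch.nn.functional.mse_loss(model(inputs), targets)
            accelerator.backward(loss)
            optimizer.step()
            optimizer.zero_grad()
            # Averages model parameters across processes every `local_sgd_steps` steps.
            local_sgd.step()
```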
## Limitations
The current implementation works only with basic multi-GPU (or multi-CPU) training without, e.g., [DeepSpeed](https://github.com/microsoft/DeepSpeed).
## References
Although we are not aware of the true origins of this simple approach, the idea of local SGD is quite old and goes
back to at least:
Zhang, J., De Sa, C., Mitliagkas, I., & Ré, C. (2016). [Parallel SGD: When does averaging help?. arXiv preprint
arXiv:1606.07365.](https://arxiv.org/abs/1606.07365)
We credit the term Local SGD to the following paper (but there might be earlier references we are not aware of).
Stich, Sebastian Urban. ["Local SGD Converges Fast and Communicates Little." ICLR 2019-International Conference on
Learning Representations. No. CONF. 2019.](https://arxiv.org/abs/1805.09767)

<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Low Precision Training Methods
🤗 Accelerate provides integrations to train with lower-precision methods on supported hardware through the `TransformersEngine` and `MS-AMP` packages. This documentation will help guide you through what hardware is supported, how to configure your [`Accelerator`] to leverage the low precision methods, and what you can expect when training.
## What training on FP8 means
To explore more of the nitty-gritty of training in FP8 with PyTorch and 🤗 Accelerate, check out the [concept guide](../concept_guides/low_precision_training.md) on why this can be difficult. Essentially, rather than training in BF16, some (or all) aspects of training a model can be performed using 8 bits instead of 16. The challenge is doing so without degrading final performance.
This is only enabled on specific NVIDIA hardware, namely:
* Anything after the 3000 series consumer graphics cards (such as the 4090)
* Hopper-based GPU architectures (such as the `H100` and `H200`)
This results in some savings in the memory used (as we've cut the needed memory in half for some parts of training), and an increase in throughput *should* also be seen for larger models that can replace certain layers with FP8-enabled ones.
## Configuring the Accelerator
Currently two different backends for FP8 are supported (`TransformersEngine` and `MS-AMP`), each with different capabilities and configurations.
To use either, the same core API is used. Just pass `mixed_precision="fp8"` to either the [`Accelerator`], during `accelerate config` when prompted about mixed precision, or as part of your `config.yaml` file in the `mixed_precision` key:
```python
from accelerate import Accelerator
accelerator = Accelerator(mixed_precision="fp8")
```
By default, if `MS-AMP` is available in your environment, 🤗 Accelerate will automatically utilize it as a backend. To specify it yourself (and customize other parts of the FP8 mixed precision setup), you can utilize the [`utils.FP8RecipeKwargs`]:
```python
from accelerate import Accelerator
from accelerate.utils import FP8RecipeKwargs
kwargs = [FP8RecipeKwargs(backend="msamp")]
# Or to specify the backend as `TransformersEngine` even if MS-AMP is installed
# kwargs = [FP8RecipeKwargs(backend="te")]
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=kwargs)
```
## Configuring MS-AMP
Of the two, `MS-AMP` is traditionally the easier one to configure as there is only a single argument: the optimization level.
Currently two levels of optimization are supported in the 🤗 Accelerate integration, `"O1"` and `"O2"` (using the letter 'o', not zero).
* `"O1"` will cast the weight gradients and `all_reduce` communications to happen in 8-bit, while the rest are done in 16 bit. This reduces the general GPU memory usage and speeds up communication bandwidths.
* `"O2"` will also cast first-order optimizer states into 8 bit, while the second order states are in FP16. (Currently just the `Adam` optimizer is supported). This tries its best to minimize final accuracy degradation and will save the highest potential memory.
To specify an optimization level, pass it to the `FP8RecipeKwargs` handler by setting the `optimization_level` argument:
```python
from accelerate import Accelerator
from accelerate.utils import FP8RecipeKwargs
kwargs = [FP8RecipeKwargs(backend="msamp", optimization_level="O2")]
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=kwargs)
```
## Configuring TransformersEngine
TransformersEngine has much more available for customizing how and what FP8 calculations are performed. A full list of supported arguments and what they mean is available in [NVIDIA's documentation](https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/api/common.html), however they are restated as part of [`FP8RecipeKwargs`]'s docstring for your convenience.
🤗 Accelerate tries to set sensible defaults, but exploring and tweaking the various parameters yourself can potentially lead to better performance.
To use it, specify `backend="te"` and modify any of the arguments you want as part of your kwarg handler:
```python
from accelerate import Accelerator
from accelerate.utils import FP8RecipeKwargs
kwargs = [FP8RecipeKwargs(backend="te", ...)]
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=kwargs)
```
## Further Reading
To learn more about training in FP8 please check out the following resources:
* [Our concept guide](../concept_guides/low_precision_training.md) detailing more about both TransformersEngine and MS-AMP
* [The `transformers-engine` documentation](https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/api/common.html)
* [The `MS-AMP` documentation](https://azure.github.io/MS-AMP/docs/)

<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Megatron-LM
[Megatron-LM](https://github.com/NVIDIA/Megatron-LM) enables training large transformer language models at scale.
It provides efficient tensor, pipeline and sequence based model parallelism for pre-training transformer based
Language Models such as [GPT](https://arxiv.org/abs/2005.14165) (Decoder Only), [BERT](https://arxiv.org/pdf/1810.04805.pdf) (Encoder Only) and [T5](https://arxiv.org/abs/1910.10683) (Encoder-Decoder).
For detailed information and how things work behind the scenes, please refer to the GitHub [repo](https://github.com/NVIDIA/Megatron-LM).
## What is integrated?
Accelerate integrates the following features of Megatron-LM to enable large-scale pre-training/fine-tuning
of BERT (Encoder), GPT (Decoder) or T5 models (Encoder and Decoder):
a. **Tensor Parallelism (TP)**: Reduces memory footprint without much additional communication on intra-node ranks.
Each tensor is split into multiple chunks, with each shard residing on a separate GPU. At each step, the same mini-batch of data is processed
independently and in parallel by each shard followed by syncing across all GPUs (`all-reduce` operation).
In a simple transformer layer, this leads to 2 `all-reduces` in the forward path and 2 in the backward path.
For more details, please refer to the research paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using
Model Parallelism](https://arxiv.org/pdf/1909.08053.pdf) and
this section of 🤗 blogpost [The Technology Behind BLOOM Training](https://huggingface.co/blog/bloom-megatron-deepspeed#tensor-parallelism).
b. **Pipeline Parallelism (PP)**: Reduces memory footprint and enables large scale training via inter-node parallelization.
Reduces the bubble of naive PP via PipeDream-Flush schedule/1F1B schedule and Interleaved 1F1B schedule.
Layers are distributed uniformly across PP stages. For example, if a model has `24` layers and we have `4` GPUs for
pipeline parallelism, each GPU will have `6` layers (24/4). For more details on schedules to reduce the idle time of PP,
please refer to the research paper [Efficient Large-Scale Language Model Training on GPU Clusters
Using Megatron-LM](https://arxiv.org/pdf/2104.04473.pdf) and
this section of 🤗 blogpost [The Technology Behind BLOOM Training](https://huggingface.co/blog/bloom-megatron-deepspeed#pipeline-parallelism).
c. **Sequence Parallelism (SP)**: Reduces memory footprint without any additional communication. Only applicable when using TP.
It reduces the activation memory required, as it prevents the same copies from residing on all tensor parallel ranks
post `all-reduce`, by replacing the `all-reduce` with `reduce-scatter`, while the `no-op` operation is replaced by `all-gather`.
As `all-reduce = reduce-scatter + all-gather`, this saves a ton of activation memory at no added communication cost.
To put it simply, it shards the outputs of each transformer layer along sequence dimension, e.g.,
if the sequence length is `1024` and the TP size is `4`, each GPU will have `256` tokens (1024/4) for each sample.
This increases the batch size that can be supported for training. For more details, please refer to the research paper
[Reducing Activation Recomputation in Large Transformer Models](https://arxiv.org/pdf/2205.05198.pdf).
d. **Data Parallelism (DP)** via Distributed Optimizer: Reduces the memory footprint by sharding optimizer states and gradients across DP ranks
(versus the traditional method of replicating the optimizer state across data parallel ranks).
For example, when using Adam optimizer with mixed-precision training, each parameter accounts for 12 bytes of memory.
This gets distributed equally across the GPUs, i.e., each parameter would account for 3 bytes (12/4) if we have 4 GPUs.
For more details, please refer to the research paper [ZeRO: Memory Optimizations Toward Training Trillion
Parameter Models](https://arxiv.org/pdf/1910.02054.pdf) and following section of 🤗 blog
[The Technology Behind BLOOM Training](https://huggingface.co/blog/bloom-megatron-deepspeed#zero-data-parallelism).
e. **Selective Activation Recomputation**: Reduces the memory footprint of activations significantly via smart activation checkpointing.
It doesn't store activations occupying large memory while being fast to recompute thereby achieving great tradeoff between memory and recomputation.
For example, for GPT-3, this leads to 70% reduction in required memory for activations at the expense of
only 2.7% FLOPs overhead for recomputation of activations. For more details, please refer to the research paper
[Reducing Activation Recomputation in Large Transformer Models](https://arxiv.org/pdf/2205.05198.pdf).
f. **Fused Kernels**: Fused Softmax, Mixed Precision Fused Layer Norm and Fused gradient accumulation for the weight gradient computation of linear layers.
PyTorch JIT compiled Fused GeLU and Fused Bias+Dropout+Residual addition.
g. **Support for Indexed datasets**: Efficient binary format of datasets for large scale training. Support for the `mmap`, `cached` index file and the `lazy` loader format.
h. **Checkpoint reshaping and interoperability**: Utility for reshaping Megatron-LM checkpoints of variable
tensor and pipeline parallel sizes to the beloved 🤗 Transformers sharded checkpoints, as it has great support with a plethora of tools
such as 🤗 Accelerate Big Model Inference, Megatron-DeepSpeed Inference etc.
Support is also available for converting 🤗 Transformers sharded checkpoints to Megatron-LM checkpoint of variable tensor and pipeline parallel sizes
for large scale training.
## Pre-Requisites
You will need to install the latest PyTorch, CUDA, NCCL, and NVIDIA [APEX](https://github.com/NVIDIA/apex#quick-start) releases and the nltk library.
See [documentation](https://github.com/NVIDIA/Megatron-LM#setup) for more details.
Another way to setup the environment is to pull an NVIDIA PyTorch Container that comes with all the required installations from NGC.
Below is a step-by-step method to set up the conda environment:
1. Create a virtual environment
```
conda create --name ml
```
2. Assuming that the machine has CUDA 11.3 installed, install the corresponding PyTorch GPU version
```
conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
```
3. Install Nvidia APEX
```
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
cd ..
```
4. Install Megatron-LM
```
git clone https://github.com/NVIDIA/Megatron-LM.git
cd Megatron-LM
git checkout core_r0.5.0
pip install --no-use-pep517 -e .
```
## Accelerate Megatron-LM Plugin
Important features are directly supported via the `accelerate config` command.
An example of the corresponding questions for using Megatron-LM features is shown below:
```bash
:~$ accelerate config --config_file "megatron_gpt_config.yaml"
In which compute environment are you running? ([0] This machine, [1] AWS (Amazon SageMaker)): 0
Which type of machine are you using? ([0] No distributed training, [1] multi-CPU, [2] multi-GPU, [3] TPU): 2
How many different machines will you use (use more than 1 for multi-node training)? [1]:
Do you want to use DeepSpeed? [yes/NO]:
Do you want to use FullyShardedDataParallel? [yes/NO]:
Do you want to use Megatron-LM ? [yes/NO]: yes
What is the Tensor Parallelism degree/size? [1]:2
Do you want to enable Sequence Parallelism? [YES/no]:
What is the Pipeline Parallelism degree/size? [1]:2
What is the number of micro-batches? [1]:2
Do you want to enable selective activation recomputation? [YES/no]:
Do you want to use distributed optimizer which shards optimizer state and gradients across data parallel ranks? [YES/no]:
What is the gradient clipping value based on global L2 Norm (0 to disable)? [1.0]:
How many GPU(s) should be used for distributed training? [1]:4
Do you wish to use FP16 or BF16 (mixed precision)? [NO/fp16/bf16]: bf16
```
The resulting config is shown below:
```
~$ cat megatron_gpt_config.yaml
compute_environment: LOCAL_MACHINE
deepspeed_config: {}
distributed_type: MEGATRON_LM
downcast_bf16: 'no'
fsdp_config: {}
machine_rank: 0
main_process_ip: null
main_process_port: null
main_training_function: main
megatron_lm_config:
megatron_lm_gradient_clipping: 1.0
megatron_lm_num_micro_batches: 2
megatron_lm_pp_degree: 2
megatron_lm_recompute_activations: true
megatron_lm_sequence_parallelism: true
megatron_lm_tp_degree: 2
megatron_lm_use_distributed_optimizer: true
mixed_precision: bf16
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
use_cpu: false
```
We will take the example of GPT pre-training. The minimal changes required to the official `run_clm_no_trainer.py`
to use Megatron-LM are as follows:
1. As Megatron-LM uses its own implementation of Optimizer, the corresponding scheduler compatible with it needs to be used.
As such, only Megatron-LM's scheduler is supported. The user will need to create `accelerate.utils.MegatronLMDummyScheduler`.
An example is given below:
```python
from accelerate.utils import MegatronLMDummyScheduler
if accelerator.distributed_type == DistributedType.MEGATRON_LM:
lr_scheduler = MegatronLMDummyScheduler(
optimizer=optimizer,
total_num_steps=args.max_train_steps,
warmup_num_steps=args.num_warmup_steps,
)
else:
lr_scheduler = get_scheduler(
name=args.lr_scheduler_type,
optimizer=optimizer,
num_warmup_steps=args.num_warmup_steps * args.gradient_accumulation_steps,
num_training_steps=args.max_train_steps * args.gradient_accumulation_steps,
)
```
2. Getting the details of the total batch size now requires being cognizant of tensor and pipeline parallel sizes.
An example of getting the effective total batch size is shown below:
```python
if accelerator.distributed_type == DistributedType.MEGATRON_LM:
total_batch_size = accelerator.state.megatron_lm_plugin.global_batch_size
else:
total_batch_size = args.per_device_train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps
```
3. When using Megatron-LM, the losses are already averaged across the data parallel group
```python
if accelerator.distributed_type == DistributedType.MEGATRON_LM:
losses.append(loss)
else:
losses.append(accelerator.gather_for_metrics(loss.repeat(args.per_device_eval_batch_size)))
if accelerator.distributed_type == DistributedType.MEGATRON_LM:
losses = torch.tensor(losses)
else:
losses = torch.cat(losses)
```
4. For Megatron-LM, we need to save the model using `accelerator.save_state`
```python
if accelerator.distributed_type == DistributedType.MEGATRON_LM:
accelerator.save_state(args.output_dir)
else:
unwrapped_model = accelerator.unwrap_model(model)
unwrapped_model.save_pretrained(
args.output_dir, is_main_process=accelerator.is_main_process, save_function=accelerator.save
)
```
That's it! We are good to go 🚀. Please find the example script in the examples folder at the path `accelerate/examples/by_feature/megatron_lm_gpt_pretraining.py`.
Let's run it for the `gpt2-large` model architecture using 4 A100-80GB GPUs.
```bash
accelerate launch --config_file megatron_gpt_config.yaml \
examples/by_feature/megatron_lm_gpt_pretraining.py \
--config_name "gpt2-large" \
--tokenizer_name "gpt2-large" \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--block_size 1024 \
--learning_rate 5e-5 \
--per_device_train_batch_size 24 \
--per_device_eval_batch_size 24 \
--num_train_epochs 5 \
--with_tracking \
--report_to "wandb" \
--output_dir "awesome_model"
```
Below are some important excerpts from the output logs:
```bash
Loading extension module fused_dense_cuda...
>>> done with compiling and loading fused kernels. Compilation time: 3.569 seconds
> padded vocab (size: 50257) with 175 dummy tokens (new size: 50432)
Building gpt model in the pre-training mode.
The Megatron LM model weights are initialized at random in `accelerator.prepare`. Please use `accelerator.load_checkpoint` to load a pre-trained checkpoint matching the distributed setup.
Preparing dataloader
Preparing dataloader
Preparing model
> number of parameters on (tensor, pipeline) model parallel rank (1, 0): 210753280
> number of parameters on (tensor, pipeline) model parallel rank (1, 1): 209445120
> number of parameters on (tensor, pipeline) model parallel rank (0, 0): 210753280
> number of parameters on (tensor, pipeline) model parallel rank (0, 1): 209445120
Preparing optimizer
Preparing scheduler
> learning rate decay style: linear
10/10/2022 22:57:22 - INFO - __main__ - ***** Running training *****
10/10/2022 22:57:22 - INFO - __main__ - Num examples = 2318
10/10/2022 22:57:22 - INFO - __main__ - Num Epochs = 5
10/10/2022 22:57:22 - INFO - __main__ - Instantaneous batch size per device = 24
10/10/2022 22:57:22 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 48
10/10/2022 22:57:22 - INFO - __main__ - Gradient Accumulation steps = 1
10/10/2022 22:57:22 - INFO - __main__ - Total optimization steps = 245
20%|████████████▍ | 49/245 [01:04<04:09, 1.27s/it]
10/10/2022 22:58:29 - INFO - __main__ - epoch 0: perplexity: 1222.1594275215962 eval_loss: 7.10837459564209
40%|████████████████████████▊ | 98/245 [02:10<03:07, 1.28s/it]
10/10/2022 22:59:35 - INFO - __main__ - epoch 1: perplexity: 894.5236583794557 eval_loss: 6.796291351318359
60%|████████████████████████████████████▌ | 147/245 [03:16<02:05, 1.28s/it]
10/10/2022 23:00:40 - INFO - __main__ - epoch 2: perplexity: 702.8458788508042 eval_loss: 6.555137634277344
80%|████████████████████████████████████████████████▊ | 196/245 [04:22<01:02, 1.28s/it]
10/10/2022 23:01:46 - INFO - __main__ - epoch 3: perplexity: 600.3220028695281 eval_loss: 6.39746618270874
100%|█████████████████████████████████████████████████████████████| 245/245 [05:27<00:00, 1.28s/it]
```
There are a large number of other options/features that one can set using `accelerate.utils.MegatronLMPlugin`.
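As a small illustration, the two options already shown later in this guide (`return_logits` and `other_megatron_args`) can be combined when building the plugin in code; for everything else, check the [`MegatronLMPlugin`] docstring:
```python
from accelerate.utils import MegatronLMPlugin

# Only options confirmed elsewhere in this guide are used here; the remaining
# fields mirror the `megatron_lm_*` keys of the generated config file.
megatron_lm_plugin = MegatronLMPlugin(
    return_logits=True,
    other_megatron_args={"position_embedding_type": "rotary"},
)
```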
## Advanced features to leverage writing custom train step and Megatron-LM Indexed Datasets
To leverage more features, please go through the details below.
1. Below is an example of changes required to customize the Train Step while using Megatron-LM.
You will implement the `accelerate.utils.AbstractTrainStep` or inherit from its corresponding children
`accelerate.utils.GPTTrainStep`, `accelerate.utils.BertTrainStep` or `accelerate.utils.T5TrainStep`.
```python
from functools import partial

import torch

from accelerate.utils import MegatronLMDummyScheduler, GPTTrainStep, avg_losses_across_data_parallel_group
# Custom loss function for the Megatron model
class GPTTrainStepWithCustomLoss(GPTTrainStep):
def __init__(self, megatron_args, **kwargs):
super().__init__(megatron_args)
self.kwargs = kwargs
def get_loss_func(self):
def loss_func(inputs, loss_mask, output_tensor):
batch_size, seq_length = output_tensor.shape
losses = output_tensor.float()
loss_mask = loss_mask.view(-1).float()
loss = losses.view(-1) * loss_mask
# Resize and average loss per sample
loss_per_sample = loss.view(batch_size, seq_length).sum(axis=1)
loss_mask_per_sample = loss_mask.view(batch_size, seq_length).sum(axis=1)
loss_per_sample = loss_per_sample / loss_mask_per_sample
# Calculate and scale weighting
weights = torch.stack([(inputs == kt).float() for kt in self.kwargs["keytoken_ids"]]).sum(axis=[0, 2])
weights = 1.0 + self.kwargs["alpha"] * weights
# Calculate weighted average
weighted_loss = (loss_per_sample * weights).mean()
# Reduce loss across data parallel groups
averaged_loss = avg_losses_across_data_parallel_group([weighted_loss])
return weighted_loss, {"lm loss": averaged_loss[0]}
return loss_func
def get_forward_step_func(self):
def forward_step(data_iterator, model):
"""Forward step."""
# Get the batch.
tokens, labels, loss_mask, attention_mask, position_ids = self.get_batch(data_iterator)
output_tensor = model(tokens, position_ids, attention_mask, labels=labels)
return output_tensor, partial(self.loss_func, tokens, loss_mask)
return forward_step
def main():
# Custom loss function for the Megatron model
keytoken_ids = []
keywords = ["plt", "pd", "sk", "fit", "predict", " plt", " pd", " sk", " fit", " predict"]
for keyword in keywords:
ids = tokenizer([keyword]).input_ids[0]
if len(ids) == 1:
keytoken_ids.append(ids[0])
accelerator.print(f"Keytoken ids: {keytoken_ids}")
accelerator.state.megatron_lm_plugin.custom_train_step_class = GPTTrainStepWithCustomLoss
accelerator.state.megatron_lm_plugin.custom_train_step_kwargs = {
"keytoken_ids": keytoken_ids,
"alpha": 0.25,
}
```
2. For using the Megatron-LM datasets, a few more changes are required. Dataloaders for these datasets
are available only on rank 0 of each tensor parallel group. As such, there are ranks where the dataloader won't be
available, and this requires tweaks to the training loop. Being able to do all this shows how
flexible and extensible 🤗 Accelerate is. The changes required are as follows.
a. For Megatron-LM indexed datasets, we need to use `MegatronLMDummyDataLoader`
and pass the required dataset args to it such as `data_path`, `seq_length` etc.
See [here](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/arguments.py#L804) for the list of available args.
```python
from accelerate.utils import MegatronLMDummyDataLoader
megatron_dataloader_config = {
"data_path": args.data_path,
"splits_string": args.splits_string,
"seq_length": args.block_size,
"micro_batch_size": args.per_device_train_batch_size,
}
megatron_dataloader = MegatronLMDummyDataLoader(**megatron_dataloader_config)
accelerator.state.megatron_lm_plugin.megatron_dataset_flag = True
```
b. `megatron_dataloader` is repeated 3 times to get training, validation and test dataloaders
as per the `args.splits_string` proportions
```python
model, optimizer, lr_scheduler, train_dataloader, eval_dataloader, _ = accelerator.prepare(
model, optimizer, lr_scheduler, megatron_dataloader, megatron_dataloader, megatron_dataloader
)
```
c. Changes to the training and evaluation loops are required, as the dataloader is only available on tensor parallel rank 0.
So, we need to iterate only if the dataloader isn't `None`, else provide an empty dict.
As such, we loop using a `while` loop and break when `completed_steps` is equal to `args.max_train_steps`.
This is similar to the Megatron-LM setup wherein the user has to provide `max_train_steps` when using Megatron-LM indexed datasets.
This displays how flexible and extensible 🤗 Accelerate is.
```python
while completed_steps < args.max_train_steps:
model.train()
batch = next(train_dataloader) if train_dataloader is not None else {}
outputs = model(**batch)
loss = outputs.loss
...
if completed_steps % eval_interval == 0:
eval_completed_steps = 0
losses = []
while eval_completed_steps < eval_iters:
model.eval()
with torch.no_grad():
batch = next(eval_dataloader) if eval_dataloader is not None else {}
outputs = model(**batch)
```
## Utility for Checkpoint reshaping and interoperability
1. The scripts for these are present in the 🤗 Transformers library under the respective models.
Currently, it is available for the GPT model: [checkpoint_reshaping_and_interoperability.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/megatron_gpt2/checkpoint_reshaping_and_interoperability.py)
2. Below is an example of conversion of checkpoint from Megatron-LM to universal 🤗 Transformers sharded checkpoint.
```bash
python checkpoint_reshaping_and_interoperability.py \
--convert_checkpoint_from_megatron_to_transformers \
--load_path "gpt/iter_0005000" \
--save_path "gpt/trfs_checkpoint" \
--max_shard_size "200MB" \
--tokenizer_name "gpt2" \
--print-checkpoint-structure
```
3. Conversion of checkpoint from transformers to megatron with `tp_size=2`, `pp_size=2` and `dp_size=2`.
```bash
python checkpoint_utils/megatron_gpt2/checkpoint_reshaping_and_interoperability.py \
--load_path "gpt/trfs_checkpoint" \
--save_path "gpt/megatron_lm_checkpoint" \
--target_tensor_model_parallel_size 2 \
--target_pipeline_model_parallel_size 2 \
--target_data_parallel_size 2 \
--target_params_dtype "bf16" \
--make_vocab_size_divisible_by 128 \
--use_distributed_optimizer \
--print-checkpoint-structure
```
## Megatron-LM GPT models support returning logits and `megatron_generate` function for text generation
1. Returning logits requires setting `return_logits=True` in `MegatronLMPlugin` as shown below.
These will be available in the last stage of the pipeline.
```python
megatron_lm_plugin = MegatronLMPlugin(return_logits=True)
```
2. `megatron_generate` method for Megatron-LM GPT model: This will use Tensor and Pipeline Parallelism to complete
generations for a batch of inputs when using greedy with/without top_k/top_p sampling and for individual prompt inputs when using beam search decoding.
Only a subset of features of transformers generate is supported. This will help in using large models via tensor and pipeline parallelism
for generation (already does key-value caching and uses fused kernels by default).
This requires data parallel size to be 1, sequence parallelism and activation checkpointing to be disabled.
It also requires specifying the path to the tokenizer's vocab file and merges file.
Below example shows how to configure and use `megatron_generate` method for Megatron-LM GPT model.
```python
import os

# specifying the tokenizer's vocab and merges files
vocab_file = os.path.join(args.resume_from_checkpoint, "vocab.json")
merge_file = os.path.join(args.resume_from_checkpoint, "merges.txt")
other_megatron_args = {"vocab_file": vocab_file, "merge_file": merge_file}
megatron_lm_plugin = MegatronLMPlugin(other_megatron_args=other_megatron_args)
# inference using `megatron_generate` functionality
tokenizer.pad_token = tokenizer.eos_token
max_new_tokens = 64
batch_texts = [
"Are you human?",
"The purpose of life is",
"The arsenal was constructed at the request of",
"How are you doing these days?",
]
batch_encodings = tokenizer(batch_texts, return_tensors="pt", padding=True)
# top-p sampling
generated_tokens = model.megatron_generate(
batch_encodings["input_ids"],
batch_encodings["attention_mask"],
max_new_tokens=max_new_tokens,
top_p=0.8,
top_p_decay=0.5,
temperature=0.9,
)
decoded_preds = tokenizer.batch_decode(generated_tokens.cpu().numpy())
accelerator.print(decoded_preds)
# top-k sampling
generated_tokens = model.megatron_generate(
batch_encodings["input_ids"],
batch_encodings["attention_mask"],
max_new_tokens=max_new_tokens,
top_k=50,
temperature=0.9,
)
decoded_preds = tokenizer.batch_decode(generated_tokens.cpu().numpy())
accelerator.print(decoded_preds)
# adding `bos` token at the start
generated_tokens = model.megatron_generate(
batch_encodings["input_ids"], batch_encodings["attention_mask"], max_new_tokens=max_new_tokens, add_BOS=True
)
decoded_preds = tokenizer.batch_decode(generated_tokens.cpu().numpy())
accelerator.print(decoded_preds)
# beam search => only takes single prompt
batch_texts = ["The purpose of life is"]
batch_encodings = tokenizer(batch_texts, return_tensors="pt", padding=True)
generated_tokens = model.megatron_generate(
batch_encodings["input_ids"],
batch_encodings["attention_mask"],
max_new_tokens=max_new_tokens,
num_beams=20,
length_penalty=1.5,
)
decoded_preds = tokenizer.batch_decode(generated_tokens.cpu().numpy())
accelerator.print(decoded_preds)
```
3. An end-to-end example of using `megatron_generate` method for Megatron-LM GPT model is available at
[megatron_gpt2_generation.py](https://github.com/pacman100/accelerate-megatron-test/blob/main/src/inference/megatron_gpt2_generation.py) with
config file [megatron_lm_gpt_generate_config.yaml](https://github.com/pacman100/accelerate-megatron-test/blob/main/src/Configs/megatron_lm_gpt_generate_config.yaml).
The bash script with accelerate launch command is available at [megatron_lm_gpt_generate.sh](https://github.com/pacman100/accelerate-megatron-test/blob/main/megatron_lm_gpt_generate.sh).
The output logs of the script are available at [megatron_lm_gpt_generate.log](https://github.com/pacman100/accelerate-megatron-test/blob/main/output_logs/megatron_lm_gpt_generate.log).
## Support for ROPE and ALiBi Positional embeddings and Multi-Query Attention
1. For ROPE/ALiBi attention, pass `position_embedding_type` with `("absolute" | "rotary" | "alibi")` to `MegatronLMPlugin` as shown below.
```python
other_megatron_args = {"position_embedding_type": "alibi"}
megatron_lm_plugin = MegatronLMPlugin(other_megatron_args=other_megatron_args)
```
2. For Multi-Query Attention, pass `attention_head_type` with `("multihead" | "multiquery")` to `MegatronLMPlugin` as shown below.
```python
other_megatron_args = {"attention_head_type": "multiquery"}
megatron_lm_plugin = MegatronLMPlugin(other_megatron_args=other_megatron_args)
```
## Caveats
1. Supports Transformers GPT2, Megatron-BERT and T5 models.
This covers Decoder only, Encoder only and Encoder-Decoder model classes.
2. Only the loss is returned from the model forward pass, as
there is quite a complex interplay of pipeline, tensor and data parallelism behind the scenes.
The `model(**batch_data)` call returns loss(es) averaged across the data parallel ranks.
This is fine for most cases wherein pre-training jobs are run using Megatron-LM features and
you can easily compute the `perplexity` using the loss.
For GPT model, returning logits in addition to loss(es) is supported.
These logits aren't gathered across data parallel ranks. Use `accelerate.utils.gather_across_data_parallel_groups`
to gather logits across data parallel ranks. These logits along with labels can be used for computing various
performance metrics.
3. The main process is the last rank as the losses/logits are available in the last stage of pipeline.
`accelerator.is_main_process` and `accelerator.is_local_main_process` return `True` for last rank when using
Megatron-LM integration.
4. In the `accelerator.prepare` call, a Megatron-LM model corresponding to a given Transformers model is created
with random weights. Please use `accelerator.load_state` to load the Megatron-LM checkpoint with matching TP, PP and DP partitions.
5. Currently, checkpoint reshaping and interoperability support is only available for GPT.
Soon it will be extended to BERT and T5.
6. `gradient_accumulation_steps` needs to be 1. When using Megatron-LM, micro-batches in the pipeline parallelism
setting are synonymous with gradient accumulation.
7. When using Megatron-LM, use `accelerator.save_state` and `accelerator.load_state` for saving and loading checkpoints.
8. Below is the mapping from Megatron-LM model architectures to the equivalent 🤗 transformers model architectures.
Only these 🤗 transformers model architectures are supported.
a. Megatron-LM [BertModel](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/bert_model.py) :
🤗 transformers models with `megatron-bert` in config's model type, e.g.,
[MegatronBERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)
b. Megatron-LM [GPTModel](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py) :
🤗 transformers models with `gpt2` in config's model type, e.g.,
[OpenAI GPT2](https://huggingface.co/docs/transformers/model_doc/gpt2)
c. Megatron-LM [T5Model](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/t5_model.py) :
🤗 transformers models with `t5` in config's model type, e.g.,
[T5](https://huggingface.co/docs/transformers/model_doc/t5) and
[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)

<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Understanding how big of a model can fit on your machine
One very difficult aspect when exploring potential models to use on your machine is knowing just how big of a model will *fit* into memory with your current graphics card (such as loading the model onto CUDA).
To help alleviate this, 🤗 Accelerate has a CLI interface through `accelerate estimate-memory`. This tutorial will
help walk you through using it, what to expect, and at the end link to the interactive demo hosted on the 🤗 Hub which will
even let you post those results directly on the model repo!
Currently we support searching for models that can be used in `timm` and `transformers`.
<Tip>
This API will load the model into memory on the `meta` device, so we are not actually downloading
and loading the full weights of the model into memory, nor do we need to. As a result it's
perfectly fine to measure 8 billion parameter models (or more), without having to worry about
whether your CPU can handle it!
</Tip>
## Gradio Demos
Below are a few gradio demos related to what was described above. The first is the official Hugging Face memory estimation space, utilizing Accelerate directly:
<div class="block dark:hidden">
<iframe
src="https://hf-accelerate-model-memory-usage.hf.space?__theme=light"
width="850"
height="1600"
></iframe>
</div>
<div class="hidden dark:block">
<iframe
src="https://hf-accelerate-model-memory-usage.hf.space?__theme=dark"
width="850"
height="1600"
></iframe>
</div>
A community member has taken the idea and expanded it further, allowing you to filter models directly and see if you can run a particular LLM given GPU constraints and LoRA configurations. To play with it, see [here](https://huggingface.co/spaces/Vokturz/can-it-run-llm) for more details.
## The Command
When using `accelerate estimate-memory`, you need to pass in the name of the model you want to use, potentially the framework
that model utilizes (if it can't be found automatically), and the data types you want the model to be loaded in with.
For example, here is how we can calculate the memory footprint for `bert-base-cased`:
```bash
accelerate estimate-memory bert-base-cased
```
This will download the `config.json` for `bert-base-cased`, load the model on the `meta` device, and report back how much space
it will use:
Memory Usage for loading `bert-base-cased`:
| dtype | Largest Layer | Total Size | Training using Adam |
|---------|---------------|------------|---------------------|
| float32 | 84.95 MB | 413.18 MB | 1.61 GB |
| float16 | 42.47 MB | 206.59 MB | 826.36 MB |
| int8 | 21.24 MB | 103.29 MB | 413.18 MB |
| int4 | 10.62 MB | 51.65 MB | 206.59 MB |
By default it will return all the supported dtypes (`int4` through `float32`), but if you are interested in specific ones these can be filtered.
### Specific libraries
If the source library cannot be determined automatically (like it could in the case of `bert-base-cased`), a library name can
be passed in.
```bash
accelerate estimate-memory HuggingFaceM4/idefics-80b-instruct --library_name transformers
```
Memory Usage for loading `HuggingFaceM4/idefics-80b-instruct`:
| dtype | Largest Layer | Total Size | Training using Adam |
|---------|---------------|------------|---------------------|
| float32 | 3.02 GB | 297.12 GB | 1.16 TB |
| float16 | 1.51 GB | 148.56 GB | 594.24 GB |
| int8 | 772.52 MB | 74.28 GB | 297.12 GB |
| int4 | 386.26 MB | 37.14 GB | 148.56 GB |
```bash
accelerate estimate-memory timm/resnet50.a1_in1k --library_name timm
```
Memory Usage for loading `timm/resnet50.a1_in1k`:
| dtype | Largest Layer | Total Size | Training using Adam |
|---------|---------------|------------|---------------------|
| float32 | 9.0 MB | 97.7 MB | 390.78 MB |
| float16 | 4.5 MB | 48.85 MB | 195.39 MB |
| int8 | 2.25 MB | 24.42 MB | 97.7 MB |
| int4 | 1.12 MB | 12.21 MB | 48.85 MB |
### Specific dtypes
As mentioned earlier, while we return `int4` through `float32` by default, any dtype can be used from `float32`, `float16`, `int8`, and `int4`.
To do so, pass them in after specifying `--dtypes`:
```bash
accelerate estimate-memory bert-base-cased --dtypes float32 float16
```
Memory Usage for loading `bert-base-cased`:
| dtype | Largest Layer | Total Size | Training using Adam |
|---------|---------------|------------|---------------------|
| float32 | 84.95 MB | 413.18 MB | 1.61 GB |
| float16 | 42.47 MB | 206.59 MB | 826.36 MB |
## Caveats with this calculator
This calculator will tell you how much memory is needed to purely load the model in, *not* to perform inference.
This calculation is accurate within a few % of the actual value, so it is a very good view of just how much memory it will take. For instance loading `bert-base-cased` actually takes `413.68 MB` when loaded on CUDA in full precision, and the calculator estimates `413.18 MB`.
When performing inference you can expect to add up to an additional 20% as found by [EleutherAI](https://blog.eleuther.ai/transformer-math/). We'll be conducting research into finding a more accurate estimate of these values, and will update
this calculator once done.
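As a rough worked example of that rule of thumb, inference with `bert-base-cased` in full precision would then need approximately 413.18 MB × 1.2 ≈ 496 MB.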

<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Accelerated PyTorch Training on Mac
With PyTorch v1.12 release, developers and researchers can take advantage of Apple silicon GPUs for significantly faster model training.
This unlocks the ability to perform machine learning workflows like prototyping and fine-tuning locally, right on Mac.
Apple's Metal Performance Shaders (MPS) as a backend for PyTorch enables this and can be used via the new `"mps"` device.
This will map computational graphs and primitives on the MPS Graph framework and tuned kernels provided by MPS.
For more information please refer to the official documents [Introducing Accelerated PyTorch Training on Mac](https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/)
and [MPS BACKEND](https://pytorch.org/docs/stable/notes/mps.html).
### Benefits of Training and Inference using Apple Silicon Chips
1. Enables users to train larger networks or batch sizes locally
2. Reduces data retrieval latency and provides the GPU with direct access to the full memory store due to the unified memory architecture,
thereby improving end-to-end performance.
3. Reduces costs associated with cloud-based development or the need for additional local GPUs.
**Pre-requisites**: To install torch with mps support,
please follow this nice medium article [GPU-Acceleration Comes to PyTorch on M1 Macs](https://medium.com/towards-data-science/gpu-acceleration-comes-to-pytorch-on-m1-macs-195c399efcc1).
## How it works out of the box
It is enabled by default on macOS machines with MPS-enabled Apple Silicon GPUs.
To disable it, pass `--cpu` flag to `accelerate launch` command or answer the corresponding question when answering the `accelerate config` questionnaire.
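Before launching, you can verify that the MPS backend is available in your PyTorch build using standard PyTorch APIs:
```python
import torch

# Both should print True on an Apple Silicon Mac with an MPS-enabled PyTorch build.
print(torch.backends.mps.is_available())
print(torch.backends.mps.is_built())
```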
You can directly run the following script to test it out on MPS enabled Apple Silicon machines:
```bash
accelerate launch ./examples/cv_example.py --data_dir images
```
## A few caveats to be aware of
1. We strongly recommend installing PyTorch >= 1.13 (nightly version at the time of writing) on your macOS machine.
It has major fixes related to model correctness and performance improvements for transformer based models.
Please refer to https://github.com/pytorch/pytorch/issues/82707 for more details.
2. Distributed setups `gloo` and `nccl` do not work with the `mps` device.
This means that currently only a single GPU of the `mps` device type can be used.
Finally, please remember that 🤗 `Accelerate` only integrates the MPS backend, therefore if you
have any problems or questions with regards to MPS backend usage, please file an issue with [PyTorch GitHub](https://github.com/pytorch/pytorch/issues).

<!--
Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Profiler
Profiler is a tool that allows the collection of performance metrics during training and inference. Profiler's context manager API can be used to better understand what model operators are the most expensive, examine their input shapes and stack traces, study device kernel activity, and visualize the execution trace. It provides insights into the performance of your model, allowing you to optimize and improve it.
This guide explains how to use PyTorch Profiler to measure the time and memory consumption of the model's operators and how to integrate this with 🤗 Accelerate. We will cover various use cases and provide examples for each.
## Using profiler to analyze execution time
Profiler allows one to check which operators were called during the execution of a code range wrapped with a profiler context manager.
Let's see how we can use profiler to analyze the execution time:
<hfoptions id="cpu execution time">
<hfoption id="PyTorch">
```python
import torch
import torchvision.models as models
from torch.profiler import profile, record_function, ProfilerActivity
model = models.resnet18()
inputs = torch.randn(5, 3, 224, 224)
with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
    model(inputs)
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```
</hfoption>
<hfoption id="Accelerate">
```python
from accelerate import Accelerator, ProfileKwargs
import torch
import torchvision.models as models
model = models.resnet18()
inputs = torch.randn(5, 3, 224, 224)
profile_kwargs = ProfileKwargs(
    activities=["cpu"],
    record_shapes=True
)

accelerator = Accelerator(cpu=True, kwargs_handlers=[profile_kwargs])
model = accelerator.prepare(model)

with accelerator.profile() as prof:
    with torch.no_grad():
        model(inputs)
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```
</hfoption>
</hfoptions>
The resulting table output (omitting some columns):
```
--------------------------------- ------------ ------------ ------------ ------------
Name Self CPU CPU total CPU time avg # of Calls
--------------------------------- ------------ ------------ ------------ ------------
aten::conv2d 171.000us 52.260ms 2.613ms 20
aten::convolution 227.000us 52.089ms 2.604ms 20
aten::_convolution 270.000us 51.862ms 2.593ms 20
aten::mkldnn_convolution 51.273ms 51.592ms 2.580ms 20
aten::batch_norm 118.000us 7.059ms 352.950us 20
aten::_batch_norm_impl_index 315.000us 6.941ms 347.050us 20
aten::native_batch_norm 6.305ms 6.599ms 329.950us 20
aten::max_pool2d 40.000us 4.008ms 4.008ms 1
aten::max_pool2d_with_indices 3.968ms 3.968ms 3.968ms 1
aten::add_ 780.000us 780.000us 27.857us 28
--------------------------------- ------------ ------------ ------------ ------------
Self CPU time total: 67.016ms
```
To get a finer granularity of results and include operator input shapes, pass `group_by_input_shape=True` (note: this requires running the profiler with `record_shapes=True`):
```python
print(prof.key_averages(group_by_input_shape=True).table(sort_by="cpu_time_total", row_limit=10))
```
## Using profiler to analyze memory consumption
Profiler can also show the amount of memory (used by the model's tensors) that was allocated (or released) during the execution of the model's operators. To enable memory profiling, pass `profile_memory=True`.
<hfoptions id="memory consumption">
<hfoption id="PyTorch">
```python
model = models.resnet18()
inputs = torch.randn(5, 3, 224, 224)
with profile(activities=[ProfilerActivity.CPU],
        profile_memory=True, record_shapes=True) as prof:
    model(inputs)
print(prof.key_averages().table(sort_by="self_cpu_memory_usage", row_limit=10))
```
</hfoption>
<hfoption id="Accelerate">
```python
model = models.resnet18()
inputs = torch.randn(5, 3, 224, 224)
profile_kwargs = ProfileKwargs(
    activities=["cpu"],
    profile_memory=True,
    record_shapes=True
)

accelerator = Accelerator(cpu=True, kwargs_handlers=[profile_kwargs])
model = accelerator.prepare(model)

with accelerator.profile() as prof:
    model(inputs)
print(prof.key_averages().table(sort_by="self_cpu_memory_usage", row_limit=10))
```
</hfoption>
</hfoptions>
The resulting table output (omitting some columns):
```
--------------------------------- ------------ ------------ ------------
Name CPU Mem Self CPU Mem # of Calls
--------------------------------- ------------ ------------ ------------
aten::empty 94.85 Mb 94.85 Mb 205
aten::max_pool2d_with_indices 11.48 Mb 11.48 Mb 1
aten::addmm 19.53 Kb 19.53 Kb 1
aten::mean 10.00 Kb 10.00 Kb 1
aten::empty_strided 492 b 492 b 5
aten::cat 240 b 240 b 6
aten::abs 480 b 240 b 4
aten::masked_select 120 b 112 b 1
aten::ne 61 b 53 b 3
aten::eq 30 b 30 b 1
--------------------------------- ------------ ------------ ------------
Self CPU time total: 69.332ms
```
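The same averages can be re-ranked without re-running the model, for example by total rather than self memory usage (a small sketch reusing the `prof` object from either tab above; `cpu_memory_usage` is one of the standard sort keys accepted by `table`):
```python
# Re-print the same profile, now ranked by total CPU memory attributed to each operator
print(prof.key_averages().table(sort_by="cpu_memory_usage", row_limit=10))
```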
## Exporting chrome trace
You can examine the sequence of profiled operators and CUDA kernels in Chrome trace viewer (`chrome://tracing`):
![profile_export](https://github.com/huggingface/accelerate/assets/100389977/5acb193f-6d11-4f7b-9873-c600c19e8172)
<hfoptions id="exporting chrome trace">
<hfoption id="PyTorch">
```python
model = models.resnet18().cuda()
inputs = torch.randn(5, 3, 224, 224).cuda()
with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    model(inputs)
prof.export_chrome_trace("trace.json")
```
</hfoption>
<hfoption id="Accelerate">
```python
profile_kwargs = ProfileKwargs(
    activities=["cpu", "cuda"],
    output_trace_dir="trace"
)

accelerator = Accelerator(kwargs_handlers=[profile_kwargs])
model = accelerator.prepare(model)

with accelerator.profile() as prof:
    model(inputs)
# The trace will be saved to the specified directory
```
</hfoption>
</hfoptions>
## Using Profiler to Analyze Long-Running Jobs
Profiler offers an additional API to handle long-running jobs (such as training loops). Tracing all of the execution can be slow and result in very large trace files. To avoid this, use optional arguments:
- `schedule_option`: Scheduling options allow you to control when profiling is active. This is useful for long-running jobs to avoid collecting too much data. Available keys are `wait`, `warmup`, `active`, `repeat` and `skip_first`. The profiler will skip the first `skip_first` steps, then wait for `wait` steps, then do the warmup for the next `warmup` steps, then do the active recording for the next `active` steps and then repeat the cycle starting with `wait` steps. The optional number of cycles is specified with the `repeat` parameter; a value of zero means the cycles will continue until profiling is finished.
- `on_trace_ready`: specifies a function that takes a reference to the profiler as an input and is called by the profiler each time a new trace is ready.
To illustrate how the API works, consider the following example:
<hfoptions id="custom handler">
<hfoption id="PyTorch">
```python
from torch.profiler import schedule
my_schedule = schedule(
    skip_first=10,
    wait=5,
    warmup=1,
    active=3,
    repeat=2
)

def trace_handler(p):
    output = p.key_averages().table(sort_by="self_cuda_time_total", row_limit=10)
    print(output)
    p.export_chrome_trace("/tmp/trace_" + str(p.step_num) + ".json")

with profile(
    activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
    schedule=my_schedule,
    on_trace_ready=trace_handler
) as p:
    for idx in range(8):
        model(inputs)
        p.step()
```
</hfoption>
<hfoption id="Accelerate">
```python
def trace_handler(p):
    output = p.key_averages().table(sort_by="self_cuda_time_total", row_limit=10)
    print(output)
    p.export_chrome_trace("/tmp/trace_" + str(p.step_num) + ".json")

profile_kwargs = ProfileKwargs(
    activities=["cpu", "cuda"],
    schedule_option={"wait": 5, "warmup": 1, "active": 3, "repeat": 2, "skip_first": 10},
    on_trace_ready=trace_handler
)

accelerator = Accelerator(kwargs_handlers=[profile_kwargs])
model = accelerator.prepare(model)

with accelerator.profile() as prof:
    for idx in range(8):
        model(inputs)
        prof.step()
```
</hfoption>
</hfoptions>
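If you would rather inspect traces in TensorBoard than write your own handler, PyTorch ships a ready-made handler that can be plugged into the same option (a minimal sketch; the `./tb_traces` output directory is illustrative):
```python
from torch.profiler import tensorboard_trace_handler

from accelerate import Accelerator, ProfileKwargs

profile_kwargs = ProfileKwargs(
    activities=["cpu", "cuda"],
    schedule_option={"wait": 5, "warmup": 1, "active": 3, "repeat": 2, "skip_first": 10},
    # Writes one trace per active cycle into ./tb_traces, viewable with the TensorBoard profiler plugin
    on_trace_ready=tensorboard_trace_handler("./tb_traces"),
)
accelerator = Accelerator(kwargs_handlers=[profile_kwargs])
```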
## FLOPS
The profiler can use a formula to estimate the FLOPs (floating point operations) of specific operators (matrix multiplication and 2D convolution).
To measure floating-point operations (FLOPS), pass `with_flops=True`:
<hfoptions id="FLOPS">
<hfoption id="PyTorch">
```python
with profile(
    activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
    with_flops=True
) as prof:
    model(inputs)
print(prof.key_averages().table(sort_by="flops", row_limit=10))
```
</hfoption>
<hfoption id="Accelerate">
```python
profile_kwargs = ProfileKwargs(
    with_flops=True
)
accelerator = Accelerator(kwargs_handlers=[profile_kwargs])

with accelerator.profile() as prof:
    model(inputs)
print(prof.key_averages().table(sort_by="flops", row_limit=10))
```
</hfoption>
</hfoptions>
The resulting table output (omitting some columns):
```
------------------------------------------------------- ------------ ------------ ------------
Name Self CPU Self CUDA Total FLOPs
------------------------------------------------------- ------------ ------------ ------------
aten::conv2d 197.000us 0.000us 18135613440.000
aten::addmm 103.000us 17.000us 5120000.000
aten::mul 29.000us 2.000us 30.000
aten::convolution 409.000us 0.000us --
aten::_convolution 253.000us 0.000us --
aten::cudnn_convolution 5.465ms 2.970ms --
cudaEventRecord 138.000us 0.000us --
cudaStreamIsCapturing 43.000us 0.000us --
cudaStreamGetPriority 40.000us 0.000us --
cudaDeviceGetStreamPriorityRange 10.000us 0.000us --
------------------------------------------------------- ------------ ------------ ------------
Self CPU time total: 21.938ms
Self CUDA time total: 4.165ms
```
## Conclusion and Further Information
PyTorch Profiler is a powerful tool for analyzing the performance of your models. By integrating it with 🤗 Accelerate, you can easily profile your models and gain insights into their performance, helping you to optimize and improve them.
For more detailed information, refer to the [PyTorch Profiler documentation](https://pytorch.org/docs/stable/profiler.html).

View File

@ -0,0 +1,136 @@
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Quantization
## `bitsandbytes` Integration
🤗 Accelerate brings `bitsandbytes` quantization to your model. You can now load any PyTorch model in 8-bit or 4-bit precision with a few lines of code.
If you want to use 🤗 Transformers models with `bitsandbytes`, you should follow this [documentation](https://huggingface.co/docs/transformers/main_classes/quantization).
To learn more about how the `bitsandbytes` quantization works, check out the blog posts on [8-bit quantization](https://huggingface.co/blog/hf-bitsandbytes-integration) and [4-bit quantization](https://huggingface.co/blog/4bit-transformers-bitsandbytes).
### Pre-Requisites
You will need to install the following requirements:
- Install `bitsandbytes` library
```bash
pip install bitsandbytes
```
- Install latest `accelerate` from source
```bash
pip install git+https://github.com/huggingface/accelerate.git
```
- Install `minGPT` and `huggingface_hub` to run examples
```bash
git clone https://github.com/karpathy/minGPT.git
pip install minGPT/
pip install huggingface_hub
```
### How it works
First, we need to initialize our model. To save memory, we can initialize an empty model using the context manager [`init_empty_weights`].
Let's take the GPT2 model from the minGPT library.
```py
from accelerate import init_empty_weights
from mingpt.model import GPT
model_config = GPT.get_default_config()
model_config.model_type = 'gpt2-xl'
model_config.vocab_size = 50257
model_config.block_size = 1024
with init_empty_weights():
    empty_model = GPT(model_config)
```
Then, you need to get the path to the weights of your model. The path can be the state_dict file (e.g. "pytorch_model.bin") or a folder containing the sharded checkpoints.
```py
from huggingface_hub import snapshot_download
weights_location = snapshot_download(repo_id="marcsun13/gpt2-xl-linear-sharded")
```
Finally, you need to set your quantization configuration with [`~utils.BnbQuantizationConfig`].
Here's an example for 8-bit quantization:
```py
from accelerate.utils import BnbQuantizationConfig
bnb_quantization_config = BnbQuantizationConfig(load_in_8bit=True, llm_int8_threshold = 6)
```
Here's an example for 4-bit quantization:
```py
import torch

from accelerate.utils import BnbQuantizationConfig

bnb_quantization_config = BnbQuantizationConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4")
```
To quantize your empty model with the selected configuration, you need to use [`~utils.load_and_quantize_model`].
```py
from accelerate.utils import load_and_quantize_model
quantized_model = load_and_quantize_model(empty_model, weights_location=weights_location, bnb_quantization_config=bnb_quantization_config, device_map = "auto")
```
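As a quick sanity check, the quantized model can be called like the original module (a minimal sketch with dummy token ids; the input shape is illustrative, and device placement is handled by the big model inference hooks that `load_and_quantize_model` sets up):
```py
import torch

# Dummy batch of token ids matching the GPT2 vocabulary configured above
dummy_input = torch.randint(0, model_config.vocab_size, (1, 16))
with torch.no_grad():
    logits, _ = quantized_model(dummy_input)
print(logits.shape)
```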
### Saving and loading 8-bit model
You can save your 8-bit model with accelerate using [`~Accelerator.save_model`].
```py
from accelerate import Accelerator

accelerator = Accelerator()
new_weights_location = "path/to/save_directory"
accelerator.save_model(quantized_model, new_weights_location)

quantized_model_from_saved = load_and_quantize_model(empty_model, weights_location=new_weights_location, bnb_quantization_config=bnb_quantization_config, device_map="auto")
```
Note that 4-bit model serialization is currently not supported.
### Offload modules to cpu and disk
You can offload some modules to CPU/disk if you don't have enough GPU memory to store the entire model.
This uses big model inference under the hood. Check this [documentation](https://huggingface.co/docs/accelerate/usage_guides/big_modeling) for more details.
For 8-bit quantization, the selected modules will be converted to 8-bit precision.
For 4-bit quantization, the selected modules will be kept in the `torch_dtype` that the user passed in `BnbQuantizationConfig`. We will add support for converting these offloaded modules to 4-bit once 4-bit serialization is possible.
You just need to pass a custom `device_map` in order to offload modules to CPU/disk. The offloaded modules will be dispatched to the GPU when needed. Here's an example:
```py
device_map = {
    "transformer.wte": 0,
    "transformer.wpe": 0,
    "transformer.drop": 0,
    "transformer.h": "cpu",
    "transformer.ln_f": "disk",
    "lm_head": "disk",
}
```
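With a map like this, [`~utils.load_and_quantize_model`] also needs to know where to put the weights that are offloaded to disk (a minimal sketch; the `offload_folder` path is illustrative):
```py
quantized_model = load_and_quantize_model(
    empty_model,
    weights_location=weights_location,
    bnb_quantization_config=bnb_quantization_config,
    device_map=device_map,
    offload_folder="offload",  # illustrative folder for the modules offloaded to disk
)
```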
### Fine-tune a quantized model
It is not possible to perform pure 8-bit or 4-bit training on these models. However, you can train these models by leveraging parameter-efficient fine-tuning (PEFT) methods and train, for example, adapters on top of them. Please have a look at the [peft](https://github.com/huggingface/peft) library for more details.
Currently, you can't add adapters on top of an arbitrary quantized model. However, with the official adapter support for 🤗 Transformers models, you can fine-tune quantized models. If you want to fine-tune a 🤗 Transformers model, follow this [documentation](https://huggingface.co/docs/transformers/main_classes/quantization) instead. Check out this [demo](https://colab.research.google.com/drive/1VoYNfYDKcKRQRor98Zbf2-9VQTtGJ24k?usp=sharing) on how to fine-tune a 4-bit 🤗 Transformers model.
Note that you don't need to pass `device_map` when loading the model for training. It will automatically load your model on your GPU. Please note that `device_map="auto"` should be used for inference only.
### Example demo - running GPT2 1.5b on a Google Colab
Check out the Google Colab [demo](https://colab.research.google.com/drive/1T1pOgewAWVpR9gKpaEWw4orOrzPFb3yM?usp=sharing) for running quantized models on a GPT2 model. The GPT2-1.5B model checkpoint is in FP32, which uses 6GB of memory. After quantization, it uses 1.6GB with 8-bit modules and 1.2GB with 4-bit modules.

View File

@ -8,6 +8,9 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Amazon SageMaker
@ -23,7 +26,7 @@ make it easier than ever to train Hugging Face Transformer models in [Amazon Sag
Before you can run your 🤗 Accelerate scripts on Amazon SageMaker you need to sign up for an AWS account. If you do not
have an AWS account yet learn more [here](https://docs.aws.amazon.com/sagemaker/latest/dg/gs-set-up.html).
After you have your AWS account, you need to install the `sagemaker` sdk for 🤗 Accelerate with:
```bash
pip install "accelerate[sagemaker]" --upgrade
@ -31,7 +34,7 @@ pip install "accelerate[sagemaker]" --upgrade
🤗 Accelerate currently uses the 🤗 DLCs, with `transformers`, `datasets` and `tokenizers` pre-installed. 🤗
Accelerate is not in the DLC yet (will soon be added!) so to use it within Amazon SageMaker you need to create a
`requirements.txt` in the same directory where your training script is located and add it as a dependency:
```
accelerate
@ -43,7 +46,7 @@ You should also add any other dependencies you have to this `requirements.txt`.
### Configure 🤗 Accelerate
You can configure the launch configuration for Amazon SageMaker the same as you do for non SageMaker training jobs with
the 🤗 Accelerate CLI:
```bash
accelerate config
@ -54,7 +57,7 @@ accelerate config
<Tip>
🤗 Accelerate is not saving any of your credentials.
</Tip>
@ -62,7 +65,7 @@ accelerate config
The training script is very similar to a training script you might run outside of SageMaker, but to save your model
after training you need to specify either `/opt/ml/model` or use `os.environ["SM_MODEL_DIR"]` as your save
directory. After training, artifacts in this directory are uploaded to S3:
```diff
@ -72,14 +75,14 @@ directory. After training, artifacts in this directory are uploaded to S3.
<Tip warning={true}>
SageMaker doesn't support argparse actions. If you want to use, for example, boolean hyperparameters, you need to
specify type as bool in your script and provide an explicit True or False value for this hyperparameter. [[REF]](https://sagemaker.readthedocs.io/en/stable/frameworks/pytorch/using_pytorch.html#prepare-a-pytorch-training-script).
</Tip>
### Launch Training
You can launch your training with the 🤗 Accelerate CLI with:
```
accelerate launch path_to_script.py --args_to_the_script
@ -92,7 +95,7 @@ arguments needed by your training script as named arguments.
<Tip>
If you run one of the example scripts, don't forget to add `accelerator.save('/opt/ml/model')` to it.
</Tip>
@ -129,7 +132,26 @@ You can find your model data at: s3://your-bucket/accelerate-sagemaker-1-2021-04
### Distributed Training: Data Parallelism
Set up the Accelerate config by running `accelerate config` and answer the SageMaker questions.
To use SageMaker DDP, select it when asked
`What is the distributed mode? ([0] No distributed training, [1] data parallelism):`.
Example config below:
```yaml
base_job_name: accelerate-sagemaker-1
compute_environment: AMAZON_SAGEMAKER
distributed_type: DATA_PARALLEL
ec2_instance_type: ml.p3.16xlarge
iam_role_name: xxxxx
image_uri: null
mixed_precision: fp16
num_machines: 1
profile: xxxxx
py_version: py38
pytorch_version: 1.10.2
region: us-east-1
transformers_version: 4.17.0
use_cpu: false
```
### Distributed Training: Model Parallelism
@ -141,10 +163,43 @@ You can find your model data at: s3://your-bucket/accelerate-sagemaker-1-2021-04
want to use different/other Python packages you can do this by adding them to the `requirements.txt`. These packages
will be installed before your training script is started.
### Local Training: SageMaker Local mode
The local mode in the SageMaker SDK allows you to run your training script locally inside the HuggingFace DLC (Deep Learning container)
or using your custom container image. This is useful for debugging and testing your training script inside the final container environment.
Local mode uses Docker compose (*Note: Docker Compose V2 is not supported yet*). The SDK will handle the authentication against ECR
to pull the DLC to your local environment. You can emulate CPU (single and multi-instance) and GPU (single instance) SageMaker training jobs.
To use local mode, you need to set your `ec2_instance_type` to `local`.
```yaml
ec2_instance_type: local
```
### Advanced configuration
The configuration allows you to override parameters for the [Estimator](https://sagemaker.readthedocs.io/en/stable/api/training/estimators.html).
These settings have to be applied in the config file and are not part of `accelerate config`. You can control many additional aspects of the training job, e.g. use Spot Instances, enable network isolation, and much more.
```yaml
additional_args:
# enable network isolation to restrict internet access for containers
enable_network_isolation: True
```
You can find all available configuration [here](https://sagemaker.readthedocs.io/en/stable/api/training/estimators.html).
### Use Spot Instances
You can use Spot Instances, e.g. with the following (see [Advanced configuration](#advanced-configuration)):
```yaml
additional_args:
use_spot_instances: True
max_wait: 86400
```
*Note: Spot Instances can be terminated, in which case training has to be continued from a checkpoint. This is not handled by 🤗 Accelerate out of the box. Contact us if you would like this feature.*
### Remote scripts: Use scripts located on Github
*undecided if feature is needed. Contact us if you would like this feature.*

View File

@ -0,0 +1,233 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Tracking
There are a large number of experiment tracking APIs available, however getting them all to work within a multi-processing environment can oftentimes be complex.
🤗 Accelerate provides a general tracking API that can be used to log useful items during your script through [`Accelerator.log`].
## Integrated Trackers
Currently `Accelerate` supports seven trackers out-of-the-box:
- TensorBoard
- WandB
- CometML
- Aim
- MLFlow
- ClearML
- DVCLive
To use any of them, pass in the selected type(s) to the `log_with` parameter in [`Accelerator`]:
```python
from accelerate import Accelerator
from accelerate.utils import LoggerType
accelerator = Accelerator(log_with="all") # For all available trackers in the environment
accelerator = Accelerator(log_with="wandb")
accelerator = Accelerator(log_with=["wandb", LoggerType.TENSORBOARD])
```
At the start of your experiment [`Accelerator.init_trackers`] should be used to set up your project, and potentially add any experiment hyperparameters to be logged:
```python
hps = {"num_iterations": 5, "learning_rate": 1e-2}
accelerator.init_trackers("my_project", config=hps)
```
When you are ready to log any data, [`Accelerator.log`] should be used.
A `step` can also be passed in to correlate the data with a particular step in the training loop.
```python
accelerator.log({"train_loss": 1.12, "valid_loss": 0.8}, step=1)
```
Once you've finished training, make sure to run [`Accelerator.end_training`] so that all the trackers can run their finish functionalities if they have any.
```python
accelerator.end_training()
```
A full example is below:
```python
from accelerate import Accelerator
accelerator = Accelerator(log_with="all")
config = {
    "num_iterations": 5,
    "learning_rate": 1e-2,
    "loss_function": str(my_loss_function),
}
accelerator.init_trackers("example_project", config=config)
my_model, my_optimizer, my_training_dataloader = accelerator.prepare(my_model, my_optimizer, my_training_dataloader)
device = accelerator.device
my_model.to(device)

for iteration in range(config["num_iterations"]):
    for step, batch in enumerate(my_training_dataloader):
        my_optimizer.zero_grad()
        inputs, targets = batch
        inputs = inputs.to(device)
        targets = targets.to(device)
        outputs = my_model(inputs)
        loss = my_loss_function(outputs, targets)
        accelerator.backward(loss)
        my_optimizer.step()
        accelerator.log({"training_loss": loss}, step=step)
accelerator.end_training()
```
If a tracker requires a directory to save data to, such as `TensorBoard`, then pass the directory path to `project_dir`. The `project_dir` parameter is useful
when there are other configurations to be combined with it in the [`~utils.ProjectConfiguration`] data class. For example, you can save the TensorBoard data to `project_dir` and everything else can be logged to the `logging_dir` parameter of [`~utils.ProjectConfiguration`]:
```python
from accelerate.utils import ProjectConfiguration

accelerator = Accelerator(log_with="tensorboard", project_dir=".")
# use with ProjectConfiguration
config = ProjectConfiguration(project_dir=".", logging_dir="another/directory")
accelerator = Accelerator(log_with="tensorboard", project_config=config)
```
## Implementing Custom Trackers
To implement a new tracker to be used in `Accelerator`, you can create one by implementing the [`GeneralTracker`] class.
Every tracker must implement three functions and have three properties:
- `__init__`:
- Should store a `run_name` and initialize the tracker API of the integrated library.
- If a tracker stores its data locally (such as TensorBoard), a `logging_dir` parameter can be added.
- `store_init_configuration`:
- Should take in a `values` dictionary and store them as a one-time experiment configuration
- `log`:
- Should take in a `values` dictionary and a `step`, and should log them to the run
- `name` (`str`):
- A unique string name for the tracker, such as `"wandb"` for the wandb tracker.
- This will be used for interacting with this tracker specifically
- `requires_logging_directory` (`bool`):
- Whether a `logging_dir` is needed for this particular tracker and if it uses one.
- `tracker`:
- This should be implemented as a `@property` function
- Should return the internal tracking mechanism the library uses, such as the `run` object for `wandb`.
Each method should also utilize the [`state.PartialState`] class if, for instance, the logger should only be executed on the main process.
A brief example can be seen below with an integration with Weights and Biases, containing only the relevant information and logging just on
the main process:
```python
from accelerate.tracking import GeneralTracker, on_main_process
from typing import Optional
import wandb
class MyCustomTracker(GeneralTracker):
    name = "wandb"
    requires_logging_directory = False

    @on_main_process
    def __init__(self, run_name: str):
        self.run_name = run_name
        self.run = wandb.init(self.run_name)

    @property
    def tracker(self):
        return self.run

    @on_main_process
    def store_init_configuration(self, values: dict):
        wandb.config.update(values)

    @on_main_process
    def log(self, values: dict, step: Optional[int] = None):
        wandb.log(values, step=step)
```
When you are ready to build your `Accelerator` object, pass an **instance** of your tracker to the `log_with` parameter of [`Accelerator`] to have it automatically
be used with the API:
```python
tracker = MyCustomTracker("some_run_name")
accelerator = Accelerator(log_with=tracker)
```
These can also be mixed with existing trackers, including with `"all"`:
```python
tracker = MyCustomTracker("some_run_name")
accelerator = Accelerator(log_with=[tracker, "all"])
```
## Accessing the internal tracker
If you want some custom interactions with a tracker directly, you can quickly access one using the
[`Accelerator.get_tracker`] method. Just pass in the string corresponding to a tracker's `.name` attribute
and it will return that tracker on the main process.
This example shows doing so with wandb:
```python
wandb_tracker = accelerator.get_tracker("wandb")
```
From there you can interact with `wandb`'s `run` object like normal:
```python
wandb_tracker.log_artifact(some_artifact_to_log)
```
<Tip>
Trackers built into Accelerate will automatically execute on the correct process,
so if a tracker is only meant to be run on the main process it will do so
automatically.
</Tip>
If you want to truly remove Accelerate's wrapping entirely, you can
achieve the same outcome with:
```python
wandb_tracker = accelerator.get_tracker("wandb", unwrap=True)
if accelerator.is_main_process:
    wandb_tracker.log_artifact(some_artifact_to_log)
```
## When a wrapper cannot work
If a library has an API that does not follow a strict `.log` with an overall dictionary, such as Neptune.AI, logging can be done manually under an `if accelerator.is_main_process` statement:
```diff
from accelerate import Accelerator
+ import neptune.new as neptune
accelerator = Accelerator()
+ run = neptune.init(...)
my_model, my_optimizer, my_training_dataloader = accelerator.prepare(my_model, my_optimizer, my_training_dataloader)
device = accelerator.device
my_model.to(device)

for iteration in range(config["num_iterations"]):
    for batch in my_training_dataloader:
        my_optimizer.zero_grad()
        inputs, targets = batch
        inputs = inputs.to(device)
        targets = targets.to(device)
        outputs = my_model(inputs)
        loss = my_loss_function(outputs, targets)
        total_loss += loss
        accelerator.backward(loss)
        my_optimizer.step()
+       if accelerator.is_main_process:
+           run["logs/training/batch/loss"].log(loss)
```

View File

@ -0,0 +1,180 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Example Zoo
Below is a non-exhaustive list of tutorials and scripts showcasing 🤗 Accelerate.
## Official Accelerate Examples:
### Basic Examples
These examples showcase the base features of Accelerate and are a great starting point
- [Barebones NLP example](https://github.com/huggingface/accelerate/blob/main/examples/nlp_example.py)
- [Barebones distributed NLP example in a Jupyter Notebook](https://github.com/huggingface/notebooks/blob/main/examples/accelerate_examples/simple_nlp_example.ipynb)
- [Barebones computer vision example](https://github.com/huggingface/accelerate/blob/main/examples/cv_example.py)
- [Barebones distributed computer vision example in a Jupyter Notebook](https://github.com/huggingface/notebooks/blob/main/examples/accelerate_examples/simple_cv_example.ipynb)
- [Using Accelerate in Kaggle](https://www.kaggle.com/code/muellerzr/multi-gpu-and-accelerate)
### Feature Specific Examples
These examples showcase specific features that the Accelerate framework offers
- [Automatic memory-aware gradient accumulation](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/automatic_gradient_accumulation.py)
- [Checkpointing states](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/checkpointing.py)
- [Cross validation](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/cross_validation.py)
- [DeepSpeed](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/deepspeed_with_config_support.py)
- [Fully Sharded Data Parallelism](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/fsdp_with_peak_mem_tracking.py)
- [Gradient accumulation](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/gradient_accumulation.py)
- [Memory-aware batch size finder](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/memory.py)
- [Metric Computation](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/multi_process_metrics.py)
- [Using Trackers](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/tracking.py)
- [Using Megatron-LM](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/megatron_lm_gpt_pretraining.py)
### Full Examples
These examples showcase, all at once, every feature shown in "Feature Specific Examples"
- [Complete NLP example](https://github.com/huggingface/accelerate/blob/main/examples/complete_nlp_example.py)
- [Complete computer vision example](https://github.com/huggingface/accelerate/blob/main/examples/complete_cv_example.py)
- [Very complete and extensible vision example showcasing SLURM, hydra, and a very extensible usage of the framework](https://github.com/yuvalkirstain/PickScore)
- [Causal language model fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm_no_trainer.py)
- [Masked language model fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm_no_trainer.py)
- [Speech pretraining example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py)
- [Translation fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/translation/run_translation_no_trainer.py)
- [Text classification fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue_no_trainer.py)
- [Semantic segmentation fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/semantic-segmentation/run_semantic_segmentation_no_trainer.py)
- [Question answering fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/question-answering/run_qa_no_trainer.py)
- [Beam search question answering fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/question-answering/run_qa_beam_search_no_trainer.py)
- [Multiple choice question answering fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/multiple-choice/run_swag_no_trainer.py)
- [Named entity recognition fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/token-classification/run_ner_no_trainer.py)
- [Image classification fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-classification/run_image_classification_no_trainer.py)
- [Summarization fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/summarization/run_summarization_no_trainer.py)
- [End-to-end examples on how to use AWS SageMaker integration of Accelerate](https://github.com/huggingface/notebooks/blob/main/sagemaker/22_accelerate_sagemaker_examples/README.md)
- [Megatron-LM examples for various NLP tasks](https://github.com/pacman100/accelerate-megatron-test)
## Integration Examples
These are tutorials from libraries that integrate with 🤗 Accelerate:
> Can't find your integration here? Make a PR to include it!
### Amphion
- [Training Text-to-Speech Models with Amphion](https://github.com/open-mmlab/Amphion/blob/main/egs/tts/README.md)
- [Training Singing Voice Conversion Models with Amphion](https://github.com/open-mmlab/Amphion/blob/main/egs/svc/README.md)
- [Training Vocoders with Amphion](https://github.com/open-mmlab/Amphion/blob/main/egs/vocoder/README.md)
### Catalyst
- [Distributed training tutorial with Catalyst](https://catalyst-team.github.io/catalyst/tutorials/ddp.html)
### DALLE2-pytorch
- [Fine-tuning DALLE2](https://github.com/lucidrains/DALLE2-pytorch#usage)
### 🤗 diffusers
- [Performing textual inversion with diffusers](https://github.com/huggingface/diffusers/tree/main/examples/textual_inversion)
- [Training DreamBooth with diffusers](https://github.com/huggingface/diffusers/tree/main/examples/dreambooth)
### fastai
- [Distributed training from Jupyter Notebooks with fastai](https://docs.fast.ai/tutorial.distributed.html)
- [Basic distributed training examples with fastai](https://docs.fast.ai/examples/distributed_app_examples.html)
### GradsFlow
- [Auto Image Classification with GradsFlow](https://docs.gradsflow.com/en/latest/examples/nbs/01-ImageClassification/)
### imagen-pytorch
- [Fine-tuning Imagen](https://github.com/lucidrains/imagen-pytorch#usage)
### Kornia
- [Fine-tuning vision models with Kornia's Trainer](https://kornia.readthedocs.io/en/latest/get-started/training.html)
### PyTorch Accelerated
- [Quickstart distributed training tutorial with PyTorch Accelerated](https://pytorch-accelerated.readthedocs.io/en/latest/quickstart.html)
### PyTorch3D
- [Perform Deep Learning with 3D data](https://pytorch3d.org/tutorials/)
### Stable-Dreamfusion
- [Training with Stable-Dreamfusion to convert text to a 3D model](https://colab.research.google.com/drive/1MXT3yfOFvO0ooKEfiUUvTKwUkrrlCHpF?usp=sharing)
### Tez
- [Leaf disease detection with Tez and Accelerate](https://www.kaggle.com/code/abhishek/tez-faster-and-easier-training-for-leaf-detection/notebook)
### trlx
- [How to implement a sentiment learning task with trlx](https://github.com/CarperAI/trlx#example-how-to-add-a-task)
### Comfy-UI
- [Enabling the use of large Stable Diffusion models in low-VRAM settings using Accelerate](https://github.com/comfyanonymous/ComfyUI/blob/master/comfy/model_management.py#L291-L296)
## In Science
Below is a non-exhaustive list of papers utilizing 🤗 Accelerate.
> Can't find your paper here? Make a PR to include it!
* Yuval Kirstain, Adam Polyak, Uriel Singer, Shahbuland Matiana, Joe Penna, Omer Levy: “Pick-a-Pic: An Open Dataset of User Preferences for Text-to-Image Generation”, 2023; [arXiv:2305.01569](http://arxiv.org/abs/2305.01569).
* Lei Wang, Wanyu Xu, Yihuai Lan, Zhiqiang Hu, Yunshi Lan, Roy Ka-Wei Lee, Ee-Peng Lim: “Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models”, 2023; [arXiv:2305.04091](http://arxiv.org/abs/2305.04091).
* Arthur Câmara, Claudia Hauff: “Moving Stuff Around: A study on efficiency of moving documents into memory for Neural IR models”, 2022; [arXiv:2205.08343](http://arxiv.org/abs/2205.08343).
* Ying Sheng, Lianmin Zheng, Binhang Yuan, Zhuohan Li, Max Ryabinin, Daniel Y. Fu, Zhiqiang Xie, Beidi Chen, Clark Barrett, Joseph E. Gonzalez, Percy Liang, Christopher Ré, Ion Stoica, Ce Zhang: “High-throughput Generative Inference of Large Language Models with a Single GPU”, 2023; [arXiv:2303.06865](http://arxiv.org/abs/2303.06865).
* Peter Melchior, Yan Liang, ChangHoon Hahn, Andy Goulding: “Autoencoding Galaxy Spectra I: Architecture”, 2022; [arXiv:2211.07890](http://arxiv.org/abs/2211.07890).
* Jiaao Chen, Aston Zhang, Mu Li, Alex Smola, Diyi Yang: “A Cheaper and Better Diffusion Language Model with Soft-Masked Noise”, 2023; [arXiv:2304.04746](http://arxiv.org/abs/2304.04746).
* Ayaan Haque, Matthew Tancik, Alexei A. Efros, Aleksander Holynski, Angjoo Kanazawa: “Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions”, 2023; [arXiv:2303.12789](http://arxiv.org/abs/2303.12789).
* Luke Melas-Kyriazi, Christian Rupprecht, Iro Laina, Andrea Vedaldi: “RealFusion: 360° Reconstruction of Any Object from a Single Image”, 2023; [arXiv:2302.10663](http://arxiv.org/abs/2302.10663).
* Xiaoshi Wu, Keqiang Sun, Feng Zhu, Rui Zhao, Hongsheng Li: “Better Aligning Text-to-Image Models with Human Preference”, 2023; [arXiv:2303.14420](http://arxiv.org/abs/2303.14420).
* Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, Yueting Zhuang: “HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in HuggingFace”, 2023; [arXiv:2303.17580](http://arxiv.org/abs/2303.17580).
* Yue Yang, Wenlin Yao, Hongming Zhang, Xiaoyang Wang, Dong Yu, Jianshu Chen: “Z-LaVI: Zero-Shot Language Solver Fueled by Visual Imagination”, 2022; [arXiv:2210.12261](http://arxiv.org/abs/2210.12261).
* Sheng-Yen Chou, Pin-Yu Chen, Tsung-Yi Ho: “How to Backdoor Diffusion Models?”, 2022; [arXiv:2212.05400](http://arxiv.org/abs/2212.05400).
* Junyoung Seo, Wooseok Jang, Min-Seop Kwak, Jaehoon Ko, Hyeonsu Kim, Junho Kim, Jin-Hwa Kim, Jiyoung Lee, Seungryong Kim: “Let 2D Diffusion Model Know 3D-Consistency for Robust Text-to-3D Generation”, 2023; [arXiv:2303.07937](http://arxiv.org/abs/2303.07937).
* Or Patashnik, Daniel Garibi, Idan Azuri, Hadar Averbuch-Elor, Daniel Cohen-Or: “Localizing Object-level Shape Variations with Text-to-Image Diffusion Models”, 2023; [arXiv:2303.11306](http://arxiv.org/abs/2303.11306).
* Dídac Surís, Sachit Menon, Carl Vondrick: “ViperGPT: Visual Inference via Python Execution for Reasoning”, 2023; [arXiv:2303.08128](http://arxiv.org/abs/2303.08128).
* Chenyang Qi, Xiaodong Cun, Yong Zhang, Chenyang Lei, Xintao Wang, Ying Shan, Qifeng Chen: “FateZero: Fusing Attentions for Zero-shot Text-based Video Editing”, 2023; [arXiv:2303.09535](http://arxiv.org/abs/2303.09535).
* Sean Welleck, Jiacheng Liu, Ximing Lu, Hannaneh Hajishirzi, Yejin Choi: “NaturalProver: Grounded Mathematical Proof Generation with Language Models”, 2022; [arXiv:2205.12910](http://arxiv.org/abs/2205.12910).
* Elad Richardson, Gal Metzer, Yuval Alaluf, Raja Giryes, Daniel Cohen-Or: “TEXTure: Text-Guided Texturing of 3D Shapes”, 2023; [arXiv:2302.01721](http://arxiv.org/abs/2302.01721).
* Puijin Cheng, Li Lin, Yijin Huang, Huaqing He, Wenhan Luo, Xiaoying Tang: “Learning Enhancement From Degradation: A Diffusion Model For Fundus Image Enhancement”, 2023; [arXiv:2303.04603](http://arxiv.org/abs/2303.04603).
* Shun Shao, Yftah Ziser, Shay Cohen: “Erasure of Unaligned Attributes from Neural Representations”, 2023; [arXiv:2302.02997](http://arxiv.org/abs/2302.02997).
* Seonghyeon Ye, Hyeonbin Hwang, Sohee Yang, Hyeongu Yun, Yireun Kim, Minjoon Seo: “In-Context Instruction Learning”, 2023; [arXiv:2302.14691](http://arxiv.org/abs/2302.14691).
* Shikun Liu, Linxi Fan, Edward Johns, Zhiding Yu, Chaowei Xiao, Anima Anandkumar: “Prismer: A Vision-Language Model with An Ensemble of Experts”, 2023; [arXiv:2303.02506](http://arxiv.org/abs/2303.02506).
* Haoyu Chen, Zhihua Wang, Yang Yang, Qilin Sun, Kede Ma: “Learning a Deep Color Difference Metric for Photographic Images”, 2023; [arXiv:2303.14964](http://arxiv.org/abs/2303.14964).
* Van-Hoang Le, Hongyu Zhang: “Log Parsing with Prompt-based Few-shot Learning”, 2023; [arXiv:2302.07435](http://arxiv.org/abs/2302.07435).
* Keito Kudo, Yoichi Aoki, Tatsuki Kuribayashi, Ana Brassard, Masashi Yoshikawa, Keisuke Sakaguchi, Kentaro Inui: “Do Deep Neural Networks Capture Compositionality in Arithmetic Reasoning?”, 2023; [arXiv:2302.07866](http://arxiv.org/abs/2302.07866).
* Ruoyao Wang, Peter Jansen, Marc-Alexandre Côté, Prithviraj Ammanabrolu: “Behavior Cloned Transformers are Neurosymbolic Reasoners”, 2022; [arXiv:2210.07382](http://arxiv.org/abs/2210.07382).
* Martin Wessel, Tomáš Horych, Terry Ruas, Akiko Aizawa, Bela Gipp, Timo Spinde: “Introducing MBIB -- the first Media Bias Identification Benchmark Task and Dataset Collection”, 2023; [arXiv:2304.13148](http://arxiv.org/abs/2304.13148). DOI: [10.1145/3539618.3591882](https://dx.doi.org/10.1145/3539618.3591882).
* Hila Chefer, Yuval Alaluf, Yael Vinker, Lior Wolf, Daniel Cohen-Or: “Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models”, 2023; [arXiv:2301.13826](http://arxiv.org/abs/2301.13826).
* Marcio Fonseca, Yftah Ziser, Shay B. Cohen: “Factorizing Content and Budget Decisions in Abstractive Summarization of Long Documents”, 2022; [arXiv:2205.12486](http://arxiv.org/abs/2205.12486).
* Tianxing He, Jingyu Zhang, Tianle Wang, Sachin Kumar, Kyunghyun Cho, James Glass, Yulia Tsvetkov: “On the Blind Spots of Model-Based Evaluation Metrics for Text Generation”, 2022; [arXiv:2212.10020](http://arxiv.org/abs/2212.10020).
* Ori Ram, Yoav Levine, Itay Dalmedigos, Dor Muhlgay, Amnon Shashua, Kevin Leyton-Brown, Yoav Shoham: “In-Context Retrieval-Augmented Language Models”, 2023; [arXiv:2302.00083](http://arxiv.org/abs/2302.00083).
* Dacheng Li, Rulin Shao, Hongyi Wang, Han Guo, Eric P. Xing, Hao Zhang: “MPCFormer: fast, performant and private Transformer inference with MPC”, 2022; [arXiv:2211.01452](http://arxiv.org/abs/2211.01452).
* Baolin Peng, Michel Galley, Pengcheng He, Chris Brockett, Lars Liden, Elnaz Nouri, Zhou Yu, Bill Dolan, Jianfeng Gao: “GODEL: Large-Scale Pre-Training for Goal-Directed Dialog”, 2022; [arXiv:2206.11309](http://arxiv.org/abs/2206.11309).
* Egil Rønningstad, Erik Velldal, Lilja Øvrelid: “Entity-Level Sentiment Analysis (ELSA): An exploratory task survey”, 2023, Proceedings of the 29th International Conference on Computational Linguistics, 2022, pages 6773-6783; [arXiv:2304.14241](http://arxiv.org/abs/2304.14241).
* Charlie Snell, Ilya Kostrikov, Yi Su, Mengjiao Yang, Sergey Levine: “Offline RL for Natural Language Generation with Implicit Language Q Learning”, 2022; [arXiv:2206.11871](http://arxiv.org/abs/2206.11871).
* Zhiruo Wang, Shuyan Zhou, Daniel Fried, Graham Neubig: “Execution-Based Evaluation for Open-Domain Code Generation”, 2022; [arXiv:2212.10481](http://arxiv.org/abs/2212.10481).
* Minh-Long Luu, Zeyi Huang, Eric P. Xing, Yong Jae Lee, Haohan Wang: “Expeditious Saliency-guided Mix-up through Random Gradient Thresholding”, 2022; [arXiv:2212.04875](http://arxiv.org/abs/2212.04875).
* Jun Hao Liew, Hanshu Yan, Daquan Zhou, Jiashi Feng: “MagicMix: Semantic Mixing with Diffusion Models”, 2022; [arXiv:2210.16056](http://arxiv.org/abs/2210.16056).
* Yaqing Wang, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Hassan Awadallah, Jianfeng Gao: “LiST: Lite Prompted Self-training Makes Parameter-Efficient Few-shot Learners”, 2021; [arXiv:2110.06274](http://arxiv.org/abs/2110.06274).

View File

@ -23,11 +23,12 @@ The [nlp_example.py](./nlp_example.py) script is a simple example to train a Ber
Prior to running it you should install 🤗 Dataset and 🤗 Transformers:
```bash
pip install datasets evaluate transformers
```
The same script can be run in any of the following configurations:
- single CPU or single GPU
- multi CPUs
- multi GPUs (using PyTorch distributed mode)
- (multi) TPUs
- fp16 (mixed-precision) or fp32 (normal precision)
@ -51,22 +52,34 @@ To run it in each of these various modes, use the following commands:
python ./nlp_example.py # from a server with a GPU
```
- with fp16 (mixed-precision)
* from any server by passing `mixed_precision="fp16"` to the `Accelerator`.
```bash
python ./nlp_example.py --mixed_precision fp16
```
* from any server with Accelerate launcher
```bash
accelerate launch --mixed_precision fp16 ./nlp_example.py
```
- multi CPUs (requires Open MPI, Intel MPI, or MVAPICH)
* With Accelerate config and launcher, execute the following from node 0:
```bash
accelerate config # Select to have accelerate launch mpirun
accelerate launch ./nlp_example.py # This will run the script on each server
```
* With Intel MPI:
```bash
export CCL_WORKER_COUNT=1
export MASTER_ADDR=xxx.xxx.xxx.xxx #node0 ip
mpirun -f hostfile -n 16 -ppn 4 python ./nlp_example.py
```
- multi GPUs (using PyTorch distributed mode)
* With Accelerate config and launcher
```bash
accelerate config # This will create a config file on your server
accelerate launch ./nlp_example.py # This will run the script on your server
```
* With traditional PyTorch launcher (`python -m torch.distributed.run` can be used instead of `torchrun`)
```bash
torchrun --nproc_per_node 2 ./nlp_example.py
```
- multi GPUs, multi node (several machines, using PyTorch distributed mode)
* With Accelerate config and launcher, on each machine:
@ -74,18 +87,15 @@ To run it in each of these various modes, use the following commands:
accelerate config # This will create a config file on each server
accelerate launch ./nlp_example.py # This will run the script on each server
```
* With PyTorch launcher only (`python -m torch.distributed.run` can be used instead of `torchrun`). Run this command on each node:
```bash
torchrun \ # python -m torch.distributed.run
--nproc_per_node 2 \
--nnodes 2 \
--rdzv_id 2299 \ # A unique job id
--rdzv_backend c10d \
--rdzv_endpoint master_node_ip_address:29500 \
./nlp_example.py
```
- (multi) TPUs
* With Accelerate config and launcher
@ -103,6 +113,7 @@ The [cv_example.py](./cv_example.py) script is a simple example to fine-tune a R
The same script can be run in any of the following configurations:
- single CPU or single GPU
- multi CPUs
- multi GPUs (using PyTorch distributed mode)
- (multi) TPUs
- fp16 (mixed-precision) or fp32 (normal precision)
@ -136,54 +147,110 @@ To run it in each of these various modes, use the following commands:
```
- single GPU:
```bash
python ./cv_example.py # from a server with a GPU
```
- with fp16 (mixed-precision)
* from any server by passing `mixed_precision="fp16"` to the `Accelerator`.
```bash
python ./cv_example.py --data_dir path_to_data --mixed_precision fp16
```
* from any server with Accelerate launcher
```bash
accelerate launch --mixed_precision fp16 ./cv_example.py --data_dir path_to_data
```
- multi CPUs (requires Open MPI, Intel MPI, or MVAPICH)
* With Accelerate config and launcher, run the following from node 0:
```bash
accelerate config --config_file config.yaml # Select to have accelerate launch mpirun
accelerate launch ./cv_example.py --data_dir path_to_data # This will run the script on each server
```
* With Intel MPI, execute mpirun from node 0:
```bash
export CCL_WORKER_COUNT=1
export MASTER_ADDR=xxx.xxx.xxx.xxx #node0 ip
mpirun -f hostfile -n 16 -ppn 4 python ./cv_example.py --data_dir path_to_data
```
- multi GPUs (using PyTorch distributed mode)
* With Accelerate config and launcher
```bash
accelerate config --config_file config.yaml # This will create a config file `config.yaml` on your server
accelerate launch --config_file config.yaml ./cv_example.py --data_dir path_to_data # This will run the script on your server
```
* With traditional PyTorch launcher (`python -m torch.distributed.run` can be used instead of `torchrun`)
```bash
torchrun --nproc_per_node 2 ./cv_example.py --data_dir path_to_data
```
- multi GPUs, multi node (several machines, using PyTorch distributed mode)
* With Accelerate config and launcher, on each machine:
```bash
accelerate config --config_file config.yaml # This will create a config file `config.yaml` on each server
accelerate launch --config_file config.yaml ./cv_example.py --data_dir path_to_data # This will run the script on each server
```
* With PyTorch launcher only (`python -m torch.distributed.run` can be used instead of `torchrun`). Run this command on each node:
```bash
torchrun \ # python -m torch.distributed.run
--nproc_per_node 2 \
--nnodes 2 \
--rdzv_id 2299 \ # A unique job id
--rdzv_backend c10d \
--rdzv_endpoint master_node_ip_address:29500 \
./cv_example.py --data_dir path_to_data
```
- (multi) TPUs
* With Accelerate config and launcher
```bash
accelerate config --config_file config.yaml # This will create a config file `config.yaml` on your TPU server
accelerate launch --config_file config.yaml ./cv_example.py --data_dir path_to_data # This will run the script on each server
```
* In PyTorch:
Add an `xmp.spawn` line in your script as you usually do.
### Simple vision example (GANs)
- [huggan project](https://github.com/huggingface/community-events/tree/main/huggan)
### Using AWS SageMaker integration
- [Examples showcasing AWS SageMaker integration of 🤗 Accelerate.](https://github.com/pacman100/accelerate-aws-sagemaker)
## Simple Multi-GPU Hardware Launcher
[multigpu_remote_launcher.py](./multigpu_remote_launcher.py) is a minimal script that demonstrates launching accelerate
on multiple remote GPUs, and with automatic hardware environment and dependency setup for reproducibility. You can
easily customize the training function used, training arguments, hyperparameters, and type of compute hardware, and then
run the script to automatically launch multi GPU training on remote hardware.
This script uses [Runhouse](https://github.com/run-house/runhouse) to launch on self-hosted hardware (e.g. in your own
cloud account or on-premise cluster) but there are other options for running remotely as well. Runhouse can be installed
with `pip install runhouse`, and you can refer to
[hardware setup](https://runhouse-docs.readthedocs-hosted.com/en/latest/api/python/cluster.html#hardware-setup)
for hardware setup instructions, or this
[Colab tutorial](https://colab.research.google.com/drive/1qVwYyLTCPYPSdz9ZX7BZl9Qm0A3j7RJe) for a more in-depth walkthrough.
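As a rough sketch of that workflow, assuming you have cloud credentials configured and have edited the cluster details inside the script to match your own hardware:
```bash
# Install Runhouse, then run the launcher locally; it provisions or connects to the
# remote cluster, sets up the environment, and dispatches the training function there.
pip install runhouse
python multigpu_remote_launcher.py
```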
## SLURM Scripts
In [/slurm/submit_multigpu.sh](./slurm/submit_multigpu.sh), [/slurm/submit_multinode.sh](./slurm/submit_multinode.sh), and [/slurm/submit_multicpu.sh](./slurm/submit_multicpu.sh) we present scripts for running the examples on a machine with the [SLURM](https://slurm.schedmd.com/documentation.html) workload manager.
In [/slurm/submit_multigpu.sh](./slurm/submit_multigpu.sh) the only parameter in the launcher that needs to be modified is `--num_processes`, which determines the number of GPUs we will use. Here, the environment variable `$SLURM_GPUS` indicates that we want to utilize all the GPUs available on the node we have requested.
In [/slurm/submit_multinode.sh](./slurm/submit_multinode.sh) we must specify the number of nodes that will take part in the training (`--num_machines`), how many GPUs we will use in total (`--num_processes`), the [`backend`](https://pytorch.org/docs/stable/elastic/run.html#note-on-rendezvous-backend), `--main_process_ip`, which will be the address of the master node, and the `--main_process_port`.
In [/slurm/submit_multicpu.sh](./slurm/submit_multicpu.sh) we must specify the number of nodes that will take part in the training (`--num_machines`), how many CPU processes we will use in total (`--num_processes`), the [`backend`](https://pytorch.org/docs/stable/elastic/run.html#note-on-rendezvous-backend), `--main_process_ip`, which will be the address of the master node, and the `--main_process_port`. `mpirun_hostfile` specifies that the job should be run with MPIRun. A minimal sketch of the resulting launcher invocation is shown below.
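For orientation, here is a minimal sketch of the kind of launcher invocation these SLURM scripts assemble; the `#SBATCH` directives, the `GPUS_PER_NODE` value, and the example script being launched are illustrative placeholders rather than the exact contents of the scripts:
```bash
#!/bin/bash
#SBATCH --job-name=accelerate-multinode   # illustrative job header
#SBATCH --nodes=2                         # matches --num_machines below
#SBATCH --gres=gpu:4                      # GPUs requested per node

export GPUS_PER_NODE=4                    # illustrative; set to your hardware
# The first hostname in the allocation acts as the master node
head_node_ip=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1)

srun accelerate launch \
    --num_processes $((SLURM_NNODES * GPUS_PER_NODE)) \
    --num_machines "$SLURM_NNODES" \
    --rdzv_backend c10d \
    --main_process_ip "$head_node_ip" \
    --main_process_port 29500 \
    ./nlp_example.py --mixed_precision fp16
```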
In these scripts, we run `activateEnvironment.sh` at the beginning. This script should contain the necessary instructions to initialize the environment for execution. Below, we show an example that loads the necessary libraries ([Environment modules](https://github.com/cea-hpc/modules)), activates the Python environment, and sets up various environment variables, most of them to run the scripts in offline mode in case we don't have an internet connection from the cluster.
```bash
# activateEnvironment.sh
module purge
module load anaconda3/2020.02 cuda/10.2 cudnn/8.0.5 nccl/2.9.9 arrow/7.0.0 openmpi
source activate /home/nct01/nct01328/pytorch_antoni_local
export HF_HOME=/gpfs/projects/nct01/nct01328/
export HF_LOCAL_HOME=/gpfs/projects/nct01/nct01328/HF_LOCAL
export HF_DATASETS_OFFLINE=1
export TRANSFORMERS_OFFLINE=1
export PYTHONPATH=/home/nct01/nct01328/transformers-in-supercomputers:$PYTHONPATH
export GPUS_PER_NODE=4
```
## Finer Examples
While the first two scripts are extremely barebones when it comes to what you can do with accelerate, more advanced features are documented in two other locations.

View File

@ -19,7 +19,7 @@ Adjustments to each script from the base `nlp_example.py` file can be found quic
All following scripts also accept these arguments in addition to their added ones.
These arguments should be added at the end of any method for starting the python script (such as `python`, `accelerate launch`, `python -m torch.distributed.launch`), such as:
These arguments should be added at the end of any method for starting the python script (such as `python`, `accelerate launch`, `python -m torch.distributed.run`), such as:
```bash
accelerate launch ../nlp_example.py --mixed_precision fp16 --cpu 0
@ -34,7 +34,7 @@ accelerate launch ../nlp_example.py --mixed_precision fp16 --cpu 0
- `output_dir`, where saved state folders should be saved to, default is current working directory
- `resume_from_checkpoint`, what checkpoint folder to resume from. ("epoch_0", "step_22", ...)
These arguments should be added at the end of any method for starting the python script (such as `python`, `accelerate launch`, `python -m torch.distributed.launch`), such as:
These arguments should be added at the end of any method for starting the python script (such as `python`, `accelerate launch`, `torchrun`), such as:
(Note, `resume_from_checkpoint` assumes that we've run the script for one epoch with the `--checkpointing_steps epoch` flag)
@ -42,6 +42,18 @@ These arguments should be added at the end of any method for starting the python
accelerate launch ./checkpointing.py --checkpointing_steps epoch output_dir "checkpointing_tutorial" --resume_from_checkpoint "checkpointing_tutorial/epoch_0"
```
### Cross Validation (`cross_validation.py`)
- Shows how to use `Accelerator.free_memory` and run cross validation efficiently with `datasets`.
- Arguments available:
- `num_folds`, the number of folds the training dataset should be split into.
These arguments should be added at the end of any method for starting the python script (such as `python`, `accelerate launch`, `torchrun`), such as:
```bash
accelerate launch ./cross_validation.py --num_folds 2
```
### Experiment Tracking (`tracking.py`)
- Shows how to use `Accelerator.init_trackers` and `Accelerator.log`
@ -49,20 +61,61 @@ accelerate launch ./checkpointing.py --checkpointing_steps epoch output_dir "che
- Arguments available:
- `with_tracking`, whether to load in all available experiment trackers from the environment.
These arguments should be added at the end of any method for starting the python script (such as `python`, `accelerate launch`, `python -m torch.distributed.launch`), such as:
These arguments should be added at the end of any method for starting the python script (such as `python`, `accelerate launch`, `torchrun`), such as:
```bash
accelerate launch ./tracking.py --with_tracking
```
### Cross Validation (`cross_validation.py`)
### Gradient Accumulation (`gradient_accumulation.py`)
- Shows how to use `Accelerator.free_memory` and run cross validation efficiently with `datasets`.
- Shows how to use `Accelerator.no_sync` to prevent gradient averaging in a distributed setup.
- Arguments available:
- `num_folds`, the number of folds the training dataset should be split into.
- `gradient_accumulation_steps`, the number of steps to accumulate gradients over before the optimizer and scheduler are stepped and the gradients are zeroed
These arguments should be added at the end of any method for starting the python script (such as `python`, `accelerate launch`, `python -m torch.distributed.launch`), such as:
These arguments should be added at the end of any method for starting the python script (such as `python`, `accelerate launch`, `torchrun`), such as:
```bash
accelerate launch ./cross_validation.py --num_folds 2
accelerate launch ./gradient_accumulation.py --gradient_accumulation_steps 5
```
### LocalSGD (`local_sgd.py`)
- Shows how to use `Accelerator.no_sync` to prevent gradient averaging in a distributed setup. However, unlike gradient accumulation, this method does not change the effective batch size. Local SGD can be combined with gradient accumulation.
These arguments should be added at the end of any method for starting the python script (such as `python`, `accelerate launch`, `torchrun`), such as:
```bash
accelerate launch ./local_sgd.py --local_sgd_steps 4
```
### DDP Communication Hook (`ddp_comm_hook.py`)
- Shows how to use DDP Communication Hooks to control and optimize gradient communication across workers in a DistributedDataParallel setup.
- Arguments available:
- `ddp_comm_hook`, the type of DDP communication hook to use. Choose between `no`, `fp16`, `bf16`, `power_sgd`, and `batched_power_sgd`.
These arguments should be added at the end of any method for starting the python script (such as `accelerate launch`, `python -m torch.distributed.run`), such as:
```bash
accelerate launch ./ddp_comm_hook.py --mixed_precision fp16 --ddp_comm_hook power_sgd
```
### Profiler (`profiler.py`)
- Shows how to use the profiling capabilities of `Accelerate` to profile PyTorch models during training.
- Uses the `ProfileKwargs` handler to customize profiling options, including activities, scheduling, and additional profiling options.
- Can generate and save profiling traces in JSON format for visualization in Chrome's tracing tool.
Arguments available:
- `--record_shapes`: If passed, records shapes for profiling.
- `--profile_memory`: If passed, profiles memory usage.
- `--with_stack`: If passed, profiles stack traces.
- `--with_flops`: If passed, profiles floating point operations (FLOPS).
- `--output_trace_dir`: If specified, saves the profiling trace to the given dir in JSON format.
- `--cpu`: If passed, trains on the CPU instead of GPU.
These arguments should be added at the end of any method for starting the Python script (such as `python`, `accelerate launch`, `torchrun`), such as:
```bash
accelerate launch ./profiler.py --record_shapes --profile_memory --with_flops --output_trace_dir "profiler"
```

View File

@ -0,0 +1,242 @@
# Copyright 2022 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import os
# New Code #
import evaluate
import torch
from datasets import load_dataset
from torch.optim import AdamW
from torch.utils.data import DataLoader
from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_linear_schedule_with_warmup, set_seed
from accelerate import Accelerator
from accelerate.utils import find_executable_batch_size
########################################################################
# This is a fully working simple example to use Accelerate,
# specifically showcasing how to combine both the gradient accumulation
# and automatic batch size finder utilities of Accelerate to perform
# automatic gradient accumulation
#
# This example trains a Bert base model on GLUE MRPC
# in any of the following settings (with the same script):
# - single CPU or single GPU
# - multi GPUS (using PyTorch distributed mode)
# - (multi) TPUs
# - fp16 (mixed-precision) or fp32 (normal precision)
#
# New additions from the base script can be found quickly by
# looking for the # New Code # tags
#
# To run it in each of these various modes, follow the instructions
# in the readme for examples:
# https://github.com/huggingface/accelerate/tree/main/examples
#
########################################################################
EVAL_BATCH_SIZE = 32
def get_dataloaders(accelerator: Accelerator, batch_size: int = 16):
"""
Creates a set of `DataLoader`s for the `glue` dataset,
using "bert-base-cased" as the tokenizer.
Args:
accelerator (`Accelerator`):
An `Accelerator` object
batch_size (`int`, *optional*):
The batch size for the train and validation DataLoaders.
"""
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
datasets = load_dataset("glue", "mrpc")
def tokenize_function(examples):
# max_length=None => use the model max length (it's actually the default)
outputs = tokenizer(examples["sentence1"], examples["sentence2"], truncation=True, max_length=None)
return outputs
# Apply the method we just defined to all the examples in all the splits of the dataset
# starting with the main process first:
with accelerator.main_process_first():
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
remove_columns=["idx", "sentence1", "sentence2"],
)
# We also rename the 'label' column to 'labels' which is the expected name for labels by the models of the
# transformers library
tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
def collate_fn(examples):
# When using mixed precision we want round multiples of 8/16
if accelerator.mixed_precision == "fp8":
pad_to_multiple_of = 16
elif accelerator.mixed_precision != "no":
pad_to_multiple_of = 8
else:
pad_to_multiple_of = None
return tokenizer.pad(
examples,
padding="longest",
pad_to_multiple_of=pad_to_multiple_of,
return_tensors="pt",
)
# Instantiate dataloaders.
train_dataloader = DataLoader(
tokenized_datasets["train"], shuffle=True, collate_fn=collate_fn, batch_size=batch_size
)
eval_dataloader = DataLoader(
tokenized_datasets["validation"], shuffle=False, collate_fn=collate_fn, batch_size=EVAL_BATCH_SIZE
)
return train_dataloader, eval_dataloader
# For testing only
if os.environ.get("TESTING_MOCKED_DATALOADERS", None) == "1":
from accelerate.test_utils.training import mocked_dataloaders
get_dataloaders = mocked_dataloaders # noqa: F811
def training_function(config, args):
# For testing only
if os.environ.get("TESTING_MOCKED_DATALOADERS", None) == "1":
config["num_epochs"] = 2
# Initialize accelerator
accelerator = Accelerator(cpu=args.cpu, mixed_precision=args.mixed_precision)
# Sample hyper-parameters for learning rate, batch size, seed and a few other HPs
lr = config["lr"]
num_epochs = int(config["num_epochs"])
seed = int(config["seed"])
observed_batch_size = int(config["batch_size"])
metric = evaluate.load("glue", "mrpc")
# New Code #
# We use the `find_executable_batch_size` decorator, passing in the desired observed batch size
# to train on. If a CUDA OOM error occurs, it will retry this loop cutting the batch size in
# half each time. From this, we can calculate the number of gradient accumulation steps needed
# and modify the Accelerator object as a result
@find_executable_batch_size(starting_batch_size=int(observed_batch_size))
def inner_training_loop(batch_size):
# Since we need to modify the outside accelerator object, we need to bring it
# to the local scope
nonlocal accelerator
# We can calculate the number of gradient accumulation steps based on the current
# batch size vs the starting batch size
num_gradient_accumulation_steps = observed_batch_size // batch_size
# And then set it in the Accelerator directly:
accelerator.gradient_accumulation_steps = num_gradient_accumulation_steps
# Next we need to free all of the stored model references in the Accelerator each time
accelerator.free_memory()
# And set the seed so our results are reproducible each reset
set_seed(seed)
# Instantiate the model (we build the model here so that the seed also controls new weight initialization)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", return_dict=True)
# We could avoid this line since the accelerator is set with `device_placement=True` (default value).
# Note that if you are placing tensors on devices manually, this line absolutely needs to be before the optimizer
# creation otherwise training will not work on TPU (`accelerate` will kindly throw an error to make us aware of that).
model = model.to(accelerator.device)
# Instantiate optimizer
optimizer = AdamW(params=model.parameters(), lr=lr)
train_dataloader, eval_dataloader = get_dataloaders(accelerator, batch_size)
# Instantiate scheduler
lr_scheduler = get_linear_schedule_with_warmup(
optimizer=optimizer,
num_warmup_steps=100,
num_training_steps=(len(train_dataloader) * num_epochs),
)
# Prepare everything
# There is no specific order to remember, we just need to unpack the objects in the same order we gave them to the
# prepare method.
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
)
# Now we train the model
for epoch in range(num_epochs):
model.train()
for step, batch in enumerate(train_dataloader):
# And perform gradient accumulation
with accelerator.accumulate(model):
# We could avoid this line since we set the accelerator with `device_placement=True`.
batch.to(accelerator.device)
outputs = model(**batch)
loss = outputs.loss
accelerator.backward(loss)
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
model.eval()
for step, batch in enumerate(eval_dataloader):
# We could avoid this line since we set the accelerator with `device_placement=True`.
batch.to(accelerator.device)
with torch.no_grad():
outputs = model(**batch)
predictions = outputs.logits.argmax(dim=-1)
predictions, references = accelerator.gather_for_metrics((predictions, batch["labels"]))
metric.add_batch(
predictions=predictions,
references=references,
)
eval_metric = metric.compute()
# Use accelerator.print to print only on the main process.
accelerator.print(f"epoch {epoch}:", eval_metric)
# New Code #
# And call it at the end with no arguments
# Note: You could also refactor this outside of your training loop function
inner_training_loop()
def main():
parser = argparse.ArgumentParser(description="Simple example of training script.")
parser.add_argument(
"--mixed_precision",
type=str,
default=None,
choices=["no", "fp16", "bf16", "fp8"],
help="Whether to use mixed precision. Choose"
"between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
"and an Nvidia Ampere GPU.",
)
parser.add_argument("--cpu", action="store_true", help="If passed, will train on the CPU.")
args = parser.parse_args()
# New Code #
# We modify the starting batch size to be an observed batch size of 256, to guarantee an initial CUDA OOM
config = {"lr": 2e-5, "num_epochs": 3, "seed": 42, "batch_size": 256}
training_function(config, args)
if __name__ == "__main__":
main()

View File

@ -1,4 +1,3 @@
# coding=utf-8
# Copyright 2021 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -15,18 +14,14 @@
import argparse
import os
import evaluate
import torch
from datasets import load_dataset
from torch.optim import AdamW
from torch.utils.data import DataLoader
from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_linear_schedule_with_warmup, set_seed
from accelerate import Accelerator, DistributedType
from datasets import load_dataset, load_metric
from transformers import (
AdamW,
AutoModelForSequenceClassification,
AutoTokenizer,
get_linear_schedule_with_warmup,
set_seed,
)
########################################################################
@ -76,11 +71,13 @@ def get_dataloaders(accelerator: Accelerator, batch_size: int = 16):
return outputs
# Apply the method we just defined to all the examples in all the splits of the dataset
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
remove_columns=["idx", "sentence1", "sentence2"],
)
# starting with the main process first:
with accelerator.main_process_first():
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
remove_columns=["idx", "sentence1", "sentence2"],
)
# We also rename the 'label' column to 'labels' which is the expected name for labels by the models of the
# transformers library
@ -88,9 +85,22 @@ def get_dataloaders(accelerator: Accelerator, batch_size: int = 16):
def collate_fn(examples):
# On TPU it's best to pad everything to the same length or training will be very slow.
if accelerator.distributed_type == DistributedType.TPU:
return tokenizer.pad(examples, padding="max_length", max_length=128, return_tensors="pt")
return tokenizer.pad(examples, padding="longest", return_tensors="pt")
max_length = 128 if accelerator.distributed_type == DistributedType.XLA else None
# When using mixed precision we want round multiples of 8/16
if accelerator.mixed_precision == "fp8":
pad_to_multiple_of = 16
elif accelerator.mixed_precision != "no":
pad_to_multiple_of = 8
else:
pad_to_multiple_of = None
return tokenizer.pad(
examples,
padding="longest",
max_length=max_length,
pad_to_multiple_of=pad_to_multiple_of,
return_tensors="pt",
)
# Instantiate dataloaders.
train_dataloader = DataLoader(
@ -103,13 +113,22 @@ def get_dataloaders(accelerator: Accelerator, batch_size: int = 16):
return train_dataloader, eval_dataloader
# For testing only
if os.environ.get("TESTING_MOCKED_DATALOADERS", None) == "1":
from accelerate.test_utils.training import mocked_dataloaders
get_dataloaders = mocked_dataloaders # noqa: F811
def training_function(config, args):
# For testing only
if os.environ.get("TESTING_MOCKED_DATALOADERS", None) == "1":
config["num_epochs"] = 2
# Initialize accelerator
accelerator = Accelerator(cpu=args.cpu, mixed_precision=args.mixed_precision)
# Sample hyper-parameters for learning rate, batch size, seed and a few other HPs
lr = config["lr"]
num_epochs = int(config["num_epochs"])
correct_bias = config["correct_bias"]
seed = int(config["seed"])
batch_size = int(config["batch_size"])
@ -130,11 +149,11 @@ def training_function(config, args):
set_seed(seed)
train_dataloader, eval_dataloader = get_dataloaders(accelerator, batch_size)
metric = load_metric("glue", "mrpc")
metric = evaluate.load("glue", "mrpc")
# If the batch size is too big we use gradient accumulation
gradient_accumulation_steps = 1
if batch_size > MAX_GPU_BATCH_SIZE:
if batch_size > MAX_GPU_BATCH_SIZE and accelerator.distributed_type != DistributedType.XLA:
gradient_accumulation_steps = batch_size // MAX_GPU_BATCH_SIZE
batch_size = MAX_GPU_BATCH_SIZE
@ -147,7 +166,7 @@ def training_function(config, args):
model = model.to(accelerator.device)
# Instantiate optimizer
optimizer = AdamW(params=model.parameters(), lr=lr, correct_bias=correct_bias)
optimizer = AdamW(params=model.parameters(), lr=lr)
# Instantiate scheduler
lr_scheduler = get_linear_schedule_with_warmup(
@ -196,13 +215,15 @@ def training_function(config, args):
# Now we train the model
for epoch in range(starting_epoch, num_epochs):
model.train()
for step, batch in enumerate(train_dataloader):
# New Code #
# We need to skip steps until we reach the resumed step during the first epoch
if args.resume_from_checkpoint and epoch == starting_epoch:
if resume_step is not None and step < resume_step:
overall_step += 1
continue
# New Code #
if args.resume_from_checkpoint and epoch == starting_epoch and resume_step is not None:
# We need to skip steps until we reach the resumed step
active_dataloader = accelerator.skip_first_batches(train_dataloader, resume_step)
overall_step += resume_step
else:
# After the first iteration though, we need to go back to the original dataloader
active_dataloader = train_dataloader
for step, batch in enumerate(active_dataloader):
# We could avoid this line since we set the accelerator with `device_placement=True`.
batch.to(accelerator.device)
outputs = model(**batch)
@ -235,8 +256,7 @@ def training_function(config, args):
with torch.no_grad():
outputs = model(**batch)
predictions = outputs.logits.argmax(dim=-1)
# It is slightly faster to call this once, than multiple times
predictions, references = accelerator.gather((predictions, batch["labels"]))
predictions, references = accelerator.gather_for_metrics((predictions, batch["labels"]))
metric.add_batch(
predictions=predictions,
references=references,
@ -263,8 +283,8 @@ def main():
parser.add_argument(
"--mixed_precision",
type=str,
default="no",
choices=["no", "fp16", "bf16"],
default=None,
choices=["no", "fp16", "bf16", "fp8"],
help="Whether to use mixed precision. Choose"
"between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
"and an Nvidia Ampere GPU.",
@ -289,7 +309,7 @@ def main():
help="If the training should continue from a checkpoint folder.",
)
args = parser.parse_args()
config = {"lr": 2e-5, "num_epochs": 3, "correct_bias": True, "seed": 42, "batch_size": 16}
config = {"lr": 2e-5, "num_epochs": 3, "seed": 42, "batch_size": 16}
training_function(config, args)

View File

@ -1,4 +1,3 @@
# coding=utf-8
# Copyright 2022 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -15,23 +14,19 @@
import argparse
from typing import List
import evaluate
import numpy as np
import torch
from torch.utils.data import DataLoader
from accelerate import Accelerator, DistributedType
from datasets import DatasetDict, load_dataset, load_metric
from datasets import DatasetDict, load_dataset
# New Code #
# We'll be using StratifiedKFold for this example
from sklearn.model_selection import StratifiedKFold
from transformers import (
AdamW,
AutoModelForSequenceClassification,
AutoTokenizer,
get_linear_schedule_with_warmup,
set_seed,
)
from torch.optim import AdamW
from torch.utils.data import DataLoader
from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_linear_schedule_with_warmup, set_seed
from accelerate import Accelerator, DistributedType
########################################################################
@ -62,7 +57,7 @@ MAX_GPU_BATCH_SIZE = 16
EVAL_BATCH_SIZE = 32
# New Code #
# We need a different `get_dataloaders` function that will build dataloaders by indexs
# We need a different `get_dataloaders` function that will build dataloaders by index
def get_fold_dataloaders(
@ -75,9 +70,9 @@ def get_fold_dataloaders(
accelerator (`Accelerator`):
The main `Accelerator` object
train_idxs (list of `int`):
The split indicies for the training dataset
The split indices for the training dataset
valid_idxs (list of `int`):
The split indicies for the validation dataset
The split indices for the validation dataset
batch_size (`int`):
The size of the minibatch. Default is 16
"""
@ -96,11 +91,13 @@ def get_fold_dataloaders(
return outputs
# Apply the method we just defined to all the examples in all the splits of the dataset
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
remove_columns=["idx", "sentence1", "sentence2"],
)
# starting with the main process first:
with accelerator.main_process_first():
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
remove_columns=["idx", "sentence1", "sentence2"],
)
# We also rename the 'label' column to 'labels' which is the expected name for labels by the models of the
# transformers library
@ -108,9 +105,22 @@ def get_fold_dataloaders(
def collate_fn(examples):
# On TPU it's best to pad everything to the same length or training will be very slow.
if accelerator.distributed_type == DistributedType.TPU:
return tokenizer.pad(examples, padding="max_length", max_length=128, return_tensors="pt")
return tokenizer.pad(examples, padding="longest", return_tensors="pt")
max_length = 128 if accelerator.distributed_type == DistributedType.XLA else None
# When using mixed precision we want round multiples of 8/16
if accelerator.mixed_precision == "fp8":
pad_to_multiple_of = 16
elif accelerator.mixed_precision != "no":
pad_to_multiple_of = 8
else:
pad_to_multiple_of = None
return tokenizer.pad(
examples,
padding="longest",
max_length=max_length,
pad_to_multiple_of=pad_to_multiple_of,
return_tensors="pt",
)
# Instantiate dataloaders.
train_dataloader = DataLoader(
@ -129,7 +139,6 @@ def get_fold_dataloaders(
def training_function(config, args):
# New Code #
test_labels = None
test_predictions = []
# Download the dataset
datasets = load_dataset("glue", "mrpc")
@ -140,15 +149,14 @@ def training_function(config, args):
# Sample hyper-parameters for learning rate, batch size, seed and a few other HPs
lr = config["lr"]
num_epochs = int(config["num_epochs"])
correct_bias = config["correct_bias"]
seed = int(config["seed"])
batch_size = int(config["batch_size"])
metric = load_metric("glue", "mrpc")
metric = evaluate.load("glue", "mrpc")
# If the batch size is too big we use gradient accumulation
gradient_accumulation_steps = 1
if batch_size > MAX_GPU_BATCH_SIZE:
if batch_size > MAX_GPU_BATCH_SIZE and accelerator.distributed_type != DistributedType.XLA:
gradient_accumulation_steps = batch_size // MAX_GPU_BATCH_SIZE
batch_size = MAX_GPU_BATCH_SIZE
@ -157,17 +165,15 @@ def training_function(config, args):
# New Code #
# Create our folds:
folds = kfold.split(np.zeros(datasets["train"].num_rows), datasets["train"]["label"])
test_references = []
# Iterate over them
for train_idxs, valid_idxs in folds:
for i, (train_idxs, valid_idxs) in enumerate(folds):
train_dataloader, eval_dataloader, test_dataloader = get_fold_dataloaders(
accelerator,
datasets,
train_idxs,
valid_idxs,
)
if test_labels is None:
test_labels = datasets["validation"]["label"]
# Instantiate the model (we build the model here so that the seed also control new weights initialization)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", return_dict=True)
@ -177,7 +183,7 @@ def training_function(config, args):
model = model.to(accelerator.device)
# Instantiate optimizer
optimizer = AdamW(params=model.parameters(), lr=lr, correct_bias=correct_bias)
optimizer = AdamW(params=model.parameters(), lr=lr)
# Instantiate scheduler
lr_scheduler = get_linear_schedule_with_warmup(
@ -215,7 +221,7 @@ def training_function(config, args):
with torch.no_grad():
outputs = model(**batch)
predictions = outputs.logits.argmax(dim=-1)
predictions, references = accelerator.gather((predictions, batch["labels"]))
predictions, references = accelerator.gather_for_metrics((predictions, batch["labels"]))
metric.add_batch(
predictions=predictions,
references=references,
@ -234,21 +240,20 @@ def training_function(config, args):
with torch.no_grad():
outputs = model(**batch)
predictions = outputs.logits
predictions, references = accelerator.gather((predictions, batch["labels"]))
predictions, references = accelerator.gather_for_metrics((predictions, batch["labels"]))
fold_predictions.append(predictions.cpu())
metric.add_batch(
predictions=predictions.argmax(dim=-1),
references=references,
)
test_metric = metric.compute()
if i == 0:
# We need all of the test predictions
test_references.append(references.cpu())
# Use accelerator.print to print only on the main process.
test_predictions.append(torch.cat(fold_predictions, dim=0))
# We now need to release all our memory and get rid of the current model, optimizer, etc
accelerator.free_memory()
model, optimizer = accelerator.free_memory(model, optimizer)
# New Code #
# Finally we check the accuracy of our folded results:
preds = torch.stack(test_predictions, dim=0).sum(dim=0).div(int(config["n_splits"])).argmax(dim=-1)
test_metric = metric.compute(predictions=preds, references=test_labels)
test_references = torch.cat(test_references, dim=0)
preds = torch.stack(test_predictions, dim=0).sum(dim=0).div(int(args.num_folds)).argmax(dim=-1)
test_metric = metric.compute(predictions=preds, references=test_references)
accelerator.print("Average test metrics from all folds:", test_metric)
@ -257,8 +262,8 @@ def main():
parser.add_argument(
"--mixed_precision",
type=str,
default="no",
choices=["no", "fp16", "bf16"],
default=None,
choices=["no", "fp16", "bf16", "fp8"],
help="Whether to use mixed precision. Choose"
"between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
"and an Nvidia Ampere GPU.",
@ -267,7 +272,7 @@ def main():
# New Code #
parser.add_argument("--num_folds", type=int, default=3, help="The number of splits to perform across the dataset")
args = parser.parse_args()
config = {"lr": 2e-5, "num_epochs": 3, "correct_bias": True, "seed": 42, "batch_size": 16}
config = {"lr": 2e-5, "num_epochs": 3, "seed": 42, "batch_size": 16}
training_function(config, args)

View File

@ -0,0 +1,231 @@
# Copyright 2021 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import os
import evaluate
import torch
from datasets import load_dataset
from torch.optim import AdamW
from torch.utils.data import DataLoader
from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_linear_schedule_with_warmup, set_seed
from accelerate import Accelerator, DistributedType
from accelerate.utils import DDPCommunicationHookType, DistributedDataParallelKwargs
########################################################################
# This is a fully working simple example to use Accelerate
# and perform ddp communication hook
#
# This example trains a Bert base model on GLUE MRPC
# in any of the following settings (with the same script):
# - single CPU or single GPU
# - multi GPUS (using PyTorch distributed mode)
# - (multi) TPUs
# - fp16 (mixed-precision) or fp32 (normal precision)
#
# To run it in each of these various modes, follow the instructions
# in the readme for examples:
# https://github.com/huggingface/accelerate/tree/main/examples
#
########################################################################
MAX_GPU_BATCH_SIZE = 16
EVAL_BATCH_SIZE = 32
def get_dataloaders(accelerator: Accelerator, batch_size: int = 16):
"""
Creates a set of `DataLoader`s for the `glue` dataset,
using "bert-base-cased" as the tokenizer.
Args:
accelerator (`Accelerator`):
An `Accelerator` object
batch_size (`int`, *optional*):
The batch size for the train and validation DataLoaders.
"""
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
datasets = load_dataset("glue", "mrpc")
def tokenize_function(examples):
# max_length=None => use the model max length (it's actually the default)
outputs = tokenizer(examples["sentence1"], examples["sentence2"], truncation=True, max_length=None)
return outputs
# Apply the method we just defined to all the examples in all the splits of the dataset
# starting with the main process first:
with accelerator.main_process_first():
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
remove_columns=["idx", "sentence1", "sentence2"],
)
# We also rename the 'label' column to 'labels' which is the expected name for labels by the models of the
# transformers library
tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
def collate_fn(examples):
# On TPU it's best to pad everything to the same length or training will be very slow.
max_length = 128 if accelerator.distributed_type == DistributedType.XLA else None
# When using mixed precision we want round multiples of 8/16
if accelerator.mixed_precision == "fp8":
pad_to_multiple_of = 16
elif accelerator.mixed_precision != "no":
pad_to_multiple_of = 8
else:
pad_to_multiple_of = None
return tokenizer.pad(
examples,
padding="longest",
max_length=max_length,
pad_to_multiple_of=pad_to_multiple_of,
return_tensors="pt",
)
# Instantiate dataloaders.
train_dataloader = DataLoader(
tokenized_datasets["train"], shuffle=True, collate_fn=collate_fn, batch_size=batch_size
)
eval_dataloader = DataLoader(
tokenized_datasets["validation"], shuffle=False, collate_fn=collate_fn, batch_size=EVAL_BATCH_SIZE
)
return train_dataloader, eval_dataloader
# For testing only
if os.environ.get("TESTING_MOCKED_DATALOADERS", None) == "1":
from accelerate.test_utils.training import mocked_dataloaders
get_dataloaders = mocked_dataloaders # noqa: F811
def training_function(config, args):
# For testing only
if os.environ.get("TESTING_MOCKED_DATALOADERS", None) == "1":
config["num_epochs"] = 2
# New Code #
ddp_comm_hook_type = DDPCommunicationHookType(args.ddp_comm_hook)
ddp_comm_wrapper = DDPCommunicationHookType(args.ddp_comm_wrapper)
ddp_kwargs = DistributedDataParallelKwargs(comm_hook=ddp_comm_hook_type, comm_wrapper=ddp_comm_wrapper)
# Initialize accelerator
accelerator = Accelerator(cpu=args.cpu, mixed_precision=args.mixed_precision, kwargs_handlers=[ddp_kwargs])
# Sample hyper-parameters for learning rate, batch size, seed and a few other HPs
lr = config["lr"]
num_epochs = int(config["num_epochs"])
seed = int(config["seed"])
batch_size = int(config["batch_size"])
metric = evaluate.load("glue", "mrpc")
set_seed(seed)
train_dataloader, eval_dataloader = get_dataloaders(accelerator, batch_size)
# Instantiate the model (we build the model here so that the seed also controls new weight initialization)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", return_dict=True)
# We could avoid this line since the accelerator is set with `device_placement=True` (default value).
# Note that if you are placing tensors on devices manually, this line absolutely needs to be before the optimizer
# creation otherwise training will not work on TPU (`accelerate` will kindly throw an error to make us aware of that).
model = model.to(accelerator.device)
# Instantiate optimizer
optimizer = AdamW(params=model.parameters(), lr=lr)
# Instantiate scheduler
lr_scheduler = get_linear_schedule_with_warmup(
optimizer=optimizer,
num_warmup_steps=100,
num_training_steps=(len(train_dataloader) * num_epochs),
)
# Prepare everything
# There is no specific order to remember, we just need to unpack the objects in the same order we gave them to the
# prepare method.
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
)
# Now we train the model
for epoch in range(num_epochs):
model.train()
for step, batch in enumerate(train_dataloader):
# We could avoid this line since we set the accelerator with `device_placement=True`.
batch.to(accelerator.device)
# We use the new `accumulate` context manager to perform gradient accumulation
with accelerator.accumulate(model):
output = model(**batch)
loss = output.loss
accelerator.backward(loss)
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
model.eval()
for step, batch in enumerate(eval_dataloader):
# We could avoid this line since we set the accelerator with `device_placement=True`.
batch.to(accelerator.device)
with torch.no_grad():
outputs = model(**batch)
predictions = outputs.logits.argmax(dim=-1)
predictions, references = accelerator.gather_for_metrics((predictions, batch["labels"]))
metric.add_batch(
predictions=predictions,
references=references,
)
eval_metric = metric.compute()
# Use accelerator.print to print only on the main process.
accelerator.print(f"epoch {epoch}:", eval_metric)
def main():
parser = argparse.ArgumentParser(description="Simple example of training script.")
parser.add_argument(
"--mixed_precision",
type=str,
default=None,
choices=["no", "fp16", "bf16", "fp8"],
help="Whether to use mixed precision. Choose"
"between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
"and an Nvidia Ampere GPU.",
)
# New Code #
parser.add_argument(
"--ddp_comm_hook",
type=str,
default="no",
choices=["no", "fp16", "bf16", "power_sgd", "batched_power_sgd"],
help="DDP Communication hook to use. Choose between `no`, `fp16`, `bf16`, `power_sgd`, and `batched_power_sgd`.",
)
# New Code #
parser.add_argument(
"--ddp_comm_wrapper",
type=str,
default="no",
choices=["no", "fp16", "bf16"],
help="DDP Communication wrapper to use. Choose between `no`, `fp16`, and `bf16`.",
)
parser.add_argument("--cpu", action="store_true", help="If passed, will train on the CPU.")
args = parser.parse_args()
config = {"lr": 2e-5, "num_epochs": 3, "seed": 42, "batch_size": 16}
training_function(config, args)
if __name__ == "__main__":
main()

View File

@ -0,0 +1,722 @@
#!/usr/bin/env python
# Copyright 2022 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Fine-tuning the library models for causal language modeling (GPT, GPT-2, CTRL, ...)
on a text file or a dataset without using HuggingFace Trainer.
Here is the full list of checkpoints on the hub that can be fine-tuned by this script:
https://huggingface.co/models?filter=text-generation
"""
# You can also adapt this script on your own causal language modeling task. Pointers for this are left as comments.
import argparse
import json
import logging
import math
import os
import random
from itertools import chain
from pathlib import Path
import datasets
import torch
import transformers
from datasets import load_dataset
from huggingface_hub import HfApi
from torch.utils.data import DataLoader
from tqdm.auto import tqdm
from transformers import (
CONFIG_MAPPING,
MODEL_MAPPING,
AutoConfig,
AutoModelForCausalLM,
AutoTokenizer,
SchedulerType,
default_data_collator,
get_scheduler,
)
from transformers.utils.versions import require_version
from accelerate import Accelerator, DistributedType
from accelerate.logging import get_logger
from accelerate.utils import DummyOptim, DummyScheduler, set_seed
logger = get_logger(__name__)
require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/language-modeling/requirements.txt")
MODEL_CONFIG_CLASSES = list(MODEL_MAPPING.keys())
MODEL_TYPES = tuple(conf.model_type for conf in MODEL_CONFIG_CLASSES)
def parse_args():
parser = argparse.ArgumentParser(description="Finetune a transformers model on a causal language modeling task")
parser.add_argument(
"--dataset_name",
type=str,
default=None,
help="The name of the dataset to use (via the datasets library).",
)
parser.add_argument(
"--dataset_config_name",
type=str,
default=None,
help="The configuration name of the dataset to use (via the datasets library).",
)
parser.add_argument(
"--train_file", type=str, default=None, help="A csv or a json file containing the training data."
)
parser.add_argument(
"--validation_file", type=str, default=None, help="A csv or a json file containing the validation data."
)
parser.add_argument(
"--validation_split_percentage",
default=5,
help="The percentage of the train set used as validation set in case there's no validation split",
)
parser.add_argument(
"--model_name_or_path",
type=str,
help="Path to pretrained model or model identifier from huggingface.co/models.",
required=False,
)
parser.add_argument(
"--config_name",
type=str,
default=None,
help="Pretrained config name or path if not the same as model_name",
)
parser.add_argument(
"--tokenizer_name",
type=str,
default=None,
help="Pretrained tokenizer name or path if not the same as model_name",
)
parser.add_argument(
"--use_slow_tokenizer",
action="store_true",
help="If passed, will use a slow tokenizer (not backed by the 🤗 Tokenizers library).",
)
parser.add_argument(
"--per_device_train_batch_size",
type=int,
default=8,
help="Batch size (per device) for the training dataloader.",
)
parser.add_argument(
"--per_device_eval_batch_size",
type=int,
default=8,
help="Batch size (per device) for the evaluation dataloader.",
)
parser.add_argument(
"--learning_rate",
type=float,
default=5e-5,
help="Initial learning rate (after the potential warmup period) to use.",
)
parser.add_argument("--weight_decay", type=float, default=0.0, help="Weight decay to use.")
parser.add_argument("--num_train_epochs", type=int, default=3, help="Total number of training epochs to perform.")
parser.add_argument(
"--max_train_steps",
type=int,
default=None,
help="Total number of training steps to perform. If provided, overrides num_train_epochs.",
)
parser.add_argument(
"--gradient_accumulation_steps",
type=int,
default=1,
help="Number of updates steps to accumulate before performing a backward/update pass.",
)
parser.add_argument(
"--lr_scheduler_type",
type=SchedulerType,
default="linear",
help="The scheduler type to use.",
choices=["linear", "cosine", "cosine_with_restarts", "polynomial", "constant", "constant_with_warmup"],
)
parser.add_argument(
"--num_warmup_steps", type=int, default=0, help="Number of steps for the warmup in the lr scheduler."
)
parser.add_argument("--output_dir", type=str, default=None, help="Where to store the final model.")
parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.")
parser.add_argument(
"--model_type",
type=str,
default=None,
help="Model type to use if training from scratch.",
choices=MODEL_TYPES,
)
parser.add_argument(
"--block_size",
type=int,
default=None,
help=(
"Optional input sequence length after tokenization. The training dataset will be truncated in block of"
" this size for training. Default to the model max input length for single sentence inputs (take into"
" account special tokens)."
),
)
parser.add_argument(
"--preprocessing_num_workers",
type=int,
default=None,
help="The number of processes to use for the preprocessing.",
)
parser.add_argument(
"--overwrite_cache", type=bool, default=False, help="Overwrite the cached training and evaluation sets"
)
parser.add_argument(
"--no_keep_linebreaks", action="store_true", help="Do not keep line breaks when using TXT files."
)
parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.")
parser.add_argument(
"--hub_model_id", type=str, help="The name of the repository to keep in sync with the local `output_dir`."
)
parser.add_argument("--hub_token", type=str, help="The token to use to push to the Model Hub.")
parser.add_argument(
"--checkpointing_steps",
type=str,
default=None,
help="Whether the various states should be saved at the end of every n steps, or 'epoch' for each epoch.",
)
parser.add_argument(
"--resume_from_checkpoint",
type=str,
default=None,
help="If the training should continue from a checkpoint folder.",
)
# New Code #
# Whether to load the best model at the end of training
parser.add_argument(
"--load_best_model",
action="store_true",
help="Whether to load the best model at the end of training",
)
parser.add_argument(
"--with_tracking",
action="store_true",
help="Whether to enable experiment trackers for logging.",
)
parser.add_argument(
"--report_to",
type=str,
default="all",
help=(
'The integration to report the results and logs to. Supported platforms are `"tensorboard"`,'
' `"wandb"`, `"comet_ml"`, and `"dvclive"`. Use `"all"` (default) to report to all integrations.'
"Only applicable when `--with_tracking` is passed."
),
)
args = parser.parse_args()
# Sanity checks
if args.dataset_name is None and args.train_file is None and args.validation_file is None:
raise ValueError("Need either a dataset name or a training/validation file.")
else:
if args.train_file is not None:
extension = args.train_file.split(".")[-1]
assert extension in ["csv", "json", "txt"], "`train_file` should be a csv, json or txt file."
if args.validation_file is not None:
extension = args.validation_file.split(".")[-1]
assert extension in ["csv", "json", "txt"], "`validation_file` should be a csv, json or txt file."
if args.push_to_hub:
assert args.output_dir is not None, "Need an `output_dir` to create a repo when `--push_to_hub` is passed."
return args
# New Code #
def evaluate(args, model, eval_dataloader, accelerator, eval_dataset):
model.eval()
losses = []
for step, batch in enumerate(eval_dataloader):
with torch.no_grad():
outputs = model(**batch)
loss = outputs.loss
losses.append(accelerator.gather_for_metrics(loss.repeat(args.per_device_eval_batch_size)))
losses = torch.cat(losses)
try:
eval_loss = torch.mean(losses)
perplexity = math.exp(eval_loss)
except OverflowError:
perplexity = float("inf")
return perplexity, eval_loss
def main():
args = parse_args()
# Initialize the accelerator. We will let the accelerator handle device placement for us in this example.
# If we're using tracking, we also need to initialize it here and it will by default pick up all supported trackers
# in the environment
# when using DeepSpeed, the `gradient_accumulation_steps` is properly set from the DeepSpeed plugin/config
# or from `accelerate launch` via `--gradient_accumulation_steps` else
# defaulting to the passed `args.gradient_accumulation_steps`
accelerator = (
Accelerator(
log_with=args.report_to,
project_dir=args.output_dir,
gradient_accumulation_steps=args.gradient_accumulation_steps,
)
if args.with_tracking
else Accelerator(gradient_accumulation_steps=args.gradient_accumulation_steps)
)
# Make one log on every process with the configuration for debugging.
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
level=logging.INFO,
)
logger.info(accelerator.state, main_process_only=False)
if accelerator.is_local_main_process:
datasets.utils.logging.set_verbosity_warning()
transformers.utils.logging.set_verbosity_info()
else:
datasets.utils.logging.set_verbosity_error()
transformers.utils.logging.set_verbosity_error()
# If passed along, set the training seed now.
if args.seed is not None:
set_seed(args.seed)
# Handle the repository creation
if accelerator.is_main_process:
if args.push_to_hub:
api = HfApi(token=args.hub_token)
# Create repo (repo_name from args or inferred)
repo_name = args.hub_model_id
if repo_name is None:
repo_name = Path(args.output_dir).absolute().name
repo_id = api.create_repo(repo_name, exist_ok=True).repo_id
with open(os.path.join(args.output_dir, ".gitignore"), "w+") as gitignore:
if "step_*" not in gitignore:
gitignore.write("step_*\n")
if "epoch_*" not in gitignore:
gitignore.write("epoch_*\n")
elif args.output_dir is not None:
os.makedirs(args.output_dir, exist_ok=True)
accelerator.wait_for_everyone()
# Get the datasets: you can either provide your own CSV/JSON/TXT training and evaluation files (see below)
# or just provide the name of one of the public datasets available on the hub at https://huggingface.co/datasets/
# (the dataset will be downloaded automatically from the datasets Hub).
#
# For CSV/JSON files, this script will use the column called 'text' or the first column if no column called
# 'text' is found. You can easily tweak this behavior (see below).
#
# In distributed training, the load_dataset function guarantees that only one local process can concurrently
# download the dataset.
if args.dataset_name is not None:
# Downloading and loading a dataset from the hub.
raw_datasets = load_dataset(args.dataset_name, args.dataset_config_name)
if "validation" not in raw_datasets.keys():
raw_datasets["validation"] = load_dataset(
args.dataset_name,
args.dataset_config_name,
split=f"train[:{args.validation_split_percentage}%]",
)
raw_datasets["train"] = load_dataset(
args.dataset_name,
args.dataset_config_name,
split=f"train[{args.validation_split_percentage}%:]",
)
else:
data_files = {}
dataset_args = {}
if args.train_file is not None:
data_files["train"] = args.train_file
if args.validation_file is not None:
data_files["validation"] = args.validation_file
extension = args.train_file.split(".")[-1]
if extension == "txt":
extension = "text"
dataset_args["keep_linebreaks"] = not args.no_keep_linebreaks
raw_datasets = load_dataset(extension, data_files=data_files, **dataset_args)
# If no validation data is there, validation_split_percentage will be used to divide the dataset.
if "validation" not in raw_datasets.keys():
raw_datasets["validation"] = load_dataset(
extension,
data_files=data_files,
split=f"train[:{args.validation_split_percentage}%]",
**dataset_args,
)
raw_datasets["train"] = load_dataset(
extension,
data_files=data_files,
split=f"train[{args.validation_split_percentage}%:]",
**dataset_args,
)
# See more about loading any type of standard or custom dataset (from files, python dict, pandas DataFrame, etc) at
# https://huggingface.co/docs/datasets/loading_datasets.html.
# Load pretrained model and tokenizer
#
# In distributed training, the .from_pretrained methods guarantee that only one local process can concurrently
# download model & vocab.
if args.config_name:
config = AutoConfig.from_pretrained(args.config_name)
elif args.model_name_or_path:
config = AutoConfig.from_pretrained(args.model_name_or_path)
else:
config = CONFIG_MAPPING[args.model_type]()
logger.warning("You are instantiating a new config instance from scratch.")
if args.tokenizer_name:
tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_name, use_fast=not args.use_slow_tokenizer)
elif args.model_name_or_path:
tokenizer = AutoTokenizer.from_pretrained(args.model_name_or_path, use_fast=not args.use_slow_tokenizer)
else:
raise ValueError(
"You are instantiating a new tokenizer from scratch. This is not supported by this script."
"You can do it from another script, save it, and load it from here, using --tokenizer_name."
)
if args.model_name_or_path:
model = AutoModelForCausalLM.from_pretrained(
args.model_name_or_path,
from_tf=bool(".ckpt" in args.model_name_or_path),
config=config,
)
else:
logger.info("Training new model from scratch")
model = AutoModelForCausalLM.from_config(config)
model.resize_token_embeddings(len(tokenizer))
# Preprocessing the datasets.
# First we tokenize all the texts.
column_names = raw_datasets["train"].column_names
text_column_name = "text" if "text" in column_names else column_names[0]
def tokenize_function(examples):
return tokenizer(examples[text_column_name])
with accelerator.main_process_first():
tokenized_datasets = raw_datasets.map(
tokenize_function,
batched=True,
num_proc=args.preprocessing_num_workers,
remove_columns=column_names,
load_from_cache_file=not args.overwrite_cache,
desc="Running tokenizer on dataset",
)
if args.block_size is None:
block_size = tokenizer.model_max_length
if block_size > 1024:
logger.warning(
f"The tokenizer picked seems to have a very large `model_max_length` ({tokenizer.model_max_length}). "
"Picking 1024 instead. You can change that default value by passing --block_size xxx."
)
block_size = 1024
else:
if args.block_size > tokenizer.model_max_length:
logger.warning(
f"The block_size passed ({args.block_size}) is larger than the maximum length for the model"
f"({tokenizer.model_max_length}). Using block_size={tokenizer.model_max_length}."
)
block_size = min(args.block_size, tokenizer.model_max_length)
# Main data processing function that will concatenate all texts from our dataset and generate chunks of block_size.
def group_texts(examples):
# Concatenate all texts.
concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()}
total_length = len(concatenated_examples[list(examples.keys())[0]])
# We drop the small remainder, we could add padding if the model supported it instead of this drop, you can
# customize this part to your needs.
if total_length >= block_size:
total_length = (total_length // block_size) * block_size
# Split by chunks of max_len.
result = {
k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
for k, t in concatenated_examples.items()
}
result["labels"] = result["input_ids"].copy()
return result
# Note that with `batched=True`, this map processes 1,000 texts together, so group_texts throws away a remainder
# for each of those groups of 1,000 texts. You can adjust that batch_size here but a higher value might be slower
# to preprocess.
#
# To speed up this part, we use multiprocessing. See the documentation of the map method for more information:
# https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map
with accelerator.main_process_first():
lm_datasets = tokenized_datasets.map(
group_texts,
batched=True,
num_proc=args.preprocessing_num_workers,
load_from_cache_file=not args.overwrite_cache,
desc=f"Grouping texts in chunks of {block_size}",
)
train_dataset = lm_datasets["train"]
eval_dataset = lm_datasets["validation"]
# Log a few random samples from the training set:
for index in random.sample(range(len(train_dataset)), 3):
logger.info(f"Sample {index} of the training set: {train_dataset[index]}.")
# DataLoaders creation:
train_dataloader = DataLoader(
train_dataset, shuffle=True, collate_fn=default_data_collator, batch_size=args.per_device_train_batch_size
)
eval_dataloader = DataLoader(
eval_dataset, collate_fn=default_data_collator, batch_size=args.per_device_eval_batch_size
)
# Optimizer
# Split weights in two groups, one with weight decay and the other not.
no_decay = ["bias", "LayerNorm.weight"]
optimizer_grouped_parameters = [
{
"params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
"weight_decay": args.weight_decay,
},
{
"params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
"weight_decay": 0.0,
},
]
# New Code #
# Creates Dummy Optimizer if `optimizer` was specified in the config file else creates Adam Optimizer
optimizer_cls = (
torch.optim.AdamW
if accelerator.state.deepspeed_plugin is None
or "optimizer" not in accelerator.state.deepspeed_plugin.deepspeed_config
else DummyOptim
)
optimizer = optimizer_cls(optimizer_grouped_parameters, lr=args.learning_rate)
# On TPU, the tie weights in our model have been disconnected, so we need to restore the ties.
if accelerator.distributed_type == DistributedType.XLA:
model.tie_weights()
# Scheduler and math around the number of training steps.
num_update_steps_per_epoch = math.ceil(len(train_dataloader) / accelerator.gradient_accumulation_steps)
overrode_max_train_steps = False
if args.max_train_steps is None:
args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
overrode_max_train_steps = True
else:
args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch)
# New Code #
# Creates a DummyScheduler if a `scheduler` is specified in the DeepSpeed config file, else creates a scheduler of type `args.lr_scheduler_type`
if (
accelerator.state.deepspeed_plugin is None
or "scheduler" not in accelerator.state.deepspeed_plugin.deepspeed_config
):
lr_scheduler = get_scheduler(
name=args.lr_scheduler_type,
optimizer=optimizer,
num_warmup_steps=args.num_warmup_steps,
num_training_steps=args.max_train_steps,
)
else:
lr_scheduler = DummyScheduler(
optimizer, total_num_steps=args.max_train_steps, warmup_num_steps=args.num_warmup_steps
)
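# Likewise, `DummyScheduler` is a placeholder: DeepSpeed builds the real scheduler from the `scheduler`
# section of its config during `prepare`, so the schedule should be defined in only one place.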
# Prepare everything with our `accelerator`.
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
)
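# `accelerator.prepare` wraps the model for the active distributed setup (e.g. a DeepSpeedEngine here,
# or DDP otherwise), shards the dataloaders across processes, and hooks the optimizer and scheduler
# into mixed-precision and gradient-accumulation handling.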
# We need to recalculate our total training steps as the size of the training dataloader may have changed.
num_update_steps_per_epoch = math.ceil(len(train_dataloader) / accelerator.gradient_accumulation_steps)
if overrode_max_train_steps:
args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
# Afterwards we recalculate our number of training epochs
args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch)
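# Worked example (assumed numbers): 1,000 batches per epoch with gradient_accumulation_steps=4 gives
# 250 update steps per epoch, so max_train_steps=1,000 corresponds to num_train_epochs=4.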
# Figure out after how many steps we should save the Accelerator states
checkpointing_steps = args.checkpointing_steps
if checkpointing_steps is not None and checkpointing_steps.isdigit():
checkpointing_steps = int(checkpointing_steps)
# We need to initialize the trackers we use, and also store our configuration.
# The trackers are initialized automatically on the main process.
if args.with_tracking:
experiment_config = vars(args)
# TensorBoard cannot log Enums, need the raw value
experiment_config["lr_scheduler_type"] = experiment_config["lr_scheduler_type"].value
accelerator.init_trackers("clm_no_trainer", experiment_config)
# Train!
total_batch_size = (
args.per_device_train_batch_size * accelerator.num_processes * accelerator.gradient_accumulation_steps
)
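# Example (assumed numbers): per_device_train_batch_size=8, 4 processes, gradient_accumulation_steps=2
# gives a total train batch size of 8 * 4 * 2 = 64 examples per optimizer update.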
logger.info("***** Running training *****")
logger.info(f" Num examples = {len(train_dataset)}")
logger.info(f" Num Epochs = {args.num_train_epochs}")
logger.info(f" Instantaneous batch size per device = {args.per_device_train_batch_size}")
logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}")
logger.info(f" Gradient Accumulation steps = {accelerator.gradient_accumulation_steps}")
logger.info(f" Total optimization steps = {args.max_train_steps}")
# Only show the progress bar once on each machine.
progress_bar = tqdm(range(args.max_train_steps), disable=not accelerator.is_local_main_process)
completed_steps = 0
starting_epoch = 0
best_metric = None
best_metric_checkpoint = None
# Potentially load in the weights and states from a previous save
if args.resume_from_checkpoint:
accelerator.load_state(args.resume_from_checkpoint)
accelerator.print(f"Resumed from checkpoint: {args.resume_from_checkpoint}")
path = os.path.basename(args.resume_from_checkpoint)
training_difference = os.path.splitext(path)[0]
if "epoch" in training_difference:
starting_epoch = int(training_difference.replace("epoch_", "")) + 1
resume_step = None
completed_steps = starting_epoch * num_update_steps_per_epoch
else:
resume_step = int(training_difference.replace("step_", ""))
starting_epoch = resume_step // num_update_steps_per_epoch
resume_step -= starting_epoch * num_update_steps_per_epoch
completed_steps = resume_step
# update progress bar if resumed from checkpoint
progress_bar.update(completed_steps)
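# Checkpoints are named either `epoch_{n}` (saved at epoch boundaries) or `step_{n}` (saved every
# `checkpointing_steps` updates), which is how the resume logic above tells the two apart.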
for epoch in range(starting_epoch, args.num_train_epochs):
model.train()
if args.with_tracking:
total_loss = 0
# Use the new `skip_first_batches` helper to skip already-processed batches when resuming from a checkpoint
if args.resume_from_checkpoint and epoch == starting_epoch and resume_step is not None:
# We need to skip steps until we reach the resumed step
active_dataloader = accelerator.skip_first_batches(train_dataloader, resume_step)
else:
# After the first iteration though, we need to go back to the original dataloader
active_dataloader = train_dataloader
for step, batch in enumerate(active_dataloader):
# In particular, DeepSpeed handles `gradient_accumulation` via `DeepSpeedEngine`.
# Below, we use `accelerator.accumulate` so that the user can switch to other approaches
# such as plain DDP or PyTorch FSDP without having to change any code, as things are
# all handled across different distributed setups.
with accelerator.accumulate(model):
outputs = model(**batch)
loss = outputs.loss
accelerator.backward(loss)
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
if accelerator.sync_gradients:
progress_bar.update(1)
completed_steps += 1
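# `sync_gradients` is only True on the micro-batch where gradients are actually synchronized and
# applied, i.e. once per `gradient_accumulation_steps` batches, so the progress bar counts optimizer updates.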
# We keep track of the loss at each epoch
if args.with_tracking:
step_loss = accelerator.reduce(loss.detach().clone()).item()
total_loss += step_loss
if isinstance(checkpointing_steps, int):
if completed_steps % checkpointing_steps == 0:
output_dir = f"step_{completed_steps}"
if args.output_dir is not None:
output_dir = os.path.join(args.output_dir, output_dir)
accelerator.save_state(output_dir)
if completed_steps >= args.max_train_steps:
break
perplexity, eval_loss = evaluate(args, model, eval_dataloader, accelerator, eval_dataset)
logger.info(f"epoch {epoch}: perplexity: {perplexity} eval_loss: {eval_loss}")
if args.with_tracking:
accelerator.log(
{
"perplexity": perplexity,
"eval_loss": eval_loss,
"train_loss": total_loss / len(train_dataloader),
"epoch": epoch,
"step": completed_steps,
},
step=completed_steps,
)
if isinstance(checkpointing_steps, str) and checkpointing_steps == "epoch":
accelerator.save_state(os.path.join(args.output_dir, f"epoch_{epoch}"))
# New Code #
# Tracks the best checkpoint and best metric
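# Lower perplexity is better, so the `best_metric > perplexity` check below overwrites the
# `best_checkpoint` directory whenever the current epoch improves on the best metric so far.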
if best_metric is None or best_metric > perplexity:
best_metric = perplexity
best_metric_checkpoint = os.path.join(args.output_dir, "best_checkpoint")
accelerator.save_state(best_metric_checkpoint)
accelerator.print(f"New best metric: {best_metric} at epoch {epoch}")
accelerator.print(f"best_metric_checkpoint: {best_metric_checkpoint}")
# New Code #
# Loads the best checkpoint after the training is finished
if args.load_best_model:
accelerator.load_state(best_metric_checkpoint)
# New Code #
# Evaluates using the best checkpoint
perplexity, eval_loss = evaluate(args, model, eval_dataloader, accelerator, eval_dataset)
logger.info(f"Best model metrics: perplexity: {perplexity} eval_loss: {eval_loss}")
if perplexity != best_metric:
raise AssertionError(
f"Best metric {best_metric} does not match the metric {perplexity} of the loaded best model."
)
if args.output_dir is not None:
accelerator.wait_for_everyone()
unwrapped_model = accelerator.unwrap_model(model)
# New Code #
# Saves the whole (unpartitioned) fp16 model to the output directory when in ZeRO Stage-3, if
# `stage3_gather_16bit_weights_on_model_save` is True in the DeepSpeed config file or
# `zero3_save_16bit_model` is True in the DeepSpeed plugin.
# For ZeRO Stages 1 and 2, models are saved as usual in the output directory.
# The saved model file is `pytorch_model.bin`.
unwrapped_model.save_pretrained(
args.output_dir,
is_main_process=accelerator.is_main_process,
save_function=accelerator.save,
state_dict=accelerator.get_state_dict(model),
)
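# `accelerator.get_state_dict(model)` gathers the full state dict on the main process (consolidating
# ZeRO Stage-3 partitions when the option above is enabled) so a standard checkpoint can be written.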
if accelerator.is_main_process:
tokenizer.save_pretrained(args.output_dir)
if args.push_to_hub:
api.upload_folder(
repo_id=repo_id,
folder_path=args.output_dir,
commit_message="End of training",
)
with open(os.path.join(args.output_dir, "all_results.json"), "w") as f:
json.dump({"perplexity": perplexity, "eval_loss": eval_loss.item()}, f)
if __name__ == "__main__":
main()
