Compare commits

...

390 Commits

Author SHA1 Message Date
8514c35192 Release: v0.21.0 2023-07-13 11:16:25 -04:00
5b9c5881b6 add compatibility with peft (#1725)
* add compatibility with peft

* update docs
2023-07-13 10:33:44 -04:00
0209606364 add Comfy-UI (#1723) 2023-07-13 19:02:50 +05:30
5909c1a514 Fix typo 2023-07-13 09:27:30 -04:00
e7150b0b15 New tactic (#1719) 2023-07-12 18:50:17 -04:00
e8c64f598b Remove duplicate code (#1717) 2023-07-12 14:22:07 -04:00
a14081ccc5 Optimize get_scale to reduce async calls (#1718)
* Optimize

* Comment
2023-07-12 14:00:28 -04:00
d895809613 Keep old behavior (#1716) 2023-07-12 13:24:31 -04:00
02015eb25c fix version (#1701) 2023-07-12 11:48:48 -04:00
19bcd43e14 Modify loading checkpoint behavior (#1715)
* Add check for the whole state dict

* fix style
2023-07-12 11:48:06 -04:00
59f2fff3cf add multi_gpu decorator (#1712) 2023-07-12 11:17:07 -04:00
c33adecc9f Add Ascend NPU accelerator support (#1676)
* add Ascend NPU accelerator support

* fix code styles

* enable accelerate test on npu

* fix typo & code styles

---------

Co-authored-by: jihuazhong <jihuazhong1@huawei.com>
2023-07-12 08:43:02 -04:00
518c206a2a Fix the bug where DataLoaderDispatcher gets stuck in an infinite wait when the dataset is an IterDataPipe during multi-process training. (#1709)
Co-authored-by: YU Xinyuan <yuxinyuan02@corp.netease.com>
2023-07-12 07:44:36 -04:00
65b5c2cfad Fixes for issue #1683: failed to run accelerate config in colab (#1692)
* Fixes for issue #1683: failed to run accelerate config in colab

* change input2 to a formal variable name

* removed unnecessary spaces

* fixed reformatting issue during the quality check

* refactored the code; passed black, ruff, and doc-builder checks; modified the prompt in colab

* fixed black, ruff, doc-builder; modified prompt during choice input

* use the utils.imports _is_package_available() method instead, to be consistent with the rest of the library code

* add default choice, wrap import check with try/except; passed quality check, style check, and test cases
2023-07-12 07:15:02 -04:00
7954a28a71 Fix launcher validation (#1705)
* unstash

* fix validation of launcher args

* bug fix

* cond for tpu
2023-07-11 14:30:44 -04:00
3bdb35abfa Skip tests when bnb isn't available (#1706)
* bnb is available

* Some more
2023-07-11 14:29:17 -04:00
d58aac2e1e Update tracking.md (#1702) 2023-07-11 14:15:59 -04:00
a4c2654f50 Deepcopy on Accelerator to return self (#1694)
* Deepcopy

* Clean

* deepcopy
2023-07-11 14:14:15 -04:00
27d29087b2 Add offload for 8-bit model (#1699)
* Add offload for 8-bit model

* fix saved 8bit model offload and add tests

* Update src/accelerate/utils/modeling.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/accelerate/utils/modeling.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* add doc on how offload works

* remove enable_offload

* make style doc

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-07-11 13:46:15 -04:00
c7698834fc Move mixed precision wrapping ahead of DDP/FSDP wrapping (#1682)
* Update accelerator.py

* Update accelerator.py

* Update accelerator.py

* Update accelerator.py

* Update accelerator.py

* Update test_script.py

* Update test_script.py

* Update test_script.py

* Update test_script.py

* Update test_script.py
2023-07-11 10:35:13 -04:00
64d7b58c44 Improve quality errors (#1698)
* Purposefully fail

* Step summary

* Right bash

* Take 2

* Post to job summary

* Extra space
2023-07-11 09:09:02 -04:00
e3aae2ac65 Fixup docs (#1697) 2023-07-11 08:36:37 -04:00
d0a7991b65 Fix nightly tests (#1696)
* Debug start

* Fix

* Workflow
2023-07-11 08:36:23 -04:00
180ef7c415 update readme in examples (#1678) 2023-07-10 12:19:27 -04:00
95bffdec43 remove duplicate class (#1691) 2023-07-07 10:29:00 -04:00
c74c28c6d1 Fix workflow CI (#1690)
* Try again

* Accelerate only

* Try pushing again
2023-07-07 09:46:00 -04:00
e0f5e03009 fix bnb tests (#1679)
* fix tests

* Fix 8bit serialization tests
2023-07-05 10:13:20 -04:00
dfbfbdfea8 Add docs for saving Transformers models (#1671)
* add section to package_reference/accelerator.md explaining saving for Transformers models

* rename `model` to `unwrapped_model`

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-07-03 10:34:30 -04:00
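For reference, a minimal sketch of the saving pattern that doc section covers (assuming an existing `accelerator` and a prepared Transformers `model`; the directory name is illustrative):

```python
# Unwrap the model from its DDP/FSDP wrapper before saving so the checkpoint
# can be reloaded outside of Accelerate.
unwrapped_model = accelerator.unwrap_model(model)
unwrapped_model.save_pretrained(
    "my_checkpoint",
    is_main_process=accelerator.is_main_process,
    save_function=accelerator.save,
)
```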
24ae624d96 Doc big model inference (#1670)
* change example

* fix spaces

* add link to transformers

* Fix style
2023-06-30 18:00:52 -04:00
40f822a1e3 replace save funct in doc (#1672) 2023-06-30 17:03:19 -04:00
a0bfe2140c Bnb quantization (#1626)
* Add get_quantized_model func

* Add tests for 4bit and 8bit quantization

* Add tests

* Fix style

* Add offload tests

* Fix style

* Fix

* Fix conflict

* fix generate quality test

* fix style

* add check for bnb layers and fix .to(cpu)

* Fix 8bit serialization and memory issue

* add import

* Change quantize_model to load_and_quantize_model

* Add tests for saving 8bit model

* Fix bnb dataclass

* fix style

* fix tests

* fix style

* remove dependency on tie_weights

* remove dependency on base_model_prefix

* remove dependency on device

* fix style

* Add doc about quantization

* fix import

* Fix text

* fix func name

* fix arg in dataclass

* Update docs/source/usage_guides/quantization.md

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* fix funct name

* Add real model

* Fix doc

* put bash tag

* Update src/accelerate/utils/bnb.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-06-30 10:59:04 -04:00
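A hedged usage sketch of the API this PR lands (the empty model and checkpoint path are placeholders):

```python
from accelerate.utils import BnbQuantizationConfig, load_and_quantize_model

# Quantize a meta-device ("empty") model to 8-bit while loading its weights
# from a checkpoint on disk.
bnb_config = BnbQuantizationConfig(load_in_8bit=True)
quantized_model = load_and_quantize_model(
    empty_model,
    bnb_quantization_config=bnb_config,
    weights_location="path/to/checkpoint",
    device_map="auto",
)
```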
c6443f8bd4 Update broken Runhouse link in examples/README.md (#1668) 2023-06-30 08:51:28 -05:00
3cd02e9340 change the import place to avoid import error (#1653) 2023-06-30 11:55:30 +05:30
17ec2ede11 remove safetensor dep on shard_checkpoint (#1664)
* remove safetensor dep on shard_checkpoint

* fix style

* group function
2023-06-29 11:23:13 -04:00
e30938700a 🚨🚨🚨 Spring cleaning: PyTorch 1.10 🚨🚨🚨 (#1662)
* Bookmark

* Bump torch v

* More stuff

* Remove never called else
2023-06-29 09:26:15 -04:00
b864946606 🚨🚨🚨 Spring cleaning: Python 3.8 🚨🚨🚨 (#1661)
* Py 3.8

* Rm typed dict

* Workflows
2023-06-29 08:46:19 -04:00
bc234c040c [BigModeling] Final fix for dispatch int8 and fp4 models (#1660)
* final fix for dispatch int8 and fp4 models

* Update src/accelerate/big_modeling.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2023-06-28 11:16:13 -04:00
662a7dd905 docker cpu py version (#1659) 2023-06-28 10:37:29 -04:00
d3db2d4fe5 TIL (#1657) 2023-06-28 10:36:49 -04:00
96f926a25e Bump integration (#1658) 2023-06-28 10:32:43 -04:00
a9d43cda80 [BigModeling] Add missing check for quantized models (#1652)
* add missing check

* better check

* better check

* much better check
2023-06-28 16:07:30 +02:00
effccbdc84 Check for port usage before launch (#1656)
* Check for port usage

* Just comm

* Right flag in err

* Better err, happy now
2023-06-28 09:10:01 -04:00
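The probe presumably looks something like this minimal sketch (the helper name and host are assumptions, not the PR's exact code):

```python
import socket

# Return True if something is already listening on the port, so launch can
# fail early with a clear error instead of a cryptic rendezvous failure.
def is_port_in_use(port: int) -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        return sock.connect_ex(("127.0.0.1", port)) == 0
```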
d141b4ce79 Fix device_map (#1651) 2023-06-27 21:36:00 -04:00
bc49d0f9b3 Doc save model (#1650)
* add doc for save_model func

* fix doc

* fix path issue

* add load_checkpoint_in_model doc in utilities

* oops

* Update docs/source/package_reference/utilities.md

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-06-27 16:08:56 -04:00
5ea7c81277 Change dispatch_model when we have only one device (#1648)
* Change dispatch_model when we have only one device

* Fix style

* add else statement

* fix style

* Fix error message

* Fix style
2023-06-27 14:58:11 -04:00
efe4481a28 add save model (#1641)
* add save model

* Fix duplicates function and remove args

* Fix style

* fix description

* add save_model to Accelerator object

* Revert "fix potential OOM when resuming with multi-GPU training (#1444)"

This reverts commit 3a381bfa48dfb082c1f8e892a9a07ca5717bf0df.

* Fix style

* Fix description

* Replace state_dict() by accelerator get_state_dict

* Fix state dict

* clean comment
2023-06-27 11:10:42 -04:00
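A minimal usage sketch of the new method (directory name illustrative; `accelerator` and `model` assumed to exist):

```python
# Saves the model weights (sharded if large) into the given directory.
accelerator.save_model(model, "saved_model_dir")
```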
df215cc243 Add skorch to runners (#1646)
* Skorch tests

* Take 2

* runs-on

* Take 2

* Rm needs

* Needs testing deps

* dep

* Only use all GPUs

* Add skorch tests

* rm

* nl
2023-06-27 10:08:22 -04:00
5791d949ff fix modeling low zero (#1634)
* fix modeling low zero

* low zero logic change
2023-06-26 13:19:48 -04:00
b76409ba05 fix autocasting bug (#1637)
* fix autocasting bug

* refactor and resolve comment
2023-06-26 20:18:36 +05:30
a25c4eacae Swap disable rich (#1640) 2023-06-26 09:59:10 -04:00
d8437ae096 Fix nightly 2023-06-26 09:20:01 -04:00
2fa22f3342 deepspeed z2/z1 state_dict bloating fix (#1638)
* deepspeed z2/z1 state_dict bloating fix

* fix
2023-06-26 17:44:36 +05:30
a2ecb58132 fix: Megatron is not installed. please build it from source. (#1636)
The megatron package name is mismatched with the dist directory name.

Signed-off-by: yuanwu <yuan.wu@intel.com>
2023-06-26 08:13:28 -04:00
73cc944067 fixes offload dtype (#1631)
* Fix offload dtype

* Set dtype on meta device

* fix style
2023-06-22 17:38:09 -04:00
b16916f447 Fix transformers sync bug with accumulate (#1624)
* Fix transformers sync

* Docs + expose

* Right arg

* bool
2023-06-22 04:42:54 -04:00
36f8e48747 Fix workflow (#1625)
* Fix steps

* Right runs-on

* Fix directory

* Just integration

* Fix check

* Disable wandb

* Fin

* Diff
2023-06-21 16:04:55 -04:00
790cb8b461 Fix tb issue (#1623) 2023-06-21 13:48:41 -04:00
7b4d12623a Doc to md (#1618)
* Convert doc files to MD

* Convert doc files to Markdown
2023-06-20 18:12:19 -04:00
956c6baf71 Fix failing multinode tests (#1616)
* Should fix multinode test

* For testing, remove after

* try this

* Try disabling

* Try again

* move more

* Fix multinode tests

* New check

* Fix err

* Fix test
2023-06-20 15:32:13 -04:00
485e8c8cb4 Ignore low_zero option when only one device is available (#1617) 2023-06-20 12:28:56 -04:00
aaf38c2f35 fix for arc gpus (#1615) 2023-06-20 11:09:11 -04:00
f433457244 reset end_of_dataloader for dataloader_dispatcher (#1609)
* reset end_of_dataloader for dataloader_dispatcher

* add ruff fixes
2023-06-20 08:41:11 -04:00
535b52cef2 Remove GPU safetensors env variable (#1603) 2023-06-16 10:59:41 -04:00
e60a424398 Remove asking xpu plugin for non xpu devices (#1594)
* remove asking xpu plugin for non xpu devices

* style
2023-06-15 13:11:24 -04:00
32f85ce524 Add triggers for CI workflow (#1597)
* Trigger

* Space
2023-06-15 09:12:41 -04:00
0983a9b9b4 Integration tests (#1593)
* Integration tests

* Typofix

* Clean up python version

* Trainer typo

* Clean env

* rm cache
2023-06-15 02:42:34 -04:00
e5d0df44f0 Update modeling.py (#1595) 2023-06-14 17:59:28 -04:00
50eabe5b1d FSDP updates (#1576)
* FSDP updates

* quality and import fixes

* bug fix and adding contributors

Co-Authored-By: Vik Paruchuri <github@vikas.sh>
Co-Authored-By: raghavanone <115454562+raghavanone@users.noreply.github.com>

* fix 🐛

* update docs and example

* quality

* fixes and updates

* use logger

* fix circular dependency issue

* quality

* refactor

* quality

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* address comment

---------

Co-authored-by: Vik Paruchuri <github@vikas.sh>
Co-authored-by: raghavanone <115454562+raghavanone@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-06-13 20:36:32 +05:30
f2d1047059 Update checkpoint.mdx (#1587) 2023-06-13 09:57:52 -04:00
3e68f1da63 Fix test (#1586) 2023-06-13 09:03:47 -04:00
f8b0696076 fix logger level (#1579) 2023-06-13 08:55:10 -04:00
51a2ca5d88 Return false if CUDA available (#1581) 2023-06-13 08:44:31 -04:00
51de46e368 Update training_tpu.mdx (#1582) 2023-06-13 07:52:59 -04:00
e2b0224ec4 improve oob performance when using mpirun to start DDP finetuning without accelerate launch (#1575)
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2023-06-13 07:52:26 -04:00
db11bd5035 Get Torch version using importlib instead of pkg_resources (#1585)
This fixes the following warning:
> pkg_resources is deprecated as an API
2023-06-13 07:50:12 -04:00
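The replacement pattern, sketched:

```python
import importlib.metadata

# Query the installed torch version without importing pkg_resources,
# which now emits "pkg_resources is deprecated as an API".
torch_version = importlib.metadata.version("torch")
```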
543c59af22 Expand prepare() doc (#1580)
* Expand device_placement

* Expand doc

* Update src/accelerate/accelerator.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update accelerator.py

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-06-12 14:37:43 -04:00
81765e6e00 Make sure that we only set is_accelerator_prepared on items accelerate actually prepares (#1578)
* Other items

* Better test and check

* Align test

* Clean
2023-06-12 12:09:31 -04:00
a4ebc14fab fix the bug in xpu (#1508)
* fix bug in is_xpu_available

* fix device configure bug for DDP with ccl backend

* enable accelerate launch for DistributedType.MULTI_XPU

* fix the bug in wait_for_everyone for xpu

* fix the bug in rng_sync_check for xpu

* refactoring code according to muellerzr's suggestion

* define RegressionModel4XPU for xpu to avoid ccl bug

* make MULTI_XPU independent on env var 'CCL_WORKER_COUNT'
2023-06-12 11:34:21 -04:00
058f6f70f5 Permanent solution (#1577) 2023-06-12 11:29:36 -04:00
665d5180fc Check for bak and expand docs on directory structure (#1571)
* Check for bak and expand doc

* Better regex

* Update docstring

* Use exclusion at beginning and simplify check for digit
2023-06-09 13:10:53 -04:00
d1ea9ab40c Introduce listify, fix tensorboard silently failing (#1570)
* Introduce untensorify, fix logging with tensor

* Clean imports and make note

* untensorify -> listify
2023-06-09 12:50:28 -04:00
632dce67ab Raise error instead of warn (#1568) 2023-06-09 12:18:26 -04:00
e41864ce9d Update mixed precision integrations in README (#1569) 2023-06-09 11:26:33 -04:00
979991aa78 Update gradient sync docs to reflect importance of optimizer.step() (#1565)
Before this commit, this documentation suggested that model parameters
are updated when `accelerator.backward()` is called (which in turn calls
`loss.backward()`). This isn't the case - parameter updates happen when
`optimizer.step()` is called.

This commit:
1. Updates this documentation to reflect this within the discussion of
   gradient accumulation.
2. Adds calls to `optimizer.step()` as that's key to gradient
   accumulation.
2. Adds optimizer.zero_grad() for consistency with `accelerator.accumulate()`'s docs
3. Does some related word-smithing

To make sure I was thinking about gradient accumulation correctly, I'm
using `huggingface/transformer`'s performance guide for a working
definition of gradient accumulation, which this diff is consistent with:

> The idea behind gradient accumulation is to instead of calculating the
gradients for the whole batch at once to do it in smaller steps. The way
we do that is to calculate the gradients iteratively in smaller batches
by doing a forward and backward pass through the model and accumulating
the gradients in the process. *When enough gradients are accumulated we
run the model’s optimization step*. This way we can easily increase the
overall batch size to numbers that would never fit into the GPU’s
memory. In turn, however, the added forward and backward passes can slow
down the training a bit.

(https://huggingface.co/docs/transformers/perf_train_gpu_one#gradient-accumulation)

Another huggingface example of gradient accumulation that is consistent
with this change: [run_glue_no_trainer.py][0]

[0]: https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue_no_trainer.py#L518-L532
2023-06-09 09:30:43 -04:00
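A sketch of the corrected loop the docs now show (assuming an `Accelerator` configured with `gradient_accumulation_steps` and prepared `model`, `optimizer`, and `dataloader`):

```python
for batch in dataloader:
    with accelerator.accumulate(model):
        outputs = model(**batch)
        accelerator.backward(outputs.loss)  # accumulates gradients
        optimizer.step()                    # parameters only update here,
                                            # once enough gradients accumulate
        optimizer.zero_grad()
```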
7fc1e438d1 [bnb] Fix failing int8 tests (#1567)
* fix int8 tests

* replace with `replace_8bit_linear`
2023-06-09 14:53:07 +02:00
040f178569 Update big_modeling.mdx (#1564) 2023-06-08 15:52:05 -04:00
87c81315a1 Reset dataloader end_of_dataloader at each iter (#1562) 2023-06-08 12:08:17 -04:00
f1e84decc9 [core] Fix possibility to pass NoneType objects in prepare (#1561)
* add possibility to pass nonetype objects

* adds nice test
2023-06-08 14:56:22 +02:00
eafddf02e3 fix the typo when setting the "_accelerator_prepared" attribute (#1560)
* fix the typo when setting the "_accelerator_prepared" attribute

* use the name "_is_accelerate_prepared" instead
2023-06-07 18:18:08 -04:00
f0029d6f60 Fix tests not being ran on multi-GPU nightly (#1558)
* Fix tests not being ran

* More tests
2023-06-07 15:14:02 -04:00
3147de9010 Fix load_state_dict when there is one device and disk (#1557) 2023-06-07 14:57:20 -04:00
d448ebaf90 Update README.md (#1556) 2023-06-07 14:44:27 -04:00
65dd4f2039 Avoid double wrapping of all accelerate.prepare objects (#1555)
* Add step reset to free memory

* Check if not Accelerated Optimizer

* Continue

* Another try

* Check the rest

* Try with just check on init

* Change logic based on review

* Update

* Oops very big logic issue!
2023-06-07 13:37:19 -04:00
7ee2c79da9 Update launch.mdx (#1553) 2023-06-07 13:35:51 -04:00
bbe2e30901 [doc build] Use secrets (#1551) 2023-06-07 18:42:09 +02:00
0ab72613a7 v0.21.0.dev0 2023-06-07 10:12:36 -04:00
6f14e619b2 Update migration.mdx (#1549) 2023-06-07 09:50:09 -04:00
90e9703d99 Eval mode (#1540) 2023-06-07 09:27:05 -04:00
5f21cde3c7 [documentation] grammar fixes in gradient_synchronization.mdx (#1547)
* Update deferring_execution.mdx

* [documentation] grammar fixes in gradient_synchronization.mdx

These changes are grammatical and do not affect the ideas communicated in the file.
2023-06-06 17:06:03 -04:00
76ccfae682 Add mps support to big inference modeling (#1545)
* Add mps support

* make style

* Fix syntax

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Fix condition

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-06-06 16:31:02 -04:00
62357f218f Apply deprecations (#1537)
* MPS

* Update examples

* Fix env var

* device type

* Fix test
2023-06-06 13:04:45 -04:00
be1b76e97a Update deferring_execution.mdx (#1544) 2023-06-06 11:59:30 -04:00
3f2b5da094 Update performance.mdx (#1543) 2023-06-06 09:54:25 -04:00
3f1cb09e7b Update deepspeed.mdx (#1541) 2023-06-06 09:54:03 -04:00
7a39d928f5 Prevent using extra VRAM for static device_map (#1536) 2023-06-06 09:31:41 -04:00
961fe728d9 remove ipexplugin, let ACCELERATE_USE_IPEX/ACCELERATE_USE_XPU control the ipex and xpu (#1503)
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2023-06-06 09:27:31 -04:00
ef0c4bf277 Officially support naive PP for quantized models + PEFT (#1523)
* officially support naive PP

- relax check
- add test

* Apply suggestions from code review

* more tests

* Update src/accelerate/accelerator.py
2023-06-06 14:41:59 +02:00
de855b3247 Raise ValueError on iterable dataset if we've hit the end and attempting to go beyond it (#1531)
* Raise ValueError on iterable

* Clean
2023-06-06 07:51:22 -04:00
b9628f13c2 Check tied parameters (#1529)
* Check that parameters are tied correctly

* Fix style

* Fix condition

* Fix failing test

* Fix check_tied_parameters function

* Fix condition

* Fix arg

* Apply suggestions from code review

Fix log

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Fix tests and comments

Fix comments and tests

Fix description

* Remove dep

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-06-05 15:17:49 -04:00
16ca01feea Refactor mp into its own wrapper (#1527)
* Better, clean version

* Diff

* oops need return

* Make adjustments

* Docstring
2023-06-05 12:00:51 -04:00
4cbbde8945 Fixup deepspeed/cli tests (#1526) 2023-06-05 11:35:21 -04:00
eba6eb79dc Fix a bug when parameters tied belong to the same module (#1514)
* Fix a bug when parameters tied belong to the same module

* Address review comments

* Add tests
2023-06-02 17:07:39 -04:00
109f3272f5 Swap env vars for XPU and IPEX + CLI (#1513)
* Swap env vars

* Clean up CLI

* use_xpu

* Add CLI docs

* Ipex only

* Nit

* Check

* Capitalize

* Make changes from review
2023-06-02 13:30:16 -04:00
85901cdcf9 should set correct dtype to ipex optimize and use amp logic in native… (#1511)
* should set correct dtype to ipex optimize and use amp logic in native_amp logic in prepare_model

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

* remove mix precision set in ipex, directly use it from accelerate state

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

* raise import error if ipex is not valid in prepare ipex

* Update src/accelerate/accelerator.py

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

---------

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2023-06-02 10:45:17 -04:00
5e74d932b9 NVME path support for deepspeed (#1484)
* NVME path support for deepspeed

* modify stage 3 ds test

* review commit and fixes

* review commits
2023-06-02 09:55:17 -04:00
090c65cd9d Add assertion when call prepare with deepspeed config. (#1468) 2023-06-02 09:55:04 -04:00
b7d5d9072a adjust overriding of model's forward function (#1492)
* adjust overriding of model's forward function

* bug fix

* extend solution to all model.forward overrides

* leave fp8 section alone

* make style

---------

Co-authored-by: root <root@orttrainingdev8.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
Co-authored-by: Prathik Rao <prathikrao@microsoft.com@orttrainingdev8.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
2023-06-02 07:52:56 -04:00
d4262021d5 Fix 4bit model on multiple devices (#1506)
* Add 4bit case and fix device index

* Fix style
2023-06-01 15:10:51 -04:00
8ae56dc51d [bnb] Add fp4 support for dispatch (#1505)
* add fp4 support for dispatch

* add tests

* refactor
2023-06-01 20:41:03 +02:00
c9fbb71e37 fix crash when ipex is installed and torch has no xpu (#1502)
Also, when the cpu flag is set, CPU should be used instead of XPU.
2023-06-01 11:48:55 -04:00
4d583ad6a1 Allow key skipping in big model inference (#1491)
* Allow key skipping in big model inference

* Add a repr
2023-05-31 15:04:52 -04:00
70d999ee4a Use empty like when we only need to create buffers (#1497)
* Use empty like

* Make
2023-05-31 11:53:17 -04:00
3913fa4dd0 Let gather_for_metrics always run (#1496) 2023-05-31 10:59:31 -04:00
f9b2e6769b Update README.md (#1493) 2023-05-31 09:25:29 -04:00
d3f8c52f4c Only use IPEX if available (#1495)
* Only use IPEX if available

* Check first, then make plugin
2023-05-31 08:18:13 -04:00
af12e7b023 Add rdzv-backend (#1490)
* Add rdzv

* rm print

* Doc

* Better help
2023-05-31 08:06:55 -04:00
68376babd8 Fix gradient state bugs in multiple dataloader (#1483)
* Fix gradient state bugs in multiple dataloader

* Fix style issue

* Update src/accelerate/data_loader.py

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

* Add docstring

* Fix style

---------

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2023-05-30 10:56:42 -04:00
7d24bdefb5 Move to device (#1478) 2023-05-26 15:01:02 -04:00
bb296348e1 Split tensors as part of split_between_processes (#1477)
* Try with this

* Remove import to be late

* Apply padding properly for tensors

* Pad across tensors

* Check to see if this works

* Use -1

* Properly send the first item as what's to be padded

* Update docstring

* Add tests

* Fix test

* Update typehints and docstrings
2023-05-26 14:23:07 -04:00
0226f75025 Improve sagemaker (#1470)
* Should fix everything now:

* Simplify logic
2023-05-24 15:50:31 -04:00
419c9ce22a Update gradient accumulation docs, and remove redundant example (#1461) 2023-05-24 10:43:42 -04:00
2249fbde0d update register_empty_buffer to match torch args (#1465) 2023-05-24 08:32:38 -04:00
e0ffea5bc3 Check for xpu specifically (#1472) 2023-05-23 12:42:12 -04:00
9a86a49f72 update conversion of layers to retain original data type. (#1467)
* add dtype to retain original dtype of layers in convert_model

* updated params_dtype

* ran make style, quality
2023-05-23 05:19:57 -04:00
70920895e8 Fix skip first batch being permanent (#1466)
* Better version of fix

* Failing diff test

* Special str
2023-05-22 14:18:16 -04:00
bf3cd30a66 4-bit QLoRA via bitsandbytes (4-bit base model + LoRA) (#1458)
* Added change for FP4.

* fix suggestion

* better check

---------

Co-authored-by: younesbelkada <younesbelkada@gmail.com>
2023-05-22 11:35:14 -04:00
bfa74e51d2 Document how to use commands with python module instead of argparse (#1457)
* Include other commands

* Add another paragraph

* Reverse order

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

---------

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
2023-05-19 12:32:54 -04:00
e6699e6aba Refactor and simplify xpu device in state (#1456)
* Refactor and simplify xpu device in state

* review commit
2023-05-19 10:43:24 -04:00
0871e93a74 fix error for CPU DDP using trainer api. (#1455)
init_process_group() got multiple values for argument 'backend'

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2023-05-19 06:32:11 -04:00
86720fdb11 Adds in_order argument that defaults to False, to log in order. (#1262)
* Adds `in_order` argument that defaults to False, to log in order.

Adds an `in_order` argument that defaults to `False`, to log in order.
It really helps with readability. Defaults to `False` so as not to break backwards compatibility.

* fixed formatting

* Update src/accelerate/logging.py

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

* Fixed quality & suggestions

---------

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2023-05-18 15:01:26 -04:00
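Usage, sketched:

```python
from accelerate.logging import get_logger

logger = get_logger(__name__)
# With in_order=True, processes log one after another instead of
# interleaving; it defaults to False for backwards compatibility.
logger.info("finished step", main_process_only=False, in_order=True)
```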
1deab71e3c Update with cli instructions (#1453)
* Update with cli instructions

* Also update basic tut
2023-05-18 11:32:26 -04:00
5d1cee3d81 Auto multigpu logic (#1452) 2023-05-18 11:12:58 -04:00
5904f56c45 [docs] Replace state.rank -> process_index (#1450)
I couldn't find a rank property in `PartialState`.
2023-05-18 07:13:39 -04:00
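The attribute the docs now reference, in a minimal sketch:

```python
from accelerate import PartialState

state = PartialState()
# process_index is the documented replacement for the nonexistent state.rank.
print(f"process {state.process_index} of {state.num_processes}")
```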
99d790dc34 split_between_processes (#1449) 2023-05-17 15:35:36 -04:00
1760d2dc8c Add to (#1448) 2023-05-17 14:52:25 -04:00
b93bfac16d Distributed prompting/inference utility (#1410)
* Splitter

* Rename and fix

* Change value

* Add plus 1?

* mvp

* Nested processes

* Start of implementation

* Fin

* Introduce util

* Return non-nested for now

* Future annotation

* Fix

* Fix failing tests, make it fully nested

* Fin

* Start doc

* Fixup tests

* Add is_torch_version

* Should work now with padding

* Include padding

* Docstrings

* toctree

* Dash

* Note on when padding is needed

* Apply typo fixes from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Try quicklink

* Use dash

* URL

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-05-17 14:41:25 -04:00
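A usage sketch of the utility (the `prompts` list and `run_inference` helper are placeholders):

```python
# Each process receives its own slice of the inputs; apply_padding=True pads
# the last slices so every process gets the same number of items, which
# matters when results are gathered afterwards.
with accelerator.split_between_processes(prompts, apply_padding=True) as subset:
    results = [run_inference(p) for p in subset]
```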
981c6fb8d6 Fix ci (#1447) 2023-05-17 13:49:56 -04:00
6413f25ba9 Raise error when logging improperly (#1446)
* Raise error when logging

* Update src/accelerate/logging.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-05-17 11:16:35 -04:00
39e20d3e55 Fixes in infer_auto_device_map (#1441) 2023-05-17 10:54:42 -04:00
3a381bfa48 fix potential OOM when resuming with multi-GPU training (#1444)
* load `optimizers`, `schedulers`, `scalers` and `states` in different devices

* only apply to the optimizer state
2023-05-17 10:53:17 -04:00
bc82d18821 fixed: ZeroDivisionError: division by zero (#1436)
* Update modeling.py

fixed: ZeroDivisionError: division by zero

* fixed style

* code optimize

---------

Co-authored-by: xingwei <xingwei@i-click.com>
2023-05-17 08:59:12 -04:00
330d60b817 Make sure torch compiled model can also be unwrapped (#1437)
* Make sure torch compiled model can also be unwrapped

* Apply suggestions from code review

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

* add tests

* fix double import

---------

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2023-05-16 19:03:36 +01:00
612ecef7b8 Fix XPU (#1440) 2023-05-16 13:03:22 -04:00
9493d7276b [core] Introducing CustomDtype enum for custom dtypes (#1434)
* working v1 - draft

* format

* more comments
2023-05-16 16:24:17 +02:00
40c6e0ca41 Ensure that it gets installed (#1439) 2023-05-16 09:50:53 -04:00
a28491bc24 Let quality yell at the user if it's a version difference (#1438)
* Let quality yell at the user if it's a version difference

* Also include in style
2023-05-16 09:30:08 -04:00
435079aafb Improve Slack Updater (#1433)
* Update log_reports to send to slack

* REVERT this change, just for testing!

* Add slack_sdk dep

* Second one

* Try now?

* Remove len

* Need secret

* Try with new version

* Right boldface

* Fix import

* New format, use tabulate

* Add tabulate to yml

* Quality

* Purposefully fail

* Working updater, now to test

* Int

* Print payload

* Append

* Change maxcolwidth

* Offset

* More offset

* Context

* No max width

* gh format

* max-col-width

* Reduce max

* Non-working tables

* Rm md report

* Try now

* Try with just count

* Use table

* New version

* Use table

* Try with thread

* Should be working now

* Clean

* Fixup test reports fully

* Revert workflow

* Keep tabulate in workflow ci

* Update other workflows

* Use blocks for better formatting

* ONe more test

* Works as expected
2023-05-16 09:08:10 -04:00
dcde1e93d0 Fix bug on ipex for diffusers (#1426) 2023-05-12 23:32:01 +02:00
ab379793d4 Intel GPU support initialization (#1118)
* Intel GPU support initialization

* rng state for xpu ,accel backend

* add xpu variable and clean code

* checkpointing, hooks, colls & megatronlm porting

* fix runtime errors

* test utils and xpu runtime checks

* fix unknown import in constant

* Resolve amp and cuda/xpu tensor placement

* add ipex for state and hooks

* add mingxiao's ipex changes and source code rebase changes

* add ipex binding in cluster

* resolve megatron lm issues and modelling memory

* indent fix and syntax

* versioning and sanity checks

* use kwargs and add upstream

* revert megatron lm xpu changes

* cleanups and test npr

* fix merge conflict

* fix merge conflict

* Fix merge conflict

* review commits

* make style, ruff code styling

* hf doc builder code style

* Review commits and code style

* remove xpu plugin and use only ipex by default if cpu/xpu present

* review commits and fix tests on state

* fix test in state

* add xpu condition in optimizer and code style/testing

* fix test add warn for ipex

* fix test

* fix test

* fix test and condition

* fix amp test prod, cli, core

* fix minimum torch tests

* refine accelerator and modelling for tests

* refine modeling and merge

* Fix slow cuda tests

* doc and retrigger test
2023-05-11 09:03:24 -04:00
b50e75f85d Make mlflow logging dir optional (#1413) 2023-05-11 12:03:13 +02:00
f95067bfbf fix deepspeed failing tests (#1411)
* changes required for DS integration

* changing the default value of `zero_force_ds_cpu_optimizer` to True to fix the failing tests
2023-05-11 10:35:46 +05:30
d07fd959cc changes required for DS integration (#1406) 2023-05-11 00:47:32 +05:30
873b39b85b use existing mlflow experiment if exists (#1403)
Co-authored-by: Rustem Galiullin <rustem.galiullin@bayanat.ai>
2023-05-10 11:51:21 +02:00
da39665055 Adding support for local SGD. (#1378)
* Adding support for local SGD.

* Update src/accelerate/local_sgd.py

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

* Update src/accelerate/local_sgd.py

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

* Update src/accelerate/local_sgd.py

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

* fixing reduction + adding a test.

* style fix.

* Update docs/source/usage_guides/local_sgd.mdx

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/accelerate/local_sgd.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update examples/by_feature/local_sgd.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-05-09 10:52:03 -04:00
d95d68ec46 Support TPU v2 and v3 on new PyTorch/XLA TPU runtime (#1385)
* Use numpy Generator instead of global seed

* Implement SharedDict descriptor

* Formatting and comments

* Remove `GlobalSharedDict`

* Formatting

* Formatting with `doc-builder` installed correctly
2023-05-09 09:12:43 -04:00
fafadc5323 Add in a section on papers using Accelerate (#1399)
* Start of papers

* Add back in PickScore

* Rm non-urld

* Test

* Remove space
2023-05-09 15:00:50 +02:00
145fca5a09 Support TPU v4 with new PyTorch/XLA TPU runtime (#1393)
* Fix `XLA_USE_BF16` when not using mixed precision

* Fix RNG sync during data loading

* Fix hanging during checkpointing

* Remove extra _mp_fn

* Use all_gather to implement _tpu_gather

* Use collective_broadcast for torch RNG state

* Formatting and comments.

* Fix formatting with `make style`
2023-05-08 13:53:43 -04:00
9fe690706d v0.20.0.dev0 2023-05-08 08:37:42 -04:00
6e81938282 Update training_zoo.mdx (#1397) 2023-05-07 19:00:46 -04:00
e965d590cd Fix gather_obj (#1391)
* Fix gather_obj

* Fix cpu test

* Requires torch 1.7

* Set torch version
2023-05-05 17:55:51 +02:00
6dfcf5b8ef Bump torch v (#1392) 2023-05-05 17:55:21 +02:00
e4ea4ed4de Log Images and other types to wandb (#962)
* add image logging

* add table logging

* add artifact logging capabilities

* fix black

* remove log_images on base class

* fix docstring

* quality

* remove the artifact code

* add main proc decorator

* add main process to log_images in tensorboard

* quality

---------

Co-authored-by: Thomas Capelle <thomas.capelle@steady-sun.com>
2023-05-05 16:11:16 +02:00
fa8e1cff91 fix config bug for 'mixed_precision' from 'yaml.safe_load()' (#1386)
* fix config bug for 'mixed_precision' from 'yaml.safe_load()'

* Update src/accelerate/commands/config/config_args.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-05-05 07:37:09 -04:00
60856787ac Fix flakey thread issue (#1387)
* Fix thread issue?

* Fix bool

* <2

* Below 2.0 fully

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-05-04 14:41:53 -04:00
995563fec9 delete textfile after tests are done (#1381) 2023-05-02 09:58:06 -04:00
2d62bd1570 Separate out contextmanager generation (#1379)
* Separate out contextmanager generation

* Move over to modeling

* Switch import
2023-05-02 09:54:53 -04:00
f8169eaded Improve accelerate env reporting (#1376)
* Have env state GPU kind

* Include system RAM

* CLean
2023-05-01 11:08:26 -04:00
75ab711993 Special transformers case from args (#1364)
* Special transformers case

* Reduce to single line

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Revert

* Clean

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-05-01 09:44:44 -04:00
f489a86573 Fix default FSDP_MIN_NUM_PARAMS (#1367)
FSDP_MIN_NUM_PARAMS default changed from 1e8 to 100000000 (no floats allowed)
2023-04-28 12:35:07 -04:00
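The gist of the fix, sketched (presumably the value is parsed as an int, which rejects float-formatted strings like `"1e8"`):

```python
import os

# An integer string parses cleanly; "1e8" would raise ValueError in int().
os.environ["FSDP_MIN_NUM_PARAMS"] = "100000000"
```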
2708c1ae31 fix: typing issues, and replace deprecated python typing (Optional, Union) to | (#1363) 2023-04-27 10:50:53 -04:00
e30034ed07 Better check for packages availability (#1356)
* Better check for packages availability

* lint
2023-04-26 08:46:16 -04:00
78bf8bcb21 fix bnb slow test (#1355) 2023-04-25 13:30:37 +02:00
57f2cf5fa7 using deepspeed.comm for distributed init (#1352) 2023-04-25 09:37:16 +05:30
e06e7b35e7 Support FP8 mixed precision training for Ada Lovelace GPUs (#1348)
* Support FP8 mixed training for Ada Lovelace GPUs

* Black format

* Updating error message
2023-04-24 13:01:12 -04:00
5651521833 Pop more backend options (#1342)
* Fixup more args

* Consistency
2023-04-20 11:41:24 -04:00
ba0ee8a54d only update progress bar when done with tensor (#1341) 2023-04-20 08:57:44 -04:00
c2a162932a Fix nested context manager for main_process_first() (#1304)
* Fix nested context manager for main_process_first()

* Fix test for main_process_first()

* Improve test for main_process_first()

* Fix formatting

* Fix test with single process
2023-04-20 06:38:12 -04:00
c29c3c5e70 Rm unused amp check (#1340) 2023-04-19 14:33:37 -04:00
945085edb3 Temp skip test (#1339) 2023-04-19 14:25:58 -04:00
70388fa44e Verbosity, Progress Bar for Loading (#1329)
* added progress bar to tensor loader, and allocation info when verbose

* align coding style with norms
2023-04-19 09:21:02 -04:00
2fee0c15fd v0.19.0.dev0 2023-04-18 11:00:52 -04:00
c05ed13fc9 Fix clearing of memory (#1332) 2023-04-18 10:53:32 -04:00
5e6351502a Remove repetitive devices in load_state_dict() (#1321)
Previously devices() was a list containing duplicate entries. This
changes it into a set.

This significantly speeds safetensors loading when the device map is
long, as the safetensors loop loads each weight entry for each device
entry.

Co-authored-by: John Doe <john.doe@example.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-04-17 15:57:07 -04:00
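The change amounts to deduplicating the device list before the safetensors loop, roughly:

```python
# A set drops the duplicate entries a long device_map produces, so each
# checkpoint shard is scanned once per distinct device rather than once
# per map entry.
devices = set(device_map.values())
```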
ee0c587182 ensure module prefixes only match that module (#1319)
Co-authored-by: John Doe <john.doe@example.com>
2023-04-17 15:52:35 -04:00
43e7229a1a Add test flag and import check for dynamo (#1322)
* Add is_dynamo_available + marker

* Use min_torch_version instead
2023-04-17 13:58:53 -04:00
8b96515ed2 Upgrade torch version on main tests (#1323)
* Upgrade torch version on main tests

* Also in docker
2023-04-17 13:52:20 -04:00
9d9ea62785 Ensure that dynamo is compatible with mixed precision (#1318)
* Fixed

* Use args kwargs
2023-04-17 13:10:39 -04:00
2106e87d58 offload the previous model hook before the current module is moved to the execution device (#1315) 2023-04-14 21:24:59 -04:00
40980e8fe8 Default to nccl (#1314) 2023-04-14 10:18:37 -04:00
f2f810c536 Allow xpu backend (#1313)
* Allow xpu set

* Use in dataclass
2023-04-13 15:23:48 -04:00
0a9403f308 Bug fix in setattr (#1312) 2023-04-13 07:09:27 -04:00
75a693c9b4 Simplify MPS implementation (#1308)
* Simplify MPS implementation

* Quality

* Update src/accelerate/state.py

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

---------

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2023-04-12 08:54:44 -04:00
55691b14c2 add usage guide for ipex plugin (#1270)
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2023-04-07 08:23:12 -04:00
b757b62325 Set the state device dependant to Accelerator on multigpu (#1220)
* Set the state device dependant to Accelerator on multigpu
2023-04-06 13:59:59 -04:00
15dbf9722b fix for load_checkpoint_and_dispatch(device_map=None) (#1297)
The `load_checkpoint_and_dispatch` method has `device_map: Optional[Union[str, Dict[str, Union[int, str, torch.device]]]] = None,`

But if you pass `device_map=None` you get an error:

```
accelerate/big_modeling.py", line 477, in load_checkpoint_and_dispatch
    if offload_state_dict is None and "disk" in device_map.values():
AttributeError: 'NoneType' object has no attribute 'values'
```
2023-04-06 12:55:37 -04:00
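One way to guard against the crash, sketched from the traceback (not necessarily the PR's exact diff):

```python
# device_map can legitimately be None, so check it before calling .values().
if offload_state_dict is None and device_map is not None and "disk" in device_map.values():
    offload_state_dict = True
```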
419ecf38af Make note about grad accum and prec (#1296) 2023-04-06 11:55:19 -04:00
3cb9d5fd9c Raise better error on notebook_launcher (#1293)
* Raise better error

* Better err

* Move import
2023-04-04 14:42:29 -04:00
f1298b143e fix bnb slow test (#1292) 2023-04-04 20:02:03 +02:00
07ad358f2d Check for dtype attr (#1288) 2023-04-03 16:57:46 -04:00
211707857d Expound error on recursively_apply (#1286)
* Expound

* Adjust test
2023-04-03 14:07:32 -04:00
e57d5d0eae Raise more explicit error when transformer_engine isn't installed (#1287)
* Raise err for unsupported fp8

* Change hardware spec

* Rm hardware part since we don't check it

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Style

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-04-03 13:40:28 -04:00
92d072043e Fix TypeError bug in honor_type (#1285)
* Use is_namedtuple
2023-04-03 12:23:12 -04:00
3d1a0f7e98 fix attribute error in DataLoaderShard (#1278)
When running on a single GPU, the `batch_sampler` of `DataLoaderShard` is a `torch.utils.data.sampler.BatchSampler` object instead of a `BatchSamplerShard` object, which does not contain the attributes needed to calculate `total_batch_size`.
2023-04-03 09:44:59 -04:00
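A hypothetical sketch of the kind of guard the fix needs (`num_processes` is assumed to be in scope):

```python
sampler = dataloader.batch_sampler
if hasattr(sampler, "total_batch_size"):
    # Accelerate's sharded batch sampler exposes the total directly.
    total_batch_size = sampler.total_batch_size
else:
    # A plain torch BatchSampler (single-GPU case) only has batch_size.
    total_batch_size = sampler.batch_size * num_processes
```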
8b3e30887a Minor fix whitespace colon (#1272)
More readability
2023-04-03 09:42:56 -04:00
3e304c4a1a Update quicktour.mdx (#1273) 2023-04-03 09:42:48 -04:00
1c102f23cc Missing fp8 (#1284) 2023-04-03 09:42:21 -04:00
4c0d5a46ba Raise import err (#1283) 2023-04-03 09:37:17 -04:00
d0c17d707f Fix reduce operation (#1268)
Co-authored-by: amax <amax@admin.cluster.local>
2023-03-31 09:24:36 -04:00
b41d8d8228 Change error raised to ValueError (#1267) 2023-03-30 10:37:08 -04:00
3a6db664c7 Update bug-report.yml (#1264) 2023-03-30 09:17:58 -04:00
166520feea ipex intel extension for pytorch integration (#1255)
* ipex intel extension for pytorch integration

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>

Co-authored-by: jianan-gu <jianan.gu@intel.com>

Co-authored-by: Wang, Yi A <yi.a.wang@intel.com>

* fix test error

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

* fix the review comment and add testcase

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

---------

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2023-03-30 09:08:17 -04:00
663f5120c2 Check attribute 'overflow' exists in optimizer. (#1259)
* Check attribute 'overflow' exists in optimizer.

* Fix code formatting. ;)
2023-03-28 09:26:17 -04:00
23ac55fcab [core] Add Quantization support for dispatch_model (#1237)
* add quantization support for `dispatch_model`

* fix multi-gpu

* more checks

* fix bias issue

* Update src/accelerate/utils/modeling.py

Co-authored-by: Andrei Panferov <andrei@BlackSamorez.ru>

* make style

* add tests

* left some todos

---------

Co-authored-by: Andrei Panferov <andrei@BlackSamorez.ru>
2023-03-27 15:33:52 -04:00
93951ce516 handle missing deepspeed config (#1251) 2023-03-24 16:10:12 -04:00
ae86a00be0 raise error when dataloader with None as batch_size when using DS (#1250) 2023-03-24 21:15:23 +05:30
532da3e342 Fix pypi image (#1249) 2023-03-24 11:34:36 -04:00
a826e4441d Handle multiple tied parameters (#1241)
* Handle multiple tied parameters

* Add tests

* Ensure backward compatibility with Transformers

* Update src/accelerate/utils/modeling.py

Co-authored-by: Lysandre Debut <lysandre.debut@reseau.eseo.fr>

* Gate test requiring Transformers

---------

Co-authored-by: Lysandre Debut <lysandre.debut@reseau.eseo.fr>
2023-03-24 09:53:29 -04:00
1fe27e7c95 Hardware Auto-Setup Example/Tutorial for Distributed Launch (#1227)
* add self hosted hardware example

add multi gpu launch script

add auto setup hardware docs

remove an example

tiny fixes

* add colab link

* style

* update readme, remove docs page
2023-03-24 09:46:29 -04:00
c1a6c209df Change multinode to multigpu (#1247) 2023-03-24 09:40:21 -04:00
8ebd6ab2ee backfill ds plugin attributes when using ds_config (#1235)
* backfill ds plugin attributes when using ds_config

* add test

* refactoring code
2023-03-23 21:28:02 +05:30
ea9b85477d remove empty dicts while saving accelerate config (#1236) 2023-03-23 19:14:21 +05:30
420ff21c3b extensions has been removed and replaced by customizations (#1075)
Co-authored-by: Dennis Bappert <bappert@outlook.com>
2023-03-23 09:15:23 -04:00
b1b3312749 Make grad accum steps mutable on the Accelerator object (#1233)
* Make grad accum steps mutable

* Reset state
2023-03-22 17:44:31 -04:00
6e4e870203 add additional check before deleting env variable (#1229) 2023-03-22 15:03:18 -04:00
a3065e1842 Silence dynamo_backend (#1226) 2023-03-22 11:34:08 -04:00
4eaf36e1c4 docs: add finetuner to ppl who use accelerate (#1224) 2023-03-22 09:08:21 -04:00
e7bb060c0e Fix get_logger kwarg documentation issue (#1222) 2023-03-22 08:05:00 -04:00
a15d307426 Fix bug in loading launch config (#1218)
* Fix bug in loading launch config
2023-03-20 10:20:09 -04:00
7e7f3445aa FIx TPU gradient state (#1219) 2023-03-20 09:56:07 -04:00
10c674633d ds offload optim fix to use CPUAdam (#1208)
* ds offload optim fix to use CPUAdam

* fix
2023-03-20 19:21:39 +05:30
82c2665cd6 Fix example in accumulate method (#1211) 2023-03-18 21:00:11 -04:00
2930cac698 Fix typo in TPU config (#1202) 2023-03-18 09:42:56 -04:00
901ab69a16 Better error message when using multi-GPU and Accelerate on torch <1.9.1 (#1203)
* Better err

* Split
2023-03-16 11:45:09 -04:00
780e4aa32a Fix tied weights load (#1204)
* Retie weight after loading checkpoint

* Adapt doc
2023-03-16 11:29:11 -04:00
e4620984f8 Make the Scheduler adjust the steps taken relative to the gradient accumulation steps (#1187)
* Make scheduler actually adjust the length
2023-03-15 12:16:12 -04:00
017a98c0e9 Fixup --fsdp (#1198) 2023-03-15 10:34:13 -04:00
d1aa558119 [Accelerator] We should not call to on modules that wraps accelerate loaded models (#1172)
* add v1

* fix docstring
2023-03-15 08:28:28 +01:00
41479fe483 Set drop last to ensure modulo16 restriction for fp8 (#1189)
* set drop last to ensure modulo16 restriction for fp8

* fix quality

* Use all eval samples for non-FP8 case
2023-03-14 14:35:02 -04:00
eac5d13c7b Only convert linear layers with weights multiple of 16 (#1188)
* Only convert linear layers with weights multiple of 16

* Simpler test
2023-03-13 17:03:29 -04:00
b228136cae add use_orig_params to FullyShardedDataParallelPlugin (#1184)
* add `use_orig_params` to FullyShardedDataParallelPlugin

* fix 🐛
2023-03-14 00:20:30 +05:30
90deb748c6 Add documentation about PyTorch FSDP state dict behavior (#1181) 2023-03-13 10:53:56 -04:00
d942708745 Support special mapping of dtypes when preparing device map (#1179) 2023-03-13 10:48:31 -04:00
3783180844 fixed typo in launch.py tpu_pod_launcher (#1180) 2023-03-10 18:36:52 -05:00
ea836f3057 Add repr to AlignHook for easier debugging. (#1177) 2023-03-10 14:35:11 -05:00
a4c9476204 Run accelerate_test in cli (#1176)
* Run accelerate_test in cli

* Make it run on more than one process for gather check
2023-03-10 10:28:42 -05:00
3ca8c9a997 Fix CPU error always being raised (#1175)
* Save state

* Revert to old behavior

* Fix failing test/update

* Remove duplicate test
2023-03-10 10:22:26 -05:00
2f83b1afef Fix accelerate test with new config_file errors (#1169) 2023-03-09 11:56:42 -05:00
b0591c665c Fix backward compatibility in configs wrt dynamo backend (#1168) 2023-03-09 11:39:22 -05:00
d9871c0f87 v0.18.0.dev0 2023-03-09 11:18:26 -05:00
abc2beb423 Remove outdated command directions and use in tests (#1166)
* Get rid of launch in docs

* Run instead of Launch

* Proper ddp prefix

* Include note about older torch versions
2023-03-08 14:37:46 -05:00
8749b4ece4 Fix what files get deleted through total_limit (#1165)
* Use lambda func to sort the keys

* Use inner instead

* With more explicit regex

* Regression check

* Better check that uses multiple numbers
2023-03-08 12:34:22 -05:00
4a3eaee6be Document skip_first_batches in the checkpoint usage guides (#1164)
* Include skip_first_batches

* Repeated statements

* Middle of an epoch
2023-03-08 12:17:30 -05:00
3533e2b0b1 [Accelerator] Fix issue with 8bit models (#1155)
* fix 8bit models on `accelerate`

* add bnb as dependency

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* fix

* skip a test

* make style

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-03-08 14:51:25 +01:00
3e0ceac79f Attempt to fix import error when PyTorch is build without torch.distributed module (#1108)
* Attempt to fix importing invalid `torch.distributed.ReduceOp` when torch is built without distributed support.

* Style.

* Move `torch.distributed` logic detection to `imports.py` according to @muellerzr comments

* Style.

* Update wording

* Remove raising exceptions in the case of a non-distributed setup, simply dont import the ReduceOp in this case.
2023-03-08 08:49:45 -05:00
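The final approach described in the last bullet, sketched:

```python
import torch

# Only import ReduceOp when this torch build actually has distributed
# support; otherwise skip it instead of raising.
if torch.distributed.is_available():
    from torch.distributed import ReduceOp
```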
03b617b674 Let GradientState know active dataloaders and reset the remainder (#1162) 2023-03-07 14:46:05 -05:00
840bb1aeda update support for torch dynamo compile (#1150)
* update support for torch dynamo compile

* fix tests and backward compatibility

* fix tests

* Update config_args.py

* Update config_args.py

* fix 🐛

* fix 🐛

* fix bug

* fix 🐛

* bug fix

* 😅

* Update config_utils.py

* 😅

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/accelerate/accelerator.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* resolving comments

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-03-07 22:05:14 +05:30
1bfde6b963 Fp8 integration (#1086)
* Draft of FP8 support

* Missing import

* Fix names

* Conversion is inplace

* Enable fp8 in examples

* Customization point for Recipe

* Auto-enable FP8 depending on compute capability

* Fix typo

* Put back mixed precision arg

* Add debug script

* Add more tests in debug

* Add more stuff to debug

* Don't forget train

* Put the train in the right place

* Add options for selective conversion

* Fix typo

* Properly recurse

* Add more debug utils

* Typo and init

* Last choice

* More fixes

* More options in example

* Remove debug scripts

* Clean up debug and new names

* Add torch.no_grad for conversion

* Optimizer is disconnected from model?

* Re-attach model parameters to optimizer

* Fix extract

* Style

* Cleanup post-rebase

* Deal with padding

* fix examples

* Update src/accelerate/accelerator.py

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>

* Address comments

---------

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
2023-03-07 09:10:10 -05:00
3482495bb5 📝 add a couple more trackers to the docs (#1158) 2023-03-06 19:06:56 -05:00
947b2a88a9 Load custom state to cpu (#1156)
The current implementation loads custom states to GPUs, leading to OOM. I add `map_location="cpu"` to the `torch.load` function, which is similar to the strategy in `load_accelerator_state`.
2023-03-06 13:15:21 -05:00
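The fix in miniature (`checkpoint_path` is a placeholder):

```python
import torch

# Load custom state onto CPU first rather than directly onto the GPU,
# mirroring load_accelerator_state and avoiding OOM during restore.
custom_state = torch.load(checkpoint_path, map_location="cpu")
```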
cac1ed41eb Solve arrow keys being environment dependent for accelerate config 2023-03-06 10:09:24 -05:00
9dc5b349ea [Safetensors] Relax missing metadata constraint (#1151)
* [Safetensors] Relax missing metadata constraint

* correct

* char limit
2023-03-06 16:01:35 +01:00
0aae1e93f4 Include a note in the gradient synchronization docs on "what can go wrong" and show the timings (#1153)
* Include timing results

* Don't include tilda for accelerator

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-03-06 10:00:43 -05:00
78151f87a4 Fixed typos in notebook (#1146)
* Bad cut for the eval_split

* Fixed typo.
2023-03-03 14:30:53 -05:00
853823d0ae FSDP enhancements and fixes (#1145)
* fsdp version update

* fsdp fixes

* update accelerate config
2023-03-03 19:19:48 +05:30
77ae51a050 fix partial state (#1144)
* fix partial state

* fix failing tests
2023-03-03 19:03:24 +05:30
ad9cf788b1 Fix notebook_launcher (#1141)
* Fix initialization on decorator for the Accelerator
2023-03-02 12:08:32 -05:00
5f9cea4ce9 fsdp bf16 enable autocast (#1125) 2023-03-02 18:59:19 +05:30
96ffd349f3 fix lr scheduler issue (#1140)
* fix lr scheduler issue

* Update src/accelerate/accelerator.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-03-02 18:41:46 +05:30
d88bbbd0e2 fix ds dist init kwargs issue (#1138)
* fix ds dist init kwargs issue

* fix
2023-03-02 18:35:16 +05:30
075b5d615d deepspeed dataloader prepare fix (#1126) 2023-03-02 18:34:35 +05:30
9b5877d1b6 Fix multinode with GPU ids when each node has 1 (#1127)
* Fix multinode

* Assert

* Reverse logic

* Use <= and not "not"

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* All on a single statement

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-03-01 14:02:17 -05:00
586941d107 Expand warning and grab all GPUs available by default (#1134)
* Use all GPUs by default

* Warn and include multi_gpu pull by default
2023-03-01 13:50:27 -05:00
e1b84bf503 Add tee and role to launch (#1132) 2023-03-01 12:37:16 -05:00
b2ea1c7b4f [Big model loading] Correct GPU only loading (#1121)
* [Big model loading] Correct GPU only loading

* Update src/accelerate/utils/modeling.py

* make style

* Update src/accelerate/utils/modeling.py

* make style 2

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-03-01 16:22:06 +01:00
bdd93cd933 Refactor launch for greater extensibility (#1123)
* Refactor `launch` for greater extensibility

Signed-off-by: Antoni Baum <antoni.baum@protonmail.com>

* Fix

Signed-off-by: Antoni Baum <antoni.baum@protonmail.com>

* Fix import

Signed-off-by: Antoni Baum <antoni.baum@protonmail.com>

---------

Signed-off-by: Antoni Baum <antoni.baum@protonmail.com>
2023-03-01 05:43:32 -05:00
639c1da8df Move dynamo.optimize to the end of model preparation (#1128) 2023-02-28 14:11:38 -05:00
fdb1402c7d Deep merge SageMaker additional_args, allowing more flexible configuration and env variable support (#1113)
* deep merge additional args

* added trailing line

* `make style`
2023-02-28 09:55:03 -05:00
0b3f219881 Add test for ops and fix reduce (#1122)
* Add test for ops and fix reduce

* Adjust testers

* Try w/o shape check

* Passthrough?

* Make into float

* Clean

* Undo all_gather for now
2023-02-28 09:18:09 -05:00
ade4f1db92 Actually raise if exception (#1124) 2023-02-28 07:54:32 -05:00
907a86d145 TensorBoardTracker: wrong arg def (#1111) 2023-02-25 00:57:49 -08:00
f054799e7f Attempt to unwrap tracker. (#1109) 2023-02-24 15:47:54 +01:00
d4f5fd694e Update performance.mdx (#1107)
Correct import location
2023-02-23 09:05:21 -05:00
38fd30e764 Tracker rewrite and lazy process checker (#1079)
* Refactor implementation to use PartialState and adjust deprecation tests

* Utilize multi-process in Accelerator

* Use state

* Lazy PartialState

* Name, plus keep on_main_process for accelerator

* Handle if the tracker was made on main-process-only properly

* Missing variable names, oops

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>

* Clean

* Logs

* Main process

* Clean

---------

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
2023-02-22 07:48:55 -05:00
03754c1e02 Update README.md (#1100) 2023-02-21 21:21:18 -05:00
ea36b7dceb add multi_cpu support to reduce (#1094) 2023-02-20 09:25:55 +01:00
bc9153e465 adds missing "lfs" in pull (#1091) 2023-02-17 17:40:20 +01:00
89b7e36bf6 Fix config (#1090)
* Fix config

* Proper fix
2023-02-17 10:42:24 -05:00
b34db0b987 Added SageMaker local mode config section (#1084) 2023-02-15 14:18:43 -05:00
9875714610 Update complete_cv_example.py (#1082)
minimal typo :)
2023-02-15 13:36:18 -05:00
4b47f190a9 Fix tpu_cluster arg (#1081) 2023-02-15 10:43:04 -05:00
17bc8a1103 Allow custom SageMaker Estimator arguments (#1080)
* Added additional_args to SageMaker Config

* temporary fix #1078

* temporary fix #1078 properly

* Extended SageMaker config

* Revert " temporary fix #1078 properly"

This reverts commit 81c683711d5a94ba9327686563bb55d3e8801555.

* Revert "temporary fix #1078"

This reverts commit c8a4b0973aee6ffd4612a69bb1ccd079b3dbb9ce.

* Extended documentation to reflect manual configuration changes.

* Fixed a small typo
2023-02-15 10:39:08 -05:00
279475307a SageMaker image_uri is now optional (#1077) 2023-02-15 09:31:47 -05:00
9c2e704791 Add error if passed --config_file does not exist (#1074) 2023-02-15 09:10:20 -05:00
4e1816d7ec Refactor state and make PartialState first class citizen (#1071)
* Refactor into State and expose

* Make PartialState mainstream!
2023-02-14 14:50:06 -05:00
5a2cb3b5e3 Fix/implement process-execution decorators on the Accelerator (#1070) 2023-02-14 13:36:33 -05:00
04103090cc update fsdp docs and removing deepspeed version pinning (#1059)
* update fsdp docs and removing deepspeed version pinning

* address comments
2023-02-14 16:39:47 +05:30
ca615f879f Swap utils over to use PartialState (#1065) 2023-02-13 16:08:56 -05:00
2694a6c63a Update integrations (#1063) 2023-02-13 13:28:55 -05:00
b4388b45dc Try with this (#1062) 2023-02-13 10:58:24 -05:00
69e4c3c54d Flag for deprecation (#1061) 2023-02-13 10:38:33 -05:00
68d809256c Introduce PartialState (#1055)
* Try again

* Try off multi-gpu

* This is a test

* Finished now

* PartialState

* Update logger to use new API

* backend

* Working tests

* Working again!

* Raise err instead

* Better error

* Update src/accelerate/state.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-02-13 10:23:39 -05:00
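For context, a minimal sketch of what `PartialState` exposes (attribute names per the public API introduced here; verify against your installed version):

```python
from accelerate import PartialState

# PartialState tracks only process/distributed info, without the heavier
# device-placement and mixed-precision state of the full AcceleratorState.
state = PartialState()
if state.is_main_process:
    print(f"{state.num_processes} process(es), local rank {state.local_process_index}")
state.wait_for_everyone()  # barrier across all launched processes
```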
bd091a605b deepspeed hidden_size auto value default fixes (#1060) 2023-02-13 20:23:40 +05:30
cb993d7d8c Fix args by adding in the defaults (#1053) 2023-02-09 15:00:57 -05:00
028b5816c8 Use create_task (#1052) 2023-02-09 14:44:09 -05:00
8951195a15 Introduce TPU Pod launching to accelerate launch (#1049)
* Working version -- run one more test

* commands

* Undo commands

* cli

* Undo config args

* cluster

* Command

* use_alpha

* Fully working now!

* Fix log

* Wrong alpha storing
2023-02-09 13:02:14 -05:00
60460ae1af Fix cpu_offload_with_hook code snippet (#1047)
* Fix cpu_offload_with_hook code snippet

* Make model explicit for clarity.
2023-02-08 09:23:13 -05:00
978dfc38ea Load tensors directly on device (#1028)
* Load tensors directly on device

* Update src/accelerate/utils/modeling.py

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

---------

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2023-02-07 13:48:28 -05:00
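The underlying PyTorch mechanism, as a hedged sketch (`checkpoint.pt` is a placeholder path): passing the target device as `map_location` materializes checkpoint tensors directly there instead of staging them on CPU first.

```python
import torch

# Hypothetical checkpoint path; the point is the map_location argument.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
state_dict = torch.load("checkpoint.pt", map_location=device)
```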
5002e56704 Update quality tools to 2023 (#1046)
* Setup 2023 tooling for quality

* Result of styling

* Simplify inits and remove isort and flake8 from doc

* Puts back isort skip flag
2023-02-07 13:34:05 -05:00
71e81bab00 Add cpu_offload_with_hook (#1045)
* Add cpu offload with hook

* Style

* add to init

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Add documentation

* Add tests

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-02-07 13:09:27 -05:00
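A rough usage sketch of the new hook (argument names per PR #1045; assumes a CUDA device is available):

```python
import torch
from torch import nn
from accelerate import cpu_offload_with_hook

# Chain two models: each stays on CPU until called, and is offloaded again
# when the next model in the chain runs (or when its hook is offloaded).
model_1, hook_1 = cpu_offload_with_hook(nn.Linear(8, 8), execution_device="cuda")
model_2, hook_2 = cpu_offload_with_hook(nn.Linear(8, 8), execution_device="cuda", prev_module_hook=hook_1)

out = model_2(model_1(torch.randn(1, 8)))
hook_2.offload()  # send the last model back to CPU once inference is done
```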
76c41f0df7 Make sure direct parameters are properly set on device (#1043) 2023-02-06 13:36:18 -05:00
2b981c0942 Add daily slack notifier for nightlies (#1042)
* Update log_reports to send to slack
2023-02-06 10:44:58 -05:00
a60640d4fa Refactor process executors to be in AcceleratorState (#1039)
* Start of refactor

* Fix yield

* Print

* Add test
2023-02-06 10:44:33 -05:00
4be70838e7 Pass keywords arguments of backward function deeper to DeepSpeed (#1037) 2023-02-03 10:39:19 -05:00
e89131c92d do not scale gradient in bf16 mode (#1036) 2023-02-02 14:01:57 -05:00
4e5cc0c6b9 fix: links to gradient synchronization (#1035) 2023-02-02 11:12:30 -05:00
587eea9bb5 enabling mps device by default and removing related config (#1030)
* enabling `mps` device by default and removing related config

* address comments

* fix tests
2023-02-01 23:27:15 +05:30
57cbcab45b Deepspeed param check (#1015)
* Deepspeed param check

On line 146, in set_module_tensor_to_device(), adding a check for DeepSpeed parameters in the kwargs object, and not passing them on, solved the error I was receiving about the DS parameters not being recognized by torch.nn.Parameter.__new__(). With my admittedly limited knowledge, it seems the kwargs are unnecessary when using DeepSpeed + Accelerate; this bears out, since the model loaded fine with ZeRO-3 CPU parameter and buffer offload on a single-GPU machine, and produced perfectly comprehensible inference outputs (slowly) on the GPU.

The error, in my case, was occurring here as called from accelerator's dispatch_model().

Please let me know if my thinking on this is in any way wrong! This fix worked for me.

 `transformers` version: 4.26.0
- Platform: Linux-5.15.83.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes and no (zero-3 on single machine)

* 146-150 check for Int8 arguments

146-150 check for Int8 arguments. If found, send the args as well as the value.

* Used make style on branch

* Used make style with correct versions of black and flake8 on branch
2023-02-01 11:19:01 -05:00
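To illustrate the failure mode described above, a standalone sketch (not accelerate's actual code; the `ds_status` key and the helper are hypothetical): `torch.nn.Parameter.__new__` accepts only `data` and `requires_grad`, so DeepSpeed-injected keys must be filtered out of the kwargs first.

```python
import torch

def filter_param_kwargs(kwargs: dict) -> dict:
    # Hypothetical helper: keep only what torch.nn.Parameter.__new__ accepts.
    return {k: v for k, v in kwargs.items() if k == "requires_grad"}

kwargs = {"requires_grad": False, "ds_status": "AVAILABLE"}  # assumed DeepSpeed-style extra key
param = torch.nn.Parameter(torch.randn(2, 2), **filter_param_kwargs(kwargs))
print(param.requires_grad)  # False, and no TypeError from the extra key
```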
c0caa068ba v0.17.0.dev0 2023-01-31 12:15:08 -05:00
b51b78ffb7 It was 0.16.0.dev0 all along... 2023-01-31 11:07:26 -05:00
67dbae52be sagemaker launcher fixes (#1031)
* sagemaker launcher fixes

* fixes

* addressing comments
2023-01-31 21:17:16 +05:30
d0df263b09 With example (#1027) 2023-01-30 12:57:24 -05:00
a5026706a7 More improvements to docstrings + examples (#1010)
* Start of examples
2023-01-30 12:34:26 -05:00
20e4973903 Start of adding examples (#1001)
* Start of examples

* Missing >

* Fix docstring nit

* Add comment on main_process_first

* Make comment on randomness

* first

* Backprop issues with examples into here
2023-01-30 12:33:47 -05:00
1d9bcdd39d Efficiently skip batches in a dataloader (#1002)
* Efficiently skip batches in a dataloader

* Add method in Accelerator and example

* Apply suggestions from code review

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

* Rename point of access

* Add point of access to init

* Add tests

* Don't forget to include fixes silly!

* Adapt examples

* Fix quality

* Forgot one

* fix method name

* Fix DataLoaderShard reinstantiation

* Fix for epoch checkpointing

---------

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2023-01-30 11:56:59 -05:00
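A hedged sketch of the resulting mid-epoch resume flow (entry-point name per this PR; confirm it against your installed version):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()
dataloader = accelerator.prepare(DataLoader(TensorDataset(torch.arange(16.0)), batch_size=4))

# Yield only the batches after the first two, without replaying the ones a
# resumed run has already consumed.
for batch in accelerator.skip_first_batches(dataloader, num_batches=2):
    print(batch)
```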
ba856524f6 Fix slow test by keeping tied weights on the same GPU (#1026) 2023-01-30 11:13:39 -05:00
332326c833 Change default for keep_fp32_wrapper (#1025)
* Change default

* Fix tests
2023-01-30 10:18:40 -05:00
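For context, a short sketch of the flag whose default changes here (assuming standard `Accelerator` usage):

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()
model = accelerator.prepare(torch.nn.Linear(4, 4))

# unwrap_model strips accelerate's wrappers (e.g. before saving); the
# keep_fp32_wrapper flag decides whether the autocast forward wrapper added
# under mixed precision is kept on the returned model.
unwrapped = accelerator.unwrap_model(model, keep_fp32_wrapper=True)
torch.save(unwrapped.state_dict(), "model.pt")
```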
e6d5776ad8 Light vs dark theme based on pick (#1023) 2023-01-30 09:35:37 -05:00
fe709a2490 Fix env var (#1024) 2023-01-30 09:33:19 -05:00
ac970148cd Include steppage in performance docs (#1013)
* Include steppage in performance docs

* New explanation
2023-01-27 12:02:47 -05:00
f0f348921d Don't force mixed precision as no in examples (#1018) 2023-01-27 10:12:27 -05:00
b37680bd66 Fix import of LrScheduler (#1017) 2023-01-27 08:50:33 -05:00
5286d843c8 Add in code exploration tool to docs (#1014)
* Add in code exploration tool to docs

* Update index to hotlink over to the explore

* With 100%

* Just do 750 for now

* Safe height

* Let's try with this

* Comment out original

* Revert

* Add in a note on the docs and remove a secondary code snippet

* Use 1550 for now so it fully fits

* 1600*
2023-01-27 07:32:34 -05:00
22bf677ceb Allow the torch device to be set with an env var (#1009)
* Allow the torch device to be set with an env var

Signed-off-by: Antoni Baum <antoni.baum@protonmail.com>

* Fix

Signed-off-by: Antoni Baum <antoni.baum@protonmail.com>

* Refactor

Signed-off-by: Antoni Baum <antoni.baum@protonmail.com>

* Use self.device

Signed-off-by: Antoni Baum <antoni.baum@protonmail.com>

* Refactor

Signed-off-by: Antoni Baum <antoni.baum@protonmail.com>

* Add test

* Add test

* Fix test

* Tweak comment

* Fix test

Signed-off-by: Antoni Baum <antoni.baum@protonmail.com>
2023-01-26 16:01:36 -05:00
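A small sketch of the behavior (the env var name is my reading of this PR and may differ in your version; it must be set before accelerate initializes its state):

```python
import os

os.environ["ACCELERATE_TORCH_DEVICE"] = "cpu"  # assumed env var name from this PR

from accelerate import Accelerator

accelerator = Accelerator()
print(accelerator.device)  # honors the override instead of auto-detecting hardware
```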
bd82bec78e Fix test introduced in PR and introduce AcceleratorTestCase (#1016)
* Fix test, missing reset

* tearDown

* Refactor and inherit to avoid future errors
2023-01-26 15:35:21 -05:00
3825e478b2 Saving and loading state hooks (#991)
* [RFC] Possible design for loading and saving state hooks design

* fix bug

* add tests & docstring

* improve docs

* make style

* Update src/accelerate/accelerator.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-01-26 20:07:21 +01:00
6c3f6792e9 Maintain accumulation steps (#1011) 2023-01-26 06:33:50 -05:00
5858ac62b4 Add styleguide (#1007)
* Add styleguide

* Uniformity

* Accelerate specific
2023-01-25 14:28:24 -05:00
5b0a03d1fb Update toctree (#1008) 2023-01-25 13:52:25 -05:00
c3ea690d48 improve deepspeed notes (#1003)
* improve deepspeed notes

* style
2023-01-23 20:45:45 -08:00
ae8c4875dc Fix parameters tying in dispatch_model (#1000)
* Fix parameters tying in dispatch_model

* Add test
2023-01-23 13:10:30 -05:00
55a528487d Fix scheduler incorrect steps when gradient accumulation enabled (#999)
* add additional check for optimizer step

* rewrite scheduler w/ grad accumulation test
2023-01-23 13:06:45 -05:00
bd1d5fad2f adding support for kwargs in load_state (#989)
* adding support for kwargs in `load_state`

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* quality 

* addressing comments

1. renaming variable to make it explicit
2. adding kwargs to `save_state` for parity

Co-Authored-By: Zachary Mueller <7831895+muellerzr@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Zachary Mueller <7831895+muellerzr@users.noreply.github.com>
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
2023-01-23 20:27:35 +05:30
b22f088ff6 Add new release_memory util (#990)
* Add new release_memory util

* Req cuda
2023-01-19 13:01:24 -05:00
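Usage sketch of the new util (it returns `None` for each argument so callers can rebind in one line; the CUDA cache emptying applies only when a GPU is present):

```python
import torch
from accelerate.utils import release_memory

a = torch.ones(1000, 1000)
b = torch.ones(1000, 1000)

# Drops the references, runs garbage collection, and empties the CUDA cache.
a, b = release_memory(a, b)
print(a, b)  # None None
```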
f3f2f9e4b5 in sync with trfs, removing style_doc utils and using doc-builder instead (#988) 2023-01-19 19:24:44 +05:30
7e4136164e Fix test for converting tensor to proper dtype (#983)
* Fix test for converting tensor to proper dtype

* Adds a test
2023-01-18 11:21:45 -05:00
5dd631e2cd Skip wandb test for now (#984) 2023-01-18 10:57:38 -05:00
0a16f37ba1 Ensure that last batch doesn't get dropped if perfectly even in gather_for_metrics (#982)
* Add test_last_batch

* Fix gather bug
2023-01-18 10:30:34 -05:00
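The fix matters for loops like this sketch, where the dataset splits evenly across processes, so nothing was padded and nothing should be trimmed:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()
loader = accelerator.prepare(DataLoader(TensorDataset(torch.arange(8.0)), batch_size=2))

gathered = [accelerator.gather_for_metrics(batch[0]) for batch in loader]
# Every sample comes back exactly once, even for the perfectly even last batch.
assert sum(t.numel() for t in gathered) == 8
```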
aaa2637a5e Fix type error on line 36 (#981)
Fix for the type error on line 36
2023-01-18 09:38:05 -05:00
7573a8cd55 Fix tied parameters test in big model inference (#979) 2023-01-17 14:52:52 -05:00
126550126d Raise minimum version for distrib launch (#978) 2023-01-17 12:24:36 -05:00
733755c94c Update README.md (#968)
When using DeepSpeed, we must import from the accelerate package.
2023-01-12 03:18:56 +01:00
741d23301f Allowing encoded configuration for DeepSpeed (#895)
* allow-encoded-ds-config

* fix style
2023-01-11 14:32:03 +01:00
9b7ef9679f support master port when using ds multi-node launcher (#959)
* support master port when using ds multi-node launcher

* 😅
2023-01-09 23:52:00 +04:00
30a6a3435f Typo fix in src/accelerate/utils/modeling.py (#955)
Simple typo fix I happened to notice and figured I should just fix while I'm looking at it.
2023-01-07 09:58:05 +01:00
f7427c86ee Don't automatically offload buffers when loading checkpoints (#951)
* Don't automatically offload buffers when loading checkpoints

* Add test
2023-01-04 09:01:24 -05:00
d0bf459c7f Fix DeepSpeed tests (#950)
* Fix deepspeed tests

* Reset state

* With manual reset?
2023-01-03 12:49:51 -05:00
bf8fe0347b Add is_initialized method and refactor (#949)
* Add is_initialized method and refactor

* As module method
2023-01-03 10:13:44 -05:00
e60f3cab7a raise error for duplicate accelerate config values when using deepspeed_config_file (#941)
* ds config vs accelerate config checks

* add mp assertion checks and refactoring

* 😅

* minor fix

* address comments

* address comments and making doc and help clear

* 😅

* fixes

* error msg fix

* more details in error msg

* 

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* address comment

* address comment by changing cluster config

* 😅

* Update src/accelerate/utils/dataclasses.py

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

* use `accelerate launch` cmd args for `auto` filling

So far, `accelerate launch` cmd args were used for filling deepspeed plugin fields and not for setting `auto` values. This PR enables that too.

It also raises assertions when ambiguous values are passed in the accelerate config file while using `deepspeed_config_file`.

* fixes

* fixes and adding tests

* quality

* 😅

* refactor

* fix

* add documentation wrt improvements of DeepSpeed config

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* address comment

* address comment

* refactor

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
2022-12-31 13:42:57 +05:30
07e2e712ca Fix offload when weights are on the GPU (#945) 2022-12-28 02:43:29 -05:00
63f09f63b8 Fix tracker (#942) 2022-12-23 12:07:56 -05:00
50b8d8e8a8 fix mp related test fails (#943) 2022-12-23 22:17:13 +05:30
0ec1f24c17 fix batch size in prepare_dataloader for iterable datasets (#937)
* fix batch size

* black
2022-12-23 02:52:52 -05:00
3c5c0f9c99 add mixed_precision_type property to AcceleratorState (#935)
* add `mixed_precision_type` property to `AcceleratorState`

* address comments
2022-12-23 12:02:20 +05:30
53b8ed1e8e Fix silly typo (#939) 2022-12-22 23:14:03 +05:30
49bbf2390d ds zero-3 init context manager (#932)
* ds zero-3 init context manager

* address comment

* renaming `set_zero3_init` to `zero3_init_context_manager`
2022-12-21 10:49:35 +05:30
aa533277f6 Honor model dtype in load_checkpoint (#920)
* Honor model dtype in

* Move dtype logic to set_module_tensor_to_device
2022-12-20 02:48:18 -05:00
ca6505a6a8 ds-z3-init and prepending ds env variables with ACCELERATE_ (#928)
* ds-z3-init and prepending ds env variables with `ACCELERATE_`

* quality

* rerun checks
2022-12-17 00:48:21 +05:30
bb6ee0b7bc Support init_on_device (#926)
* Support init_on_device

* Support mps backend as well in testing
2022-12-16 13:07:39 +01:00
7889ba6b6d Specify inference (#921) 2022-12-14 09:02:13 -05:00
f002ce2ae9 Introduce project_dir and limit the number of saved checkpoints (#916)
* Working save limit

* Centralize to project_dir

* Update docs

* Fix up tests

* Maintain old version, should fix tests

* Revert logging behavior

* Fix failing test

* Automatic checkpoint naming flag

* Logging -> Logger

* Fix naming

* Remove args and make a SaveConfiguration

* logger -> logging

* save_configuration to save_config

* Good to go now, just need to update docs

* Update all the docs

* Deprecate logging_dir param

* ProjectConfiguration

* Project_config

* Fix test

* Finish renaming

* Docfix

* Clean

* Update docs/source/usage_guides/tracking.mdx

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
2022-12-13 08:29:58 -05:00
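A sketch of the end state after the renames above (kwarg names per the final `ProjectConfiguration`; adjust to your version):

```python
from accelerate import Accelerator
from accelerate.utils import ProjectConfiguration

# One object now owns the project/logging dirs, automatic checkpoint naming,
# and a cap on how many checkpoints save_state keeps on disk.
config = ProjectConfiguration(
    project_dir="runs/exp1",
    automatic_checkpoint_naming=True,
    total_limit=3,  # keep at most the 3 most recent checkpoints
)
accelerator = Accelerator(project_config=config)
accelerator.save_state()  # writes runs/exp1/checkpoints/checkpoint_0
```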
7fd0635d46 fix accelerate test failure with cpu config (#909)
* Failure occurs when testing FP16.
* autocast fails to work for CPU bf16 on some GPU+CPU platforms;
no need for the is_bf16_available logic, because native_amp already contains it.
2022-12-13 08:29:15 -05:00
235fdf1096 🚨🚨🚨 Act on deprecations 🚨🚨🚨 (#917)
* Act on deprecations

* Act on deprecations

* Resume from checkpoint

* Finish deprecations
2022-12-12 16:09:52 -05:00
351f89758a Fix typos accelerate -> accelerator (#915) 2022-12-12 11:11:05 -05:00
7f5e94d33b fsdp enhancements (#911)
* fsdp enhancements

* fix

* fix
2022-12-09 22:23:45 +05:30
74a8ed9e48 Fix issue where AMP bf16 does not work for CPU in an environment with CUDA (#906)
Also fixes num_cpu_threads_per_process not being reset for better performance in the CPU-only case

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2022-12-08 09:05:34 -05:00
6bd28790c2 Fix conditional (#907)
* Fix conditional

* Into one if statement
2022-12-07 09:34:58 -05:00
2359af1870 Expand sanity checks (#905)
* Expand sanity checks

* multi_cpu to cpu
2022-12-06 15:46:47 -05:00
e6b61da7ca Add usage examples (#904) 2022-12-06 15:12:43 -05:00
344bfe2713 Flag to silence subprocess.CalledProcessError in launch (#902)
* add an option to silence subprocess.CalledProcessError when running accelerate launch

* for black

* for real this time

* Add suggestion

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>

* Update cli.mdx

Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
2022-12-06 08:47:31 -05:00
e9d15e5973 Adds a utility function to install correct version of torch XLA (#896)
* Add utility to install torch xla wheels

* Fix formatting

* Update docs and fix lint issues
2022-12-01 15:11:41 -05:00
169 changed files with 12935 additions and 3118 deletions


@ -15,10 +15,14 @@
"remoteEnv": {
"PYTHONPATH": "${containerEnv:PATH}:${containerWorkspaceFolder}"
},
"extensions": [
// Ensure we have IntelliSense in VSCode when running inside container
"ms-python.python"
],
"customizations": {
"vscode": {
"extensions": [
// Ensure we have IntelliSense in VSCode when running inside container
"ms-python.python"
]
}
},
"workspaceFolder": "/workspaces/accelerate",
// Need git for VSCode to color code modifications. Only runs when building environment.
"onCreateCommand": "apt-get update && apt-get install -y git && pip install -e '.[dev]'"


@ -55,4 +55,3 @@ body:
attributes:
label: Expected behavior
description: "A clear and concise description of what you would expect to happen."
render: Shell


@ -42,4 +42,9 @@ jobs:
run-merge-tests:
needs: build-docker-containers
if: always()
uses: ./.github/workflows/run_merge_tests.yml
uses: ./.github/workflows/run_merge_tests.yml
run-integration-tests:
needs: run-merge-tests
if: always()
uses: ./.github/workflows/self_hosted_integration_tests.yml


@ -15,3 +15,4 @@ jobs:
package: accelerate
secrets:
token: ${{ secrets.HUGGINGFACE_PUSH }}
hf_token: ${{ secrets.HF_DOC_BUILD_PUSH }}


@ -1,13 +1,14 @@
name: Delete dev documentation
name: Delete doc comment
on:
pull_request:
types: [ closed ]
workflow_run:
workflows: ["Delete doc comment trigger"]
types:
- completed
jobs:
delete:
uses: huggingface/doc-builder/.github/workflows/delete_doc_comment.yml@main
with:
pr_number: ${{ github.event.number }}
package: accelerate
secrets:
comment_bot_token: ${{ secrets.COMMENT_BOT_TOKEN }}

View File

@ -0,0 +1,12 @@
name: Delete doc comment trigger
on:
pull_request:
types: [ closed ]
jobs:
delete:
uses: huggingface/doc-builder/.github/workflows/delete_doc_comment_trigger.yml@main
with:
pr_number: ${{ github.event.number }}

.github/workflows/integration_tests.yml (new file, 64 lines)

@ -0,0 +1,64 @@
# CI for specifically ensuring integrations work fine (`transformers` mainly)
# Useful tips:
# - New integrations to test should each have their own job, and follow a strategy method where we check both
# the pypi and github versions.
# - When checking the latest release of the integration, use
# git checkout $(git describe --tags `git rev-list --tags --max-count=1`) to get the latest release.
name: Integration Tests
on:
pull_request:
paths:
- "src/**"
- "tests/**"
- ".github/**"
- "examples/**"
- "setup.py"
types: [opened, synchronize, reopened]
env:
HF_HOME: ~/hf_cache
jobs:
run-trainer-tests:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
transformers-version: [
pypi,
github
]
steps:
- uses: actions/checkout@v3.1.0
- name: Set up python 3.8
uses: actions/setup-python@v3
with:
python-version: 3.8
- name: Install Accelerate from source
run: |
pip install --upgrade pip
pip install -e .
- name: Clone and install transformers
run: |
cd ..
git clone https://github.com/huggingface/transformers
cd transformers
if [[ ${{ matrix.transformers-version }} = pypi ]]; then
git checkout $(git describe --tags `git rev-list --tags --max-count=1`)
fi
pip install .[torch,testing]
- name: Show installed libraries
run: |
pip freeze
- name: Run Trainer tests
env:
WANDB_DISABLED: true
run: |
cd ../transformers
pytest -sv tests/trainer


@ -8,12 +8,15 @@ on:
env:
RUN_SLOW: "yes"
IS_GITHUB_CI: "1"
SLACK_API_TOKEN: ${{ secrets.SLACK_API_TOKEN }}
jobs:
run_all_tests_single_gpu:
runs-on: [self-hosted, docker-gpu, multi-gpu]
env:
CUDA_VISIBLE_DEVICES: "0"
TEST_TYPE: "single_gpu"
container:
image: huggingface/accelerate-gpu:latest
options: --gpus all --shm-size "16gb"
@ -28,12 +31,13 @@ jobs:
git config --global --add safe.directory '*'
git fetch && git checkout ${{ github.sha }}
pip install -e . --no-deps
pip install pytest-reportlog
pip install pytest-reportlog tabulate
- name: Run test on GPUs
run: |
source activate accelerate
make test
- name: Run examples on GPUs
run: |
source activate accelerate
@ -43,12 +47,14 @@ jobs:
- name: Generate Report
if: always()
run: |
pip install slack_sdk tabulate
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
run_all_tests_multi_gpu:
runs-on: [self-hosted, docker-gpu, multi-gpu]
env:
CUDA_VISIBLE_DEVICES: "0,1"
TEST_TYPE: "multi_gpu"
container:
image: huggingface/accelerate-gpu:latest
options: --gpus all --shm-size "16gb"
@ -63,13 +69,14 @@ jobs:
git config --global --add safe.directory '*'
git fetch && git checkout ${{ github.sha }}
pip install -e . --no-deps
pip install pytest-reportlog
pip install pytest-reportlog tabulate
- name: Run core and big modeling tests on GPUs
run: |
source activate accelerate
make test_big_modeling
make test_core
make test_big_modeling
make test_cli
- name: Run Integration tests on GPUs
run: |
@ -85,4 +92,11 @@ jobs:
- name: Generate Report
if: always()
run: |
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
pip install slack_sdk tabulate
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
run-integration-tests:
needs: [run_all_tests_single_gpu, run_all_tests_multi_gpu]
if: always()
uses: ./.github/workflows/self_hosted_integration_tests.yml


@ -7,11 +7,16 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Python 3.7
- name: Set up Python 3.8
uses: actions/setup-python@v3
with:
python-version: 3.7
python-version: 3.8
- name: Install Python dependencies
run: pip install -e .[quality]
- name: Run Quality check
run: make quality
run: make quality
- name: Check if failure
if: ${{ failure() }}
run: |
echo "Quality check failed. Please ensure the right dependency versions are installed with 'pip install -e .[quality]' and rerun 'make style; make quality;'" >> $GITHUB_STEP_SUMMARY


@ -26,8 +26,8 @@ jobs:
source activate accelerate
git config --global --add safe.directory '*'
git fetch && git checkout ${{ github.sha }}
pip install -e .[testing,test_trackers]
pip install pytest-reportlog
pip install -e .[testing,test_trackers] -U
pip install pytest-reportlog tabulate
- name: Run CLI tests
run: |
@ -47,6 +47,7 @@ jobs:
- name: Generate Report
if: always()
run: |
pip install tabulate
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
run_all_tests_multi_gpu:
@ -64,13 +65,8 @@ jobs:
source activate accelerate
git config --global --add safe.directory '*'
git fetch && git checkout ${{ github.sha }}
pip install -e .[testing,test_trackers]
pip install pytest-reportlog
- name: Run CLI tests
run: |
source activate accelerate
make test_cli
pip install -e .[testing,test_trackers] -U
pip install pytest-reportlog tabulate
- name: Run test on GPUs
run: |
@ -86,4 +82,5 @@ jobs:
- name: Generate Report
if: always()
run: |
pip install tabulate
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY


@ -0,0 +1,125 @@
# CI for specifically ensuring integrations work fine (`transformers` mainly) on GPUs
# Useful tips:
# - `working-directory` should be set to the root of the repo, which is cloned on the actual CI runner.
# It follows the directory structure of `actions-runner/_work/{repo_name}/{repo_name}/{cloned_repo}` on
# prem, but in Actions setting `working-directory` looks just in the `{repo_name}` level.
# - New integrations to test should each have their own job, and follow a strategy method where we check both
# the pypi and github versions.
# - Workflow call lets this be called from `build_and_run_tests.yml`
# - When using a docker container, it's recommended to set `--shm-size`, we use 16gb.
name: Integration Tests (push to "main")
on:
workflow_call:
workflow_dispatch:
env:
HF_HOME: ~/hf_cache
defaults:
run:
shell: bash
jobs:
run-trainer-tests:
container:
image: huggingface/accelerate-gpu:latest
options: --gpus all --shm-size "16gb"
runs-on: [self-hosted, docker-gpu, multi-gpu]
strategy:
fail-fast: false
matrix:
transformers-version: [
pypi,
github
]
cuda_visible_devices: [
"0",
"0,1"
]
steps:
- name: Update accelerate clone and pip install
working-directory: accelerate/
run:
source activate accelerate;
git config --global --add safe.directory '*';
git checkout main && git fetch && git checkout ${{ github.sha }};
pip install -e .;
- name: Update transformers clone & pip install
working-directory: transformers/
run: |
source activate accelerate
git config --global --add safe.directory '*'
git checkout main && git pull
if [[ ${{ matrix.transformers-version }} = pypi ]]; then
git checkout $(git describe --tags `git rev-list --tags --max-count=1`)
fi
pip install .[torch,deepspeed-testing]
- name: Show installed libraries
run: |
source activate accelerate;
pip freeze
- name: Run trainer tests
working-directory: transformers/
env:
CUDA_VISIBLE_DEVICES: ${{ matrix.cuda_visible_devices }}
WANDB_DISABLED: true
run: |
source activate accelerate;
pytest -sv tests/trainer
- name: Run deepspeed tests
working-directory: transformers/
env:
CUDA_VISIBLE_DEVICES: ${{ matrix.cuda_visible_devices }}
WANDB_DISABLED: true
run: |
source activate accelerate;
pytest -sv tests/deepspeed
run-skorch-tests:
container:
image: huggingface/accelerate-gpu:latest
options: --gpus all --shm-size "16gb"
runs-on: [self-hosted, docker-gpu, multi-gpu]
strategy:
fail-fast: false
matrix:
skorch-version: [
pypi,
github
]
steps:
- name: Update accelerate clone and pip install
working-directory: accelerate/
run:
source activate accelerate;
git config --global --add safe.directory '*';
git checkout main && git fetch && git checkout ${{ github.sha }};
pip install -e .;
- name: Update skorch clone & pip install
working-directory: skorch/
run: |
source activate accelerate
git config --global --add safe.directory '*'
git checkout main && git pull
if [[ ${{ matrix.skorch-version }} = pypi ]]; then
git checkout $(git describe --tags `git rev-list --tags --max-count=1`)
fi
pip install .[testing]
pip install flaky
- name: Show installed libraries
run: |
source activate accelerate;
pip freeze
- name: Run skorch tests
working-directory: skorch/
run: |
source activate accelerate;
pytest -sv -k TestAccelerate


@ -18,7 +18,7 @@ jobs:
- name: Setup Python
uses: actions/setup-python@v1
with:
python-version: 3.7
python-version: 3.8
- name: Install requirements
run: |


@ -23,7 +23,7 @@ jobs:
matrix:
pytorch-version: [
latest,
minimum
minimum,
]
test-kind: [
test_prod,
@ -39,10 +39,10 @@ jobs:
]
steps:
- uses: actions/checkout@v3.1.0
- name: Set up python 3.7
- name: Set up python 3.8
uses: actions/setup-python@v3
with:
python-version: 3.7
python-version: 3.8
- name: Activate python cache
uses: actions/cache@v3
@ -58,8 +58,8 @@ jobs:
if [[ ${{ matrix.test-kind }} = test_prod ]]; then pip install -e .[test_prod]; fi
if [[ ${{ matrix.test-kind }} != test_prod ]]; then pip install -e .[testing,test_trackers]; fi
if [[ ${{ matrix.test-kind }} = test_rest ]]; then pip uninstall comet_ml -y; fi
if [[ ${{ matrix.pytorch-version }} = minimum ]]; then pip install torch==1.6.0; fi
pip install pytest-reportlog
if [[ ${{ matrix.test-kind }} = minimum ]]; then pip install torch==1.10.0; fi
pip install pytest-reportlog tabulate
- name: Run Tests
env:


@ -0,0 +1,16 @@
name: Upload PR Documentation
on:
workflow_run:
workflows: ["Build PR Documentation"]
types:
- completed
jobs:
build:
uses: huggingface/doc-builder/.github/workflows/upload_pr_documentation.yml@main
with:
package_name: accelerate
secrets:
hf_token: ${{ secrets.HF_DOC_BUILD_PUSH }}
comment_bot_token: ${{ secrets.COMMENT_BOT_TOKEN }}

.gitignore (5 changes)

@ -138,4 +138,7 @@ dmypy.json
.DS_Store
# More test things
wandb
wandb
# ruff
.ruff_cache


@ -152,7 +152,7 @@ Follow these steps to start contributing:
$ make test
```
`accelerate` relies on `black` and `isort` to format its source code
`accelerate` relies on `black` and `ruff` to format its source code
consistently. After you make changes, apply automatic style corrections and code verifications
that can't be automated in one go with:
@ -165,7 +165,7 @@ Follow these steps to start contributing:
$ make style
```
`accelerate` also uses `flake8` and a few custom scripts to check for coding mistakes. Quality
`accelerate` also uses a few custom scripts to check for coding mistakes. Quality
control runs in CI, however you can also run the same checks with:
```bash


@ -1,6 +1,6 @@
.PHONY: quality style test docs
.PHONY: quality style test docs utils
check_dirs := tests src examples benchmarks
check_dirs := tests src examples benchmarks utils
# Check that source code meets quality standards
@ -8,27 +8,26 @@ extra_quality_checks:
python utils/check_copies.py
python utils/check_dummies.py
python utils/check_repo.py
python utils/style_doc.py src/accelerate docs/source --max_len 119
doc-builder style src/accelerate docs/source --max_len 119
# this target runs checks on all files
quality:
black --check $(check_dirs)
isort --check-only $(check_dirs)
flake8 $(check_dirs)
python utils/style_doc.py src/accelerate docs/source --max_len 119 --check_only
black --required-version 23 --check $(check_dirs)
ruff $(check_dirs)
doc-builder style src/accelerate docs/source --max_len 119 --check_only
# Format source code automatically and check if there are any problems left that need manual fixing
style:
black $(check_dirs)
isort $(check_dirs)
python utils/style_doc.py src/accelerate docs/source --max_len 119
black --required-version 23 $(check_dirs)
ruff $(check_dirs) --fix
doc-builder style src/accelerate docs/source --max_len 119
# Run tests for the library
test:
python -m pytest -s -v ./tests/ --ignore=./tests/test_examples.py $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_all.log",)
test_big_modeling:
python -m pytest -s -v ./tests/test_big_modeling.py $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_big_modeling.log",)
python -m pytest -s -v ./tests/test_big_modeling.py ./tests/test_modeling_utils.py $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_big_modeling.log",)
test_core:
python -m pytest -s -v ./tests/ --ignore=./tests/test_examples.py --ignore=./tests/deepspeed --ignore=./tests/test_big_modeling.py \


@ -16,12 +16,12 @@ limitations under the License.
<p align="center">
<br>
<img src="docs/source/imgs/accelerate_logo.png" width="400"/>
<img src="https://raw.githubusercontent.com/huggingface/accelerate/main/docs/source/imgs/accelerate_logo.png" width="400"/>
<br>
<p>
<p align="center">
<!-- Uncomment when CircleCI is setup
<!-- Uncomment when CircleCI is set up
<a href="https://circleci.com/gh/huggingface/accelerate">
<img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/master">
</a>
@ -91,7 +91,7 @@ Here is an example:
optimizer.step()
```
As you can see in this example, by adding 5-lines to any standard PyTorch training script you can now run on any kind of single or distributed node setting (single CPU, single GPU, multi-GPUs and TPUs) as well as with or without mixed precision (fp16).
As you can see in this example, by adding 5-lines to any standard PyTorch training script you can now run on any kind of single or distributed node setting (single CPU, single GPU, multi-GPUs and TPUs) as well as with or without mixed precision (fp8, fp16, bf16).
In particular, the same code can then be run without modification on your local machine for debugging or your training environment.
@ -132,11 +132,11 @@ In particular, the same code can then be run without modification on your local
optimizer.step()
```
Want to learn more? Check out the [documentation](https://huggingface.co/docs/accelerate) or have look at our [examples](https://github.com/huggingface/accelerate/tree/main/examples).
Want to learn more? Check out the [documentation](https://huggingface.co/docs/accelerate) or have a look at our [examples](https://github.com/huggingface/accelerate/tree/main/examples).
## Launching script
🤗 Accelerate also provides an optional CLI tool that allows you to quickly configure and test your training environment before launching the scripts. No need to remember how to use `torch.distributed.launch` or to write a specific launcher for TPU training!
🤗 Accelerate also provides an optional CLI tool that allows you to quickly configure and test your training environment before launching the scripts. No need to remember how to use `torch.distributed.run` or to write a specific launcher for TPU training!
On your machine(s) just run:
```bash
@ -155,7 +155,17 @@ For instance, here is how you would run the GLUE example on the MRPC task (from
accelerate launch examples/nlp_example.py
```
This CLI tool is **optional**, and you can still use `python my_script.py` or `python -m torch.distributed.launch my_script.py` at your convenance.
This CLI tool is **optional**, and you can still use `python my_script.py` or `python -m torchrun my_script.py` at your convenience.
You can also directly pass in the arguments you would to `torchrun` as arguments to `accelerate launch` if you wish to not run `accelerate config`.
For example, here is how to launch on two GPUs:
```bash
accelerate launch --multi_gpu --num_processes 2 examples/nlp_example.py
```
To learn more, check the CLI documentation available [here](https://huggingface.co/docs/accelerate/package_reference/cli).
## Launching multi-CPU run using MPI
@ -168,15 +178,15 @@ mpirun -np 2 python examples/nlp_example.py
## Launching training using DeepSpeed
🤗 Accelerate supports training on single/multiple GPUs using DeepSpeed. To use it, you don't need to change anything in your training code; you can set everything using just `accelerate config`. However, if you desire to tweak your DeepSpeed related args from your python script, we provide you the `DeepSpeedPlugin`.
🤗 Accelerate supports training on single/multiple GPUs using DeepSpeed. To use it, you don't need to change anything in your training code; you can set everything using just `accelerate config`. However, if you desire to tweak your DeepSpeed related args from your Python script, we provide you the `DeepSpeedPlugin`.
```python
from accelerator import Accelerator, DeepSpeedPlugin
from accelerate import Accelerator, DeepSpeedPlugin
# deepspeed needs to know your gradient accumulation steps before hand, so don't forget to pass it
# deepspeed needs to know your gradient accumulation steps beforehand, so don't forget to pass it
# Remember you still need to do gradient accumulation by yourself, just like you would have done without deepspeed
deepspeed_plugin = DeepSpeedPlugin(zero_stage=2, gradient_accumulation_steps=2)
accelerator = Accelerator(fp16=True, deepspeed_plugin=deepspeed_plugin)
accelerator = Accelerator(mixed_precision='fp16', deepspeed_plugin=deepspeed_plugin)
# How to save your 🤗 Transformer?
accelerator.wait_for_everyone()
@ -200,7 +210,7 @@ An example can be found in [this notebook](https://github.com/huggingface/notebo
## Why should I use 🤗 Accelerate?
You should use 🤗 Accelerate when you want to easily run your training scripts in a distributed environment without having to renounce full control over your training loop. This is not a high-level framework above PyTorch, just a thin wrapper so you don't have to learn a new library, In fact the whole API of 🤗 Accelerate is in one class, the `Accelerator` object.
You should use 🤗 Accelerate when you want to easily run your training scripts in a distributed environment without having to renounce full control over your training loop. This is not a high-level framework above PyTorch, just a thin wrapper so you don't have to learn a new library. In fact, the whole API of 🤗 Accelerate is in one class, the `Accelerator` object.
## Why shouldn't I use 🤗 Accelerate?
@ -208,18 +218,24 @@ You shouldn't use 🤗 Accelerate if you don't want to write a training loop you
## Frameworks using 🤗 Accelerate
If you like the simplicity of 🤗 Accelerate but would prefer a higher-level abstraction around your training loop, some frameworks that are built on top of 🤗 Accelerate are listed below:
If you like the simplicity of 🤗 Accelerate but would prefer a higher-level abstraction around its capabilities, some frameworks and libraries that are built on top of 🤗 Accelerate are listed below:
* [Animus](https://github.com/Scitator/animus) is a minimalistic framework to run machine learning experiments. Animus highlights common "breakpoints" in ML experiments and provides a unified interface for them within [IExperiment](https://github.com/Scitator/animus/blob/main/animus/core.py#L76).
* [Catalyst](https://github.com/catalyst-team/catalyst#getting-started) is a PyTorch framework for Deep Learning Research and Development. It focuses on reproducibility, rapid experimentation, and codebase reuse so you can create something new rather than write yet another train loop. Catalyst provides a [Runner](https://catalyst-team.github.io/catalyst/api/core.html#runner) to connect all parts of the experiment: hardware backend, data transformations, model train, and inference logic.
* [Catalyst](https://github.com/catalyst-team/catalyst#getting-started) is a PyTorch framework for Deep Learning Research and Development. It focuses on reproducibility, rapid experimentation, and codebase reuse so you can create something new rather than write yet another train loop. Catalyst provides a [Runner](https://catalyst-team.github.io/catalyst/api/core.html#runner) to connect all parts of the experiment: hardware backend, data transformations, model training, and inference logic.
* [fastai](https://github.com/fastai/fastai#installing) is a PyTorch framework for Deep Learning that simplifies training fast and accurate neural nets using modern best practices. fastai provides a [Learner](https://docs.fast.ai/learner.html#Learner) to handle the training, fine-tuning, and inference of deep learning algorithms.
* [Finetuner](https://github.com/jina-ai/finetuner) is a service that enables models to create higher-quality embeddings for semantic search, visual similarity search, cross-modal text<->image search, recommendation systems, clustering, duplication detection, anomaly detection, or other uses.
* [InvokeAI](https://github.com/invoke-ai/InvokeAI) is a creative engine for Stable Diffusion models, offering industry-leading WebUI, terminal usage support, and serves as the foundation for many commercial products.
* [Kornia](https://kornia.readthedocs.io/en/latest/get-started/introduction.html) is a differentiable library that allows classical computer vision to be integrated into deep learning models. Kornia provides a [Trainer](https://kornia.readthedocs.io/en/latest/x.html#kornia.x.Trainer) with the specific purpose to train and fine-tune the supported deep learning algorithms within the library.
* [pytorch-accelerated](https://github.com/Chris-hughes10/pytorch-accelerated) is a lightweight training library, with a streamlined feature set centred around a general-purpose [Trainer](https://pytorch-accelerated.readthedocs.io/en/latest/trainer.html), that places a huge emphasis on simplicity and transparency; enabling users to understand exactly what is going on under the hood, but without having to write and maintain the boilerplate themselves!
* [Open Assistant](https://projects.laion.ai/Open-Assistant/) is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
* [pytorch-accelerated](https://github.com/Chris-hughes10/pytorch-accelerated) is a lightweight training library, with a streamlined feature set centered around a general-purpose [Trainer](https://pytorch-accelerated.readthedocs.io/en/latest/trainer.html), that places a huge emphasis on simplicity and transparency; enabling users to understand exactly what is going on under the hood, but without having to write and maintain the boilerplate themselves!
* [Stable Diffusion web UI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) is an open-source browser-based easy-to-use interface based on the Gradio library for Stable Diffusion.
* [torchkeras](https://github.com/lyhue1991/torchkeras) is a simple tool for training PyTorch models in a Keras style; a dynamic and beautiful plot is provided in the notebook to monitor your loss or metric.
* [transformers](https://github.com/huggingface/transformers) as a tool for helping train state-of-the-art machine learning models in PyTorch, Tensorflow, and JAX. (Accelerate is the backend for the PyTorch side).
## Installation
This repository is tested on Python 3.6+ and PyTorch 1.4.0+
This repository is tested on Python 3.8+ and PyTorch 1.10.0+
You should install 🤗 Accelerate in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're unfamiliar with Python virtual environments, check out the [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
@ -240,7 +256,8 @@ pip install accelerate
- multi-GPU on one node (machine)
- multi-GPU on several nodes (machines)
- TPU
- FP16 with native AMP (apex on the roadmap)
- FP16/BFloat16 mixed precision
- FP8 mixed precision with [Transformer Engine](https://github.com/NVIDIA/TransformerEngine)
- DeepSpeed support (Experimental)
- PyTorch Fully Sharded Data Parallel (FSDP) support (Experimental)
- Megatron-LM support (Experimental)


@ -16,12 +16,12 @@ import argparse
import time
import torch
import transformers
from accelerate.utils import compute_module_sizes
from measures_util import end_measure, log_measures, start_measure
from transformers import AutoConfig, AutoModelForCausalLM, AutoModelForSeq2SeqLM, AutoTokenizer
from accelerate.utils import compute_module_sizes
DEFAULT_MODELS = {
"gpt-j-6b": {"is_causal": True, "model": "sgugger/sharded-gpt-j-6B", "tokenizer": "EleutherAI/gpt-j-6B"},


@ -2,9 +2,8 @@ import gc
import threading
import time
import torch
import psutil
import torch
class PeakCPUMemory:


@ -1,7 +1,7 @@
# Builds CPU-only Docker image of PyTorch
# Uses multi-staged approach to reduce size
# Stage 1
FROM python:3.7-slim as compile-image
FROM python:3.8-slim as compile-image
ARG DEBIAN_FRONTEND=noninteractive
@ -25,7 +25,7 @@ RUN python3 -m pip install --no-cache-dir \
--extra-index-url https://download.pytorch.org/whl/cpu
# Stage 2
FROM python:3.7-slim AS build-image
FROM python:3.8-slim AS build-image
COPY --from=compile-image /opt/venv /opt/venv
RUN useradd -ms /bin/bash user
USER user


@ -4,7 +4,7 @@
# Use base conda image to reduce time
FROM continuumio/miniconda3:latest AS compile-image
# Specify py version
ENV PYTHON_VERSION=3.7.3
ENV PYTHON_VERSION=3.8
# Install apt libs
RUN apt-get update && \
apt-get install -y curl git wget && \
@ -23,7 +23,9 @@ SHELL ["/bin/bash", "-c"]
RUN source activate accelerate && \
python3 -m pip install --no-cache-dir \
git+https://github.com/huggingface/accelerate#egg=accelerate[testing,test_trackers] \
--extra-index-url https://download.pytorch.org/whl/cu113
--extra-index-url https://download.pytorch.org/whl/cu117
RUN python3 -m pip install --no-cache-dir bitsandbytes
# Stage 2
FROM nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04 AS build-image

docs/README.md (new file, 267 lines)

@ -0,0 +1,267 @@
<!---
Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
# Generating the documentation
To generate the documentation, you first have to build it. Several packages are necessary to build the docs;
you can install them with the following command, at the root of the code repository:
```bash
pip install -e ".[docs]"
```
Then you need to install our special tool that builds the documentation:
```bash
pip install git+https://github.com/huggingface/doc-builder
```
---
**NOTE**
You only need to generate the documentation to inspect it locally (if you're planning changes and want to
check how they look before committing for instance). You don't have to commit the built documentation.
---
## Building the documentation
Once you have set up the `doc-builder` and additional packages, you can generate the documentation by
typing the following command:
```bash
doc-builder build accelerate docs/source/ --build_dir ~/tmp/test-build
```
You can adapt the `--build_dir` to set any temporary folder that you prefer. This command will create it and generate
the MDX files that will be rendered as the documentation on the main website. You can inspect them in your favorite
Markdown editor.
## Previewing the documentation
To preview the docs, first install the `watchdog` module with:
```bash
pip install watchdog
```
Then run the following command:
```bash
doc-builder preview {package_name} {path_to_docs}
```
For example:
```bash
doc-builder preview accelerate docs/source/
```
The docs will be viewable at [http://localhost:3000](http://localhost:3000). You can also preview the docs once you have opened a PR. You will see a bot add a comment with a link to where the documentation with your changes lives.
---
**NOTE**
The `preview` command only works with existing doc files. When you add a completely new file, you need to update `_toctree.yml` & restart `preview` command (`ctrl-c` to stop it & call `doc-builder preview ...` again).
---
## Adding a new element to the navigation bar
Accepted files are Markdown (.md).
Create a file with its extension and put it in the source directory. You can then link it to the toc-tree by putting
the filename without the extension in the [`_toctree.yml`](https://github.com/huggingface/accelerate/blob/main/docs/source/_toctree.yml) file.
## Renaming section headers and moving sections
It helps to keep the old links working when renaming the section header and/or moving sections from one document to another. This is because the old links are likely to be used in Issues, Forums, and social media, and it would make for a far superior user experience if users reading those months later could still easily navigate to the originally intended information.
Therefore, we simply keep a little map of moved sections at the end of the document where the original section was. The key is to preserve the original anchor.
So if you renamed a section from: "Section A" to "Section B", then you can add at the end of the file:
```
Sections that were moved:
[ <a href="#section-b">Section A</a><a id="section-a"></a> ]
```
and of course, if you moved it to another file, then:
```
Sections that were moved:
[ <a href="../new-file#section-b">Section A</a><a id="section-a"></a> ]
```
Use the relative style to link to the new file so that the versioned docs continue to work.
## Writing Documentation - Specification
The `huggingface/accelerate` documentation follows the
[Google documentation](https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html) style for docstrings,
although we can write them directly in Markdown.
### Adding a new tutorial
Adding a new tutorial or section is done in two steps:
- Add a new file under `./source`. This file can either be ReStructuredText (.rst) or Markdown (.md).
- Link that file in `./source/_toctree.yml` on the correct toc-tree.
Make sure to put your new file under the proper section. It's unlikely to go in the first section (*Get Started*), so
depending on the intended targets (beginners, more advanced users, or researchers) it should go in sections two, three, or
four.
### Writing source documentation
Values that should be put in `code` should either be surrounded by backticks: \`like so\`. Note that argument names
and objects like True, None, or any strings should usually be put in `code`.
When mentioning a class, function, or method, it is recommended to use our syntax for internal links so that our tool
adds a link to its documentation with this syntax: \[\`XXXClass\`\] or \[\`function\`\]. This requires the class or
function to be in the main package.
If you want to create a link to some internal class or function, you need to
provide its path. For instance: \[\`utils.gather\`\]. This will be converted into a link with
`utils.gather` in the description. To get rid of the path and only keep the name of the object you are
linking to in the description, add a ~: \[\`~utils.gather\`\] will generate a link with `gather` in the description.
The same works for methods, so you can use either \[\`XXXClass.method\`\] or \[\`~XXXClass.method\`\].
#### Defining arguments in a method
Arguments should be defined with the `Args:` (or `Arguments:` or `Parameters:`) prefix, followed by a line return and
an indentation. The argument should be followed by its type, with its shape if it is a tensor, a colon, and its
description:
```
Args:
n_layers (`int`): The number of layers of the model.
```
If the description is too long to fit in one line (more than 119 characters in total), another indentation is necessary
before writing the description after the argument.
Finally, to maintain uniformity, if any *one* description is too long to fit on one line, the
rest of the parameters should follow suit and have an indentation before their description.
Here's an example showcasing everything so far:
```
Args:
gradient_accumulation_steps (`int`, *optional*, default to 1):
The number of steps that should pass before gradients are accumulated. A number > 1 should be combined with `Accelerator.accumulate`.
cpu (`bool`, *optional*):
Whether or not to force the script to execute on CPU. Will ignore GPU available if set to `True` and force the execution on one process only.
```
For optional arguments or arguments with defaults we follow the following syntax: imagine we have a function with the
following signature:
```
def my_function(x: str = None, a: float = 1):
```
then its documentation should look like this:
```
Args:
x (`str`, *optional*):
This argument controls ... and has a description longer than 119 chars.
a (`float`, *optional*, defaults to 1):
This argument is used to ... and has a description longer than 119 chars.
```
Note that we always omit the "defaults to \`None\`" when None is the default for any argument. Also note that even
if the first line describing your argument type and its default gets long, you can't break it on several lines. You can
however write as many lines as you want in the indented description (see the example above with `gradient_accumulation_steps`).
#### Writing a multi-line code block
Multi-line code blocks can be useful for displaying examples. They are done between two lines of three backticks as usual in Markdown:
````
```python
# first line of code
# second line
# etc
```
````
#### Writing a return block
The return block should be introduced with the `Returns:` prefix, followed by a line return and an indentation.
The first line should be the type of the return, followed by a line return. No need to indent further for the elements
building the return.
Here's an example of a single value return:
```
Returns:
`List[int]`: A list of integers in the range [0, 1] --- 1 for a special token, 0 for a sequence token.
```
Here's an example of a tuple return, comprising several objects:
```
Returns:
`tuple(torch.FloatTensor)` comprising various elements depending on the configuration ([`BertConfig`]) and inputs:
- **loss** (*optional*, returned when `masked_lm_labels` is provided) `torch.FloatTensor` of shape `(1,)` --
Total loss is the sum of the masked language modeling loss and the next sequence prediction (classification) loss.
- **prediction_scores** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`) --
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
```
## Styling the docstring
We have an automatic script running with the `make style` command that will make sure that:
- the docstrings fully take advantage of the line width
- all code examples are formatted using black, like the code of the Transformers library
This script may have some weird failures if you made a syntax mistake or if you uncover a bug. Therefore, it's
recommended to commit your changes before running `make style`, so you can revert the changes done by that script
easily.
## Writing documentation examples
The syntax for Example docstrings can look as follows:
```
Example:
```python
>>> import time
>>> from accelerate import Accelerator
>>> accelerator = Accelerator()
>>> if accelerator.is_main_process:
... time.sleep(2)
... else:
... print("I'm waiting for the main process to finish its sleep...")
>>> accelerator.wait_for_everyone()
>>> # Should print on every process at the same time
>>> print("Everyone is here")
```
```
The docstring should give a minimal, clear example of how the respective function
is to be used in inference and also include the expected (ideally sensible)
output.
Often, readers will try out the example before even going through the function
or class definitions. Therefore, it is of utmost importance that the example
works as expected.


@ -17,36 +17,46 @@
title: Launching distributed training from Jupyter Notebooks
title: Tutorials
- sections:
- local: usage_guides/explore
title: Start Here!
- local: usage_guides/training_zoo
title: Example Zoo
- local: usage_guides/big_modeling
title: How to perform inference on large models with small resources
- local: usage_guides/quantization
title: How to quantize model
- local: usage_guides/distributed_inference
title: How to perform distributed inference with normal resources
- local: usage_guides/gradient_accumulation
title: Performing gradient accumulation
- local: usage_guides/fsdp
title: Fully Sharded Data Parallelism
- local: usage_guides/local_sgd
title: Accelerating training with local SGD
- local: usage_guides/checkpoint
title: Saving and loading training states
- local: usage_guides/deepspeed
title: How to use DeepSpeed
- local: usage_guides/tracking
title: Using experiment trackers
- local: usage_guides/big_modeling
title: How to use large models with small resources
- local: usage_guides/memory
title: How to avoid CUDA Out-of-Memory
- local: usage_guides/sagemaker
title: Using 🤗 Accelerate on SageMaker
- local: usage_guides/mps
title: How to use Apple Silicon M1 GPUs
- local: usage_guides/deepspeed
title: How to use DeepSpeed
- local: usage_guides/fsdp
title: How to use Fully Sharded Data Parallelism
- local: usage_guides/megatron_lm
title: How to use Megatron-LM
- local: usage_guides/training_zoo
title: 🤗 Accelerate Example Zoo
- local: usage_guides/sagemaker
title: How to use 🤗 Accelerate with SageMaker
- local: usage_guides/ipex
title: How to use 🤗 Accelerate with Intel® Extension for PyTorch for cpu
title: How-To Guides
- sections:
- local: concept_guides/performance
title: Comparing performance across distributed setups
- local: concept_guides/gradient_synchronization
title: Gradient synchronization
- local: concept_guides/deferring_execution
title: Executing and deferring jobs
- local: concept_guides/gradient_synchronization
title: Gradient synchronization
- local: concept_guides/training_tpu
title: TPU best practices
title: Concepts and fundamentals
@ -75,4 +85,4 @@
title: Utility functions and classes
- local: package_reference/megatron_lm
title: Megatron-LM Utilities
title: "Reference"
title: "Reference"

View File

@ -8,11 +8,14 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Installation and Configuration
Before you start, you will need to set up your environment, install the appropriate packages, and configure 🤗 Accelerate. 🤗 Accelerate is tested on **Python 3.7+**.
Before you start, you will need to set up your environment, install the appropriate packages, and configure 🤗 Accelerate. 🤗 Accelerate is tested on **Python 3.8+**.
## Installing 🤗 Accelerate

View File

@ -8,6 +8,9 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Launching your 🤗 Accelerate scripts
@ -36,7 +39,7 @@ for batch in training_dataloader:
But how do you run this code and have it utilize the special hardware available to it?
First you should rewrite the above code into a function, and make it callable as a script. For example:
First, you should rewrite the above code into a function, and make it callable as a script. For example:
```diff
from accelerate import Accelerator
@ -61,7 +64,7 @@ First you should rewrite the above code into a function, and make it callable as
+ main()
```
Next you need to launch it with `accelerate launch`.
Next, you need to launch it with `accelerate launch`.
<Tip warning={true}>
@ -74,7 +77,7 @@ Next you need to launch it with `accelerate launch`.
## Using accelerate launch
🤗 Accelerate has a special CLI command to help you launch your code in your system through `accelerate launch`.
This command wraps around all of the different commands needed to launch your script on various platforms, without you having to remember what each of them are.
This command wraps around all of the different commands needed to launch your script on various platforms, without you having to remember what each of them is.
<Tip>
@ -88,7 +91,7 @@ You can launch your script quickly by using:
accelerate launch {script_name.py} --arg1 --arg2 ...
```
Just put `accelerate launch` at the start of your command, and pass in additional arguments and parameters to your script afterwards like normal!
Just put `accelerate launch` at the start of your command, and pass in additional arguments and parameters to your script afterward like normal!
Since this runs the various torch spawn methods, all of the expected environment variables can be modified here as well.
For example, here is how to use `accelerate launch` with a single GPU:
@ -105,6 +108,12 @@ Here is how you would use all GPUs and train with mixed precision disabled:
accelerate launch --multi_gpu {script_name.py} {--arg1} {--arg2} ...
```
Or by specifying a number of GPUs to use:
```bash
accelerate launch --num_processes=2 {script_name.py} {--arg1} {--arg2} ...
```
To get more specific, you should pass in the needed parameters yourself. For instance, here is how you
would also launch that same script on two GPUs using mixed precision while avoiding all of the warnings:
@ -130,6 +139,21 @@ For a visualization of this difference, that earlier `accelerate launch` on mult
MIXED_PRECISION="fp16" torchrun --nproc_per_node=2 --num_machines=1 {script_name.py} {--arg1} {--arg2} ...
```
You can also launch your script utilizing the launch CLI as a python module itself, enabling the ability to pass in other python-specific
launching behaviors. To do so, use `accelerate.commands.launch` instead of `accelerate launch`:
```bash
python -m accelerate.commands.launch --num_processes=2 {script_name.py} {--arg1} {--arg2}
```
If you want to execute the script with any other python flags, you can pass them in as well similar to `-m`, such as
the below example enabling unbuffered stdout and stderr:
```bash
python -u -m accelerate.commands.launch --num_processes=2 {script_name.py} {--arg1} {--arg2}
```
## Why you should always use `accelerate config`
Why is it useful to the point you should **always** run `accelerate config`?
@ -175,4 +199,4 @@ use_cpu: false
Launching a script from the location of that custom yaml file looks like the following:
```bash
accelerate launch --config_file {path/to/config/my_config_file.yaml} {script_name.py} {--arg1} {--arg2} ...
```

View File

@ -8,13 +8,16 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Migrating your code to 🤗 Accelerate
This tutorial will detail how to easily convert existing PyTorch code to use 🤗 Accelerate!
You'll see that by just changing a few lines of code, 🤗 Accelerate can perform its magic and get you on
your way towards running your code on distributed systems with ease!
your way toward running your code on distributed systems with ease!
## The base training loop
@ -65,7 +68,7 @@ change the definition of `device` to come from [`Accelerator`]:
### Preparing your objects
Next you need to pass all of the important objects related to training into [`~Accelerator.prepare`]. 🤗 Accelerate will
Next, you need to pass all of the important objects related to training into [`~Accelerator.prepare`]. 🤗 Accelerate will
make sure everything is setup in the current environment for you to start training:
```
@ -73,7 +76,7 @@ model, optimizer, training_dataloader, scheduler = accelerator.prepare(
model, optimizer, training_dataloader, scheduler
)
```
These objects are returned in the same order they were sent in with. By default when using `device_placement=True`, all of the objects that can be sent to the right device will be.
These objects are returned in the same order they were sent in. By default when using `device_placement=True`, all of the objects that can be sent to the right device will be.
If you need to work with data that isn't passed to [`~Accelerator.prepare`] but should be on the active device, you should pass in the `device` you made earlier.
<Tip warning={true}>
@ -121,3 +124,6 @@ for batch in training_dataloader:
scheduler.step()
```
## More Resources
To see more ways to migrate to 🤗 Accelerate, check out our [interactive migration tutorial](https://huggingface.co/docs/accelerate/usage_guides/explore), which showcases other items to watch out for when using Accelerate and how to handle them quickly.

View File

@ -8,9 +8,12 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Launching Multi-Node Training from a Jupyter Environment
# Launching Multi-GPU Training from a Jupyter Environment
This tutorial teaches you how to fine-tune a computer vision model with 🤗 Accelerate from a Jupyter Notebook on a distributed system.
You will also learn how to set up a few requirements for ensuring your environment is configured properly, your data has been prepared properly, and finally how to launch training.
@ -35,7 +38,7 @@ The following code will restart Jupyter after writing the configuration, as CUDA
<Tip warning={true}>
CUDA can't be initialized more than once on a multi-node system. It's fine to debug in the notebook and have calls to CUDA, but in order to finally train a full cleanup and restart will need to be performed.
CUDA can't be initialized more than once on a multi-GPU system. It's fine to debug in the notebook and have calls to CUDA, but in order to finally train a full cleanup and restart will need to be performed.
</Tip>
@ -153,7 +156,7 @@ def get_dataloaders(batch_size: int = 64):
random_perm = np.random.permutation(len(fnames))
cut = int(0.8 * len(fnames))
train_split = random_perm[:cut]
eval_split = random_perm[:cut]
eval_split = random_perm[cut:]
# For training a simple RandomResizedCrop will be used
train_tfm = Compose([RandomResizedCrop((224, 224), scale=(0.5, 1.0)), ToTensor()])
@ -337,7 +340,7 @@ def training_loop(mixed_precision="fp16", seed: int = 42, batch_size: int = 64):
mean = torch.tensor(model.default_cfg["mean"])[None, :, None, None]
std = torch.tensor(model.default_cfg["std"])[None, :, None, None]
# To make this constant available on the active device, set it to the accelerator device
# To make these constants available on the active device, set it to the accelerator device
mean = mean.to(accelerator.device)
std = std.to(accelerator.device)
@ -426,4 +429,4 @@ This notebook showed how to perform distributed training from inside of a Jupyte
- Make sure to save any code that uses CUDA (or CUDA imports) for the function passed to [`notebook_launcher`]
- Set the `num_processes` to be the number of devices used for training (such as number of GPUs, CPUs, TPUs, etc)
- If using the TPU, declare your model outside the training loop function (see the launch sketch below)
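Putting these tips together, a minimal launch cell might look like the following sketch, assuming `training_loop` is the training function defined earlier in this tutorial:
```python
from accelerate import notebook_launcher

# Arguments mirror training_loop(mixed_precision, seed, batch_size) from above;
# num_processes matches the number of devices used for training.
args = ("fp16", 42, 64)
notebook_launcher(training_loop, args, num_processes=2)
```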

View File

@ -8,6 +8,9 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Overview

View File

@ -8,6 +8,9 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Deferring Executions
@ -27,7 +30,7 @@ accelerator.wait_for_everyone()
This instruction will block all the processes that arrive first until all the other processes have reached that
point (if you run your script on just one GPU or CPU, this won't do anything).
A few example cases for when to use this utility are listed below:
A few example cases of when to use this utility are listed below:
<Tip>
@ -38,7 +41,7 @@ A few example cases for when to use this utility are listed below:
## Downloading a Dataset
When downloading a dataset, you should download it first on the main process and then loading the cached dataset in afterwards
When downloading a dataset, you should download it first on the main process and then load the cached dataset afterward
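A minimal sketch of this pattern with [`~Accelerator.main_process_first`], assuming an existing `accelerator` object and 🤗 Datasets installed:
```python
from datasets import load_dataset

# The main process enters the block first and downloads (and caches) the
# dataset; the remaining processes then load the cached copy.
with accelerator.main_process_first():
    datasets = load_dataset("glue", "mrpc")
```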
<Tip>
@ -104,4 +107,4 @@ with accelerator.main_process_first():
batched=True,
remove_columns=["idx", "sentence1", "sentence2"],
)
```

View File

@ -8,6 +8,9 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Gradient Synchronization
@ -45,22 +48,30 @@ training in a distributed setup. But how does this risk slowing down your code?
In DDP (distributed data parallel), the specific order in which processes are performed and run is expected
at specific points, and these must also occur at roughly the same time before moving on.
The most direct example is when you update all of the parameters in a model through `.backward()`. All instances of the model
need to have updated their gradients, collated, and updated again before moving on to the next batch of data. But when performing
gradient accumulation, you accumulate `n` losses and skip `.backward()` until `n` batches have been reached. This
can cause a significant slowdown since all the processes need to communicate with each other more times than needed. How
can you avoid this overhead?
The most direct example is when you update model parameters through
`optimizer.step()`.
Without gradient accumulation, all instances of the model need to have updated
their gradients computed, collated, and updated before moving on to the next
batch of data.
When performing gradient accumulation, you accumulate `n` loss gradients and
skip `optimizer.step()` until `n` batches have been reached. As all training
processes only need to synchronize by the time `optimizer.step()` is called,
without any modification to your training step, this needless inter-process
communication can cause a significant slowdown.
How can you avoid this overhead?
## Solving the slowdown problem
Since you are skipping these batches, their gradients do not need to be synchronized until the point where `.backward()` is actually called.
Since you are skipping model parameter updates when training on these batches, their gradients do not need to be synchronized until the point where `optimizer.step()` is actually called.
PyTorch cannot automagically tell when you need to do this, but they do provide a tool to help through the [`no_sync`](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel.no_sync) context manager
that is added to your model after converting it to DDP.
Under this context manager, PyTorch will skip synchronizing the gradients when `.backward()` is called, and the first call to `.backward()` outside this
Under this context manager, PyTorch will skip synchronizing the gradients when
`.backward()` is called, and the first call to `.backward()` outside this
context manager will trigger the synchronization. See an example below:
```python
ddp_model, dataloader = accelerator.prepare(model, dataloader)
ddp_model, dataloader, optimizer = accelerator.prepare(model, dataloader, optimizer)
for index, batch in enumerate(dataloader):
inputs, targets = batch
@ -76,13 +87,14 @@ for index, batch in enumerate(dataloader):
outputs = ddp_model(inputs)
loss = loss_func(outputs)
accelerator.backward(loss)
optimizer.step()
```
To make this an API that can be called no matter the training device (though it may not do anything if you are not in a distributed system!),
🤗 Accelerate replaces `ddp_model.no_sync` with [`~Accelerator.no_sync`], which operates the same way:
```diff
ddp_model, dataloader = accelerator.prepare(model, dataloader)
ddp_model, dataloader, optimizer = accelerator.prepare(model, dataloader, optimizer)
for index, batch in enumerate(dataloader):
inputs, targets = batch
@ -99,13 +111,15 @@ In 🤗 Accelerate to make this an API that can be called no matter the training
outputs = ddp_model(inputs)
loss = loss_func(outputs)
accelerator.backward(loss)
optimizer.step()
optimizer.zero_grad()
```
As you may expect, the [`~Accelerator.accumulate`] function wraps around this conditional check by keeping track of the current batch number, leaving you with the final
gradient accumulation API:
```python
ddp_model, dataloader = accelerator.prepare(model, dataloader)
ddp_model, dataloader, optimizer = accelerator.prepare(model, dataloader, optimizer)
for batch in dataloader:
with accelerator.accumulate(model):
@ -114,6 +128,42 @@ for batch in dataloader:
outputs = model(inputs)
loss = loss_function(outputs, targets)
accelerator.backward(loss)
optimizer.step()
optimizer.zero_grad()
```
As a result, you should use either *`accelerator.accumulate`* or *`accelerator.no_sync`* when it comes to API choice.
## Just how much of a slowdown is there, and easy mistakes you can make
To set up a realistic example, consider the following setup:
* Two single-GPU T4 nodes and one node with two GPUs
* Each GPU is a T4, and are hosted on GCP
* The script used is a modification of the [NLP Example](https://github.com/muellerzr/timing_experiments/blob/main/baseline.py) script
* Batch size per GPU is 16, and gradients are accumulated every 4 steps
All scripts are available in [this repository](https://github.com/muellerzr/timing_experiments).
If you are not careful about gradient synchronization and GPU communication, a *large* amount of time can be wasted
when these GPUs communicate with each other during unnecessary periods.
By how much?
Reference:
- Baseline: uses no synchronization practices discussed here
- `no_sync` improperly: `no_sync` only around the `backward` call, not the `forward`
- `no_sync`: using the `no_sync` pattern properly
- `accumulate`: using [`~Accelerator.accumulate`] properly
Below are the average seconds per batch iterating over 29 batches of data for each setup, on both a single node and the dual-node setup:
| | Baseline | `no_sync` improperly | `no_sync` | `accumulate`|
| :---------: | :-------: | :------------------: | :-------: | :---------: |
| Multi-Node | 2±0.01s | 2.13±0.08s | **0.91±0.11s** | **0.91±0.11s** |
| Single Node | 0.50±0.01s | 0.50±0.01s | **0.41±0.015s** | **0.41±0.015s** |
As you can see, if you are not careful about how you set up your gradient synchronization, you can incur more than a 2x slowdown during training!
If you are worried about making sure everything is done properly, we highly recommend utilizing the [`~Accelerator.accumulate`] function and passing in
`gradient_accumulation_steps` or `gradient_accumulation_plugin` to the [`Accelerator`] object so Accelerate can handle this for you.
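A minimal sketch of handing this bookkeeping off to Accelerate:
```python
from accelerate import Accelerator

# Accelerate tracks the accumulation step count and only synchronizes
# gradients on the batches where `optimizer.step()` will actually run.
accelerator = Accelerator(gradient_accumulation_steps=4)
```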

View File

@ -8,6 +8,9 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Comparing performance between different device setups
@ -18,7 +21,7 @@ and expect your results to line up.
But why?
There's three reasons for this that this tutorial will cover:
There are three reasons for this that this tutorial will cover:
1. **Setting the right seeds**
2. **Observed Batch Sizes**
@ -26,10 +29,10 @@ There's three reasons for this that this tutorial will cover:
## Setting the Seed
While this issue has not come up as much, make sure to use [`utils.set_seed`] to fully set the seed in all distributed cases so training will be reproducable:
While this issue has not come up as much, make sure to use [`utils.set_seed`] to fully set the seed in all distributed cases so training will be reproducible:
```python
from accelerate import set_seed
from accelerate.utils import set_seed
set_seed(42)
```
@ -58,7 +61,7 @@ The below table can be used as a quick reference to try out different batch size
<Tip>
In this example there are two GPUs for "Multi-GPU" and a TPU pod with 8 workers
In this example, there are two GPUs for "Multi-GPU" and a TPU pod with 8 workers
</Tip>
@ -89,3 +92,12 @@ learning_rate *= accelerator.num_processes
optimizer = AdamW(params=model.parameters(), lr=learning_rate)
```
You will also find that `accelerate` will step the learning rate based on the number of processes being trained on. This is because
of the observed batch size noted earlier. So in the case of 2 GPUs, the learning rate will be stepped twice as often as on a single GPU
to account for the batch size being twice as large (if no changes to the batch size on the single GPU instance are made).
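As a minimal sketch, assuming a standard PyTorch scheduler such as `StepLR`:
```python
from torch.optim.lr_scheduler import StepLR

# Once prepared, each call to scheduler.step() advances the underlying
# schedule once per process, matching the larger observed batch size.
scheduler = StepLR(optimizer, step_size=1000, gamma=0.1)
model, optimizer, scheduler = accelerator.prepare(model, optimizer, scheduler)
```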
## Gradient Accumulation and Mixed Precision
When using gradient accumulation and mixed precision, due to how gradient averaging works (accumulation) and the precision loss (mixed precision),
some degradation in performance is expected. This will be explicitly seen when comparing the batch-wise loss between different compute
setups. However, the overall loss, metric, and general performance at the end of training should be _roughly_ the same.

View File

@ -8,11 +8,14 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Training on TPUs with 🤗 Accelerate
Training on TPUs can be slightly different than training on multi-gpu, even with 🤗 Accelerate. This guide aims to show you
Training on TPUs can be slightly different from training on multi-gpu, even with 🤗 Accelerate. This guide aims to show you
where you should be careful and why, as well as the best practices in general.
## Training in a Notebook
@ -24,8 +27,8 @@ While on a TPU that last part is not as important, a critical part to understand
When launching from the command-line, you perform **spawning**, where a python process is not currently running and you *spawn* a new process in. Since your Jupyter notebook is already
utilizing a python process, you need to *fork* a new process from it to launch your code.
Where this becomes important is in regards to declaring your model. On forked TPU processes, it is recommended that you instantiate your model *once* and pass this into your
training function. This is different than training on GPUs where you create `n` models that have their gradients synced and back-propagated at certain moments. Instead one
Where this becomes important is in regard to declaring your model. On forked TPU processes, it is recommended that you instantiate your model *once* and pass this into your
training function. This is different than training on GPUs where you create `n` models that have their gradients synced and back-propagated at certain moments. Instead, one
model instance is shared between all the nodes and it is passed back and forth. This is important especially when training on low-resource TPUs such as those provided in Kaggle kernels or
on Google Colaboratory.
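In code, a minimal sketch of this pattern (with `create_model` and `training_function` as placeholders for your own objects):
```python
from accelerate import notebook_launcher

# Instantiate the model *once* in the notebook process; the forked TPU
# processes all share this single instance.
model = create_model()  # placeholder for your own model constructor
notebook_launcher(training_function, args=(model,), num_processes=8)
```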
@ -134,7 +137,7 @@ At the base level, this is enabled when passing `mixed_precision="bf16"` to `Acc
```python
accelerator = Accelerator(mixed_precision="bf16")
```
By default this will cast `torch.float` and `torch.double` to `bfloat16` on TPUs.
By default, this will cast `torch.float` and `torch.double` to `bfloat16` on TPUs.
The specific configuration being set is the `XLA_USE_BF16` environment variable, which is set to `1`.
There is a further configuration you can perform which is setting the `XLA_DOWNCAST_BF16` environmental variable. If set to `1`, then
@ -161,4 +164,4 @@ new batch size after the first few iterations.
Just because the memory is allocated does not mean it will be used or that the batch size will increase when going back to your training dataloader.
</Tip>

View File

@ -8,6 +8,9 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Accelerate
@ -55,7 +58,7 @@ accelerate launch {my_script.py}
><div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Tutorials</div>
<p class="text-gray-700">Learn the basics and become familiar with using 🤗 Accelerate. Start here if you are using 🤗 Accelerate for the first time!</p>
</a>
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./usage_guides/gradient_accumulation"
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./usage_guides/explore"
><div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">How-to guides</div>
<p class="text-gray-700">Practical guides to help you achieve a specific goal. Take a look at these guides to learn how to use 🤗 Accelerate to solve real-world problems.</p>
</a>

View File

@ -8,12 +8,15 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Accelerator
The [`Accelerator`] is the main class provided by 🤗 Accelerate.
It serves as the main entrypoint for the API.
It serves as the main entry point for the API.
## Quick adaptation of your code
@ -45,7 +48,7 @@ you should search for and replace by the corresponding methods of your `accelera
### Printing
`print` statements should be replaced by [`~Accelerator.print`] to be printed once per process
`print` statements should be replaced by [`~Accelerator.print`] to be printed once per process:
```diff
- print("My thing I want to print!")
@ -113,25 +116,55 @@ def do_my_thing():
### Synchronicity control
Use [`~Accelerator.wait_for_everyone`] to make sure all processes join that point before continuing. (Useful before a model save for instance)
Use [`~Accelerator.wait_for_everyone`] to make sure all processes join that point before continuing. (Useful before a model save for instance).
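A minimal sketch of the model-save use case:
```python
# Block all processes here so none of them saves while others are still training.
accelerator.wait_for_everyone()
unwrapped_model = accelerator.unwrap_model(model)
```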
### Saving and loading
Use [`~Accelerator.unwrap_model`] before saving to remove all special model wrappers added during the distributed process.
```python
model = MyModel()
model = accelerator.prepare(model)
# Unwrap
model = accelerator.unwrap_model(model)
```
Use [`~Accelerator.save`] instead of `torch.save`:
Use [`~Accelerator.save_model`] instead of `torch.save` to save a model. It will remove all model wrappers added during the distributed process, get the state_dict of the model and save it.
```diff
state_dict = model.state_dict()
- torch.save(state_dict, "my_state.pkl")
+ accelerator.save(state_dict, "my_state.pkl")
+ accelerator.save_model(model, save_directory)
```
[`~Accelerator.save_model`] can also save a model into sharded checkpoints or in the safetensors format.
Here is an example:
```python
accelerator.save_model(model, save_directory, max_shard_size="1GB", safe_serialization=True)
```
#### 🤗 Transformers models
If you are using models from the [🤗 Transformers](https://huggingface.co/docs/transformers/) library, you can use the `.save_pretrained()` method.
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("bert-base-cased")
model = accelerator.prepare(model)
# ...fine-tune with PyTorch...
unwrapped_model = accelerator.unwrap_model(model)
unwrapped_model.save_pretrained(
"path/to/my_model_directory",
is_main_process=accelerator.is_main_process,
save_function=accelerator.save,
)
```
This will ensure your model stays compatible with other 🤗 Transformers functionality like the `.from_pretrained()` method.
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("path/to/my_model_directory")
```
### Operations
@ -157,7 +190,22 @@ multi-device training, check if the step should actually be performed, and auto-
scheduler.step()
optimizer.zero_grad()
```
#### GradientAccumulationPlugin
[[autodoc]] utils.GradientAccumulationPlugin
Instead of passing `gradient_accumulation_steps`, you can instantiate a `GradientAccumulationPlugin` and pass it to the [`Accelerator`]'s `__init__`
as `gradient_accumulation_plugin`. You can only pass one of `gradient_accumulation_plugin` or `gradient_accumulation_steps`; passing both will raise an error.
```diff
from accelerate.utils import GradientAccumulationPlugin
gradient_accumulation_plugin = GradientAccumulationPlugin(num_steps=2)
- accelerator = Accelerator()
+ accelerator = Accelerator(gradient_accumulation_plugin=gradient_accumulation_plugin)
```
In addition to the number of steps, this also lets you configure whether or not you adjust your learning rate scheduler to account for the change in steps due to accumulation.
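A minimal sketch, assuming the scheduler-adjustment flag is named `adjust_scheduler`:
```python
from accelerate import Accelerator
from accelerate.utils import GradientAccumulationPlugin

# Accumulate over 2 steps; the flag below (name assumed) controls whether the
# LR scheduler accounts for the reduced number of optimizer steps.
gradient_accumulation_plugin = GradientAccumulationPlugin(num_steps=2, adjust_scheduler=True)
accelerator = Accelerator(gradient_accumulation_plugin=gradient_accumulation_plugin)
```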
## Overall API documentation:
[[autodoc]] Accelerator

View File

@ -8,6 +8,9 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Working with large models

View File

@ -8,6 +8,9 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# The Command Line
@ -83,7 +86,7 @@ accelerate config update [arguments]
**Command**:
`accelerate env` or `accelerate-env`
`accelerate env` or `accelerate-env` or `python -m accelerate.commands.env`
Lists the contents of the passed 🤗 Accelerate configuration file. Should always be used when opening an issue on the [GitHub repository](https://github.com/huggingface/accelerate).
@ -103,7 +106,7 @@ accelerate env [arguments]
**Command**:
`accelerate launch` or `accelerate-launch`
`accelerate launch` or `accelerate-launch` or `python -m accelerate.commands.launch`
Launches a specified script on a distributed system with the right parameters.
@ -125,6 +128,8 @@ accelerate launch [arguments] {training_script} --{training_script-argument-1} -
* `-m`, `--module` (`bool`) -- Change each process to interpret the launch script as a Python module, executing with the same behavior as 'python -m'.
* `--no_python` (`bool`) -- Skip prepending the training script with 'python' - just execute it directly. Useful when the script is not a Python script.
* `--debug` (`bool`) -- Whether to print out the torch.distributed stack trace when something fails.
* `-q`, `--quiet` (`bool`) -- Silence subprocess errors from the launch stack trace to only show the relevant tracebacks. (Only applicable to DeepSpeed and single-process configurations).
The rest of these arguments are configured through `accelerate config` and are read in from the specified `--config_file` (or default configuration) for their
values. They can also be passed in manually.
@ -133,8 +138,8 @@ values. They can also be passed in manually.
* `--cpu` (`bool`) -- Whether or not to force the training on the CPU.
* `--multi_gpu` (`bool`) -- Whether or not this should launch a distributed GPU training.
* `--mps` (`bool`) -- Whether or not this should use MPS-enabled GPU device on MacOS machines.
* `--tpu` (`bool`) -- Whether or not this should launch a TPU training.
* `--ipex` (`bool`) -- Whether or not this should launch an Intel Pytorch Extension (IPEX) training.
**Resource Selection Arguments**:
@ -152,6 +157,7 @@ The following arguments are useful for selecting which training paradigm to use.
* `--use_deepspeed` (`bool`) -- Whether or not to use DeepSpeed for training.
* `--use_fsdp` (`bool`) -- Whether or not to use FullyShardedDataParallel for training.
* `--use_megatron_lm` (`bool`) -- Whether or not to use Megatron-LM for training.
* `--use_xpu` (`bool`) -- Whether to use IPEX plugin to speed up training on XPU specifically.
**Distributed GPU Arguments**:
@ -162,6 +168,7 @@ The following arguments are only useful when `multi_gpu` is passed or multi-gpu
* `--machine_rank MACHINE_RANK` (`int`) -- The rank of the machine on which this script is launched.
* `--main_process_ip MAIN_PROCESS_IP` (`str`) -- The IP address of the machine of rank 0.
* `--main_process_port MAIN_PROCESS_PORT` (`int`) -- The port to use to communicate with the machine of rank 0.
* `--rdzv_backend` (`str`) -- The rendezvous method to use, such as "static" or "c10d"
* `--rdzv_conf` (`str`) -- Additional rendezvous configuration (<key1>=<value1>,<key2>=<value2>,...).
* `--max_restarts` (`int`) -- Maximum number of worker group restarts before failing.
* `--monitor_interval` (`float`) -- Interval, in seconds, to monitor the state of workers.

View File

@ -8,6 +8,9 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Utilities for DeepSpeed

View File

@ -8,6 +8,9 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Kwargs Handlers

View File

@ -8,6 +8,9 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Launchers

View File

@ -8,6 +8,9 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Logging with Accelerate

View File

@ -8,6 +8,9 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Utilities for Megatron-LM

View File

@ -8,6 +8,9 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Stateful Classes
@ -18,6 +21,8 @@ instances share the same state, which is initialized on the first instantiation.
These classes are immutable and store information about certain configurations or
states.
[[autodoc]] state.PartialState
[[autodoc]] state.AcceleratorState
[[autodoc]] state.GradientState
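A minimal sketch of the shared-state behavior described above:
```python
from accelerate import PartialState

# Both variables observe the same underlying state, which is initialized on
# the first instantiation.
state_a = PartialState()
state_b = PartialState()
assert state_a.device == state_b.device
assert state_a.process_index == state_b.process_index
```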

View File

@ -8,6 +8,9 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Wrapper classes for torch Dataloaders, Optimizers, and Schedulers

View File

@ -8,6 +8,9 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Experiment Tracking
@ -24,3 +27,7 @@ specific language governing permissions and limitations under the License.
- __init__
[[autodoc]] tracking.CometMLTracker
- __init__
[[autodoc]] tracking.AimTracker
- __init__
[[autodoc]] tracking.MLflowTracker
- __init__

View File

@ -8,6 +8,9 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Helpful Utilities
@ -24,6 +27,8 @@ These are basic dataclasses used throughout 🤗 Accelerate and they can be pass
[[autodoc]] utils.PrecisionType
[[autodoc]] utils.ProjectConfiguration
## Data Manipulation and Operations
These include data operations that mimic the same `torch` ops but can be used on distributed processes.
@ -93,3 +98,24 @@ These utilities relate to setting and synchronizing of all the random states.
[[autodoc]] utils.synchronize_rng_state
[[autodoc]] utils.synchronize_rng_states
## PyTorch XLA
These include utilities that are useful while using PyTorch with XLA.
[[autodoc]] utils.install_xla
## Loading model weights
These include utilities that are useful to load checkpoints.
[[autodoc]] utils.load_checkpoint_in_model
## Quantization
These include utilities that are useful to quantize a model.
[[autodoc]] utils.load_and_quantize_model
[[autodoc]] utils.BnbQuantizationConfig
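A minimal sketch of 8-bit quantization with these utilities, assuming an already-instantiated `model` and a checkpoint at `weights_location`:
```python
from accelerate.utils import BnbQuantizationConfig, load_and_quantize_model

# Quantize the linear layers to 8-bit with bitsandbytes and dispatch the
# weights automatically across the available devices.
bnb_quantization_config = BnbQuantizationConfig(load_in_8bit=True)
quantized_model = load_and_quantize_model(
    model,
    bnb_quantization_config=bnb_quantization_config,
    weights_location=weights_location,
    device_map="auto",
)
```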

View File

@ -8,6 +8,9 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Quick tour
@ -67,9 +70,9 @@ use `shuffle=True` or any kind of random sampler).
</Tip>
Alternatively, you can use the option `split_batches=True` when creating initializing your
[`Accelerator`], in which case the batch size will always stay the same, whether your run your
script on 1, 2, 4 or 64 GPUs.
Alternatively, you can use the option `split_batches=True` when creating and initializing your
[`Accelerator`], in which case the batch size will always stay the same, whether you run your
script on 1, 2, 4, or 64 GPUs.
You should execute this instruction as soon as all objects for training are created, before starting your actual
training loop.
@ -164,9 +167,8 @@ should be calculated through the [`~Accelerator.gather_for_metrics`] method to a
## Launching your distributed script
You can use the regular commands to launch your distributed training (like `torch.distributed.launch` for
PyTorch), they are fully compatible with 🤗 Accelerate. The only caveat here is that 🤗 Accelerate uses the environment
to determine all useful information, so `torch.distributed.launch` should be used with the flag `--use_env`.
You can use the regular commands to launch your distributed training (like `torch.distributed.run` for
PyTorch), they are fully compatible with 🤗 Accelerate.
🤗 Accelerate also provides a CLI tool that unifies all launchers, so you only have to remember one command. To use it,
just run:
@ -206,7 +208,7 @@ Now that this is done, you can run your script with the following command:
accelerate launch path_to_script.py --args_for_the_script
```
If you stored the config file in a non-default location, you can indicate it to the launcher like his:
If you stored the config file in a non-default location, you can indicate it to the launcher like this:
```bash
accelerate launch --config_file path_to_config.yaml path_to_script.py --args_for_the_script
@ -346,31 +348,48 @@ point in the script as shown above, and then, you should unwrap your model befor
through the [`~Accelerator.prepare`] method, your model may have been placed inside a bigger model,
which deals with the distributed training. This in turn means that saving your model state dictionary without taking
any precaution will take that potential extra layer into account, and you will end up with weights you can't load back
in your base model.
This is why it's recommended to *unwrap* your model first. Here is an example:
in your base model. The [`~Accelerator.save_model`] method will help you to achieve that. It will unwrap your model and save
the model state dictionary.
Here is an example:
```
accelerator.wait_for_everyone()
unwrapped_model = accelerator.unwrap_model(model)
accelerator.save(unwrapped_model.state_dict(), filename)
accelerator.save_model(model, save_directory)
```
The [`~Accelerator.save_model`] method can also save a model into sharded checkpoints or in the safetensors format.
Here is an example:
```python
accelerator.wait_for_everyone()
accelerator.save_model(model, save_directory, max_shard_size="1GB", safe_serialization=True)
```
If your script contains logic to load a checkpoint, we also recommend you load your weights in the unwrapped model
(this is only useful if you use the load function after making your model go through
[`~Accelerator.prepare`]). Here is an example:
```
```python
unwrapped_model = accelerator.unwrap_model(model)
unwrapped_model.load_state_dict(torch.load(filename))
path_to_checkpoint = os.path.join(save_directory, "pytorch_model.bin")
unwrapped_model.load_state_dict(torch.load(path_to_checkpoint))
```
Note that since all the model parameters are references to tensors, this will load your weights inside `model`.
If you want to load a sharded checkpoint or a checkpoint with safetensors format into the model with a specific `device`, we recommend you to load it with [`~utils.load_checkpoint_in_model`] function. Here's an example:
```python
load_checkpoint_in_model(unwrapped_model, save_directory, device_map={"": device})
```
## Saving/loading entire states
When training your model, you may want to save the current state of the model, optimizer, random generators, and potentially LR schedulers to be restored in the _same script_.
You can use [`~Accelerator.save_state`] and [`~Accelerator.load_state`] respectively to do so, just by simply passing in a save location.
You can use [`~Accelerator.save_state`] and [`~Accelerator.load_state`] respectively to do so.
To further customize where and how states are saved through [`~Accelerator.save_state`], the [`~utils.ProjectConfiguration`] class can be used. For example,
if `automatic_checkpoint_naming` is enabled, each saved checkpoint will be located at `Accelerator.project_dir/checkpoints/checkpoint_{checkpoint_number}`.
If you have registered any other stateful items to be stored through [`~Accelerator.register_for_checkpointing`] they will also be saved and/or loaded.
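A minimal sketch of customized checkpointing:
```python
from accelerate import Accelerator
from accelerate.utils import ProjectConfiguration

# With automatic naming, checkpoints land under
# my_project/checkpoints/checkpoint_0, checkpoint_1, ...
config = ProjectConfiguration(project_dir="my_project", automatic_checkpoint_naming=True)
accelerator = Accelerator(project_config=config)

accelerator.save_state()
accelerator.load_state("my_project/checkpoints/checkpoint_0")
```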
<Tip>

View File

@ -8,11 +8,14 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Handling big models
# Handling big models for inference
When loading a pretrained model in PyTorch, the usual workflow looks like this:
When loading a pre-trained model in PyTorch, the usual workflow looks like this:
```py
import torch
@ -27,7 +30,7 @@ In plain English, those steps are:
2. Load the model weights (in a dictionary usually called a state dict) from the disk
3. Load those weights inside the model
While this works very well for regularly sized models, this workflow has some clear limitations when we deal with a huge model: in step 1, we load a full version of the model in RAM, and spend some time randomly initializing the weights (which will be discarded in step 3). In step 2, we load another full version of the model in RAM, with the pretrained weights. If you're loading a model with 6 billions parameters, this means you will need 24GB of RAM for each copy of the model, so 48GB in total (half of it to load the model in FP16).
While this works very well for regularly sized models, this workflow has some clear limitations when we deal with a huge model: in step 1, we load a full version of the model in RAM, and spend some time randomly initializing the weights (which will be discarded in step 3). In step 2, we load another full version of the model in RAM, with the pre-trained weights. If you're loading a model with 6 billion parameters, this means you will need 24GB of RAM for each copy of the model, so 48GB in total (half of it to load the model in FP16).
<Tip warning={true}>
@ -43,7 +46,7 @@ While this works very well for regularly sized models, this workflow has some cl
### Instantiating an empty model
The first tool 🤗 Accelerate introduces to help with big models is a context manager [`init_empty_weights`] that helps you initialize a model without using any RAM, so that step 1 can be done on models of any size. Here is how it works:
The first tool 🤗 Accelerate introduces to help with big models is a context manager [`init_empty_weights`] that helps you initialize a model without using any RAM so that step 1 can be done on models of any size. Here is how it works:
```py
from accelerate import init_empty_weights
@ -59,7 +62,7 @@ with init_empty_weights():
model = nn.Sequential(*[nn.Linear(10000, 10000) for _ in range(1000)])
```
initializes an empty model with a bit more than 100B parameters. Behind the scenes, this relies on the meta device introduced in PyTorch 1.9. During the initialization under the context manager, each time a parameter is created, it is instantly moved on that device.
initializes an empty model with a bit more than 100B parameters. Behind the scenes, this relies on the meta device introduced in PyTorch 1.9. During the initialization under the context manager, each time a parameter is created, it is instantly moved to that device.
<Tip warning={true}>
@ -69,9 +72,9 @@ initializes an empty model with a bit more than 100B parameters. Behind the scen
### Sharded checkpoints
It's possible your model is so big that even a single copy won't fit in RAM. That doesn't mean it can't be loaded: if you have one or several GPUs, this is more memory available to store your model. In this case, it's better if your checkpoint is split in several smaller files that we call checkpoint shards.
It's possible your model is so big that even a single copy won't fit in RAM. That doesn't mean it can't be loaded: if you have one or several GPUs, this is more memory available to store your model. In this case, it's better if your checkpoint is split into several smaller files that we call checkpoint shards.
🤗 Accelerate will handle sharded checkpoints as long as you follow the following format: your checkpoint should be in a folder, with several files containing the partial state dicts, and there should be an index in the JSON format that contains a dictionary mapping parameter names to the file containing their weights. For instance we could have a folder containing:
🤗 Accelerate will handle sharded checkpoints as long as you follow the following format: your checkpoint should be in a folder, with several files containing the partial state dicts, and there should be an index in the JSON format that contains a dictionary mapping parameter names to the file containing their weights. You can easily shard your model with [`~Accelerator.save_model`]. For instance, we could have a folder containing:
```bash
first_state_dict.bin
@ -96,44 +99,58 @@ and `first_state_dict.bin` containing the weights for `"linear1.weight"` and `"l
The second tool 🤗 Accelerate introduces is a function [`load_checkpoint_and_dispatch`], that will allow you to load a checkpoint inside your empty model. This supports full checkpoints (a single file containing the whole state dict) as well as sharded checkpoints. It will also automatically dispatch those weights across the devices you have available (GPUs, CPU RAM), so if you are loading a sharded checkpoint, the maximum RAM usage will be the size of the biggest shard.
Here is how we can use this to load the [GPT-J-6B](https://huggingface.co/EleutherAI/gpt-j-6B) model. You clone the sharded version of this model with:
If you want to use big model inference with 🤗 Transformers models, check out this [documentation](https://huggingface.co/docs/transformers/main/en/main_classes/model#large-model-loading).
Here is how we can use this to load the [GPT2-1.5B](https://huggingface.co/marcsun13/gpt2-xl-linear-sharded) model.
Let's download the sharded version of this model.
```bash
git clone https://huggingface.co/sgugger/sharded-gpt-j-6B
cd sharded-gpt-j-6B
git-lfs install
git pull
pip install huggingface_hub
```
then we can initialize the model with
```py
from huggingface_hub import snapshot_download
checkpoint = "marcsun13/gpt2-xl-linear-sharded"
weights_location = snapshot_download(repo_id=checkpoint)
```
In order to initialize the model, we will use the minGPT library.
```bash
git clone https://github.com/karpathy/minGPT.git
pip install minGPT/
```
```py
from accelerate import init_empty_weights
from transformers import AutoConfig, AutoModelForCausalLM
from mingpt.model import GPT
checkpoint = "EleutherAI/gpt-j-6B"
config = AutoConfig.from_pretrained(checkpoint)
model_config = GPT.get_default_config()
model_config.model_type = 'gpt2-xl'
model_config.vocab_size = 50257
model_config.block_size = 1024
with init_empty_weights():
model = AutoModelForCausalLM.from_config(config)
model = GPT(model_config)
```
and load the checkpoint we just downloaded with:
Then, load the checkpoint we just downloaded with:
```py
from accelerate import load_checkpoint_and_dispatch
model = load_checkpoint_and_dispatch(
model, "sharded-gpt-j-6B", device_map="auto", no_split_module_classes=["GPTJBlock"]
model, checkpoint=weights_location, device_map="auto", no_split_module_classes=['Block']
)
```
By passing `device_map="auto"`, we tell 🤗 Accelerate to determine automatically where to put each layer of the model depending on the available resources:
- first we use the maximum space available on the GPU(s)
- first, we use the maximum space available on the GPU(s)
- if we still need space, we store the remaining weights on the CPU
- if there is not enough RAM, we store the remaining weights on the hard drive as memory-mapped tensors
`no_split_module_classes=["GPTJBlock"]` indicates that the modules that are `GPTJBlock` should not be split on different devices. You should set here all blocks that include a residual connection of some kind.
`no_split_module_classes=["Block"]` indicates that the modules that are `Block` should not be split on different devices. You should set here all blocks that include a residual connection of some kind.
You can see the `device_map` that 🤗 Accelerate picked by accessing the `hf_device_map` attribute of your model:
@ -143,6 +160,7 @@ model.hf_device_map
```python out
{'transformer.wte': 0,
'transformer.wpe': 0,
'transformer.drop': 0,
'transformer.h.0': 0,
'transformer.h.1': 0,
@ -160,26 +178,46 @@ model.hf_device_map
'transformer.h.13': 0,
'transformer.h.14': 0,
'transformer.h.15': 0,
'transformer.h.16': 0,
'transformer.h.17': 0,
'transformer.h.18': 0,
'transformer.h.19': 0,
'transformer.h.20': 0,
'transformer.h.21': 0,
'transformer.h.22': 0,
'transformer.h.23': 0,
'transformer.h.24': 1,
'transformer.h.25': 1,
'transformer.h.26': 1,
'transformer.h.27': 1,
'transformer.ln_f': 1,
'transformer.h.16': 0,
'transformer.h.17': 0,
'transformer.h.18': 0,
'transformer.h.19': 0,
'transformer.h.20': 0,
'transformer.h.21': 0,
'transformer.h.22': 1,
'transformer.h.23': 1,
'transformer.h.24': 1,
'transformer.h.25': 1,
'transformer.h.26': 1,
'transformer.h.27': 1,
'transformer.h.28': 1,
'transformer.h.29': 1,
'transformer.h.30': 1,
'transformer.h.31': 1,
'transformer.h.32': 1,
'transformer.h.33': 1,
'transformer.h.34': 1,
'transformer.h.35': 1,
'transformer.h.36': 1,
'transformer.h.37': 1,
'transformer.h.38': 1,
'transformer.h.39': 1,
'transformer.h.40': 1,
'transformer.h.41': 1,
'transformer.h.42': 1,
'transformer.h.43': 1,
'transformer.h.44': 1,
'transformer.h.45': 1,
'transformer.h.46': 1,
'transformer.h.47': 1,
'transformer.ln_f': 1,
'lm_head': 1}
```
You can also design your `device_map` yourself, if you prefer to explicitly decide where each layer should be. In this case, the command above becomes:
You can also design your `device_map` yourself if you prefer to explicitly decide where each layer should be. In this case, the command above becomes:
```py
model = load_checkpoint_and_dispatch(model, "sharded-gpt-j-6B", device_map=my_device_map)
model = load_checkpoint_and_dispatch(model, checkpoint=weights_location, device_map=my_device_map)
```
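For instance, a hand-written map for the minGPT model above might look like the following sketch (a hypothetical split that mirrors the `hf_device_map` layout shown earlier):
```python
# A hypothetical hand-written device map: embeddings and the first 22 blocks on
# GPU 0, the remaining blocks and the head on GPU 1 (keys are module names).
my_device_map = {
    "transformer.wte": 0,
    "transformer.wpe": 0,
    "transformer.drop": 0,
    **{f"transformer.h.{i}": 0 if i < 22 else 1 for i in range(48)},
    "transformer.ln_f": 1,
    "lm_head": 1,
}
```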
### Run the model
Now that we have done this, our model lies across several devices, and maybe the hard drive. But it can still be used as a regular PyTorch model:
```py
from mingpt.bpe import BPETokenizer

tokenizer = BPETokenizer()
inputs = tokenizer("Hello, my name is").to(0)

outputs = model.generate(inputs, max_new_tokens=10, do_sample=False)[0]
tokenizer.decode(outputs.cpu().squeeze())
```
Behind the scenes, 🤗 Accelerate added hooks to the model, so that:
- at each layer, the inputs are put on the right device (so even if your model is spread across several GPUs, it works)
- for the weights offloaded on the CPU, they are put on a GPU just before the forward pass and cleaned up just after
- for the weights offloaded on the hard drive, they are loaded in RAM then put on a GPU just before the forward pass and cleaned up just after
This way, your model can run for inference even if it doesn't fit on one of the GPUs or the CPU RAM!
<Tip warning={true}>
This only supports the inference of your model, not training. Most of the computation happens behind `torch.no_grad()` context managers to avoid spending some GPU memory with intermediate activations.
</Tip>
### Designing a device map
You can let 🤗 Accelerate handle the device map computation by setting `device_map` to one of the supported options (`"auto"`, `"balanced"`, `"balanced_low_0"`, `"sequential"`) or create one yourself if you want more control over where each layer should go.
All the options will produce the same result when you don't have enough GPU memory to accommodate the whole model (which is to fit everything it can on the GPU, then offload weights to the CPU, or even to the disk, if there is not enough RAM).
When you have more GPU memory available than the model size, here is the difference between each option:
- `"auto"` and `"balanced"` evenly split the model on all available GPUs, making it possible for you to use a batch size greater than 1.
- `"balanced_low_0"` evenly splits the model on all GPUs except the first one, and only puts on GPU 0 what does not fit on the others. This option is great when you need to use GPU 0 for some processing of the outputs, like when using the `generate` function for Transformers models
- `"sequential"` will fit what it can on GPU 0, then move on GPU 1 and so forth (so won't use the last GPUs if it doesn't need to).
First note that you can limit the memory used on each GPU by using the `max_memory` argument (available in [`infer_auto_device_map`] and in all functions using it). When setting `max_memory`, you should pass along a dictionary containing the GPU identifiers (for instance `0`, `1` etc.) and the `"cpu"` key for the maximum RAM you want to use for CPU offload. The values can either be an integer (in bytes) or a string representing a number with its unit, such as `"10GiB"` or `"10GB"`.
Here is an example where we don't want to use more than 10GiB on each of the two GPUs and no more than 30GiB of CPU RAM for the model weights:
```python
from accelerate import infer_auto_device_map

device_map = infer_auto_device_map(my_model, max_memory={0: "10GiB", 1: "10GiB", "cpu": "30GiB"})
```

<Tip>
When a first allocation happens in PyTorch, it loads CUDA kernels which take about 1-2GB of memory depending on the GPU. Therefore, you always have less usable memory than the actual size of the GPU. To see how much memory is actually used, do `torch.ones(1).cuda()` and look at the memory usage.

Therefore, when you create memory maps with `max_memory`, make sure to adjust the available memory accordingly to avoid out-of-memory errors.
</Tip>
Additionally, if you do some additional operations with your outputs without placing them back on the CPU (for instance inside the `generate` method of Transformers) and if you placed your inputs on a GPU, that GPU will consume more memory than the others (Accelerate always places the output back on the device of the input). Therefore, if you would like to optimize the maximum batch size and you have many GPUs, give the first GPU less memory. For example, with BLOOM-176B on an 8x80 A100 setup, the close-to-ideal map is:
```python
max_memory = {0: "30GIB", 1: "46GIB", 2: "46GIB", 3: "46GIB", 4: "46GIB", 5: "46GIB", 6: "46GIB", 7: "46GIB"}
```
As you can see, we gave the remaining 7 GPUs ~50% more memory than GPU 0.
If you opt to fully design the `device_map` yourself, it should be a dictionary with keys being module names of your model and values being a valid device identifier (for instance an integer for the GPUs), `"cpu"` for CPU offload, or `"disk"` for disk offload. The keys need to cover the whole model; you can then define your device map as you wish: for instance, if your model has two blocks (let's say `block1` and `block2`) which each contain three linear layers (let's say `linear1`, `linear2` and `linear3`), a valid device map can be:
```python
device_map = {"block1": 0, "block2": 1}
```
We are aware of the current limitations in the API:
- While this could theoretically work on just one CPU with potential disk offload, you need at least one GPU to run this API. This will be fixed in further development.
- [`infer_auto_device_map`] (or `device_map="auto"` in [`load_checkpoint_and_dispatch`]) tries to maximize GPU and CPU RAM it sees available when you execute it. While PyTorch is very good at managing GPU RAM efficiently (and giving it back when not needed), it's not entirely true with Python and CPU RAM. Therefore, an automatically computed device map might be too intense on the CPU. Move a few modules to the disk device if you get crashes due to a lack of RAM.
- [`infer_auto_device_map`] (or `device_map="auto"` in [`load_checkpoint_and_dispatch`]) assigns devices sequentially (to avoid moving things back and forth), so if your first layer is bigger than the size of the GPU you have, it will end up with everything on the CPU/Disk.
- [`load_checkpoint_and_dispatch`] and [`load_checkpoint_in_model`] do not perform any check on the correctness of your state dict compared to your model at the moment (this will be fixed in a future version), so you may get some weird errors if trying to load a checkpoint with mismatched or missing keys.
- The model parallelism used when your model is split on several GPUs is naive and not optimized, meaning that only one GPU works at a given time while the others sit idle.
- When weights are offloaded on the CPU/hard drive, there is no pre-fetching (yet, we will work on this for future versions) which means the weights are put on the GPU when they are needed and not before.
- Hard-drive offloading might be very slow if the hardware you run on does not have fast communication between disk and CPU (like NVMes).


http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Checkpointing
saving and loading the model, optimizer, RNG generators, and the GradScaler. Inside 🤗 Accelerate are two convenience functions to achieve this quickly:
- Use [`~Accelerator.save_state`] for saving everything mentioned above to a folder location
- Use [`~Accelerator.load_state`] for loading everything stored from an earlier `save_state`
To further customize where and how states are saved through [`~Accelerator.save_state`], the [`~utils.ProjectConfiguration`] class can be used. For example, if `automatic_checkpoint_naming` is enabled, each saved checkpoint will be located at `Accelerator.project_dir/checkpoints/checkpoint_{checkpoint_number}`.
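As a minimal sketch (assuming you are fine with the defaults otherwise), enabling this could look like:
```python
from accelerate import Accelerator
from accelerate.utils import ProjectConfiguration

# Checkpoints will be written to my/save/path/checkpoints/checkpoint_{i}
# on each call to `accelerator.save_state()`.
config = ProjectConfiguration(project_dir="my/save/path", automatic_checkpoint_naming=True)
accelerator = Accelerator(project_config=config)
```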
Note that the expectation is that those states come from the same training script; they should not be from two separate scripts.
- By using [`~Accelerator.register_for_checkpointing`], you can register custom objects to be automatically stored or loaded from the two prior functions,
so long as the object has a `state_dict` **and** a `load_state_dict` functionality. This could include objects such as a learning rate scheduler.
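For instance, here is a minimal sketch of a custom object that qualifies (a hypothetical step counter, not part of the library):
```python
# Any object exposing `state_dict` and `load_state_dict` can be registered,
# after which `save_state`/`load_state` will handle it automatically.
class StepCounter:
    def __init__(self):
        self.steps = 0

    def state_dict(self):
        return {"steps": self.steps}

    def load_state_dict(self, state_dict):
        self.steps = state_dict["steps"]
```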
Below is a brief example using checkpointing to save and reload a state during training:
```python
from accelerate import Accelerator
import torch
accelerator = Accelerator(project_dir="my/save/path")
my_scheduler = torch.optim.lr_scheduler.StepLR(my_optimizer, step_size=1, gamma=0.99)
my_model, my_optimizer, my_training_dataloader = accelerator.prepare(my_model, my_optimizer, my_training_dataloader)
# Register the LR scheduler
accelerator.register_for_checkpointing(my_scheduler)
# Save the starting state
accelerator.save_state()
device = accelerator.device
my_model.to(device)
# Perform training
for epoch in range(num_epochs):
    for batch in my_training_dataloader:
        my_optimizer.zero_grad()
        inputs, targets = batch
        outputs = my_model(inputs)
        loss = my_loss_function(outputs, targets)
        accelerator.backward(loss)
        my_optimizer.step()
    my_scheduler.step()

# Restore the previous state
accelerator.load_state("my/save/path/checkpointing/checkpoint_0")
```
## Restoring the state of the DataLoader
After resuming from a checkpoint, it may also be desirable to resume from a particular point in the active `DataLoader` if
the state was saved during the middle of an epoch. You can use [`~Accelerator.skip_first_batches`] to do so.
```python
from accelerate import Accelerator
accelerator = Accelerator(project_dir="my/save/path")
train_dataloader = accelerator.prepare(train_dataloader)
accelerator.load_state("my_state")
# Assume the checkpoint was saved 100 steps into the epoch
skipped_dataloader = accelerator.skip_first_batches(train_dataloader, 100)
# After the first iteration, go back to `train_dataloader`
# First epoch
for batch in skipped_dataloader:
# Do something
pass
# Second epoch
for batch in train_dataloader:
# Do something
pass
```


http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# DeepSpeed
[DeepSpeed](https://github.com/microsoft/DeepSpeed) implements everything described in the [ZeRO paper](https://arxiv.org/abs/1910.02054). Currently, it provides full support for:
1. Optimizer state partitioning (ZeRO stage 1)
2. Gradient partitioning (ZeRO stage 2)
DeepSpeed ZeRO-2 is primarily used only for training, as its features are of no use to inference.
DeepSpeed ZeRO-3 can be used for inference as well since it allows huge models to be loaded on multiple GPUs, which
won't be possible on a single GPU.
🤗 Accelerate integrates [DeepSpeed](https://github.com/microsoft/DeepSpeed) via 2 options:
1. Integration of the DeepSpeed features via a `deepspeed config file` specification in `accelerate config`. You just supply your custom config file or use our template. Most of
this document is focused on this feature. This supports all the core features of DeepSpeed and gives the user a lot of flexibility.
The user may have to change a few lines of code depending on the config.
2. Integration via `deepspeed_plugin`. This supports a subset of the DeepSpeed features and uses default options for the rest of the configurations.
The user need not change any code, and this is good for those who are fine with most of the default settings of DeepSpeed.
e. **Param Offload**: Offloads the model parameters to CPU/Disk, building on top of ZeRO Stage 3

<u>Note</u>: With respect to Disk Offload, the disk should be an NVMe for decent speed, but it technically works on any disk
Inference:
See the [DeepSpeed Optimizers](https://deepspeed.readthedocs.io/en/latest/optimizers.html) and [DeepSpeed Schedulers](https://deepspeed.readthedocs.io/en/latest/schedulers.html) documentation.
We will look at the changes needed in the code when using these.
a. DS Optim + DS Scheduler: The case when both `optimizer` and `scheduler` keys are present in the DeepSpeed config file.
In this situation, those will be used and the user has to use `accelerate.utils.DummyOptim` and `accelerate.utils.DummyScheduler` to replace the PyTorch/Custom optimizers and schedulers in their code.
Below is the snippet from `examples/by_feature/deepspeed_with_config_support.py` showing this:
```python
# Creates Dummy Optimizer if `optimizer` was specified in the config file else creates Adam Optimizer
optimizer_cls = (
    torch.optim.AdamW
    if accelerator.state.deepspeed_plugin is None
    or "optimizer" not in accelerator.state.deepspeed_plugin.deepspeed_config
    else DummyOptim
)
optimizer = optimizer_cls(optimizer_grouped_parameters, lr=args.learning_rate)
```
In the above example, we can see that the code remains unchanged if the `optimizer` and `scheduler` keys are absent in the DeepSpeed config file.

c. Custom Optim + DS Scheduler: The case when only the `scheduler` key is present in the DeepSpeed config file.
In this situation, the user has to use `accelerate.utils.DummyScheduler` to replace the PyTorch/Custom scheduler in their code.
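A minimal sketch of this replacement (assuming an existing `optimizer` and the usual `args` from the example script):
```python
from accelerate.utils import DummyScheduler

# The scheduler defined in the DeepSpeed config file will actually be used;
# the dummy object only carries the parameters Accelerate needs.
lr_scheduler = DummyScheduler(
    optimizer, total_num_steps=args.max_train_steps, warmup_num_steps=args.num_warmup_steps
)
```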
d. DS Optim + Custom Scheduler: The case when only the `optimizer` key is present in the DeepSpeed config file.
This will result in an error because you can only use DS Scheduler when using DS Optim.
based on the model, dataloaders, dummy optimizer and dummy schedulers provided to the `prepare` method.
Only the `auto` fields specified in the above examples are handled by the `prepare` method, and the rest have to be explicitly specified by the user.
**Things to note when using DeepSpeed Config File**
Below is a sample script using `deepspeed_config_file` in different scenarios.
Code `test.py`:
```python
from accelerate import Accelerator
from accelerate.state import AcceleratorState
def main():
accelerator = Accelerator()
accelerator.print(f"{AcceleratorState()}")
if __name__ == "__main__":
main()
```
**Scenario 1**: A manually tampered accelerate config file that has `deepspeed_config_file` along with other entries.
1. Content of the `accelerate` config:
```yaml
command_file: null
commands: null
compute_environment: LOCAL_MACHINE
deepspeed_config:
gradient_accumulation_steps: 1
gradient_clipping: 1.0
offload_optimizer_device: 'cpu'
offload_param_device: 'cpu'
zero3_init_flag: true
zero3_save_16bit_model: true
zero_stage: 3
deepspeed_config_file: 'ds_config.json'
distributed_type: DEEPSPEED
downcast_bf16: 'no'
dynamo_backend: 'NO'
fsdp_config: {}
gpu_ids: null
machine_rank: 0
main_process_ip: null
main_process_port: null
main_training_function: main
megatron_lm_config: {}
num_machines: 1
num_processes: 2
rdzv_backend: static
same_network: true
tpu_name: null
tpu_zone: null
use_cpu: false
```
2. `ds_config.json`:
```json
{
"bf16": {
"enabled": true
},
"zero_optimization": {
"stage": 3,
"stage3_gather_16bit_weights_on_model_save": false,
"offload_optimizer": {
"device": "none"
},
"offload_param": {
"device": "none"
}
},
"gradient_clipping": 1.0,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"gradient_accumulation_steps": 10,
"steps_per_print": 2000000
}
```
3. Output of `accelerate launch test.py`:
```bash
ValueError: When using `deepspeed_config_file`, the following accelerate config variables will be ignored:
['gradient_accumulation_steps', 'gradient_clipping', 'zero_stage', 'offload_optimizer_device', 'offload_param_device',
'zero3_save_16bit_model', 'mixed_precision'].
Please specify them appropriately in the DeepSpeed config file.
If you are using an accelerate config file, remove others config variables mentioned in the above specified list.
The easiest method is to create a new config following the questionnaire via `accelerate config`.
It will only ask for the necessary config variables when using `deepspeed_config_file`.
```
**Scenario 2**: Use the solution from the error message to create a new accelerate config, and check that no ambiguity error is thrown.
1. Run `accelerate config`:
```bash
$ accelerate config
-------------------------------------------------------------------------------------------------------------------------------
In which compute environment are you running?
This machine
-------------------------------------------------------------------------------------------------------------------------------
Which type of machine are you using?
multi-GPU
How many different machines will you use (use more than 1 for multi-node training)? [1]:
Do you wish to optimize your script with torch dynamo?[yes/NO]:
Do you want to use DeepSpeed? [yes/NO]: yes
Do you want to specify a json file to a DeepSpeed config? [yes/NO]: yes
Please enter the path to the json DeepSpeed config file: ds_config.json
Do you want to enable `deepspeed.zero.Init` when using ZeRO Stage-3 for constructing massive models? [yes/NO]: yes
How many GPU(s) should be used for distributed training? [1]:4
accelerate configuration saved at ds_config_sample.yaml
```
2. Content of the `accelerate` config:
```yaml
compute_environment: LOCAL_MACHINE
deepspeed_config:
deepspeed_config_file: ds_config.json
zero3_init_flag: true
distributed_type: DEEPSPEED
downcast_bf16: 'no'
dynamo_backend: 'NO'
fsdp_config: {}
machine_rank: 0
main_training_function: main
megatron_lm_config: {}
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
use_cpu: false
```
3. Output of `accelerate launch test.py`:
```bash
Distributed environment: DEEPSPEED Backend: nccl
Num processes: 4
Process index: 0
Local process index: 0
Device: cuda:0
Mixed precision type: bf16
ds_config: {'bf16': {'enabled': True}, 'zero_optimization': {'stage': 3, 'stage3_gather_16bit_weights_on_model_save': False, 'offload_optimizer': {'device': 'none'}, 'offload_param': {'device': 'none'}}, 'gradient_clipping': 1.0, 'train_batch_size': 'auto', 'train_micro_batch_size_per_gpu': 'auto', 'gradient_accumulation_steps': 10, 'steps_per_print': inf, 'fp16': {'enabled': False}}
```
**Scenario 3**: Setting the `accelerate launch` command arguments related to DeepSpeed to `"auto"` in the DeepSpeed configuration file and checking that things work as expected.
1. New `ds_config.json` with `"auto"` for the `accelerate launch` DeepSpeed command arguments:
```json
{
"bf16": {
"enabled": "auto"
},
"zero_optimization": {
"stage": "auto",
"stage3_gather_16bit_weights_on_model_save": "auto",
"offload_optimizer": {
"device": "auto"
},
"offload_param": {
"device": "auto"
}
},
"gradient_clipping": "auto",
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"gradient_accumulation_steps": "auto",
"steps_per_print": 2000000
}
```
2. Output of `accelerate launch --mixed_precision="fp16" --zero_stage=3 --gradient_accumulation_steps=5 --gradient_clipping=1.0 --offload_param_device="cpu" --offload_optimizer_device="nvme" --zero3_save_16bit_model="true" test.py`:
```bash
Distributed environment: DEEPSPEED Backend: nccl
Num processes: 4
Process index: 0
Local process index: 0
Device: cuda:0
Mixed precision type: fp16
ds_config: {'bf16': {'enabled': False}, 'zero_optimization': {'stage': 3, 'stage3_gather_16bit_weights_on_model_save': True, 'offload_optimizer': {'device': 'nvme'}, 'offload_param': {'device': 'cpu'}}, 'gradient_clipping': 1.0, 'train_batch_size': 'auto', 'train_micro_batch_size_per_gpu': 'auto', 'gradient_accumulation_steps': 5, 'steps_per_print': inf, 'fp16': {'enabled': True, 'auto_cast': True}}
```
**Note**: Remaining `"auto"` values are handled in `accelerator.prepare()` call as explained in point 2 of
`Important code changes when using DeepSpeed Config File`.
## Saving and loading
1. Saving and loading of models is unchanged for ZeRO Stage-1 and Stage-2.
Papers:
- [ZeRO-Offload: Democratizing Billion-Scale Model Training](https://arxiv.org/abs/2101.06840)
- [ZeRO-Infinity: Breaking the GPU Memory Wall for Extreme Scale Deep Learning](https://arxiv.org/abs/2104.07857)
Finally, please remember that 🤗 `Accelerate` only integrates DeepSpeed; therefore, if you
have any problems or questions with regard to DeepSpeed usage, please file an issue with [DeepSpeed GitHub](https://github.com/microsoft/DeepSpeed/issues).


<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Distributed Inference with 🤗 Accelerate
Distributed inference is a common use case, especially with natural language processing (NLP) models. Users often want to
send a number of different prompts, each to a different GPU, and then get the results back. This also has other use cases
outside of NLP; however, for this tutorial we will focus on this idea of each GPU receiving a different prompt,
and then returning the results.
## The Problem
Normally when doing this, users send the model to a specific device to load it from the CPU, and then move each prompt to a different device.
A basic pipeline using the `diffusers` library might look something like this:
```python
import torch
import torch.distributed as dist
from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
```
Followed then by performing inference based on the specific prompt:
```python
def run_inference(rank, world_size):
dist.init_process_group("nccl", rank=rank, world_size=world_size)
pipe.to(rank)
if torch.distributed.get_rank() == 0:
prompt = "a dog"
elif torch.distributed.get_rank() == 1:
prompt = "a cat"
result = pipe(prompt).images[0]
result.save(f"result_{rank}.png")
```
One will notice how we have to check the rank to know what prompt to send, which can be a bit tedious.

A user might then also think that with 🤗 Accelerate, using the `Accelerator` to prepare a dataloader for such a task might also be
a simple way to manage this. (To learn more, check out the relevant section in the [Quick Tour](../quicktour#distributed-evaluation))

Can it manage it? Yes. Does it add unneeded extra code, however? Also yes.
## The Solution
With 🤗 Accelerate, we can simplify this process by using the [`Accelerator.split_between_processes`] context manager (which also exists in `PartialState` and `AcceleratorState`).
This function will automatically split whatever data you pass to it (be it a prompt, a set of tensors, a dictionary of the prior data, etc.) across all the processes (with the potential for padding) for you to use right away.
Let's rewrite the above example using this context manager:
```python
import torch

from accelerate import PartialState  # Can also be Accelerator or AcceleratorState
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
distributed_state = PartialState()
pipe.to(distributed_state.device)
# Assume two processes
with distributed_state.split_between_processes(["a dog", "a cat"]) as prompt:
result = pipe(prompt).images[0]
result.save(f"result_{distributed_state.process_index}.png")
```
Then, to launch the code, we can use 🤗 Accelerate.

If you have generated a config file with `accelerate config`:
```bash
accelerate launch distributed_inference.py
```
If you have a specific config file you want to use:
```bash
accelerate launch --config_file my_config.json distributed_inference.py
```
Or if you don't want to make any config files and launch on two GPUs:
> Note: You will get some warnings about values being guessed based on your system. To remove these you can do `accelerate config default` or go through `accelerate config` to create a config file.
```bash
accelerate launch --num_processes 2 distributed_inference.py
```
With just a few lines of code, we've eliminated most of the boilerplate needed to split this data across processes.
But what if we have an odd distribution of prompts to GPUs? For example, what if we have 3 prompts, but only 2 GPUs?
Under the context manager, the first GPU would receive the first two prompts and the second GPU the third, ensuring that
all prompts are split and no overhead is needed.
*However*, what if we then wanted to do something with the results of *all the GPUs*? (Say gather them all and perform some kind of post-processing)
You can pass in `apply_padding=True` to ensure that the lists of prompts are padded to the same length, with extra data being taken
from the last sample. This way all GPUs will have the same number of prompts, and you can then gather the results.
<Tip>
This is only needed when trying to perform an action such as gathering the results, where the data on each device
needs to be the same length. Basic inference does not require this.
</Tip>
For instance:
```python
import torch

from accelerate import PartialState  # Can also be Accelerator or AcceleratorState
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
distributed_state = PartialState()
pipe.to(distributed_state.device)
# Assume two processes
with distributed_state.split_between_processes(["a dog", "a cat", "a chicken"], apply_padding=True) as prompt:
result = pipe(prompt).images
```
On the first GPU, the prompts will be `["a dog", "a cat"]`, and on the second GPU it will be `["a chicken", "a chicken"]`.
Make sure to drop the final sample, as it will be a duplicate of the previous one.


<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Learning how to incorporate 🤗 Accelerate features quickly!
Please use the interactive tool below to help you get started with learning about a particular
feature of 🤗 Accelerate and how to utilize it! It will provide you with a code diff, an explanation
of what is going on, as well as some useful links to explore more within
the documentation!

Most code examples start from the following Python code before integrating 🤗 Accelerate in some way:
```python
for batch in dataloader:
optimizer.zero_grad()
inputs, targets = batch
inputs = inputs.to(device)
targets = targets.to(device)
outputs = model(inputs)
loss = loss_function(outputs, targets)
loss.backward()
optimizer.step()
scheduler.step()
```
<div class="block dark:hidden">
<iframe
src="https://muellerzr-accelerate-examples.hf.space?__theme=light"
width="850"
height="1600"
></iframe>
</div>
<div class="hidden dark:block">
<iframe
src="https://muellerzr-accelerate-examples.hf.space?__theme=dark"
width="850"
height="1600"
></iframe>
</div>


http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Fully Sharded Data Parallel
Currently, `Accelerate` supports the following config through the CLI:
```bash
`Sharding Strategy`: [1] FULL_SHARD (shards optimizer states, gradients and parameters), [2] SHARD_GRAD_OP (shards optimizer states and gradients), [3] NO_SHARD, [4] HYBRID_SHARD (FULL_SHARD within a node, replicated across nodes), [5] HYBRID_SHARD_ZERO2 (SHARD_GRAD_OP within a node, replicated across nodes)
`Offload Params`: Decides whether to offload parameters and gradients to CPU
`Auto Wrap Policy`: [1] TRANSFORMER_BASED_WRAP, [2] SIZE_BASED_WRAP, [3] NO_WRAP
`Transformer Layer Class to Wrap`: When using `TRANSFORMER_BASED_WRAP`, the user specifies a comma-separated string of transformer layer class names (case-sensitive) to wrap, e.g.,
`BertLayer`, `GPTJBlock`, `T5Block`, `BertLayer,BertEmbeddings,BertSelfOutput`...
`Min Num Params`: minimum number of parameters when using `SIZE_BASED_WRAP`
`Backward Prefetch`: [1] BACKWARD_PRE, [2] BACKWARD_POST, [3] NO_PREFETCH
`State Dict Type`: [1] FULL_STATE_DICT, [2] LOCAL_STATE_DICT, [3] SHARDED_STATE_DICT
`Use Orig Params`: If True, allows non-uniform `requires_grad` during init, which means support for interspersed frozen and trainable parameters.
Useful in cases such as parameter-efficient fine-tuning.
Please refer to this [blog](https://dev-discuss.pytorch.org/t/rethinking-pytorch-fully-sharded-data-parallel-fsdp-from-first-principles/1019)
`Sync Module States`: If True, each individually wrapped FSDP unit will broadcast module parameters from rank 0
`Forward Prefetch`: If True, then FSDP explicitly prefetches the next upcoming all-gather while executing in the forward pass
```
For additional and more nuanced control, you can specify other FSDP parameters via `FullyShardedDataParallelPlugin`.
When creating the `FullyShardedDataParallelPlugin` object, pass it the parameters that weren't part of the accelerate config or that you want to override.
The FSDP parameters will be picked based on the accelerate config file or launch command arguments, and the other parameters that you pass directly through the `FullyShardedDataParallelPlugin` object will set/override those.
Below is an example:
```py
from accelerate import Accelerator, FullyShardedDataParallelPlugin
from torch.distributed.fsdp.fully_sharded_data_parallel import FullOptimStateDictConfig, FullStateDictConfig
fsdp_plugin = FullyShardedDataParallelPlugin(
state_dict_config=FullStateDictConfig(offload_to_cpu=False, rank0_only=False),
optim_state_dict_config=FullOptimStateDictConfig(offload_to_cpu=False, rank0_only=False),
)
accelerator = Accelerator(fsdp_plugin=fsdp_plugin)
```
## Saving and loading
The new recommended way of checkpointing when using FSDP models is to use `SHARDED_STATE_DICT` as `StateDictType` when setting up the accelerate config.
Below is the code snippet to save using the `save_state` utility of Accelerate:
```py
accelerator.save_state("ckpt")
```
Inspect the checkpoint folder to see model and optimizer as shards per process:
```bash
ls ckpt
# optimizer_0 pytorch_model_0 random_states_0.pkl random_states_1.pkl scheduler.bin
cd ckpt
ls optimizer_0
# __0_0.distcp __1_0.distcp
ls pytorch_model_0
# __0_0.distcp __1_0.distcp
```
To load them back for resuming the training, use the `load_state` utility of Accelerate:
```py
accelerator.load_state("ckpt")
```
When using transformers `save_pretrained`, pass `state_dict=accelerator.get_state_dict(model)` to save the model state dict.
Below is an example:
```diff
unwrapped_model.save_pretrained(
args.output_dir,
is_main_process=accelerator.is_main_process,
save_function=accelerator.save,
+ state_dict=accelerator.get_state_dict(model),
)
```
### State Dict
`accelerator.get_state_dict` will call the underlying `model.state_dict` implementation. With a model wrapped by FSDP, the default behavior of `state_dict` is to gather all of the state in the rank 0 device. This can cause CUDA out of memory errors if the parameters don't fit on a single GPU.
To avoid this, PyTorch provides a context manager that adjusts the behavior of `state_dict`. To offload some of the state dict onto CPU, you can use the following code:
```py
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, StateDictType, FullStateDictConfig
full_state_dict_config = FullStateDictConfig(offload_to_cpu=True, rank0_only=True)
with FSDP.state_dict_type(unwrapped_model, StateDictType.FULL_STATE_DICT, full_state_dict_config):
state = accelerator.get_state_dict(unwrapped_model)
```
You can then pass `state` into the `save_pretrained` method. There are several modes for `StateDictType` and `FullStateDictConfig` that you can use to control the behavior of `state_dict`. For more information, see the [PyTorch documentation](https://pytorch.org/docs/stable/fsdp.html).
## A few caveats to be aware of
- PyTorch FSDP auto wraps sub-modules, flattens the parameters and shards the parameters in place.


http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Performing gradient accumulation with 🤗 Accelerate
First, the code shown earlier will be converted to utilize 🤗 Accelerate without the special gradient accumulation helper:
<Tip warning={true}>
In its current state, this code is not going to perform gradient accumulation efficiently due to a process called gradient synchronization. Read more about that in the [Concepts tutorial](../concept_guides/gradient_synchronization)!
</Tip>
You simply pass the number of steps to perform before each call to `step()` to the [`Accelerator`], and it will automatically adjust the loss:

```diff
+ accelerator = Accelerator(gradient_accumulation_steps=2)
```
Alternatively, you can pass in a `gradient_accumulation_plugin` parameter to the [`Accelerator`] object's `__init__`, which will allow you to further customize the gradient accumulation behavior.
Read more about that in the [GradientAccumulationPlugin](../package_reference/accelerator#accelerate.utils.GradientAccumulationPlugin) docs.
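A minimal sketch of the plugin route (equivalent here to passing `gradient_accumulation_steps=2` directly):
```python
from accelerate import Accelerator
from accelerate.utils import GradientAccumulationPlugin

# Configure gradient accumulation through the plugin instead of the bare integer.
plugin = GradientAccumulationPlugin(num_steps=2)
accelerator = Accelerator(gradient_accumulation_plugin=plugin)
```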
From here you can use the [`~Accelerator.accumulate`] context manager from inside your training loop to automatically perform the gradient accumulation for you!
You just wrap it around the entire training part of our code:
You can remove all the special checks for the step number and the loss adjustment.

As you can see, the [`Accelerator`] is able to keep track of the batch number you are on, and it will automatically know whether to step through the prepared optimizer and how to adjust the loss.
<Tip>
Typically with gradient accumulation, you would need to adjust the number of steps to reflect the change in total batches you are
training on. 🤗 Accelerate automagically does this for you by default. Behind the scenes we instantiate a GradientAccumulationPlugin configured to do this.
</Tip>
## The finished code
Below is the finished implementation for performing gradient accumulation with 🤗 Accelerate
```py
for batch in training_dataloader:
    with accelerator.accumulate(model):
        inputs, targets = batch
        outputs = model(inputs)
        loss = loss_function(outputs, targets)
        accelerator.backward(loss)
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()
```
To learn more about what magic this wraps around, read the [Gradient Synchronization concept guide](../concept_guides/gradient_synchronization)


<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Intel® Extension for PyTorch
[IPEX](https://github.com/intel/intel-extension-for-pytorch) is optimized for CPUs with AVX-512 or above, and functionally works for CPUs with only AVX2. So, it is expected to bring a performance benefit for Intel CPU generations with AVX-512 or above, while CPUs with only AVX2 (e.g., AMD CPUs or older Intel CPUs) might see better performance under IPEX, but this is not guaranteed. IPEX provides performance optimizations for CPU training with both Float32 and BFloat16. The usage of BFloat16 is the main focus of the following sections.
Low precision data type BFloat16 has been natively supported on the 3rd Generation Xeon® Scalable Processors (aka Cooper Lake) with AVX512 instruction set and will be supported on the next generation of Intel® Xeon® Scalable Processors with Intel® Advanced Matrix Extensions (Intel® AMX) instruction set with further boosted performance. The Auto Mixed Precision for CPU backend has been enabled since PyTorch-1.10. At the same time, the support of Auto Mixed Precision with BFloat16 for CPU and BFloat16 optimization of operators has been massively enabled in Intel® Extension for PyTorch, and partially upstreamed to PyTorch master branch. Users can get better performance and user experience with IPEX Auto Mixed Precision.
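As a minimal sketch, the same mixed-precision flag used elsewhere in 🤗 Accelerate drives BF16 autocast on CPU once IPEX is enabled through the config:
```python
from accelerate import Accelerator

# With IPEX enabled in the accelerate config and BF16 mixed precision,
# CPU training runs under autocast with bfloat16.
accelerator = Accelerator(mixed_precision="bf16")
```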
## IPEX installation:
IPEX releases follow PyTorch. To install via pip:
| PyTorch Version | IPEX version |
| :---------------: | :----------: |
| 2.0 | 2.0.0 |
| 1.13 | 1.13.0 |
| 1.12 | 1.12.300 |
| 1.11 | 1.11.200 |
| 1.10 | 1.10.100 |
```bash
pip install intel_extension_for_pytorch==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu
```
Check more approaches for [IPEX installation](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/installation.html).
## How It Works For Training Optimization on CPU

🤗 Accelerate has integrated [IPEX](https://github.com/intel/intel-extension-for-pytorch); all you need to do is enable it through the config.

**Scenario 1**: Acceleration of non-distributed CPU training
Run <u>accelerate config</u> on your machine:
```bash
$ accelerate config
-----------------------------------------------------------------------------------------------------------------------------------------------------------
In which compute environment are you running?
This machine
-----------------------------------------------------------------------------------------------------------------------------------------------------------
Which type of machine are you using?
No distributed training
Do you want to run your training on CPU only (even if a GPU / Apple Silicon device is available)? [yes/NO]:yes
Do you want to use Intel PyTorch Extension (IPEX) to speed up training on CPU? [yes/NO]:yes
Do you wish to optimize your script with torch dynamo?[yes/NO]:NO
Do you want to use DeepSpeed? [yes/NO]: NO
-----------------------------------------------------------------------------------------------------------------------------------------------------------
Do you wish to use FP16 or BF16 (mixed precision)?
bf16
```
This will generate a config file that will be used automatically to properly set the
default options when doing
```bash
accelerate launch my_script.py --args_to_my_script
```
For instance, here is how you would run the NLP example `examples/nlp_example.py` (from the root of the repo) with IPEX enabled.

Below is the `default_config.yaml` that is generated after `accelerate config`:
```yaml
compute_environment: LOCAL_MACHINE
distributed_type: 'NO'
downcast_bf16: 'no'
ipex_config:
ipex: true
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 1
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: true
```
```bash
accelerate launch examples/nlp_example.py
```
**Scenario 2**: Acceleration of distributed CPU training

We use Intel oneCCL for communication, combined with the Intel® MPI library, to deliver flexible, efficient, scalable cluster messaging on Intel® architecture. You can refer to [this guide](https://huggingface.co/docs/transformers/perf_train_cpu_many) for the installation instructions.

Run <u>accelerate config</u> on your machine (node0):
```bash
$ accelerate config
-----------------------------------------------------------------------------------------------------------------------------------------------------------
In which compute environment are you running?
This machine
-----------------------------------------------------------------------------------------------------------------------------------------------------------
Which type of machine are you using?
multi-CPU
How many different machines will you use (use more than 1 for multi-node training)? [1]: 4
-----------------------------------------------------------------------------------------------------------------------------------------------------------
What is the rank of this machine?
0
What is the IP address of the machine that will host the main process? 36.112.23.24
What is the port you will use to communicate with the main process? 29500
Are all the machines on the same local network? Answer `no` if nodes are on the cloud and/or on different network hosts [YES/no]: yes
Do you want to use Intel PyTorch Extension (IPEX) to speed up training on CPU? [yes/NO]:yes
Do you wish to optimize your script with torch dynamo?[yes/NO]:NO
How many CPU(s) should be used for distributed training? [1]:16
-----------------------------------------------------------------------------------------------------------------------------------------------------------
Do you wish to use FP16 or BF16 (mixed precision)?
bf16
```
For instance, here is how you would run the NLP example `examples/nlp_example.py` (from the root of the repo) with IPEX enabled for distributed CPU training.

Below is the `default_config.yaml` that is generated after `accelerate config`:
```yaml
compute_environment: LOCAL_MACHINE
distributed_type: MULTI_CPU
downcast_bf16: 'no'
ipex_config:
ipex: true
machine_rank: 0
main_process_ip: 36.112.23.24
main_process_port: 29500
main_training_function: main
mixed_precision: bf16
num_machines: 4
num_processes: 16
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: true
```
Set the following environment variables and use Intel MPI to launch the training.

On node0, you need to create a configuration file that contains the IP addresses of each node (for example, `hostfile`) and pass that configuration file path as an argument.
```bash
$ cat hostfile
xxx.xxx.xxx.xxx #node0 ip
xxx.xxx.xxx.xxx #node1 ip
xxx.xxx.xxx.xxx #node2 ip
xxx.xxx.xxx.xxx #node3 ip
```
Now, run the following command on node0 and **16DDP** will be enabled on node0, node1, node2, and node3 with BF16 mixed precision:
```bash
oneccl_bindings_for_pytorch_path=$(python -c "from oneccl_bindings_for_pytorch import cwd; print(cwd)")
source $oneccl_bindings_for_pytorch_path/env/setvars.sh
export CCL_WORKER_COUNT=1
export MASTER_ADDR=xxx.xxx.xxx.xxx #node0 ip
export CCL_ATL_TRANSPORT=ofi
mpirun -f hostfile -n 16 -ppn 4 accelerate launch examples/nlp_example.py
```
## Related Resources
- [Project's github](https://github.com/intel/intel-extension-for-pytorch)
- [API docs](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/api_doc.html)
- [Tuning guide](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/performance_tuning/tuning_guide.html)
- [Blogs & Publications](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/blogs_publications.html)


<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Using Local SGD with 🤗 Accelerate
Local SGD is a technique for distributed training where gradients are not synchronized every step. Thus, each process updates its own version of the model weights and, after a given number of steps, these weights are synchronized by averaging across all processes. This improves communication efficiency and can lead to a substantial training speed-up, especially when a computer lacks a faster interconnect such as NVLink.
Unlike gradient accumulation (where improving communication efficiency requires increasing the effective batch size), Local SGD does not require changing a batch size or a learning rate / schedule. However, if necessary, Local SGD can be combined with gradient accumulation as well.
In this tutorial you will see how to quickly set up Local SGD with 🤗 Accelerate. Compared to a standard Accelerate setup, this requires only two extra lines of code.
This example will use a very simplistic PyTorch training loop that performs gradient accumulation every two batches:
```python
device = "cuda"
model.to(device)
gradient_accumulation_steps = 2
for index, batch in enumerate(training_dataloader):
inputs, targets = batch
inputs = inputs.to(device)
targets = targets.to(device)
outputs = model(inputs)
loss = loss_function(outputs, targets)
loss = loss / gradient_accumulation_steps
loss.backward()
if (index + 1) % gradient_accumulation_steps == 0:
optimizer.step()
scheduler.step()
optimizer.zero_grad()
```
## Converting it to 🤗 Accelerate
First, the code shown earlier will be converted to use 🤗 Accelerate without either the LocalSGD or the gradient accumulation helper:
```diff
+ from accelerate import Accelerator
+ accelerator = Accelerator()
+ model, optimizer, training_dataloader, scheduler = accelerator.prepare(
+ model, optimizer, training_dataloader, scheduler
+ )
for index, batch in enumerate(training_dataloader):
inputs, targets = batch
- inputs = inputs.to(device)
- targets = targets.to(device)
outputs = model(inputs)
loss = loss_function(outputs, targets)
loss = loss / gradient_accumulation_steps
+ accelerator.backward(loss)
if (index+1) % gradient_accumulation_steps == 0:
optimizer.step()
scheduler.step()
```
## Letting 🤗 Accelerate handle model synchronization
All that is left now is to let 🤗 Accelerate handle model parameter synchronization **and** the gradient accumulation for us. For simplicity, let us assume we need to synchronize every 8 steps. This is
achieved by adding one `with LocalSGD` statement and one call to `local_sgd.step()` after every optimizer step:
```diff
+local_sgd_steps=8
+with LocalSGD(accelerator=accelerator, model=model, local_sgd_steps=8, enabled=True) as local_sgd:
for batch in training_dataloader:
with accelerator.accumulate(model):
inputs, targets = batch
outputs = model(inputs)
loss = loss_function(outputs, targets)
accelerator.backward(loss)
optimizer.step()
scheduler.step()
optimizer.zero_grad()
+ local_sgd.step()
```
Under the hood, the Local SGD code **disables** automatic gradient synchronization (but accumulation still works as expected!). Instead, it averages model parameters every `local_sgd_steps` steps (as well as at the end of the training loop).
## Limitations
The current implementation works only with basic multi-GPU (or multi-CPU) training without, e.g., [DeepSpeed](https://github.com/microsoft/DeepSpeed).
## References
Although we are not aware of the true origins of this simple approach, the idea of local SGD is quite old and goes
back to at least:
Zhang, J., De Sa, C., Mitliagkas, I., & Ré, C. (2016). [Parallel SGD: When does averaging help?. arXiv preprint
arXiv:1606.07365.](https://arxiv.org/abs/1606.07365)
We credit the term Local SGD to the following paper (but there might be earlier references we are not aware of).
Stich, Sebastian Urban. ["Local SGD Converges Fast and Communicates Little." ICLR 2019-International Conference on
Learning Representations. No. CONF. 2019.](https://arxiv.org/abs/1805.09767)


http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
An example of the corresponding questions for using Megatron-LM features is shown below:
```bash
:~$ accelerate config --config_file "megatron_gpt_config.yaml"
In which compute environment are you running? ([0] This machine, [1] AWS (Amazon SageMaker)): 0
Which type of machine are you using? ([0] No distributed training, [1] multi-CPU, [2] multi-GPU, [3] TPU): 2
How many different machines will you use (use more than 1 for multi-node training)? [1]:
Do you want to use DeepSpeed? [yes/NO]:
Do you want to use FullyShardedDataParallel? [yes/NO]:
```

You will implement the `accelerate.utils.AbstractTrainStep` or inherit from their corresponding children `accelerate.utils.GPTTrainStep`, `accelerate.utils.BertTrainStep` or `accelerate.utils.T5TrainStep`.
```python
from accelerate.utils import MegatronLMDummyScheduler, GPTTrainStep, avg_losses_across_data_parallel_group
# Custom loss function for the Megatron model
class GPTTrainStepWithCustomLoss(GPTTrainStep):
    def __init__(self, megatron_args, **kwargs):

View File

@ -8,6 +8,9 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Memory Utilities

View File

@ -8,6 +8,9 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Accelerated PyTorch Training on Mac
@ -31,41 +34,10 @@ please follow this nice medium article [GPU-Acceleration Comes to PyTorch on M1
## How it works out of the box
It is enabled by default on macOS machines with MPS-enabled Apple Silicon GPUs.
To disable it, pass `--cpu` flag to `accelerate launch` command or answer the corresponding question when answering the `accelerate config` questionnaire.
On your machine(s) just run:
```bash
accelerate config
```
and answer the questions asked, specifically choose `MPS` for the query:
```
Which type of machine are you using?
```
This will generate a config file that will be used automatically to properly set
the default options when doing `accelerate launch`, such as the one shown below:
```bash
compute_environment: LOCAL_MACHINE
deepspeed_config: {}
distributed_type: MPS
downcast_bf16: 'no'
fsdp_config: {}
machine_rank: 0
main_process_ip: null
main_process_port: null
main_training_function: main
mixed_precision: 'no'
num_machines: 1
num_processes: 1
use_cpu: false
```
After this configuration has been made, here is how you run the CV example
(from the root of the repo) with MPS enabled:
You can directly run the following script to test it out on MPS enabled Apple Silicon machines:
```bash
accelerate launch ./examples/cv_example.py --data_dir images
```
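If you want to sanity-check that your PyTorch build can actually see the MPS device before launching, the standard PyTorch checks are:
```python
import torch

print(torch.backends.mps.is_available())  # an MPS device is present and usable
print(torch.backends.mps.is_built())  # this PyTorch build includes MPS support
```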

View File

@ -0,0 +1,136 @@
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Quantization
## `bitsandbytes` Integration
🤗 Accelerate brings `bitsandbytes` quantization to your model. You can now load any PyTorch model in 8-bit or 4-bit with a few lines of code.
If you want to use 🤗 Transformers models with `bitsandbytes`, you should follow this [documentation](https://huggingface.co/docs/transformers/main_classes/quantization).
To learn more about how the `bitsandbytes` quantization works, check out the blog posts on [8-bit quantization](https://huggingface.co/blog/hf-bitsandbytes-integration) and [4-bit quantization](https://huggingface.co/blog/4bit-transformers-bitsandbytes).
### Pre-Requisites
You will need to install the following requirements:
- Install `bitsandbytes` library
```bash
pip install bitsandbytes
```
- Install latest `accelerate` from source
```bash
pip install git+https://github.com/huggingface/accelerate.git
```
- Install `minGPT` and `huggingface_hub` to run examples
```bash
git clone https://github.com/karpathy/minGPT.git
pip install minGPT/
pip install huggingface_hub
```
### How it works
First, we need to initialize our model. To save memory, we can initialize an empty model using the context manager [`init_empty_weights`].
Let's take the GPT2 model from the minGPT library.
```py
from accelerate import init_empty_weights
from mingpt.model import GPT
model_config = GPT.get_default_config()
model_config.model_type = 'gpt2-xl'
model_config.vocab_size = 50257
model_config.block_size = 1024
with init_empty_weights():
    empty_model = GPT(model_config)
```
Then, we need to get the path to the weights of the model. The path can be the state_dict file (e.g. "pytorch_model.bin") or a folder containing sharded checkpoints.
```py
from huggingface_hub import snapshot_download
weights_location = snapshot_download(repo_id="marcsun13/gpt2-xl-linear-sharded")
```
Finally, you need to set your quantization configuration with [`~utils.BnbQuantizationConfig`].
Here's an example for 8-bit quantization:
```py
from accelerate.utils import BnbQuantizationConfig
quantization_config = BnbQuantizationConfig(load_in_8bit=True, llm_int8_threshold=6)
```
Here's an example for 4-bit quantization:
```py
import torch

from accelerate.utils import BnbQuantizationConfig

quantization_config = BnbQuantizationConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4")
```
To quantize your empty model with the selected configuration, you need to use [`~utils.load_and_quantize_model`].
```py
from accelerate.utils import load_and_quantize_model
quantized_model = load_and_quantize_model(empty_model, weights_location=weights_location, bnb_quantization_config=quantization_config, device_map="auto")
```
### Saving and loading 8-bit model
You can save your 8-bit model with accelerate using [`~Accelerator.save_model`].
```py
from accelerate import Accelerator
accelerator = Accelerator()
new_weights_location = "path/to/save_directory"
accelerator.save_model(quantized_model, new_weights_location)
quantized_model_from_saved = load_and_quantize_model(empty_model, weights_location=new_weights_location, bnb_quantization_config=quantization_config, device_map="auto")
```
Note that 4-bit model serialization is currently not supported.
### Offload modules to CPU and disk
You can offload some modules to CPU or disk if you don't have enough GPU memory to store the entire model.
This uses big model inference under the hood. Check this [documentation](https://huggingface.co/docs/accelerate/usage_guides/big_modeling) for more details.
For 8-bit quantization, the selected modules will be converted to 8-bit precision.
For 4-bit quantization, the selected modules will be kept in the `torch_dtype` that the user passed in `BnbQuantizationConfig`. We will add support for converting these offloaded modules in 4-bit when 4-bit serialization becomes possible.
You just need to pass a custom `device_map` in order to offload modules to the CPU or disk. The offloaded modules will be dispatched to the GPU when needed. Here's an example:
```py
device_map = {
    "transformer.wte": 0,
    "transformer.wpe": 0,
    "transformer.drop": 0,
    "transformer.h": "cpu",
    "transformer.ln_f": "disk",
    "lm_head": "disk",
}
```
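With such a map, loading could look like the following sketch; the `offload_folder` value is an illustrative assumption for where the disk-offloaded weights should be written:
```py
from accelerate.utils import load_and_quantize_model

quantized_model = load_and_quantize_model(
    empty_model,
    weights_location=weights_location,
    bnb_quantization_config=quantization_config,
    device_map=device_map,  # the custom map defined above
    offload_folder="offload",  # hypothetical directory for disk-offloaded weights
)
```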
### Fine-tune a quantized model
It is not possible to perform pure 8-bit or 4-bit training on these models. However, you can train them by leveraging parameter-efficient fine-tuning (PEFT) methods, for example by training adapters on top of them. Have a look at the [peft](https://github.com/huggingface/peft) library for more details.
Currently, you can't add adapters on top of a model quantized with this integration. However, with the official support of adapters in 🤗 Transformers models, you can fine-tune quantized models. If you want to finetune a 🤗 Transformers model, follow this [documentation](https://huggingface.co/docs/transformers/main_classes/quantization) instead, and check out this [demo](https://colab.research.google.com/drive/1VoYNfYDKcKRQRor98Zbf2-9VQTtGJ24k?usp=sharing) on how to fine-tune a 4-bit 🤗 Transformers model.
Note that you don't need to pass `device_map` when loading the model for training: it will automatically be loaded on your GPU. `device_map="auto"` should be used for inference only.
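As a minimal sketch of that workflow with a 🤗 Transformers model (the checkpoint name and LoRA hyperparameters below are illustrative assumptions, not recommendations):
```py
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Load an 8-bit quantized 🤗 Transformers model (assumes `bitsandbytes` is installed)
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", load_in_8bit=True)

# Attach LoRA adapters; only the adapter weights will be trained
lora_config = LoraConfig(r=8, lora_alpha=32, lora_dropout=0.05, bias="none", task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```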
### Example demo - running GPT2 1.5B on a Google Colab
Check out the Google Colab [demo](https://colab.research.google.com/drive/1T1pOgewAWVpR9gKpaEWw4orOrzPFb3yM?usp=sharing) for running a quantized GPT2 model. The GPT2-1.5B model checkpoint is in FP32 and uses 6GB of memory. After quantization, it uses 1.6GB with 8-bit modules and 1.2GB with 4-bit modules.

View File

@ -8,6 +8,9 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Amazon SageMaker
@ -160,10 +163,43 @@ use_cpu: false
want to use different/other Python packages you can do this by adding them to the `requirements.txt`. These packages
will be installed before your training script is started.
### Remote scripts: Use scripts located on Github
### Local Training: SageMaker Local mode
*undecided if feature is needed. Contact us if you would like this feature.*
The local mode in the SageMaker SDK allows you to run your training script locally inside the HuggingFace DLC (Deep Learning Container)
or using your custom container image. This is useful for debugging and testing your training script inside the final container environment.
Local mode uses Docker Compose (*Note: Docker Compose V2 is not supported yet*). The SDK will handle the authentication against ECR
to pull the DLC to your local environment. You can emulate CPU (single and multi-instance) and GPU (single instance) SageMaker training jobs.
To use local mode, you need to set your `ec2_instance_type` to `local`.
```yaml
ec2_instance_type: local
```
### Advanced configuration
The configuration allows you to override parameters for the [Estimator](https://sagemaker.readthedocs.io/en/stable/api/training/estimators.html).
These settings have to be applied in the config file and are not part of `accelerate config`. You can control many additional aspects of the training job, e.g. use Spot instances, enable network isolation and many more.
```yaml
additional_args:
# enable network isolation to restrict internet access for containers
enable_network_isolation: True
```
You can find all available configuration [here](https://sagemaker.readthedocs.io/en/stable/api/training/estimators.html).
### Use Spot Instances
*undecided if feature is needed. Contact us if you would like this feature.*
You can use Spot Instances, e.g., via the [Advanced configuration](#advanced-configuration):
```yaml
additional_args:
use_spot_instances: True
max_wait: 86400
```
*Note: Spot Instances may be terminated, in which case training has to be continued from a checkpoint. This is not handled by 🤗 Accelerate out of the box. Contact us if you would like this feature.*
### Remote scripts: Use scripts located on Github
*undecided if feature is needed. Contact us if you would like this feature.*

View File

@ -8,6 +8,9 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Tracking
@ -83,6 +86,16 @@ for iteration in config["num_iterations"]:
accelerator.end_training()
```
If a tracker requires a directory to save data to, such as `TensorBoard`, then pass the directory path to `project_dir`. The `project_dir` parameter is useful
when there are other configurations to be combined with it in the [`~utils.ProjectConfiguration`] data class. For example, you can save the TensorBoard data to `project_dir` and everything else can be logged in the `logging_dir` parameter of [`~utils.ProjectConfiguration`]:
```python
accelerator = Accelerator(log_with="tensorboard", project_dir=".")
# use with ProjectConfiguration
config = ProjectConfiguration(project_dir=".", logging_dir="another/directory")
accelerator = Accelerator(log_with="tensorboard", project_config=config)
```
## Implementing Custom Trackers
@ -105,9 +118,12 @@ Every tracker must implement three functions and have three properties:
- This should be implemented as a `@property` function
- Should return the internal tracking mechanism the library uses, such as the `run` object for `wandb`.
A brief example can be seen below with an integration with Weights and Biases, containing only the relevant information:
For instance, each method should also utilize the [`state.PartialState`] class if the logger should only be executed on the main process.
A brief example can be seen below with an integration with Weights and Biases, containing only the relevant information and logging just on
the main process:
```python
from accelerate.tracking import GeneralTracker
from accelerate.tracking import GeneralTracker, on_main_process
from typing import Optional
import wandb
@ -117,6 +133,7 @@ class MyCustomTracker(GeneralTracker):
name = "wandb"
requires_logging_directory = False
@on_main_process
def __init__(self, run_name: str):
self.run_name = run_name
run = wandb.init(self.run_name)
@ -125,9 +142,11 @@ class MyCustomTracker(GeneralTracker):
    def tracker(self):
        return self.run.run

    @on_main_process
    def store_init_configuration(self, values: dict):
        wandb.config.update(values)

    @on_main_process
    def log(self, values: dict, step: Optional[int] = None):
        wandb.log(values, step=step)
```
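An instance of the custom tracker can then be passed to [`Accelerator`] via `log_with`, either on its own or alongside the built-in trackers:
```python
tracker = MyCustomTracker("my_run")
accelerator = Accelerator(log_with=tracker)

# or combined with all other available trackers:
accelerator = Accelerator(log_with=[tracker, "all"])
```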
@ -161,16 +180,26 @@ wandb_tracker = accelerator.get_tracker("wandb")
From there you can interact with `wandb`'s `run` object like normal:
<Tip warning={true}>
Make sure to only interact with trackers on the main process!
</Tip>
```python
wandb_tracker.log_artifact(some_artifact_to_log)
```
<Tip>
Trackers built in Accelerate will automatically execute on the correct process,
so if a tracker is only meant to be ran on the main process it will do so
automatically.
</Tip>
If you want to truly remove Accelerate's wrapping entirely, you can
achieve the same outcome with:
```python
wandb_tracker = accelerator.get_tracker("wandb", unwrap=True)
if accelerator.is_main_process:
    wandb_tracker.log_artifact(some_artifact_to_log)
```
## When a wrapper cannot work
If a library's API does not follow a strict `.log` call with an overall dictionary, such as Neptune.AI's, logging can be done manually under an `if accelerator.is_main_process` statement:
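As a sketch of such manual logging, where `experiment` and `log_metric` are placeholder names standing in for whatever object and method the third-party library actually exposes:
```python
if accelerator.is_main_process:
    # `experiment` / `log_metric` are hypothetical placeholders for the
    # third-party library's own API, not real Accelerate calls
    experiment.log_metric("train/loss", loss.item(), step=step)
```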

View File

@ -0,0 +1,175 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Example Zoo
Below is a non-exhaustive list of tutorials and scripts showcasing 🤗 Accelerate.
## Official Accelerate Examples:
### Basic Examples
These examples showcase the base features of Accelerate and are a great starting point
- [Barebones NLP example](https://github.com/huggingface/accelerate/blob/main/examples/nlp_example.py)
- [Barebones distributed NLP example in a Jupyter Notebook](https://github.com/huggingface/notebooks/blob/main/examples/accelerate_examples/simple_nlp_example.ipynb)
- [Barebones computer vision example](https://github.com/huggingface/accelerate/blob/main/examples/cv_example.py)
- [Barebones distributed computer vision example in a Jupyter Notebook](https://github.com/huggingface/notebooks/blob/main/examples/accelerate_examples/simple_cv_example.ipynb)
- [Using Accelerate in Kaggle](https://www.kaggle.com/code/muellerzr/multi-gpu-and-accelerate)
### Feature Specific Examples
These examples showcase specific features that the Accelerate framework offers
- [Automatic memory-aware gradient accumulation](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/automatic_gradient_accumulation.py)
- [Checkpointing states](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/checkpointing.py)
- [Cross validation](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/cross_validation.py)
- [DeepSpeed](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/deepspeed_with_config_support.py)
- [Fully Sharded Data Parallelism](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/fsdp_with_peak_mem_tracking.py)
- [Gradient accumulation](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/gradient_accumulation.py)
- [Memory-aware batch size finder](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/memory.py)
- [Metric Computation](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/multi_process_metrics.py)
- [Using Trackers](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/tracking.py)
- [Using Megatron-LM](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/megatron_lm_gpt_pretraining.py)
### Full Examples
These examples showcase, all at once, every feature shown in "Feature Specific Examples"
- [Complete NLP example](https://github.com/huggingface/accelerate/blob/main/examples/complete_nlp_example.py)
- [Complete computer vision example](https://github.com/huggingface/accelerate/blob/main/examples/complete_cv_example.py)
- [Very complete and extensible vision example showcasing SLURM, hydra, and a very extensible usage of the framework](https://github.com/yuvalkirstain/PickScore)
- [Causal language model fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm_no_trainer.py)
- [Masked language model fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm_no_trainer.py)
- [Speech pretraining example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py)
- [Translation fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/translation/run_translation_no_trainer.py)
- [Text classification fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue_no_trainer.py)
- [Semantic segmentation fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/semantic-segmentation/run_semantic_segmentation_no_trainer.py)
- [Question answering fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/question-answering/run_qa_no_trainer.py)
- [Beam search question answering fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/question-answering/run_qa_beam_search_no_trainer.py)
- [Multiple choice question answering fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/multiple-choice/run_swag_no_trainer.py)
- [Named entity recognition fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/token-classification/run_ner_no_trainer.py)
- [Image classification fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-classification/run_image_classification_no_trainer.py)
- [Summarization fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/summarization/run_summarization_no_trainer.py)
- [End-to-end examples on how to use AWS SageMaker integration of Accelerate](https://github.com/huggingface/notebooks/blob/main/sagemaker/22_accelerate_sagemaker_examples/README.md)
- [Megatron-LM examples for various NLP tasks](https://github.com/pacman100/accelerate-megatron-test)
## Integration Examples
These are tutorials from libraries that integrate with 🤗 Accelerate:
> Don't find your integration here? Make a PR to include it!
### Catalyst
- [Distributed training tutorial with Catalyst](https://catalyst-team.github.io/catalyst/tutorials/ddp.html)
### DALLE2-pytorch
- [Fine-tuning DALLE2](https://github.com/lucidrains/DALLE2-pytorch#usage)
### 🤗 diffusers
- [Performing textual inversion with diffusers](https://github.com/huggingface/diffusers/tree/main/examples/textual_inversion)
- [Training DreamBooth with diffusers](https://github.com/huggingface/diffusers/tree/main/examples/dreambooth)
### fastai
- [Distributed training from Jupyter Notebooks with fastai](https://docs.fast.ai/tutorial.distributed.html)
- [Basic distributed training examples with fastai](https://docs.fast.ai/examples/distributed_app_examples.html)
### GradsFlow
- [Auto Image Classification with GradsFlow](https://docs.gradsflow.com/en/latest/examples/nbs/01-ImageClassification/)
### imagen-pytorch
- [Fine-tuning Imagen](https://github.com/lucidrains/imagen-pytorch#usage)
### Kornia
- [Fine-tuning vision models with Kornia's Trainer](https://kornia.readthedocs.io/en/latest/get-started/training.html)
### PyTorch Accelerated
- [Quickstart distributed training tutorial with PyTorch Accelerated](https://pytorch-accelerated.readthedocs.io/en/latest/quickstart.html)
### PyTorch3D
- [Perform Deep Learning with 3D data](https://pytorch3d.org/tutorials/)
### Stable-Dreamfusion
- [Training with Stable-Dreamfusion to convert text to a 3D model](https://colab.research.google.com/drive/1MXT3yfOFvO0ooKEfiUUvTKwUkrrlCHpF?usp=sharing)
### Tez
- [Leaf disease detection with Tez and Accelerate](https://www.kaggle.com/code/abhishek/tez-faster-and-easier-training-for-leaf-detection/notebook)
### trlx
- [How to implement a sentiment learning task with trlx](https://github.com/CarperAI/trlx#example-how-to-add-a-task)
### Comfy-UI
- [Enabling using large Stable Diffusion Models in low-vram settings using Accelerate](https://github.com/comfyanonymous/ComfyUI/blob/master/comfy/model_management.py#L291-L296)
## In Science
Below is a non-exhaustive list of papers utilizing 🤗 Accelerate.
> Don't find your paper here? Make a PR to include it!
* Yuval Kirstain, Adam Polyak, Uriel Singer, Shahbuland Matiana, Joe Penna, Omer Levy: “Pick-a-Pic: An Open Dataset of User Preferences for Text-to-Image Generation”, 2023; [arXiv:2305.01569](http://arxiv.org/abs/2305.01569).
* Lei Wang, Wanyu Xu, Yihuai Lan, Zhiqiang Hu, Yunshi Lan, Roy Ka-Wei Lee, Ee-Peng Lim: “Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models”, 2023; [arXiv:2305.04091](http://arxiv.org/abs/2305.04091).
* Arthur Câmara, Claudia Hauff: “Moving Stuff Around: A study on efficiency of moving documents into memory for Neural IR models”, 2022; [arXiv:2205.08343](http://arxiv.org/abs/2205.08343).
* Ying Sheng, Lianmin Zheng, Binhang Yuan, Zhuohan Li, Max Ryabinin, Daniel Y. Fu, Zhiqiang Xie, Beidi Chen, Clark Barrett, Joseph E. Gonzalez, Percy Liang, Christopher Ré, Ion Stoica, Ce Zhang: “High-throughput Generative Inference of Large Language Models with a Single GPU”, 2023; [arXiv:2303.06865](http://arxiv.org/abs/2303.06865).
* Peter Melchior, Yan Liang, ChangHoon Hahn, Andy Goulding: “Autoencoding Galaxy Spectra I: Architecture”, 2022; [arXiv:2211.07890](http://arxiv.org/abs/2211.07890).
* Jiaao Chen, Aston Zhang, Mu Li, Alex Smola, Diyi Yang: “A Cheaper and Better Diffusion Language Model with Soft-Masked Noise”, 2023; [arXiv:2304.04746](http://arxiv.org/abs/2304.04746).
* Ayaan Haque, Matthew Tancik, Alexei A. Efros, Aleksander Holynski, Angjoo Kanazawa: “Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions”, 2023; [arXiv:2303.12789](http://arxiv.org/abs/2303.12789).
* Luke Melas-Kyriazi, Christian Rupprecht, Iro Laina, Andrea Vedaldi: “RealFusion: 360° Reconstruction of Any Object from a Single Image”, 2023; [arXiv:2302.10663](http://arxiv.org/abs/2302.10663).
* Xiaoshi Wu, Keqiang Sun, Feng Zhu, Rui Zhao, Hongsheng Li: “Better Aligning Text-to-Image Models with Human Preference”, 2023; [arXiv:2303.14420](http://arxiv.org/abs/2303.14420).
* Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, Yueting Zhuang: “HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in HuggingFace”, 2023; [arXiv:2303.17580](http://arxiv.org/abs/2303.17580).
* Yue Yang, Wenlin Yao, Hongming Zhang, Xiaoyang Wang, Dong Yu, Jianshu Chen: “Z-LaVI: Zero-Shot Language Solver Fueled by Visual Imagination”, 2022; [arXiv:2210.12261](http://arxiv.org/abs/2210.12261).
* Sheng-Yen Chou, Pin-Yu Chen, Tsung-Yi Ho: “How to Backdoor Diffusion Models?”, 2022; [arXiv:2212.05400](http://arxiv.org/abs/2212.05400).
* Junyoung Seo, Wooseok Jang, Min-Seop Kwak, Jaehoon Ko, Hyeonsu Kim, Junho Kim, Jin-Hwa Kim, Jiyoung Lee, Seungryong Kim: “Let 2D Diffusion Model Know 3D-Consistency for Robust Text-to-3D Generation”, 2023; [arXiv:2303.07937](http://arxiv.org/abs/2303.07937).
* Or Patashnik, Daniel Garibi, Idan Azuri, Hadar Averbuch-Elor, Daniel Cohen-Or: “Localizing Object-level Shape Variations with Text-to-Image Diffusion Models”, 2023; [arXiv:2303.11306](http://arxiv.org/abs/2303.11306).
* Dídac Surís, Sachit Menon, Carl Vondrick: “ViperGPT: Visual Inference via Python Execution for Reasoning”, 2023; [arXiv:2303.08128](http://arxiv.org/abs/2303.08128).
* Chenyang Qi, Xiaodong Cun, Yong Zhang, Chenyang Lei, Xintao Wang, Ying Shan, Qifeng Chen: “FateZero: Fusing Attentions for Zero-shot Text-based Video Editing”, 2023; [arXiv:2303.09535](http://arxiv.org/abs/2303.09535).
* Sean Welleck, Jiacheng Liu, Ximing Lu, Hannaneh Hajishirzi, Yejin Choi: “NaturalProver: Grounded Mathematical Proof Generation with Language Models”, 2022; [arXiv:2205.12910](http://arxiv.org/abs/2205.12910).
* Elad Richardson, Gal Metzer, Yuval Alaluf, Raja Giryes, Daniel Cohen-Or: “TEXTure: Text-Guided Texturing of 3D Shapes”, 2023; [arXiv:2302.01721](http://arxiv.org/abs/2302.01721).
* Puijin Cheng, Li Lin, Yijin Huang, Huaqing He, Wenhan Luo, Xiaoying Tang: “Learning Enhancement From Degradation: A Diffusion Model For Fundus Image Enhancement”, 2023; [arXiv:2303.04603](http://arxiv.org/abs/2303.04603).
* Shun Shao, Yftah Ziser, Shay Cohen: “Erasure of Unaligned Attributes from Neural Representations”, 2023; [arXiv:2302.02997](http://arxiv.org/abs/2302.02997).
* Seonghyeon Ye, Hyeonbin Hwang, Sohee Yang, Hyeongu Yun, Yireun Kim, Minjoon Seo: “In-Context Instruction Learning”, 2023; [arXiv:2302.14691](http://arxiv.org/abs/2302.14691).
* Shikun Liu, Linxi Fan, Edward Johns, Zhiding Yu, Chaowei Xiao, Anima Anandkumar: “Prismer: A Vision-Language Model with An Ensemble of Experts”, 2023; [arXiv:2303.02506](http://arxiv.org/abs/2303.02506).
* Haoyu Chen, Zhihua Wang, Yang Yang, Qilin Sun, Kede Ma: “Learning a Deep Color Difference Metric for Photographic Images”, 2023; [arXiv:2303.14964](http://arxiv.org/abs/2303.14964).
* Van-Hoang Le, Hongyu Zhang: “Log Parsing with Prompt-based Few-shot Learning”, 2023; [arXiv:2302.07435](http://arxiv.org/abs/2302.07435).
* Keito Kudo, Yoichi Aoki, Tatsuki Kuribayashi, Ana Brassard, Masashi Yoshikawa, Keisuke Sakaguchi, Kentaro Inui: “Do Deep Neural Networks Capture Compositionality in Arithmetic Reasoning?”, 2023; [arXiv:2302.07866](http://arxiv.org/abs/2302.07866).
* Ruoyao Wang, Peter Jansen, Marc-Alexandre Côté, Prithviraj Ammanabrolu: “Behavior Cloned Transformers are Neurosymbolic Reasoners”, 2022; [arXiv:2210.07382](http://arxiv.org/abs/2210.07382).
* Martin Wessel, Tomáš Horych, Terry Ruas, Akiko Aizawa, Bela Gipp, Timo Spinde: “Introducing MBIB -- the first Media Bias Identification Benchmark Task and Dataset Collection”, 2023; [arXiv:2304.13148](http://arxiv.org/abs/2304.13148). DOI: [10.1145/3539618.3591882](https://dx.doi.org/10.1145/3539618.3591882).
* Hila Chefer, Yuval Alaluf, Yael Vinker, Lior Wolf, Daniel Cohen-Or: “Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models”, 2023; [arXiv:2301.13826](http://arxiv.org/abs/2301.13826).
* Marcio Fonseca, Yftah Ziser, Shay B. Cohen: “Factorizing Content and Budget Decisions in Abstractive Summarization of Long Documents”, 2022; [arXiv:2205.12486](http://arxiv.org/abs/2205.12486).
* Tianxing He, Jingyu Zhang, Tianle Wang, Sachin Kumar, Kyunghyun Cho, James Glass, Yulia Tsvetkov: “On the Blind Spots of Model-Based Evaluation Metrics for Text Generation”, 2022; [arXiv:2212.10020](http://arxiv.org/abs/2212.10020).
* Ori Ram, Yoav Levine, Itay Dalmedigos, Dor Muhlgay, Amnon Shashua, Kevin Leyton-Brown, Yoav Shoham: “In-Context Retrieval-Augmented Language Models”, 2023; [arXiv:2302.00083](http://arxiv.org/abs/2302.00083).
* Dacheng Li, Rulin Shao, Hongyi Wang, Han Guo, Eric P. Xing, Hao Zhang: “MPCFormer: fast, performant and private Transformer inference with MPC”, 2022; [arXiv:2211.01452](http://arxiv.org/abs/2211.01452).
* Baolin Peng, Michel Galley, Pengcheng He, Chris Brockett, Lars Liden, Elnaz Nouri, Zhou Yu, Bill Dolan, Jianfeng Gao: “GODEL: Large-Scale Pre-Training for Goal-Directed Dialog”, 2022; [arXiv:2206.11309](http://arxiv.org/abs/2206.11309).
* Egil Rønningstad, Erik Velldal, Lilja Øvrelid: “Entity-Level Sentiment Analysis (ELSA): An exploratory task survey”, Proceedings of the 29th International Conference on Computational Linguistics, 2022, pages 6773-6783; [arXiv:2304.14241](http://arxiv.org/abs/2304.14241).
* Charlie Snell, Ilya Kostrikov, Yi Su, Mengjiao Yang, Sergey Levine: “Offline RL for Natural Language Generation with Implicit Language Q Learning”, 2022; [arXiv:2206.11871](http://arxiv.org/abs/2206.11871).
* Zhiruo Wang, Shuyan Zhou, Daniel Fried, Graham Neubig: “Execution-Based Evaluation for Open-Domain Code Generation”, 2022; [arXiv:2212.10481](http://arxiv.org/abs/2212.10481).
* Minh-Long Luu, Zeyi Huang, Eric P. Xing, Yong Jae Lee, Haohan Wang: “Expeditious Saliency-guided Mix-up through Random Gradient Thresholding”, 2022; [arXiv:2212.04875](http://arxiv.org/abs/2212.04875).
* Jun Hao Liew, Hanshu Yan, Daquan Zhou, Jiashi Feng: “MagicMix: Semantic Mixing with Diffusion Models”, 2022; [arXiv:2210.16056](http://arxiv.org/abs/2210.16056).
* Yaqing Wang, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Hassan Awadallah, Jianfeng Gao: “LiST: Lite Prompted Self-training Makes Parameter-Efficient Few-shot Learners”, 2021; [arXiv:2110.06274](http://arxiv.org/abs/2110.06274).

View File

@ -1,117 +0,0 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Example Zoo
Below is a non-exhaustive list of tutorials and scripts showcasing Accelerate.
## Official Accelerate Examples:
### Basic Examples
These examples showcase the base features of Accelerate and are a great starting point
- [Barebones NLP example](https://github.com/huggingface/accelerate/blob/main/examples/nlp_example.py)
- [Barebones distributed NLP example in a Jupyter Notebook](https://github.com/huggingface/notebooks/blob/main/examples/accelerate_examples/simple_nlp_example.ipynb)
- [Barebones computer vision example](https://github.com/huggingface/accelerate/blob/main/examples/cv_example.py)
- [Barebones distributed computer vision example in a Jupyter Notebook](https://github.com/huggingface/notebooks/blob/main/examples/accelerate_examples/simple_cv_example.ipynb)
- [Using Accelerate in Kaggle](https://www.kaggle.com/code/muellerzr/multi-gpu-and-accelerate)
### Feature Specific Examples
These examples showcase specific features that the Accelerate framework offers
- [Automatic memory-aware gradient accumulation](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/automatic_gradient_accumulation.py)
- [Checkpointing states](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/checkpointing.py)
- [Cross validation](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/cross_validation.py)
- [DeepSpeed](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/deepspeed_with_config_support.py)
- [Fully Sharded Data Parallelism](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/fsdp_with_peak_mem_tracking.py)
- [Gradient accumulation](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/gradient_accumulation.py)
- [Memory-aware batch size finder](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/memory.py)
- [Metric Computation](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/multi_process_metrics.py)
- [Using Trackers](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/tracking.py)
- [Using Megatron-LM](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/megatron_lm_gpt_pretraining.py)
### Full Examples
These examples showcase, all at once, every feature shown in "Feature Specific Examples"
- [Complete NLP example](https://github.com/huggingface/accelerate/blob/main/examples/complete_nlp_example.py)
- [Complete computer vision example](https://github.com/huggingface/accelerate/blob/main/examples/complete_cv_example.py)
- [Causal language model fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm_no_trainer.py)
- [Masked language model fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm_no_trainer.py)
- [Speech pretraining example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py)
- [Translation fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/translation/run_translation_no_trainer.py)
- [Text classification fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue_no_trainer.py)
- [Semantic segmentation fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/semantic-segmentation/run_semantic_segmentation_no_trainer.py)
- [Question answering fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/question-answering/run_qa_no_trainer.py)
- [Beam search question answering fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/question-answering/run_qa_beam_search_no_trainer.py)
- [Multiple choice question answering fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/multiple-choice/run_swag_no_trainer.py)
- [Named entity recognition fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/token-classification/run_ner_no_trainer.py)
- [Image classification fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-classification/run_image_classification_no_trainer.py)
- [Summarization fine-tuning example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/summarization/run_summarization_no_trainer.py)
- [End-to-end examples on how to use AWS SageMaker integration of Accelerate](https://github.com/huggingface/notebooks/blob/main/sagemaker/22_accelerate_sagemaker_examples/README.md)
- [Megatron-LM examples for various NLP tasks](https://github.com/pacman100/accelerate-megatron-test)
## Integration Examples
These are tutorials from libraries that integrate with 🤗 Accelerate:
### Catalyst
- [Distributed training tutorial with Catalyst](https://catalyst-team.github.io/catalyst/tutorials/ddp.html)
### DALLE2-pytorch
- [Fine-tuning DALLE2](https://github.com/lucidrains/DALLE2-pytorch#usage)
### 🤗 diffusers
- [Performing textual inversion with diffusers](https://github.com/huggingface/diffusers/tree/main/examples/textual_inversion)
- [Training DreamBooth with diffusers](https://github.com/huggingface/diffusers/tree/main/examples/dreambooth)
### fastai
- [Distributed training from Jupyter Notebooks with fastai](https://docs.fast.ai/tutorial.distributed.html)
- [Basic distributed training examples with fastai](https://docs.fast.ai/examples/distributed_app_examples.html)
### GradsFlow
- [Auto Image Classification with GradsFlow](https://docs.gradsflow.com/en/latest/examples/nbs/01-ImageClassification/)
### imagen-pytorch
- [Fine-tuning Imagen](https://github.com/lucidrains/imagen-pytorch#usage)
### Kornia
- [Fine-tuning vision models with Kornia's Trainer](https://kornia.readthedocs.io/en/latest/get-started/training.html)
### PyTorch Accelerated
- [Quickstart distributed training tutorial with PyTorch Accelerated](https://pytorch-accelerated.readthedocs.io/en/latest/quickstart.html)
### PyTorch3D
- [Perform Deep Learning with 3D data](https://pytorch3d.org/tutorials/)
### Stable-Dreamfusion
- [Training with Stable-Dreamfusion to convert text to a 3D model](https://colab.research.google.com/drive/1MXT3yfOFvO0ooKEfiUUvTKwUkrrlCHpF?usp=sharing)
### Tez
- [Leaf disease detection with Tez and Accelerate](https://www.kaggle.com/code/abhishek/tez-faster-and-easier-training-for-leaf-detection/notebook)
### trlx
- [How to implement a sentiment learning task with trlx](https://github.com/CarperAI/trlx#example-how-to-add-a-task)

View File

@ -51,22 +51,22 @@ To run it in each of these various modes, use the following commands:
python ./nlp_example.py # from a server with a GPU
```
- with fp16 (mixed-precision)
* from any server by passing `fp16=True` to the `Accelerator`.
* from any server by passing `mixed_precision="fp16"` to the `Accelerator`.
```bash
python ./nlp_example.py --fp16
python ./nlp_example.py --mixed_precision fp16
```
* from any server with Accelerate launcher
```bash
accelerate launch --fp16 ./nlp_example.py
accelerate launch --mixed_precision fp16 ./nlp_example.py
```
- multi GPUs (using PyTorch distributed mode)
* With Accelerate config and launcher
```bash
accelerate config # This will create a config file on your server
accelerate launch ./nlp_example.py # This will run the script on your server
```
* With traditional PyTorch launcher
* With traditional PyTorch launcher (`torch.distributed.launch` can be used with older versions of PyTorch)
```bash
python -m torch.distributed.launch --nproc_per_node 2 --use_env ./nlp_example.py
torchrun --nproc_per_node 2 ./nlp_example.py
```
- multi GPUs, multi node (several machines, using PyTorch distributed mode)
* With Accelerate config and launcher, on each machine:
@ -74,14 +74,14 @@ To run it in each of these various modes, use the following commands:
accelerate config # This will create a config file on each server
accelerate launch ./nlp_example.py # This will run the script on each server
```
* With PyTorch launcher only
* With PyTorch launcher only (`torch.distributed.launch` can be used in older versions of PyTorch)
```bash
python -m torch.distributed.launch --nproc_per_node 2 \
torchrun --nproc_per_node 2 \
--node_rank 0 \
--master_addr master_node_ip_address \
./nlp_example.py # On the first server
python -m torch.distributed.launch --nproc_per_node 2 \
torchrun --nproc_per_node 2 \
--node_rank 1 \
--master_addr master_node_ip_address \
@ -139,22 +139,22 @@ To run it in each of these various modes, use the following commands:
python ./cv_example.py # from a server with a GPU
```
- with fp16 (mixed-precision)
* from any server by passing `fp16=True` to the `Accelerator`.
* from any server by passing `mixed_precision="fp16"` to the `Accelerator`.
```bash
python ./cv_example.py --data_dir path_to_data --fp16
python ./cv_example.py --data_dir path_to_data --mixed_precision fp16
```
* from any server with Accelerate launcher
```bash
accelerate launch --fp16 ./cv_example.py --data_dir path_to_data
accelerate launch --mixed_precision fp16 ./cv_example.py --data_dir path_to_data
```
- multi GPUs (using PyTorch distributed mode)
* With Accelerate config and launcher
```bash
accelerate config # This will create a config file on your server
accelerate launch ./cv_example.py --data_dir path_to_data # This will run the script on your server
```
* With traditional PyTorch launcher
* With traditional PyTorch launcher (`torch.distributed.launch` can be used with older versions of PyTorch)
```bash
python -m torch.distributed.launch --nproc_per_node 2 --use_env ./cv_example.py --data_dir path_to_data
torchrun --nproc_per_node 2 ./cv_example.py --data_dir path_to_data
```
- multi GPUs, multi node (several machines, using PyTorch distributed mode)
* With Accelerate config and launcher, on each machine:
@ -162,14 +162,14 @@ To run it in each of these various modes, use the following commands:
accelerate config # This will create a config file on each server
accelerate launch ./cv_example.py --data_dir path_to_data # This will run the script on each server
```
* With PyTorch launcher only
* With PyTorch launcher only (`torch.distributed.launch` can be used with older versions of PyTorch)
```bash
python -m torch.distributed.launch --nproc_per_node 2 \
torchrun --nproc_per_node 2 \
--node_rank 0 \
--master_addr master_node_ip_address \
./cv_example.py --data_dir path_to_data # On the first server
python -m torch.distributed.launch --nproc_per_node 2 \
torchrun --nproc_per_node 2 \
--node_rank 1 \
--master_addr master_node_ip_address \
@ -190,7 +190,22 @@ To run it in each of these various modes, use the following commands:
### Using AWS SageMaker integration
- [Examples showcasing AWS SageMaker integration of 🤗 Accelerate.](https://github.com/pacman100/accelerate-aws-sagemaker)
## Simple Multi-GPU Hardware Launcher
[multigpu_remote_launcher.py](./multigpu_remote_launcher.py) is a minimal script that demonstrates launching accelerate
on multiple remote GPUs, with automatic hardware environment and dependency setup for reproducibility. You can
easily customize the training function, training arguments, hyperparameters, and type of compute hardware, and then
run the script to automatically launch multi-GPU training on remote hardware.
This script uses [Runhouse](https://github.com/run-house/runhouse) to launch on self-hosted hardware (e.g. in your own
cloud account or on-premise cluster) but there are other options for running remotely as well. Runhouse can be installed
with `pip install runhouse`, and you can refer to
[hardware setup](https://runhouse-docs.readthedocs-hosted.com/en/latest/api/python/cluster.html#hardware-setup)
for hardware setup instructions, or this
[Colab tutorial](https://colab.research.google.com/drive/1qVwYyLTCPYPSdz9ZX7BZl9Qm0A3j7RJe) for a more in-depth walkthrough.
## Finer Examples
While the first two scripts are extremely barebones in terms of what you can do with accelerate, more advanced features are documented in two other locations.

View File

@ -19,7 +19,7 @@ Adjustments to each script from the base `nlp_example.py` file can be found quic
All following scripts also accept these arguments in addition to their added ones.
These arguments should be added at the end of any method for starting the python script (such as `python`, `accelerate launch`, `python -m torch.distributed.launch`), such as:
These arguments should be added at the end of any method for starting the python script (such as `python`, `accelerate launch`, `python -m torch.distributed.run`), such as:
```bash
accelerate launch ../nlp_example.py --mixed_precision fp16 --cpu 0
@ -34,7 +34,7 @@ accelerate launch ../nlp_example.py --mixed_precision fp16 --cpu 0
- `output_dir`, where saved state folders should be saved to, default is current working directory
- `resume_from_checkpoint`, what checkpoint folder to resume from. ("epoch_0", "step_22", ...)
These arguments should be added at the end of any method for starting the python script (such as `python`, `accelerate launch`, `python -m torch.distributed.launch`), such as:
These arguments should be added at the end of any method for starting the python script (such as `python`, `accelerate launch`, `torchrun`), such as:
(Note, `resume_from_checkpoint` assumes that we've ran the script for one epoch with the `--checkpointing_steps epoch` flag)
@ -48,7 +48,7 @@ accelerate launch ./checkpointing.py --checkpointing_steps epoch output_dir "che
- Arguments available:
- `num_folds`, the number of folds the training dataset should be split into.
These arguments should be added at the end of any method for starting the python script (such as `python`, `accelerate launch`, `python -m torch.distributed.launch`), such as:
These arguments should be added at the end of any method for starting the python script (such as `python`, `accelerate launch`, `torchrun`), such as:
```bash
accelerate launch ./cross_validation.py --num_folds 2
@ -61,7 +61,7 @@ accelerate launch ./cross_validation.py --num_folds 2
- Arguments available:
- `with_tracking`, whether to load in all available experiment trackers from the environment.
These arguments should be added at the end of any method for starting the python script (such as `python`, `accelerate launch`, `python -m torch.distributed.launch`), such as:
These arguments should be added at the end of any method for starting the python script (such as `python`, `accelerate launch`, `torchrun`), such as:
```bash
accelerate launch ./tracking.py --with_tracking
@ -73,8 +73,19 @@ accelerate launch ./tracking.py --with_tracking
- Arguments available:
- `gradient_accumulation_steps`, the number of steps to perform before the gradients are accumulated and the optimizer and scheduler are stepped + zero_grad
These arguments should be added at the end of any method for starting the python script (such as `python`, `accelerate launch`, `python -m torch.distributed.launch`), such as:
These arguments should be added at the end of any method for starting the python script (such as `python`, `accelerate launch`, `torchrun`), such as:
```bash
accelerate launch ./gradient_accumulation.py --gradient_accumulation_steps 5
```
### LocalSGD (`local_sgd.py`)
- Shows how to use `Accelerator.no_sync` to prevent gradient averaging in a distributed setup. However, unlike gradient accumulation, this method does not change the effective batch size. Local SGD can be combined with gradient accumulation.
These arguments should be added at the end of any method for starting the python script (such as `python`, `accelerate launch`, `torchrun`), such as:
```bash
accelerate launch ./local_sgd.py --local_sgd_steps 4
```

View File

@ -14,17 +14,17 @@
import argparse
import os
import torch
from torch.optim import AdamW
from torch.utils.data import DataLoader
# New Code #
import evaluate
from accelerate import Accelerator, DistributedType
from accelerate.utils import find_executable_batch_size
import torch
from datasets import load_dataset
from torch.optim import AdamW
from torch.utils.data import DataLoader
from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_linear_schedule_with_warmup, set_seed
from accelerate import Accelerator
from accelerate.utils import find_executable_batch_size
########################################################################
# This is a fully working simple example to use Accelerate,
@ -84,10 +84,20 @@ def get_dataloaders(accelerator: Accelerator, batch_size: int = 16):
tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
def collate_fn(examples):
# On TPU it's best to pad everything to the same length or training will be very slow.
if accelerator.distributed_type == DistributedType.TPU:
return tokenizer.pad(examples, padding="max_length", max_length=128, return_tensors="pt")
return tokenizer.pad(examples, padding="longest", return_tensors="pt")
# When using mixed precision we want to round to multiples of 8/16
if accelerator.mixed_precision == "fp8":
pad_to_multiple_of = 16
elif accelerator.mixed_precision != "no":
pad_to_multiple_of = 8
else:
pad_to_multiple_of = None
return tokenizer.pad(
examples,
padding="longest",
pad_to_multiple_of=pad_to_multiple_of,
return_tensors="pt",
)
# Instantiate dataloaders.
train_dataloader = DataLoader(
@ -214,8 +224,8 @@ def main():
parser.add_argument(
"--mixed_precision",
type=str,
default="no",
choices=["no", "fp16", "bf16"],
default=None,
choices=["no", "fp16", "bf16", "fp8"],
help="Whether to use mixed precision. Choose"
"between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
"and an Nvidia Ampere GPU.",

View File

@ -15,15 +15,15 @@
import argparse
import os
import evaluate
import torch
from datasets import load_dataset
from torch.optim import AdamW
from torch.utils.data import DataLoader
import evaluate
from accelerate import Accelerator, DistributedType
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_linear_schedule_with_warmup, set_seed
from accelerate import Accelerator, DistributedType
########################################################################
# This is a fully working simple example to use Accelerate,
@ -86,9 +86,22 @@ def get_dataloaders(accelerator: Accelerator, batch_size: int = 16):
def collate_fn(examples):
# On TPU it's best to pad everything to the same length or training will be very slow.
if accelerator.distributed_type == DistributedType.TPU:
return tokenizer.pad(examples, padding="max_length", max_length=128, return_tensors="pt")
return tokenizer.pad(examples, padding="longest", return_tensors="pt")
max_length = 128 if accelerator.distributed_type == DistributedType.TPU else None
# When using mixed precision we want to round to multiples of 8/16
if accelerator.mixed_precision == "fp8":
pad_to_multiple_of = 16
elif accelerator.mixed_precision != "no":
pad_to_multiple_of = 8
else:
pad_to_multiple_of = None
return tokenizer.pad(
examples,
padding="longest",
max_length=max_length,
pad_to_multiple_of=pad_to_multiple_of,
return_tensors="pt",
)
# Instantiate dataloaders.
train_dataloader = DataLoader(
@ -203,13 +216,15 @@ def training_function(config, args):
# Now we train the model
for epoch in range(starting_epoch, num_epochs):
model.train()
for step, batch in enumerate(train_dataloader):
# New Code #
# We need to skip steps until we reach the resumed step during the first epoch
if args.resume_from_checkpoint and epoch == starting_epoch:
if resume_step is not None and step < resume_step:
overall_step += 1
continue
# New Code #
if args.resume_from_checkpoint and epoch == starting_epoch and resume_step is not None:
# We need to skip steps until we reach the resumed step
active_dataloader = accelerator.skip_first_batches(train_dataloader, resume_step)
overall_step += resume_step
else:
# After the first iteration though, we need to go back to the original dataloader
active_dataloader = train_dataloader
for step, batch in enumerate(active_dataloader):
# We could avoid this line since we set the accelerator with `device_placement=True`.
batch.to(accelerator.device)
outputs = model(**batch)
@ -269,8 +284,8 @@ def main():
parser.add_argument(
"--mixed_precision",
type=str,
default="no",
choices=["no", "fp16", "bf16"],
default=None,
choices=["no", "fp16", "bf16", "fp8"],
help="Whether to use mixed precision. Choose"
"between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
"and an Nvidia Ampere GPU.",

View File

@ -15,20 +15,20 @@
import argparse
from typing import List
import evaluate
import numpy as np
import torch
from torch.optim import AdamW
from torch.utils.data import DataLoader
import evaluate
from accelerate import Accelerator, DistributedType
from datasets import DatasetDict, load_dataset
# New Code #
# We'll be using StratifiedKFold for this example
from sklearn.model_selection import StratifiedKFold
from torch.optim import AdamW
from torch.utils.data import DataLoader
from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_linear_schedule_with_warmup, set_seed
from accelerate import Accelerator, DistributedType
########################################################################
# This is a fully working simple example to use Accelerate,
@ -106,9 +106,22 @@ def get_fold_dataloaders(
def collate_fn(examples):
# On TPU it's best to pad everything to the same length or training will be very slow.
if accelerator.distributed_type == DistributedType.TPU:
return tokenizer.pad(examples, padding="max_length", max_length=128, return_tensors="pt")
return tokenizer.pad(examples, padding="longest", return_tensors="pt")
max_length = 128 if accelerator.distributed_type == DistributedType.TPU else None
# When using mixed precision we want to round to multiples of 8/16
if accelerator.mixed_precision == "fp8":
pad_to_multiple_of = 16
elif accelerator.mixed_precision != "no":
pad_to_multiple_of = 8
else:
pad_to_multiple_of = None
return tokenizer.pad(
examples,
padding="longest",
max_length=max_length,
pad_to_multiple_of=pad_to_multiple_of,
return_tensors="pt",
)
# Instantiate dataloaders.
train_dataloader = DataLoader(
@ -250,8 +263,8 @@ def main():
parser.add_argument(
"--mixed_precision",
type=str,
default="no",
choices=["no", "fp16", "bf16"],
default=None,
choices=["no", "fp16", "bf16", "fp8"],
help="Whether to use mixed precision. Choose"
"between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
"and an Nvidia Ampere GPU.",

View File

@ -31,16 +31,12 @@ import random
from itertools import chain
from pathlib import Path
import torch
from torch.utils.data import DataLoader
import datasets
import torch
import transformers
from accelerate import Accelerator, DistributedType
from accelerate.logging import get_logger
from accelerate.utils import DummyOptim, DummyScheduler, set_seed
from datasets import load_dataset
from huggingface_hub import Repository
from torch.utils.data import DataLoader
from tqdm.auto import tqdm
from transformers import (
CONFIG_MAPPING,
@ -55,6 +51,10 @@ from transformers import (
from transformers.utils import get_full_repo_name
from transformers.utils.versions import require_version
from accelerate import Accelerator, DistributedType
from accelerate.logging import get_logger
from accelerate.utils import DummyOptim, DummyScheduler, set_seed
logger = get_logger(__name__)

View File

@ -15,14 +15,22 @@
import argparse
import gc
import os
import torch
from torch.utils.data import DataLoader
import threading
import evaluate
from accelerate import Accelerator, DistributedType
import psutil
import torch
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_linear_schedule_with_warmup, set_seed
from torch.distributed.fsdp.fully_sharded_data_parallel import FullOptimStateDictConfig, FullStateDictConfig
from torch.utils.data import DataLoader
from transformers import (
AutoModelForSequenceClassification,
AutoTokenizer,
get_linear_schedule_with_warmup,
set_seed,
)
from accelerate import Accelerator, DistributedType, FullyShardedDataParallelPlugin
########################################################################
@ -63,15 +71,44 @@ class TorchTracemalloc:
torch.cuda.empty_cache()
torch.cuda.reset_max_memory_allocated() # reset the peak gauge to zero
self.begin = torch.cuda.memory_allocated()
self.process = psutil.Process()
self.cpu_begin = self.cpu_mem_used()
self.peak_monitoring = True
peak_monitor_thread = threading.Thread(target=self.peak_monitor_func)
peak_monitor_thread.daemon = True
peak_monitor_thread.start()
return self
def cpu_mem_used(self):
"""get resident set size memory for the current process"""
return self.process.memory_info().rss
def peak_monitor_func(self):
self.cpu_peak = -1
while True:
self.cpu_peak = max(self.cpu_mem_used(), self.cpu_peak)
# can't sleep or will not catch the peak right (this comment is here on purpose)
# time.sleep(0.001) # 1msec
if not self.peak_monitoring:
break
def __exit__(self, *exc):
self.peak_monitoring = False
gc.collect()
torch.cuda.empty_cache()
self.end = torch.cuda.memory_allocated()
self.peak = torch.cuda.max_memory_allocated()
self.used = b2mb(self.end - self.begin)
self.peaked = b2mb(self.peak - self.begin)
self.cpu_end = self.cpu_mem_used()
self.cpu_used = b2mb(self.cpu_end - self.cpu_begin)
self.cpu_peaked = b2mb(self.cpu_peak - self.cpu_begin)
# print(f"delta used/peak {self.used:4d}/{self.peaked:4d}")
@ -86,13 +123,25 @@ def training_function(config, args):
# For testing only
if os.environ.get("TESTING_MOCKED_DATALOADERS", None) == "1":
config["num_epochs"] = 2
# New Code #
# Pass the advanced FSDP settings not part of the accelerate config by creating fsdp_plugin
fsdp_plugin = FullyShardedDataParallelPlugin(
state_dict_config=FullStateDictConfig(offload_to_cpu=False, rank0_only=False),
optim_state_dict_config=FullOptimStateDictConfig(offload_to_cpu=False, rank0_only=False),
)
# Initialize accelerator
if args.with_tracking:
accelerator = Accelerator(
cpu=args.cpu, mixed_precision=args.mixed_precision, log_with="wandb", logging_dir=args.logging_dir
cpu=args.cpu,
mixed_precision=args.mixed_precision,
log_with="wandb",
project_dir=args.logging_dir,
fsdp_plugin=fsdp_plugin,
)
else:
accelerator = Accelerator()
accelerator = Accelerator(fsdp_plugin=fsdp_plugin)
accelerator.print(accelerator.distributed_type)
if hasattr(args.checkpointing_steps, "isdigit"):
@ -147,9 +196,22 @@ def training_function(config, args):
def collate_fn(examples):
# On TPU it's best to pad everything to the same length or training will be very slow.
if accelerator.distributed_type == DistributedType.TPU:
return tokenizer.pad(examples, padding="max_length", max_length=128, return_tensors="pt")
return tokenizer.pad(examples, padding="longest", return_tensors="pt")
max_length = 128 if accelerator.distributed_type == DistributedType.TPU else None
# When using mixed precision we want to round to multiples of 8/16
if accelerator.mixed_precision == "fp8":
pad_to_multiple_of = 16
elif accelerator.mixed_precision != "no":
pad_to_multiple_of = 8
else:
pad_to_multiple_of = None
return tokenizer.pad(
examples,
padding="longest",
max_length=max_length,
pad_to_multiple_of=pad_to_multiple_of,
return_tensors="pt",
)
# Instantiate dataloaders.
train_dataloader = DataLoader(
@ -162,7 +224,10 @@ def training_function(config, args):
set_seed(seed)
# Instantiate the model (we build the model here so that the seed also controls new weight initialization)
model = AutoModelForSequenceClassification.from_pretrained(args.model_name_or_path, return_dict=True)
model = AutoModelForSequenceClassification.from_pretrained(
args.model_name_or_path, return_dict=True, low_cpu_mem_usage=True
)
# New Code #
# For FSDP feature, it is highly recommended and efficient to prepare the model before creating optimizer
model = accelerator.prepare(model)
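Editor's note: the diff follows the FSDP recommendation of wrapping the model before building the optimizer, so the optimizer is constructed over the flattened, sharded parameters. A self-contained sketch of that ordering (toy model; FSDP itself is enabled via the plugin or `accelerate launch`):

```python
import torch.nn as nn
from torch.optim import AdamW
from accelerate import Accelerator

accelerator = Accelerator()  # fsdp_plugin=... in the real script
model = accelerator.prepare(nn.Linear(128, 2))  # wrap/shard the model first
optimizer = AdamW(model.parameters(), lr=2e-5)  # then build the optimizer on its params
optimizer = accelerator.prepare(optimizer)
```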
@ -330,8 +395,8 @@ def main():
parser.add_argument(
"--mixed_precision",
type=str,
default="no",
choices=["no", "fp16", "bf16"],
default=None,
choices=["no", "fp16", "bf16", "fp8"],
help="Whether to use mixed precision. Choose"
"between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
"and an Nvidia Ampere GPU.",
@ -373,7 +438,7 @@ def main():
required=True,
)
args = parser.parse_args()
config = {"lr": 2e-5, "num_epochs": 3, "seed": 1, "batch_size": 16}
config = {"lr": 2e-5, "num_epochs": 3, "seed": 42, "batch_size": 16}
training_function(config, args)

View File

@ -15,15 +15,15 @@
import argparse
import os
import evaluate
import torch
from datasets import load_dataset
from torch.optim import AdamW
from torch.utils.data import DataLoader
import evaluate
from accelerate import Accelerator, DistributedType
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_linear_schedule_with_warmup, set_seed
from accelerate import Accelerator, DistributedType
########################################################################
# This is a fully working simple example to use Accelerate
@ -81,9 +81,22 @@ def get_dataloaders(accelerator: Accelerator, batch_size: int = 16):
def collate_fn(examples):
# On TPU it's best to pad everything to the same length or training will be very slow.
if accelerator.distributed_type == DistributedType.TPU:
return tokenizer.pad(examples, padding="max_length", max_length=128, return_tensors="pt")
return tokenizer.pad(examples, padding="longest", return_tensors="pt")
max_length = 128 if accelerator.distributed_type == DistributedType.TPU else None
# When using mixed precision we want to round to multiples of 8/16
if accelerator.mixed_precision == "fp8":
pad_to_multiple_of = 16
elif accelerator.mixed_precision != "no":
pad_to_multiple_of = 8
else:
pad_to_multiple_of = None
return tokenizer.pad(
examples,
padding="longest",
max_length=max_length,
pad_to_multiple_of=pad_to_multiple_of,
return_tensors="pt",
)
# Instantiate dataloaders.
train_dataloader = DataLoader(
@ -192,8 +205,8 @@ def main():
parser.add_argument(
"--mixed_precision",
type=str,
default="no",
choices=["no", "fp16", "bf16"],
default=None,
choices=["no", "fp16", "bf16", "fp8"],
help="Whether to use mixed precision. Choose"
"between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
"and an Nvidia Ampere GPU.",

View File

@ -0,0 +1,238 @@
# coding=utf-8
# Copyright 2023 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import os
import evaluate
import torch
from datasets import load_dataset
from torch.optim import AdamW
from torch.utils.data import DataLoader
from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_linear_schedule_with_warmup, set_seed
from accelerate import Accelerator, DistributedType
from accelerate.local_sgd import LocalSGD
########################################################################
# This is a fully working simple example to use Accelerate
# with LocalSGD, which is a method to synchronize model
# parameters every K batches. It is different, but complementary
# to gradient accumulation.
#
# This example trains a Bert base model on GLUE MRPC
# in any of the following settings (with the same script):
# - single CPU or single GPU
# - multi GPUS (using PyTorch distributed mode)
# - (multi) TPUs
# - fp16 (mixed-precision) or fp32 (normal precision)
#
# To run it in each of these various modes, follow the instructions
# in the readme for examples:
# https://github.com/huggingface/accelerate/tree/main/examples
#
########################################################################
MAX_GPU_BATCH_SIZE = 16
EVAL_BATCH_SIZE = 32
def get_dataloaders(accelerator: Accelerator, batch_size: int = 16):
"""
Creates a set of `DataLoader`s for the `glue` dataset,
using "bert-base-cased" as the tokenizer.
Args:
accelerator (`Accelerator`):
An `Accelerator` object
batch_size (`int`, *optional*):
The batch size for the train and validation DataLoaders.
"""
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
datasets = load_dataset("glue", "mrpc")
def tokenize_function(examples):
# max_length=None => use the model max length (it's actually the default)
outputs = tokenizer(examples["sentence1"], examples["sentence2"], truncation=True, max_length=None)
return outputs
# Apply the method we just defined to all the examples in all the splits of the dataset
# starting with the main process first:
with accelerator.main_process_first():
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
remove_columns=["idx", "sentence1", "sentence2"],
)
# We also rename the 'label' column to 'labels' which is the expected name for labels by the models of the
# transformers library
tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
def collate_fn(examples):
# On TPU it's best to pad everything to the same length or training will be very slow.
max_length = 128 if accelerator.distributed_type == DistributedType.TPU else None
# When using mixed precision we want to round to multiples of 8/16
if accelerator.mixed_precision == "fp8":
pad_to_multiple_of = 16
elif accelerator.mixed_precision != "no":
pad_to_multiple_of = 8
else:
pad_to_multiple_of = None
return tokenizer.pad(
examples,
padding="longest",
max_length=max_length,
pad_to_multiple_of=pad_to_multiple_of,
return_tensors="pt",
)
# Instantiate dataloaders.
train_dataloader = DataLoader(
tokenized_datasets["train"], shuffle=True, collate_fn=collate_fn, batch_size=batch_size
)
eval_dataloader = DataLoader(
tokenized_datasets["validation"], shuffle=False, collate_fn=collate_fn, batch_size=EVAL_BATCH_SIZE
)
return train_dataloader, eval_dataloader
# For testing only
if os.environ.get("TESTING_MOCKED_DATALOADERS", None) == "1":
from accelerate.test_utils.training import mocked_dataloaders
get_dataloaders = mocked_dataloaders # noqa: F811
def training_function(config, args):
# For testing only
if os.environ.get("TESTING_MOCKED_DATALOADERS", None) == "1":
config["num_epochs"] = 2
# New Code #
gradient_accumulation_steps = int(args.gradient_accumulation_steps)
local_sgd_steps = int(args.local_sgd_steps)
# Initialize accelerator
accelerator = Accelerator(
cpu=args.cpu, mixed_precision=args.mixed_precision, gradient_accumulation_steps=gradient_accumulation_steps
)
if accelerator.distributed_type not in [DistributedType.NO, DistributedType.MULTI_CPU, DistributedType.MULTI_GPU]:
raise NotImplementedError("LocalSGD is supported only for CPUs and GPUs (no DeepSpeed or MegatronLM)")
# Sample hyper-parameters for learning rate, batch size, seed and a few other HPs
lr = config["lr"]
num_epochs = int(config["num_epochs"])
seed = int(config["seed"])
batch_size = int(config["batch_size"])
metric = evaluate.load("glue", "mrpc")
set_seed(seed)
train_dataloader, eval_dataloader = get_dataloaders(accelerator, batch_size)
# Instantiate the model (we build the model here so that the seed also controls new weight initialization)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", return_dict=True)
# We could avoid this line since the accelerator is set with `device_placement=True` (default value).
# Note that if you are placing tensors on devices manually, this line absolutely needs to be before the optimizer
# creation otherwise training will not work on TPU (`accelerate` will kindly throw an error to make us aware of that).
model = model.to(accelerator.device)
# Instantiate optimizer
optimizer = AdamW(params=model.parameters(), lr=lr)
# Instantiate scheduler
lr_scheduler = get_linear_schedule_with_warmup(
optimizer=optimizer,
num_warmup_steps=100,
num_training_steps=(len(train_dataloader) * num_epochs),
)
# Prepare everything
# There is no specific order to remember, we just need to unpack the objects in the same order we gave them to the
# prepare method.
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
)
# Now we train the model
for epoch in range(num_epochs):
model.train()
with LocalSGD(
accelerator=accelerator, model=model, local_sgd_steps=local_sgd_steps, enabled=local_sgd_steps is not None
) as local_sgd:
for step, batch in enumerate(train_dataloader):
# We could avoid this line since we set the accelerator with `device_placement=True`.
batch.to(accelerator.device)
# New code #
# We use the new `accumulate` context manager to perform gradient accumulation
# We also do not currently support TPUs, nor do we advise using them, as bugs were found on the XLA side when running our tests.
with accelerator.accumulate(model):
output = model(**batch)
loss = output.loss
accelerator.backward(loss)
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
# LocalSGD-specific line
local_sgd.step()
model.eval()
for step, batch in enumerate(eval_dataloader):
# We could avoid this line since we set the accelerator with `device_placement=True`.
batch.to(accelerator.device)
with torch.no_grad():
outputs = model(**batch)
predictions = outputs.logits.argmax(dim=-1)
predictions, references = accelerator.gather_for_metrics((predictions, batch["labels"]))
metric.add_batch(
predictions=predictions,
references=references,
)
eval_metric = metric.compute()
# Use accelerator.print to print only on the main process.
accelerator.print(f"epoch {epoch}:", eval_metric)
def main():
parser = argparse.ArgumentParser(description="Simple example of training script.")
parser.add_argument(
"--mixed_precision",
type=str,
default=None,
choices=["no", "fp16", "bf16", "fp8"],
help="Whether to use mixed precision. Choose"
"between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
"and an Nvidia Ampere GPU.",
)
# New Code #
parser.add_argument(
"--gradient_accumulation_steps",
type=int,
default=1,
help="The number of minibatches to be ran before gradients are accumulated.",
)
parser.add_argument(
"--local_sgd_steps", type=int, default=8, help="Number of local SGD steps or None to disable local SGD"
)
parser.add_argument("--cpu", action="store_true", help="If passed, will train on the CPU.")
args = parser.parse_args()
config = {"lr": 2e-5, "num_epochs": 3, "seed": 42, "batch_size": 16}
training_function(config, args)
if __name__ == "__main__":
main()
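Editor's note: conceptually, each rank trains independently and LocalSGD averages parameters across processes every `local_sgd_steps` batches. An illustrative (not the library's actual) synchronization step might look like:

```python
import torch
import torch.distributed as dist

def average_parameters(model: torch.nn.Module) -> None:
    # No-op when running a single process, so the sketch stays runnable anywhere.
    if not (dist.is_available() and dist.is_initialized()):
        return
    world_size = dist.get_world_size()
    for param in model.parameters():
        dist.all_reduce(param.data, op=dist.ReduceOp.SUM)
        param.data /= world_size  # average across ranks
```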

View File

@ -31,16 +31,12 @@ import random
from itertools import chain
from pathlib import Path
import torch
from torch.utils.data import DataLoader
import datasets
import torch
import transformers
from accelerate import Accelerator, DistributedType
from accelerate.logging import get_logger
from accelerate.utils import MegatronLMDummyScheduler, set_seed
from datasets import load_dataset
from huggingface_hub import Repository
from torch.utils.data import DataLoader
from tqdm.auto import tqdm
from transformers import (
CONFIG_MAPPING,
@ -55,6 +51,10 @@ from transformers import (
from transformers.utils import check_min_version, get_full_repo_name, send_example_telemetry
from transformers.utils.versions import require_version
from accelerate import Accelerator, DistributedType
from accelerate.logging import get_logger
from accelerate.utils import MegatronLMDummyScheduler, set_seed
# Will error if the minimal version of Transformers is not installed. Remove at your own risks.
check_min_version("4.23.0.dev0")

View File

@ -14,16 +14,16 @@
import argparse
import os
import torch
from torch.optim import AdamW
from torch.utils.data import DataLoader
# New Code #
import evaluate
import torch
from datasets import load_dataset
from torch.optim import AdamW
from torch.utils.data import DataLoader
from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_linear_schedule_with_warmup, set_seed
from accelerate import Accelerator, DistributedType
from accelerate.utils import find_executable_batch_size
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_linear_schedule_with_warmup, set_seed
########################################################################
@ -86,9 +86,22 @@ def get_dataloaders(accelerator: Accelerator, batch_size: int = 16):
def collate_fn(examples):
# On TPU it's best to pad everything to the same length or training will be very slow.
if accelerator.distributed_type == DistributedType.TPU:
return tokenizer.pad(examples, padding="max_length", max_length=128, return_tensors="pt")
return tokenizer.pad(examples, padding="longest", return_tensors="pt")
max_length = 128 if accelerator.distributed_type == DistributedType.TPU else None
# When using mixed precision we want round multiples of 8/16
if accelerator.mixed_precision == "fp8":
pad_to_multiple_of = 16
elif accelerator.mixed_precision != "no":
pad_to_multiple_of = 8
else:
pad_to_multiple_of = None
return tokenizer.pad(
examples,
padding="longest",
max_length=max_length,
pad_to_multiple_of=pad_to_multiple_of,
return_tensors="pt",
)
# Instantiate dataloaders.
train_dataloader = DataLoader(
@ -204,8 +217,8 @@ def main():
parser.add_argument(
"--mixed_precision",
type=str,
default="no",
choices=["no", "fp16", "bf16"],
default=None,
choices=["no", "fp16", "bf16", "fp8"],
help="Whether to use mixed precision. Choose"
"between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
"and an Nvidia Ampere GPU.",

View File

@ -15,15 +15,15 @@
import argparse
import os
import evaluate
import torch
from datasets import load_dataset
from torch.optim import AdamW
from torch.utils.data import DataLoader
import evaluate
from accelerate import Accelerator, DistributedType
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_linear_schedule_with_warmup, set_seed
from accelerate import Accelerator, DistributedType
########################################################################
# This is a fully working simple example to use Accelerate,
@ -88,9 +88,22 @@ def get_dataloaders(accelerator: Accelerator, batch_size: int = 16):
def collate_fn(examples):
# On TPU it's best to pad everything to the same length or training will be very slow.
if accelerator.distributed_type == DistributedType.TPU:
return tokenizer.pad(examples, padding="max_length", max_length=128, return_tensors="pt")
return tokenizer.pad(examples, padding="longest", return_tensors="pt")
max_length = 128 if accelerator.distributed_type == DistributedType.TPU else None
# When using mixed precision we want to round to multiples of 8/16
if accelerator.mixed_precision == "fp8":
pad_to_multiple_of = 16
elif accelerator.mixed_precision != "no":
pad_to_multiple_of = 8
else:
pad_to_multiple_of = None
return tokenizer.pad(
examples,
padding="longest",
max_length=max_length,
pad_to_multiple_of=pad_to_multiple_of,
return_tensors="pt",
)
# Instantiate dataloaders.
train_dataloader = DataLoader(
@ -209,8 +222,8 @@ def main():
parser.add_argument(
"--mixed_precision",
type=str,
default="no",
choices=["no", "fp16", "bf16"],
default=None,
choices=["no", "fp16", "bf16", "fp8"],
help="Whether to use mixed precision. Choose"
"between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
"and an Nvidia Ampere GPU.",

View File

@ -15,15 +15,15 @@
import argparse
import os
import evaluate
import torch
from datasets import load_dataset
from torch.optim import AdamW
from torch.utils.data import DataLoader
import evaluate
from accelerate import Accelerator, DistributedType
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_linear_schedule_with_warmup, set_seed
from accelerate import Accelerator, DistributedType
########################################################################
# This is a fully working simple example to use Accelerate,
@ -86,9 +86,22 @@ def get_dataloaders(accelerator: Accelerator, batch_size: int = 16):
def collate_fn(examples):
# On TPU it's best to pad everything to the same length or training will be very slow.
if accelerator.distributed_type == DistributedType.TPU:
return tokenizer.pad(examples, padding="max_length", max_length=128, return_tensors="pt")
return tokenizer.pad(examples, padding="longest", return_tensors="pt")
max_length = 128 if accelerator.distributed_type == DistributedType.TPU else None
# When using mixed precision we want to round to multiples of 8/16
if accelerator.mixed_precision == "fp8":
pad_to_multiple_of = 16
elif accelerator.mixed_precision != "no":
pad_to_multiple_of = 8
else:
pad_to_multiple_of = None
return tokenizer.pad(
examples,
padding="longest",
max_length=max_length,
pad_to_multiple_of=pad_to_multiple_of,
return_tensors="pt",
)
# Instantiate dataloaders.
train_dataloader = DataLoader(
@ -120,7 +133,7 @@ def training_function(config, args):
# >>> log_with = ["all", MyCustomTrackerClassInstance()]
if args.with_tracking:
accelerator = Accelerator(
cpu=args.cpu, mixed_precision=args.mixed_precision, log_with="all", logging_dir=args.logging_dir
cpu=args.cpu, mixed_precision=args.mixed_precision, log_with="all", project_dir=args.project_dir
)
else:
accelerator = Accelerator(cpu=args.cpu, mixed_precision=args.mixed_precision)
@ -236,8 +249,8 @@ def main():
parser.add_argument(
"--mixed_precision",
type=str,
default="no",
choices=["no", "fp16", "bf16"],
default=None,
choices=["no", "fp16", "bf16", "fp8"],
help="Whether to use mixed precision. Choose"
"between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
"and an Nvidia Ampere GPU.",
@ -249,10 +262,10 @@ def main():
help="Whether to load in all available experiment trackers from the environment and use them for logging.",
)
parser.add_argument(
"--logging_dir",
"--project_dir",
type=str,
default="logs",
help="Location on where to store experiment tracking logs`",
help="Location on where to store experiment tracking logs` and relevent project information",
)
args = parser.parse_args()
config = {"lr": 2e-5, "num_epochs": 3, "seed": 42, "batch_size": 16}

View File

@ -17,15 +17,15 @@ import os
import re
import numpy as np
import PIL
import torch
from timm import create_model
from torch.optim.lr_scheduler import OneCycleLR
from torch.utils.data import DataLoader, Dataset
import PIL
from accelerate import Accelerator
from timm import create_model
from torchvision.transforms import Compose, RandomResizedCrop, Resize, ToTensor
from accelerate import Accelerator
########################################################################
# This is a fully working simple example to use Accelerate
@ -75,7 +75,7 @@ def training_function(config, args):
# Initialize accelerator
if args.with_tracking:
accelerator = Accelerator(
cpu=args.cpu, mixed_precision=args.mixed_precision, log_with="all", logging_dir=args.logging_dir
cpu=args.cpu, mixed_precision=args.mixed_precision, log_with="all", project_dir=args.project_dir
)
else:
accelerator = Accelerator(cpu=args.cpu, mixed_precision=args.mixed_precision)
@ -173,7 +173,7 @@ def training_function(config, args):
)
# We need to keep track of how many total steps we have iterated over
overall_step = 0
# We also need to keep track of the stating epoch so files are named properly
# We also need to keep track of the starting epoch so files are named properly
starting_epoch = 0
# Potentially load in the weights and states from a previous save
@ -203,12 +203,14 @@ def training_function(config, args):
model.train()
if args.with_tracking:
total_loss = 0
for step, batch in enumerate(train_dataloader):
if args.resume_from_checkpoint and epoch == starting_epoch and resume_step is not None:
# We need to skip steps until we reach the resumed step
if args.resume_from_checkpoint and epoch == starting_epoch:
if resume_step is not None and step < resume_step:
overall_step += 1
continue
active_dataloader = accelerator.skip_first_batches(train_dataloader, resume_step)
overall_step += resume_step
else:
# After the first iteration though, we need to go back to the original dataloader
active_dataloader = train_dataloader
for batch in active_dataloader:
# We could avoid this line since we set the accelerator with `device_placement=True`.
batch = {k: v.to(accelerator.device) for k, v in batch.items()}
inputs = (batch["image"] - mean) / std
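Editor's note: the rewritten resume logic skips the already-consumed batches once via `skip_first_batches`, instead of iterating through them and discarding each step. A runnable sketch with a toy dataloader (`resume_step=3` is a hypothetical checkpoint position):

```python
import torch
from torch.utils.data import DataLoader
from accelerate import Accelerator

accelerator = Accelerator()
dataloader = accelerator.prepare(DataLoader(torch.arange(10), batch_size=2))
resume_step = 3  # hypothetical: batches already seen before the checkpoint
for batch in accelerator.skip_first_batches(dataloader, resume_step):
    print(batch)  # only the remaining two batches of the epoch are yielded
```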
@ -272,8 +274,8 @@ def main():
parser.add_argument(
"--mixed_precision",
type=str,
default="no",
choices=["no", "fp16", "bf16"],
default=None,
choices=["no", "fp16", "bf16", "fp8"],
help="Whether to use mixed precision. Choose"
"between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
"and an Nvidia Ampere GPU.",
@ -303,10 +305,10 @@ def main():
help="Whether to load in all available experiment trackers from the environment and use them for logging.",
)
parser.add_argument(
"--logging_dir",
"--project_dir",
type=str,
default="logs",
help="Location on where to store experiment tracking logs`",
help="Location on where to store experiment tracking logs` and relevent project information",
)
args = parser.parse_args()
config = {"lr": 3e-2, "num_epochs": 3, "seed": 42, "batch_size": 64, "image_size": 224}

View File

@ -15,15 +15,15 @@
import argparse
import os
import evaluate
import torch
from datasets import load_dataset
from torch.optim import AdamW
from torch.utils.data import DataLoader
import evaluate
from accelerate import Accelerator, DistributedType
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_linear_schedule_with_warmup, set_seed
from accelerate import Accelerator, DistributedType
########################################################################
# This is a fully working simple example to use Accelerate
@ -52,7 +52,7 @@ def training_function(config, args):
# Initialize accelerator
if args.with_tracking:
accelerator = Accelerator(
cpu=args.cpu, mixed_precision=args.mixed_precision, log_with="all", logging_dir=args.logging_dir
cpu=args.cpu, mixed_precision=args.mixed_precision, log_with="all", project_dir=args.project_dir
)
else:
accelerator = Accelerator(cpu=args.cpu, mixed_precision=args.mixed_precision)
@ -109,9 +109,22 @@ def training_function(config, args):
def collate_fn(examples):
# On TPU it's best to pad everything to the same length or training will be very slow.
if accelerator.distributed_type == DistributedType.TPU:
return tokenizer.pad(examples, padding="max_length", max_length=128, return_tensors="pt")
return tokenizer.pad(examples, padding="longest", return_tensors="pt")
max_length = 128 if accelerator.distributed_type == DistributedType.TPU else None
# When using mixed precision we want to round to multiples of 8/16
if accelerator.mixed_precision == "fp8":
pad_to_multiple_of = 16
elif accelerator.mixed_precision != "no":
pad_to_multiple_of = 8
else:
pad_to_multiple_of = None
return tokenizer.pad(
examples,
padding="longest",
max_length=max_length,
pad_to_multiple_of=pad_to_multiple_of,
return_tensors="pt",
)
# Instantiate dataloaders.
train_dataloader = DataLoader(
@ -180,12 +193,14 @@ def training_function(config, args):
model.train()
if args.with_tracking:
total_loss = 0
for step, batch in enumerate(train_dataloader):
if args.resume_from_checkpoint and epoch == starting_epoch and resume_step is not None:
# We need to skip steps until we reach the resumed step
if args.resume_from_checkpoint and epoch == starting_epoch:
if resume_step is not None and step < resume_step:
overall_step += 1
continue
active_dataloader = accelerator.skip_first_batches(train_dataloader, resume_step)
overall_step += resume_step
else:
# After the first iteration though, we need to go back to the original dataloader
active_dataloader = train_dataloader
for step, batch in enumerate(active_dataloader):
# We could avoid this line since we set the accelerator with `device_placement=True`.
batch.to(accelerator.device)
outputs = model(**batch)
@ -251,8 +266,8 @@ def main():
parser.add_argument(
"--mixed_precision",
type=str,
default="no",
choices=["no", "fp16", "bf16"],
default=None,
choices=["no", "fp16", "bf16", "fp8"],
help="Whether to use mixed precision. Choose"
"between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
"and an Nvidia Ampere GPU.",
@ -282,10 +297,10 @@ def main():
help="Optional save directory where all checkpoint folders will be stored. Default is the current working directory.",
)
parser.add_argument(
"--logging_dir",
"--project_dir",
type=str,
default="logs",
help="Location on where to store experiment tracking logs`",
help="Location on where to store experiment tracking logs` and relevent project information",
)
args = parser.parse_args()
config = {"lr": 2e-5, "num_epochs": 3, "seed": 42, "batch_size": 16}

View File

@ -17,15 +17,15 @@ import os
import re
import numpy as np
import PIL
import torch
from timm import create_model
from torch.optim.lr_scheduler import OneCycleLR
from torch.utils.data import DataLoader, Dataset
import PIL
from accelerate import Accelerator
from timm import create_model
from torchvision.transforms import Compose, RandomResizedCrop, Resize, ToTensor
from accelerate import Accelerator
########################################################################
# This is a fully working simple example to use Accelerate
@ -189,8 +189,8 @@ def main():
parser.add_argument(
"--mixed_precision",
type=str,
default="no",
choices=["no", "fp16", "bf16"],
default=None,
choices=["no", "fp16", "bf16", "fp8"],
help="Whether to use mixed precision. Choose"
"between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
"and an Nvidia Ampere GPU.",

View File

@ -0,0 +1,55 @@
import argparse
import runhouse as rh
import torch
from nlp_example import training_function
from accelerate.utils import PrepareForLaunch, patch_environment
def launch_train(*args):
num_processes = torch.cuda.device_count()
print(f"Device count: {num_processes}")
with patch_environment(
world_size=num_processes, master_addr="127.0.0.1", master_port="29500", mixed_precision=args[1].mixed_precision
):
launcher = PrepareForLaunch(training_function, distributed_type="MULTI_GPU")
torch.multiprocessing.start_processes(launcher, args=args, nprocs=num_processes, start_method="spawn")
if __name__ == "__main__":
# Refer to https://runhouse-docs.readthedocs-hosted.com/en/main/rh_primitives/cluster.html#hardware-setup
# for cloud access setup instructions (if using on-demand hardware), and for API specifications.
# on-demand GPU
# gpu = rh.cluster(name='rh-cluster', instance_type='V100:1', provider='cheapest', use_spot=False) # single GPU
gpu = rh.cluster(name="rh-cluster", instance_type="V100:4", provider="cheapest", use_spot=False) # multi GPU
gpu.up_if_not()
# on-prem GPU
# gpu = rh.cluster(
# ips=["ip_addr"], ssh_creds={ssh_user:"<username>", ssh_private_key:"<key_path>"}, name="rh-cluster"
# )
# Set up remote function
reqs = [
"pip:./",
"transformers",
"datasets",
"evaluate",
"tqdm",
"scipy",
"scikit-learn",
"tensorboard",
"torch --upgrade --extra-index-url https://download.pytorch.org/whl/cu117",
]
launch_train_gpu = rh.function(fn=launch_train, system=gpu, reqs=reqs, name="train_bert_glue")
# Define train args/config, run train function
train_args = argparse.Namespace(cpu=False, mixed_precision="fp16")
config = {"lr": 2e-5, "num_epochs": 3, "seed": 42, "batch_size": 16}
launch_train_gpu(config, train_args, stream_logs=True)
# Alternatively, we can just run as instructed in the README (but only because there's already a wrapper CLI):
# gpu.install_packages(reqs)
# gpu.run(['accelerate launch --multi_gpu accelerate/examples/nlp_example.py'])
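Editor's note: `patch_environment` temporarily exports its keyword arguments as upper-cased environment variables and restores the previous environment on exit, which is what lets the launcher above mimic `accelerate launch`. A small sketch (assuming the variables were unset beforehand):

```python
import os
from accelerate.utils import patch_environment

with patch_environment(master_addr="127.0.0.1", master_port="29500"):
    print(os.environ["MASTER_ADDR"], os.environ["MASTER_PORT"])  # set inside the block
print("MASTER_ADDR" in os.environ)  # False again once the block exits
```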

View File

@ -14,15 +14,15 @@
# limitations under the License.
import argparse
import evaluate
import torch
from datasets import load_dataset
from torch.optim import AdamW
from torch.utils.data import DataLoader
import evaluate
from accelerate import Accelerator, DistributedType
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_linear_schedule_with_warmup, set_seed
from accelerate import Accelerator, DistributedType
########################################################################
# This is a fully working simple example to use Accelerate
@ -79,16 +79,33 @@ def get_dataloaders(accelerator: Accelerator, batch_size: int = 16):
def collate_fn(examples):
# On TPU it's best to pad everything to the same length or training will be very slow.
if accelerator.distributed_type == DistributedType.TPU:
return tokenizer.pad(examples, padding="max_length", max_length=128, return_tensors="pt")
return tokenizer.pad(examples, padding="longest", return_tensors="pt")
max_length = 128 if accelerator.distributed_type == DistributedType.TPU else None
# When using mixed precision we want to round to multiples of 8/16
if accelerator.mixed_precision == "fp8":
pad_to_multiple_of = 16
elif accelerator.mixed_precision != "no":
pad_to_multiple_of = 8
else:
pad_to_multiple_of = None
return tokenizer.pad(
examples,
padding="longest",
max_length=max_length,
pad_to_multiple_of=pad_to_multiple_of,
return_tensors="pt",
)
# Instantiate dataloaders.
train_dataloader = DataLoader(
tokenized_datasets["train"], shuffle=True, collate_fn=collate_fn, batch_size=batch_size
tokenized_datasets["train"], shuffle=True, collate_fn=collate_fn, batch_size=batch_size, drop_last=True
)
eval_dataloader = DataLoader(
tokenized_datasets["validation"], shuffle=False, collate_fn=collate_fn, batch_size=EVAL_BATCH_SIZE
tokenized_datasets["validation"],
shuffle=False,
collate_fn=collate_fn,
batch_size=EVAL_BATCH_SIZE,
drop_last=(accelerator.mixed_precision == "fp8"),
)
return train_dataloader, eval_dataloader
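Editor's note: fp8 kernels (e.g. Transformer Engine) expect every batch to have the same aligned shape, so under fp8 the eval loader drops a ragged final batch rather than feed the kernels an odd-sized remainder. Illustrative arithmetic with hypothetical sizes:

```python
num_eval_examples, eval_batch_size = 1042, 32
full_batches, remainder = divmod(num_eval_examples, eval_batch_size)
print(full_batches, remainder)  # 32 full batches; the 18-example tail is dropped
```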
@ -120,7 +137,6 @@ def training_function(config, args):
# Note that if you are placing tensors on devices manually, this line absolutely needs to be before the optimizer
# creation otherwise training will not work on TPU (`accelerate` will kindly throw an error to make us aware of that).
model = model.to(accelerator.device)
# Instantiate optimizer
optimizer = AdamW(params=model.parameters(), lr=lr)
@ -134,6 +150,7 @@ def training_function(config, args):
# Prepare everything
# There is no specific order to remember, we just need to unpack the objects in the same order we gave them to the
# prepare method.
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
)
@ -176,8 +193,8 @@ def main():
parser.add_argument(
"--mixed_precision",
type=str,
default="no",
choices=["no", "fp16", "bf16"],
default=None,
choices=["no", "fp16", "bf16", "fp8"],
help="Whether to use mixed precision. Choose"
"between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
"and an Nvidia Ampere GPU.",

View File

@ -1,3 +1,17 @@
[tool.black]
line-length = 119
target-version = ['py36']
target-version = ['py37']
[tool.ruff]
# Never enforce `E501` (line length violations).
ignore = ["E501", "E741", "W605"]
select = ["E", "F", "I", "W"]
line-length = 119
# Ignore import violations in all `__init__.py` files.
[tool.ruff.per-file-ignores]
"__init__.py" = ["E402", "F401", "F403", "F811"]
[tool.ruff.isort]
lines-after-imports = 2
known-first-party = ["accelerate"]

View File

@ -4,11 +4,6 @@ ensure_newline_before_comments = True
force_grid_wrap = 0
include_trailing_comma = True
known_first_party = accelerate
known_third_party =
numpy
torch
torch_xla
line_length = 119
lines_after_imports = 2
multi_line_output = 3

View File

@ -16,10 +16,10 @@ from setuptools import setup
from setuptools import find_packages
extras = {}
extras["quality"] = ["black ~= 22.0", "isort >= 5.5.4", "flake8 >= 3.8.3", "hf-doc-builder >= 0.3.0"]
extras["quality"] = ["black ~= 23.1", "ruff >= 0.0.241", "hf-doc-builder >= 0.3.0", "urllib3 < 2.0.0"]
extras["docs"] = []
extras["test_prod"] = ["pytest", "pytest-xdist", "pytest-subtests", "parameterized"]
extras["test_dev"] = ["datasets", "evaluate", "transformers", "scipy", "scikit-learn", "deepspeed<0.7.0", "tqdm"]
extras["test_dev"] = ["datasets", "evaluate", "transformers", "scipy", "scikit-learn", "deepspeed", "tqdm"]
extras["testing"] = extras["test_prod"] + extras["test_dev"]
extras["rich"] = ["rich"]
@ -32,7 +32,7 @@ extras["sagemaker"] = [
setup(
name="accelerate",
version="0.15.0.dev0",
version="0.21.0",
description="Accelerate",
long_description=open("README.md", "r", encoding="utf-8").read(),
long_description_content_type="text/markdown",
@ -50,8 +50,8 @@ setup(
"accelerate-launch=accelerate.commands.launch:main",
]
},
python_requires=">=3.7.0",
install_requires=["numpy>=1.17", "packaging>=20.0", "psutil", "pyyaml", "torch>=1.4.0"],
python_requires=">=3.8.0",
install_requires=["numpy>=1.17", "packaging>=20.0", "psutil", "pyyaml", "torch>=1.10.0"],
extras_require=extras,
classifiers=[
"Development Status :: 5 - Production/Stable",
@ -61,7 +61,7 @@ setup(
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
],
)

View File

@ -1,12 +1,18 @@
# flake8: noqa
# There's no way to ignore "F401 '...' imported but unused" warnings in this
# module, but to preserve other warnings. So, don't check this module at all.
__version__ = "0.15.0.dev0"
__version__ = "0.21.0"
from .accelerator import Accelerator
from .big_modeling import cpu_offload, disk_offload, dispatch_model, init_empty_weights, load_checkpoint_and_dispatch
from .big_modeling import (
cpu_offload,
cpu_offload_with_hook,
disk_offload,
dispatch_model,
init_empty_weights,
init_on_device,
load_checkpoint_and_dispatch,
)
from .data_loader import skip_first_batches
from .launchers import debug_launcher, notebook_launcher
from .state import PartialState
from .utils import (
DeepSpeedPlugin,
DistributedDataParallelKwargs,

1697
src/accelerate/accelerator.py Normal file → Executable file

File diff suppressed because it is too large.

View File

@ -19,17 +19,25 @@ from typing import Dict, List, Optional, Union
import torch
import torch.nn as nn
from .hooks import AlignDevicesHook, add_hook_to_module, attach_align_device_hook, attach_align_device_hook_on_blocks
from .hooks import (
AlignDevicesHook,
CpuOffload,
UserCpuOffloadHook,
add_hook_to_module,
attach_align_device_hook,
attach_align_device_hook_on_blocks,
)
from .utils import (
OffloadedWeightsLoader,
check_device_map,
extract_submodules_state_dict,
find_tied_parameters,
get_balanced_memory,
infer_auto_device_map,
load_checkpoint_in_model,
offload_state_dict,
retie_parameters,
)
from .utils.versions import is_torch_version
@contextmanager
@ -60,8 +68,31 @@ def init_empty_weights(include_buffers: bool = False):
</Tip>
"""
if not is_torch_version(">=", "1.9.0"):
raise NotImplementedError("Initializing empty weights to a meta device requires torch >= 1.9.0")
with init_on_device(torch.device("meta"), include_buffers=include_buffers) as f:
yield f
@contextmanager
def init_on_device(device: torch.device, include_buffers: bool = False):
"""
A context manager under which models are initialized with all parameters on the specified device.
Args:
device (`torch.device`):
Device to initialize all parameters on.
include_buffers (`bool`, *optional*, defaults to `False`):
Whether or not to also put all buffers on the meta device while initializing.
Example:
```python
import torch.nn as nn
from accelerate import init_on_device
with init_on_device(device=torch.device("cuda")):
tst = nn.Linear(100, 100) # on `cuda` device
```
"""
old_register_parameter = nn.Module.register_parameter
if include_buffers:
old_register_buffer = nn.Module.register_buffer
@ -71,12 +102,12 @@ def init_empty_weights(include_buffers: bool = False):
if param is not None:
param_cls = type(module._parameters[name])
kwargs = module._parameters[name].__dict__
module._parameters[name] = param_cls(module._parameters[name].to(torch.device("meta")), **kwargs)
module._parameters[name] = param_cls(module._parameters[name].to(device), **kwargs)
def register_empty_buffer(module, name, buffer):
old_register_buffer(module, name, buffer)
def register_empty_buffer(module, name, buffer, persistent=True):
old_register_buffer(module, name, buffer, persistent=persistent)
if buffer is not None:
module._buffers[name] = module._buffers[name].to(torch.device("meta"))
module._buffers[name] = module._buffers[name].to(device)
# Patch tensor creation
if include_buffers:
@ -89,7 +120,7 @@ def init_empty_weights(include_buffers: bool = False):
def patch_tensor_constructor(fn):
def wrapper(*args, **kwargs):
kwargs["device"] = torch.device("meta")
kwargs["device"] = device
return fn(*args, **kwargs)
return wrapper
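Editor's note: because `register_parameter`, `register_buffer`, and the tensor constructors are patched, every tensor a module creates lands directly on the target device. With the meta device this allocates no real storage, which is what `init_empty_weights` now delegates to:

```python
import torch
import torch.nn as nn
from accelerate import init_on_device

with init_on_device(torch.device("meta")):
    layer = nn.Linear(10_000, 10_000)  # created instantly, no storage allocated
print(layer.weight.device)  # meta
```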
@ -137,8 +168,6 @@ def cpu_offload(
called directly during the forward, for instance if a `dense` linear layer is registered, but at forward,
`dense.weight` and `dense.bias` are used in some operations instead of calling `dense` directly.
"""
if not is_torch_version(">=", "1.9.0"):
raise NotImplementedError("CPU offloading requires torch >= 1.9.0")
if execution_device is None:
execution_device = next(iter(model.parameters())).device
if state_dict is None:
@ -157,6 +186,50 @@ def cpu_offload(
return model
def cpu_offload_with_hook(
model: torch.nn.Module,
execution_device: Optional[Union[int, str, torch.device]] = None,
prev_module_hook: Optional[UserCpuOffloadHook] = None,
):
"""
Offloads a model on the CPU and puts it back to an execution device when executed. The difference with
[`cpu_offload`] is that the model stays on the execution device after the forward and is only offloaded again when
the `offload` method of the returned `hook` is called. Useful for pipelines running a model in a loop.
Args:
model (`torch.nn.Module`):
The model to offload.
execution_device(`str`, `int` or `torch.device`, *optional*):
The device on which the model should be executed. Will default to the MPS device if it's available, then
GPU 0 if there is a GPU, and finally to the CPU.
prev_module_hook (`UserCpuOffloadHook`, *optional*):
The hook sent back by this function for a previous model in the pipeline you are running. If passed, its
offload method will be called just before the forward of the model to which this hook is attached.
Example:
```py
model_1, hook_1 = cpu_offload_with_hook(model_1, cuda_device)
model_2, hook_2 = cpu_offload_with_hook(model_2, cuda_device, prev_module_hook=hook_1)
model_3, hook_3 = cpu_offload_with_hook(model_3, cuda_device, prev_module_hook=hook_2)
hid_1 = model_1(input)
for i in range(50):
# model1 is offloaded on the CPU at the first iteration, model 2 stays on the GPU for this whole loop.
hid_2 = model_2(hid_1)
# model2 is offloaded to the CPU just before this forward.
hid_3 = model_3(hid_2)
# For model3, you need to manually call the hook offload method.
hook_3.offload()
```
"""
hook = CpuOffload(execution_device=execution_device, prev_module_hook=prev_module_hook)
add_hook_to_module(model, hook, append=True)
user_hook = UserCpuOffloadHook(model, hook)
return model, user_hook
def disk_offload(
model: nn.Module,
offload_dir: Union[str, os.PathLike],
@ -184,8 +257,6 @@ def disk_offload(
called directly during the forward, for instance if a `dense` linear layer is registered, but at forward,
`dense.weight` and `dense.bias` are used in some operations instead of calling `dense` directly.
"""
if not is_torch_version(">=", "1.9.0"):
raise NotImplementedError("Disk offloading requires torch >= 1.9.0")
if not os.path.isdir(offload_dir) or not os.path.isfile(os.path.join(offload_dir, "index.json")):
offload_state_dict(offload_dir, model.state_dict())
if execution_device is None:
@ -213,6 +284,7 @@ def dispatch_model(
offload_dir: Optional[Union[str, os.PathLike]] = None,
offload_index: Optional[Dict[str, str]] = None,
offload_buffers: bool = False,
skip_keys: Optional[Union[str, List[str]]] = None,
preload_module_classes: Optional[List[str]] = None,
):
"""
@ -237,64 +309,84 @@ def dispatch_model(
to the index saved in `save_folder`.
offload_buffers (`bool`, *optional*, defaults to `False`):
Whether or not to offload the buffers with the model parameters.
skip_keys (`str` or `List[str]`, *optional*):
A list of keys to ignore when moving inputs or outputs between devices.
preload_module_classes (`List[str]`, *optional*):
A list of classes whose instances should load all their weights (even in the submodules) at the beginning
of the forward. This should only be used for classes that have submodules which are registered but not
called directly during the forward, for instance if a `dense` linear layer is registered, but at forward,
`dense.weight` and `dense.bias` are used in some operations instead of calling `dense` directly.
"""
if not is_torch_version(">=", "1.9.0"):
raise NotImplementedError("Model dispatching requires torch >= 1.9.0")
# Error early if the device map is incomplete.
check_device_map(model, device_map)
if main_device is None:
if set(device_map.values()) == {"cpu"} or set(device_map.values()) == {"cpu", "disk"}:
main_device = "cpu"
# for backward compatibility
is_quantized = getattr(model, "is_quantized", False) or getattr(model, "is_loaded_in_8bit", False)
# We attach hooks if the device_map have at least 2 different devices. Otherwise, the model in already loaded
# in the unique device and the user can decide where to dispatch the model.
# If the model is quantized, we always force-dispatch the model
if (len(set(device_map.values())) > 1) or is_quantized:
if main_device is None:
if set(device_map.values()) == {"cpu"} or set(device_map.values()) == {"cpu", "disk"}:
main_device = "cpu"
else:
main_device = [d for d in device_map.values() if d not in ["cpu", "disk"]][0]
if main_device != "cpu":
cpu_modules = [name for name, device in device_map.items() if device == "cpu"]
if state_dict is None and len(cpu_modules) > 0:
state_dict = extract_submodules_state_dict(model.state_dict(), cpu_modules)
disk_modules = [name for name, device in device_map.items() if device == "disk"]
if offload_dir is None and offload_index is None and len(disk_modules) > 0:
raise ValueError(
"We need an `offload_dir` to dispatch this model according to this `device_map`, the following submodules "
f"need to be offloaded: {', '.join(disk_modules)}."
)
if (
len(disk_modules) > 0
and offload_index is None
and (not os.path.isdir(offload_dir) or not os.path.isfile(os.path.join(offload_dir, "index.json")))
):
disk_state_dict = extract_submodules_state_dict(model.state_dict(), disk_modules)
offload_state_dict(offload_dir, disk_state_dict)
execution_device = {
name: main_device if device in ["cpu", "disk"] else device for name, device in device_map.items()
}
execution_device[""] = main_device
offloaded_devices = ["disk"] if main_device == "cpu" or main_device == "mps" else ["cpu", "disk"]
offload = {name: device in offloaded_devices for name, device in device_map.items()}
save_folder = offload_dir if len(disk_modules) > 0 else None
if state_dict is not None or save_folder is not None or offload_index is not None:
device = main_device if offload_index is not None else None
weights_map = OffloadedWeightsLoader(
state_dict=state_dict, save_folder=save_folder, index=offload_index, device=device
)
else:
main_device = [d for d in device_map.values() if d not in ["cpu", "disk"]][0]
weights_map = None
if main_device != "cpu":
cpu_modules = [name for name, device in device_map.items() if device == "cpu"]
if state_dict is None and len(cpu_modules) > 0:
state_dict = extract_submodules_state_dict(model.state_dict(), cpu_modules)
disk_modules = [name for name, device in device_map.items() if device == "disk"]
if offload_dir is None and offload_index is None and len(disk_modules) > 0:
raise ValueError(
"We need an `offload_dir` to dispatch this model according to this `device_map`, the following submodules "
f"need to be offloaded: {', '.join(disk_modules)}."
)
if (
len(disk_modules) > 0
and offload_index is None
and (not os.path.isdir(offload_dir) or not os.path.isfile(os.path.join(offload_dir, "index.json")))
):
disk_state_dict = extract_submodules_state_dict(model.state_dict(), disk_modules)
offload_state_dict(offload_dir, disk_state_dict)
execution_device = {
name: main_device if device in ["cpu", "disk"] else device for name, device in device_map.items()
}
offloaded_devices = ["disk"] if main_device == "cpu" else ["cpu", "disk"]
offload = {name: device in offloaded_devices for name, device in device_map.items()}
save_folder = offload_dir if len(disk_modules) > 0 else None
if state_dict is not None or save_folder is not None or offload_index is not None:
device = main_device if offload_index is not None else None
weights_map = OffloadedWeightsLoader(
state_dict=state_dict, save_folder=save_folder, index=offload_index, device=device
tied_params = find_tied_parameters(model)
attach_align_device_hook_on_blocks(
model,
execution_device=execution_device,
offload=offload,
offload_buffers=offload_buffers,
weights_map=weights_map,
skip_keys=skip_keys,
preload_module_classes=preload_module_classes,
)
# Attaching the hook may break tied weights, so we retie them
retie_parameters(model, tied_params)
else:
weights_map = None
attach_align_device_hook_on_blocks(
model,
execution_device=execution_device,
offload=offload,
offload_buffers=offload_buffers,
weights_map=weights_map,
preload_module_classes=preload_module_classes,
)
device = list(device_map.values())[0]
if device != "disk":
model.to(device)
else:
raise ValueError(
"You are trying to offload the whole model to the disk. Please use the `disk_offload` function instead."
)
model.hf_device_map = device_map
return model
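Editor's note: a minimal sketch of calling the reworked `dispatch_model`. With a device map that names a single device, the model is simply moved there (the hook-attaching branch only triggers for multi-device or quantized models); the map keys are submodule names:

```python
import torch.nn as nn
from accelerate import dispatch_model

model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 8))
# nn.Sequential names its children "0", "1", "2"; everything stays on CPU here.
model = dispatch_model(model, device_map={"0": "cpu", "1": "cpu", "2": "cpu"})
print(model.hf_device_map)  # {'0': 'cpu', '1': 'cpu', '2': 'cpu'}
```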
@ -309,6 +401,7 @@ def load_checkpoint_and_dispatch(
offload_buffers: bool = False,
dtype: Optional[Union[str, torch.dtype]] = None,
offload_state_dict: Optional[bool] = None,
skip_keys: Optional[Union[str, List[str]]] = None,
preload_module_classes: Optional[List[str]] = None,
):
"""
@ -345,32 +438,54 @@ def load_checkpoint_and_dispatch(
If `True`, will temporarily offload the CPU state dict on the hard drive to avoid getting out of CPU RAM if
the weight of the CPU state dict + the biggest shard does not fit. Will default to `True` if the device map
picked contains `"disk"` values.
skip_keys (`str` or `List[str]`, *optional*):
A list of keys to ignore when moving inputs or outputs between devices.
preload_module_classes (`List[str]`, *optional*):
A list of classes whose instances should load all their weights (even in the submodules) at the beginning
of the forward. This should only be used for classes that have submodules which are registered but not
called directly during the forward, for instance if a `dense` linear layer is registered, but at forward,
`dense.weight` and `dense.bias` are used in some operations instead of calling `dense` directly.
Example:
```python
>>> from accelerate import init_empty_weights, load_checkpoint_and_dispatch
>>> from huggingface_hub import hf_hub_download
>>> from transformers import AutoConfig, AutoModelForCausalLM
>>> # Download the Weights
>>> checkpoint = "EleutherAI/gpt-j-6B"
>>> weights_location = hf_hub_download(checkpoint, "pytorch_model.bin")
>>> # Create a model and initialize it with empty weights
>>> config = AutoConfig.from_pretrained(checkpoint)
>>> with init_empty_weights():
... model = AutoModelForCausalLM.from_config(config)
>>> # Load the checkpoint and dispatch it to the right devices
>>> model = load_checkpoint_and_dispatch(
... model, weights_location, device_map="auto", no_split_module_classes=["GPTJBlock"]
... )
```
"""
if not is_torch_version(">=", "1.9.0"):
raise NotImplementedError("Loading and dispatching requires torch >= 1.9.0")
if isinstance(device_map, str) and device_map not in ["auto", "balanced", "balanced_low_0", "sequential"]:
raise ValueError(
"If passing a string for `device_map`, please choose 'auto', 'balanced', 'balanced_low_0' or "
"'sequential'."
)
if device_map != "sequential":
max_memory = get_balanced_memory(
model,
max_memory=max_memory,
no_split_module_classes=no_split_module_classes,
dtype=dtype,
low_zero=(device_map == "balanced_low_0"),
)
if isinstance(device_map, str):
if device_map != "sequential":
max_memory = get_balanced_memory(
model,
max_memory=max_memory,
no_split_module_classes=no_split_module_classes,
dtype=dtype,
low_zero=(device_map == "balanced_low_0"),
)
device_map = infer_auto_device_map(
model, max_memory=max_memory, no_split_module_classes=no_split_module_classes, dtype=dtype
)
if offload_state_dict is None and "disk" in device_map.values():
if offload_state_dict is None and device_map is not None and "disk" in device_map.values():
offload_state_dict = True
load_checkpoint_in_model(
model,
@ -379,6 +494,7 @@ def load_checkpoint_and_dispatch(
offload_folder=offload_folder,
dtype=dtype,
offload_state_dict=offload_state_dict,
offload_buffers=offload_buffers,
)
if device_map is None:
return model
@ -387,5 +503,6 @@ def load_checkpoint_and_dispatch(
device_map=device_map,
offload_dir=offload_folder,
offload_buffers=offload_buffers,
skip_keys=skip_keys,
preload_module_classes=preload_module_classes,
)

View File

@ -29,6 +29,7 @@ from .utils import (
SCHEDULER_NAME,
get_pretty_name,
is_tpu_available,
is_xpu_available,
save,
)
@ -37,6 +38,7 @@ if is_tpu_available(check_device=False):
import torch_xla.core.xla_model as xm
from .logging import get_logger
from .state import PartialState
logger = get_logger(__name__)
@ -99,8 +101,10 @@ def save_accelerator_state(
states["random_state"] = random.getstate()
states["numpy_random_seed"] = np.random.get_state()
states["torch_manual_seed"] = torch.get_rng_state()
states["torch_cuda_manual_seed"] = torch.cuda.get_rng_state_all()
# ^^ safe to call this function even if cuda is not available
if is_xpu_available():
states["torch_xpu_manual_seed"] = torch.xpu.get_rng_state_all()
else:
states["torch_cuda_manual_seed"] = torch.cuda.get_rng_state_all()
if is_tpu_available():
states["xm_seed"] = xm.get_rng_state()
output_states_file = os.path.join(output_dir, states_name)
@ -109,7 +113,16 @@ def save_accelerator_state(
return output_dir
def load_accelerator_state(input_dir, models, optimizers, schedulers, process_index, scaler=None):
def load_accelerator_state(
input_dir,
models,
optimizers,
schedulers,
process_index,
scaler=None,
map_location=None,
**load_model_func_kwargs,
):
"""
Loads states of the models, optimizers, scaler, and RNG generators from a given directory.
@ -126,19 +139,32 @@ def load_accelerator_state(input_dir, models, optimizers, schedulers, process_in
The current process index in the Accelerator state
scaler (`torch.cuda.amp.GradScaler`, *optional*):
An optional *GradScaler* instance to load
map_location (`str`, *optional*):
What device to load the optimizer state onto. Should be either "cpu" or "on_device".
load_model_func_kwargs (`dict`, *optional*):
Additional arguments that can be passed to the model's `load_state_dict` method.
"""
if map_location not in [None, "cpu", "on_device"]:
raise TypeError(
"Unsupported optimizer map location passed, please choose one of `None`, `'cpu'`, or `'on_device'`"
)
if map_location is None:
map_location = "cpu"
elif map_location == "on_device":
map_location = PartialState().device
# Model states
for i, model in enumerate(models):
weights_name = f"{MODEL_NAME}.bin" if i == 0 else f"{MODEL_NAME}_{i}.bin"
input_model_file = os.path.join(input_dir, weights_name)
models[i].load_state_dict(torch.load(input_model_file, map_location="cpu"))
models[i].load_state_dict(torch.load(input_model_file, map_location=map_location), **load_model_func_kwargs)
logger.info("All model weights loaded successfully")
# Optimizer states
for i, opt in enumerate(optimizers):
optimizer_name = f"{OPTIMIZER_NAME}.bin" if i == 0 else f"{OPTIMIZER_NAME}_{i}.bin"
input_optimizer_file = os.path.join(input_dir, optimizer_name)
optimizers[i].load_state_dict(torch.load(input_optimizer_file, map_location="cpu"))
optimizer_state = torch.load(input_optimizer_file, map_location=map_location)
optimizers[i].load_state_dict(optimizer_state)
logger.info("All optimizer states loaded successfully")
# Scheduler states
@ -160,12 +186,14 @@ def load_accelerator_state(input_dir, models, optimizers, schedulers, process_in
random.setstate(states["random_state"])
np.random.set_state(states["numpy_random_seed"])
torch.set_rng_state(states["torch_manual_seed"])
torch.cuda.set_rng_state_all(states["torch_cuda_manual_seed"])
# ^^ safe to call this function even if cuda is not available
if is_xpu_available():
torch.xpu.set_rng_state_all(states["torch_xpu_manual_seed"])
else:
torch.cuda.set_rng_state_all(states["torch_cuda_manual_seed"])
if is_tpu_available():
xm.set_rng_state(states["xm_seed"])
logger.info("All random states loaded successfully")
except:
except Exception:
logger.info("Could not load random states")
@ -185,4 +213,4 @@ def load_custom_state(obj, path, index: int = 0):
"""
load_location = f"{path}/custom_checkpoint_{index}.pkl"
logger.info(f"Loading the state of {get_pretty_name(obj)} from {load_location}")
obj.load_state_dict(torch.load(load_location))
obj.load_state_dict(torch.load(load_location, map_location="cpu"))
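The new `map_location` argument above only changes where restored optimizer tensors are placed: `None` falls back to "cpu", while "on_device" resolves to the current process's device via `PartialState()`. A minimal sketch of driving the updated helper directly; the checkpoint directory and the tiny model/optimizer below are illustrative, not part of this diff:

import torch
from accelerate.checkpointing import load_accelerator_state

# Illustrative objects; a real call would pass the same models/optimizers
# that were registered with the Accelerator when the state was saved.
model = torch.nn.Linear(8, 8)
optimizer = torch.optim.AdamW(model.parameters())

# map_location=None loads onto CPU; "on_device" loads straight onto
# PartialState().device, avoiding a host round-trip for large states.
load_accelerator_state(
    "my_checkpoint_dir",  # hypothetical directory written by save_accelerator_state
    models=[model],
    optimizers=[optimizer],
    schedulers=[],
    process_index=0,
    map_location="on_device",
)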

View File

@@ -24,7 +24,7 @@ from accelerate.commands.tpu import tpu_command_parser
def main():
parser = ArgumentParser("Accelerate CLI tool", usage="accelerate <command> [<args>]")
parser = ArgumentParser("Accelerate CLI tool", usage="accelerate <command> [<args>]", allow_abbrev=False)
subparsers = parser.add_subparsers(help="accelerate command helpers")
# Register commands

View File

@@ -23,7 +23,7 @@ from .update import update_command_parser
def get_config_parser(subparsers=None):
parent_parser = argparse.ArgumentParser(add_help=False)
parent_parser = argparse.ArgumentParser(add_help=False, allow_abbrev=False)
# The main config parser
config_parser = config_command_parser(subparsers)
# The subparser to add commands to

View File

@@ -19,9 +19,10 @@ import os
from ...utils import (
ComputeEnvironment,
DistributedType,
DynamoBackend,
is_deepspeed_available,
is_mps_available,
is_transformers_available,
is_xpu_available,
)
from ...utils.constants import (
DEEPSPEED_MULTINODE_LAUNCHERS,
@@ -29,9 +30,11 @@ from ...utils.constants import (
FSDP_BACKWARD_PREFETCH,
FSDP_SHARDING_STRATEGY,
FSDP_STATE_DICT_TYPE,
TORCH_DYNAMO_MODES,
)
from .config_args import ClusterConfig
from .config_utils import (
DYNAMO_BACKENDS,
_ask_field,
_ask_options,
_convert_distributed_mode,
@@ -44,7 +47,7 @@ from .config_utils import (
def get_cluster_input():
distributed_type = _ask_options(
"Which type of machine are you using?",
["No distributed training", "multi-CPU", "multi-GPU", "TPU", "MPS"],
["No distributed training", "multi-CPU", "multi-XPU", "multi-GPU", "multi-NPU", "TPU"],
_convert_distributed_mode,
)
@@ -56,11 +59,13 @@ def get_cluster_input():
main_process_port = None
rdzv_backend = "static"
same_network = True
tpu_name = None
tpu_zone = None
commands = None
command_file = None
if distributed_type in [DistributedType.MULTI_GPU, DistributedType.MULTI_CPU]:
if distributed_type in [
DistributedType.MULTI_GPU,
DistributedType.MULTI_NPU,
DistributedType.MULTI_XPU,
DistributedType.MULTI_CPU,
]:
num_machines = _ask_field(
"How many different machines will you use (use more than 1 for multi-node training)? [1]: ",
int,
@@ -92,7 +97,7 @@ def get_cluster_input():
if distributed_type == DistributedType.NO:
use_cpu = _ask_field(
"Do you want to run your training on CPU only (even if a GPU is available)? [yes/NO]:",
"Do you want to run your training on CPU only (even if a GPU / Apple Silicon device is available)? [yes/NO]:",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
@@ -102,6 +107,27 @@ def get_cluster_input():
else:
use_cpu = False
ipex_config = {}
if use_cpu:
ipex_config["ipex"] = _ask_field(
"Do you want to use Intel PyTorch Extension (IPEX) to speed up training on CPU? [yes/NO]:",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
if (
not use_cpu
and is_xpu_available()
and distributed_type not in [DistributedType.MULTI_GPU, DistributedType.MULTI_NPU, DistributedType.TPU]
):
ipex_config["use_xpu"] = _ask_field(
"Do you want to use XPU plugin to speed up training on XPU? [yes/NO]:",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
dynamo_config = {}
use_dynamo = _ask_field(
"Do you wish to optimize your script with torch dynamo?[yes/NO]:",
_convert_yes_no_to_bool,
@@ -109,28 +135,43 @@ def get_cluster_input():
error_message="Please enter yes or no.",
)
if use_dynamo:
dynamo_backend = _ask_options(
prefix = "dynamo_"
dynamo_config[prefix + "backend"] = _ask_options(
"Which dynamo backend would you like to use?",
[
"eager",
"aot_eager",
"inductor",
"nvfuser",
"aot_nvfuser",
"aot_cudagraphs",
"ofi",
"fx2trt",
"onnxrt",
"ipex",
],
[x.lower() for x in DYNAMO_BACKENDS],
_convert_dynamo_backend,
default=2,
)
else:
dynamo_backend = DynamoBackend.NO
use_custom_options = _ask_field(
"Do you want to customize the defaults sent to torch.compile? [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
if use_custom_options:
dynamo_config[prefix + "mode"] = _ask_options(
"Which mode do you want to use?",
TORCH_DYNAMO_MODES,
lambda x: TORCH_DYNAMO_MODES[int(x)],
default=0,
)
dynamo_config[prefix + "use_fullgraph"] = _ask_field(
"Do you want the fullgraph mode or it is ok to break model into several subgraphs? [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
dynamo_config[prefix + "use_dynamic"] = _ask_field(
"Do you want to enable dynamic shape tracing? [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
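For reference, answering these prompts with the inductor backend and custom options would produce a flat dict along these lines, which is what later lands in the saved config file; the values below are illustrative, not emitted by the diff itself:

# Illustrative result of the prompts above:
dynamo_config = {
    "dynamo_backend": "INDUCTOR",  # _convert_dynamo_backend now returns the string value
    "dynamo_mode": "default",      # one of TORCH_DYNAMO_MODES
    "dynamo_use_fullgraph": False,
    "dynamo_use_dynamic": False,
}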
use_mps = not use_cpu and is_mps_available()
deepspeed_config = {}
if distributed_type in [DistributedType.MULTI_GPU, DistributedType.NO]:
if distributed_type in [DistributedType.MULTI_GPU, DistributedType.NO] and not use_mps:
use_deepspeed = _ask_field(
"Do you want to use DeepSpeed? [yes/NO]: ",
_convert_yes_no_to_bool,
@@ -172,6 +213,18 @@ def get_cluster_input():
deepspeed_config["offload_param_device"] = _ask_options(
"Where to offload parameters?", deepspeed_devices, lambda x: deepspeed_devices[int(x)]
)
if deepspeed_config["offload_param_device"] == "nvme":
deepspeed_config["offload_param_nvme_path"] = _ask_field(
"Nvme Path to offload parameters?",
str,
default="/nvme",
)
if deepspeed_config["offload_optimizer_device"] == "nvme":
deepspeed_config["offload_optimizer_nvme_path"] = _ask_field(
"Nvme Path to offload optimizer states?",
str,
default="/nvme",
)
deepspeed_config["gradient_accumulation_steps"] = _ask_field(
"How many gradient accumulation steps you're passing in your script? [1]: ",
int,
@@ -283,14 +336,15 @@ def get_cluster_input():
)
if fsdp_config["fsdp_auto_wrap_policy"] == FSDP_AUTO_WRAP_POLICY[0]:
fsdp_config["fsdp_transformer_layer_cls_to_wrap"] = _ask_field(
"What is the transformer layer class name (case-sensitive) to wrap ,e.g, `BertLayer`, `GPTJBlock`, `T5Block` ...? : ",
"Specify the comma-separated list of transformer layer class names (case-sensitive) to wrap ,e.g, :"
"`BertLayer`, `GPTJBlock`, `T5Block`, `BertLayer,BertEmbeddings,BertSelfOutput` ...? : ",
str,
)
elif fsdp_config["fsdp_auto_wrap_policy"] == FSDP_AUTO_WRAP_POLICY[1]:
fsdp_config["fsdp_min_num_params"] = _ask_field(
"What should be your FSDP's minimum number of parameters for Default Auto Wrapping Policy? [1e8]: ",
int,
default=1e8,
default=100000000,
)
fsdp_backward_prefetch_query = "What should be your FSDP's backward prefetch policy?"
fsdp_config["fsdp_backward_prefetch_policy"] = _ask_options(
@@ -303,6 +357,25 @@ def get_cluster_input():
fsdp_state_dict_type_query,
FSDP_STATE_DICT_TYPE,
lambda x: FSDP_STATE_DICT_TYPE[int(x)],
default=2,
)
fsdp_config["fsdp_forward_prefetch"] = _ask_field(
"Do you want to enable FSDP's forward prefetch policy? [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
fsdp_config["fsdp_use_orig_params"] = _ask_field(
"Do you want to enable FSDP's `use_orig_params` feature? [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
fsdp_config["fsdp_sync_module_states"] = _ask_field(
"Do you want each individually wrapped FSDP unit to broadcast module parameters from rank 0 at the start? [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
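Altogether, the FSDP prompts (including the three new ones for forward prefetch, `use_orig_params`, and module-state syncing) populate `fsdp_config` with entries along these lines; the exact values below are illustrative:

# Illustrative fsdp_config assembled by the prompts above:
fsdp_config = {
    "fsdp_auto_wrap_policy": "TRANSFORMER_BASED_WRAP",
    "fsdp_transformer_layer_cls_to_wrap": "BertLayer,BertSelfOutput",
    "fsdp_min_num_params": 100000000,  # only asked for the size-based policy
    "fsdp_backward_prefetch_policy": "BACKWARD_PRE",
    "fsdp_state_dict_type": "SHARDED_STATE_DICT",  # default=2 in the prompt above
    "fsdp_forward_prefetch": False,
    "fsdp_use_orig_params": False,
    "fsdp_sync_module_states": False,
}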
megatron_lm_config = {}
@@ -365,72 +438,24 @@ def get_cluster_input():
float,
default=1.0,
)
# TPU specific defaults
tpu_commands = None
tpu_command_file = None
tpu_downcast_bf16 = "no"
tpu_env = []
tpu_name = None
tpu_vm = None
tpu_zone = None
tpu_use_sudo = False
tpu_use_cluster = False
if distributed_type == DistributedType.TPU:
main_training_function = _ask_field(
"What is the name of the function in your script that should be launched in all parallel scripts? [main]: ",
default="main",
)
use_cluster = _ask_field(
"Are you using a TPU cluster? [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
if use_cluster:
tpu_name = _ask_field(
"What is the name of your TPU cluster? ",
default=None,
error_message="Please enter the name of your TPU cluster.",
)
tpu_zone = _ask_field(
"What is the zone of your TPU cluster? ",
default=None,
error_message="Please enter the zone of your TPU cluster.",
)
run_commands = _ask_field(
"Do you have code you wish to run on startup in each pod? [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
if run_commands:
use_command_file = _ask_field(
"Is this code located in a bash script? [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
if use_command_file:
command_file = _ask_field(
"What is the path to your bash script? ",
default=None,
error_message="Please enter the path to your bash script.",
)
command_file = os.path.abspath(command_file)
else:
print("Please enter each command seperately you wish to run on startup in each pod.")
commands = []
another_command = True
while another_command:
commands.append(
_ask_field(
"Please enter a single command to be ran ",
default=None,
error_message="Please enter the commands you wish to run on startup in each pod as a single string.",
)
)
another_command = _ask_field(
"Do you wish to add another command? [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
else:
main_training_function = "main"
if distributed_type in [DistributedType.MULTI_CPU, DistributedType.MULTI_GPU, DistributedType.TPU]:
if distributed_type in [
DistributedType.MULTI_CPU,
DistributedType.MULTI_XPU,
DistributedType.MULTI_GPU,
DistributedType.MULTI_NPU,
DistributedType.TPU,
]:
machine_type = str(distributed_type).split(".")[1].replace("MULTI_", "")
if machine_type == "TPU":
machine_type += " cores"
@@ -452,32 +477,115 @@ def get_cluster_input():
else:
num_processes = 1
if distributed_type in [DistributedType.MULTI_GPU, DistributedType.NO] and not use_cpu:
if (
distributed_type
in [
DistributedType.MULTI_GPU,
DistributedType.MULTI_NPU,
DistributedType.MULTI_XPU,
DistributedType.NO,
]
and not use_cpu
and not use_mps
):
gpu_ids = _ask_field(
"What GPU(s) (by id) should be used for training on this machine as a comma-seperated list? [all]:",
default="all",
)
if distributed_type != DistributedType.TPU:
if distributed_type == DistributedType.TPU:
mixed_precision = "no"
main_training_function = _ask_field(
"What is the name of the function in your script that should be launched in all parallel scripts? [main]: ",
default="main",
)
tpu_use_cluster = _ask_field(
"Are you using a TPU cluster? [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
if tpu_use_cluster:
tpu_name = _ask_field(
"What is the name of your TPU cluster? ",
default=None,
error_message="Please enter the name of your TPU cluster.",
)
tpu_zone = _ask_field(
"What is the zone of your TPU cluster? ",
default=None,
error_message="Please enter the zone of your TPU cluster.",
)
tpu_use_sudo = _ask_field(
"To run a python script in a TPU pod, should `sudo` be used? [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
run_commands = _ask_field(
"Do you have code you wish to run on startup in each pod? [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
if run_commands:
use_command_file = _ask_field(
"Is this code located in a bash script? [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
if use_command_file:
tpu_command_file = _ask_field(
"What is the path to your bash script? ",
default=None,
error_message="Please enter the path to your bash script.",
)
tpu_command_file = os.path.abspath(tpu_command_file)
else:
print("Please enter each command seperately you wish to run on startup in each pod.")
tpu_commands = []
another_command = True
while another_command:
tpu_commands.append(
_ask_field(
"Please enter a single command to be ran ",
default=None,
error_message="Please enter the commands you wish to run on startup in each pod as a single string.",
)
)
another_command = _ask_field(
"Do you wish to add another command? [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
tpu_vm = _ask_field(
"If not using an instance group, what are the names of the Compute VM instances to be used, seperated by a comma: ",
default="",
).split(",")
tpu_env = _ask_field(
"What environment variables do you wish to set in each pod, seperated by a comma: ",
default="",
).split(",")
else:
main_training_function = "main"
if distributed_type == DistributedType.DEEPSPEED and use_deepspeed_config:
mixed_precision = "no"
mixed_precision = None
else:
mixed_precision = _ask_options(
"Do you wish to use FP16 or BF16 (mixed precision)?",
["no", "fp16", "bf16"],
["no", "fp16", "bf16", "fp8"],
_convert_mixed_precision,
)
else:
mixed_precision = "no"
if use_dynamo and mixed_precision == "no" and not use_cpu:
print(
"Torch dynamo used without mixed precision requires TF32 to be efficient. Accelerate will enable it by default when launching your scripts."
)
downcast_bf16 = "no"
if distributed_type == DistributedType.TPU and mixed_precision == "bf16":
downcast_bf16 = _ask_field(
tpu_downcast_bf16 = _ask_field(
"Should `torch.float` be cast as `bfloat16` and `torch.double` remain `float32` on TPUs?", default="no"
)
@@ -487,7 +595,7 @@ def get_cluster_input():
num_processes=num_processes,
gpu_ids=gpu_ids,
mixed_precision=mixed_precision,
downcast_bf16=downcast_bf16,
downcast_bf16=tpu_downcast_bf16,
machine_rank=machine_rank,
num_machines=num_machines,
main_process_ip=main_process_ip,
@@ -496,12 +604,17 @@ def get_cluster_input():
deepspeed_config=deepspeed_config,
fsdp_config=fsdp_config,
megatron_lm_config=megatron_lm_config,
ipex_config=ipex_config,
use_cpu=use_cpu,
rdzv_backend=rdzv_backend,
same_network=same_network,
commands=tpu_commands,
command_file=tpu_command_file,
tpu_env=tpu_env,
tpu_name=tpu_name,
tpu_vm=tpu_vm,
tpu_zone=tpu_zone,
commands=commands,
command_file=command_file,
dynamo_backend=dynamo_backend,
tpu_use_sudo=tpu_use_sudo,
tpu_use_cluster=tpu_use_cluster,
dynamo_config=dynamo_config,
)

View File

@@ -22,7 +22,7 @@ from typing import List, Optional, Union
import yaml
from ...utils import ComputeEnvironment, DistributedType, DynamoBackend, SageMakerDistributedType
from ...utils import ComputeEnvironment, DistributedType, SageMakerDistributedType
from ...utils.constants import SAGEMAKER_PYTHON_VERSION, SAGEMAKER_PYTORCH_VERSION, SAGEMAKER_TRANSFORMERS_VERSION
@@ -41,8 +41,16 @@ else:
def load_config_from_file(config_file):
config_file_exists = config_file is not None and os.path.isfile(config_file)
config_file = config_file if config_file_exists else default_config_file
if config_file is not None:
if not os.path.isfile(config_file):
raise FileNotFoundError(
f"The passed configuration file `{config_file}` does not exist. "
"Please pass an existing file to `accelerate launch`, or use the the default one "
"created through `accelerate config` and run `accelerate launch` "
"without the `--config_file` argument."
)
else:
config_file = default_config_file
with open(config_file, "r", encoding="utf-8") as f:
if config_file.endswith(".json"):
if (
@@ -70,7 +78,6 @@ class BaseConfig:
distributed_type: Union[DistributedType, SageMakerDistributedType]
mixed_precision: str
use_cpu: bool
dynamo_backend: DynamoBackend
def to_dict(self):
result = self.__dict__
@@ -78,6 +85,9 @@ class BaseConfig:
for key, value in result.items():
if isinstance(value, Enum):
result[key] = value.value
if isinstance(value, dict) and not bool(value):
result[key] = None
result = {k: v for k, v in result.items() if v is not None}
return result
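The net effect of the two new lines is that empty sub-configs (an unused `dynamo_config`, for instance) are nulled out and then dropped, so they never clutter the serialized file. A standalone sketch of the same cleanup, with made-up values:

# Standalone sketch of the serialization rule above (illustrative values):
from enum import Enum

class Precision(Enum):  # hypothetical enum standing in for e.g. DistributedType
    FP16 = "fp16"

raw = {"mixed_precision": Precision.FP16, "dynamo_config": {}, "use_cpu": False}
for key, value in raw.items():
    if isinstance(value, Enum):
        raw[key] = value.value
    if isinstance(value, dict) and not bool(value):
        raw[key] = None
cleaned = {k: v for k, v in raw.items() if v is not None}
print(cleaned)  # {'mixed_precision': 'fp16', 'use_cpu': False}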
@classmethod
@@ -88,13 +98,14 @@ class BaseConfig:
if "compute_environment" not in config_dict:
config_dict["compute_environment"] = ComputeEnvironment.LOCAL_MACHINE
if "mixed_precision" not in config_dict:
config_dict["mixed_precision"] = "fp16" if ("fp16" in config_dict and config_dict["fp16"]) else "no"
config_dict["mixed_precision"] = "fp16" if ("fp16" in config_dict and config_dict["fp16"]) else None
if "fp16" in config_dict: # Convert the config to the new format.
del config_dict["fp16"]
if "dynamo_backend" in config_dict: # Convert the config to the new format.
dynamo_backend = config_dict.pop("dynamo_backend")
config_dict["dynamo_config"] = {} if dynamo_backend == "NO" else {"dynamo_backend": dynamo_backend}
if "use_cpu" not in config_dict:
config_dict["use_cpu"] = False
if "dynamo_backend" not in config_dict:
config_dict["dynamo_backend"] = DynamoBackend.NO
return cls(**config_dict)
def to_json_file(self, json_file):
@@ -111,14 +122,16 @@ class BaseConfig:
config_dict["compute_environment"] = ComputeEnvironment.LOCAL_MACHINE
if "mixed_precision" not in config_dict:
config_dict["mixed_precision"] = "fp16" if ("fp16" in config_dict and config_dict["fp16"]) else "no"
config_dict["mixed_precision"] = "fp16" if ("fp16" in config_dict and config_dict["fp16"]) else None
if isinstance(config_dict["mixed_precision"], bool) and not config_dict["mixed_precision"]:
config_dict["mixed_precision"] = "no"
if "fp16" in config_dict: # Convert the config to the new format.
del config_dict["fp16"]
if "dynamo_backend" in config_dict: # Convert the config to the new format.
dynamo_backend = config_dict.pop("dynamo_backend")
config_dict["dynamo_config"] = {} if dynamo_backend == "NO" else {"dynamo_backend": dynamo_backend}
if "use_cpu" not in config_dict:
config_dict["use_cpu"] = False
if "dynamo_backend" not in config_dict:
config_dict["dynamo_backend"] = DynamoBackend.NO
return cls(**config_dict)
def to_yaml_file(self, yaml_file):
@@ -133,8 +146,8 @@ class BaseConfig:
self.distributed_type = SageMakerDistributedType(self.distributed_type)
else:
self.distributed_type = DistributedType(self.distributed_type)
if isinstance(self.dynamo_backend, str):
self.dynamo_backend = DynamoBackend(self.dynamo_backend.upper())
if self.dynamo_config is None:
self.dynamo_config = {}
@dataclass
@@ -155,14 +168,23 @@ class ClusterConfig(BaseConfig):
fsdp_config: dict = None
# args for megatron_lm
megatron_lm_config: dict = None
# args for ipex
ipex_config: dict = None
# args for TPU
downcast_bf16: bool = False
# args for TPU pods
tpu_name: str = None
tpu_zone: str = None
tpu_use_cluster: bool = False
tpu_use_sudo: bool = False
command_file: str = None
commands: List[str] = None
tpu_vm: List[str] = None
tpu_env: List[str] = None
# args for dynamo
dynamo_config: dict = None
def __post_init__(self):
if self.deepspeed_config is None:
@@ -171,6 +193,8 @@ class ClusterConfig(BaseConfig):
self.fsdp_config = {}
if self.megatron_lm_config is None:
self.megatron_lm_config = {}
if self.ipex_config is None:
self.ipex_config = {}
return super().__post_init__()
@@ -178,7 +202,7 @@
class SageMakerConfig(BaseConfig):
ec2_instance_type: str
iam_role_name: str
image_uri: str
image_uri: Optional[str] = None
profile: Optional[str] = None
region: str = "us-east-1"
num_machines: int = 1
@@ -189,3 +213,5 @@ class SageMakerConfig(BaseConfig):
py_version: str = SAGEMAKER_PYTHON_VERSION
sagemaker_inputs_file: str = None
sagemaker_metrics_file: str = None
additional_args: dict = None
dynamo_config: dict = None

View File

@@ -48,7 +48,7 @@ def _ask_field(input_text, convert_value=None, default=None, error_message=None)
if default is not None and len(result) == 0:
return default
return convert_value(result) if convert_value is not None else result
except:
except Exception:
if error_message is not None:
print(error_message)
@@ -66,17 +66,17 @@ def _convert_compute_environment(value):
def _convert_distributed_mode(value):
value = int(value)
return DistributedType(["NO", "MULTI_CPU", "MULTI_GPU", "TPU", "MPS"][value])
return DistributedType(["NO", "MULTI_CPU", "MULTI_XPU", "MULTI_GPU", "MULTI_NPU", "TPU"][value])
def _convert_dynamo_backend(value):
value = int(value)
return DynamoBackend(DYNAMO_BACKENDS[value])
return DynamoBackend(DYNAMO_BACKENDS[value]).value
def _convert_mixed_precision(value):
value = int(value)
return PrecisionType(["no", "fp16", "bf16"][value])
return PrecisionType(["no", "fp16", "bf16", "fp8"][value])
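Since `_convert_dynamo_backend` now returns `.value`, the stored setting is a plain string rather than an enum member, which serializes cleanly to YAML/JSON. Assuming `DYNAMO_BACKENDS` lists the inductor backend at index 2 (as the `default=2` used by the prompts suggests), the converters behave roughly like this:

# Illustrative usage; both helpers live in accelerate.commands.config.config_utils.
from accelerate.commands.config.config_utils import (
    _convert_dynamo_backend,
    _convert_mixed_precision,
)

print(_convert_dynamo_backend(2))   # e.g. "INDUCTOR": a plain string, not DynamoBackend.INDUCTOR
print(_convert_mixed_precision(3))  # the newly added "fp8" choice, as a PrecisionType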
def _convert_sagemaker_distributed_mode(value):

View File

@@ -18,6 +18,7 @@ from pathlib import Path
import torch
from ...utils import is_npu_available, is_xpu_available
from .config_args import ClusterConfig, default_json_config_file
from .config_utils import SubcommandHelpFormatter
@@ -25,7 +26,7 @@ from .config_utils import SubcommandHelpFormatter
description = "Create a default config file for Accelerate with only a few flags set."
def write_basic_config(mixed_precision="no", save_location: str = default_json_config_file, dynamo_backend="no"):
def write_basic_config(mixed_precision="no", save_location: str = default_json_config_file, use_xpu: bool = False):
"""
Creates and saves a basic cluster config to be used on a local machine with potentially multiple GPUs. Will also
set CPU if it is a CPU-only machine.
@@ -37,6 +38,8 @@ def write_basic_config(mixed_precision="no", save_location: str = default_json_c
Optional custom save location. Should be passed to `--config_file` when using `accelerate launch`. Default
location is inside the huggingface cache folder (`~/.cache/huggingface`) but can be overridden by setting
the `HF_HOME` environment variable, followed by `accelerate/default_config.yaml`.
use_xpu (`bool`, *optional*, defaults to `False`):
Whether to use XPU if available.
"""
path = Path(save_location)
path.parent.mkdir(parents=True, exist_ok=True)
@@ -46,12 +49,13 @@ def write_basic_config(mixed_precision="no", save_location: str = default_json_c
)
return False
mixed_precision = mixed_precision.lower()
if mixed_precision not in ["no", "fp16", "bf16"]:
raise ValueError(f"`mixed_precision` should be one of 'no', 'fp16', or 'bf16'. Received {mixed_precision}")
if mixed_precision not in ["no", "fp16", "bf16", "fp8"]:
raise ValueError(
f"`mixed_precision` should be one of 'no', 'fp16', 'bf16', or 'fp8'. Received {mixed_precision}"
)
config = {
"compute_environment": "LOCAL_MACHINE",
"mixed_precision": mixed_precision,
"dynamo_backend": dynamo_backend,
}
if torch.cuda.is_available():
num_gpus = torch.cuda.device_count()
@@ -61,8 +65,24 @@ def write_basic_config(mixed_precision="no", save_location: str = default_json_c
config["distributed_type"] = "MULTI_GPU"
else:
config["distributed_type"] = "NO"
elif is_xpu_available() and use_xpu:
num_xpus = torch.xpu.device_count()
config["num_processes"] = num_xpus
config["use_cpu"] = False
if num_xpus > 1:
config["distributed_type"] = "MULTI_XPU"
else:
config["distributed_type"] = "NO"
elif is_npu_available():
num_npus = torch.npu.device_count()
config["num_processes"] = num_npus
config["use_cpu"] = False
if num_npus > 1:
config["distributed_type"] = "MULTI_NPU"
else:
config["distributed_type"] = "NO"
else:
num_gpus = 0
num_xpus = 0
config["use_cpu"] = True
config["num_processes"] = 1
config["distributed_type"] = "NO"

View File

@@ -16,11 +16,12 @@
import json
import os
from ...utils.constants import SAGEMAKER_PARALLEL_EC2_INSTANCES
from ...utils.dataclasses import ComputeEnvironment, DynamoBackend, SageMakerDistributedType
from ...utils.constants import SAGEMAKER_PARALLEL_EC2_INSTANCES, TORCH_DYNAMO_MODES
from ...utils.dataclasses import ComputeEnvironment, SageMakerDistributedType
from ...utils.imports import is_boto3_available
from .config_args import SageMakerConfig
from .config_utils import (
DYNAMO_BACKENDS,
_ask_field,
_ask_options,
_convert_dynamo_backend,
@@ -170,6 +171,7 @@ def get_sagemaker_input():
["No distributed training", "Data parallelism"],
_convert_sagemaker_distributed_mode,
)
dynamo_config = {}
use_dynamo = _ask_field(
"Do you wish to optimize your script with torch dynamo?[yes/NO]:",
_convert_yes_no_to_bool,
@@ -177,25 +179,39 @@ def get_sagemaker_input():
error_message="Please enter yes or no.",
)
if use_dynamo:
dynamo_backend = _ask_options(
prefix = "dynamo_"
dynamo_config[prefix + "backend"] = _ask_options(
"Which dynamo backend would you like to use?",
[
"eager",
"aot_eager",
"inductor",
"nvfuser",
"aot_nvfuser",
"aot_cudagraphs",
"ofi",
"fx2trt",
"onnxrt",
"ipex",
],
[x.lower() for x in DYNAMO_BACKENDS],
_convert_dynamo_backend,
default=2,
)
else:
dynamo_backend = DynamoBackend.NO
use_custom_options = _ask_field(
"Do you want to customize the defaults sent to torch.compile? [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
if use_custom_options:
dynamo_config[prefix + "mode"] = _ask_options(
"Which mode do you want to use?",
TORCH_DYNAMO_MODES,
lambda x: TORCH_DYNAMO_MODES[int(x)],
default="default",
)
dynamo_config[prefix + "use_fullgraph"] = _ask_field(
"Do you want the fullgraph mode or it is ok to break model into several subgraphs? [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
dynamo_config[prefix + "use_dynamic"] = _ask_field(
"Do you want to enable dynamic shape tracing? [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
ec2_instance_query = "Which EC2 instance type you want to use for your training?"
if distributed_type != SageMakerDistributedType.NO:
ec2_instance_type = _ask_options(
@@ -215,7 +231,7 @@ def get_sagemaker_input():
mixed_precision = _ask_options(
"Do you wish to use FP16 or BF16 (mixed precision)?",
["no", "fp16", "bf16"],
["no", "fp16", "bf16", "fp8"],
_convert_mixed_precision,
)
@@ -229,7 +245,7 @@ def get_sagemaker_input():
compute_environment=ComputeEnvironment.AMAZON_SAGEMAKER,
distributed_type=distributed_type,
use_cpu=False,
dynamo_backend=dynamo_backend,
dynamo_config=dynamo_config,
ec2_instance_type=ec2_instance_type,
profile=aws_profile,
region=aws_region,

View File

@@ -19,11 +19,14 @@ import os
import platform
import numpy as np
import psutil
import torch
from accelerate import __version__ as version
from accelerate.commands.config import default_config_file, load_config_from_file
from ..utils import is_npu_available, is_xpu_available
def env_command_parser(subparsers=None):
if subparsers is not None:
@@ -43,6 +46,8 @@ def env_command_parser(subparsers=None):
def env_command(args):
pt_version = torch.__version__
pt_cuda_available = torch.cuda.is_available()
pt_xpu_available = is_xpu_available()
pt_npu_available = is_npu_available()
accelerate_config = "Not found"
# Get the default from the config file.
@@ -55,7 +60,12 @@ def env_command(args):
"Python version": platform.python_version(),
"Numpy version": np.__version__,
"PyTorch version (GPU?)": f"{pt_version} ({pt_cuda_available})",
"PyTorch XPU available": str(pt_xpu_available),
"PyTorch NPU available": str(pt_npu_available),
"System RAM": f"{psutil.virtual_memory().total / 1024 ** 3:.2f} GB",
}
if pt_cuda_available:
info["GPU type"] = torch.cuda.get_device_name()
print("\nCopy-and-paste the text below in your GitHub issue\n")
print("\n".join([f"- {prop}: {val}" for prop, val in info.items()]))

File diff suppressed because it is too large

View File

@@ -1,5 +1 @@
# flake8: noqa
# There's no way to ignore "F401 '...' imported but unused" warnings in this
# module, but to preserve other warnings. So, don't check this module at all
from .selection_menu import BulletMenu

View File

@@ -33,7 +33,7 @@ class Direction(enum.Enum):
def forceWrite(content, end=""):
sys.stdout.write(content + end)
sys.stdout.write(str(content) + end)
sys.stdout.flush()

Some files were not shown because too many files have changed in this diff