* accelerate/data_loader.py: do not yield if the base_dataloader is empty
in the code:
```
dataloader_iter = self.base_dataloader.__iter__()
# We iterate one batch ahead to check when we are at the end
try:
    current_batch = next(dataloader_iter)
except StopIteration:
    yield
```
If the base dataloader is empty, the exception is raised but `yield`
yields nothing.
Later, at:
```
if self.device is not None:
    current_batch = send_to_device(current_batch, self.device, non_blocking=self._non_blocking)
```
this would lead to an uncaught exception like:
File "/root/rl-swarm/.venv/lib/python3.10/site-packages/accelerate/data_loader.py", line 575, in __iter__
    current_batch = send_to_device(current_batch, self.device, non_blocking=self._non_blocking)
UnboundLocalError: local variable 'current_batch' referenced before assignment,
because `current_batch` was never assigned: `next(dataloader_iter)` raised `StopIteration`.
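A minimal, self-contained sketch of the fixed pattern (an illustrative helper, not the actual `DataLoaderShard.__iter__`): when the base dataloader is empty, the generator simply stops instead of yielding.
```
def iterate_one_ahead(base_dataloader):
    # Look one batch ahead to know when we are at the end, and simply stop
    # (return, not yield) when the base dataloader is empty.
    dataloader_iter = iter(base_dataloader)
    try:
        current_batch = next(dataloader_iter)
    except StopIteration:
        return
    for next_batch in dataloader_iter:
        yield current_batch
        current_batch = next_batch
    yield current_batch
```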
Signed-off-by: 0xnightwind <nightwind1899@gmail.com>
* Update src/accelerate/data_loader.py
---------
Signed-off-by: 0xnightwind <nightwind1899@gmail.com>
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
* feat: use datasets.IterableDataset shard if possible.
When `accelerator.prepare` is called on a
`datasets.IterableDataset`, use the `shard` method to
split the dataset across the available processes. This
allows for more efficient data loading and processing,
without the load-and-slice overhead of `IterableDatasetShard`.
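A minimal usage sketch, assuming `datasets.IterableDataset.shard(num_shards=..., index=...)` is available as described above (the dataset name is illustrative):
```
from accelerate import Accelerator
from datasets import load_dataset

accelerator = Accelerator()
# A streaming dataset is a datasets.IterableDataset (hypothetical dataset name).
dataset = load_dataset("my-org/my-streaming-corpus", split="train", streaming=True)

# One shard per process, instead of wrapping in IterableDatasetShard and
# paying its load-and-slice overhead.
shard = dataset.shard(num_shards=accelerator.num_processes, index=accelerator.process_index)
```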
* dataset
* remove unused import
* style
---------
Co-authored-by: wuwenxu.01 <wuwenxu.01@bytedance.com>
* add support for SwanLabTracker and update related documentation
* add emoji in FRAMEWORK
* apply the style corrections and quality control
* add support for SwanLabTracker in tests
* fix bug in test_tracking
* deepspeed auto grad accum
* add tests for grad accum
* use tiny-random-gpt2
* Update tests/deepspeed/test_deepspeed_gradient_accumulation.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* fix redundant code
* set_gradient_accumulation_boundary is always there
* remove unused helper
* no need for this
* full revert
* Apply style fixes
* get_global_grad_norm is always there
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
* Fix double wrap
* Clocking off, ~equal to torch baseline
* works?
* Working version
* Partial rewrite
* FSDP2 path works
* Fix back prepare
* Almost done, proper AC left
* Feat: should work, cleanup + test more benchmarks left
* Style+quality
* Feat: fp8 example
* Feat: better example
* Feat: add readme
* Docs + should be done
* Fix: typos
* Fix: protect imports
* Feat: address comments
* Feat: add flops image
* use the state mixed precision which has undergone all preprocessing
* Update src/accelerate/accelerator.py
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
* Update src/accelerate/accelerator.py
* accelerator state sets the mixed precision for deepspeed and fp8_enabled
* fix
* fix
---------
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
* Added artifacts and figure tracking at MLFlow tracker
* Added `log_artifact` to the MLFlowTracker
* Remove changes
* Added kwargs when loading state.
* added doc string
* Adjusted correct default types of kwargs
* Changed the load kwargs to a single one
* removed None value from kwargs
* fix kwargs for loading the model
* removed load_kwargs from optimizer state dict
* make load_kwargs a dictionary
* revert last changes
* reverted load_kwargs
* fix docstring
* added dict initiation
* Fix quality error during PR
* add standalone mode and replace ConnectionError with a warning when the main process port is in use, allowing for automatic port selection
* address review feedback: warn on port conflict only for single-node; raise error for multi-node
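A minimal sketch of the kind of availability check involved (illustrative only, not the launcher's actual code):
```
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    # If the bind fails, another process already owns the port; a single-node
    # launch can then warn and fall back to automatic port selection (port 0),
    # while a multi-node launch should raise instead.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
        except OSError:
            return True
    return False
```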
* Apply style fixes
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
* check if num_extrs>0 and test
* test pass
* test passes
* make quality fix
* Apply style fixes
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
* add regional compilation to cli tools and env vars
* added seq parallel to gaudi docs
* explain that lm_head is also compiled separately
* style
* docstring
* style
* Fix the issue where `set_epoch` does not take effect.
* Apply style fixes
---------
Co-authored-by: root <root@hjx-dev-h20-3-0.hjx-dev-h20-3.bcloud.svc.cluster.local>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
* add support for port 0 auto-selection in multi-GPU environments
* address review feedback: [add implementation for DeepSpeed, simplify code logic]
---------
Co-authored-by: biondi <biondi_lee@htx.ht.gov.sg>
* fix fsdp2 wrap policy
* nn.Module doesn't have the dtype attribute
* Revert "nn.Module doesn't have the dtype attribute"
This reverts commit 513c7892876f81ec76ce32bcdce83bfe8556491d.
* Fix dtype handling in fsdp2_prepare_model to accommodate nn.Module without dtype attribute
* fix format problem
* Feat: enable FULL_STATE_DICT in config
* Feat: support FSDP2 FULL_STATE_DICT
* Refactor: remove deprecated save/load_state_dict
* Docs: add FULL_STATE_DICT as supported to docs
* Feat: update tests
* Feat: change Accelerator.get_state_dict() to use new api
* Initial test
* Try on push
* Only wf dispatch now
* keep trying
* Try again
* Try again
* source activate?
* Force bash
* Source activate accelerate to make it get the env properly
* try using nightly docker
* Try this?
* Try this?
* Try this, proper output
* Try this, proper output
* Try via full conda activate(?)
* rm conda
* te fp8 tests
* add ao
* ao in setup too
* actually include fp8 deps
* FP8 docker image, use newer version
* Update docker image to take in input
* Test
* prior month
* igpu?
* Use only last 2 digits of year
* Build rest
* Apply style fixes
---------
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
* Fix: check for tp size when creating accelerator in tests
* Fix: better error handling in TorchTensorParallelPlugin
* Fix: make tp related args optional in tests (cmt by @kmehant)
* add support for custom function for reducing the batch size
* fix scoping
* Apply style fixes
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
xccl distributed backend is available for XPU device backend starting
from torch 2.7 (requires torch built with `USE_XCCL=1 USE_C10D_XCCL=1`).
This change is verified with the following Transformers tests:
* `tests/extended/test_trainer_ext.py`
* `tests/trainer/test_trainer_distributed.py`
This commit does not impact IPEX, which currently continues to use a custom
distributed backend.
Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
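A minimal sketch of picking the backend under the assumption above (torch >= 2.7 built with `USE_XCCL=1 USE_C10D_XCCL=1`); the fallback choice here is illustrative:
```
import torch
import torch.distributed as dist

# Use the native xccl backend on XPU when the build supports it, otherwise
# fall back to gloo. Assumes the usual env:// rendezvous variables
# (MASTER_ADDR, MASTER_PORT, RANK, WORLD_SIZE) are set by the launcher.
backend = "xccl" if torch.xpu.is_available() else "gloo"
dist.init_process_group(backend=backend)
```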
* Update CometMLTracker to allow re-using experiment
Update CometMLTracker to use new `comet_ml.start` function to create
Experiments. This way end-users can create online or offline experiments, append
data to an existing experiment, and it also automatically re-uses a running
experiment if one is present rather than creating a new one.
* Add back calling Experiment.end in finish
As `accelerator.end_training` is supposed to be called at the very end of
training by the user, users will still be able to log data after the main
training loop, and this call is needed for Offline Experiments to create the
offline archive.
* Update CometTracker behavior based on the version of the package
Use new method only for recent version of comet_ml
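A minimal usage sketch, assuming a recent `comet_ml` release that exposes `comet_ml.start()` as described above (the project name and metric are illustrative):
```
import comet_ml

# start() re-uses a running experiment if one exists; otherwise it creates a
# new one (online or offline, depending on configuration).
experiment = comet_ml.start(project_name="my-project")
experiment.log_metric("loss", 0.42, step=1)

# accelerator.end_training() ultimately calls Experiment.end(), which offline
# experiments need in order to write their archive.
experiment.end()
```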
* Feat: initial conversion tool draft
* Feat: add value mapping to conversion tool
* Refactor: move from os to pathlib
* Feat: add first tests
* Feat: more tests
* Feat: minor fixes + dataclass conversions
* Feat: more remapping
* Fix: namespace has no attribute version + style
* Fix: offload params behavior
* Feat: add option to only rename keys in the config file to
* Fix: wrong attr name
* Fix: partially resolve comments
* Feat: work on config command + minor fixes to reflect changes
* Refactor: style + quality
* Feat: fsdp2 initial work
* Feat: some cleanups and first running fsdp2
* Fix: version checks + mixed precision policy
* Refactor: style + quality
* Remove obsolete todos
* Feat: grad norm clipping
* Fix: tests + rename attrs
* Refactor: style + quality
* Fix: None object is not iterable
* Fix: default cpu_offload for fsdp2
* Fix: cpu offload now behaves correctly
* Feat: apply_activation_checkpointing
* Fix: append to models
* Feat: start on concept guide
* wip: concept guide
* Fix: toctree
* cleanup of the concept guide
* Fix: minor fixes + mp
* Fix: quality + | to union
* Feat: backwards compatibility + args cleanup
* Fix: style + quality
* Feat: enable dropping refs when getting named params
* Fix: memory footprint with fsdp2
* Feat: cpu ram efficient loading
* Fix: mp
* Fix: not warn about sync_modules if fsdp version is 1
* Refactor: minor changes
* Small fixes + refactors
* Feat: docs + cleanup
* Feat: saving works (not sure about optim)
* More loading/saving work
* Feat: disable local_state_dict for fsdp2
* Fix: fsdp2 convergence
* Feat: working comparison script
* Feat: memory tracking fsdp2
* Feat: memory visualizer
* Feat: more work on benchmark
* Fix: raise error if model+optimizer aren't prepared together
* Minor fixes
* Style
* More warnings
* Fix: reshard_after_forward vs sharding_strategy conflict
* Refactor: clean up accelerator
* Feat: more testing in fsdp2 benchmark
* Fix: memory visualizer
* Untested: support load/save_state
* Feat: concept guide improvements
* Refactor: concept guide
* Feat: benchmark works
* Feat: more work on fsdp2 benchmark
* Fix: note syntax
* Fix: small fixes + make original tests work
* Fix: grad scaling
* Feat: reshard after forward tests
* Feat: backward prefetch tests
* Feat: tests for fsdp2
* Refactor: minor fixes
* Feat: fsdp_utils docstrings
* Feat: autodoc fsdp.md
* Docs: get_module_children_bottom_up
* Fix: remove unused images
* Refactor: benchmark cleanup
* Fix: docs
* Feat: final doc changes
* Fix: torch.distributed has no attribute tensor
* Fix: style
* Feat: tests include version in failures
* Fix: benchmark force model to load in fp32
* Fix: rename runs
* Feat: last minor fixes
* Feat: new benchmark images
* Fix AMD GPU support with should_reduce_batch_size()
Even though torch has NVIDIA and AMD GPUs operating under the cuda namespace, the out-of-memory error for AMD GPUs is different. When trying to determine if a model can fit on an AMD GPU, this function will evaluate to false for a `torch.OutOfMemoryError`. This PR adds another check for the error string.
Example error message:
```
'HIP out of memory. Tried to allocate 64.00 GiB. GPU 0 has a total capacity of 63.98 GiB of which 48.63 GiB is free. Of the allocated memory 15.02 GiB is allocated by PyTorch, and 129.49 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_HIP_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)'
```
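A minimal sketch of the added string check (illustrative, not the exact `should_reduce_batch_size` implementation):
```
def looks_like_oom(exception: BaseException) -> bool:
    # Match the message rather than the exception type alone, so both
    # "CUDA out of memory" and "HIP out of memory" (AMD/ROCm) are caught.
    return isinstance(exception, RuntimeError) and "out of memory" in str(exception)
```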
* Missing comma
* Update memory.py
Consolidate OOM error check string
* feat: Add no_ssh multinode launcher option for deepspeed
* fix: Add CLI hints and brief documentation, add slurm launcher, and ensure that deepspeed 0.14.5 version is used for nossh
* Added artifacts and figure tracking at MLFlow tracker
* Added `log_artifact` to the MLFlowTracker
* Remove changes
* Added artifacts and figure tracking at MLFlow tracker
* Improved the docstring
* added require_mlflow function at test_utils
* add test for MLflowTracker
* Bit of linting
* Refactor to a more robust test
* Revised the test asserts to something more robust.
* Removed incorrect import and some linting.
* removed commented code
* initiate tracker using Accelerator
* Added mlflow and matplotlib to setup.py. Guarded and decorated the functions that required them.
* Guarded mlflow import
* added matplotlib required warning.
* ran style and quality
* init
* style
* is_hpu_available
* fix
* import habana_frameworks.torch.distributed.hccl
* style
* test
* initialize dist proc group
* revert
* set backend to hccl only if hccl initialization sets a local rank
* force backend hccl and multi_hpu type when sure of distributed launch
* style
* pass accelerator tests
* pass big modeling tests with bigger atol/rtol for accelerators
* fix hpu device count and skip tests requiring hpu:x
* hpu autocast
* hpu rng_state
* hpu launch
* hpu special device placement
* hpu launch
* rng state
* distributed data loop tests
* enforce non contiguity after device memory allocation
* pass fsdp tests
* enforce pt_hpu_lazy_mode=0 when fsdp testing
* pass cli tests
* pass and document grad sync tests
* pass kwargs handler and autocast tests
* memory utils
* found source of int64 errors
* skip some modeling utils tests
* enable int64
* skip optimizer tests
* pass checkpointing tests
* pass accelerator tests with safetensors main
* more hpu stuff
* style
* remove PT_HPU_LAZY_MODE and PT_ENABLE_INT64_SUPPORT as they should be in the testing environment
* start testing on gaudi2
* support fp16 on gaudi2
* add testing order
* custom hpu fsdp env dict
* fix torch trace malloc
* test ddp half precision comm hooks
* fix
* fix
* remove lower bound for hpu
* use 0.72 as lower bound
* lower lower bound
* order deepspeed tests
* fix
* deepspeed_use_hpu
* assert non lazy mode with offloaded optimizer
* make patching torch with habana frameworks the default
* less of require_non_hpu
* skip test_multi_device_merge_fsdp_weights for now as it halts
* skip another flaky test
* format
* use habana_visible_modules
* patch torch hpu device count
* avoid setting HABANA_VISIBLE_MODULES
* don't play with habana visible devices/modules
* only with hpu
* fixes and skips
* skip
* fix device ids and add some todos
* skip offloading with generate()
* fix
* reduced atol/rtol for hpu
* fix
* tag deepspeed tests that should run first
* enable a test path that was skipped
* revert a test that was customized for gaudi1
* some patching to enable HABANA_VISIBLE_MODULES
* fix zero3 test
* misc
* test DTensor TP
* remove gaudi1
* test
* style
* comment
* pass pad_across_processes
* require_fp16
* pass memory utils test
* test_ddp_comm_hook
* skip half precision comm hooks on hpu
* fix
* is_fp16_available
* fp16
* tp as part of integration tests
* fix
* write_basic_config
* safetensors
* local sgd and masked_fill_fwd_i64
* fix num_processes in test_load_states_by_steps
* fp8 support
* test
* fix
* add a workflow
* Update src/accelerate/accelerator.py
* review comments
* ci
* style
* comments
* test
* habana_frameworks.torch
* patch device count
* fix
* fix
* require_fp8
* fix
* fix
* gaudi 1
* remove unnecessary
* fixed masked fill error in transformers
* style
* balanced_memory pass on hpu
* remove for now
* run first
* Apply suggestions from code review
* style after merge
* Update src/accelerate/accelerator.py
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
* Update src/accelerate/utils/transformer_engine.py
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
* empty cache review comments
* test_script.py error messages
* AccelerateTestCase for accelerator state cleanup
* test
* add gaudi1 workflow
* fp8 availability
* fix
* reduce batch size
* concurrency
* check cuda as well
* nits and comments
* mark fsdp tests that require_fp16
* style
* mark deepspeed fp16 tests
* update image
* fix
* updated
* better msgs
* skip pippy
* test
* test on 2 device
* support up to 1% relative error in test_accelerate
* skip hpu fp16
* allow for 1 byte difference
* revert torch_device change
* style
* skip memory release since it's flaky
* add accelerator state cleanup to fixture
* fix
* atol
* fix
* more rtol
* equal grad test
* revert
* pass pippy on gaudi2 and skip on gaudi1
* enable sd 1.5 test with require fp16
* added warning on memory release
* don't log warning in memory release as it requires PartialState to be initialized
* Apply suggestions from code review
---------
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
* Bookmark
* bookmark
* Add torchao base example
* Currently broken
* Clean
* DDP variant working
* FSDP as well
* Works for all but zero3
* Bookmark: currently zero3 is underperforming
* Bookmark
* Another diff
* Fin
* Fin
* Add req huggingface suite
* update tests for fp8/torchao/ddp
* Log FP8 backend used and adjust typing
* add documentation for convert_to_float8_training
* Rename to convert_model_to_fp8_ao
* Call super init
* Add types
* Clean
* Use filter_first_and_last_linear_layers
* Update usage guide docs
* Actually loop through the zero stages
* Clean
* Replace GradientState -> DataLoader reference with weakrefs
So they can be cleaned up. Otherwise, they will always stay in memory, leading to notable memory leaks. Note: even accelerator.free_memory() did not work!
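A minimal sketch of the weak-reference pattern (illustrative attribute and class names, not the exact `GradientState` code):
```
import weakref

class GradientStateSketch:
    def __init__(self):
        # Hold weak references so dataloaders can be garbage collected once the
        # user drops them; a strong list would keep them alive forever.
        self._dataloader_refs = []

    def register(self, dataloader):
        self._dataloader_refs.append(weakref.ref(dataloader))

    @property
    def active_dataloaders(self):
        # Dereference and skip any entries whose target has been collected.
        return [ref() for ref in self._dataloader_refs if ref() is not None]
```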
* Add comments; initialize _dataloader_references_ref directly instead of indirectly
This fixes tests/test_data_loader.py::StatefulDataLoaderTester tests which
started to fail after 828aae4:
```
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_dataloader_dispatcher_state_dict_num_workers_0 - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_dataloader_dispatcher_state_dict_num_workers_2 - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_dataloader_inheritance - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_dataloader_state_dict_num_workers_0 - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_dataloader_state_dict_num_workers_2 - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_decoupled_stateful_dataloader_adapter_equivalent_to_torchdata_stateful_dataloader_num_workers_0 - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_decoupled_stateful_dataloader_adapter_equivalent_to_torchdata_stateful_dataloader_num_workers_2 - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_end_of_dataloader - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_end_of_dataloader_dispatcher - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_skip_data_loader - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_stateful_dataloader_adapter_equivalent_to_torchdata_stateful_dataloader_num_workers_0 - KeyError: 'in_order'
FAILED tests/test_data_loader.py::StatefulDataLoaderTester::test_stateful_dataloader_adapter_equivalent_to_torchdata_stateful_dataloader_num_workers_2 - KeyError: 'in_order'
```
The reason for the failure is that "in_order" is added only if the data loader
is created with `prepare_data_loader` or `skip_first_batches()`. The tests in
`tests/test_data_loader.py::StatefulDataLoaderTester`, however, create data
loaders directly as classes, so "in_order" was never added; hence the issue.
Fixes: 828aae4 ("add torchdata version check to avoid in_order error (#3344)")
Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
* Add cross-entropy example in the gradient accumulation docs
* add example of logs
* correct skeleton code
* replace gather_for_metrics with gather
* batch_size -> per_device_batch_size
* remove main_process_only=True
* add autoregressive example in examples/
* Update docs/source/usage_guides/gradient_accumulation.md
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
* ruff format
* add grad accum test
* update docs
* Update examples/by_feature/gradient_accumulation_for_autoregressive_models.py
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
* update tests
---------
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
* Select the DeepSpeedCPUOptimizer based on the original optimizer class.
* abstract out optimizer selection to a deepspeed util
* add deepspeed cpu Adam & AdamW
* [WIP] FEAT Decorator to purge accelerate env vars
In some circumstances, calling certain classes or functions can result
in accelerate env vars being set and not being cleaned up afterwards. As
an example, when calling:
TrainingArguments(fp16=True, ...)
The following env var will be set:
ACCELERATE_MIXED_PRECISION=fp16
This can affect subsequent code, since the env var takes precedence over
TrainingArguments(fp16=False). This is especially relevant for unit
testing, where we want to avoid individual tests having side
effects on one another. Decorate the unit test function or whole class
with this decorator to ensure that after each test, the env vars are
cleaned up. This works for both unittest.TestCase and normal
classes (pytest); it also works when decorating the parent class.
In its current state, this PR adds the new decorator and tests it, but
the decorator is not yet applied to potentially problematic functions or
classes.
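A minimal sketch of such a decorator, assuming the goal is simply to snapshot and restore `ACCELERATE_*` env vars around each test (names here are illustrative, not the decorator added by this PR):
```
import functools
import os

def purge_accelerate_env_vars(test_func):
    # Snapshot ACCELERATE_* vars before the test, then remove anything the test
    # added and restore the originals afterwards.
    @functools.wraps(test_func)
    def wrapper(*args, **kwargs):
        saved = {k: v for k, v in os.environ.items() if k.startswith("ACCELERATE_")}
        try:
            return test_func(*args, **kwargs)
        finally:
            for key in [k for k in os.environ if k.startswith("ACCELERATE_")]:
                del os.environ[key]
            os.environ.update(saved)
    return wrapper
```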
* Linter
* Refactor code to be more readable
---------
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
* feat: feat: Add warning for unassigned main devices
* refactor: Improve warning for unassigned main devices
* feat: impl fallback_allocate; fix output format
* fix: include last dot index in the iteration
* feat: incorporate fallback allocation into infer_auto_device_map
* Revert "feat: incorporate fallback allocation into infer_auto_device_map"
This reverts commit d607bfb530517478b90aa89c2a87a03c318a2e58.
* refactor: add helper functions and eliminate redundant variables
The fallback allocation will be reintroduced once the branching logic is fully refactored. This commit prepares the function infer_auto_device_map for further refactoring.
* refactor: simplify allocation logic by removing duplicates and reducing nesting
* feat: incorporate fallback allocation into infer_auto_device_map
Implemented fallback allocation to allow modules to be allocated to devices using BFS when regular allocation fails. This enhancement improves the allocation process by ensuring that at least one module is assigned to the device, even under tight memory constraints.
* fix: fix module splitting logic
* styles: fix styling errors
* test: add test coverage for no-warning cases
test_infer_auto_device_map and test_infer_auto_device_map_with_fallback_allocation now each have a no-warning test case.
Simplified and rewrote code sections that were made unreadable by the linter.
* refactor: simplify control flow in infer_auto_device_map
Added complete return type hinting for _init_infer_auto_device_map
* refactor: replace warnings.warn with logger.info for allocation failures
* fix: use assertLogs to capture no allocation warning messages correctly
* take care of case when "_tied_weights_keys" is not an attribute
Signed-off-by: Yu Chin Fabian Lim <flim@sg.ibm.com>
* fix style
Signed-off-by: Yu Chin Fabian Lim <flim@sg.ibm.com>
---------
Signed-off-by: Yu Chin Fabian Lim <flim@sg.ibm.com>
* Add Cambricon MLU accelerator support
* up mlu support for test
* fix mlu device MULTI_MLU
* Update src/accelerate/utils/imports.py
it's beautiful !
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
* up mlu for quality check
* fix mlu device longTensor error
* fix mlu device tensor dtype check
* fix mlu device send_to_device with torch dynamo error
* Refactor AcceleratorState
* Should be near complete now
* Last missing piece
* Make my way to the acceleratorstate
* Include update to global var
* Don't use global
* gpu -> cuda
* Don't use update for dict, easier to read
* Fix tests
* stash
* Getting closer...
* Needed to spawn at the very end after env was setup
* Explain set_device before deepspeed
* Make docstring more accurate
* Early return instead
* Delineate blocks
* Make prepare_backend return state + backend for clarity/less magic
* fix mlu longtensor.to() bugs.
* fix MLU devices rng state save and load.
* Cambricon MLU features: checks if `mlu` is available via a `cndev`-based check which won't trigger the drivers and leaves mlu uninitialized.
* MLU devices: checks if mlu is available via a cndev-based check which won't trigger the drivers and leave mlu
* fix code style and quality
* fix is_cuda_available error
---------
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
* rebase
* Update torch v
* Rename
* Prop to docs
* Actually reverse states
* Rebase fully
* Restore old state
* Keep as load()
* No need for explicit anymore
* Check numpy version, dtypes was added in 1.25
* Clean up diff
* Fix hang
* skeleton code
* fix some errors for downloading the model
* fix some tqdm error
* fix some error
* fix some gpu errors with torch
* fix some gpu errors with torch
* testing simple way
* testing simple way
* testing simple way
* testing simple way
* actual code
* actual code
* final testing with serialization
* add multi_gpu speech generation
* fix some comments
* fix some style and quality
Continuation of #3102.
The equivalent PR in
PEFT (https://github.com/huggingface/peft/pull/2064) successfully restored
the stale bot's function for PRs as well. Hence also making the same
change for accelerate.
* Bookmark
* Migratory
* Uncomment
* Rm name to model for now
* Rm container
* Left: test
* Allow only wrapping one model
* Add warning but only ref once
* Refine
* Update src/accelerate/accelerator.py
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
* Finish stas nits
* Clean
* Fixup test + test writing
* Fully working
* Fin
* Nit
* Quality
* Update src/accelerate/accelerator.py
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
* Actionable error
* Make note of when its enabled
* Apply suggestions from code review
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
* Merge tests
* Merge
* Add currently broken test script
* Push the working implementation
* Fin
* Add guards for user behavior
* Test nits
* TODO: finish knowledge distillation example
* Update tests/deepspeed/test_deepspeed_multiple_model.py
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
* Allow for dict-like interface
* Get rid of disable
* Uncomment
* Complete rewrite to force a dict to be used
* Working tests/fin
* Use name as stas suggestion
* Clean
* docnit
* toctree
* toctree
* Missing ref
* Put in break
* Smaller diff
* Make note on how to use zeroinit
* Make note about accelerator ds plugin
* More docnits
* Apply suggestions from code review
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
* Limit users to not pass in another ds plugin to another accelerator
* not implemented err + Make a note about why no params
* Apply suggestions from code review from Stas
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
* Add deepspeed_plugins arg + update doc
* Plugin -> plugins
* Change enable() -> select()
* Update ref properly + test
* Be consistent, model1,model2...
* first_, second_
* A few more auto values
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Apply suggestions from code review
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
---------
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
See https://github.com/huggingface/peft/pull/2061 in PEFT.
This restores the functionality of the stale bot after permissions for
the token have been limited. The action still shows errors for PEFT but
the bot appears to work fine.
* MNT Upgrade ruff to 0.6.4
Currently used version, 0.2.1, is quite old at this point.
Not a lot needed to be changed:
- Change ruff version in setup.py
- Remove deprecated ignore-init-module-imports option for ruff
- Type comparison should use is and not ==
- Use f-string instead of % formatting
- Some line wrapping and empty lines
* Oops
* initial fix for breaking accelerator pickling
* cleanup
* skip_first_batches should be used on raw dls
* multigpu sanity test
* bugs
* does this work with iterable dsets?
* fix typo
* ignore these commits, i'm just syncing the origin so i can test on my cloud workstation
* comment out failing tests, unsure if those are existing bugs or a recent regression
* torch 2.4.0?
* pickling generator issues
* test_pickle_accelerator
* test_pickle_accelerator should work now
* base.__len__() -> len(base)
* undo reduce
* undo super().__reduce__() again
* pass args through superclass
* remove prints
* doc changes + make style && make quality
* rm warning
* Take 3
* Take 4
* Annotate
* Take 6
* Updated
* Spec
* Last fix
* Don't pad input
* Finished
* Continue refactor
* Rm comment
* Adjust the err
* Start adjustment
* GPT2 works, T5 does not
* llama too now I think
* Flag the t5 example
* v1
* More testing, need to try on H100
* Bigger batch for h100 test
* test tweak
* Fixup all tests!
* Bookmark
* Fix issues, working now
* rm num samples
* Uncomment
* Give stateful dl end of dl
* Make skip DL stateful
* Migrate to update_state_dict
* try/finally
* Add comments to test
* rm comment
* Document
* refactor out for eventual override
* Doc nit
* Brute force it
* Working version rebased from main
* kwargs
* Clean
* Fix more nits
* Fin
* Delay autocast flag
* Enable FP8 autocast during eval only if specified
* Fin
* Rm comment
* All done
* Zero3 works!
* Let the wrapper come off during unwrap_model
* Add import check
* Migrate all to benchmarks folder and make TE import check work
* Add readme
* Add README to benchmarks folder
* Update CLI to now include fp8 args
* Add test config for 0_34
* Finish adding to config yaml
* Write docs
* Expound docs w/ FP8
* Add to toctree
* Bookmark
* Tests pass!
* Fix imports
* Try with raw dict
* Make diff easier
* Add defaults to all relevant areas
* Rest of refactor
* Fix all of benjamin's nits
* Adjust logic based on Benjamin's feedback
* Adjust for new logic
* Add Cambricon MLU accelerator support
* up mlu support for test
* fix mlu device MULTI_MLU
* Update src/accelerate/utils/imports.py
it's beautiful !
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
* up mlu for quality check
* fix mlu device longTensor error
* fix mlu device tensor dtype check
* fix mlu device send_to_device with torch dynamo error
* Refactor AcceleratorState
* Should be near complete now
* Last missing piece
* Make my way to the acceleratorstate
* Include update to global var
* Don't use global
* gpu -> cuda
* Don't use update for dict, easier to read
* Fix tests
* stash
* Getting closer...
* Needed to spawn at the very end after env was setup
* Explain set_device before deepspeed
* Make docstring more accurate
* Early return instead
* Delineate blocks
* Make prepare_backend return state + backend for clarity/less magic
* fix mlu longtensor.to() bugs.
* fix MLU devices rng state save and load.
---------
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
* skip test due to torchvision issue
* Revert "skip test due to torchvision issue"
This reverts commit b12b6b4ffafea6ec6c65b9721a30b8a54bf7af1e.
* change min version
* test upgrade
* exact version
* update
* add back
* Enabled correct loading of models with shared tensors when using accelerator.load_state()
* removed unused import
* added a test for a model with shared weights
* removed unnecessary bits
* fixed linting errors
* Add Cambricon MLU accelerator support
* up mlu support for test
* fix mlu device MULTI_MLU
* Update src/accelerate/utils/imports.py
it's beautiful !
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
* up mlu for quality check
* fix mlu device longTensor error
* fix mlu device tensor dtype check
* fix mlu device send_to_device with torch dynamo error
* Refactor AcceleratorState
* Should be near complete now
* Last missing piece
* Make my way to the acceleratorstate
* Include update to global var
* Don't use global
* gpu -> cuda
* Don't use update for dict, easier to read
* Fix tests
* stash
* Getting closer...
* Needed to spawn at the very end after env was setup
* Explain set_device before deepspeed
* Make docstring more accurate
* Early return instead
* Delineate blocks
* Make prepare_backend return state + backend for clarity/less magic
* fix mlu longtensor.to() bugs.
---------
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
* Add ddp comm hook
* Fix dataclass order
* Merge ddp grad hook to ddp kwargs handler
* Reset ddp kwargs key
* Add test
* Fix test case
* Split ddp grad test
* Fix test case
* Ehance docstring
* Minor
* Use naive baseenum for ddp comm hook type
* Add by feature example
* Add multi device deco
* Add user guide
* Update examples/by_feature/ddp_comm_hook.py
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
* Update examples/by_feature/ddp_comm_hook.py
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
* Add wrapper and state option details
* Update toctree
* Update docs/source/usage_guides/ddp_comm_hook.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/usage_guides/ddp_comm_hook.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/usage_guides/ddp_comm_hook.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/usage_guides/ddp_comm_hook.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/usage_guides/ddp_comm_hook.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/usage_guides/ddp_comm_hook.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/usage_guides/ddp_comm_hook.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Mv ddp comm hook index
* Fix ddp comm hook user guid
* Del empty line
---------
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* fix wrong use of sync_gradients to implement sync_each_batch as pointed out by @Nightmare-n
Signed-off-by: Yu Chin Fabian Lim <flim@sg.ibm.com>
* fix test
---------
Signed-off-by: Yu Chin Fabian Lim <flim@sg.ibm.com>
* Initial commit
* Now to test
* Store false
* Slight tweaks
* Fix naming
* Got it all working with tests
* Use not for safetensors arg
* rm change
* Add docs
* Adjust based on Marc's feedback
* Specify just weights
* Update tests to include CLI and swap namings
* Fin
* Rm unused
* Rm again
* fix stacklevel in logging to log info about the actual user callsite
* Add two tests for stacklevel in logging
---------
Co-authored-by: luowyang <luowyang@github.com>
* Add --log-dir/--log_dir to `distributed_args` to allow redirecting std
streams into log files when using torchrun as the launcher. Used with
--tee this will achieve a similar effect to running with `torchrun --tee X
--log-dir=logs`.
* Deleted the unnecessary "--log-dir" argument following the suggestion from
@muellerzr, since it will be automatically generated from "--log_dir".
* address part of Stas' comments
* automatically set sync_module_states if low_cpu_mem is set
* Apply suggestions from @stas00
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
* add links from fsdp and deepspeed docs. fix deepspeed imports
* replace raise in accelerate.launch
---------
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
* Change dataloader send_to_device calls to non-blocking
* add non_blocking to dataloader dataclass
* add dataloader non blocking option from dataclass
* add handling for non blocking to accelerator
* add notes on non-blocking transfers to quicktour
* link to dataloaderconfiguration in docs
* linting
* "requires" -> "recommended" on non-blocking setting
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
---------
Co-authored-by: drhead <a@a.a>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
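A minimal usage sketch, assuming the option is exposed through `DataLoaderConfiguration` as the commits above describe:
```
from accelerate import Accelerator
from accelerate.utils import DataLoaderConfiguration

# non_blocking=True lets host-to-device copies overlap with compute; pinned
# memory is recommended (not required) for this to pay off.
dataloader_config = DataLoaderConfiguration(non_blocking=True)
accelerator = Accelerator(dataloader_config=dataloader_config)
```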
* Basic autocasting stuff
* Delay fp8 autocast until after DDP wrapping
* More fixes
* Bookmark: without dtype change
* Bookmark: with dtype changes
* Different alternative, better results
* Didn't matter what order, same result
* Revert + maintain
* Fin
* Refactor based on feedback
* native_amp bool
* Final nits
This test checks the change in the memory size occupied by model loading when low_cpu_mem_usage is used.
Therefore, the default device used is cpu. However, when checking whether other devices are available,
new packages get imported, causing memory changes and interfering with the test results.
Signed-off-by: yuanwu <yuan.wu@intel.com>
* Refactor AcceleratorState
* Should be near complete now
* Last missing piece
* Make my way to the acceleratorstate
* Include update to global var
* Don't use global
* gpu -> cuda
* Don't use update for dict, easier to read
* Fix tests
* stash
* Getting closer...
* Needed to spawn at the very end after env was setup
* Explain set_device before deepspeed
* Make docstring more accurate
* Early return instead
* Delineate blocks
* Make prepare_backend return state + backend for clarity/less magic
* Check if it's None and then return
* Use a dataclass
* Forgot one
* Clean
* Style
* Docstring fix?
* Fix deepspeed
* Move slighly
* Final fix
* Fix state for deepspeed
* rm comment
* Resolve ZeRO-3 Initialization Failure in Pre-Set Torch Distributed Environments (huggingface/transformers#28803)
* add unit test for deepspeed zero3 integration
* update test case then keep it accelerate spec
* changed notebook_launcher to not ignore num_processes parameter on colab
* clarified documentation on notebook_launcher (that config file is ignored by notebook_launcher)
* simplified logic in launcher to retain prev elif, imported get_gpu_info from environment
* run quality and style fixes
---------
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
* Expound docstring
* Reword
* Weird spacing
* Move example
* Move to solve formatting issues
* Link to the spec class
* Take 3
* Copy kwargs format to others
* Take 4...
* Special thingy
* Guard stateful objects
* Add test
* Add a test
* MOre tests
* Update AcceleratorState
* Decision: early return
* Test accelerator as well
* use right assert check
* Use getattr
* Update data_loader.py
* fix reformatting bug
* add unit test
* add Accelerator initialization in unit test
* move unit test of seedable sampler to test_script.py
* reformatted
* Improve .deepspeed_env generation
Co-authored-by: Rick Lamers <ricklamers@gmail.com>
* Leave for a latter date
---------
Co-authored-by: Rick Lamers <ricklamers@gmail.com>
* Beta test, could break!
* Cleanup and get rid of unneded files
* Work on integration
* Add numa affinity to config
* Add to config command
* Fix some of Stas' notes
* Use raw os to make things easier
* Update questionnaire
* Use CPU_AFFINITY instead
* Change doc
* Update test
* Fix numa, I submit
* include ref to original
* Fix
---------
Co-authored-by: zach.mueller@huggingface.co <muellerzr@ip-26-0-160-100.ec2.internal>
* Move to using tags
* Add readme
* Include hf repo description in auto-build
* Test
* Even with an a...
* Rm readme things
* Symlink README for docker repo
* Include readme
* Fin
* Try now?
* Finally got symlink working
* Let's try this
* Forgot runs-on
* Still perm issues, revert
* split_between_processes for Dataset
* Update state.py
* remove param datasets.Dataset from split_between_processes, add note to function doc
* is_datasets_available is a function not a var
* reformat to make ruff happy
* isinstance(inputs, Dataset) only if is_datasets_available()
* add test_split_between_processes_dataset
* split_between_processes for Dataset: pad if apply_padding
* removed trailing whitespace
* complete test_split_between_processes_dataset
* fix test_split_between_processes_dataset for single GPU
* fix replication
* Set generator on each thread. The test passed.
* remove comments
* fix up
* fix format
* fix comment
* not setting the dataloader.batch_sampler
* Test uv
* Workflow dispatch
* Modify
* Setuptools...apparently?
* No need for -y
* Rm cache
* Rm workflow dispatch
* Trainer tests
* Might need to be -e
* Try keeping it at absolute home
* Undo integration
* add force flag in _do_sync class method and add sync_each_batch in GradientAccumulationPlugin
* modify test_sync to consider sync_each_batch. fix old tests involving optimizer
* run style checker
* minor refactoring based on @muellerzr's comments.
* update docs: gradient_synchronization.md
* Apply @muellerzr's documentation suggestions.
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
* Apply suggestions from @BenjaminBossan
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
---------
Co-authored-by: Yu Chin Fabian Lim <flim@sg.ibm.com>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
Error messages should read `--main_process_port`, not `----main_process_port`. Users who copy and paste the message as it was will get this error message:
```
Accelerate CLI tool: error: unrecognized arguments: ----main_process_port
```
* Check if the buffers fit GPU memory after device map auto inferred
* Some models, like TheBloke/WizardCoder-33B-V1.1-GPTQ, contain a
huge buffer, which may cause OOM on GPU memory if not using
offload_buffers. This commit adds a check for such a case.
* Minor refactors.
* Add missing assertions
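A minimal sketch of the kind of estimate involved (illustrative; the real check lives in the device-map inference utilities):
```
import torch

def total_buffer_bytes(model: torch.nn.Module) -> int:
    # Buffers (e.g. large caches) stay on the execution device unless
    # offload_buffers=True, so they must fit in GPU memory alongside the weights.
    return sum(buf.numel() * buf.element_size() for buf in model.buffers())
```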
Now, the behavior of the wrapped optimizer is that the gradient is cleared by default when `set_to_none=None`. This aligns with `torch.optim.Optimizer` and saves memory.
* New approach
* New version, good
* Complete rewrite, and works for testing
* More nits
* Simplify option_string filtering
* More suggestions from codereview
* Add test
* Fix broken tests
* Update accelerate config and launch to abstract out mpirun
* Fix var
* Documentation updates, updating the launch script to work with other MPI programs, and fixing the nlp example when using IPEX
* Style fixes
* Add a test
* Style fixes
* Formatting fix
* Updates based on review feedback.
* Remove model.train()
* Doc update
* Update doc regarding the accelerate config with the old method of mpirun and accelerate
* Fix typo in comment
* Quality and test updates
* Updates based on review feedback
* Quality fix
* Fix mock patch path
* Updates based on review feedback
* Quality fixes
* Don't manage PYTORCH_NVML_BASED_CUDA_CHECK
PYTORCH_NVML_BASED_CUDA_CHECK will use an NVML-based check when
determining how many devices are available. That's useful for preventing
CUDA initialization when doing that check (or calling
`torch.cuda.is_available()`). Instead of manipulating that env var, one
can call the torch utility `_device_count_nvml` directly preventing the
manipulation of the env var.
* Uses env var instead of private torch function
* Fixes flake8 check
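A minimal sketch of the env-var approach the follow-up commit settled on (the variable name comes from PyTorch; the surrounding code is illustrative):
```
import os

import torch

# Ask PyTorch to use an NVML-based query when counting devices, so the check
# itself does not initialize CUDA. Set it before the first device-count call.
os.environ["PYTORCH_NVML_BASED_CUDA_CHECK"] = "1"
print(torch.cuda.device_count())
```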
* Let's try it out
* Let's try this out
* Some more cases
* String
* Require hub online for estimator
* Add CI checker to alert on hub status
* Format
* Oops death by ctrl z
* Fix import
* Fix tests
* Fixup tests
* Fix test
* Actually cast to string!
* Fixup deepspeed
* fsdp and deepspeed fix
* Since we're doing this, may as well get it all
* Stragglers
* Split only if we require config_file
* Make list
* Only convert if it's a path
* type
* Other func
* rm parenth
* Ban use of `os.*env`
* Fix `clear_environment` to actually clear environment variables
Assigning to `os.environ` does not clear the environment (Ruff B003)
* Have environment context managers restore state even if the block raises
* Add tests for environment CMs
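A minimal sketch of the intended behavior (illustrative, not the exact `clear_environment` implementation):
```
import contextlib
import os

@contextlib.contextmanager
def clear_environment_sketch():
    # os.environ.clear() actually unsets variables; plain reassignment of
    # os.environ would not (Ruff B003).
    saved = dict(os.environ)
    os.environ.clear()
    try:
        yield
    finally:
        # Restore the previous state even if the block raised.
        os.environ.clear()
        os.environ.update(saved)
```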
* DOC Fixes to Accelerator docstring
- Add more links to accelerator classes where applicable
- Fix a typo: KwargHandler => KwargsHandler
* Fix syntax issues
Not sure how to add a link of the type is `list[SomeType]`, so just
removed it for now.
* Fixing link for KwargsHandler
* Add KwargsHandler to API docs
* Also add doc entry to kwargs.md
* Fix the pytest version to be less than 8.0.0
We're getting errors such as:
> /opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/transformers/testing_utils.py:129: in <module>
> from _pytest.doctest import (
> E ImportError: cannot import name 'import_path' from '_pytest.doctest' (/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/_pytest/doctest.py)
* Update setup.py
Co-authored-by: fxmarty <9808326+fxmarty@users.noreply.github.com>
---------
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
Co-authored-by: fxmarty <9808326+fxmarty@users.noreply.github.com>
* Deprecate and introduce dataloader_config
* Update docs
* Doc nits
* More tests, adjust based on PR review
* Fixup tests
* Nits
* Update docs/source/quicktour.md
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
* Clean
* Actually create one
* Forgot to change one
* Use pytest
---------
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
* Make torch xla available on GPU
* format code
* fix documentation build error
* update according to the comments
* Replace DistributedType.TPU with DistributedType.XLA
* make all ut pass
* format code
* update comments
* skip test
* format code
* skip FSDPPluginIntegration for torchxla
* bring back custom_sampler_check
* fix ut
* format code
* format code
---------
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
* Prefer `is_torch_tensor` over `hasattr` for `torch.compile`.
`torch.compile` breaks when using `hasattr` but succeeds when using `isinstance(obj, torch.Tensor)`. This commit short-circuits the `hasattr` call for `torch.Tensor`s if possible.
Note: `is_npu_available` is also not torch.compile compatible due to (1) lru_cache and (2) importlib checks, so I've moved it into the try block, catching the AssertionError instead.
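A minimal sketch of the short-circuit (illustrative helper name, not the library's `is_torch_tensor` itself):
```
import torch

def can_move_to_device(obj) -> bool:
    # Check isinstance first: it is torch.compile-friendly, while hasattr()
    # on arbitrary objects can break tracing.
    return isinstance(obj, torch.Tensor) or hasattr(obj, "to")
```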
* Fix torch.device("npu").
This is not available in non-npu pytorch. Note that
torch.device automatically assigns an index when created as torch.device("npu"), so overwriting device with `"npu:0"` is only required if device is a string "npu".
* Remove unittest.main execution.
* Fix style broken by merge save.
* Import operations functions directly.
* fix style
* Fix imports attempt 2.
* Re-raise error if no NPU available.
* Make output end up on the cpu at the end
* Rework a bit
* Remove the CPU part
* Update to include a new util to copy tensors across devices
* Update test
* Update doc
* Update docstring
* Make False by default and change if community feedback says yes
* Apply suggestions from code review
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
* Update default to False in doc and make a tip
* Update typing
* Defaults
* Explain
---------
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
* Broken version
* Timing I would expect
* Working version!
* Use MethodType
* working test
* Tests
* Use no split module classes explicitly
* Put split_points in pipelien
* Store split points in hf_split_points
* fix case num_process=1
* Allow for dynamic batch padding (#2352)
* Allow for dynamic batch padding
* Fix test
* Update src/accelerate/inference.py
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
* Break early after the first valid bs is found
* Less slicy-dicy
* Test cv model
* Start, need to test
* Use dataloader-like logic
* Refactor to utils
* With tests
* Update the source
* Clean
* bs=1 case
* Add test
* add some failing test
* Almost working version
* Much cleaner implementation
* Use pad_input_tensor
* All tests passing!
* Do it at tracing too
---------
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
Co-authored-by: Marc Sun <marc@huggingface.co>
* Rm literal
* Allow users to pass in max_memory
* Note about recursion
* Document, document, document
* Right import check
* Fix bug, add tests to multigpu runners
* Change default to None
* Start of docs
* Try again?
* Try again x2
* Trailing comma
* Move import
* Clean
* typehint
* typo
* From code review
* Use num_chunks
* Update tests/test_utils.py
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
* Bad copy/paste
* hf_split_points
---------
Co-authored-by: Marc Sun <marc@huggingface.co>
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
* According to the code in set_module_tensor_to_device, uint, int and bool types
won't be converted, so let's keep their original size, or the module size will be
under-estimated.
It will complain "Device xpu is not recognized, available devices are integers (for GPU/XPU),
'mps', 'cpu' and 'disk'", but you cannot just put 0 as the device, or it will treat 0 as a CUDA device and then complain
again that torch is not compiled with CUDA enabled.
You will need safetensors >= 0.4.2 if using safetensors files.
* Add adapter_only option to save_fsdp_model and load_fsdp_model
* Gate with adapter_only
* Black format
* Change unwrapping behavior
* Use extract_model_from_parallel for model unwrapping
* Fix quality
* Move functions to utils files
* Fix quality
* Fix breakpoint API in test_script.py on TPU.
* only call set_trigger on the main process
* The test passed.
* add a comment
* Call mark_step after all_reduce to make torch_xla run the collective op like the torch.distributed below, rather than waiting until the tensor is referenced again to run the pending operations.
* Redo with new version
* Store
* Working version
* Separate for now
* Min diff
* check if available
* Better docstring
* Check for multiple models and optimizers
* Check for TE and MSAMP args separately
* String clarity
* Better docstring and types
* Quality
* Simplify a bunch for fp8
* Convert literals to type alias
* Better err
* Docs
* toc typo
* Apply suggestions from code review
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
* Apply suggestions from code review
Co-authored-by: Maria Khalusova <kafooster@gmail.com>
* Address doc nits
---------
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
Co-authored-by: Maria Khalusova <kafooster@gmail.com>
* support `log_images` for aim tracker
* fix the potential kwargs issue for aim tracker's `log_images`
* remove ambiguous import statement
* use `aim` directly to avoid potential conflict
* Add npu support to big model inference
* make style
* add warning when using npu
* fix typo
* replace `.to(<num>)` with `.to("npu:<num>")` when using `torch_npu`
* empty_cache
* fix
* remove the redundant code post the torch 2.1 release
* make `use_orig_params=True` by default.
* fix `save_state` optimizer saving for fsdp and update the fsdp example
* quality
* fixing the utils and tests. Updating the docs
* bump up the minimum version for FSDP support.
* address comment
* rename fsdp model checkpointing variables
* Try merge tests
* Fix
* Checkout branch
* Fix pip install
* rebase
* Colons
* right one
* use master
* Rm
* Add needs
* Better clean
* always
* Forgot other
* test on AWS
* update all labels
* fix multi-gpu working directory
* limit to 2 GPU
* force run on kube
* move build docker image to new ci
* test build on CPU instance
* move build docker image release to new ci
* move scheduled slow tests to new ci
* move integration test to new ci
* Comments
* Right CPU tags
* Right machines
* PR comments
* Fix issues
* Some trailers
---------
Co-authored-by: Guillaume LEGENDRE <glegendre01@gmail.com>
* Try merge tests
* Fix
* Checkout branch
* Fix pip install
* rebase
* Colons
* right one
* use master
* Rm
* Add needs
* Better clean
* always
* Forgot other
* test on AWS
* update all labels
* fix multi-gpu working directory
* limit to 2 GPU
* force run on kube
* move build docker image to new ci
* test build on CPU instance
* move build docker image release to new ci
* move scheduled slow tests to new ci
* move integration test to new ci
* Comments
* Right CPU tags
* Right machines
* PR comments
---------
Co-authored-by: Guillaume LEGENDRE <glegendre01@gmail.com>
* check port availability only in main deepspeed launcher
* check port availability only in main launcher for deepspeed/torchrun
* Update launch.py
add comments
---------
Co-authored-by: 聂靖入 <niejingru@bytedance.com>
* first take at troubleshooting guide
* logging moved to the troubleshooting guide
* TOC updates and guide edits
* minor edits
* moved to tutorials
* feedback addressed
* batch size clarifications
* typo
* kernel, early stopping hanging, feedback
* Make safetensors default
* Rm location
* Actually flip flags
* Tests + update checkpointing
* Add to setup
* Start of tests with both safetensors and without
* Update tests to use both
* Remove from load state
* Explicit tip
* With suggestions
* Simplify, don't abstract. Need to bring back to deepspeed however
* Refactor to use consts
* Keep how it was
* Typo fix
* add clearml tracker
* fix style in tracking.py
* run ruff --fix
* run ruff fix on src/accelerate/utils/__init__.py as well
* properly run make style
* add tests
* modify code based on code review
* changes based on code review
* quote data_frame
* fix docs
* remove pandas req in log_table
* style changes
* add tracker to docs
* Warn when kernel version is too low on Linux
See #1929
On Linux with kernel version < 5.5, issues with hanging processes have
been reported. It is not clear how to fix the issue, so instead we warn
the user that they may encounter problems.
Notes
As logging requires an initialized PartialState, the actual check
happens at the end of Accelerator.__init__.
In a similar vein, the docstring of get_logger has been adjusted to
first initialize the Accelerator, as it is not working as currently
shown.
* Reviewer comment: small change to docstring
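A minimal sketch of the kind of check performed (illustrative; the real warning is emitted at the end of `Accelerator.__init__`):
```
import platform

def linux_kernel_too_old(minimum=(5, 5)) -> bool:
    # Hanging-process issues have been reported on Linux kernels older than 5.5.
    if platform.system() != "Linux":
        return False
    release = platform.release()  # e.g. "5.4.0-150-generic"
    major, minor = (int(part) for part in release.split(".")[:2])
    return (major, minor) < minimum
```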
* bookmark
* Works!
* Working!
* Fully working now
* Cover dataset
* Needed for dispatch
* Check both
* Bring back pop, fix hang
* Fully working
* Change back to epoch
* Adjust for new methods
* Clean
* Fix tests
* Avoid circular import
* Clean
* Fix test
* Comment
* Add a comment
* Comment
* Use yield from instead
* all_gather_into_tensor
* Cleanup
* Reduce memory on non-gloo
* Fin
* Check for backend too on cpu
* CPU comment
* Change scope for performance
* Bring back zeros after remembering why
* Add comment
* Add comment
* Use empty
* Comment
* Support shared storage, start
* Pass use_local_node_storage
* Reverse and different namings
* Not global only
* Addres comments
* Clean
* Apply suggestions from code review
Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
* Save on each node as explicit arg
* More explicit
---------
Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
* initial commit for adding multinode training doc
* removed stray changes
* fix formatting issue and switch to bulleted list
* Update docs/source/basic_tutorials/launch.md
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
* Update docs/source/basic_tutorials/launch.md
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
* added link to new blog post
---------
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
* Remove checkpoints only on main process
shutil.rmtree might throw errors if called from multiple processes. Make the call only on the main process.
* Apply style
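A minimal sketch of the guard (the checkpoint path is illustrative):
```
import shutil

from accelerate import Accelerator

accelerator = Accelerator()
checkpoint_dir = "checkpoints/step_100"  # hypothetical path

# Only one process should delete the directory; otherwise the others can race
# and shutil.rmtree may raise (e.g. FileNotFoundError).
if accelerator.is_main_process:
    shutil.rmtree(checkpoint_dir, ignore_errors=True)
accelerator.wait_for_everyone()
```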
* early stopping
* Fix tests
* Works on multi-gpu, uncomment
* Rm reset
* Check for >=1
* equal
* Trigger
* Fix test
* Update docs/source/concept_guides/deferring_execution.md
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
* Explicit example loop
* Set to zero, not None
* rename test
* Check again to ensure it's been reset
---------
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
* add bf16 mixed precision support for NPU
* Explicitly register the NPU backend to PyTorch via `import torch_npu`
---------
Co-authored-by: statelesshz <jihuazhong1@huawei.com>
* [feat] implementing gather_for_metrics for objects
* [lint] make style result
* [docs] improve fn docs gather for metrics
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
* [docs] update args description gather for metrics
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
* [refactor] gather for metrics for non tensor obj
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
* [fix] renaming tensor to data (was not defined and it is not just a tensor)
* [fix] else state
* [test] gather for metrics with non tensor objects
* [lint] make style result
* Update src/accelerate/accelerator.py
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
* Update src/accelerate/accelerator.py
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
* [test] removing useless assertion
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
* [test] add running on main
* [lint] style autoformat
---------
Co-authored-by: Lorenzobattistela <lorenzobattistela@gmail.com>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
Resolves #1832
This fixes a bug in patch_environment where pre-existing environment
variables that were temporarily modified by patch_environment were
deleted completely once the context finished. Now, the env vars are
restored to their previous values.
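A hedged sketch of the intended behaviour, using a hypothetical `patched_environment` helper rather than the library's actual implementation:
```python
import os
from contextlib import contextmanager

@contextmanager
def patched_environment(**kwargs):
    # Remember whether each variable existed before, and its old value if it did.
    previous = {key.upper(): os.environ.get(key.upper()) for key in kwargs}
    os.environ.update({key.upper(): str(value) for key, value in kwargs.items()})
    try:
        yield
    finally:
        for key, old_value in previous.items():
            if old_value is None:
                os.environ.pop(key, None)    # did not exist before: remove it again
            else:
                os.environ[key] = old_value  # existed before: restore the previous value

with patched_environment(master_port="29501"):
    pass  # MASTER_PORT is set inside the context
# After the context, MASTER_PORT is back to its previous value (or unset).
```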
* Introduce new arguments: master_addr, node_rank, and num_nodes.
Relocate these arguments to the end of the notebook_launcher
function for compatibility.
* Set defaults for NPROC and NODE_RANK environment variables in the
PrepareForLaunch function to ensure compatibility.
* Thoroughly document the process and usage guidelines for multi-node launching (a usage sketch follows below).
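A hedged usage sketch; the address, process counts, and the body of `training_function` are placeholders:
```python
from accelerate import notebook_launcher

def training_function():
    print("training")

# Hypothetical two-node launch; run the same cell on each machine with its own node_rank.
notebook_launcher(
    training_function,
    num_processes=8,
    num_nodes=2,
    node_rank=0,             # 0 on the first machine, 1 on the second
    master_addr="10.0.0.1",  # reachable address of the rank-0 node
)
```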
* reduce gradient first for XLA when unscaling the gradients in mixed
precision training with AMP.
* Apply suggestions from code review
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
* update accelerator.reduce and accelerate.utils.operations.reduce
---------
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
* support for deepspeed optimizer and custom scheduler
* don't throw the error
* Add tests
* fix the tests
* fix the code quality
* Update tests/deepspeed/test_deepspeed.py
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
* fix the docstrings
---------
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
* New technique
* needs
* explicit all
* Volume prune not going
* Skip volume
* versions
* Avoid checkout perhaps?
* Working dir
* Don't include dot-slash?
* Accelerate prefix?
* Working directory?
* Context?
* other workingdir
* Faster iteration
* Right tag
* Full
* Release
* GPU
* With driver
* Remove deps
* No bitsandbytes
* Try with raw push
* We can keep old docker images
* Also include release
* Skorch uses master
* Right tag
* Estimator
* Right err
* Fixup tests
* trust remote code
* Print output for debugging purposes
* trust_remote_code
* Address some comments
* change doc to req arg
* Properly check for _no_split_modules in transformer models
* Note on transformer models
* Check/handle petabytes
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
* Tests are passing locally again, better handle for no_split
* Adjust setup?
* Let's see if the cleaner version works
* Refactor and clean up for testing
* Specify in comments
* Better error handling
* A million tests later
* More tests + err handling
* Require hub
* More with remote code
* Clean up
* Add a test for no_split
* Apply suggestions from code review
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Docstring
* Address some comments
* rm einops
* Let it err out
* Adjust errs
* Tests
* Reduce test repeats
* Clean up borders
* Tip on 20%
---------
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* support for ram efficient loading of model with FSDP
* with default behaviour of efficient loading when using FSDP, `sync_module_states` needs to be `True`
* fixes
* Update accelerator.py
* Update dataclasses.py
* Move into check-device
* Use proper solution and write test
* Move test
* Avoid circular import
* Remove patchenv altogether
* New version
* Better way, run a verification test
* Final working version
* Debug mode
* doc
* Just debug
* Doc
* print
Using accelerator.unwrap_model(model, keep_fp32_wrapper=False) results
in a defective forward method. This bug was (probably) introduced in
PR #872.
Wrapping the method in MethodType (as elsewhere in code) resolves the
issue.
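An illustrative sketch of the MethodType technique (not the PR's exact code):
```python
from types import MethodType

import torch

class TinyModel(torch.nn.Module):
    def forward(self, x):
        return x * 2

model = TinyModel()
plain_function = TinyModel.forward                 # an unbound function, e.g. a previously unwrapped forward
model.forward = MethodType(plain_function, model)  # rebind it so `self` is passed correctly again
print(model(torch.ones(2)))                        # tensor([2., 2.])
```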
* Start on testing behavior
* Add test to capture current behavior
* Cleanup test; add length to DummyIterableDataset
* Remove wip test from test_dataloader.py
* Only check on remainder state if we're at the end of a dataloader
* Cleanup
* Fix style
* Move test to test_metrics
* Remove 2 num_process assertion so that we test on single-GPU as well,
why not
* Use `isinstance()` instead of `type()` in test_metrics
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
---------
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
* First version
* As decorator
* Better err
* Limit
* Partial state
* More work
* Tests + config
* Debug mode
* Flag
* Rm references to debug mode, debug
* Tests
* Docs
* Nit
* Disable debug in config
* Support dict
* if the model is already an FSDP instance, remove the warning and prep overhead
* allow usage of `_no_split_modules` to simplify UX when using FSDP
* Update other.py
* fixes
* fix KwargsHandler.to_kwargs not working with os.environ initialization in __post_init__
* fix test_torch_dynamo_plugin such that it wouldn't change os.environ permanently
* move clear_os_environ func to utils/other and rename it
* reformat code in order to pass ci quality check
* modify the comment of utils.other.clear_environment
* Get rid of calling get_scale() by patching the step method of optimizer.
* Fix when step() is already patched by other parties.
* support pickle
* Minor updates.
* Change _accelerate_num_step_called to _accelerate_step_called
---------
Co-authored-by: YU Xinyuan <yuxinyuan02@corp.netease.com>
* Fixes for issue #1683: failed to run accelerate config in colab
Fixes for issue #1683: failed to run accelerate config in colab
* Fixes for issue #1683: failed to run accelerate config in colab, change input2 to a formal variable name
change input2 to a formal variable name
* Fixes for issue #1683: failed to run accelerate config in colab
removed unnecessary spaces
* Fix for #1683 failed to run accelerate config in colab
fixed reformatting issue, during the quality check
* Fixes for issue #1683: failed to run accelerate config in colab
refactor the code, passed black, ruff, doc-builder test; modified the prompt in colab.
* Fixes for issue #1683: failed to run accelerate config in colab
fixed black, ruff, doc-builder, modified prompt during choice input
* Fixes for issue #1683: failed to run accelerate config in colab
use utils.imports _is_package_available() method instead, to be consistent with the rest of the library code.
* Fixes for issue #1683: failed to run accelerate config in colab
add default choice, wrap up import check with try catch, passed quality check, style check and test cases
* Should fix multinode test
* For testing, remove after
* try this
* Try disabling
* Try again
* move more
* Fix multinode tests
* New check
* Fix err
* Fix test
* fix bug in is_xpu_available
* fix device configure bug for DDP with ccl backend
* enable accelerate launch for DistributedType.MULTI_XPU
* fix the bug in wait_for_everyone for xpu
* fix the bug in rng_sync_check for xpu
* refactoring code according to muellerzr's suggestion
* define RegressionModel4XPU for xpu to avoid ccl bug
* make MULTI_XPU independent on env var 'CCL_WORKER_COUNT'
Before this commit, this documentation suggested that model parameters
are updated when `accelerator.backward()` is called (which in turn calls
`loss.backward()`). This isn't the case - parameter updates happen when
`optimizer.step()` is called.
This commit:
1. Updates this documentation to reflect this within the discussion of
gradient accumulation.
2. Adds calls to `optimizer.step()` as that's key to gradient
accumulation.
3. Adds `optimizer.zero_grad()` for consistency with `accelerator.accumulate()`'s docs.
4. Does some related word-smithing.
To make sure I was thinking about gradient accumulation correctly, I'm
using `huggingface/transformer`'s performance guide for a working
definition of gradient accumulation, which this diff is consistent with:
> The idea behind gradient accumulation is to instead of calculating the
gradients for the whole batch at once to do it in smaller steps. The way
we do that is to calculate the gradients iteratively in smaller batches
by doing a forward and backward pass through the model and accumulating
the gradients in the process. *When enough gradients are accumulated we
run the model’s optimization step*. This way we can easily increase the
overall batch size to numbers that would never fit into the GPU’s
memory. In turn, however, the added forward and backward passes can slow
down the training a bit.
(https://huggingface.co/docs/transformers/perf_train_gpu_one#gradient-accumulation)
Another huggingface example of gradient accumulation that is consistent
with this change: [run_glue_no_trainer.py][0]
[0]: https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue_no_trainer.py#L518-L532
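A hedged end-to-end sketch of the accumulation pattern described above; the toy model, random data, and `gradient_accumulation_steps=4` are assumptions:
```python
import torch
from torch.utils.data import DataLoader, TensorDataset

from accelerate import Accelerator

accelerator = Accelerator(gradient_accumulation_steps=4)
model = torch.nn.Linear(8, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
dataloader = DataLoader(TensorDataset(torch.randn(16, 8), torch.randn(16, 1)), batch_size=2)

model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for inputs, targets in dataloader:
    with accelerator.accumulate(model):
        loss = torch.nn.functional.mse_loss(model(inputs), targets)
        accelerator.backward(loss)  # only accumulates gradients
        optimizer.step()            # parameters change only when the optimizer actually steps
        optimizer.zero_grad()
```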
* Add step reset to free memory
* Check if not Accelerated Optimizer
* Continue
* Another try
* Check the rest
* Try with just check on init
* Change logic based on review
* Update
* Oops very big logic issue!
* Update deferring_execution.mdx
* [documentation] grammar fixes in gradient_synchronization.mdx
These changes are grammatical and do not affect the ideas communicated in the file.
* should set the correct dtype for ipex optimize and use the native_amp logic for AMP in prepare_model
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* remove mix precision set in ipex, directly use it from accelerate state
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* raise import error if ipex is not valid in prepare ipex
* Update src/accelerate/accelerator.py
Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
---------
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
* Try with this
* Remove import to be late
* Apply padding properly for tensors
* Pad across tensors
* Check to see if this works
* Use -1
* Properly send the first item as what's to be padded
* Update docstring
* Add tests
* Fix test
* Update typehints and docstrings
* Adds `in_order` argument that defaults to `False`, to log in order.
It really helps with readability. Defaults to `False` so as not to break backwards compatibility.
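A hedged usage sketch of the new argument:
```python
from accelerate import Accelerator
from accelerate.logging import get_logger

accelerator = Accelerator()
logger = get_logger(__name__)

# Log from every process, one rank at a time, so the lines do not interleave.
logger.info("hello from this rank", main_process_only=False, in_order=True)
```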
* fixed formatting
* Update src/accelerate/logging.py
Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
* Fixed quality & suggestions
---------
Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
* Update log_reports to send to slack
* REVERT this change, just for testing!
* Add slack_sdk dep
* Second one
* Try now?
* Remove len
* Need secret
* Try with new version
* Right boldface
* Fix import
* New format, use tabulate
* Add tabulate to yml
* Quality
* Purposefully fail
* Working updater, now to test
* Int
* Print payload
* Append
* Change maxcolwidth
* Offset
* More offset
* Context
* No max width
* gh format
* max-col-width'
* Reduce max
* Non-working tables
* Rm md report
* Try now
* Try with just count
* Use table
* New version
* Use table
* Try with thread
* Should be working now
* Clean
* Fixup test reports fully
* Revert workflow
* Keep tabulate in workflow ci
* Update other workflows
* Use blocks for better formatting
* ONe more test
* Works as expected
* Intel GPU support initialization
* rng state for xpu ,accel backend
* add xpu variable and clean code
* checkpointing, hooks, colls & megatronlm porting
* fix runtime errors
* test utils and xpu runtime checks
* fix unknown import in constant
* Resolve amp and cuda/xpu tensor placement
* add ipex for state and hooks
* add mingxiao's ipex changes and source code rebase changes
* add ipex binding in cluster
* resolve megatron lm issues and modelling memory
* indent fix and syntax
* versioning and sanity checks
* use kwargs and add upstream
* revert megatron lm xpu changes
* cleanups and test npr
* fix merge conflict
* fix merge conflict
* Fix merge conflict
* review commits
* make style, ruff code styling
* hf doc builder code style
* Review commits and code style
* remove xpu plugin and use only ipex by default if cpu/xpu present
* review commits and fix tests on state
* fix test in state
* add xpu condition in optimizer and code style/testing
* fix test add warn for ipex
* fix test
* fix test
* fix test and condition
* fix amp test prod,cli ,core
* fix minimum torch tests
* refine accelerator and modelling for tests
* refine modeling and merge
* Fix slow cuda tests
* doc and retrigger test
* Fix `XLA_USE_BF16` when not using mixed precision
* Fix RNG sync during data loading
* Fix hanging during checkpointing
* Remove extra _mp_fn
* Use all_gather to implement _tpu_gather
* Use collective_broadcast for torch RNG state
* Formatting and comments.
* Fix formatting with `make style`
* add image logging
* add table logging
* add artifact logging capabilities
* fix black
* remove log_images on base class
* fix docstring
* quality
* remove the artifact code
* add main proc decorator
* add main process to log_images in tensorboard
* quality
---------
Co-authored-by: Thomas Capelle <thomas.capelle@steady-sun.com>
* Fix nested context manager for main_process_first()
* Fix test for main_process_first()
* Improve test for main_process_first()
* Fix formatting
* Fix test with single process
Previously devices() was a list containing duplicate entries. This
changes it into a set.
This significantly speeds safetensors loading when the device map is
long, as the safetensors loop loads each weight entry for each device
entry.
Co-authored-by: John Doe <john.doe@example.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
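An illustrative sketch of the change, with a hypothetical device map:
```python
device_map = {"embed": 0, "block.0": 0, "block.1": 1, "lm_head": "cpu"}

# A set keeps each target device once, so the safetensors loop runs per device,
# not once per weight entry.
devices = set(device_map.values())
print(devices)  # e.g. {0, 1, 'cpu'}
```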
The `load_checkpoint_and_dispatch` method has `device_map: Optional[Union[str, Dict[str, Union[int, str, torch.device]]]] = None,`
But if you pass `device_map=None` you get an error:
```
accelerate/big_modeling.py", line 477, in load_checkpoint_and_dispatch
if offload_state_dict is None and "disk" in device_map.values():
AttributeError: 'NoneType' object has no attribute 'values'
```
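A hedged sketch of the fix idea, wrapped in a hypothetical helper rather than the library's actual code:
```python
from typing import Dict, Optional, Union

def resolve_offload_state_dict(
    offload_state_dict: Optional[bool],
    device_map: Optional[Dict[str, Union[int, str]]],
) -> bool:
    # Treat device_map=None as "no disk offload" instead of calling .values() on None.
    if offload_state_dict is None:
        offload_state_dict = device_map is not None and "disk" in device_map.values()
    return offload_state_dict

print(resolve_offload_state_dict(None, None))                 # False
print(resolve_offload_state_dict(None, {"lm_head": "disk"}))  # True
```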
When running on a single GPU, the `batch_sampler` of `DataLoaderShared` is a `torch.utils.data.sampler.BatchSampler` object instead of a `DataSamplerShared` object, which does not contain the necessary attributes to calculate `total_batch_size`.
* add quantization support for `dispatch_model`
* fix multi-gpu
* more checks
* fix bias issue
* Update src/accelerate/utils/modeling.py
Co-authored-by: Andrei Panferov <andrei@BlackSamorez.ru>
* make style
* add tests
* left some todos
---------
Co-authored-by: Andrei Panferov <andrei@BlackSamorez.ru>
* Attempt to fix importing invalid `torch.distributed.ReduceOp` when torch is built without distributed support.
* Style.
* Move `torch.distributed` logic detection to `imports.py` according to @muellerzr comments
* Style.
* Update wording
* Remove raising exceptions in the case of a non-distributed setup, simply dont import the ReduceOp in this case.
* Draft of FP8 support
* Missing import
* Fix names
* Conversion is inplace
* Enable fp8 in examples
* Customization point for Recipe
* Auto-enable FP8 depending on compute capability
* Fix typo
* Put back mixed precision arg
* Add debug script
* Add more tests in debug
* Add more stuff to debug
* Don't forget train
* Put the train in the right place
* Add options for selective conversion
* Fix typo
* Properly recurse
* Add more debug utils
* Typo and init
* Last choice
* More fixes
* More options in example
* Remove debug scripts
* Clean up debug and new names
* Add torch.no_grad for conversion
* Optimizer is disconnected from model?
* Re-attach model parameters to optimizer
* Fix extract
* Style
* Cleanup post-rebase
* Deal with padding
* fix examples
* Update src/accelerate/accelerator.py
Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
* Address comments
---------
Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
The current implementation loads custom states to GPUs, leading to OOM. I add `map_location="cpu"` to the `torch.load` function, which is similar to the strategy in `load_accelerator_state`.
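A minimal sketch of the change; the checkpoint path is hypothetical:
```python
import torch

# Load the custom state onto CPU first so large objects do not land on the GPU and trigger OOM.
custom_state = torch.load("checkpoints/custom_checkpoint_0.pkl", map_location="cpu")
```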
* Refactor implementation to use PartialState and adjust deprecation tests
* Utilize multi-process in Accelerator
* Use state
* Lazy PartialState
* Name, plus keep on_main_process for accelerator
* Handle if the tracker was made on main-process-only properly
* Missing variable names, oops
Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
* Clean
* Logs
* Main process
* Clean
---------
Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
* Try again
* Try off multi-gpu
* This is a test
* Finished now
* PartialState
* Update logger to use new API
* backend
* Working tests
* Working again!
* Raise err instead
* Better error
* Update src/accelerate/state.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
---------
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Add cpu offload with hook
* Style
* add to init
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Add documentation
* Add tests
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Deepspeed param check
On line 146, in set_module_tensor_to_device(), adding a check for DeepSpeed parameters in the kwargs object and not passing them along solved the error I was receiving about the DeepSpeed parameters not being recognized by torch.nn.Parameter.__new__(). With my admittedly limited knowledge, it seemed to me that the kwargs are not necessary to pass when using DeepSpeed + Accelerate, and this bears out, since the model loaded fine with ZeRO-3 CPU parameter and buffer offload on a single-GPU machine, and produced perfectly comprehensible inference outputs (slowly) using the GPU.
The error, in my case, was occurring here as called from the accelerator's dispatch_model().
Please let me know if my thinking on this is in any way wrong! This fix worked for me.
`transformers` version: 4.26.0
- Platform: Linux-5.15.83.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes and no (zero-3 on single machine)
* 146-150 check for Int8 arguments
146-150 check for Int8 arguments. If found, send the args as well as the value.
* Used make style on branch
* Used make style with correct versions of black and flake8 on branch
* Start of examples
* Missing >
* Fix docstring nit
* Add comment on main_process_first
* Make comment on randomness
* first
* Backprop issues with examples into here
* Efficiently skip batches in a dataloader
* Add method in Accelerator and example
* Apply suggestions from code review
Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
* Rename point of access
* Add point of access to init
* Add tests
* Don't forget to include fixes silly!
* Adapt examples
* Fix quality
* Forgot one
* fix method name
* Fix DataLoaderShard reinstantation
* Fix for epoch checkpointing
---------
Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
* Add in code exploration tool to docs
* Update index to hotlink over to the explore
* With 100%
* Just do 750 for now
* Safe height
* Let's try with this
* Comment out original
* Revert
* Add in a note on the docs and remove a secondary code snippet
* Use 1550 for now so it fully fits
* 1600*
* Working save limit
* Centralize to project_dir
* Update docs
* Fix up tests
* Maintain old version, should fix tests
* Revert logging behavior
* Fix failing test
* Automatic checkpoint naming flag
* Logging -> Logger
* Fix naming
* Remove args and make a SaveConfiguration
* logger -> logging
* save_configuration to save_config
* Good to go now, just need to update docs
* Update all the docs
* Deprecate logging_dir param
* ProjectConfiguration
* Project_config
* Fix test
* Finish renaming
* Docfix
* Clean
* Update docs/source/usage_guides/tracking.mdx
Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
* failure occurs when testing FP16
* autocast fails to work for CPU bf16 on some GPU+CPU platforms;
there is no need to use the is_bf16_available logic, because native_amp already contains such logic,
and num_cpu_threads_per_process is not reset, for better performance in the CPU-only case
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* add an option to silence subprocess.CalledProcessError when running accelerate launch
* for black
* for real this time
* Add suggestion
Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
* Update cli.mdx
Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
* Add test for join context manager
* Add join_uneven_inputs context manager
* Format
* add conditional import for join
* Replace bare yield with nullcontext
* Update accelerator to maintain references to dataloaders
* add override option to join context manager
* format
* Add minimal docstring
* updates based on initial feedback
* remove launcher used for local testing from test script
* fix quality issues
* DEBUG: try resetting accelerator state to fix test
* Revert "DEBUG: try resetting accelerator state to fix test"
This reverts commit a13a56ea8e084cad72317cd451a176a2d3fa5dff.
* Reset state after accelerator tests
* Update src/accelerate/accelerator.py
Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
* Warn if at least one iterable dataset seen
* remove launcher used for local test running
Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
* Deepspeed example should use gather_for_metrics
I believe this example should be using gather_for_metrics here instead of gather.
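A hedged sketch of the evaluation loop this refers to; `model`, `eval_dataloader`, `metric`, and `accelerator` are assumed to exist as in the example script:
```python
import torch

model.eval()
for batch in eval_dataloader:
    with torch.no_grad():
        outputs = model(**batch)
    predictions = outputs.logits.argmax(dim=-1)
    # gather_for_metrics also drops the samples duplicated to even out the last batch
    predictions, references = accelerator.gather_for_metrics((predictions, batch["labels"]))
    metric.add_batch(predictions=predictions, references=references)
```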
* Update deepspeed_with_config_support.py
* Working CLI questionnaire
* Forgot space
* Finish the rest
* Rename and make all funcs/options public
* Include Brian Chao in copyright
* Working number inputs
* Fix num
* Linebreak to ease viewing
* Finish sagemaker
* Clean
* Fix mixed precision
* adding support to return logits and generate for Megatron-LM GPT models
* addressing issue
* fix 🐛
* fixing many 🐛 and adding documentation
* remove warning
* address comments
* add docs and utilities for megatron-lm gpt generate and logits
* Add in ability to configure pod and start CLI commands
* Further tests, add a help
* Added tests and cleaned up!
* Fix weird missing parts
* More tests + install accelerate with flag
* Unused pod_config_file
* Test with multiple commands
* Update src/accelerate/commands/config/cluster.py
Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
* Clarity during printing
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Make public names for readability
* Fix test expected outputs and refactor response
* Fix ref errors
Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Allow BatchSamplerShard to not even out batches
* Update src/accelerate/data_loader.py
Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
* Add early error
Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
* Move io_same_device hook to before attach_align_device hook on cpu_offload and disk_offload.
That way we keep the changes to the forward method for the whole module without deleting the hook we want to keep: the one with the execution device and the configuration for how to move tensors between devices.
* add append flag to add hook to enable usage of sequential hooks (see the sketch after this change list)
* add tests to append hooks
* add docstring to append flag
* address review comments
* move io_same_device hook to top on cpu_offload and disk_offload
* trigger ci
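An illustrative sketch of the `append` flag on `add_hook_to_module`; the printing hook is a made-up example:
```python
import torch

from accelerate.hooks import ModelHook, add_hook_to_module

class NamedPrintHook(ModelHook):
    def __init__(self, name):
        self.name = name

    def pre_forward(self, module, *args, **kwargs):
        print(f"{self.name}: pre-forward on {module.__class__.__name__}")
        return args, kwargs

linear = torch.nn.Linear(4, 4)
add_hook_to_module(linear, NamedPrintHook("first"))
# With append=True the second hook is chained after the first instead of replacing it.
add_hook_to_module(linear, NamedPrintHook("second"), append=True)
linear(torch.randn(1, 4))
```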
* updating docs to use fork of megatron-lm and minor example fix
* Update megatron_lm_gpt_pretraining.py
* minor example fixes to have logs in sync with config and args
* Update megatron_lm_gpt_pretraining.py
* Megatron-LM integration
* add code and resolve comment
Co-Authored-By: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* add code
* add code
* fix many 🐛
* add code
* add code and reverting tracker processes
* updating logging utilities, fixing Pipeline Parallelism and dataset/dataloader 🐛 s
1. Fixing bugs related to Pipeline Parallelism
2. Fixing bugs related to dataloaders/datasets.
3. Fixing logging utilities so that all logging and tracking happens on last process when using Megatron.
* addressing comments
* resolving comments
* update code
* refactoring and adding code to support custom implementation of`AbstractTrainStep` class
* minor change
* Many fixes for supporting custom TrainStep and Megatron Indexed Datasets
* Add code, 🐛 fixes and an initial doc file with headings
* fixing a big 🐛 related to loading checkpoints
* adding doc and an example
* example test CI
* docs
* more docs
* more doc changes
* more doc changes
* docs
* more docs
* doc fixing
* trying if we can directly import megatronlm utils
* doc fixing and throwing error if megatron isn't available.
* resolving comments
* fixes to bert and t5 and more docs
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Restructure actions and make running tests more efficient
* Try with source code adjustment
* First make sure they work
* Don't move
* Local workflows reference
* Keep it as a step
* Try changing a line
* Try not using tertiary
* Fix test
* Make tests wait
* Remove linechange
* Include and run based on new setup
* Try with removing workflow
* Re-add in, it works!
* Rename for clarity
* Prevent module constructor from building tensors on CPU and then moving them to meta
* Patch torch.load
* Maybe the hack to override torch.load is too dangerous?
* Make style
* No need to override torch.load as one can just load from config intead
* Not sure why there's an include_buffers argument, but we need to override the tensor constructor only when include_buffers is True
* launcher related changes + minor fixes
* removing minor fixes
* remove minor change
* deepspeed multinode standard launcher
* undo
* fixing the multi-node standard launcher
* Meta init/tensor_to_device logic for Int8 Parameters.
* add 8 bit support
* add special modules support
Co-authored-by: timdettmers <timdettmers@users.noreply.github.com>
* bad formatting
* bad formatting
* restoring the poor lines that were alone!
* small hack
- replaced parameter replacement logic
* add int8 support - v1
* replace cpu by device
* better refactoring
* put to buffer
* add else statement to avoid breaking changes
* styling
Co-authored-by: Tim Dettmers <tim.dettmers@gmail.com>
Co-authored-by: timdettmers <timdettmers@users.noreply.github.com>
* set default num_cpu_threads_per_process to improve oob performance
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* fix log info
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* checkpointing enhancements and fixes for FSDP and DeepSpeed
* resolving comments
1. Adding deprecation args and warnings in launcher for FSDP
2. Handling old configs to work with new launcher args wrt FSDP.
3. Reverting changes to public methods in `checkpointing.py` and handling it in `Accelerator`
4. Explicitly writing the defaults of various FSDP options in `dataclasses` for readability.
* fixes
1. FSDP wrapped model being added to the `_models`.
2. Not passing the env variables when args are None.
* resolving comments
* adding FSDP for all the collective operations
* adding deepspeed and fsdp tests
1. Removes mrpc datafiles and directly relies on HF datasets, as it was throwing a `file not found` error when running from within the `tests` folder. Updating `mocked_dataloaders` as a result.
2. adding `test_performance.py`, `test_memory.py` and `test_checkpointing.py` for multi-gpu FSDP and DeepSpeed tests
* reverting `mocked_dataloader` changes
* adding FSDP tests
* data files revert
* excluding fsdp tests from `tests_core`
* try 2
* adding a time delay to avoid `torchrun` crashing at times, which was causing flaky behaviour
* reducing the time of tests
* fixes
* fix
* fixes and reduce time further
* reduce time further and minor fixes
* adding a deepspeed basic e2e test for single gpu setup
* fix: saving model weights
checkpointing not saving model weights if calling `accelerator.prepare_model` instead of `accelerator.prepare`
resolves issue: https://github.com/huggingface/accelerate/issues/555
* fix: saving model weights for optimizer and scheduler
* Fix a few minor issues with example code in docs
- enumerate is not actually used
- variable name "labels" does not match
- prepare method should be called
* Apply style
* fix some parameter setting does not work for CPU DDP and bf16 fail in DDP path
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* if the number of machines > 1, get the IP and port set by accelerate config
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* if main_process_ip and port are set by the user, use them; otherwise use the default "127.0.0.1" when DDP is used on one machine
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* FSDP integration enhancements and fixes
* bug fixes
1. fix circular dependency
2. Add model print statement in FSDP example
3. minor fixes
* removing `always_wrap` as it is rarely useful
* removing comment
* resolving comments
* fsdp fp16 mp uses ShardedGradScaler
* fix import
* fix check
* add exception when class to wrap not found in model
* adding `FSDP_BACKWARD_PREFETCH`
* fix
* Fix scheduler in gradient accumulation example
* Phrase better how the scheduler is stepped during grad accum
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Migrate HFDeepSpeedConfig from trfrs to accelerate
* update state.py to resolve comments
1. Adds static method to have a simple API for integrating deepspeed config in transformers trainer.
* reverting changes and addressing comments
* Marking DeepSpeed and FSDP as experimental in accelerate
* deepspeed revamp
* Update dataclasses.py
* Update deepspeed.py
* quality
* fixing code
* quality
* FIx imports
* saving 16bit model in zero stage 3
1. Saving 16bit model in zero stage 3
2. zero init in stage 3 support using HFDeepSpeedConfig
* quality
* adding test and fixing bugs
* update makefile for deepspeed tests
* Update test.yml
* adding `deepspeed` as requirement for tests
* Apply suggestions from code review
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* quality
* addressing comments
* add example and minor updates
1. Add example to show the usage of config file with revamped deepspeed support.
2. update required deepspeed version to 0.6.5
3. reverting `reinit` change as it is not required
4. raising Exception when using `clip_grad_value` with DeepSpeed/FSDP.
* Documentation and Zero-3 Inference Support
1. Changes to support ZeRO Stage-3 inference.
2. minor bug fixes.
3. Documentation.
* doc fix
* Apply suggestions from code review
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* addressing comments
* update doc to address comments and bug fixes
1. update tests and add new one testing autofill functionality of `prepare` method.
2. fix bug related to zero-3 init related to HFDeepSpeedConfig
3. Update documentation addressing comments.
* removing image and hosting it on `documentation-images` dataset
* check for hidden_size for zero_opt heuristics
* updating tests to resolve runner failures
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Switch to evaluate for metrics
* Why the heck?
* Fix syntax error
* Install from github
* Is this the culprit?
* Upgrade Python
* Protobuf 💩
* Install from git not necessary now
* Sneaky last tensorboard
* Let's try this way
* Forgot to add all files :-/
* Introduce nightly builds
* Fixup docker images slightly
* Make device-count specific test use `torch.cuda.device_count()` rather than `Accelerator.num_processes` to avoid bug.
* fix shuffling for ShufflerIterDataPipe instances
* add versioning test for PyTorch
* fix minimum PyTorch version
Co-authored-by: Loubna ben allal <loubnabenallal@gmail.com>
* Big model inference
* Reorganize port cleanup
* Last cleanup
* Test fix
* Quality
* Update src/accelerate/big_modeling.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Fix bug in default mem
* Check device map is complete
* More tests
* Make load function more general
* Apply suggestions from code review
Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
* Quality
* Address more review comments
* Check generation results for gpt2
* Add main wrapper around everything
* Tests for final API
* Clean infer_auto_device
* Type annotations
* Apply suggestions from code review
Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
Co-authored-by: Lysandre Debut <lysandre.debut@reseau.eseo.fr>
* Address review comments
* Last review comment for now
* Fix bug in clean_device_map
* Add doc
* Style
* Fixes + dtype support
* Fix test
* Add option to offload CPU state_dict
* Indent typo
* Final tweaks
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
Co-authored-by: Lysandre Debut <lysandre.debut@reseau.eseo.fr>
* DeepSpeed and FSDP plugin support through script
Setting env variables when DeepSpeed/FSDP plugins are provided directly through the script without using accelerate launch.
* quality
* Create peak_memory_uasge_tracker.py
Adding the example by feature for tracking peak GPU memory usage. One usage example is to track the peak memory reduction when using FSDP.
* fixing the typo in the file name
* reformatting
* exclude peak_memory_usage_tracker.py from tests
* renaming and highlighting proper usage
* Update test_examples.py
😅
* Fix training in DeepSpeed
* Be more defensive
* Apply suggestions from code review
Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
Co-authored-by: Zachary Mueller <muellerzr@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
- Added experiment tracking API, and support for Weights and Biases, TensorBoard, and CometML + Tests
- Added `tensorflow` to a new dependency list to be used during tests
- Added three new functions in `Accelerator` to interact with the API
* Use workflow from doc-builder to build PR docs
* Adjust branch
* Consecutive jobs
* Transfer lib install to the workflow
* Remove dep
* Add dev install
* Use delete doc comment workflow
* Trigger
* Last job and better token maybe?
* Adapt token
* Use temp variable
* Use temp variable for real
* Pass the token better
* Let the template fetch the token
* Try to build the main doc!
* With the right name, preferably
* Notebook try
* Test
* Put back
* Final cleanup
* Final cleanup for realsies
* Switch to main branch
* add env command
modified: src/accelerate/commands/accelerate_cli.py
- added the env command to the command CLI
new file: src/accelerate/commands/env.py
- added the env command parser and env command
new file: src/accelerate/file_utils.py
- added is_torch_available
- based on a69e185074/src/transformers/file_utils.py (L69)
modified: src/accelerate/utils.py
- add import of importlib_metadata
- maybe we can do this in file_utils? Or maybe add is_torch_available to `utils.py`? I just based the organization on the `transformers` repo
* remove unnecessary is torch available
modified: src/accelerate/commands/env.py
- remove use of is_torch_available
deleted: src/accelerate/file_utils.py
- remove is torch available
modified: src/accelerate/utils.py
- revert to 00e80dcfff899440b743cf9d1453cc762268591b
* add default configs of accelerate
* add default configs of accelerate
* Update src/accelerate/commands/env.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Make accelerated model with AMP possible to pickle
Solves issue 273
As discussed in the issue, the solution is to use a class instead of a
closure to achieve the same result.
I created an alias convert_outputs_to_fp32 = ConvertOutputsToFp32 so
that convert_outputs_to_fp32 can still be imported.
Alternatively, I could remove the alias and change import to use
ConvertOutputsToFp32 directly, but this may break backwards
compatibility. Or I could name the class convert_outputs_to_fp32 because
it works like a function.
Regarding the testing, I added a check to test_script.py for the model
trained with mixed_precision=fp16. Locally, this test could not trigger
the error in the issue because the forward method is never replaced. I
believe this is because AcceleratorState detects that my machine can't
perform fp16 training. I hope that in CI, this would be detected.
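A hedged sketch of the class-instead-of-closure idea (simplified: it only converts a plain fp16 tensor output):
```python
import functools

import torch

class ConvertOutputsToFp32:
    def __init__(self, model_forward):
        self.model_forward = model_forward
        functools.update_wrapper(self, model_forward)

    def __call__(self, *args, **kwargs):
        outputs = self.model_forward(*args, **kwargs)
        if isinstance(outputs, torch.Tensor) and outputs.dtype == torch.float16:
            outputs = outputs.float()
        return outputs

# Keep the old lowercase name importable for backwards compatibility.
convert_outputs_to_fp32 = ConvertOutputsToFp32
```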
* Move test and style (#1)
* Remove unnecessary import
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* make deepspeed optimizer match parameters of passed optimizer, instead of all model parameters
* style
Co-authored-by: Jack Hessel <jackh@allenai.org>
* Add high level API reference to README.md
Update 'Why shouldn't I use' section with reference to pytorch-accelerated
* Fix typo in notebook launcher
Update 'Launching a training' to 'Launching training' in one instance
* Create 'Frameworks Using Accelerate' section
* Improve README.md formatting
* Remove newlines
Both `accelerator` and `Accelerator` would do the trick, but `accelerate` won't, since we never import it, and even if we do, the `backward()` method doesn't exist in `accelerate`.
* PoC on main dataloader
* Support `split_batches`
* Add TPU support
* Fix typo
* More fixes
* Final fix
* Remove last print
* Add comments in the code
* Add test
* Style and sanity check
* Update src/accelerate/accelerator.py
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
* Address review comments
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
description: Submit a bug report to help us improve Accelerate
body:
- type: markdown
attributes:
value: |
Thanks for taking the time to submit a bug report! 🐛
If this is not a bug related to the Accelerate library directly, but instead a general question about your code or the library specifically please use the [forums](https://discuss.huggingface.co/c/accelerate/18).
- type: textarea
id: system-info
attributes:
label: System Info
description: Please share your accelerate configuration with us. You can run the command `accelerate env` and copy-paste its outputs below
- label: "One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)"
- label: "My own task or dataset (give details below)"
- type: textarea
id: reproduction
validations:
required: true
attributes:
label: Reproduction
description: |
Please provide a code sample that reproduces the problem you ran into. It can be a Colab link or just a code snippet.
If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
placeholder: |
Steps to reproduce the behavior:
1.
2.
3.
- type: textarea
id: expected-behavior
validations:
required: true
attributes:
label: Expected behavior
description: "A clear and concise description of what you would expect to happen."
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/accelerate/blob/main/CONTRIBUTING.md#submitting-a-pull-request-pr),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/accelerate/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/accelerate/tree/main/docs#writing-documentation---specification).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
- Big modeling: @SunMarc
- Fully-Sharded Data Parallelism: @SunMarc @zach-huggingface
- DeepSpeed: @SunMarc @zach-huggingface
- Command Line Interface: @SunMarc @zach-huggingface
- Documentation: @SunMarc @zach-huggingface
- Core parts of the library: @BenjaminBossan @SunMarc @zach-huggingface
- Maintained examples: @SunMarc or @zach-huggingface
echo "Quality check failed. Please ensure the right dependency versions are installed with 'pip install -e .[quality]' and rerun 'make style; make quality;'" >> $GITHUB_STEP_SUMMARY
3. Create a new branch to hold your development changes, and do this for every new PR you work on.
Start by synchronizing your `main` branch with the `upstream/main` branch (more details in the [GitHub Docs](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/syncing-a-fork)):
```bash
$ git checkout main
$ git fetch upstream
$ git merge upstream/main
```
Once your `main` branch is synchronized, create a new branch from it:
4. Set up a development environment by running the following command in a conda or a virtual environment you've created for working on this library:
```bash
$ pip install -e ".[dev]"
```
This will install all testing and linting/code quality dependencies for the library (see `quality`, `test_dev`,
`test_prod` targets in [`setup.py`](./setup.py)).
(If accelerate was already installed in the virtual environment, remove
it with `pip uninstall accelerate` before reinstalling it in editable
mode with the `-e` flag).
Alternatively, if you are using [Visual Studio Code](https://code.visualstudio.com/Download), the fastest way to get set up is by using
the provided Dev Container. Documentation on how to get started with dev containers is available [here](https://code.visualstudio.com/docs/remote/containers).
5. Develop the features on your branch.
As you work on the features, you should make sure that the test suite
passes. You should run the tests impacted by your changes like this (see
below an explanation regarding the environment variable):
```bash
$ pytest tests/<TEST_TO_RUN>.py
```
> For the following commands leveraging the `make` utility, we recommend using the WSL system when running on
> Windows. More information [here](https://docs.microsoft.com/en-us/windows/wsl/about).
You can also run the full suite with the following command.
```bash
$ make test
```
`accelerate` relies on `ruff` to format its source code
consistently. After you make changes, apply automatic style corrections and code verifications
that can't be automated in one go with:
This target is also optimized to only work with files modified by the PR you're working on.
If you prefer to run the checks one after the other, the following command applies the
style corrections:
```bash
$ make style
```
`accelerate` also uses a few custom scripts to check for coding mistakes. Quality
control runs in CI, however you can also run the same checks with:
```bash
$ make quality
```
You can also set up [`pre-commit`](https://pre-commit.com/) to run these checks
automatically as Git commit hooks.
```bash
$ pip install pre-commit
$ pre-commit install
```
Once you're happy with your changes, add changed files using `git add` and
make a commit with `git commit` to record your changes locally:
🤗 Accelerate was created for PyTorch users who like to write the training loop of PyTorch models but are reluctant to write and maintain the boilerplate code needed to use multi-GPUs/TPU/fp16.
@@ -63,12 +57,12 @@ Here is an example:
+ device = accelerator.device
model = torch.nn.Transformer().to(device)
optim = torch.optim.Adam(model.parameters())
optimizer = torch.optim.Adam(model.parameters())
dataset = load_dataset('my_dataset')
data = torch.utils.data.DataLoader(dataset, shuffle=True)
+ model, optim, data = accelerator.prepare(model, optim, data)
+ model, optimizer, data = accelerator.prepare(model, optimizer, data)
model.train()
for epoch in range(10):
@@ -81,13 +75,13 @@ Here is an example:
output = model(source)
loss = F.cross_entropy(output, targets)
+ accelerator.backward(loss)
- loss.backward()
+ accelerator.backward(loss)
optimizer.step()
```
As you can see in this example, by adding 5-lines to any standard PyTorch training script you can now run on any kind of single or distributed node setting (single CPU, single GPU, multi-GPUs and TPUs) as well as with or without mixed precision (fp16).
As you can see in this example, by adding 5-lines to any standard PyTorch training script you can now run on any kind of single or distributed node setting (single CPU, single GPU, multi-GPUs and TPUs) as well as with or without mixed precision (fp8, fp16, bf16).
In particular, the same code can then be run without modification on your local machine for debugging or your training environment.
@@ -99,17 +93,17 @@ In particular, the same code can then be run without modification on your local
from datasets import load_dataset
+ from accelerate import Accelerator
+ accelerator = Accelerator()
- device = 'cpu'
+ accelerator = Accelerator()
+ model = torch.nn.Transformer()
- model = torch.nn.Transformer().to(device)
optim = torch.optim.Adam(model.parameters())
+ model = torch.nn.Transformer()
optimizer = torch.optim.Adam(model.parameters())
dataset = load_dataset('my_dataset')
data = torch.utils.data.DataLoader(dataset, shuffle=True)
+ model, optim, data = accelerator.prepare(model, optim, data)
+ model, optimizer, data = accelerator.prepare(model, optimizer, data)
model.train()
for epoch in range(10):
@@ -122,15 +116,17 @@ In particular, the same code can then be run without modification on your local
output = model(source)
loss = F.cross_entropy(output, targets)
+ accelerator.backward(loss)
- loss.backward()
+ accelerator.backward(loss)
optimizer.step()
```
Want to learn more? Check out the [documentation](https://huggingface.co/docs/accelerate) or have a look at our [examples](https://github.com/huggingface/accelerate/tree/main/examples).
## Launching script
🤗 Accelerate also provides an optional CLI tool that allows you to quickly configure and test your training environment before launching the scripts. No need to remember how to use `torch.distributed.launch` or to write a specific launcher for TPU training!
🤗 Accelerate also provides an optional CLI tool that allows you to quickly configure and test your training environment before launching the scripts. No need to remember how to use `torch.distributed.run` or to write a specific launcher for TPU training!
On your machine(s) just run:
```bash
@@ -149,17 +145,98 @@ For instance, here is how you would run the GLUE example on the MRPC task (from
accelerate launch examples/nlp_example.py
```
This CLI tool is **optional**, and you can still use `python my_script.py` or `python -m torchrun my_script.py` at your convenience.
You can also directly pass in the arguments you would to `torchrun` as arguments to `accelerate launch` if you wish to not run `accelerate config`.
To learn more, check the CLI documentation available [here](https://huggingface.co/docs/accelerate/package_reference/cli).
Or view the configuration zoo [here](https://github.com/huggingface/accelerate/blob/main/examples/config_yaml_templates/)
## Launching multi-CPU run using MPI
🤗 Here is another way to launch multi-CPU run using MPI. You can learn how to install Open MPI on [this page](https://www.open-mpi.org/faq/?category=building#easy-build). You can use Intel MPI or MVAPICH as well.
Once you have MPI setup on your cluster, just run:
```bash
accelerate config
```
Answer the questions that are asked, selecting to run using multi-CPU, and answer "yes" when asked if you want accelerate to launch mpirun.
Then, use `accelerate launch` with your script like:
```bash
accelerate launch examples/nlp_example.py
```
Alternatively, you can use mpirun directly, without using the CLI like:
```bash
mpirun -np 2 python examples/nlp_example.py
```
## Launching training using DeepSpeed
🤗 Accelerate supports training on single/multiple GPUs using DeepSpeed. To use it, you don't need to change anything in your training code; you can set everything using just `accelerate config`. However, if you desire to tweak your DeepSpeed related args from your Python script, we provide you the `DeepSpeedPlugin`.
```python
from accelerate import Accelerator, DeepSpeedPlugin
# deepspeed needs to know your gradient accumulation steps beforehand, so don't forget to pass it
# Remember you still need to do gradient accumulation by yourself, just like you would have done without deepspeed
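# A sketch completing the truncated snippet above (the values below are assumptions):
deepspeed_plugin = DeepSpeedPlugin(zero_stage=2, gradient_accumulation_steps=2)
accelerator = Accelerator(mixed_precision='fp16', deepspeed_plugin=deepspeed_plugin)
```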
Note: DeepSpeed support is experimental for now. In case you get into some problem, please open an issue.
## Launching your training from a notebook
🤗 Accelerate also provides a `notebook_launcher` function you can use in a notebook to launch a distributed training. This is especially useful for Colab or Kaggle notebooks with a TPU backend. Just define your training loop in a `training_function` then in your last cell, add:
```python
from accelerate import notebook_launcher
notebook_launcher(training_function)
```
An example can be found in [this notebook](https://github.com/huggingface/notebooks/blob/main/examples/accelerate_examples/simple_nlp_example.ipynb). [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/accelerate_examples/simple_nlp_example.ipynb)
## Why should I use 🤗 Accelerate?
You should use 🤗 Accelerate when you want to easily run your training scripts in a distributed environment without having to renounce full control over your training loop. This is not a high-level framework above PyTorch, just a thin wrapper so you don't have to learn a new library, In fact the whole API of 🤗 Accelerate is in one class, the `Accelerator` object.
You should use 🤗 Accelerate when you want to easily run your training scripts in a distributed environment without having to renounce full control over your training loop. This is not a high-level framework above PyTorch, just a thin wrapper so you don't have to learn a new library. In fact, the whole API of 🤗 Accelerate is in one class, the `Accelerator` object.
## Why shouldn't I use 🤗 Accelerate?
You shouldn't use 🤗 Accelerate if you don't want to write a training loop yourself. There are plenty of high-level libraries above PyTorch that will offer you that, 🤗 Accelerate is not one of them.
## Frameworks using 🤗 Accelerate
If you like the simplicity of 🤗 Accelerate but would prefer a higher-level abstraction around its capabilities, some frameworks and libraries that are built on top of 🤗 Accelerate are listed below:
* [Amphion](https://github.com/open-mmlab/Amphion) is a toolkit for Audio, Music, and Speech Generation. Its purpose is to support reproducible research and help junior researchers and engineers get started in the field of audio, music, and speech generation research and development.
* [Animus](https://github.com/Scitator/animus) is a minimalistic framework to run machine learning experiments. Animus highlights common "breakpoints" in ML experiments and provides a unified interface for them within [IExperiment](https://github.com/Scitator/animus/blob/main/animus/core.py#L76).
* [Catalyst](https://github.com/catalyst-team/catalyst#getting-started) is a PyTorch framework for Deep Learning Research and Development. It focuses on reproducibility, rapid experimentation, and codebase reuse so you can create something new rather than write yet another train loop. Catalyst provides a [Runner](https://catalyst-team.github.io/catalyst/api/core.html#runner) to connect all parts of the experiment: hardware backend, data transformations, model training, and inference logic.
* [fastai](https://github.com/fastai/fastai#installing) is a PyTorch framework for Deep Learning that simplifies training fast and accurate neural nets using modern best practices. fastai provides a [Learner](https://docs.fast.ai/learner.html#Learner) to handle the training, fine-tuning, and inference of deep learning algorithms.
* [Finetuner](https://github.com/jina-ai/finetuner) is a service that enables models to create higher-quality embeddings for semantic search, visual similarity search, cross-modal text<->image search, recommendation systems, clustering, duplication detection, anomaly detection, or other uses.
* [InvokeAI](https://github.com/invoke-ai/InvokeAI) is a creative engine for Stable Diffusion models, offering industry-leading WebUI, terminal usage support, and serves as the foundation for many commercial products.
* [Kornia](https://kornia.readthedocs.io/en/latest/get-started/introduction.html) is a differentiable library that allows classical computer vision to be integrated into deep learning models. Kornia provides a [Trainer](https://kornia.readthedocs.io/en/latest/x.html#kornia.x.Trainer) with the specific purpose to train and fine-tune the supported deep learning algorithms within the library.
* [Open Assistant](https://projects.laion.ai/Open-Assistant/) is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
* [pytorch-accelerated](https://github.com/Chris-hughes10/pytorch-accelerated) is a lightweight training library, with a streamlined feature set centered around a general-purpose [Trainer](https://pytorch-accelerated.readthedocs.io/en/latest/trainer.html), that places a huge emphasis on simplicity and transparency; enabling users to understand exactly what is going on under the hood, but without having to write and maintain the boilerplate themselves!
* [Stable Diffusion web UI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) is an open-source browser-based easy-to-use interface based on the Gradio library for Stable Diffusion.
* [torchkeras](https://github.com/lyhue1991/torchkeras) is a simple tool for training PyTorch models in a Keras-like style, with a dynamic and beautiful plot provided in the notebook to monitor your loss or metric.
* [transformers](https://github.com/huggingface/transformers) is a library for training state-of-the-art machine learning models in PyTorch, TensorFlow, and JAX (Accelerate is the backend for the PyTorch side).
## Installation
This repository is tested on Python 3.8+ and PyTorch 1.10.0+.
You should install 🤗 Accelerate in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're unfamiliar with Python virtual environments, check out the [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
Then, 🤗 Accelerate can be installed using pip as follows:
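```bash
pip install accelerate
```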
## Supported integrations
- CPU only
- multi-CPU on one node (machine)
- multi-CPU on several nodes (machines)
- single GPU
- multi-GPU on one node (machine)
- multi-GPU on several nodes (machines)
- TPU
- FP16/BFloat16 mixed precision
- FP8 mixed precision with [Transformer Engine](https://github.com/NVIDIA/TransformerEngine) or [MS-AMP](https://github.com/Azure/MS-AMP/)
- DeepSpeed support (Experimental)
- PyTorch Fully Sharded Data Parallel (FSDP) support (Experimental)
- Megatron-LM support (Experimental)
## Citing 🤗 Accelerate
If you use 🤗 Accelerate in your publication, please cite it by using the following BibTeX entry.
```bibtex
@Misc{accelerate,
  title =        {Accelerate: Training and inference at scale made simple, efficient and adaptable.},
  author =       {Sylvain Gugger and Lysandre Debut and Thomas Wolf and Philipp Schmid and Zachary Mueller and Sourab Mangrulkar and Marc Sun and Benjamin Bossan},
  howpublished = {\url{https://github.com/huggingface/accelerate}},
  year =         {2022}
}
```
This script supports `gpt-j-6b`, `gpt-neox`, `opt` (30B version) and `T0pp` out of the box, but you can specify any valid checkpoint for `model_name`.
To force a different `torch_dtype` than the one in the config: `--torch_dtype xxx`.
If you get an error linked to disk offload, you need to add the option `--disk-offload`.
## Results
On a setup with two Titan RTX GPUs (24GB of memory each) and 32GB of system RAM, we get the following benchmarks (T0pp does not run in float16, which is why it's not included).
| Model | Model load time | Generation time | dtype | GPU 0 use | GPU 1 use | CPU use | Disk offload |
f"Accuracy not the same for untrained baseline and accelerator using opt_level={opt_level}: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
f"Accuracy not the same for trained baseline and accelerator using opt_level={opt_level}: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
f"ZERO stage {zero_stage}, opt_level={opt_level}:\nAccuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
f"ZERO stage {zero_stage}, opt_level={opt_level}:\nF1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
f"ZERO stage {zero_stage}, opt_level={opt_level}:\nAccuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
f"ZERO stage {zero_stage}, opt_level={opt_level}:\nF1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
# Comparing and running [torchao](https://github.com/pytorch/ao/tree/main/torchao/float8) FP8 with accelerate
## Overview
This repo provides scripts which compare native `torchao` model training against `accelerate`'s own integration. Each modeling type is segmented out via a script, supporting the following:
* Single GPU training (`non_distributed.py`)
* Multi-GPU training via DistributedDataParallelism (`ddp.py`)
* Fully Sharded Data Parallelism (`fsdp.py`)
* DeepSpeed ZeRO 1-3 (`deepspeed.py`)
To run them, it's recommended to use a docker image (see the attached `Dockerfile`) and not install `torchao` manually.
## Running:
There are official Docker images located at `huggingface/accelerate:gpu-fp8-torchao-nightly` which can be used.
You can run all scripts using the core `accelerate launch` command without any `accelerate config` being needed.
For single GPU, run it via `python`:
```bash
python non_distributed.py
```
For the rest, run it via `accelerate launch`:
```bash
accelerate launch ddp.py  # or fsdp.py, deepspeed.py
```
f"ZERO stage {zero_stage}: Accuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
f"ZERO stage {zero_stage}: F1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
f"ZERO stage {zero_stage}: Accuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
# Comparing and running [TransformerEngine](https://github.com/NVIDIA/TransformerEngine) FP8 with accelerate
## Overview
This repo provides scripts which compare native TransformerEngine model training against `accelerate`'s own integration. Each modeling type is segmented out via a script, supporting the following:
* Single GPU training (`non_distributed.py`)
* Multi-GPU training via DistributedDataParallelism (`ddp.py`)
* Fully Sharded Data Parallelism (`fsdp.py`)
* DeepSpeed ZeRO 1-3 (`deepspeed.py`)
To run them, it's recommended to use a docker image (see the attached `Dockerfile`) and not install `TransformerEngine` manually.
## Running:
There are official Docker images located at `huggingface/accelerate:gpu-fp8-transformerengine-nightly` which can be used.
You can run all scripts using the core `accelerate launch` command without any `accelerate config` being needed.
For single GPU, run it via `python`:
```bash
python non_distributed.py
```
For the rest, run it via `accelerate launch`:
```bash
accelerate launch ddp.py  # or fsdp.py, deepspeed.py
```
f"ZERO stage {zero_stage}: Accuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
f"ZERO stage {zero_stage}: F1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
f"ZERO stage {zero_stage}: Accuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
This benchmark showcases `FSDP2` in 🤗 `accelerate` and compares it to a `torch` baseline.
## Overview
This benchmark consists of two parts:
- `main.py` is the main script that runs the benchmark
- `visualize.py` is the script that visualizes the results (if `--output_dir` was specified for the previous command)
## Motivation
We want to showcase that 🤗 `accelerate`'s integration of `FSDP2` is on par with raw PyTorch, and to highlight a "broken" behavior in PyTorch: creating an optimizer before applying `FSDP2` **doesn't result in a working training loop** (more on this later).
This script showcases **matching memory usage and convergence between `accelerate` and `torch`'s baseline.**
To deal with this breaking change (and maintain backward compatibility with FSDP1 in terms of API), `accelerate` had to come up with a workaround, since `accelerate` assumes that the user will nearly always create a model, optimizer, scheduler, etc. beforehand and bring them themselves. Without a workaround, this led to a stark increase in memory usage, as well as the model not training at all, if the user created the optimizer beforehand.
To work around this, we replace the parameters inside the optimizer with the newly created FSDP2-sharded ones; a simplified sketch of this swap is shown below. More about this can be found in this [blog post (TBD)](TODO)
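The following is a minimal, illustrative sketch of the idea (not Accelerate's exact implementation); it assumes a `model` and an `optimizer` were created before sharding, and uses `fully_shard` from recent PyTorch versions:

```python
# Illustrative only: re-point an optimizer created *before* sharding at the new sharded parameters.
from torch.distributed.fsdp import fully_shard  # import location may differ across torch versions

# 1) Record which parameter name each data pointer belongs to before sharding
ptr_to_name = {p.data_ptr(): n for n, p in model.named_parameters()}
# Remember, per param group, the names of the (soon to be stale) parameters
group_names = [[ptr_to_name[p.data_ptr()] for p in g["params"]] for g in optimizer.param_groups]

# 2) Shard the model; this allocates brand-new sharded parameters
fully_shard(model)

# 3) Swap the stale references inside the optimizer for the freshly sharded ones
new_params = dict(model.named_parameters())
for group, names in zip(optimizer.param_groups, group_names):
    group["params"] = [new_params[n] for n in names]
```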
> [!WARNING]
> This script is intended to fit on 2x 24GB GPUs, though on so few GPUs it's not possible to see the memory difference (discrepancies in grad allocation result in lower memory usage in the non-fixed case), only the difference in convergence. Below are attached results from 8x H100 GPUs where the difference is visible.
> TLDR: more GPUs = bigger memory difference between fixed and non-fixed cases.
## Results
Here are the results from running the benchmark on 8x H100 GPUs:
As you can see, the memory usage of `accelerate` and `torch_post_shard` (the **intended** way) are very similar, while `torch_pre_shard_not_fixed` uses significantly more memory. Our fix in `torch_pre_shard_fixed` brings the memory usage back in line with the **intended** approach.
> [!WARNING]
> Timing discrepancies are due to the benchmarks being run in a single script.
## Running
To run the benchmark, you can either use `accelerate launch` or `torchrun`:
```bash
accelerate launch main.py
```
```bash
# For two GPUs
torchrun --nproc_per_node 2 main.py
```
This supports multiple configurable options, you can learn about them by running:
```bash
python3 main.py --help
```
This script will run 4 different benchmarks:
- `torch_optimizer_after_fsdp`: `torch` baseline where the optimizer is created after applying `FSDP2`; this is the **intended** way to do it
- `torch_optimizer_before_fsdp_not_fixed`: `torch` baseline where the optimizer is created before applying `FSDP2` without fixing the optimizer parameters
- `torch_optimizer_before_fsdp_fixed`: `torch` baseline where the optimizer is created before applying `FSDP2` with our fix to the optimizer
- `accelerate`: `accelerate`'s own integration of `FSDP2` where the optimizer is created before applying `FSDP2`, but we apply our fix to the optimizer
Memory results are saved in a folder specified by the `--output_dir` argument.
Optionally, you can specify `--save_memory_snapshot` to save the torch memory snapshot, which can then be viewed using [`torch memory viz`](https://pytorch.org/memory_viz).
## Visualizing results
To visualize the results, you can run:
```bash
python3 visualize.py --dir <path_to_output_dir>
```
This will then create two plots, showcasing allocated and reserved memory usage between all the different benchmarks discussed above.
This function returns a dictionary mapping the parameter names to their data pointers or
the original parameters if `drop_refs` is `False`.
It is used to get the original parameter names before `fully_shard` is applied.
We only return the data pointers, so we drop the references to the original parameters
and `fully_shard` will then trigger a new allocation for the sharded ones.
Args:
model (`torch.nn.Module`): Model instance to get the named parameters from
drop_refs (`bool`, *optional*, defaults to `False`): Whether to drop the references to the original parameters
Returns:
`dict[str, Union[torch.Tensor, int]]`: Dictionary mapping the parameter names to their data pointers or the original parameters if `drop_refs` is `False`
"""
named_parameters = {}
for n, p in model.named_parameters():
    # We only preserve the data pointers to have the unique 1:1 mapping between the original and the sharded parameters
    named_parameters[n] = p.data_ptr() if drop_refs else p
return named_parameters
# Copyright 2025 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import json

import matplotlib.pyplot as plt


def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--dir", type=str, help="Directory containing the memory usage data")
    parser.add_argument(
        "--memory_threshold",
        type=int,
        default=0,
        help="Memory threshold to filter data that is below this value (only filters 1st `--filter_partition` of the points which should roughly correspond to the model loading)",
    )
    parser.add_argument(
        "--filter_partition",
        type=float,
        default=1 / 3,
        help="Partition to drop data from that are below the memory threshold",
This benchmark compares different compilation strategies using PyTorch's `torch.compile` and Accelerate's `compile_regions` utility, which is based on the recipe in [PyTorch documentation](https://pytorch.org/tutorials/recipes/regional_compilation.html).
## Overview
The benchmark evaluates three approaches:
- **Baseline**: No compilation, standard PyTorch eager execution.
- **Full compilation**: Using PyTorch's `torch.compile()` on the entire model.
- **Regional compilation**: Using `accelerate.utils.compile_regions()` which targets specific blocks of the model to optimize compilation time.
Each approach is tested with different batch sizes (1 and 4) and sequence lengths (128) on various LLaMA-based models ranging from 1B to 13B parameters. We purposefully run the forward pass outside of the `torch.no_grad()` context to simulate performance in a training environment, where gradients are needed.
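As a rough illustration of what "regional" means here (a hand-written sketch, not the benchmark script; `accelerate.utils.compile_regions` automates the region selection, and the small config below is purely illustrative), compiling only the repeated decoder layers instead of the whole model looks like this:

```python
import torch
from transformers import LlamaConfig, LlamaForCausalLM

# Small, non-gated config purely for illustration
config = LlamaConfig(num_hidden_layers=4, hidden_size=1024, intermediate_size=2048)
model = LlamaForCausalLM(config).to("cuda")

# Full compilation: one large graph covering the whole model
full_model = torch.compile(model)

# Regional compilation: compile each repeated decoder layer; identical layers reuse the compiled artifact
for i, layer in enumerate(model.model.layers):
    model.model.layers[i] = torch.compile(layer)
```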
## Usage
To run this benchmark:
```bash
python regional_compilation.py
```
The script will automatically download the model configurations, create models, and benchmark both compilation and inference times across different scenarios.
## Requirements
- Suitable GPU memory for the models being tested.
- PyTorch with CUDA support.
- Transformers library.
- Accelerate library.
## Results
The benchmark results are summarized in the following figures:
- Compilation time is how long it takes to run the first forward pass.
- Speedup factor is the ratio of non-compiled baseline inference time to the fully/regionally compiled inference time.
Regional compilation provides significantly faster compilation times compared to full model compilation:
- **Full compilation**: Takes ~10-23 seconds depending on model size.
- **Regional compilation**: Takes only ~2-3 seconds across all model sizes.
- **Speed improvement**: Regional compilation is **5-9x faster** to compile.
### Inference Time
Regional compilation delivers inference performance close to full compilation:
- For batch size 1:
- For smaller models (1B-3B): Full compilation has a slight edge over regional compilation.
- For larger models (8B-13B): Regional compilation performs similarly to full compilation.
- For batch size 4: Regional compilation performs similarly to full compilation across all models.
## Key Takeaways
1. **Comparable Performance**: Regional compilation delivers performance speedups similar to full compilation, especially for larger models.
2. **Faster Compilation**: Regional compilation significantly reduces the time taken to compile models, making it a more efficient choice for deployment.
3. **Batch Size Impact**: At batch size 4, full compilation and regional compilation perform nearly identically.
4. **Model Size Impact**: Even with a small batch size, full compilation and regional compilation perform similarly for larger models (8B-13B).
5. **Practical Application**: For real-world applications, regional compilation is a practical choice for optimizing training cold start times, especially when working with large models.
Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
# Official Hugging Face Accelerate Docker Images
Accelerate publishes a variety of docker versions as part of our CI that users can also use. These are stable images that Accelerate can run off of, and they come with a variety of different setup configurations, all of which are officially hosted on [Docker Hub](https://hub.docker.com/r/huggingface/accelerate).
A breakdown of each is given below.
## Naming Conventions
Accelerate docker images follow a tagging convention of:
`accelerator` in this instance is one of many applicable pre-configured backends:
* `gpu`: Comes compiled off of the `nvidia/cuda` image and includes core parts like `bitsandbytes`. Runs off python 3.9.
* `cpu`: Comes compiled off of `python:3.9-slim` and is designed for non-CUDA based workloads.
* `gpu-deepspeed`: Comes compiled off of the `nvidia/cuda` image and includes core parts like `bitsandbytes` as well as the latest `deepspeed` version. Runs off python 3.10.
* `gpu-fp8-transformerengine`: Comes compiled off of `nvcr.io/nvidia/pytorch` and is specifically for running the `benchmarks/fp8` scripts on devices which support FP8 operations using the `TransformerEngine` library (RTX 4090, H100, etc.)
* More to come soon
## Nightlies vs Releases
With each release, a new build is pushed with a version number included in the name. For a GPU-supported image of version 0.28.0, for instance, it would look like the following:
```bash
huggingface/accelerate:gpu-release-0.28.0
```
Nightlies contain two different image tags. There is a general `nightly` tag which is built each night, and a `nightly-YYYY-MM-DD` which corresponds to a build from a particular date.
For instance, here is an example nightly CPU image from 3/14/2024
```bash
huggingface/accelerate:cpu-nightly-2024-03-14
```
## Running the images
Each image comes compiled with `conda`, and an `accelerate` environment contains all of the installed dependencies.
To pull down the latest nightly run:
```bash
docker pull huggingface/accelerate:gpu-nightly
```
To then run it in interactive mode with GPU-memory available, run:
```bash
docker container run --gpus all -it huggingface/accelerate:gpu-nightly
```
## DEPRECATED IMAGES
CPU and GPU docker images were hosted at `huggingface/accelerate-gpu` and `huggingface/accelerate-cpu`. These builds are now outdated and will not receive updates.
The builds at the corresponding `huggingface/accelerate:{gpu,cpu}` contain the same `Dockerfile`, so it's as simple as changing the docker image to the desired ones from above. We will not be deleting these images for posterity, but they will not be receiving updates going forward.
You can adapt the `--build_dir` to set any temporary folder that you prefer. This command will create it and generate
the MDX files that will be rendered as the documentation on the main website. You can inspect them in your favorite
Markdown editor.
## Previewing the documentation
To preview the docs, first install the `watchdog` module with:
```bash
pip install watchdog
```
Then run the following command:
```bash
doc-builder preview {package_name} {path_to_docs}
```
For example:
```bash
doc-builder preview accelerate docs/source/
```
The docs will be viewable at [http://localhost:3000](http://localhost:3000). You can also preview the docs once you have opened a PR. You will see a bot add a comment to a link where the documentation with your changes lives.
---
**NOTE**
The `preview` command only works with existing doc files. When you add a completely new file, you need to update `_toctree.yml` and restart the `preview` command (`ctrl-c` to stop it & call `doc-builder preview ...` again).
---
## Adding a new element to the navigation bar
Accepted files are Markdown (.md).
Create a file with its extension and put it in the source directory. You can then link it to the toc-tree by putting
the filename without the extension in the [`_toctree.yml`](https://github.com/huggingface/accelerate/blob/main/docs/source/_toctree.yml) file.
## Renaming section headers and moving sections
It helps to keep the old links working when renaming the section header and/or moving sections from one document to another. This is because the old links are likely to be used in Issues, Forums, and social media, and it'd make for a much better user experience if users reading those months later could still easily navigate to the originally intended information.
Therefore, we simply keep a little map of moved sections at the end of the document where the original section was. The key is to preserve the original anchor.
So if you renamed a section from: "Section A" to "Section B", then you can add at the end of the file:
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Execution process
When working with distributed training systems, it is important to manage how and when processes are executed across GPUs. Some processes are completed faster than others, and some processes shouldn't begin if others haven't finished yet. Accelerate provides tools for orchestrating when processes are executed to ensure everything remains synchronized across all devices.
This tutorial will teach you how to execute a process on only one machine and how to delay execution until all processes have reached a certain point.
## Execute on one process
Certain code only needs to be run once on a given machine, such as printing a log statement or only displaying one progress bar on the local main process.
<hfoptions id="local-execution">
<hfoption id="statements">
You should use `accelerator.is_local_main_process` to indicate code that should only be executed once.
You could also wrap a statement with `accelerator.is_local_main_process`.
> [!TIP]
> For standalone `print` statements that aren't wrapped in `accelerator.is_local_main_process`, replace `print` with Accelerate's [`~Accelerator.print`] method to only print once per process.
```py
if accelerator.is_local_main_process:
    print("Accelerate is the best")
```
</hfoption>
<hfoption id="function">
For a function that should only be executed once, use [`~Accelerator.on_local_main_process`].
```py
@accelerator.on_local_main_process
def do_my_thing():
    "Something done once per server"
    do_thing_once_per_server()
```
</hfoption>
</hfoptions>
You could also direct Accelerate to execute code once across *all processes* regardless of the number of machines. This is useful if you're uploading a final model to the Hub.
<hfoptions id="main-execution">
<hfoption id="statement">
You should use `accelerator.is_main_process` to indicate code that should only be executed once across all processes.
```py
if accelerator.is_main_process:
    repo.push_to_hub()
```
</hfoption>
<hfoption id="function">
For a function that should only be executed once across all processes, use [`~Accelerator.on_main_process`].
```py
@accelerator.on_main_process
def do_my_thing():
    "Something done once per server"
    do_thing_once()
```
</hfoption>
</hfoptions>
## Execute on a specific process
Accelerate can also help you execute functions that should only be executed on a specific process or a local process index.
<hfoptions id="specific-execution">
<hfoption id="specific process">
Use the [`~Accelerator.on_process`] method and specify the process index to execute a function on.
```py
@accelerator.on_process(process_index=0)
def do_my_thing():
    "Something done on process index 0"
    do_thing_on_index_zero()
```
</hfoption>
<hfoption id="local process">
Use the [`~Accelerator.on_local_process`] method and specify the local process index to execute a function on.
"Something done on process index 0 on each server"
do_thing_on_index_zero_on_each_server()
```
</hfoption>
</hfoptions>
## Defer execution
When you run your script on several GPUs at the same time, some code may be executed faster than others. You might need to wait for all processes to reach a certain point before executing the next set of instructions. For instance, you shouldn’t save a model before making sure every process is done with training.
To do this, add [`~Accelerator.wait_for_everyone`] in your code. This blocks all processes that have finished first from continuing until all remaining processes have reached the same point (this has no effect if you're running on a single GPU or CPU).
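For example, a minimal sketch of guarding a save with a synchronization point (assuming `model` and `save_directory` are defined as elsewhere in this guide):

```py
accelerator.wait_for_everyone()
# Every process has finished training at this point, so it is safe to save
accelerator.save_model(model, save_directory)
```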
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Installation
Before you start, you will need to setup your environment, install the appropriate packages, and configure Accelerate. Accelerate is tested on **Python 3.8+**.
Accelerate is available on pypi and conda, as well as on GitHub. Details to install from each are below:
## pip
To install Accelerate from pypi, perform:
```bash
pip install accelerate
```
## conda
Accelerate can also be installed with conda with:
```bash
conda install -c conda-forge accelerate
```
## Source
New features are added every day that haven't been released yet. To try them out yourself, install from source.
Next, you need to launch it with `accelerate launch`.
<Tip warning={true}>
It's recommended you run `accelerate config` before using `accelerate launch` to configure your environment to your liking.
Otherwise Accelerate will use very basic defaults depending on your system setup.
</Tip>
## Using accelerate launch
Accelerate has a special CLI command to help you launch your code in your system through `accelerate launch`.
This command wraps around all of the different commands needed to launch your script on various platforms, without you having to remember what each of them is.
<Tip>
If you are familiar with launching scripts in PyTorch yourself such as with `torchrun`, you can still do this. It is not required to use `accelerate launch`.
You can also use `accelerate launch` without performing `accelerate config` first, but you may need to manually pass in the right configuration parameters.
In this case, Accelerate will make some hyperparameter decisions for you, e.g., if GPUs are available, it will use all of them by default without mixed precision.
Here is how you would use all GPUs and train with mixed precision disabled:
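For example (a representative invocation; substitute your own script name and arguments):

```bash
accelerate launch --multi_gpu {script_name.py} {--arg1} {--arg2} ...
```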
Multi-node training with Accelerate is similar to [multi-node training with torchrun](https://pytorch.org/tutorials/intermediate/ddp_series_multinode.html). The simplest way to launch a multi-node training run is to do the following:
- Copy your codebase and data to all nodes. (or place them on a shared filesystem)
- Setup your python packages on all nodes.
- Run `accelerate config` on the main single node first. After specifying the number of nodes, you will be asked to specify the rank of each node (this will be 0 for the main/master node), along with the IP address and port for the main process. This is required for the worker nodes to communicate with the main process. Afterwards, you can copy or send this config file across all of your nodes, changing the `machine_rank` to 1, 2, 3, etc. to avoid having to run the command on each node (or just follow the directions for launching with `torchrun` directly).
Once you have done this, you can start your multi-node training run by running `accelerate launch` (or `torchrun`) on all nodes.
<Tip>
It is required that the command be run on all nodes for everything to start, not just running it from the main node. You can use something like SLURM or a different process executor to wrap around this requirement and call everything from a single command.
</Tip>
<Tip>
It is recommended to use the intranet IP of your main node over the public IP for better latency. This is the `192.168.x.x` or the `172.x.x.x` address you see when you run `hostname -I` on the main node.
</Tip>
To get a better idea about multi-node training, check out our example for [multi-node training with FSDP](https://huggingface.co/blog/ram-efficient-pytorch-fsdp).
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Add Accelerate to your code
Each distributed training framework has its own way of doing things, which can require writing a lot of custom code to adapt it to your PyTorch training code and training environment. Accelerate offers a friendly way to interface with these distributed training frameworks without having to learn the specific details of each one. Accelerate takes care of those details for you, so you can focus on the training code and scale it to any distributed training environment.
In this tutorial, you'll learn how to adapt your existing PyTorch code with Accelerate and get you on your way toward training on distributed systems with ease! You'll start with a basic PyTorch training loop (it assumes all the training objects like `model` and `optimizer` have been setup already) and progressively integrate Accelerate into it.
```python
device="cuda"
model.to(device)
forbatchintraining_dataloader:
optimizer.zero_grad()
inputs,targets=batch
inputs=inputs.to(device)
targets=targets.to(device)
outputs=model(inputs)
loss=loss_function(outputs,targets)
loss.backward()
optimizer.step()
scheduler.step()
```
## Accelerator
The [`Accelerator`] is the main class for adapting your code to work with Accelerate. It knows about the distributed setup you're using such as the number of different processes and your hardware type. This class also provides access to many of the necessary methods for enabling your PyTorch code to work in any distributed training environment and for managing and executing processes across devices.
That's why you should always start by importing and creating an [`Accelerator`] instance in your script.
```python
from accelerate import Accelerator

accelerator = Accelerator()
```
The [`Accelerator`] also knows which device to move your PyTorch objects to, so it is recommended to let Accelerate handle this for you.
```diff
- device = "cuda"
+ device = accelerator.device
model.to(device)
```
## Prepare PyTorch objects
Next, you need to prepare your PyTorch objects (model, optimizer, scheduler, etc.) for distributed training. The [`~Accelerator.prepare`] method takes care of placing your model in the appropriate container (like single GPU or multi-GPU) for your training setup, adapting the optimizer and scheduler to use Accelerate's [`~optimizer.AcceleratedOptimizer`] and [`~scheduler.AcceleratedScheduler`], and creating a new dataloader that can be sharded across processes.
> [!TIP]
> Accelerate only prepares objects that inherit from their respective PyTorch classes such as `torch.optim.Optimizer`.
The PyTorch objects are returned in the same order they're sent.
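For example (a sketch using the objects from the loop above):

```python
model, optimizer, training_dataloader, scheduler = accelerator.prepare(
    model, optimizer, training_dataloader, scheduler
)
```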
Finally, remove the `to(device)` calls to the inputs and targets in the training loop because Accelerate's DataLoader classes automatically place them on the right device. You should also replace the usual `backward()` pass with Accelerate's [`~Accelerator.backward`] method, which scales the gradients for you and uses the appropriate `backward()` method depending on your distributed setup (for example, DeepSpeed or Megatron).
```diff
- inputs = inputs.to(device)
- targets = targets.to(device)
outputs = model(inputs)
loss = loss_function(outputs, targets)
- loss.backward()
+ accelerator.backward(loss)
```
Put everything together and your new Accelerate training loop should now look like this!
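A sketch of the resulting loop (putting together the pieces above; your own model, dataloader, optimizer, scheduler and loss function are assumed to exist):

```python
from accelerate import Accelerator

accelerator = Accelerator()

model, optimizer, training_dataloader, scheduler = accelerator.prepare(
    model, optimizer, training_dataloader, scheduler
)

for batch in training_dataloader:
    optimizer.zero_grad()
    inputs, targets = batch
    outputs = model(inputs)
    loss = loss_function(outputs, targets)
    accelerator.backward(loss)
    optimizer.step()
    scheduler.step()
```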
Accelerate offers additional features - like gradient accumulation, gradient clipping, mixed precision training and more - that you can add to your script to improve your training run. Let's explore these three features.
### Gradient accumulation
Gradient accumulation enables you to train on larger batch sizes by accumulating the gradients over multiple batches before updating the weights. This can be useful for getting around memory limitations. To enable this feature in Accelerate, specify the `gradient_accumulation_steps` parameter in the [`Accelerator`] class and add the [`~Accelerator.accumulate`] context manager to your script.
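A minimal sketch (accumulating over 2 batches; the surrounding objects are assumed from the loop above):

```python
accelerator = Accelerator(gradient_accumulation_steps=2)
model, optimizer, training_dataloader, scheduler = accelerator.prepare(
    model, optimizer, training_dataloader, scheduler
)

for batch in training_dataloader:
    with accelerator.accumulate(model):
        optimizer.zero_grad()
        inputs, targets = batch
        outputs = model(inputs)
        loss = loss_function(outputs, targets)
        accelerator.backward(loss)
        optimizer.step()
        scheduler.step()
```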
### Gradient clipping
Gradient clipping is a technique to prevent "exploding gradients", and Accelerate offers the following methods (see the sketch after this list):
* [`~Accelerator.clip_grad_value_`] to clip gradients to a minimum and maximum value
* [`~Accelerator.clip_grad_norm_`] for normalizing gradients to a certain value
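For instance, a sketch of clipping by norm inside the training loop (the maximum norm of 1.0 is an arbitrary example value):

```python
accelerator.backward(loss)
if accelerator.sync_gradients:
    accelerator.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
```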
### Mixed precision
Mixed precision accelerates training by using a lower precision data type like fp16 (half-precision) to calculate the gradients. For the best performance with Accelerate, the loss should be computed inside your model (like in Transformers models) because computations outside of the model are computed in full precision.
Set the mixed precision type to use in the [`Accelerator`], and then use the [`~Accelerator.autocast`] context manager to automatically cast the values to the specified data type.
> [!WARNING]
> Accelerate enables automatic mixed precision, so [`~Accelerator.autocast`] is only needed if there are other mixed precision operations besides those performed on loss by [`~Accelerator.backward`] which already handles the scaling.
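For example, a sketch of wrapping an extra mixed precision computation (`complex_loss_function` is a hypothetical function computed outside the model):

```python
with accelerator.autocast():
    loss = complex_loss_function(outputs, targets)
```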
## Save and load
Accelerate can also save and load a *model* once training is complete, or you can save the model and optimizer *state*, which could be useful for resuming training.
### Model
Once all processes are complete, unwrap the model with the [`~Accelerator.unwrap_model`] method before saving it because the [`~Accelerator.prepare`] method wrapped your model into the proper interface for distributed training. If you don't unwrap the model, saving the model state dictionary also saves any potential extra layers from the larger model and you won't be able to load the weights back into your base model.
You should use the [`~Accelerator.save_model`] method to unwrap and save the model state dictionary. This method can also save a model into sharded checkpoints or into the [safetensors](https://hf.co/docs/safetensors/index) format.
<hfoptionsid="save">
<hfoptionid="single checkpoint">
```py
accelerator.wait_for_everyone()
accelerator.save_model(model, save_directory)
```
<Tip>
For models from the [Transformers](https://hf.co/docs/transformers/index) library, save the model with the [`~transformers.PreTrainedModel.save_pretrained`] method so that it can be reloaded with the [`~transformers.PreTrainedModel.from_pretrained`] method.
To load your weights, use the [`~Accelerator.unwrap_model`] method to unwrap the model first before loading the weights. All model parameters are references to tensors, so this loads your weights inside `model`.
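A sketch of this (`path_to_checkpoint` is a hypothetical path to the saved state dictionary):

```py
import torch

unwrapped_model = accelerator.unwrap_model(model)
unwrapped_model.load_state_dict(torch.load(path_to_checkpoint))
```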
To load a sharded checkpoint or a safetensor formatted checkpoint, use the [`~accelerate.load_checkpoint_in_model`] method. This method allows you to load a checkpoint onto a specific device.
During training, you may want to save the current state of the model, optimizer, random generators, and potentially learning rate schedulers so they can be restored in the *same script*. You should add the [`~Accelerator.save_state`] and [`~Accelerator.load_state`] methods to your script to save and load states.
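For example (a minimal sketch with a hypothetical checkpoint directory):

```py
accelerator.save_state(output_dir="my_checkpoint")
# ... later, in the same script ...
accelerator.load_state("my_checkpoint")
```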
To further customize where and how states are saved through [`~Accelerator.save_state`], use the [`~utils.ProjectConfiguration`] class. For example, if `automatic_checkpoint_naming` is enabled, each saved checkpoint is stored at `Accelerator.project_dir/checkpoints/checkpoint_{checkpoint_number}`.
Any other stateful items to be stored should be registered with the [`~Accelerator.register_for_checkpointing`] method so they can be saved and loaded. Every object passed to this method to be stored must have a `load_state_dict` and `state_dict` function.
> [!TIP]
> If you have [`torchdata>=0.8.0`](https://github.com/pytorch/data/tree/main) installed, you can additionally pass `use_stateful_dataloader=True` into your [`~utils.DataLoaderConfiguration`]. This extends Accelerate's DataLoader classes with a `load_state_dict` and `state_dict` function, and makes it so `Accelerator.save_state` and `Accelerator.load_state` also track how far into the training dataset it has read when persisting the model.
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Launching distributed training from Jupyter Notebooks
This tutorial teaches you how to fine tune a computer vision model with 🤗 Accelerate from a Jupyter Notebook on a distributed system.
You will also learn how to setup a few requirements needed for ensuring your environment is configured properly, your data has been prepared properly, and finally how to launch training.
<Tip>
This tutorial is also available as a Jupyter Notebook [here](https://github.com/huggingface/notebooks/blob/main/examples/accelerate_examples/simple_cv_example.ipynb)
</Tip>
## Configuring the Environment
Before any training can be performed, an Accelerate config file must exist in the system. Usually this can be done by running the following in a terminal and answering the prompts:
```bash
accelerate config
```
However, if general defaults are fine and you are *not* running on a TPU, Accelerate has a utility to quickly write your GPU configuration into a config file via [`utils.write_basic_config`].
The following code will restart Jupyter after writing the configuration, as CUDA code was called to perform this.
<Tip warning={true}>
CUDA can't be initialized more than once on a multi-GPU system. It's fine to debug in the notebook and have calls to CUDA, but in order to finally train, a full cleanup and restart will need to be performed.
</Tip>
```python
import os

from accelerate.utils import write_basic_config

write_basic_config()  # Write a config file
os._exit(00)  # Restart the notebook
```
## Preparing the Dataset and Model
Next you should prepare your dataset. As mentioned earlier, great care should be taken when preparing the `DataLoaders` and model to make sure that **nothing** is put on *any* GPU.
If you do, it is recommended to put that specific code into a function and call that from within the notebook launcher interface, which will be shown later.
Make sure the dataset is downloaded based on the directions [here](https://github.com/huggingface/accelerate/tree/main/examples#simple-vision-example)
All that's left is to use the [`notebook_launcher`].
You pass in the function, the arguments (as a tuple), and the number of processes to train on. (See the [documentation](../package_reference/launchers) for more information)
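A minimal sketch (assuming `training_loop` is your training function and `args` are hypothetical positional arguments for it):

```python
from accelerate import notebook_launcher

args = ("fp16", 42, 64)  # hypothetical arguments for training_loop
notebook_launcher(training_loop, args, num_processes=2)
```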
To launch the training process with elasticity, enabling fault tolerance, you can use the `elastic_launch` feature provided by PyTorch. This requires setting additional parameters such as `rdzv_backend` and `max_restarts`. Here is an example of how to use `notebook_launcher` with elastic capabilities:
```python
notebook_launcher(
training_loop,
args,
num_processes=2,
max_restarts=3
)
```
As it's running, it will print the progress as well as state how many devices you ran on. This tutorial was run with two GPUs:
```python out
Launching training on 2 GPUs.
epoch 0: 88.12
epoch 1: 91.73
epoch 2: 92.58
epoch 3: 93.90
epoch 4: 94.71
```
And that's it!
Please note that [`notebook_launcher`] ignores the Accelerate config file; to launch based on the config, use:
```bash
accelerate launch
```
## Debugging
A common issue when running the `notebook_launcher` is receiving a "CUDA has already been initialized" error. This usually stems
from an import or prior code in the notebook that makes a call to the PyTorch `torch.cuda` sublibrary. To help narrow down what went wrong,
you can launch the `notebook_launcher` with `ACCELERATE_DEBUG_MODE=yes` in your environment, and an additional check
will be made when spawning that a regular process can be created and utilize CUDA without issue. (Your CUDA code can still be run afterwards.)
## Conclusion
This notebook showed how to perform distributed training from inside of a Jupyter Notebook. Some key notes to remember:
- Make sure to save any code that uses CUDA (or CUDA imports) for the function passed to [`notebook_launcher`]
- Set the `num_processes` to be the number of devices used for training (such as number of GPUs, CPUs, TPUs, etc)
- If using the TPU, declare your model outside the training loop function
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# TPU training
A [TPU (Tensor Processing Unit)](https://cloud.google.com/tpu/docs/intro-to-tpu) is a type of hardware specifically designed for training models efficiently. Accelerate supports TPU training, but there are a few things you should be aware of, namely graph compilation. This tutorial briefly discusses compilation, and for more details, take a look at the [Training on TPUs with Accelerate](../concept_guides/training_tpu) guide.
## Compilation
A TPU creates a graph of all the operations in the training step such as the forward pass, backward pass and optimizer step. This is why the first training step always takes a while because building and compiling this graph takes time. But once compilation is complete, it is cached and all subsequent steps are much faster.
The key is to avoid compiling your code again or else training is super slow. This means all your operations must be exactly the same:
* all tensors in your batches must have the same length (for example, no dynamic padding for NLP tasks)
* your code must be static (for example, no layers with for loops that have different lengths depending on the input, such as an LSTM)
## Weight tying
A common language model design is to tie the weights of the embedding and softmax layers. However, moving the model to a TPU (either yourself or passing it to the [`~Accelerator.prepare`] method) breaks the weight tying and you'll need to retie the weights.
To add special behavior (like weight tying) in your script for TPUs, set [`~Accelerator.distributed_type`] to `DistributedType.TPU` first. Then you can use the [`~transformers.PreTrainedModel.tie_weights`] method to tie the weights.
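A sketch of what this looks like for a Transformers model (assuming `accelerator` and `model` are already set up):

```py
from accelerate import DistributedType

if accelerator.distributed_type == DistributedType.TPU:
    model.tie_weights()
```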
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Troubleshoot
This guide provides solutions to some issues you might encounter when using Accelerate. Not all errors are covered because Accelerate is an active library that is continuously evolving and there are many different use cases and distributed training setups. If the solutions described here don't help with your specific error, please take a look at the [Ask for help](#ask-for-help) section to learn where and how to get help.
## Logging
Logging can help you identify where an error is coming from. In a distributed setup with multiple processes, logging can be a challenge, but Accelerate provides the [`~accelerate.logging`] utility to ensure logs are synchronized.
To troubleshoot an issue, use [`~accelerate.logging`] instead of the standard Python [`logging`](https://docs.python.org/3/library/logging.html#module-logging) module. Set the verbosity level (`INFO`, `DEBUG`, `WARNING`, `ERROR`, `CRITICAL`) with the `log_level` parameter, and then you can either:
1. Export the `log_level` as the `ACCELERATE_LOG_LEVEL` environment variable.
2. Pass the `log_level` directly to `get_logger`.
For example, to set `log_level="INFO"`:
```py
from accelerate.logging import get_logger

logger = get_logger(__name__, log_level="INFO")
```
By default, the log is called on main processes only. To call it on all processes, pass `main_process_only=False`.
If a log should be called on all processes and in order, also pass `in_order=True`.
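For example (a small sketch of both options):

```py
logger.info("My log", main_process_only=False)
logger.debug("My second log", main_process_only=False, in_order=True)
```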
## Hanging code and timeout errors
There can be many reasons why your code is hanging. Let's take a look at how to solve some of the most common issues that can cause your code to hang.
### Mismatched tensor shapes
Mismatched tensor shapes is a common issue that can cause your code to hang for a significant amount of time on a distributed setup.
When running scripts in a distributed setup, functions such as [`Accelerator.gather`] and [`Accelerator.reduce`] are necessary to grab tensors across devices to collectively perform operations on them. These (and other) functions rely on `torch.distributed` to perform a `gather` operation, which requires tensors to have the **exact same shape** across all processes. When the tensor shapes don't match, your code hangs and you'll eventually hit a timeout exception.
You can use Accelerate's operational debug mode to immediately catch this issue. We recommend enabling this mode during the `accelerate config` setup, but you can also enable it from the CLI, as an environment variable, or by manually editing the `config.yaml` file.
### Early stopping
For early stopping in distributed training, if each process has a specific stopping condition (e.g. validation loss), it may not be synchronized across all processes. As a result, a break can happen on process 0 but not on process 1 which will cause your code to hang indefinitely until a timeout occurs.
If you have early stopping conditionals, use the `set_trigger` and `check_trigger` methods to make sure all the processes
are ended correctly.
```py
# Assume `should_do_breakpoint` is a custom defined function that returns a conditional,
# and that conditional might be true only on process 1
if should_do_breakpoint(loss):
    accelerator.set_trigger()

# Later in the training script when we need to check for the breakpoint
if accelerator.check_trigger():
    break
```
### Low kernel versions on Linux
On Linux with kernel version < 5.5, hanging processes have been reported. To avoid this problem, upgrade your system to a later kernel version.
The inner function **must** take batch size as the first parameter, but we do not pass one to it when called. The wrapper will handle this for you. Any object (models, optimizers) that consumes device memory and is passed to the [`Accelerator`] also **must** be declared inside the inner function.
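The wrapper referenced here is presumably [`~utils.find_executable_batch_size`]; a minimal sketch of its use looks like this (the starting batch size of 128 is an arbitrary example value):

```py
from accelerate.utils import find_executable_batch_size

@find_executable_batch_size(starting_batch_size=128)
def inner_training_loop(batch_size):
    # Models, optimizers, and anything else consuming device memory are created here
    ...

inner_training_loop()  # called with no batch size; the wrapper supplies it
```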
If you changed the device setup and observe different model performance, it is likely you didn't update your script when moving from one setup to another. Even if you're using the same script with the same batch size, the results will still be different on a TPU, multi-GPU, and single GPU.
For example, if you were training on a single GPU with a batch size of 16 and you move to a dual GPU setup, you need to change the batch size to 8 to have the same effective batch size. This is because when training with Accelerate, the batch size passed to the dataloader is the **batch size per GPU**.
To make sure you can reproduce the results between the setups, make sure to use the same seed, adjust the batch size accordingly, and consider scaling the learning rate.
For more details and a quick reference for batch sizes, check out the [Comparing performance between different device setups](../concept_guides/performance) guide.
## Performance issues on different GPUs
If your multi-GPU setup consists of different GPUs, you may encounter some performance issues:
- There may be an imbalance in GPU memory between the GPUs. In this case, the GPU with the smaller memory will limit the batch size or the size of the model that can be loaded onto the GPUs.
- If you are using GPUs with different performance profiles, the performance will be driven by the slowest GPU you are using because the other GPUs will have to wait for it to complete its workload.
Vastly different GPUs within the same setup can lead to performance bottlenecks.
## Ask for help
If none of the solutions and advice here helped resolve your issue, you can always reach out to the community and Accelerate team for help.
- Ask for help on the Hugging Face forums by posting your question in the [Accelerate category](https://discuss.huggingface.co/c/accelerate/18). Make sure to write a descriptive post with relevant context about your setup and reproducible code to maximize the likelihood that your problem is solved!
- Post a question on [Discord](http://hf.co/join/discord), and let the team and the community help you.
- Create an Issue on the Accelerate [GitHub repository](https://github.com/huggingface/accelerate/issues) if you think you've found a bug related to the library. Include context regarding the bug and details about your distributed setup to help us better figure out what's wrong and how we can fix it.
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Loading big models into memory
When loading a pre-trained model in PyTorch, the usual workflow looks like this:
```py
import torch

my_model = ModelClass(...)
state_dict = torch.load(checkpoint_file)
my_model.load_state_dict(state_dict)
```
In plain English, those steps are:
1. Create the model with randomly initialized weights
2. Load the model weights (in a dictionary usually called a state dict) from the disk
3. Load those weights inside the model
While this works very well for regularly sized models, this workflow has some clear limitations when we deal with a huge model: in step 1, we load a full version of the model in RAM, and spend some time randomly initializing the weights (which will be discarded in step 3). In step 2, we load another full version of the model in RAM, with the pre-trained weights. If you're loading a model with 6 billion parameters, this means you will need 24GB of RAM for each copy of the model, so 48GB in total (half of it to load the model in FP16).
<Tip warning={true}>
This API is quite new and still in its experimental stage. While we strive to provide a stable API, it's possible some small parts of the public API will change in the future.
</Tip>
## How the Process Works: A Quick Overview
<Youtube id="MWCSGj9jEAo"/>
## How the Process Works: Working with Code
### Instantiating an empty model
The first tool Accelerate introduces to help with big models is a context manager [`init_empty_weights`] that helps you initialize a model without using any RAM so that step 1 can be done on models of any size. Here is how it works:
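For instance, a snippet along these lines (reconstructed to match the parameter count described next; the exact model is illustrative):

```py
from torch import nn

from accelerate import init_empty_weights

with init_empty_weights():
    model = nn.Sequential(*[nn.Linear(10000, 10000) for _ in range(1000)])
```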
This initializes an empty model with a bit more than 100B parameters. Behind the scenes, this relies on the meta device introduced in PyTorch 1.9. During the initialization under the context manager, each time a parameter is created, it is instantly moved to that device.
<Tip warning={true}>
You can't move a model initialized like this on CPU or another device directly, since it doesn't have any data. It's also very likely that a forward pass with that empty model will fail, as not all operations are supported on the meta device.
</Tip>
### Sharded checkpoints
It's possible your model is so big that even a single copy won't fit in RAM. That doesn't mean it can't be loaded: if you have one or several GPUs, this is more memory available to store your model. In this case, it's better if your checkpoint is split into several smaller files that we call checkpoint shards.
Accelerate will handle sharded checkpoints as long as you follow the following format: your checkpoint should be in a folder, with several files containing the partial state dicts, and there should be an index in the JSON format that contains a dictionary mapping parameter names to the file containing their weights. You can easily shard your model with [`~Accelerator.save_model`]. For instance, we could have a folder containing:
```bash
first_state_dict.bin
index.json
second_state_dict.bin
```
with index.json being the following file:
```
{
"linear1.weight": "first_state_dict.bin",
"linear1.bias": "first_state_dict.bin",
"linear2.weight": "second_state_dict.bin",
"linear2.bias": "second_state_dict.bin"
}
```
and `first_state_dict.bin` containing the weights for `"linear1.weight"` and `"linear1.bias"`, `second_state_dict.bin` the ones for `"linear2.weight"` and `"linear2.bias"`.
### Loading weights
The second tool Accelerate introduces is a function [`load_checkpoint_and_dispatch`], that will allow you to load a checkpoint inside your empty model. This supports full checkpoints (a single file containing the whole state dict) as well as sharded checkpoints. It will also automatically dispatch those weights across the devices you have available (GPUs, CPU RAM), so if you are loading a sharded checkpoint, the maximum RAM usage will be the size of the biggest shard.
If you want to use big model inference with Transformers models, check out this [documentation](https://huggingface.co/docs/transformers/main/en/main_classes/model#large-model-loading).
Here is how we can use this to load the [GPT2-1.5B](https://huggingface.co/marcsun13/gpt2-xl-linear-sharded) model.
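A sketch of what this can look like (the checkpoint download, config, and `GPT2Block` no-split class are assumptions for a GPT-2 style model; adapt them to your own checkpoint):

```py
from huggingface_hub import snapshot_download
from transformers import AutoConfig, AutoModelForCausalLM

from accelerate import init_empty_weights, load_checkpoint_and_dispatch

# Download the sharded checkpoint locally
weights_location = snapshot_download(repo_id="marcsun13/gpt2-xl-linear-sharded")

# Instantiate the model without allocating memory for the weights
config = AutoConfig.from_pretrained("gpt2-xl")
with init_empty_weights():
    model = AutoModelForCausalLM.from_config(config)

# Load the checkpoint and dispatch the weights across the available devices
model = load_checkpoint_and_dispatch(
    model, checkpoint=weights_location, device_map="auto", no_split_module_classes=["GPT2Block"]
)
```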
By passing `device_map="auto"`, we tell Accelerate to determine automatically where to put each layer of the model depending on the available resources:
- first, we use the maximum space available on the GPU(s)
- if we still need space, we store the remaining weights on the CPU
- if there is not enough RAM, we store the remaining weights on the hard drive as memory-mapped tensors
#### `no_split_module_classes`
This parameter indicates that some of the modules with the name `"Block"` should not be split across different devices. You should set it to include all blocks that
contain a residual connection of some kind.
#### The `device_map`
You can see the `device_map` that Accelerate picked by accessing the `hf_device_map` attribute of your model:
```py
model.hf_device_map
```
```python out
{'transformer.wte': 0,
'transformer.wpe': 0,
'transformer.drop': 0,
'transformer.h.0': 0,
...
'transformer.h.21': 0,
'transformer.h.22': 1,
'transformer.h.23': 1,
'transformer.h.24': 1,
...
'transformer.h.47': 1,
'transformer.ln_f': 1,
'lm_head': 1}
```
It's fully possible to create your own device map for the layers to use as well, specifying the GPU device to use (a number), `"cpu"`, or `"disk"`, and pass this in, as shown in the "Designing a device map" section below.
Behind the scenes, Accelerate added hooks to the model, so that:
- at each layer, the inputs are put on the right device (so even if your model is spread across several GPUs, it works)
- for the weights offloaded on the CPU, they are put on a GPU just before the forward pass and cleaned up just after
- for the weights offloaded on the hard drive, they are loaded in RAM then put on a GPU just before the forward pass and cleaned up just after
This way, your model can run for inference even if it doesn't fit on one of the GPUs or the CPU RAM!
<Tip warning={true}>
This only supports the inference of your model, not training. Most of the computation happens behind `torch.no_grad()` context managers to avoid spending some GPU memory with intermediate activations.
</Tip>
### Designing a device map
You can let Accelerate handle the device map computation by setting `device_map` to one of the supported options (`"auto"`, `"balanced"`, `"balanced_low_0"`, `"sequential"`) or create one yourself if you want more control over where each layer should go.
<Tip>
You can derive all sizes of the model (and thus compute a `device_map`) on a model that is on the meta device.
</Tip>
All the options will produce the same result when you don't have enough GPU memory to accommodate the whole model (which is to fit everything that can fit on the GPU, then offload weights to the CPU or even to the disk if there is not enough RAM).
When you have more GPU memory available than the model size, here is the difference between each option:
- `"auto"` and `"balanced"` evenly split the model on all available GPUs, making it possible for you to use a batch size greater than 1.
- `"balanced_low_0"` evenly splits the model on all GPUs except the first one, and only puts on GPU 0 what does not fit on the others. This option is great when you need to use GPU 0 for some processing of the outputs, like when using the `generate` function for Transformers models
- `"sequential"` will fit what it can on GPU 0, then move on GPU 1 and so forth (so won't use the last GPUs if it doesn't need to).
<Tip>
The options `"auto"` and `"balanced"` produce the same results for now, but the behavior of `"auto"` might change in the future if we find a strategy that makes more sense, while `"balanced"` will stay stable.
</Tip>
First note that you can limit the memory used on each GPU by using the `max_memory` argument (available in [`infer_auto_device_map`] and in all functions using it). When setting `max_memory`, you should pass along a dictionary containing the GPU identifiers (for instance `0`, `1` etc.) and the `"cpu"` key for the maximum RAM you want to use for CPU offload. The values can either be an integer (in bytes) or a string representing a number with its unit, such as `"10GiB"` or `"10GB"`.
Here is an example where we don't want to use more than 10GiB on each of the two GPUs and no more than 30GiB of CPU RAM for the model weights:
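A sketch with [`infer_auto_device_map`] (here `model` stands in for your already-instantiated model):
```python
from accelerate import infer_auto_device_map

device_map = infer_auto_device_map(
    model, max_memory={0: "10GiB", 1: "10GiB", "cpu": "30GiB"}
)
```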
<Tip warning={true}>
When a first allocation happens in PyTorch, it loads CUDA kernels which take about 1-2GB of memory depending on the GPU. Therefore you always have less usable memory than the actual size of the GPU. To see how much memory is actually used do `torch.ones(1).cuda()` and look at the memory usage.
Therefore when you create memory maps with `max_memory` make sure to adjust the available memory accordingly to avoid out-of-memory errors.
</Tip>
Additionally, if you do some additional operations with your outputs without placing them back on the CPU (for instance inside the `generate` method of Transformers) and if you placed your inputs on a GPU, that GPU will consume more memory than the others (Accelerate always places the output back on the device of the input). Therefore, if you would like to optimize the maximum batch size and you have many GPUs, give the first GPU less memory. For example, with BLOOM-176B on an 8x80GB A100 setup, a close-to-ideal map is:
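For instance, a `max_memory` along these lines matches the split the text describes (the exact values are illustrative and depend on your setup):
```python
max_memory = {0: "30GiB", 1: "46GiB", 2: "46GiB", 3: "46GiB", 4: "46GiB", 5: "46GiB", 6: "46GiB", 7: "46GiB"}
```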
As you can see, we gave the remaining 7 GPUs ~50% more memory than GPU 0.
If you opt to fully design the `device_map` yourself, it should be a dictionary with keys being module names of your model and values being a valid device identifier (for instance an integer for the GPUs) or `"cpu"` for CPU offload, `"disk"` for disk offload. The keys need to cover the whole model, you can then define your device map as you wish: for instance, if your model has two blocks (let's say `block1` and `block2`) which each contain three linear layers (let's say `linear1`, `linear2` and `linear3`), a valid device map can be:
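For the two-block example above, a couple of sketches of valid maps:
```python
# Coarse-grained: one device per block
device_map = {"block1": 0, "block2": 1}

# Or more fine-grained, down to individual sub-modules
device_map = {"block1": 0, "block2.linear1": 0, "block2.linear2": 1, "block2.linear3": 1}
```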
<Tip>
To be the most efficient, make sure your device map puts the parameters on the GPUs in a sequential manner (e.g. don't put one of the first weights on GPU 0, then weights on GPU 1 and the last weight back to GPU 0) to avoid making many transfers of data between the GPUs.
</Tip>
## CPU offload only
If you want to offload your model on CPU, you can use [`cpu_offload`]. As a result, all parameters of the model will be offloaded and only one copy of the state dict of the model will be kept. During the forward pass, parameters will be extracted from that state dict and put on the execution device and passed as they are needed, then offloaded again.
```python
from accelerate import cpu_offload

cpu_offload(model, execution_device)
```
You can also use [`cpu_offload_with_hook`]. This function offloads a model to the CPU and puts it back on the execution device when executed. The difference with [`cpu_offload`] is that the model stays on the execution device after the forward pass and is only offloaded again when the `offload` method of the returned `hook` is called. Furthermore, [`cpu_offload_with_hook`] is more performant but less memory saving. It is useful for pipelines running a model in a loop:
```python
model_1, hook_1 = cpu_offload_with_hook(model_1, execution_device)
model_2, hook_2 = cpu_offload_with_hook(model_2, execution_device, prev_module_hook=hook_1)
model_3, hook_3 = cpu_offload_with_hook(model_3, execution_device, prev_module_hook=hook_2)

hid_1 = model_1(input)
for i in range(50):
    # model1 is offloaded on the CPU at the first iteration, model 2 stays on the GPU for this whole loop.
    hid_2 = model_2(hid_1)
    # model2 is offloaded to the CPU just before this forward.
    hid_3 = model_3(hid_2)
    # For model3, you need to manually call the hook offload method.
    hook_3.offload()
```
## Disk offload only
To perform disk offload, you can use [`disk_offload`]. As a result, all parameters of the model will be offloaded as memory-mapped arrays in a given folder. During the forward pass, parameters will be accessed from that folder and put on the execution device as they are needed, then offloaded again.
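A minimal sketch (the folder name is a placeholder, and `model` / `execution_device` are assumed to be defined as in the previous section):
```python
from accelerate import disk_offload

disk_offload(model, offload_dir="offload_folder", execution_device=execution_device)
```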
## Limits and further development
We are aware of the current limitations in the API:
- [`infer_auto_device_map`] (or `device_map="auto"` in [`load_checkpoint_and_dispatch`]) tries to maximize GPU and CPU RAM it sees available when you execute it. While PyTorch is very good at managing GPU RAM efficiently (and giving it back when not needed), it's not entirely true with Python and CPU RAM. Therefore, an automatically computed device map might be too intense on the CPU. Move a few modules to the disk device if you get crashes due to a lack of RAM.
- [`infer_auto_device_map`] (or `device_map="auto"` in [`load_checkpoint_and_dispatch`]) attributes devices sequentially (to avoid moving things back and forth) so if your first layer is bigger than the size of the GPU you have, it will end up with everything on the CPU/Disk.
- [`load_checkpoint_and_dispatch`] and [`load_checkpoint_in_model`] do not perform any check on the correctness of your state dict compared to your model at the moment (this will be fixed in a future version), so you may get some weird errors if trying to load a checkpoint with mismatched or missing keys.
- The model parallelism used when your model is split on several GPUs is naive and not optimized, meaning that only one GPU works at a given time and the other sits idle.
- When weights are offloaded on the CPU/hard drive, there is no pre-fetching (yet, we will work on this for future versions) which means the weights are put on the GPU when they are needed and not before.
- Hard-drive offloading might be very slow if the hardware you run on does not have fast communication between disk and CPU (like NVMes).
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Executing and deferring jobs
When you run your usual script, instructions are executed in order. Using Accelerate to deploy your script on several
GPUs at the same time introduces a complication: while each process executes all instructions in order, some may be
faster than others.
You might need to wait for all processes to have reached a certain point before executing a given instruction. For
instance, you shouldn't save a model before being sure every process is done with training, and you wouldn't want to
continue training before all the model weights have been loaded in. To do this, just write the following line in your code:
```python
accelerator.wait_for_everyone()
```
This instruction will block all the processes that arrive first until all the other processes have reached that
point (if you run your script on just one GPU or CPU, this won't do anything).
A few example cases of when to use this utility are listed below:
<Tip>
Some of these are utilized with the [`~Accelerator.main_process_first`] context manager, which utilizes [`~Accelerator.wait_for_everyone`] to
run a particular set of code on the main process before triggering and launching the other processes.
</Tip>
## Downloading a Dataset
When downloading a dataset, you should download it first on the main process and then load the cached dataset afterward
<Tip>
`load_dataset` will perform a lock under the hood to stop multiple downloads from happening at once, but if you are downloading something
not using this library you should use this method.
</Tip>
```python
with accelerator.main_process_first():
    datasets = load_dataset("glue", "mrpc")
```
Under the hood this is the same as calling:
```python
# First do something on the main process
if accelerator.is_main_process:
    datasets = load_dataset("glue", "mrpc")
else:
    accelerator.wait_for_everyone()

# And then send it to the rest of them
if not accelerator.is_main_process:
    datasets = load_dataset("glue", "mrpc")
else:
    accelerator.wait_for_everyone()
```
## Saving the `state_dict`
When saving the `state_dict` of the model, since you would normally save one file on just the main process
you should specify that:
```python
if accelerator.is_main_process:
    model = accelerator.unwrap_model(model)
    torch.save(model.state_dict(), "weights.pth")
```
## Loading in the `state_dict`
When loading in the `state_dict` to a model, optimizer, or scheduler, you should wait
for all workers to have the weights loaded in before moving on to training
```python
with accelerator.main_process_first():
    state = torch.load("weights.pth")
    model.load_state_dict(state)
```
## Applying a multi-worker CPU operation
Applying a `map()` operation on multiple workers, such as tokenizing, should be done on the
main process first, and then propagated to each one.
```python
datasets = load_dataset("glue", "mrpc")

with accelerator.main_process_first():
    tokenized_datasets = datasets.map(
        tokenize_function,
        batched=True,
        remove_columns=["idx", "sentence1", "sentence2"],
    )
```
## Applying checks such as Early Stopping
To have a check that works with a flag set by a particular process, the `set_trigger` and `check_trigger` API should be used. Useful examples
for doing so can include situations such as using early stopping and monitoring the loss (as each loss slightly differs on each process).
Call [`Accelerator.set_trigger`] when your condition has been met, and [`Accelerator.check_trigger`] when checking if that condition has been met in any process:
```python
for (x, y) in data_loader:
    logits = model(x)
    loss = loss_func(logits, y)
    # Assume `should_do_early_stopping` is a custom defined function that returns a conditional
    if should_do_early_stopping(loss):
        accelerator.set_trigger()

    # Later in the training script when we need to check for the breakpoint
    if accelerator.check_trigger():
        break
```
<!--Copyright 2025 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# FSDP1 vs FSDP2
This guide explains the key differences between `FSDP1` and `FSDP2` and helps you migrate your existing code to use `FSDP2` with minimal changes.
## How is FSDP2 better than FSDP1?
First, we want to understand how `FSDP1` and `FSDP2` work internally to understand the differences between them. This also helps us understand the limitations of `FSDP1` and how `FSDP2` solves them.
We'll be discussing a scenario where we have a single `Layer` that contains 3 `Linear` layers and is wrapped using `FSDP` to be sharded across 2 GPUs.
First, we have to understand the original `FSDP1` and the limitations it brings. It represents each `FSDP` module as a single `FlatParameter` which is a single 1D tensor that contains all of the module parameters, which then get sharded across ranks. I.e. if you wrap the `Layer` with `FSDP1`, you'd end up with something like this:
You might notice a problem. The whole `Layer` gets flattened into a single `FlatParameter`, which then gets sharded across ranks. But if it's a single `FlatParameter` object, how do we store metadata? That is one of the limitations. Properly storing per-parameter metadata such as `dtype`, `requires_grad`, etc. is not possible without some ugly hacks.
### FSDP2
This is why `FSDP2` was introduced. It doesn't use `FlatParameter`, instead it uses `DTensor` which is short for "Distributed Tensor". Each `DTensor` basically represents a vanilla `torch.Tensor` that has been sharded across ranks. It contains metadata about the original `torch.Tensor` and how it's sharded, what is the [placement type](https://pytorch.org/docs/stable/distributed.tensor.html#module-torch.distributed.tensor.placement_types) and so on. This is why it's called `per-parameter sharding`. The following figure shows the difference:
Each Parameter of the original `Layer` is sharded across the 0th dimension, and split between 2 GPUs. Now, each `Linear` layer is a separate `DTensor` and storing metadata per-parameter is possible and straightforward.
> [!TIP]
> In the image above, the tensors were sharded across the 1st dimension only for the sake of fitting the image on the screen; in reality, they are sharded across the 0th dimension, as stated above.
## What does FSDP2 offer?
`FSDP2` is a new and improved version of PyTorch's fully-sharded data parallel training API. Its main advantage is using `DTensor` to represent sharded parameters. Compared to `FSDP1`, it offers:
- Simpler internal implementation, where each `Parameter` is a separate `DTensor`
- Enables simple partial parameter freezing because of the above, which makes methods such as [`LoRA`](https://arxiv.org/abs/2106.09685) work out of the box
- With `DTensor`, `FSDP2` supports mixing `fp8` and other parameter types in the same model out of the box
- Faster and simpler checkpointing without extra communication across ranks using `SHARDED_STATE_DICT` and [`torch.distributed.checkpoint`](https://pytorch.org/docs/stable/distributed.checkpoint.html), this way, each rank only saves its own shard and corresponding metadata
- For loading, it uses a `state_dict` of the sharded model to directly load the sharded parameters
- Support for asynchronous checkpointing, where parameters are first copied to CPU memory, after this, main thread continues training while another thread stores the parameters on disk
- Memory efficiency and deterministic memory usage, `FSDP2` doesn't use `recordStream` anymore and uses stream-to-stream synchronization (for more technical details see [this forum post](https://dev-discuss.pytorch.org/t/fsdp-cudacachingallocator-an-outsider-newb-perspective/1486) and [this issue](https://github.com/pytorch/pytorch/issues/114299))
- In the future, optimizations of the communication patterns via `torch.compile` are planned, further improving the performance and memory efficiency
## API Differences
We have already discussed the internal differences; now let's discuss the differences that you, as a user, will need to know.
Here are the main changes in configuration options when using `FSDP2` through the `accelerate` CLI:
Previous (`FSDP1`) | New (`FSDP2`) | What Changed
-- | -- | --
`--fsdp_sharding_strategy` | `--fsdp_reshard_after_forward` | replaces `--fsdp_sharding_strategy`, changed to `true` (previously `FULL_SHARD`) or `false` (previously `SHARD_GRAD_OP`)
`--fsdp_backward_prefetch` | \*\***REMOVED**\*\* | `FSDP2` uses previous `BACKWARD_PRE` option by default, as only this allows communication and computation overlap
`--fsdp_forward_prefetch` | \*\***NOT YET IMPLEMENTED**\*\* | How to implement this is under active discussion, for now it is not supported in `FSDP2`
`--fsdp_sync_module_states` | \*\***REMOVED**\*\* | with `FSDP2`, this parameter becomes redundant
`--fsdp_cpu_ram_efficient_loading` | `--fsdp_cpu_ram_efficient_loading` | if `true`, `FSDP2` will similarly load the model only on rank 0, and then parameters get synced to other ranks, this is the same behavior as `FSDP1`, however, setting `--fsdp_sync_module_states` isn't required anymore
`--fsdp_state_dict_type` | `--fsdp_state_dict_type` | `LOCAL_STATE_DICT` becomes obsolete and with `FSDP2`, `SHARDED_STATE_DICT` is the default option, which results in no extra communication and each rank saving its own shard; the other possible option is `FULL_STATE_DICT`, which results in extra communication and a spike in memory usage but saves the full model from rank 0.
`--fsdp_use_orig_params` | \*\***REMOVED**\*\* | `FSDP2` uses a `DTensor` class under the hood, which means it *always* uses the original parameters by default
\*\***NEW**\*\* | `--fsdp_version` | `1` is the default option, to not break existing code, set to `2` to use `FSDP2`
For all other options that remain unchanged, see the [`FSDP` documentation](../usage_guides/fsdp.md).
## How to Switch to FSDP2
### If using Python code:
Simply set `fsdp_version=2` when creating your plugin and replace options according to the table above.
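A minimal sketch of the Python path, assuming a recent Accelerate release where [`~utils.FullyShardedDataParallelPlugin`] exposes `fsdp_version`:
```python
from accelerate import Accelerator, FullyShardedDataParallelPlugin

# fsdp_version=2 opts in to FSDP2; all other options follow the table above
fsdp_plugin = FullyShardedDataParallelPlugin(fsdp_version=2)
accelerator = Accelerator(fsdp_plugin=fsdp_plugin)
```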
### If using a config file:
Accelerate provides a CLI utility to convert an existing `FSDP1` config file. This will automatically convert all FSDP1 settings to their FSDP2 equivalents. Use `--overwrite` to update the existing file instead of creating a new one.
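As a hedged sketch of the invocation (the command name `accelerate to-fsdp2` and the file names below are assumptions; check `accelerate --help` for the converter available in your version):
```bash
accelerate to-fsdp2 --config_file fsdp1_config.yaml --output_file fsdp2_config.yaml
# add --overwrite to update fsdp1_config.yaml in place instead of writing a new file
```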
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# FSDP vs DeepSpeed
Accelerate offers flexibility of training frameworks, by integrating two extremely powerful tools for distributed training, namely [PyTorch FSDP](../usage_guides/fsdp) and [Microsoft DeepSpeed](../usage_guides/deepspeed). The aim of this tutorial is to draw parallels, as well as to outline potential differences, to empower the user to switch seamlessly between these two frameworks.
<Tip>
To switch between the frameworks, we recommend launching code with `accelerate launch`, passing in the correct config file with `--config_file`, or passing in the respective arguments directly for [FSDP and DeepSpeed](../package_reference/cli#accelerate-launch).
Example Accelerate configurations can be found here for [DeepSpeed](../usage_guides/deepspeed#accelerate-deepspeed-plugin) and [FSDP](../usage_guides/fsdp#how-it-works-out-of-the-box), or in the [example zoo under "Launch Configurations"](../usage_guides/explore)
</Tip>
<Tip warning={true}>
This tutorial is for single-node, multi-GPU scenarios only.
</Tip>
## Configuring Functionalities
Model tensors are split into different GPUs in an attempt to scale up model sizes; this is termed *sharding* in FSDP, and *partitioning* in DeepSpeed. FSDP sharding and DeepSpeed ZeRO (partitioning) stages are configured by `--fsdp_sharding_strategy`, and `--zero_stage`, respectively. In particular, FSDP `FULL_SHARD` maps to DeepSpeed ZeRO stage `3`; see this [comprehensive mapping between FSDP sharding and DeepSpeed ZeRO settings](../usage_guides/fsdp#mapping-between-fsdp-sharding-strategies-and-deepspeed-zero-stages). The below table summarizes and groups similar settings:
Group | Framework | Configuration | Example | Restrictions (if any)
-- | -- | -- | -- | --
model | FSDP<br><br>DeepSpeed | `--fsdp_auto_wrap_policy`<br><span style="white-space:nowrap;">`--fsdp_transformer_layer_cls_to_wrap`</span><br>None | `TRANSFORMER_BASED_WRAP`<br>`<LayerClass>` | <br>Usually not needed<br>Transparent to user.
parameters summoning | FSDP<br>DeepSpeed | `--fsdp_use_orig_params`<br>None | `true` | required for `torch.compile`<br>Transparent to user
training | FSDP<br>DeepSpeed | None<br>`--gradient_accumulation_steps`<br>`--gradient_clipping` | <br>`auto`<br>`auto` | Transparent to user
For detailed descriptions of the above, refer to [`Accelerate` launch documentation](../package_reference/cli#accelerate-launch).
<Tip>
To access other DeepSpeed configurations, such as mixed precision settings,
you need to pass in a `--deepspeed_config_file`, see the [documentation](../usage_guides/deepspeed#deepspeed-config-file).
DeepSpeed can also be configured via [`DeepSpeedPlugin`], e.g., `DeepSpeedPlugin.zero_stage` is the equivalent of `--zero_stage`, and `DeepSpeedPlugin.hf_ds_config` can be used to pass `--deepspeed_config_file`.
</Tip>
<Tip>
FSDP can also be configured via [`FullyShardedDataParallelPlugin`], e.g., `FullyShardedDataParallelPlugin.sharding_strategy` is the equivalent of `--fsdp_sharding_strategy`.
</Tip>
### Checkpointing
Do note that FSDP can be configured via `--fsdp_state_dict_type` to save either full / sharded checkpoints.
<Tip>
For DeepSpeed Zero3, one could pass a `--zero3_save_16bit_model true`, which conveniently consolidates the model to a single rank and saves; this is the FSDP equivalent of `fsdp_state_dict_type: FULL_STATE_DICT`.
</Tip>
<Tip warning={true}>
For large models, consolidating the model to a single rank can be very slow.
</Tip>
<Tip>
For quicker checkpointing, for FSDP use `fsdp_state_dict_type: SHARDED_STATE_DICT`, and for DeepSpeed Zero3 [use the `zero_to_fp32.py` script to post-convert sharded checkpoints](https://www.deepspeed.ai/tutorials/zero/#extracting-weights).
</Tip>
### Offloading
FSDP only allows *all-or-nothing* offload (i.e., either offload parameters, gradients, and optimizer states, or keep them all on the GPU), but DeepSpeed can offload parameters and optimizer states separately. Furthermore, DeepSpeed also supports [offloading to NVMe](https://www.deepspeed.ai/docs/config-json/#parameter-offloading).
### Prefetching
FSDP allows two prefetching configurations `--fsdp_forward_prefetch` and `--fsdp_backward_prefetch` to improve overlap of comms / computation at a cost of extra memory, see [FSDP documentation](https://pytorch.org/docs/stable/fsdp.html).
For DeepSpeed, the prefetching will be turned on when needed, and it turns on depending on certain hyper-params like `stage3_param_persistence_threshold`, `stage3_max_reuse_distance`, etc, [that can be configured for Zero3](https://www.deepspeed.ai/docs/config-json/#parameter-offloading); `accelerate` may set these hyper-params automatically if you don't set those explicitly in the deepspeed config file.
<Tip>
For FSDP set `fsdp_backward_prefetch: BACKWARD_PRE` for improved throughput if memory allows.
</Tip>
### Model Loading
While FSDP requires an explicit `--fsdp_cpu_ram_efficient_loading true` to activate efficient model loading, `transformers` will activate a similar feature whenever DeepSpeed Zero3 is used.
<Tip>
For FSDP, whenever setting `--fsdp_cpu_ram_efficient_loading true`, `accelerate` will automatically set `sync_module_states` to true.
For RAM efficient loading, the weights will be loaded only on a single rank, which thus requires `sync_module_states` to broadcast the weights to the other ranks.
</Tip>
### Model
FSDP requires an explicit `--fsdp_auto_wrap_policy` for the algorithm to decide how to schedule the all-gather and reduce-scatter operations. But for DeepSpeed this is transparent to the user.
<Tip>
For FSDP, simply set `fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP`. With the latest [`transformers`] versions, we try our best to figure out the suitable `fsdp_transformer_layer_cls_to_wrap` for HF transformers models. However, if you get an error regarding it, please specify this.
</Tip>
### Parameters Summoning
FSDP requires an explicit `--fsdp_use_orig_params` flag if using `torch.compile`, see [the pytorch documentation](https://pytorch.org/docs/stable/fsdp.html#module-torch.distributed.fsdp). For DeepSpeed this is transparent to the user.
<Tip>
For FSDP, when using `torch.compile` please set `fsdp_use_orig_params: True`.
</Tip>
## Training
Deepspeed requires explicit `--gradient_accumulation_steps` and `--gradient_clipping` flags. For FSDP this is transparent to the user.
<Tip>
When using DeepSpeed, set `gradient_accumulation_steps: "auto"` and `gradient_clipping: "auto"` to automatically pick up values set in the [`Accelerator`] or [`TrainingArguments`] (if using `transformers`).
</Tip>
## On Differences in Data Precision Handling
To discuss how data precision is handled in both FSDP and Deepspeed, it is instructive to first give an overview of how model parameters are handled in these frameworks. Before the model / optimizer parameters are distributed across GPUs, parameter preparation is involved to first "flatten" them to one-dimensional [`torch.Tensor`](https://pytorch.org/docs/stable/tensors.html#torch-tensor)s. The implementation of FSDP / DeepSpeed varies with respect to the `dtype` in which these "flattened" parameters are stored, and there are ramifications with regard to how [`torch.Optimizer`](https://pytorch.org/docs/stable/optim.html#module-torch.optim)s allocate their `dtype`s. The table below outlines the processes for both frameworks; the "Local" column indicates the process occurring at a per-GPU level, therefore any memory overheads by upcasting should be understood to be amortized by the number of GPUs used.
<Tip>
As a rule of thumb, for stable training with automatic mixed precision, all the trainable parameters have to be in `torch.float32`.
</Tip>
Process | Local | Framework | Details
-- | -- | -- | --
Optimizer (Pre-Step) | ✅ | FSDP<br>DeepSpeed | upcasting (if any) to `torch_dtype`<br>upcasted to `float32`
Optimizer (Actual Step) | ✅ | FSDP<br>DeepSpeed | occurs in `torch_dtype`<br>occurs in `float32`
<Tip warning={true}>
Therefore when using DeepSpeed with a small number of GPUs, be aware of potentially significant memory overheads due to the upcasting during preparation.
</Tip>
<Tip>
With FSDP, in the absence of mixed precision, it is possible to operate the [`torch.Optimizer`](https://pytorch.org/docs/stable/optim.html#module-torch.optim) in low precision `torch_dtype`, which may be helpful when using a small number of GPUs.
</Tip>
<Tip warning={true}>
With mixed precision, FSDP and DeepSpeed will upcast in the model preparation step (c.f. table above). But do note that FSDP will then save checkpoints in the upcasted precision; Deepspeed may still save low precision checkpoints if `--zero3_save_16bit_model` is specified.
</Tip>
To clarify the above table, consider the concrete examples below; the optimizer pre-step and actual step are combined for brevity. With FSDP it is possible to operate in the two modes shown below, but DeepSpeed can only operate in one.
Framework | Model Loading (`torch_dtype`) | Mixed Precision | Preparation (Local) | Training | Optimizer (Local)
-- | -- | -- | -- | -- | --
FSDP | bf16 | default (none) | bf16 | bf16 | bf16
FSDP | bf16 | bf16 | fp32 | bf16 | fp32
DeepSpeed | bf16 | bf16 | fp32 | bf16 | fp32
In Accelerate this conversion happens automatically when calling [`~Accelerator.prepare`] and passing in your model.
```diff
+ from accelerate import Accelerator
+ accelerator = Accelerator()
import torch.nn as nn
- from torch.nn.parallel import DistributedDataParallel
model = nn.Linear(10,10)
+ model = accelerator.prepare(model)
```
## The slowdown in gradient accumulation
You now understand that PyTorch adds hooks to the `forward` and `backward` method of your PyTorch model when
training in a distributed setup. But how does this risk slowing down your code?
In DDP (distributed data parallel), processes are expected to perform certain operations in a specific order and
at specific points, and these must also occur at roughly the same time before moving on.
The most direct example is when you update model parameters through
`optimizer.step()`.
Without gradient accumulation, all instances of the model need to have their gradients computed, collated, and updated before moving on to the next
batch of data.
When performing gradient accumulation, you accumulate `n` loss gradients and
skip `optimizer.step()` until `n` batches have been reached. Since all training
processes only need to synchronize by the time `optimizer.step()` is called,
but gradients are still synchronized on every `backward()` if you make no modification
to your training step, this needless inter-process communication can cause a significant slowdown.
How can you avoid this overhead?
## Solving the slowdown problem
Since you are skipping model parameter updates when training on these batches, their gradients do not need to be synchronized until the point where `optimizer.step()` is actually called.
PyTorch cannot automagically tell when you need to do this, but they do provide a tool to help through the [`no_sync`](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel.no_sync) context manager
that is added to your model after converting it to DDP.
Under this context manager, PyTorch will skip synchronizing the gradients when
`.backward()` is called, and the first call to `.backward()` outside this
context manager will trigger the synchronization. See an example below:
```python
ddp_model, dataloader, optimizer = accelerator.prepare(model, dataloader, optimizer)

for index, batch in enumerate(dataloader):
    inputs, targets = batch
    # Trigger gradient synchronization on the last batch
    if index != (len(dataloader) - 1):
        with ddp_model.no_sync():
            # Gradients only accumulate
            outputs = ddp_model(inputs)
            loss = loss_func(outputs)
            accelerator.backward(loss)
    else:
        # Gradients finally sync
        outputs = ddp_model(inputs)
        loss = loss_func(outputs)
        accelerator.backward(loss)
        optimizer.step()
```
In Accelerate to make this an API that can be called no matter the training device (though it may not do anything if you are not in a distributed system!),
`ddp_model.no_sync` gets replaced with [`~Accelerator.no_sync`] and operates the same way:
```diff
  # Trigger gradient synchronization on the last batch
  if index != (len(dataloader)-1):
-     with ddp_model.no_sync():
+     with accelerator.no_sync(model):
          # Gradients only accumulate
          outputs = ddp_model(inputs)
          loss = loss_func(outputs, targets)
          accelerator.backward(loss)
  else:
      # Gradients finally sync
      outputs = ddp_model(inputs)
      loss = loss_func(outputs)
      accelerator.backward(loss)
      optimizer.step()
      optimizer.zero_grad()
```
As you may expect, the [`~Accelerator.accumulate`] function wraps around this conditional check by keeping track of the current batch number, leaving you with the final gradient accumulation API:
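A minimal sketch of that final API (assuming the [`Accelerator`] was created with `gradient_accumulation_steps` set, and reusing the placeholder names from the examples above):
```python
ddp_model, dataloader, optimizer = accelerator.prepare(model, dataloader, optimizer)

for batch in dataloader:
    with accelerator.accumulate(ddp_model):
        optimizer.zero_grad()
        inputs, targets = batch
        outputs = ddp_model(inputs)
        loss = loss_func(outputs, targets)
        accelerator.backward(loss)
        optimizer.step()
```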
As you can see, if you are not careful about how you set up your gradient synchronization, you can get upwards of a 2x slowdown during training!
If you are worried about making sure everything is done properly, we highly recommend utilizing the [`~Accelerator.accumulate`] function and passing in
`gradient_accumulation_steps` or `gradient_accumulation_plugin` to the [`Accelerator`] object so Accelerate can handle this for you.
### `no_sync` requires additional GPU memory when using FSDP
Be aware that not syncing gradients can have adverse effects while performing FSDP training. As it has been warned in `torch`, the [`no_sync` context manager for FSDP](https://pytorch.org/docs/stable/fsdp.html#torch.distributed.fsdp.FullyShardedDataParallel.no_sync) will require additional memory.
Therefore, in memory intensive situations while using FSDP, we recommend setting `sync_each_batch` to `True` in the [`~utils.GradientAccumulationPlugin`] to disable `no_sync`.
See the example below where we fine-tune Mixtral (47B parameters) on 8 A100-80GB GPUs. We see that even for a modest `gradient_accumulation_steps=2` we quickly go out-of-memory (OOM) if `no_sync` is enabled. Again, this is due to additional memory overheads due to FSDP's `no_sync`. However, if `no_sync` is disabled via `sync_each_batch=True`, then the memory consumption for `gradient_accumulation_steps=16` reverts to that of `gradient_accumulation_steps=1`.
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Accelerate's internal mechanisms
Internally, Accelerate works by first analyzing the environment in which the script is launched to determine which
kind of distributed setup is used, how many different processes there are and which one the current script is in. All
that information is stored in the [`~AcceleratorState`].
This class is initialized the first time you instantiate an [`~Accelerator`] and performs any
specific initialization your distributed setup needs. Its state is then uniquely shared through all instances of
[`~state.AcceleratorState`]. (The same can also be done with the [`PartialState`], a more barebones version from which it inherits.)
Then, when calling [`~Accelerator.prepare`], the library:
- wraps your model(s) in the container adapted for the distributed setup,
- wraps your optimizer(s) in an [`~optimizer.AcceleratedOptimizer`],
- wraps your scheduler(s) in an [`~scheduler.AcceleratedScheduler`]
- creates a new version of your dataloader(s) in a [`~data_loader.DataLoaderShard`] or [`~data_loader.DataLoaderDispatcher`]
While the model(s), optimizer(s), and scheduler(s) are just put in simple wrappers, the dataloader(s) are re-created. This is mostly
because PyTorch does not let the user change the `batch_sampler` of a dataloader once it's been created and the
library handles the sharding of your data between processes by changing that `batch_sampler` to yield every other
`num_processes` batches (if enabled).
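As a small, self-contained sketch (the toy model and data below are placeholders) showing the wrappers that come back from [`~Accelerator.prepare`]:
```python
import torch
from torch.utils.data import DataLoader, TensorDataset

from accelerate import Accelerator

accelerator = Accelerator()

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
dataloader = DataLoader(TensorDataset(torch.randn(16, 4)), batch_size=4)

model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

# The optimizer comes back as an AcceleratedOptimizer and the dataloader is re-created
# as a DataLoaderShard (or DataLoaderDispatcher), which handles the sharding described above
print(type(optimizer).__name__, type(dataloader).__name__)
```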
The [`~data_loader.DataLoaderShard`] subclasses `DataLoader` to add the following functionality:
- it synchronizes the appropriate random number generator of all processes at each new iteration, to ensure any
randomization (like shuffling) is done the exact same way across processes.
- it puts the batches on the proper device before yielding them (unless you have opted out of
`device_placement=True`).
The [`~data_loader.DataLoaderDispatcher`] subclass differs from the [`~data_loader.DataLoaderShard`] in that when iterating through the `DataLoader`, the data all starts from process 0 and is *then* split and sent off to each process, rather than this happening at the dataset level.
The random number generator synchronization will by default synchronize:
- the `generator` attribute of a given sampler (like the PyTorch `RandomSampler`) for PyTorch >= 1.6
- the main random number generator in PyTorch <=1.5.1
You can choose which random number generator(s) to synchronize with the `rng_types` argument of the main
[`Accelerator`]. In PyTorch >= 1.6, it is recommended to rely on a local `generator` to avoid
setting the same seed in the main random number generator in all processes.
<Tip warning={true}>
Synchronization of the main torch (or CUDA or XLA) random number generator will affect any other potential random
artifacts you could have in your dataset (like random data augmentation) in the sense that all processes will get
the same random numbers from the torch random modules (so will apply the same random data augmentation if it's
controlled by torch).
</Tip>
<Tip>
The randomization part of your custom sampler, batch sampler or iterable dataset should be done using a local
`torch.Generator` object (in PyTorch >= 1.6), see the traditional `RandomSampler`, as an example.
</Tip>
If you have [`torchdata>=0.8.0`](https://github.com/pytorch/data/tree/main) installed, and you have passed `use_stateful_dataloader=True` into your [`~utils.DataLoaderConfiguration`], these classes will directly inherit from `StatefulDataLoader` instead, and maintain a `state_dict`.
For more details about the internals, see the [Internals page](../package_reference/torch_wrappers).
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Low precision training methods
The release of new kinds of hardware led to the emergence of new training paradigms that better utilize them. Currently, this is in the form of training
in 8-bit precision using packages such as [TransformersEngine](https://github.com/NVIDIA/TransformerEngine) (TE) or [MS-AMP](https://github.com/Azure/MS-AMP/tree/main).
For an introduction to the topics discussed today, we recommend reviewing the [low-precision usage guide](../usage_guides/low_precision_training) as this documentation will reference it regularly.
## A Quick Chart
Below is a quick chart from the MS-AMP documentation showing the different bit-precisions for each solution during training:
Optimization Level | Computation(GEMM) | Comm | Weight | Master Weight | Weight Gradient | Optimizer States
-- | -- | -- | -- | -- | -- | --
Nvidia TE | FP8 | FP32 | FP32 | N/A | FP32 | FP32+FP32
MS-AMP O1 | FP8 | FP8 | FP16 | N/A | FP8 | FP32+FP32
MS-AMP O2 | FP8 | FP8 | FP16 | N/A | FP8 | FP8+FP16
MS-AMP O3 | FP8 | FP8 | FP8 | FP16 | FP8 | FP8+FP16
## `TransformersEngine`
`TransformersEngine` is the first solution for trying to train in 8-bit floating point. It works by using drop-in replacement layers for certain ones in a model that utilize their FP8 engine to reduce the number of bits (such as 32 to 8) without degrading the final accuracy of the model.
Specifically, Accelerate will find and replace the following layers with `TransformersEngine` versions:
* `nn.LayerNorm` for `te.LayerNorm`
* `nn.Linear` for `te.Linear`
As a result we wind up with a model that has most of its layers in BF16, while some layers are in FP8 reducing some of the memory.
Anecdotally, we have noticed that performance gains don't really start showing when using `TransformerEngine` until a large majority of the layers
in the model are made up of those two layers to replace. As a result, only larger models have shown performance improvements, typically when the number of parameters is around and upwards of a few billion.
The `TransformerEngine` can receive many different arguments that customize how it performs FP8 calculations and what they do. A full list of the arguments is available below:
* `margin`: The margin to use for the gradient scaling.
* `interval`: The interval to use for how often the scaling factor is recomputed.
* `fp8_format`: The format to use for the FP8 recipe. Must be one of `HYBRID` or `E4M3`. (Generally `HYBRID` for training, `E4M3` for evaluation)
* `amax_history_len`: The length of the history to use for the scaling factor computation.
* `amax_compute_algo`: The algorithm to use for the scaling factor computation. Must be one of `max` or `most_recent`.
* `override_linear_precision`: Whether or not to execute `fprop`, `dgrad`, and `wgrad` GEMMs in higher precision.
You can customize each of these as part of [`utils.FP8RecipeKwargs`] to help optimize performance of your models.
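For instance, a hedged sketch (the argument values are illustrative and assume the `TE` backend):
```python
from accelerate import Accelerator
from accelerate.utils import FP8RecipeKwargs

# Customize the FP8 recipe used by TransformerEngine
fp8_kwargs = FP8RecipeKwargs(
    backend="TE",
    fp8_format="HYBRID",
    amax_history_len=32,
    amax_compute_algo="max",
)
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=[fp8_kwargs])
```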
As we can see in the chart mentioned earlier, TE simply casts the computation layers into FP8, while everything else is in FP32. As a result, this winds up utilizing the most memory, but does so with the benefit of guaranteeing the least amount of loss in end accuracy during training.
## `MS-AMP`
MS-AMP takes a different approach to `TransformersEngine` by providing three different optimization levels to convert more operations in FP8 or FP16.
* The base optimization level (`O1`), passes communications of the weights (such as in DDP) in FP8, stores the weights of the model in FP16, and leaves the optimizer states in FP32. The main benefit of this optimization level is that we can reduce the communication bandwidth by essentially half. Additionally, more GPU memory is saved due to 1/2 of everything being cast in FP8, and the weights being cast to FP16. Notably, both the optimizer states remain in FP32.
* The second optimization level (`O2`) improves upon this by also reducing the precision of the optimizer states. One is in FP8 while the other is in FP16. Generally it's been shown that this will only provide a net-gain of no degraded end accuracy, increased training speed, and reduced memory as now every state is either in FP16 or FP8.
* Finally, MS-AMP has a third optimization level (`O3`) which helps during DDP scenarios such as DeepSpeed. The weights of the model in memory are fully cast to FP8, and the master weights are now stored in FP16. This reduces memory by the highest factor, as now almost everything is in FP8 and only two states are left in FP16. Currently, only DeepSpeed versions up through 0.9.2 are supported, so this capability is not included in the Accelerate integration.
## Combining the two
More experiments need to be performed but it's been noted that combining both MS-AMP and TransformersEngine can lead to the highest throughput by relying on NVIDIA's optimized FP8 operators and utilizing how MS-AMP reduces the memory overhead.
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Comparing performance across distributed setups
Evaluating and comparing the performance from different setups can be quite tricky if you don't know what to look for.
For example, you cannot run the same script with the same batch size across TPU, multi-GPU, and single-GPU with Accelerate
and expect your results to line up.
But why?
There are three reasons for this that this tutorial will cover:
1. **Setting the right seeds**
2. **Observed Batch Sizes**
3. **Learning Rates**
## Setting the Seed
While this issue has not come up as much, make sure to use [`utils.set_seed`] to fully set the seed in all distributed cases so training will be reproducible:
```python
from accelerate.utils import set_seed

set_seed(42)
```
Why is this important? Under the hood this will set **5** different seed settings:
```python
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)  # or torch.xpu.manual_seed_all, etc
# ^^ safe to call this function even if cuda is not available
if is_torch_xla_available():
    xm.set_rng_state(seed)
```
That is: the Python random state, numpy's state, torch's state, torch's device state, and, if TPUs are available, torch_xla's state.
## Observed Batch Sizes
When training with Accelerate, the batch size passed to the dataloader is the **batch size per GPU**. What this entails is
a batch size of 64 on two GPUs is truly a batch size of 128. As a result, this needs to be accounted for when testing on a single GPU,
and similarly for TPUs.
The below table can be used as a quick reference to try out different batch sizes:
Single GPU Batch Size | Multi GPU Equivalent Batch Size | TPU Equivalent Batch Size
-- | -- | --
256 | 128 | 32
128 | 64 | 16
64 | 32 | 8
32 | 16 | 4
<Tip>
In this example, there are two GPUs for "Multi-GPU" and a TPU pod with 8 workers
</Tip>
## Learning Rates
As noted in multiple sources[[1](https://aws.amazon.com/blogs/machine-learning/scalable-multi-node-deep-learning-training-using-gpus-in-the-aws-cloud/)][[2](https://docs.nvidia.com/clara/clara-train-sdk/pt/model.html#classification-models-multi-gpu-training)], the learning rate should be scaled *linearly* based on the number of devices present. The below
snippet shows doing so with Accelerate:
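A minimal sketch of that scaling (`model` and the base learning rate are placeholders):
```python
from torch.optim import AdamW

from accelerate import Accelerator

accelerator = Accelerator()

# Scale the base learning rate linearly by the number of processes
learning_rate = 1e-3 * accelerator.num_processes
optimizer = AdamW(params=model.parameters(), lr=learning_rate)
```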
<Tip>
Since users can have their own learning rate schedulers defined, we leave this up to the user to decide if they wish to scale their learning rate or not.
</Tip>
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Training on TPUs
Training on TPUs can be slightly different from training on multi-gpu, even with Accelerate. This guide aims to show you
where you should be careful and why, as well as the best practices in general.
## Training in a Notebook
The main caveat when training on TPUs comes from the [`notebook_launcher`]. As mentioned in the [notebook tutorial](../usage_guides/notebook), you need to
restructure your training code into a function that can get passed to the [`notebook_launcher`] function and be careful about not declaring any tensors on the GPU.
While on a TPU that last part is not as important, a critical part to understand is that when you launch code from a notebook you do so through a process called **forking**.
When launching from the command-line, you perform **spawning**, where a python process is not currently running and you *spawn* a new process in. Since your Jupyter notebook is already
utilizing a python process, you need to *fork* a new process from it to launch your code.
Where this becomes important is in regard to declaring your model. On forked TPU processes, it is recommended that you instantiate your model *once* and pass this into your
training function. This is different than training on GPUs where you create `n` models that have their gradients synced and back-propagated at certain moments. Instead, one
model instance is shared between all the nodes and it is passed back and forth. This is important especially when training on low-resource TPUs such as those provided in Kaggle kernels or
on Google Colaboratory.
Below is an example of a training function passed to the [`notebook_launcher`] if training on CPUs or GPUs:
<Tip>
This code snippet is based off the one from the `simple_nlp_example` notebook found [here](https://github.com/huggingface/notebooks/blob/main/examples/accelerate_examples/simple_nlp_example.ipynb) with slight modifications for the sake of simplicity.
</Tip>
<Tip>
The `notebook_launcher` will default to 8 processes if Accelerate has been configured for a TPU
</Tip>
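A heavily simplified sketch of such a function (the toy model and data below stand in for the real `simple_nlp_example` code, and the process count is illustrative):
```python
import torch
from torch.utils.data import DataLoader, TensorDataset

from accelerate import Accelerator, notebook_launcher


def training_function():
    accelerator = Accelerator()
    model = torch.nn.Linear(10, 2)
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
    dataloader = DataLoader(
        TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,))), batch_size=8
    )
    model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

    model.train()
    for inputs, targets in dataloader:
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(inputs), targets)
        accelerator.backward(loss)
        optimizer.step()


notebook_launcher(training_function, num_processes=2)
```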
If you use this example and declare the model *inside* the training loop, then on a low-resource system you will potentially see an error
like:
```
ProcessExitedException: process 0 terminated with signal SIGSEGV
```
This error is *extremely* cryptic but the basic explanation is you ran out of system RAM. You can avoid this entirely by reconfiguring the training function to
accept a single `model` argument, and declare it in an outside cell:
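A sketch of that pattern (`create_model` and `training_function` are hypothetical names for your own helpers):
```python
# Cell 1: declare the model once, outside the training function
model = create_model()  # hypothetical helper that builds your model

# Cell 2: the training function accepts the model as its single argument
notebook_launcher(training_function, args=(model,), num_processes=8)
```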
<Tip>
The above workaround is only needed when launching a TPU instance from a Jupyter Notebook on a low-resource server such as Google Colaboratory or Kaggle. If
using a script or launching on a much beefier server declaring the model beforehand is not needed.
</Tip>
## Mixed Precision and Global Variables
As mentioned in the [mixed precision tutorial](../usage_guides/mixed_precision), Accelerate supports fp16 and bf16, both of which can be used on TPUs.
That being said, ideally `bf16` should be utilized as it is extremely efficient to use.
There are two "layers" when using `bf16` and Accelerate on TPUs, at the base level and at the operation level.
At the base level, this is enabled when passing `mixed_precision="bf16"` to `Accelerator`, such as:
```python
accelerator = Accelerator(mixed_precision="bf16")
```
By default, this will cast `torch.float` and `torch.double` to `bfloat16` on TPUs.
Concretely, this sets the `XLA_USE_BF16` environment variable to `1`.
There is a further configuration you can perform which is setting the `XLA_DOWNCAST_BF16` environmental variable. If set to `1`, then
`torch.float` is `bfloat16` and `torch.double` is `float32`.
This is performed in the `Accelerator` object when passing `downcast_bf16=True`:
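Following the statement above, a sketch would look like this (assuming your Accelerate version accepts `downcast_bf16` directly on the `Accelerator`):
```python
accelerator = Accelerator(mixed_precision="bf16", downcast_bf16=True)
```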
Using downcasting instead of bf16 everywhere is good for when you are trying to calculate metrics, log values, and more where raw bf16 tensors would be unusable.
## Training Times on TPUs
As you launch your script, you may notice that training seems exceptionally slow at first. This is because TPUs
first run through a few batches of data to see how much memory to allocate before finally utilizing this configured
memory allocation extremely efficiently.
If you notice that your evaluation code to calculate the metrics of your model takes longer due to a larger batch size being used,
it is recommended to keep the batch size the same as the training data if it is too slow. Otherwise the memory will reallocate to this
new batch size after the first few iterations.
<Tip>
Just because the memory is allocated does not mean it will be used or that the batch size will increase when going back to your training dataloader.
</Tip>
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Accelerate
Accelerate is a library that enables the same PyTorch code to be run across any distributed configuration by adding just four lines of code! In short, training and inference at scale made simple, efficient and adaptable.
Built on `torch_xla` and `torch.distributed`, Accelerate takes care of the heavy lifting, so you don't have to write any custom code to adapt to these platforms.
Convert existing codebases to utilize [DeepSpeed](usage_guides/deepspeed), perform [fully sharded data parallelism](usage_guides/fsdp), and have automatic support for mixed-precision training!
<Tip>
To get a better idea of this process, make sure to check out the [Tutorials](basic_tutorials/overview)!
</Tip>
This code can then be launched on any system through Accelerate's CLI interface:
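For example (with `my_script.py` as a placeholder for your training script):
```bash
accelerate launch my_script.py
```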
<p class="text-gray-700">Practical guides to help you achieve a specific goal. Take a look at these guides to learn how to use Accelerate to solve real-world problems.</p>
<p class="text-gray-700">High-level explanations for building a better understanding of important topics such as avoiding subtle nuances and pitfalls in distributed training and DeepSpeed.</p>