Compare commits

...

42 Commits

Author SHA1 Message Date
f57136a0d4 Release: v1.0.0rc0 2024-09-13 08:20:30 -04:00
e9e5a73fcc POC: multiple model/configuration DeepSpeed support (#3097)
* Bookmark

* Migratory

* Uncomment

* Rm name to model for now

* Rm container

* Left: test

* Allow only wrapping one model

* Add warning but only ref once

* Refine

* Update src/accelerate/accelerator.py

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

* Finish stas nits

* Clean

* Fixup test + test writing

* Fully working

* Fin

* Nit

* Quality

* Update src/accelerate/accelerator.py

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

* Actionable error

* Make note of when its enabled

* Apply suggestions from code review

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

* Merge tests

* Merge

* Add currently broken test script

* Push the working implementation

* Fin

* Add guards for user behavior

* Test nits

* TODO: finish knowledge distillation example

* Update tests/deepspeed/test_deepspeed_multiple_model.py

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Allow for dict-like interface

* Get rid of disable

* Uncomment

* Complete rewrite to force a dict to be used

* Working tests/fin

* Use name as stas suggestion

* Clean

* docnit

* toctree

* toctree

* Missing ref

* Put in break

* Smaller diff

* Make note on how to use zeroinit

* Make note about accelerator ds plugin

* More docnits

* Apply suggestions from code review

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Limit users to not pass in another ds plugin to another accelerator

* not implemented err + Make a note about why no params

* Apply suggestions from code review from Stas

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

* Add deepspeed_plugins arg + update doc

* Plugin -> plugins

* Change enable() -> select()

* Update ref properly + test

* Be consistent, model1,model2...

* first_, second_

* A few more auto values

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>

---------

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
2024-09-13 07:28:06 -04:00
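The bullets above track the API as it converged: a dict of named `DeepSpeedPlugin`s passed via a new `deepspeed_plugins` argument, with `enable()` renamed to `select()` for switching between them. A minimal sketch of that usage — the plugin names and the two ZeRO config files are illustrative assumptions:

```python
from accelerate import Accelerator
from accelerate.utils import DeepSpeedPlugin

# Keyed by name, per the "Complete rewrite to force a dict to be used" note
deepspeed_plugins = {
    "student": DeepSpeedPlugin(hf_ds_config="zero2_config.json"),
    "teacher": DeepSpeedPlugin(hf_ds_config="zero3_config.json"),
}
accelerator = Accelerator(deepspeed_plugins=deepspeed_plugins)

# Switch the active configuration with select() (renamed from enable())
deepspeed_plugins["teacher"].select()
```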
79a8426416 🚨🚨🚨 The Great Deprecation 🚨🚨🚨 (#3098)
* The great purge

* Clean

* Some more fixings

* Some more deprecations Benjamin found

* Fix kwarghandler test
2024-09-12 21:12:32 -04:00
8a43837cc9 [docs] More docstrings (#3108) 2024-09-12 15:28:36 -04:00
a768b2b753 No more t5 (#3107) 2024-09-12 13:27:15 -04:00
85b1a03552 Update image ref for docs (#3105)
* Update image

* Fin
2024-09-11 15:44:39 -04:00
fc52fa969e [docs] Doc sprint (#3099)
* docs sprint

* youtube id

* feedback
2024-09-11 13:31:47 -04:00
3a670bd0da MAINT: Permission for GH token in stale.yml (#3102)
See https://github.com/huggingface/peft/pull/2061 in PEFT.

This restores the functionality of the stale bot after permissions for
the token have been limited. The action still shows errors for PEFT but
the bot appears to work fine.
2024-09-11 13:27:15 -04:00
b32d8bcb75 [docs] DataLoaderConfiguration docstring (#3103) 2024-09-11 13:26:56 -04:00
d5b7b70e06 MS-AMP support (w/o FSDP) (#3093)
* MS-AMP support sans FSDP

* Fix import

* Fixings

* Last Benjamin nit

* New ruff version cleaning

* Update src/accelerate/accelerator.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2024-09-10 12:25:45 -04:00
1ce2eb6385 Revert "Enable Unwrapping for Model State Dicts (FSDP) (#2959)" (#3096)
This reverts commit f35cbd1f023db2c7a4972388df3a34274cca7939.
2024-09-10 17:37:22 +02:00
3fd02e60dc MAINT: Upgrade ruff to v0.6.4 (#3095)
* MNT Upgrade ruff to 0.6.4

Currently used version, 0.2.1, is quite old at this point.

Not a lot needed to be changed:

- Change ruff version in setup.py
- Remove deprecated ignore-init-module-imports option for ruff
- Type comparison should use is and not ==
- Use f-string instead of % formatting
- Some line wrapping and empty lines

* Oops
2024-09-10 10:43:37 -04:00
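The two code-level rules listed above amount to changes like the following hypothetical before/after (not taken from the diff itself):

```python
# Before: flagged by ruff 0.6.4
if type(config) == dict:
    message = "found %d keys" % len(config)

# After: identity comparison for types, f-string instead of % formatting
if type(config) is dict:
    message = f"found {len(config)} keys"
```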
ed9a574564 Update README.md to include distributed image generation gist (#3077)
* Update README.md to include distributed image generation gist

* add script
2024-09-10 10:42:35 -04:00
7d3bbe721b fix skip_keys usage in forward hooks (#3088)
* fix skip_keys

* fix linting
2024-09-10 14:12:17 +02:00
4b4c036933 use the correct available memory API for XPU (#3076)
* fix

* update

* remove blank line

* update

* add check

* add  imports

* warning for both

* reformat
2024-09-09 10:31:31 -04:00
e7e01812df fix bug in _get_named_modules (#3052)
* bug fix

* bug fix
2024-09-06 18:30:45 +02:00
5ad982ac51 Support sequential cpu offloading with torchao quantized tensors (#3085) 2024-09-06 08:49:23 +02:00
9d67867ad9 Re-enable setting state dict type (#3084) 2024-09-05 12:56:26 -04:00
52b3421d8f Fix three typos in src/accelerate/data_loader.py (#3082)
* Update data_loader.py

Fix a typo in line 678: "datalaoder" -> "dataloader"

* Fix typos in data_loader.py
2024-09-05 11:38:47 -04:00
f1ca8ac78f Allow DataLoaderAdapter subclasses to be pickled by implementing __reduce__ (#3074)
* initial fix for breaking accelerator pickling

* cleanup

* skip_first_batches should be used on raw dls

* multigpu sanity test

* bugs

* does this work with iterable dsets?

* fix typo

* ignore these commits, i'm just syncing the origin so i can test on my cloud workstation

* comment out failing tests, unsure if those are existing bugs or a recent regression

* torch 2.4.0?

* pickling generator issues

* test_pickle_accelerator

* test_pickle_accelerator should work now)

* base.__len__() -> len(base)

* undo reduce

* undo super().__reduce__() again

* pass args through superclass

* remove prints

* doc changes + make style && make quality
2024-09-05 11:25:37 -04:00
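The `__reduce__` approach named in the title can be illustrated generically: an object whose class carries dynamically generated state (as `DataLoaderAdapter` subclasses do) can tell pickle to rebuild it from its constructor arguments instead. The sketch below is illustrative only, not Accelerate's actual internals:

```python
import pickle

from torch.utils.data import DataLoader


class PicklableLoaderSketch(DataLoader):
    def __init__(self, dataset, batch_size=1):
        super().__init__(dataset, batch_size=batch_size)
        self._init_args = (dataset, batch_size)

    def __reduce__(self):
        # Rebuild from constructor args rather than pickling dynamic class state
        return (self.__class__, self._init_args)


loader = PicklableLoaderSketch(list(range(10)), batch_size=2)
restored = pickle.loads(pickle.dumps(loader))
```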
ab89fc7e1d Fix FSDP auto_wrap using characters instead of full str for layers (#3075) 2024-09-04 12:44:32 -04:00
b5235f21d8 0.35.0.dev 2024-09-02 18:18:42 -04:00
8931e5e48c Remove skip_first_batches support for StatefulDataloader and fix all the tests (#3068)
* Pippy tests - good

* Fix dataloader example tests

* SD issue

* Rm test

* Docs

* Rm from doc
2024-09-02 18:14:24 -04:00
a84859242d Speed up tests by shaving off subprocess when not needed (#3042)
* bookmark

* Continue making improvements

* Bookmark

* More

* Format
2024-09-02 12:12:55 -04:00
758d6243a7 add set_epoch for MpDeviceLoaderWrapper (#3053)
* add set_epoch for MpDeviceLoaderWrapper

* fix one over-indented space
2024-09-02 11:47:39 -04:00
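`set_epoch` is how prepared loaders tell their sampler (or, here, the wrapped XLA device loader) which epoch is running, so shuffling stays deterministic but differs per epoch. A typical call site, sketched with a plain PyTorch loader and the usual `hasattr` guard:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

loader = DataLoader(TensorDataset(torch.arange(8)), shuffle=True)

for epoch in range(3):
    # Wrapped/prepared loaders expose set_epoch; plain DataLoaders do not,
    # hence the guard
    if hasattr(loader, "set_epoch"):
        loader.set_epoch(epoch)
    for (batch,) in loader:
        pass
```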
b07ad2adf2 Fix typo in comment (#3045)
* Fix typo in comment

* Fix typo in comment: quality check
2024-09-02 11:47:04 -04:00
1d09a20fc1 use duck-typing to ensure underlying optimizer supports schedulefree hooks (#3055)
* use duck-typing to ensure underlying optimizer supports schedulefree hooks

* fixup
2024-09-02 11:43:18 -04:00
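The duck-typing in question can be sketched as a hypothetical helper: schedule-free optimizers expose `train()`/`eval()` hooks on the optimizer object, so probing for the hook is more robust than an `isinstance` check against the library's classes:

```python
def maybe_switch_mode(optimizer, training: bool):
    # Duck-typing: call the hook if the underlying optimizer provides it,
    # without importing schedulefree just for an isinstance check
    hook = getattr(optimizer, "train" if training else "eval", None)
    if callable(hook):
        hook()
```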
3fcc9461c4 Do not import transformer_engine on import (#3056)
* Do not import `transformer_engine` on import

* fix message

* add test

* Update test_imports.py

* resolve comment 1/2

* resolve comment 1.5/2

* lint

* more lint

* Update tests/test_imports.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* fmt

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-08-28 09:06:13 -04:00
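A generic sketch of the deferred-import pattern this commit applies (not Accelerate's exact code): the heavyweight dependency is imported inside the code path that needs it, so `import accelerate` succeeds on machines without `transformer_engine` installed.

```python
_te = None


def _get_transformer_engine():
    # Deferred import: only require transformer_engine when an FP8
    # code path actually runs, not at package import time
    global _te
    if _te is None:
        import transformer_engine.pytorch as te

        _te = te
    return _te
```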
939ce400cb Update torchpippy (#2938)
* rm warning

* Take 3

* Take 4

* Annotate

* Take 6

* Updated

* Spec

* Last fix

* Don't padd input

* Finished

* Continue refactor

* Rm comment

* Adjust the err

* Start adjustment

* GPT2 works, T5 does not

* llama too now I think

* Flag the t5 example
2024-08-26 14:21:13 -04:00
c2120927b0 Add FP8 docker images (#3048)
* Add fp8 docker images

* Add more docker images

* Rv

* bring back ds

* Less diffy

* No need for sep tag
2024-08-26 12:12:34 -04:00
654e1d9984 Add a SLURM example with minimal config (#2950)
* Add an example with minimal config

* Improve

* Even more minimal

* Rm slurm arg

* Update examples/slurm/submit_multinode_fsdp.sh

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2024-08-26 10:38:10 -04:00
8c3aded21a Update CONTRIBUTING.md Setup Instructions (#3046) 2024-08-26 10:22:29 -04:00
2789933938 Decouple prepare_data_loader() from Accelerator (#3047) 2024-08-26 10:19:59 -04:00
726140cad2 Fixup dataloader state dict bugs + incorporate load/save_state API (#3034)
* v1

* More testing, need to try on H100

* Bigger batch for h100 test

* test tweak

* Fixup all tests!

* Bookmark

* Fix issues, working now

* rm num samples

* Uncomment

* Give stateful dl end of dl

* Make skip DL stateful

* Migrate to update_state_dict

* try/finally

* Add comments to test

* rm comment

* Document

* refactor out for eventual override

* Doc nit

* Brute force it
2024-08-23 15:13:33 -04:00
2d4f1dda7e Fix batch_sampler maybe None error (#3025)
* Fix batch_sampler maybe None

For more details, see: https://github.com/huggingface/accelerate/issues/3011

* Update test_data_loader.py

Add unit test for dataloader with batch_size=None when using Iterabledataset

* Update tests/test_data_loader.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Fix inconsistent indentation

Fix inconsistent indentation

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-08-22 20:02:33 -04:00
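The failure mode, sketched: with `batch_size=None` (common for `IterableDataset`s that batch internally), PyTorch disables automatic batching and `DataLoader.batch_sampler` is `None`, so any code reading `batch_sampler.batch_size` must guard first.

```python
from torch.utils.data import DataLoader, IterableDataset


class Stream(IterableDataset):
    def __iter__(self):
        yield from range(8)


loader = DataLoader(Stream(), batch_size=None)
assert loader.batch_sampler is None  # the attribute the fix guards against
batch_size = loader.batch_sampler.batch_size if loader.batch_sampler is not None else None
```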
c0cf860dc6 Fix fp8 benchmark on single GPU (#3032) 2024-08-22 16:54:32 -04:00
ad3f574a3b Add early support for torchdata.stateful_dataloader.StatefulDataLoader within the Accelerator (#2895)
* temporary commit

* checkout?

* dataloader wrapper

* tmp

* weird failing test

* trying multiple inheritance

* DataLoaderAdapter

* make style

* Some dark magic dynamic reflection (for backwards compat)

* typo

* some tests

* more mixin stuff

* maybe found broken test?

* this is a very invasive feature

* i think the feature is done?

* add xpu support (#2864)

* better tests

* discovered a bug

* maybe fixed bug?

* make style

* hopefully this is PR ready

* properly skip tests

* parameterize

* temporary commit

* checkout?

* dataloader wrapper

* tmp

* weird failing test

* trying multiple inheritance

* DataLoaderAdapter

* make style

* Some dark magic dynamic reflection (for backwards compat)

* typo

* some tests

* more mixin stuff

* maybe found broken test?

* this is a very invasive feature

* i think the feature is done?

* better tests

* discovered a bug

* maybe fixed bug?

* make style

* hopefully this is PR ready

* properly skip tests

* parameterize

* Update src/accelerate/utils/dataclasses.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Update src/accelerate/data_loader.py

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* merge conflicts

* move imports

* make style

* merges are breaking tests

* fix test name

* Require safetensors>=0.4.3

* undo last commit

* minor style

* address pr comments

* Torchdata version 0.8.0 is stable now

* added docs and require torchdata>=0.8.0 for testing

* test base_dataloader attr doesn't cause infinite recursion

* address pr

* replace super().__iter__ with self.base_dataloader.__iter__

---------

Co-authored-by: Fanli Lin <fanli.lin@intel.com>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-08-22 08:43:45 -04:00
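For context, the torchdata API this PR adapts looks like this in isolation (requires `torchdata>=0.8.0`; the list dataset is a stand-in):

```python
from torchdata.stateful_dataloader import StatefulDataLoader

loader = StatefulDataLoader(list(range(100)), batch_size=10)
it = iter(loader)
next(it)                     # consume one batch
state = loader.state_dict()  # records how far the active iterator has read

# A fresh loader resumes mid-epoch from the saved state
resumed = StatefulDataLoader(list(range(100)), batch_size=10)
resumed.load_state_dict(state)
```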
1a6af0bd6d Improve config handling and add a zoo (#3029)
* Improve config handling and add a zoo

* Docs

* rm comment

* Tweak doc
2024-08-20 10:40:21 -04:00
52fae0960c Add end_training/destroy_pg to everything and unpin numpy (#3030)
* Add end_training/destroy_pg to everything

* Carry over to AcceleratorState

* If forked, ignore

* More numpy fun

* Skip only init
2024-08-20 10:40:12 -04:00
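A sketch of the resulting call site: scripts end with `end_training()`, which per this commit now also tears down the process group.

```python
from accelerate import Accelerator

accelerator = Accelerator()
# ... training loop ...
accelerator.end_training()  # now also destroys the process group
```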
7ffe7662ca Fix torch version check (#3024)
* Fix torch version check

* Adjust to simply change the FSDP pytorch v

* Forgot one, but keep consistent
2024-08-19 11:42:20 -04:00
5536a3a893 Set correct NPU backend and distributed_type when using transfer_to_npu (#3021)
* fix npu setting

* fix npu setting

* add code comments

---------

Co-authored-by: yangyuanhang7 <yangyuanhang7@jd.com>
2024-08-19 11:18:16 -04:00
7ec8eab955 Tweak defaults for quantized-typed FP8 TE weights (#3018)
* Tweak defaults

* Can't forget about CLI

* Update docs
2024-08-19 07:47:54 -04:00
158 changed files with 4038 additions and 909 deletions

View File

@@ -82,3 +82,23 @@ jobs:
          push: true
          tags: huggingface/accelerate:gpu-deepspeed-release-${{needs.get-version.outputs.version}}

  version-cuda-fp8-transformerengine:
    name: "Latest Accelerate GPU FP8 TransformerEngine [version]"
    runs-on:
      group: aws-g6-4xlarge-plus
    needs: get-version
    steps:
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      - name: Login to DockerHub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASSWORD }}
      - name: Build and Push GPU
        uses: docker/build-push-action@v4
        with:
          file: docker/accelerate-gpu/Dockerfile
          push: true
          tags: huggingface/accelerate:gpu-fp8-transformerengine-release-${{needs.get-version.outputs.version}}

View File

@@ -86,3 +86,25 @@ jobs:
            huggingface/accelerate:gpu-deepspeed-nightly
            huggingface/accelerate:gpu-deepspeed-nightly-${{ env.date }}

  latest-cuda-fp8-transformerengine:
    name: "Latest Accelerate GPU FP8 TransformerEngine [dev]"
    runs-on:
      group: aws-g6-4xlarge-plus
    steps:
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      - name: Login to DockerHub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASSWORD }}
      - name: Get current date
        id: date
        run: |
          echo "date=$(date '+%Y-%m-%d')" >> $GITHUB_ENV
      - name: Build and Push GPU
        uses: docker/build-push-action@v4
        with:
          file: benchmarks/fp8/Dockerfile
          push: true
          tags: huggingface/accelerate:gpu-fp8-transformerengine-nightly-${{ env.date }}

View File

@@ -10,6 +10,8 @@ jobs:
    name: Close Stale Issues
    if: github.repository == 'huggingface/accelerate'
    runs-on: ubuntu-latest
    permissions:
      issues: write
    env:
      GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    steps:

View File

@@ -123,12 +123,15 @@ Follow these steps to start contributing:
4. Set up a development environment by running the following command in a conda or a virtual environment you've created for working on this library:
```bash
$ pip install -e ".[quality]"
$ pip install -e ".[dev]"
```
This will install all testing and linting/code quality dependencies for the library (see `quality`, `test_dev`,
`test_prod` targets in [`setup.py`](./setup.py)).
(If accelerate was already installed in the virtual environment, remove
it with `pip uninstall accelerate` before reinstalling it in editable
mode with the `-e` flag.)
mode with the `-e` flag).
Alternatively, if you are using [Visual Studio Code](https://code.visualstudio.com/Download), the fastest way to get set up is by using
the provided Dev Container. Documentation on how to get started with dev containers is available [here](https://code.visualstudio.com/docs/remote/containers).

View File

@@ -157,6 +157,8 @@ accelerate launch --multi_gpu --num_processes 2 examples/nlp_example.py
To learn more, check the CLI documentation available [here](https://huggingface.co/docs/accelerate/package_reference/cli).
Or view the configuration zoo [here](https://github.com/huggingface/accelerate/blob/main/examples/config_yaml_templates/)
## Launching multi-CPU run using MPI
🤗 Here is another way to launch multi-CPU run using MPI. You can learn how to install Open MPI on [this page](https://www.open-mpi.org/faq/?category=building#easy-build). You can use Intel MPI or MVAPICH as well.
@@ -256,7 +258,7 @@ pip install accelerate
- multi-GPU on several nodes (machines)
- TPU
- FP16/BFloat16 mixed precision
- FP8 mixed precision with [Transformer Engine](https://github.com/NVIDIA/TransformerEngine)
- FP8 mixed precision with [Transformer Engine](https://github.com/NVIDIA/TransformerEngine) or [MS-AMP](https://github.com/Azure/MS-AMP/)
- DeepSpeed support (Experimental)
- PyTorch Fully Sharded Data Parallel (FSDP) support (Experimental)
- Megatron-LM support (Experimental)

View File

@@ -0,0 +1,12 @@
FROM ghcr.io/azure/msamp

RUN pip install transformers evaluate datasets

RUN git clone https://github.com/huggingface/accelerate
RUN cd accelerate && \
    pip install -e . && \
    cd benchmarks/fp8

CMD ["bash"]

View File

@@ -0,0 +1,123 @@
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This script tests to ensure that `accelerate` performs at the same level as raw `MS-AMP`.
This particular script verifies this for DDP training.
"""
import evaluate
import msamp
import torch
from fp8_utils import evaluate_model, get_training_utilities
from torch.nn.parallel import DistributedDataParallel as DDP
from accelerate import Accelerator
from accelerate.state import AcceleratorState
from accelerate.utils import FP8RecipeKwargs, set_seed
MODEL_NAME = "bert-base-cased"
METRIC = evaluate.load("glue", "mrpc")
def train_baseline(opt_level="O2"):
    set_seed(42)
    scaler = torch.cuda.amp.GradScaler()
    model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(MODEL_NAME)
    accelerator = Accelerator()
    device = accelerator.device
    model, optimizer = msamp.initialize(model, optimizer, opt_level=opt_level)
    model.to(device)

    # Convert the model to DDP
    device_ids, output_device = [accelerator.local_process_index], accelerator.local_process_index
    model = DDP(model, device_ids=device_ids, output_device=output_device)

    base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
    model.train()

    for i, batch in enumerate(train_dataloader):
        with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
            outputs = model(**batch)
            loss = outputs.loss
        scaler.scale(loss).backward()
        optimizer.step()
        optimizer.zero_grad()
        lr_scheduler.step()

    trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)

    assert (
        trained_model_results["accuracy"] > base_model_results["accuracy"]
    ), f'Accuracy should be higher for the trained model: {trained_model_results["accuracy"]} > {base_model_results["accuracy"]}'
    assert (
        trained_model_results["f1"] > base_model_results["f1"]
    ), f'F1 score should be higher for the trained model: {trained_model_results["f1"]} > {base_model_results["f1"]}'

    return base_model_results, trained_model_results


def train_integration(opt_level="O2"):
    kwargs_handlers = [FP8RecipeKwargs(backend="msamp", opt_level=opt_level)]
    AcceleratorState()._reset_state(True)
    accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=kwargs_handlers)
    set_seed(42)
    model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
        MODEL_NAME, accelerator=accelerator
    )

    model, optimizer = accelerator.prepare(model, optimizer)
    base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
    model.train()

    for i, batch in enumerate(train_dataloader):
        with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
            outputs = model(**batch)
            loss = outputs.loss
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
        lr_scheduler.step()

    trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)

    assert (
        trained_model_results["accuracy"] > base_model_results["accuracy"]
    ), f'Accuracy should be higher for the trained model: {trained_model_results["accuracy"]} > {base_model_results["accuracy"]}'
    assert (
        trained_model_results["f1"] > base_model_results["f1"]
    ), f'F1 score should be higher for the trained model: {trained_model_results["f1"]} > {base_model_results["f1"]}'

    return base_model_results, trained_model_results


if __name__ == "__main__":
    for opt_level in ["O1", "O2"]:
        baseline_not_trained, baseline_trained = train_baseline(opt_level)
        accelerator_not_trained, accelerator_trained = train_integration(opt_level)
        assert (
            baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"]
        ), f'Accuracy not the same for untrained baseline and accelerator using opt_level={opt_level}: {baseline_not_trained["accuracy"]} == {accelerator_not_trained["accuracy"]}'
        assert (
            baseline_not_trained["f1"] == accelerator_not_trained["f1"]
        ), f'F1 not the same for untrained baseline and accelerator using opt_level={opt_level}: {baseline_not_trained["f1"]} == {accelerator_not_trained["f1"]}'
        assert (
            baseline_trained["accuracy"] == accelerator_trained["accuracy"]
        ), f'Accuracy not the same for trained baseline and accelerator using opt_level={opt_level}: {baseline_trained["accuracy"]} == {accelerator_trained["accuracy"]}'
        assert (
            baseline_trained["f1"] == accelerator_trained["f1"]
        ), f'F1 not the same for trained baseline and accelerator using opt_level={opt_level}: {baseline_trained["f1"]} == {accelerator_trained["f1"]}'

View File

@@ -0,0 +1,161 @@
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This script tests to ensure that `accelerate` performs at the same level as raw `MS-AMP`.
This particular script verifies this for DeepSpeed training.
NOTE: MS-AMP does *not* support ZeRO-3.
"""
# import msamp.deepspeed as msamp_deepspeed
import evaluate
import torch
from fp8_utils import evaluate_model, get_training_utilities
from msamp import deepspeed as msamp_deepspeed
from accelerate import Accelerator, DeepSpeedPlugin
from accelerate.state import AcceleratorState
from accelerate.utils import set_seed
MODEL_NAME = "bert-base-cased"
METRIC = evaluate.load("glue", "mrpc")
def train_baseline(zero_stage: int = 1, opt_level: str = "O1"):
    set_seed(42)
    accelerator = Accelerator()
    model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
        MODEL_NAME, accelerator=accelerator
    )

    import numpy as np

    config = {
        "train_batch_size": 32,
        "train_micro_batch_size_per_gpu": 16,
        "gradient_accumulation_steps": 1,
        "zero_optimization": {
            "stage": zero_stage,
            "offload_optimizer": {"device": "none", "nvme_path": None},
            "offload_param": {"device": "none", "nvme_path": None},
        },
        "gradient_clipping": 1.0,
        "steps_per_print": np.inf,
        "bf16": {"enabled": True},
        "fp16": {"enabled": False},
        "zero_allow_untested_optimizer": True,
        "msamp": {
            "enabled": True,
            "opt_level": opt_level,
        },
    }

    (
        model,
        optimizer,
        _,
        _,
    ) = msamp_deepspeed.initialize(
        model=model,
        optimizer=optimizer,
        config_params=config,
    )

    base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
    model.train()

    for _ in range(2):
        for batch in train_dataloader:
            outputs = model(**batch)
            loss = outputs.loss
            model.backward(loss)
            model.step()
            for _ in range(accelerator.num_processes):
                lr_scheduler.step()

    trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
    model.destroy()
    torch.cuda.empty_cache()
    AcceleratorState()._reset_state(True)

    assert (
        trained_model_results["accuracy"] > base_model_results["accuracy"]
    ), f'Accuracy should be higher for the trained model: {trained_model_results["accuracy"]} > {base_model_results["accuracy"]}'
    assert (
        trained_model_results["f1"] > base_model_results["f1"]
    ), f'F1 score should be higher for the trained model: {trained_model_results["f1"]} > {base_model_results["f1"]}'

    return base_model_results, trained_model_results


def train_integration(zero_stage: int = 1, opt_level: str = "O1"):
    set_seed(42)
    deepspeed_plugin = DeepSpeedPlugin(
        zero_stage=zero_stage,
        enable_msamp=True,
        msamp_opt_level=opt_level,
    )
    accelerator = Accelerator(mixed_precision="fp8", deepspeed_plugin=deepspeed_plugin)
    accelerator.state.deepspeed_plugin.deepspeed_config["train_micro_batch_size_per_gpu"] = 16
    model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
        MODEL_NAME, accelerator=accelerator
    )

    model, optimizer, lr_scheduler = accelerator.prepare(model, optimizer, lr_scheduler)
    base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
    model.train()

    for _ in range(2):
        for batch in train_dataloader:
            outputs = model(**batch)
            loss = outputs.loss
            accelerator.backward(loss)
            optimizer.step()
            lr_scheduler.step()
            optimizer.zero_grad()

    trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
    model.destroy()
    torch.cuda.empty_cache()

    assert (
        trained_model_results["accuracy"] > base_model_results["accuracy"]
    ), f'Accuracy should be higher for the trained model: {trained_model_results["accuracy"]} > {base_model_results["accuracy"]}'
    assert (
        trained_model_results["f1"] > base_model_results["f1"]
    ), f'F1 score should be higher for the trained model: {trained_model_results["f1"]} > {base_model_results["f1"]}'

    AcceleratorState()._reset_state(True)
    return base_model_results, trained_model_results


if __name__ == "__main__":
    for zero_stage in [1, 2]:
        for opt_level in ["O1", "O2", "O3"]:
            baseline_not_trained, baseline_trained = train_baseline(zero_stage, opt_level)
            accelerator_not_trained, accelerator_trained = train_integration(zero_stage, opt_level)
            assert (
                baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"]
            ), f'ZERO stage {zero_stage}, opt_level={opt_level}:\nAccuracy should be the same for the baseline and accelerator: {baseline_not_trained["accuracy"]} == {accelerator_not_trained["accuracy"]}'
            assert (
                baseline_not_trained["f1"] == accelerator_not_trained["f1"]
            ), f'ZERO stage {zero_stage}, opt_level={opt_level}:\nF1 score should be the same for the baseline and accelerator: {baseline_not_trained["f1"]} == {accelerator_not_trained["f1"]}'
            assert (
                baseline_trained["accuracy"] == accelerator_trained["accuracy"]
            ), f'ZERO stage {zero_stage}, opt_level={opt_level}:\nAccuracy should be the same for the baseline and accelerator: {baseline_trained["accuracy"]} == {accelerator_trained["accuracy"]}'
            assert (
                baseline_trained["f1"] == accelerator_trained["f1"]
            ), f'ZERO stage {zero_stage}, opt_level={opt_level}:\nF1 score should be the same for the baseline and accelerator: {baseline_trained["f1"]} == {accelerator_trained["f1"]}'

    torch.distributed.destroy_process_group()

View File

@@ -0,0 +1,118 @@
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import torch
def get_dataloaders(model_name: str, batch_size: int = 16):
    from datasets import load_dataset
    from torch.utils.data import DataLoader
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    datasets = load_dataset("glue", "mrpc")

    def tokenize_function(examples):
        # max_length=None => use the model max length (it's actually the default)
        outputs = tokenizer(examples["sentence1"], examples["sentence2"], truncation=True, max_length=None)
        return outputs

    # Apply the method we just defined to all the examples in all the splits of the dataset
    # starting with the main process first:
    tokenized_datasets = datasets.map(
        tokenize_function,
        batched=True,
        remove_columns=["idx", "sentence1", "sentence2"],
    )

    # We also rename the 'label' column to 'labels' which is the expected name for labels by the models of the
    # transformers library
    tokenized_datasets = tokenized_datasets.rename_column("label", "labels")

    def collate_fn(examples):
        return tokenizer.pad(
            examples,
            padding="longest",
            pad_to_multiple_of=16,  # Specific for FP8
            return_tensors="pt",
        )

    # Instantiate dataloaders.
    train_dataloader = DataLoader(
        tokenized_datasets["train"], shuffle=True, collate_fn=collate_fn, batch_size=batch_size, drop_last=True
    )
    eval_dataloader = DataLoader(
        tokenized_datasets["validation"],
        shuffle=False,
        collate_fn=collate_fn,
        batch_size=16,
        drop_last=True,
    )

    return train_dataloader, eval_dataloader


def get_training_utilities(model_name: str, batch_size: int = 16, accelerator=None):
    """
    Returns a tuple of:
        - Model
        - Optimizer
        - Train dataloader (prepared)
        - Eval dataloader (prepared)
        - LR Scheduler
    Suitable for training on the MRPC dataset
    """
    from torch.optim import AdamW
    from transformers import AutoModelForSequenceClassification, get_linear_schedule_with_warmup

    from accelerate import Accelerator

    if accelerator is None:
        accelerator = Accelerator()
    model = AutoModelForSequenceClassification.from_pretrained(model_name)
    train_dataloader, eval_dataloader = get_dataloaders(model_name, batch_size)
    optimizer = AdamW(model.parameters(), lr=0.0001)
    lr_scheduler = get_linear_schedule_with_warmup(
        optimizer=optimizer,
        num_warmup_steps=100,
        num_training_steps=len(train_dataloader) * 2,
    )
    train_dataloader, eval_dataloader = accelerator.prepare(train_dataloader, eval_dataloader)
    return model, optimizer, train_dataloader, eval_dataloader, lr_scheduler


def get_named_parameters(model):
    """
    Same thing as `Accelerator.get_named_parameters` Returns a list of the named parameters of the model (extracted
    from parallel)
    """
    from accelerate.utils import extract_model_from_parallel

    model = extract_model_from_parallel(model)
    return {n: p for n, p in model.named_parameters()}


def evaluate_model(model, dataloader, metric, accelerator=None):
    "Turns model to .eval(), runs dataloader, calculates metric, then turns eval back on"
    model.eval()
    for step, batch in enumerate(dataloader):
        with torch.no_grad():
            # W/ MS-AMP, we need to cast while evaluating
            with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
                outputs = model(**batch)
        predictions = outputs.logits.argmax(dim=-1)
        references = batch["labels"]
        if accelerator is not None and accelerator.num_processes > 1:
            predictions, references = accelerator.gather_for_metrics((predictions, references))
        metric.add_batch(predictions=predictions, references=references)
    return metric.compute()

View File

@@ -0,0 +1,118 @@
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This script tests to ensure that `accelerate` performs at the same level as raw `MS-AMP`.
This particular script verifies this for single GPU training.
"""
import evaluate
import msamp
import torch
from fp8_utils import evaluate_model, get_training_utilities
from accelerate import Accelerator
from accelerate.state import AcceleratorState
from accelerate.utils import FP8RecipeKwargs, set_seed
MODEL_NAME = "bert-base-cased"
METRIC = evaluate.load("glue", "mrpc")
def train_baseline(opt_level="O2"):
    set_seed(42)
    model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(MODEL_NAME)
    model, optimizer = msamp.initialize(model, optimizer, opt_level=opt_level)
    model.to("cuda")

    base_model_results = evaluate_model(model, eval_dataloader, METRIC)
    model.train()
    scaler = torch.cuda.amp.GradScaler()

    for batch in train_dataloader:
        batch = batch.to("cuda")
        with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
            outputs = model(**batch)
            loss = outputs.loss
        loss = scaler.scale(loss)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        lr_scheduler.step()

    trained_model_results = evaluate_model(model, eval_dataloader, METRIC)

    assert (
        trained_model_results["accuracy"] > base_model_results["accuracy"]
    ), f'Accuracy should be higher for the trained model: {trained_model_results["accuracy"]} > {base_model_results["accuracy"]}'
    assert (
        trained_model_results["f1"] > base_model_results["f1"]
    ), f'F1 score should be higher for the trained model: {trained_model_results["f1"]} > {base_model_results["f1"]}'

    return base_model_results, trained_model_results


def train_integration(opt_level="O2"):
    kwargs_handlers = [FP8RecipeKwargs(backend="msamp", opt_level=opt_level)]
    AcceleratorState()._reset_state(True)
    accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=kwargs_handlers)
    set_seed(42)
    model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
        MODEL_NAME, accelerator=accelerator
    )

    model, optimizer, lr_scheduler = accelerator.prepare(model, optimizer, lr_scheduler)
    base_model_results = evaluate_model(model, eval_dataloader, METRIC)
    model.train()

    for batch in train_dataloader:
        outputs = model(**batch)
        loss = outputs.loss
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
        lr_scheduler.step()

    trained_model_results = evaluate_model(model, eval_dataloader, METRIC)

    assert (
        trained_model_results["accuracy"] > base_model_results["accuracy"]
    ), f'Accuracy should be higher for the trained model: {trained_model_results["accuracy"]} > {base_model_results["accuracy"]}'
    assert (
        trained_model_results["f1"] > base_model_results["f1"]
    ), f'F1 score should be higher for the trained model: {trained_model_results["f1"]} > {base_model_results["f1"]}'

    return base_model_results, trained_model_results


if __name__ == "__main__":
    for opt_level in ["O1", "O2"]:
        baseline_not_trained, baseline_trained = train_baseline(opt_level)
        accelerator_not_trained, accelerator_trained = train_integration(opt_level)
        assert (
            baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"]
        ), f'Accuracy should be the same for the baseline and accelerator: {baseline_not_trained["accuracy"]} == {accelerator_not_trained["accuracy"]}'
        assert (
            baseline_not_trained["f1"] == accelerator_not_trained["f1"]
        ), f'F1 score should be the same for the baseline and accelerator: {baseline_not_trained["f1"]} == {accelerator_not_trained["f1"]}'
        assert (
            baseline_trained["accuracy"] == accelerator_trained["accuracy"]
        ), f'Accuracy should be the same for the baseline and accelerator: {baseline_trained["accuracy"]} == {accelerator_trained["accuracy"]}'
        assert (
            baseline_trained["f1"] == accelerator_trained["f1"]
        ), f'F1 score should be the same for the baseline and accelerator: {baseline_trained["f1"]} == {accelerator_trained["f1"]}'

View File

@@ -15,6 +15,8 @@ To run them, it's recommended to use a docker image (see the attached `Dockerfil
## Running:
There are official Docker images located at `huggingface/accelerate:gpu-fp8-transformerengine-nightly` which can be used.
You can run all scripts using the core `accelerate launch` command without any `accelerate config` being needed.
For single GPU, run it via `python`:

View File

@@ -17,6 +17,7 @@ This script tests to ensure that `accelerate` performs at the same level as raw
This particular script verifies this for DDP training.
"""
import evaluate
import torch
import transformer_engine.common.recipe as te_recipe

View File

@@ -17,6 +17,7 @@ This script tests to ensure that `accelerate` performs at the same level as raw
This particular script verifies this for DDP training.
"""
from unittest.mock import patch
import deepspeed

View File

@@ -109,7 +109,8 @@ def evaluate_model(model, dataloader, metric, accelerator=None):
        with torch.no_grad():
            outputs = model(**batch)
        predictions = outputs.logits.argmax(dim=-1)
        references = batch["labels"]
        if accelerator is not None and accelerator.num_processes > 1:
            predictions, references = accelerator.gather_for_metrics((predictions, batch["labels"]))
            predictions, references = accelerator.gather_for_metrics((predictions, references))
        metric.add_batch(predictions=predictions, references=references)
    return metric.compute()

View File

@@ -17,6 +17,7 @@ This script tests to ensure that `accelerate` performs at the same level as raw
This particular script verifies this for FSDP training.
"""
from functools import partial
import evaluate

View File

@@ -17,6 +17,7 @@ This script tests to ensure that `accelerate` performs at the same level as raw
This particular script verifies this for single GPU training.
"""
import evaluate
import torch
import transformer_engine.common.recipe as te_recipe

View File

@@ -33,6 +33,7 @@ huggingface/accelerate:{accelerator}-{nightly,release}
* `cpu`: Comes compiled off of `python:3.9-slim` and is designed for non-CUDA based workloads.
* More to come soon
* `gpu-deepspeed`: Comes compiled off of the `nvidia/cuda` image and includes core parts like `bitsandbytes` as well as the latest `deepspeed` version. Runs off python 3.10.
* `gpu-fp8-transformerengine`: Comes compiled off of `nvcr.io/nvidia/pytorch` and is specifically for running the `benchmarks/fp8` scripts on devices which support FP8 operations using the `TransformerEngine` library (RTX 4090, H100, etc)
## Nightlies vs Releases

View File

@@ -16,7 +16,7 @@
  - local: basic_tutorials/tpu
    title: TPU training
  - local: basic_tutorials/launch
    title: Launching distributed code
    title: Launching Accelerate scripts
  - local: basic_tutorials/notebook
    title: Launching distributed training from Jupyter Notebooks
  title: Tutorials
@@ -34,7 +34,7 @@
  - local: usage_guides/profiler
    title: Profiler
  - local: usage_guides/checkpoint
    title: Save and load training states
    title: Checkpointing
  - local: basic_tutorials/troubleshooting
    title: Troubleshoot
  - local: usage_guides/training_zoo
@@ -50,10 +50,12 @@
    title: Low precision (FP8) training
  - local: usage_guides/deepspeed
    title: DeepSpeed
  - local: usage_guides/deepspeed_multiple_model
    title: Using multiple models with DeepSpeed
  - local: usage_guides/ddp_comm_hook
    title: DDP Communication Hooks
  - local: usage_guides/fsdp
    title: Fully Sharded Data Parallelism
    title: Fully Sharded Data Parallel
  - local: usage_guides/megatron_lm
    title: Megatron-LM
  - local: usage_guides/sagemaker
@@ -73,7 +75,7 @@
  title: How to guides
- sections:
  - local: concept_guides/internal_mechanism
    title: 🤗 Accelerate's internal mechanism
    title: Accelerate's internal mechanism
  - local: concept_guides/big_model_inference
    title: Loading big models into memory
  - local: concept_guides/performance
@@ -85,23 +87,23 @@
  - local: concept_guides/fsdp_and_deepspeed
    title: FSDP vs DeepSpeed
  - local: concept_guides/low_precision_training
    title: How training in low-precision environments is possible (FP8)
    title: Low precision training methods
  - local: concept_guides/training_tpu
    title: TPU best practices
    title: Training on TPUs
  title: Concepts and fundamentals
- sections:
  - local: package_reference/accelerator
    title: Accelerator
  - local: package_reference/state
    title: Stateful configuration classes
    title: Stateful classes
  - local: package_reference/cli
    title: The Command Line
  - local: package_reference/torch_wrappers
    title: Torch wrapper classes
    title: DataLoaders, Optimizers, Schedulers
  - local: package_reference/tracking
    title: Experiment trackers
  - local: package_reference/launchers
    title: Distributed launchers
    title: Launchers
  - local: package_reference/deepspeed
    title: DeepSpeed utilities
  - local: package_reference/logging
@@ -109,15 +111,15 @@
  - local: package_reference/big_modeling
    title: Working with large models
  - local: package_reference/inference
    title: Distributed inference with big models
    title: Pipeline parallelism
  - local: package_reference/kwargs
    title: Kwargs handlers
  - local: package_reference/fp8
    title: FP8 Functionality
    title: FP8
  - local: package_reference/utilities
    title: Utility functions and classes
  - local: package_reference/megatron_lm
    title: Megatron-LM Utilities
    title: Megatron-LM utilities
  - local: package_reference/fsdp
    title: Fully Sharded Data Parallelism Utilities
    title: Fully Sharded Data Parallel utilities
  title: "Reference"

View File

@@ -13,31 +13,29 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Installation and Configuration
# Installation
Before you start, you will need to setup your environment, install the appropriate packages, and configure 🤗 Accelerate. 🤗 Accelerate is tested on **Python 3.8+**.
Before you start, you will need to setup your environment, install the appropriate packages, and configure Accelerate. Accelerate is tested on **Python 3.8+**.
## Installing 🤗 Accelerate
Accelerate is available on pypi and conda, as well as on GitHub. Details to install from each are below:
🤗 Accelerate is available on pypi and conda, as well as on GitHub. Details to install from each are below:
## pip
### pip
To install 🤗 Accelerate from pypi, perform:
To install Accelerate from pypi, perform:
```bash
pip install accelerate
```
### conda
## conda
🤗 Accelerate can also be installed with conda with:
Accelerate can also be installed with conda with:
```bash
conda install -c conda-forge accelerate
```
### Source
## Source
New features are added every day that haven't been released yet. To try them out yourself, install
from the GitHub repository:
@@ -56,9 +54,9 @@ cd accelerate
pip install -e .
```
## Configuring 🤗 Accelerate
## Configuration
After installing, you need to configure 🤗 Accelerate for how the current system is setup for training.
After installing, you need to configure Accelerate for how the current system is setup for training.
To do so run the following and answer the questions prompted to you:
```bash
@@ -70,7 +68,8 @@ To write a barebones configuration that doesn't include options such as DeepSpee
```bash
python -c "from accelerate.utils import write_basic_config; write_basic_config(mixed_precision='fp16')"
```
🤗 Accelerate will automatically utilize the maximum number of GPUs available and set the mixed precision mode.
Accelerate will automatically utilize the maximum number of GPUs available and set the mixed precision mode.
To check that your configuration looks fine, run:
@@ -99,4 +98,4 @@ An example output is shown below, which describes two GPUs on a single machine w
- main_training_function: main
- deepspeed_config: {}
- fsdp_config: {}
```
```

View File

@@ -13,9 +13,9 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Launching your 🤗 Accelerate scripts
# Launching Accelerate scripts
In the previous tutorial, you were introduced to how to modify your current training script to use 🤗 Accelerate.
In the previous tutorial, you were introduced to how to modify your current training script to use Accelerate.
The final version of that code is shown below:
```python
@@ -69,14 +69,14 @@ Next, you need to launch it with `accelerate launch`.
<Tip warning={true}>
It's recommended you run `accelerate config` before using `accelerate launch` to configure your environment to your liking.
Otherwise 🤗 Accelerate will use very basic defaults depending on your system setup.
Otherwise Accelerate will use very basic defaults depending on your system setup.
</Tip>
## Using accelerate launch
🤗 Accelerate has a special CLI command to help you launch your code in your system through `accelerate launch`.
Accelerate has a special CLI command to help you launch your code in your system through `accelerate launch`.
This command wraps around all of the different commands needed to launch your script on various platforms, without you having to remember what each of them is.
<Tip>
@@ -101,7 +101,7 @@ CUDA_VISIBLE_DEVICES="0" accelerate launch {script_name.py} --arg1 --arg2 ...
```
You can also use `accelerate launch` without performing `accelerate config` first, but you may need to manually pass in the right configuration parameters.
In this case, 🤗 Accelerate will make some hyperparameter decisions for you, e.g., if GPUs are available, it will use all of them by default without the mixed precision.
In this case, Accelerate will make some hyperparameter decisions for you, e.g., if GPUs are available, it will use all of them by default without the mixed precision.
Here is how you would use all GPUs and train with mixed precision disabled:
```bash
@@ -129,7 +129,7 @@ accelerate launch -h
<Tip>
Even if you are not using 🤗 Accelerate in your code, you can still use the launcher for starting your scripts!
Even if you are not using Accelerate in your code, you can still use the launcher for starting your scripts!
</Tip>
@@ -178,7 +178,7 @@ accelerate launch {script_name.py} {--arg1} {--arg2} ...
## Custom Configurations
As briefly mentioned earlier, `accelerate launch` should be mostly used through combining set configurations
made with the `accelerate config` command. These configs are saved to a `default_config.yaml` file in your cache folder for 🤗 Accelerate.
made with the `accelerate config` command. These configs are saved to a `default_config.yaml` file in your cache folder for Accelerate.
This cache folder is located at (with decreasing order of priority):
- The content of your environment variable `HF_HOME` suffixed with `accelerate`.
@@ -211,7 +211,7 @@ accelerate launch --config_file {path/to/config/my_config_file.yaml} {script_nam
```
## Multi-node training
Multi-node training with 🤗Accelerate is similar to [multi-node training with torchrun](https://pytorch.org/tutorials/intermediate/ddp_series_multinode.html). The simplest way to launch a multi-node training run is to do the following:
Multi-node training with Accelerate is similar to [multi-node training with torchrun](https://pytorch.org/tutorials/intermediate/ddp_series_multinode.html). The simplest way to launch a multi-node training run is to do the following:
- Copy your codebase and data to all nodes. (or place them on a shared filesystem)
- Setup your python packages on all nodes.

View File

@@ -219,3 +219,6 @@ During training, you may want to save the current state of the model, optimizer,
To further customize where and how states are saved through [`~Accelerator.save_state`], use the [`~utils.ProjectConfiguration`] class. For example, if `automatic_checkpoint_naming` is enabled, each saved checkpoint is stored at `Accelerator.project_dir/checkpoints/checkpoint_{checkpoint_number}`.
Any other stateful items to be stored should be registered with the [`~Accelerator.register_for_checkpointing`] method so they can be saved and loaded. Every object passed to this method to be stored must have a `load_state_dict` and `state_dict` function.
> [!TIP]
> If you have [`torchdata>=0.8.0`](https://github.com/pytorch/data/tree/main) installed, you can additionally pass `use_stateful_dataloader=True` into your [`~utils.DataLoaderConfiguration`]. This extends Accelerate's DataLoader classes with a `load_state_dict` and `state_dict` function, and makes it so `Accelerator.save_state` and `Accelerator.load_state` also track how far into the training dataset it has read when persisting the model.
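In code, the tip above corresponds to roughly the following (a minimal sketch, assuming `torchdata>=0.8.0` is installed):

```python
from accelerate import Accelerator
from accelerate.utils import DataLoaderConfiguration

# Dataloaders prepared by this accelerator gain state_dict/load_state_dict,
# so save_state/load_state also capture dataset read progress
dataloader_config = DataLoaderConfiguration(use_stateful_dataloader=True)
accelerator = Accelerator(dataloader_config=dataloader_config)
```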

View File

@@ -13,7 +13,7 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Launching Multi-GPU Training from a Jupyter Environment
# Launching distributed training from Jupyter Notebooks
This tutorial teaches you how to fine tune a computer vision model with 🤗 Accelerate from a Jupyter Notebook on a distributed system.
You will also learn how to setup a few requirements needed for ensuring your environment is configured properly, your data has been prepared properly, and finally how to launch training.
@@ -26,13 +26,13 @@ You will also learn how to setup a few requirements needed for ensuring your env
## Configuring the Environment
Before any training can be performed, a 🤗 Accelerate config file must exist in the system. Usually this can be done by running the following in a terminal and answering the prompts:
Before any training can be performed, an Accelerate config file must exist in the system. Usually this can be done by running the following in a terminal and answering the prompts:
```bash
accelerate config
```
However, if general defaults are fine and you are *not* running on a TPU, 🤗Accelerate has a utility to quickly write your GPU configuration into a config file via [`utils.write_basic_config`].
However, if general defaults are fine and you are *not* running on a TPU, Accelerate has a utility to quickly write your GPU configuration into a config file via [`utils.write_basic_config`].
The following code will restart Jupyter after writing the configuration, as CUDA code was called to perform this.
@@ -454,7 +454,7 @@ epoch 4: 94.71
And that's it!
Please note that [`notebook_launcher`] ignores the 🤗 Accelerate config file, to launch based on the config use:
Please note that [`notebook_launcher`] ignores the Accelerate config file, to launch based on the config use:
```bash
accelerate launch

View File

@@ -15,10 +15,10 @@ rendered properly in your Markdown viewer.
# Overview
Welcome to the 🤗 Accelerate tutorials! These introductory guides will help catch you up to speed on working with 🤗 Accelerate.
Welcome to the Accelerate tutorials! These introductory guides will help catch you up to speed on working with Accelerate.
You'll learn how to modify your code to have it work with the API seamlessly, how to launch your script properly,
and more!
These tutorials assume some basic knowledge of Python and familiarity with the PyTorch framework.
If you have any questions about 🤗 Accelerate, feel free to join and ask the community on our [forum](https://discuss.huggingface.co/c/accelerate/18).
If you have any questions about Accelerate, feel free to join and ask the community on our [forum](https://discuss.huggingface.co/c/accelerate/18).

View File

@@ -204,8 +204,8 @@ Vastly different GPUs within the same setup can lead to performance bottlenecks.
If none of the solutions and advice here helped resolve your issue, you can always reach out to the community and Accelerate team for help.
- Ask for help on the Hugging Face forums by posting your question in the [🤗 Accelerate category](https://discuss.huggingface.co/c/accelerate/18). Make sure to write a descriptive post with relevant context about your setup and reproducible code to maximize the likelihood that your problem is solved!
- Ask for help on the Hugging Face forums by posting your question in the [Accelerate category](https://discuss.huggingface.co/c/accelerate/18). Make sure to write a descriptive post with relevant context about your setup and reproducible code to maximize the likelihood that your problem is solved!
- Post a question on [Discord](http://hf.co/join/discord), and let the team and the community help you.
- Create an Issue on the 🤗 Accelerate [GitHub repository](https://github.com/huggingface/accelerate/issues) if you think you've found a bug related to the library. Include context regarding the bug and details about your distributed setup to help us better figure out what's wrong and how we can fix it.
- Create an Issue on the Accelerate [GitHub repository](https://github.com/huggingface/accelerate/issues) if you think you've found a bug related to the library. Include context regarding the bug and details about your distributed setup to help us better figure out what's wrong and how we can fix it.

View File

@@ -13,7 +13,7 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Handling big models for inference
# Loading big models into memory
When loading a pre-trained model in PyTorch, the usual workflow looks like this:
@@ -46,7 +46,7 @@ This API is quite new and still in its experimental stage. While we strive to pr
### Instantiating an empty model
The first tool 🤗 Accelerate introduces to help with big models is a context manager [`init_empty_weights`] that helps you initialize a model without using any RAM so that step 1 can be done on models of any size. Here is how it works:
The first tool Accelerate introduces to help with big models is a context manager [`init_empty_weights`] that helps you initialize a model without using any RAM so that step 1 can be done on models of any size. Here is how it works:
```py
from accelerate import init_empty_weights
@@ -74,7 +74,7 @@ initializes an empty model with a bit more than 100B parameters. Behind the scen
It's possible your model is so big that even a single copy won't fit in RAM. That doesn't mean it can't be loaded: if you have one or several GPUs, this is more memory available to store your model. In this case, it's better if your checkpoint is split into several smaller files that we call checkpoint shards.
🤗 Accelerate will handle sharded checkpoints as long as you follow the following format: your checkpoint should be in a folder, with several files containing the partial state dicts, and there should be an index in the JSON format that contains a dictionary mapping parameter names to the file containing their weights. You can easily shard your model with [`~Accelerator.save_model`]. For instance, we could have a folder containing:
Accelerate will handle sharded checkpoints as long as you follow the following format: your checkpoint should be in a folder, with several files containing the partial state dicts, and there should be an index in the JSON format that contains a dictionary mapping parameter names to the file containing their weights. You can easily shard your model with [`~Accelerator.save_model`]. For instance, we could have a folder containing:
```bash
first_state_dict.bin
@@ -97,9 +97,9 @@ and `first_state_dict.bin` containing the weights for `"linear1.weight"` and `"l
### Loading weights
The second tool 🤗 Accelerate introduces is a function [`load_checkpoint_and_dispatch`], that will allow you to load a checkpoint inside your empty model. This supports full checkpoints (a single file containing the whole state dict) as well as sharded checkpoints. It will also automatically dispatch those weights across the devices you have available (GPUs, CPU RAM), so if you are loading a sharded checkpoint, the maximum RAM usage will be the size of the biggest shard.
The second tool Accelerate introduces is a function [`load_checkpoint_and_dispatch`], that will allow you to load a checkpoint inside your empty model. This supports full checkpoints (a single file containing the whole state dict) as well as sharded checkpoints. It will also automatically dispatch those weights across the devices you have available (GPUs, CPU RAM), so if you are loading a sharded checkpoint, the maximum RAM usage will be the size of the biggest shard.
If you want to use big model inference with 🤗 Transformers models, check out this [documentation](https://huggingface.co/docs/transformers/main/en/main_classes/model#large-model-loading).
If you want to use big model inference with Transformers models, check out this [documentation](https://huggingface.co/docs/transformers/main/en/main_classes/model#large-model-loading).
Here is how we can use this to load the [GPT2-1.5B](https://huggingface.co/marcsun13/gpt2-xl-linear-sharded) model.
@@ -145,7 +145,7 @@ model = load_checkpoint_and_dispatch(
)
```
By passing `device_map="auto"`, we tell 🤗 Accelerate to determine automatically where to put each layer of the model depending on the available resources:
By passing `device_map="auto"`, we tell Accelerate to determine automatically where to put each layer of the model depending on the available resources:
- first, we use the maximum space available on the GPU(s)
- if we still need space, we store the remaining weights on the CPU
- if there is not enough RAM, we store the remaining weights on the hard drive as memory-mapped tensors
@ -159,7 +159,7 @@ include a residual connection of some kind.
#### The `device_map`
You can see the `device_map` that Accelerate picked by accessing the `hf_device_map` attribute of your model:
```py
model.hf_device_map
@ -210,7 +210,7 @@ outputs = model.generate(x1, max_new_tokens=10, do_sample=False)[0]
tokenizer.decode(outputs.cpu().squeeze())
```
Behind the scenes, Accelerate added hooks to the model, so that:
- at each layer, the inputs are put on the right device (so even if your model is spread across several GPUs, it works)
- for the weights offloaded on the CPU, they are put on a GPU just before the forward pass and cleaned up just after
- for the weights offloaded on the hard drive, they are loaded in RAM then put on a GPU just before the forward pass and cleaned up just after
@ -225,7 +225,7 @@ This way, your model can run for inference even if it doesn't fit on one of the
### Designing a device map
You can let Accelerate handle the device map computation by setting `device_map` to one of the supported options (`"auto"`, `"balanced"`, `"balanced_low_0"`, `"sequential"`) or create one yourself if you want more control over where each layer should go.
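A hand-crafted map is simply a dictionary from (sub)module names to devices, as in the sketch below, where the module names are illustrative:

```py
# integers are GPU indices; "cpu" and "disk" are also valid targets
device_map = {"block1": 0, "block2": 1, "lm_head": "cpu"}
```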

@ -13,9 +13,9 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Executing and deferring jobs

When you run your usual script, instructions are executed in order. Using Accelerate to deploy your script on several
GPUs at the same time introduces a complication: while each process executes all instructions in order, some may be
faster than others.
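The main tool for handling this is [`~Accelerator.wait_for_everyone`]; a minimal sketch:

```python
from accelerate import Accelerator

accelerator = Accelerator()

# ... work that may finish at different times on different processes ...

# block each process here until all of them have caught up
accelerator.wait_for_everyone()
```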


@ -13,15 +13,15 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# FSDP vs DeepSpeed

Accelerate offers flexibility of training frameworks, by integrating two extremely powerful tools for distributed training, namely [PyTorch FSDP](../usage_guides/fsdp) and [Microsoft DeepSpeed](../usage_guides/deepspeed). The aim of this tutorial is to draw parallels, as well as to outline potential differences, to empower the user to switch seamlessly between these two frameworks.
<Tip>
To switch between the frameworks, we recommend launching code with `accelerate launch`, passing in the correct config file with `--config_file`, or passing in the respective arguments directly for [FSDP and DeepSpeed](../package_reference/cli#accelerate-launch).

Example Accelerate configurations can be found here for [DeepSpeed](../usage_guides/deepspeed#accelerate-deepspeed-plugin) and [FSDP](../usage_guides/fsdp#how-it-works-out-of-the-box), or in the [example zoo under "Launch Configurations"](../usage_guides/explore).
</Tip>
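For example, assuming two config files named as below (the names are illustrative), switching is just a matter of which file you pass:

```bash
accelerate launch --config_file fsdp_config.yaml train.py
accelerate launch --config_file deepspeed_config.yaml train.py
```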
@ -47,7 +47,7 @@ parameters summoning | FSDP<br>DeepSpeed | `--fsdp_use_orig_params`<br>None | `t
parameters syncing | FSDP<br>DeepSpeed | `--fsdp_sync_module_states`<br>None | `true` |
training | FSDP<br>DeepSpeed | None<br>`--gradient_accumulation_steps`<br>`--gradient_clipping` | <br>`auto`<br>`auto` | Transparent to user
For detailed descriptions of the above, refer to the [`Accelerate` launch documentation](../package_reference/cli#accelerate-launch).
<Tip>
@ -94,7 +94,7 @@ FSDP only allows *all-or-nothing* offload (i.e., either offload parameters, grad
### Prefetching
FSDP allows two prefetching configurations `--fsdp_forward_prefetch` and `--fsdp_backward_prefetch` to improve overlap of comms / computation at a cost of extra memory, see [FSDP documentation](https://pytorch.org/docs/stable/fsdp.html).
For DeepSpeed, the prefetching will be turned on when needed, and it turns on depending on certain hyper-params like `stage3_param_persistence_threshold`, `stage3_max_reuse_distance`, etc., [that can be configured for ZeRO-3](https://www.deepspeed.ai/docs/config-json/#parameter-offloading); `accelerate` may set these hyper-params automatically if you don't set those explicitly in the DeepSpeed config file.
<Tip>
@ -104,11 +104,11 @@ For DeepSpeed, the prefetching will be turned on when needed, and it turns on de
### Model Loading
While FSDP requires an explicit `--fsdp_cpu_ram_efficient_loading true` to activate efficient model loading, `transformers` activates a similar feature whenever DeepSpeed ZeRO-3 is used.

<Tip>

For FSDP, whenever setting `--fsdp_cpu_ram_efficient_loading true`, `accelerate` will automatically set `sync_module_states` to true.
For RAM efficient loading the weights will be loaded only on a single rank, and thus requires `sync_module_states` to broadcast weights to other ranks.
</Tip>


@ -13,7 +13,7 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Gradient synchronization
PyTorch's distributed module operates by communicating back and forth between all of the GPUs in your system.
This communication takes time, and ensuring all processes know the states of each other happens at particular trigger points
@ -28,7 +28,7 @@ from torch.nn.parallel import DistributedDataParallel
model = nn.Linear(10, 10)
ddp_model = DistributedDataParallel(model)
```
In Accelerate this conversion happens automatically when calling [`~Accelerator.prepare`] and passing in your model.
```diff
+ from accelerate import Accelerator
@ -90,7 +90,7 @@ for index, batch in enumerate(dataloader):
optimizer.step()
```
In Accelerate, to make this an API that can be called no matter the training device (though it may not do anything if you are not in a distributed system!),
`ddp_model.no_sync` gets replaced with [`~Accelerator.no_sync`] and operates the same way:

```diff
- with ddp_model.no_sync():
+ with accelerator.no_sync(model):
      outputs = ddp_model(inputs)
      loss = loss_func(outputs)
      accelerator.backward(loss)
```

@ -13,9 +13,9 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Accelerate's internal mechanisms

Internally, Accelerate works by first analyzing the environment in which the script is launched to determine which
kind of distributed setup is used, how many different processes there are and which one the current script is in. All
that information is stored in the [`~AcceleratorState`].
@ -69,4 +69,6 @@ setting the same seed in the main random number generator in all processes.
</Tip>
If you have [`torchdata>=0.8.0`](https://github.com/pytorch/data/tree/main) installed, and you have passed `use_stateful_dataloader=True` into your [`~utils.DataLoaderConfiguration`], these classes will directly inherit from `StatefulDataLoader` instead, and maintain a `state_dict`.
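For example, a sketch of opting in:

```python
from accelerate import Accelerator
from accelerate.utils import DataLoaderConfiguration

# assumes torchdata>=0.8.0 is installed
dataloader_config = DataLoaderConfiguration(use_stateful_dataloader=True)
accelerator = Accelerator(dataloader_config=dataloader_config)
```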
For more details about the internals, see the [Internals page](package_reference/torch_wrappers).


@ -13,7 +13,7 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Low precision training methods
The release of new kinds of hardware led to the emergence of new training paradigms that better utilize them. Currently, this is in the form of training
in 8-bit precision using packages such as [TransformerEngine](https://github.com/NVIDIA/TransformerEngine) (TE) or [MS-AMP](https://github.com/Azure/MS-AMP/tree/main).
@ -36,7 +36,7 @@ MS-AMP O3 | FP8 | FP8 | FP8 | FP16 | FP8 | FP8+FP16
`TransformerEngine` is the first solution for training in 8-bit floating point. It works by using drop-in replacement layers for certain modules, utilizing its FP8 engine to reduce the number of bits (such as 32 to 8) without degrading the final accuracy of the model.

Specifically, Accelerate will find and replace the following layers with `TransformerEngine` versions:
* `nn.LayerNorm` for `te.LayerNorm`
* `nn.Linear` for `te.Linear`
@ -50,7 +50,7 @@ The `TransformerEngine` can receive many different arguments that customize how
* `margin`: The margin to use for the gradient scaling.
* `interval`: The interval to use for how often the scaling factor is recomputed.
* `fp8_format`: The format to use for the FP8 recipe. Must be one of `HYBRID` or `E4M3`. (Generally `HYBRID` for training, `E4M3` for evaluation)
* `amax_history_len`: The length of the history to use for the scaling factor computation.
* `amax_compute_algo`: The algorithm to use for the scaling factor computation. Must be one of `max` or `most_recent`.
* `override_linear_precision`: Whether or not to execute `fprop`, `dgrad`, and `wgrad` GEMMS in higher precision.
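These arguments can be passed through a [`utils.FP8RecipeKwargs`] handler; the values below are illustrative, not recommendations:

```python
from accelerate import Accelerator
from accelerate.utils import FP8RecipeKwargs

kwargs = [FP8RecipeKwargs(backend="te", fp8_format="HYBRID", amax_history_len=32, amax_compute_algo="max")]
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=kwargs)
```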
@ -67,7 +67,7 @@ MS-AMP takes a different approach to `TransformersEngine` by providing three dif
* The second optimization level (`O2`) improves upon this by also reducing the precision of the optimizer states. One is in FP8 while the other is in FP16. Generally it's been shown that this provides a net gain: no degradation in end accuracy, increased training speed, and reduced memory, as now every state is either in FP16 or FP8.
* Finally, MS-AMP has a third optimization level (`O3`) which helps during DDP scenarios such as DeepSpeed. The weights of the model in memory are fully cast to FP8, and the master weights are now stored in FP16. This fully reduces memory by the highest factor, as now almost everything is in FP8 and only two states are left in FP16. Currently, only DeepSpeed versions up through 0.9.2 are supported, so this capability is not included in the Accelerate integration.
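Selecting MS-AMP follows the same kwargs-handler pattern; a sketch, with `O2` as the illustrative optimization level:

```python
from accelerate import Accelerator
from accelerate.utils import FP8RecipeKwargs

kwargs = [FP8RecipeKwargs(backend="msamp", opt_level="O2")]
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=kwargs)
```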
## Combining the two


@ -13,7 +13,7 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Comparing performance across distributed setups
Evaluating and comparing the performance from different setups can be quite tricky if you don't know what to look for.
For example, you cannot run the same script with the same batch size across TPU, multi-GPU, and single-GPU with Accelerate


@ -13,9 +13,9 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Training on TPUs

Training on TPUs can be slightly different from training on multi-GPU, even with Accelerate. This guide aims to show you
where you should be careful and why, as well as the best practices in general.
## Training in a Notebook
@ -81,7 +81,7 @@ notebook_launcher(training_function)
<Tip>
The `notebook_launcher` will default to 8 processes if Accelerate has been configured for a TPU.
</Tip>
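A sketch of a typical launch cell, assuming `training_function` takes no arguments:

```python
from accelerate import notebook_launcher

notebook_launcher(training_function, args=(), num_processes=8)
```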
@ -128,10 +128,10 @@ And finally calling the training function with:
## Mixed Precision and Global Variables
As mentioned in the [mixed precision tutorial](../usage_guides/mixed_precision), Accelerate supports fp16 and bf16, both of which can be used on TPUs.
That being said, ideally `bf16` should be utilized as it is extremely efficient to use.
There are two "layers" when using `bf16` and Accelerate on TPUs, at the base level and at the operation level.
At the base level, this is enabled when passing `mixed_precision="bf16"` to `Accelerator`, such as:
```python
accelerator = Accelerator(mixed_precision="bf16")
```


@ -15,7 +15,7 @@ rendered properly in your Markdown viewer.
# Accelerate
Accelerate is a library that enables the same PyTorch code to be run across any distributed configuration by adding just four lines of code! In short, training and inference at scale made simple, efficient and adaptable.
```diff
+ from accelerate import Accelerator
@ -37,7 +37,7 @@ rendered properly in your Markdown viewer.
scheduler.step()
```
Built on `torch_xla` and `torch.distributed`, Accelerate takes care of the heavy lifting, so you don't have to write any custom code to adapt to these platforms.
Convert existing codebases to utilize [DeepSpeed](usage_guides/deepspeed), perform [fully sharded data parallelism](usage_guides/fsdp), and have automatic support for mixed-precision training!
<Tip>
@ -56,11 +56,11 @@ accelerate launch {my_script.py}
<div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5">
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./basic_tutorials/overview"
><div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Tutorials</div>
<p class="text-gray-700">Learn the basics and become familiar with using Accelerate. Start here if you are using Accelerate for the first time!</p>
</a>
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./usage_guides/explore"
><div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">How-to guides</div>
<p class="text-gray-700">Practical guides to help you achieve a specific goal. Take a look at these guides to learn how to use Accelerate to solve real-world problems.</p>
</a>
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./concept_guides/gradient_synchronization"
><div class="w-full text-center bg-gradient-to-br from-pink-400 to-pink-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Conceptual guides</div>
@ -68,7 +68,7 @@ accelerate launch {my_script.py}
</a>
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./package_reference/accelerator"
><div class="w-full text-center bg-gradient-to-br from-purple-400 to-purple-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Reference</div>
<p class="text-gray-700">Technical descriptions of how Accelerate classes and methods work.</p>
</a>
</div>
</div>


@ -15,33 +15,78 @@ rendered properly in your Markdown viewer.
# Working with large models
## Dispatch and offload
### init_empty_weights
[[autodoc]] big_modeling.init_empty_weights
### cpu_offload
[[autodoc]] big_modeling.cpu_offload
### cpu_offload_with_hook
[[autodoc]] big_modeling.cpu_offload_with_hook
### disk_offload
[[autodoc]] big_modeling.disk_offload
### dispatch_model
[[autodoc]] big_modeling.dispatch_model
### load_checkpoint_and_dispatch
[[autodoc]] big_modeling.load_checkpoint_and_dispatch
### load_checkpoint_in_model
[[autodoc]] big_modeling.load_checkpoint_in_model
### infer_auto_device_map
[[autodoc]] utils.infer_auto_device_map
## Hooks

### ModelHook
[[autodoc]] hooks.ModelHook
### AlignDevicesHook
[[autodoc]] hooks.AlignDevicesHook
### SequentialHook
[[autodoc]] hooks.SequentialHook
## Adding Hooks
### add_hook_to_module
[[autodoc]] hooks.add_hook_to_module
### attach_execution_device_hook
[[autodoc]] hooks.attach_execution_device_hook
### attach_align_device_hook
[[autodoc]] hooks.attach_align_device_hook
### attach_align_device_hook_on_blocks
[[autodoc]] hooks.attach_align_device_hook_on_blocks
## Removing Hooks
### remove_hook_from_module
[[autodoc]] hooks.remove_hook_from_module
### remove_hook_from_submodules
[[autodoc]] hooks.remove_hook_from_submodules


@ -13,16 +13,32 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# DeepSpeed utilities
## get_active_deepspeed_plugin

[[autodoc]] utils.get_active_deepspeed_plugin

## DeepSpeedPlugin

[[autodoc]] utils.DeepSpeedPlugin

## DeepSpeedEngineWrapper

[[autodoc]] utils.deepspeed.DeepSpeedEngineWrapper

## DeepSpeedOptimizerWrapper

[[autodoc]] utils.deepspeed.DeepSpeedOptimizerWrapper

## DeepSpeedSchedulerWrapper

[[autodoc]] utils.deepspeed.DeepSpeedSchedulerWrapper

## DummyOptim

[[autodoc]] utils.deepspeed.DummyOptim

## DummyScheduler

[[autodoc]] utils.deepspeed.DummyScheduler


@ -13,16 +13,26 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# FP8

Below are functions and classes related to the underlying FP8 implementation.
## FP8RecipeKwargs
[[autodoc]] utils.FP8RecipeKwargs
## convert_model
[[autodoc]] utils.convert_model
## has_transformer_engine_layers
[[autodoc]] utils.has_transformer_engine_layers
## contextual_fp8_autocast
[[autodoc]] utils.contextual_fp8_autocast
## apply_fp8_autowrap
[[autodoc]] utils.apply_fp8_autowrap


@ -13,12 +13,20 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Fully Sharded Data Parallel utilities
## enable_fsdp_ram_efficient_loading
[[autodoc]] utils.enable_fsdp_ram_efficient_loading
## disable_fsdp_ram_efficient_loading
[[autodoc]] utils.disable_fsdp_ram_efficient_loading
## merge_fsdp_weights
[[autodoc]] utils.merge_fsdp_weights
## FullyShardedDataParallelPlugin
[[autodoc]] utils.FullyShardedDataParallelPlugin


@ -13,8 +13,10 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Pipeline parallelism

Accelerate supports pipeline parallelism for large-scale training with the PyTorch [torch.distributed.pipelining](https://pytorch.org/docs/stable/distributed.pipelining.html) API.
## prepare_pippy
[[autodoc]] inference.prepare_pippy


@ -13,7 +13,7 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Kwargs handlers
The following objects can be passed to the main [`Accelerator`] to customize how some PyTorch objects
related to distributed training or mixed precision are created.


@ -17,6 +17,10 @@ rendered properly in your Markdown viewer.
Functions for launching training on distributed processes.
## notebook_launcher
[[autodoc]] accelerate.notebook_launcher
## debug_launcher
[[autodoc]] accelerate.debug_launcher


@ -13,9 +13,9 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Logging

Refer to the [Troubleshooting guide](../usage_guides/troubleshooting#logging) or to the example below to learn
how to use Accelerate's logger.
[[autodoc]] logging.get_logger
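A minimal usage sketch:

```python
from accelerate.logging import get_logger

logger = get_logger(__name__)
logger.info("My log", main_process_only=False)
```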


@ -13,20 +13,36 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Megatron-LM utilities
## MegatronLMPlugin
[[autodoc]] utils.MegatronLMPlugin
## MegatronLMDummyScheduler
[[autodoc]] utils.MegatronLMDummyScheduler
## MegatronLMDummyDataLoader
[[autodoc]] utils.MegatronLMDummyDataLoader
## AbstractTrainStep
[[autodoc]] utils.AbstractTrainStep
## GPTTrainStep
[[autodoc]] utils.GPTTrainStep
## BertTrainStep
[[autodoc]] utils.BertTrainStep
## T5TrainStep
[[autodoc]] utils.T5TrainStep
## avg_losses_across_data_parallel_group
[[autodoc]] utils.avg_losses_across_data_parallel_group


@ -21,8 +21,14 @@ instances share the same state, which is initialized on the first instantiation.
These classes are immutable and store information about certain configurations or
states.
## PartialState
[[autodoc]] state.PartialState
## AcceleratorState
[[autodoc]] state.AcceleratorState
## GradientState
[[autodoc]] state.GradientState


@ -13,25 +13,36 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# DataLoaders, Optimizers, and Schedulers
The internal classes Accelerate uses to prepare objects for distributed training
when calling [`~Accelerator.prepare`].
## DataLoader utilities
[[autodoc]] data_loader.prepare_data_loader
[[autodoc]] data_loader.skip_first_batches
## BatchSamplerShard
[[autodoc]] data_loader.BatchSamplerShard
## IterableDatasetShard
[[autodoc]] data_loader.IterableDatasetShard
## DataLoaderShard
[[autodoc]] data_loader.DataLoaderShard
## DataLoaderDispatcher
[[autodoc]] data_loader.DataLoaderDispatcher
## AcceleratedOptimizer
[[autodoc]] optimizer.AcceleratedOptimizer
## AcceleratedScheduler
[[autodoc]] scheduler.AcceleratedScheduler


@ -13,23 +13,38 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Experiment Trackers
## GeneralTracker
[[autodoc]] tracking.GeneralTracker
## TensorBoardTracker
[[autodoc]] tracking.TensorBoardTracker
- __init__
## WandBTracker
[[autodoc]] tracking.WandBTracker
- __init__
## CometMLTracker
[[autodoc]] tracking.CometMLTracker
- __init__
## AimTracker
[[autodoc]] tracking.AimTracker
- __init__
## MLflowTracker
[[autodoc]] tracking.MLflowTracker
- __init__
## ClearMLTracker
[[autodoc]] tracking.ClearMLTracker
- __init__


@ -13,7 +13,7 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Utility functions and classes
Below are a variety of utility functions that 🤗 Accelerate provides, broken down by use-case.
@ -202,8 +202,6 @@ These utilities relate to interacting with PyTorch models
[[autodoc]] utils.set_module_tensor_to_device
[[autodoc]] utils.shard_checkpoint
## Parallel


@ -53,6 +53,8 @@ accelerate launch path_to_script.py --args_for_the_script
To learn more, check out the [Launch distributed code](basic_tutorials/launch) tutorial for more information about launching your scripts.
We also have a [configuration zoo](https://github.com/huggingface/accelerate/blob/main/examples/config_yaml_templates) which showcases a number of premade **minimal** example configurations for a variety of setups you can run.
## Adapt training code
The next main feature of Accelerate is the [`Accelerator`] class which adapts your PyTorch code to run on different distributed setups.
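In its most basic form, sketched below around an assumed existing PyTorch setup, you instantiate an [`Accelerator`] and let it wrap your training objects:

```python
from accelerate import Accelerator

accelerator = Accelerator()
# model, optimizer, and dataloader are assumed to be your existing PyTorch objects
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)
```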


@ -13,15 +13,15 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Big Model Inference

One of the biggest advancements Accelerate provides is [Big Model Inference](../concept_guides/big_model_inference), which allows you to perform inference with models that don't fully fit on your graphics card.

This tutorial will show you how to use Big Model Inference in Accelerate and the Hugging Face ecosystem.

## Accelerate

A typical workflow for loading a PyTorch model is shown below. `ModelClass` is a model that exceeds the GPU memory of your device (mps or cuda).
```py
import torch

my_model = ModelClass(...)
state_dict = torch.load(checkpoint_file)
my_model.load_state_dict(state_dict)
```
With Big Model Inference, the first step is to init an empty skeleton of the model with the `init_empty_weights` context manager. This doesn't require any memory because `my_model` is "parameterless".
```py
from accelerate import init_empty_weights

with init_empty_weights():
    my_model = ModelClass(...)
```
Next, the weights are loaded into the model for inference.

The [`load_checkpoint_and_dispatch`] method loads a checkpoint inside your empty model and dispatches the weights for each layer across all available devices, starting with the fastest devices (GPU, MPS, XPU, NPU, MLU, MUSA) first before moving to the slower ones (CPU and hard drive).

Setting `device_map="auto"` automatically fills all available space on the GPU(s) first, then the CPU, and finally, the hard drive (the absolute slowest option) if there is still not enough memory.

> [!TIP]
> Refer to the [Designing a device map](../concept_guides/big_model_inference#designing-a-device-map) guide for more details on how to design your own device map.
```py
from accelerate import load_checkpoint_and_dispatch

model = load_checkpoint_and_dispatch(
    my_model, checkpoint=checkpoint_file, device_map="auto"
)
```
If there are certain "chunks" of layers that shouldn't be split, pass them to `no_split_module_classes` (see [here](../concept_guides/big_model_inference#loading-weights) for more details).

A model's weights can also be sharded into multiple checkpoints to save memory, such as when the `state_dict` doesn't fit in memory (see [here](../concept_guides/big_model_inference#sharded-checkpoints) for more details).
Now that the model is fully dispatched, you can perform inference.
```py
input = torch.randn(2,3)
input = input.to("cuda")
output = model(input)
```
Each time an input is passed through a layer, it is sent from the CPU to the GPU (or disk to CPU to GPU), the output is calculated, and the layer is removed from the GPU going back down the line. While this adds some overhead to inference, it enables you to run any size model on your system, as long as the largest layer fits on your GPU.
<Tip>

Multiple GPUs, or "model parallelism", can be utilized but only one GPU will be active at any given moment. This forces the GPU to wait for the previous GPU to send it the output. You should launch your script normally with Python instead of other tools like torchrun and accelerate launch.

</Tip>

> [!TIP]
> You may also be interested in *pipeline parallelism* which utilizes all available GPUs at once, instead of only having one GPU active at a time. This approach is less flexible though. For more details, refer to the [Memory-efficient pipeline parallelism](./distributed_inference#memory-efficient-pipeline-parallelism-experimental) guide.

<Youtube id="MWCSGj9jEAo"/>
### Complete Example
Take a look at a full example of Big Model Inference below.
```py
import torch
from accelerate import init_empty_weights, load_checkpoint_and_dispatch

with init_empty_weights():
    my_model = ModelClass(...)

model = load_checkpoint_and_dispatch(
    my_model, checkpoint=checkpoint_file, device_map="auto"
)

input = torch.randn(2,3)
input = input.to("cuda")
output = model(input)
```
## Hugging Face ecosystem

Other libraries in the Hugging Face ecosystem, like Transformers or Diffusers, support Big Model Inference in their [`~transformers.PreTrainedModel.from_pretrained`] constructors.

You just need to add `device_map="auto"` in [`~transformers.PreTrainedModel.from_pretrained`] to enable Big Model Inference.

For example, load Big Science's T0pp 11 billion parameter model with Big Model Inference.
```py
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", device_map="auto")
```
After loading the model, the empty init and smart dispatch steps from before are executed and the model is fully ready to make use of all the resources in your machine. Through these constructors, you can also save more memory by specifying the `torch_dtype` parameter to load a model in a lower precision.
```py
import torch
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", device_map="auto", torch_dtype=torch.float16)
```
To learn more about this, check out the Transformers documentation available [here](https://huggingface.co/docs/transformers/main/en/main_classes/model#large-model-loading).
## Where to go from here

For a more detailed explanation of Big Model Inference, make sure to check out the [conceptual guide](../concept_guides/big_model_inference)!

View File

@ -15,8 +15,8 @@ rendered properly in your Markdown viewer.
# Checkpointing
When training a PyTorch model with Accelerate, you may often want to save and continue a state of training. Doing so requires
saving and loading the model, optimizer, RNG generators, and the GradScaler. Inside Accelerate are two convenience functions to achieve this quickly:
- Use [`~Accelerator.save_state`] for saving everything mentioned above to a folder location
- Use [`~Accelerator.load_state`] for loading everything stored from an earlier `save_state`
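A minimal sketch, where the checkpoint directory name is illustrative:

```python
from accelerate import Accelerator

accelerator = Accelerator()
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

# save the model, optimizer, RNG generators, and GradScaler to a folder
accelerator.save_state("my_checkpoint")

# ... later, restore everything from that folder
accelerator.load_state("my_checkpoint")
```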

View File

@ -23,7 +23,7 @@ Distributed Data Parallel (DDP) communication hooks provide a generic interface
- **BF16 Compression Hook**: Similar to FP16, but uses the Brain Floating Point format (`torch.bfloat16`), which can be more efficient on certain hardware.
- **PowerSGD Hook**: An advanced gradient compression algorithm that provides high compression rates and can accelerate bandwidth-bound distributed training.
In this tutorial, you will see how to quickly set up DDP communication hooks and perform training with the utilities provided in Accelerate, which can be as simple as adding just one new line of code! This demonstrates how to use DDP communication hooks to optimize gradient communication in distributed training with the Accelerate library.
## FP16 Compression Hook
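A sketch of what enabling the FP16 hook can look like through a kwargs handler (verify the exact import path against the current API reference):

```python
from accelerate import Accelerator, DistributedDataParallelKwargs
from accelerate.utils import DDPCommunicationHookType

kwargs = [DistributedDataParallelKwargs(comm_hook=DDPCommunicationHookType.FP16)]
accelerator = Accelerator(kwargs_handlers=kwargs)
```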


@ -33,7 +33,7 @@ DeepSpeed ZeRO-2 is primarily used only for training, as its features are of no
DeepSpeed ZeRO-3 can be used for inference as well since it allows huge models to be loaded on multiple GPUs, which
won't be possible on a single GPU.
Accelerate integrates [DeepSpeed](https://github.com/microsoft/DeepSpeed) via 2 options:
1. Integration of the DeepSpeed features via `deepspeed config file` specification in `accelerate config`. You just supply your custom config file or use our template. Most of
this document is focused on this feature. This supports all the core features of DeepSpeed and gives the user a lot of flexibility.
@ -45,7 +45,7 @@ won't be possible on a single GPU.
Training:
1. Accelerate integrates all features of DeepSpeed ZeRO. This includes all the ZeRO stages 1, 2 and 3 as well as ZeRO-Offload, ZeRO-Infinity (which can offload to disk/NVMe) and ZeRO++.
Below is a short description of Data Parallelism using ZeRO - Zero Redundancy Optimizer along with diagram from this [blog post](https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/)
![ZeRO Data Parallelism](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-zero.png)
@ -727,7 +727,7 @@ Papers:
- [ZeRO++: Extremely Efficient Collective Communication for Giant Model Training](https://arxiv.org/abs/2306.10209)
Finally, please, remember that `Accelerate` only integrates DeepSpeed, therefore if you
have any problems or questions with regards to DeepSpeed usage, please, file an issue with [DeepSpeed GitHub](https://github.com/microsoft/DeepSpeed/issues).


@ -0,0 +1,246 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Using multiple models with DeepSpeed
<Tip warning={true}>
This guide assumes that you have read and understood the [DeepSpeed usage guide](./deepspeed.md).
</Tip>
Running multiple models with Accelerate and DeepSpeed is useful for:
* Knowledge distillation
* Post-training techniques like RLHF (see the [TRL](https://github.com/huggingface/trl) library for more examples)
* Training multiple models at once
Currently, Accelerate has a **very experimental API** to help you use multiple models.
This tutorial will focus on two common use cases:
1. Knowledge distillation, where a smaller student model is trained to mimic a larger, better-performing teacher. If the student model fits on a single GPU, we can use ZeRO-2 for training and ZeRO-3 to shard the teacher for inference. This is significantly faster than using ZeRO-3 for both models.
2. Training multiple *disjoint* models at once.
## Knowledge distillation
Knowledge distillation is a good example of using multiple models, but only training one of them.
Normally, you would use a single [`utils.DeepSpeedPlugin`] for both models. However, in this case, there are two separate configurations. Accelerate allows you to create and use multiple plugins **if and only if** they are in a `dict` so that you can reference and enable the proper plugin when needed.
```python
from accelerate.utils import DeepSpeedPlugin
zero2_plugin = DeepSpeedPlugin(hf_ds_config="zero2_config.json")
zero3_plugin = DeepSpeedPlugin(hf_ds_config="zero3_config.json")
deepspeed_plugins = {"student": zero2_plugin, "teacher": zero3_plugin}
```
The `zero2_config.json` should be configured for full training (so specify `scheduler` and `optimizer` if you are not utilizing your own), while `zero3_config.json` should only be configured for the inference model, as shown in the example below.
```json
{
    "bf16": {
        "enabled": "auto"
    },
    "zero_optimization": {
        "stage": 3,
        "overlap_comm": true,
        "reduce_bucket_size": "auto",
        "stage3_prefetch_bucket_size": "auto",
        "stage3_param_persistence_threshold": "auto",
        "stage3_max_live_parameters": "auto",
        "stage3_max_reuse_distance": "auto"
    },
    "train_micro_batch_size_per_gpu": 1
}
```
An example `zero2_config.json` configuration is shown below.
```json
{
    "bf16": {
        "enabled": "auto"
    },
    "optimizer": {
        "type": "AdamW",
        "params": {
            "lr": "auto",
            "weight_decay": "auto",
            "torch_adam": true,
            "adam_w_mode": true
        }
    },
    "scheduler": {
        "type": "WarmupLR",
        "params": {
            "warmup_min_lr": "auto",
            "warmup_max_lr": "auto",
            "warmup_num_steps": "auto"
        }
    },
    "zero_optimization": {
        "stage": 2,
        "offload_optimizer": {
            "device": "cpu",
            "pin_memory": true
        }
    },
    "gradient_accumulation_steps": 1,
    "gradient_clipping": "auto",
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto"
}
```
<Tip>
DeepSpeed will raise an error if `train_micro_batch_size_per_gpu` isn't specified, even if this particular model isn't being trained.
</Tip>
From here, create a single [`Accelerator`] and pass in both configurations.
```python
from accelerate import Accelerator
accelerator = Accelerator(deepspeed_plugins=deepspeed_plugins)
```
Now let's see how to use them.
### Student model
By default, Accelerate sets the first item in the `dict` as the default or enabled plugin (`"student"` plugin). Verify this by using the [`utils.deepspeed.get_active_deepspeed_plugin`] function to see which plugin is enabled.
```python
from accelerate.utils import get_active_deepspeed_plugin

active_plugin = get_active_deepspeed_plugin(accelerator.state)
assert active_plugin is deepspeed_plugins["student"]
```
[`AcceleratorState`] also keeps the active DeepSpeed plugin saved in `state.deepspeed_plugin`.
```python
assert active_plugin is accelerator.deepspeed_plugin
```
Since `student` is the currently active plugin, let's go ahead and prepare the model, optimizer, and scheduler.
```python
student_model, optimizer, scheduler = ...
student_model, optimizer, scheduler, train_dataloader = accelerator.prepare(student_model, optimizer, scheduler, train_dataloader)
```
Now it's time to deal with the teacher model.
### Teacher model
First, you need to specify in [`Accelerator`] that the `zero3_config.json` configuration should be used.
```python
accelerator.state.select_deepspeed_plugin("teacher")
```
This disables the `"student"` plugin and enables the `"teacher"` plugin instead. The
DeepSpeed stateful config inside of Transformers is updated, and it changes which plugin configuration gets called when using
`deepspeed.initialize()`. This allows you to use the automatic `deepspeed.zero.Init` context manager integration Transformers provides.
```python
teacher_model = AutoModel.from_pretrained(...)
teacher_model = accelerator.prepare(teacher_model)
```
Otherwise, you should manually initialize the model with `deepspeed.zero.Init`.
```python
with deepspeed.zero.Init(accelerator.deepspeed_plugin.config):
    model = MyModel(...)
```
### Training
From here, your training loop can be whatever you like, as long as `teacher_model` is never actually being trained.
```python
teacher_model.eval()
student_model.train()
for batch in train_dataloader:
    with torch.no_grad():
        output_teacher = teacher_model(**batch)

    output_student = student_model(**batch)

    # Combine the losses or modify them in some way
    loss = output_teacher.loss + output_student.loss
    accelerator.backward(loss)
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```
## Train multiple disjoint models
Training multiple models is a more complicated scenario.
In its current state, we assume each model is **completely disjoint** from the other during training.
This scenario still requires two [`utils.DeepSpeedPlugin`]'s to be made. However, you also need a second [`Accelerator`], since different `deepspeed` engines are being called at different times. A single [`Accelerator`] can only carry one instance at a time.
Since the [`state.AcceleratorState`] is a stateful object though, it is already aware of both [`utils.DeepSpeedPlugin`]'s available. You can just instantiate a second [`Accelerator`] with no extra arguments.
```python
first_accelerator = Accelerator(deepspeed_plugins=deepspeed_plugins)
second_accelerator = Accelerator()
```
You can then call `first_accelerator.state.select_deepspeed_plugin()` (or its `second_accelerator` counterpart) to enable
a particular plugin, disabling the others, and then call [`~Accelerator.prepare`].
```python
# select the plugin on either accelerator's state, or via `AcceleratorState().select_deepspeed_plugin(...)`
first_accelerator.state.select_deepspeed_plugin("first_model")
first_model = AutoModel.from_pretrained(...)
# For this example, `get_training_items` is a nonexistent function that gets the setup we need for training
first_optimizer, first_scheduler, train_dl, eval_dl = get_training_items(first_model)
first_model, first_optimizer, first_scheduler, train_dl, eval_dl = first_accelerator.prepare(
    first_model, first_optimizer, first_scheduler, train_dl, eval_dl
)

second_accelerator.state.select_deepspeed_plugin("second_model")
second_model = AutoModel.from_pretrained(...)
# For this example, `get_training_items` is a nonexistent function that gets the setup we need for training
second_optimizer, second_scheduler, _, _ = get_training_items(second_model)
second_model, second_optimizer, second_scheduler = second_accelerator.prepare(
    second_model, second_optimizer, second_scheduler
)
```
And now you can train:
```python
for batch in dl:
    outputs1 = first_model(**batch)
    first_accelerator.backward(outputs1.loss)
    first_optimizer.step()
    first_scheduler.step()
    first_optimizer.zero_grad()

    outputs2 = second_model(**batch)
    second_accelerator.backward(outputs2.loss)
    second_optimizer.step()
    second_scheduler.step()
    second_optimizer.zero_grad()
```
## Resources
To see more examples, please check out the [related tests](https://github.com/huggingface/accelerate/blob/main/src/accelerate/test_utils/scripts/external_deps/test_ds_multiple_model.py) currently in Accelerate.


@ -13,7 +13,7 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Distributed Inference with 🤗 Accelerate
# Distributed inference
Distributed inference can fall into three brackets:
@ -56,13 +56,13 @@ def run_inference(rank, world_size):
One will notice how we have to check the rank to know what prompt to send, which can be a bit tedious.
A user might then also think that with Accelerate, using the `Accelerator` to prepare a dataloader for such a task might also be
a simple way to manage this. (To learn more, check out the relevant section in the [Quick Tour](../quicktour#distributed-evaluation))

Can it manage it? Yes. Does it add unneeded extra code, however? Also yes.

With Accelerate, we can simplify this process by using the [`Accelerator.split_between_processes`] context manager (which also exists in `PartialState` and `AcceleratorState`).
This function will automatically split whatever data you pass to it (be it a prompt, a set of tensors, a dictionary of the prior data, etc.) across all the processes (with a potential
to be padded) for you to use right away.
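In its simplest form, a sketch independent of any particular model:

```python
from accelerate import PartialState

state = PartialState()
with state.split_between_processes([1, 2, 3, 4, 5, 6, 7, 8]) as items:
    # with 2 processes, process 0 receives [1, 2, 3, 4] and process 1 receives [5, 6, 7, 8]
    print(f"process {state.process_index}: {items}")
```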
@ -82,7 +82,7 @@ with distributed_state.split_between_processes(["a dog", "a cat"]) as prompt:
```py
with distributed_state.split_between_processes(["a dog", "a cat"]) as prompt:
    result = pipe(prompt).images[0]
    result.save(f"result_{distributed_state.process_index}.png")
```
And then to launch the code, we can use Accelerate:

If you have generated a config file with `accelerate config`:
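A sketch, where the config file and script names stand in for your own:

```bash
accelerate launch --config_file my_config_file.yaml distributed_inference.py
```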
@ -144,22 +144,20 @@ You can find more complex examples [here](https://github.com/huggingface/acceler
## Memory-efficient pipeline parallelism (experimental)
This next part will discuss using *pipeline parallelism*. This is an **experimental** API that utilizes [torch.distributed.pipelining](https://pytorch.org/docs/stable/distributed.pipelining.html#) as a native solution.
The general idea with pipeline parallelism is: say you have 4 GPUs and a model big enough it can be *split* across four GPUs using `device_map="auto"`. With this method you can send in 4 inputs at a time (for example here, any amount works) and each model chunk will work on an input, then receive the next input once the prior chunk has finished, making it *much* more efficient **and faster** than the method described earlier. Here's a visual taken from the PyTorch repository:
![Pipeline parallelism example](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/accelerate/pipeline_parallel.png)
To illustrate how you can use this with Accelerate, we have created an [example zoo](https://github.com/huggingface/accelerate/tree/main/examples/inference) showcasing a number of different models and situations. In this tutorial, we'll show this method for GPT2 across two GPUs.
Before you proceed, please make sure you have the latest PyTorch version installed by running the following:

```bash
pip install torch
```
Start by creating the model on the CPU:
```python
from transformers import GPT2ForSequenceClassification, GPT2Config

config = GPT2Config()
model = GPT2ForSequenceClassification(config)
model.eval()
```
Next you'll need to create some example inputs to use. These help `torch.distributed.pipelining` trace the model.
<Tip warning={true}>
How you make this example will determine the relative batch size that will be used/passed


@ -13,14 +13,14 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Start Here!

Please use the interactive tool below to help you get started with learning about a particular
feature of Accelerate and how to utilize it! It will provide you with a code diff, an explanation
towards what is going on, as well as provide you with some useful links to explore more within
the documentation!
Most code examples start from the following Python code before integrating Accelerate in some way:
```python
for batch in dataloader:
    optimizer.zero_grad()
    inputs, targets = batch
    inputs = inputs.to(device)
    targets = targets.to(device)
    outputs = model(inputs)
    loss = loss_function(outputs, targets)
    loss.backward()
    optimizer.step()
    scheduler.step()
```


@ -79,7 +79,7 @@ Currently, `Accelerate` supports the following config through the CLI:
`fsdp_auto_wrap_policy`: [1] TRANSFORMER_BASED_WRAP, [2] SIZE_BASED_WRAP, [3] NO_WRAP
`fsdp_transformer_layer_cls_to_wrap`: Only applicable for Transformers. When using `fsdp_auto_wrap_policy=TRANSFORMER_BASED_WRAP`, a user may provide a comma-separated string of transformer layer class names (case-sensitive) to wrap, e.g., `BertLayer`, `GPTJBlock`, `T5Block`, `BertLayer,BertEmbeddings,BertSelfOutput`. This is important because submodules that share weights (e.g., embedding layers) should not end up in different FSDP wrapped units. Using this policy, wrapping happens for each block containing Multi-Head Attention followed by a couple of MLP layers. Remaining layers, including the shared embeddings, are conveniently wrapped in the same outermost FSDP unit. Therefore, use this for transformer-based models. You can use the `model._no_split_modules` for Transformer models by answering `yes` to `Do you want to use the model's _no_split_modules to wrap`. It will try to use `model._no_split_modules` when possible.
`fsdp_min_num_params`: minimum number of parameters when using `fsdp_auto_wrap_policy=SIZE_BASED_WRAP`.
@ -91,7 +91,7 @@ Currently, `Accelerate` supports the following config through the CLI:
`fsdp_use_orig_params`: If True, allows non-uniform `requires_grad` during init, which means support for interspersed frozen and trainable parameters. This setting is useful in cases such as parameter-efficient fine-tuning as discussed in [this post](https://dev-discuss.pytorch.org/t/rethinking-pytorch-fully-sharded-data-parallel-fsdp-from-first-principles/1019). This option also allows one to have multiple optimizer param groups. This should be `True` when creating an optimizer before preparing/wrapping the model with FSDP.
`fsdp_cpu_ram_efficient_loading`: Only applicable for 🤗 Transformers models. If True, only the first process loads the pretrained model checkpoint while all other processes have empty weights. This should be set to False if you experience errors when loading the pretrained 🤗 Transformers model via `from_pretrained` method. When this setting is True `fsdp_sync_module_states` also must to be True, otherwise all the processes except the main process would have random weights leading to unexpected behaviour during training. For this to work, make sure the distributed process group is initialized before calling Transformers `from_pretrained` method. When using 🤗 Trainer API, the distributed process group is initialized when you create an instance of `TrainingArguments` class.
`fsdp_cpu_ram_efficient_loading`: Only applicable for Transformers models. If True, only the first process loads the pretrained model checkpoint while all other processes have empty weights. This should be set to False if you experience errors when loading the pretrained Transformers model via the `from_pretrained` method. When this setting is True, `fsdp_sync_module_states` must also be True, otherwise all processes except the main process would have random weights, leading to unexpected behaviour during training. For this to work, make sure the distributed process group is initialized before calling the Transformers `from_pretrained` method. When using the Trainer API, the distributed process group is initialized when you create an instance of the `TrainingArguments` class.
`fsdp_sync_module_states`: If True, each individually wrapped FSDP unit will broadcast module parameters from rank 0.
@ -187,7 +187,7 @@ accelerate merge-weights pytorch_model_fsdp_0/ output_path
## A few caveats to be aware of
- In case of multiple models, pass the optimizers to the prepare call in the same order as the corresponding models, otherwise `accelerator.save_state()` and `accelerator.load_state()` will result in wrong/unexpected behaviour.
- This feature is incompatible with `--predict_with_generate` in the `run_translation.py` script of 🤗 `Transformers` library.
- This feature is incompatible with `--predict_with_generate` in the `run_translation.py` script of `Transformers` library.
For more control, users can leverage the `FullyShardedDataParallelPlugin`. After creating an instance of this class, users can pass it to the Accelerator class instantiation.
For more information on these options, please refer to the PyTorch [FullyShardedDataParallel](https://github.com/pytorch/pytorch/blob/0df2e863fbd5993a7b9e652910792bd21a516ff3/torch/distributed/fsdp/fully_sharded_data_parallel.py#L236) code.
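As a minimal sketch of this approach (the single argument shown is an assumption for illustration; consult the `FullyShardedDataParallelPlugin` docstring for the full set of options):
```python
from accelerate import Accelerator, FullyShardedDataParallelPlugin

# Configure FSDP in code instead of via `accelerate config`;
# `use_orig_params=True` mirrors the `fsdp_use_orig_params` CLI option above.
fsdp_plugin = FullyShardedDataParallelPlugin(use_orig_params=True)
accelerator = Accelerator(fsdp_plugin=fsdp_plugin)
```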

View File

@ -13,7 +13,7 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Performing gradient accumulation with 🤗 Accelerate
# Performing gradient accumulation with Accelerate
Gradient accumulation is a technique where you can train on bigger batch sizes than
your machine would normally be able to fit into memory. This is done by accumulating gradients over
@ -22,7 +22,7 @@ several batches, and only stepping the optimizer after a certain number of batch
While technically standard gradient accumulation code would work fine in a distributed setup, it is not the most efficient
method for doing so and you may experience considerable slowdowns!
In this tutorial you will see how to quickly setup gradient accumulation and perform it with the utilities provided in 🤗 Accelerate,
In this tutorial you will see how to quickly set up gradient accumulation and perform it with the utilities provided in Accelerate,
which can total to adding just one new line of code!
This example will use a very simplistic PyTorch training loop that performs gradient accumulation every two batches:
@ -47,9 +47,9 @@ for index, batch in enumerate(training_dataloader):
optimizer.zero_grad()
```
## Converting it to 🤗 Accelerate
## Converting it to Accelerate
First the code shown earlier will be converted to utilize 🤗 Accelerate without the special gradient accumulation helper:
First the code shown earlier will be converted to utilize Accelerate without the special gradient accumulation helper:
```diff
+ from accelerate import Accelerator
@ -79,9 +79,9 @@ First the code shown earlier will be converted to utilize 🤗 Accelerate withou
</Tip>
## Letting 🤗 Accelerate handle gradient accumulation
## Letting Accelerate handle gradient accumulation
All that is left now is to let 🤗 Accelerate handle the gradient accumulation for us. To do so you should pass in a `gradient_accumulation_steps` parameter to [`Accelerator`], dictating the number
All that is left now is to let Accelerate handle the gradient accumulation for us. To do so you should pass in a `gradient_accumulation_steps` parameter to [`Accelerator`], dictating the number
of steps to perform before each call to `step()` and how to automatically adjust the loss during the call to [`~Accelerator.backward`]:
```diff
@ -120,7 +120,7 @@ As you can see the [`Accelerator`] is able to keep track of the batch number you
<Tip>
Typically with gradient accumulation, you would need to adjust the number of steps to reflect the change in total batches you are
training on. 🤗 Accelerate automagically does this for you by default. Behind the scenes we instantiate a [`GradientAccumulationPlugin`] configured to do this.
training on. Accelerate automagically does this for you by default. Behind the scenes we instantiate a [`GradientAccumulationPlugin`] configured to do this.
</Tip>
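For reference, here is a minimal self-contained sketch of the pattern this tutorial builds up to (the toy model and data are placeholders, not part of the original example):
```python
import torch
from accelerate import Accelerator

accelerator = Accelerator(gradient_accumulation_steps=2)

# Toy model and data, purely illustrative
model = torch.nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
dataset = torch.utils.data.TensorDataset(torch.randn(32, 4), torch.randn(32, 1))
training_dataloader = torch.utils.data.DataLoader(dataset, batch_size=8)
loss_function = torch.nn.MSELoss()

model, optimizer, training_dataloader = accelerator.prepare(model, optimizer, training_dataloader)

for inputs, targets in training_dataloader:
    # Gradients are synchronized (and the optimizer actually steps)
    # only every `gradient_accumulation_steps` batches
    with accelerator.accumulate(model):
        loss = loss_function(model(inputs), targets)
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
```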
@ -140,7 +140,7 @@ accelerator = Accelerator(..., gradient_accumulation_plugin=plugin)
## The finished code
Below is the finished implementation for performing gradient accumulation with 🤗 Accelerate
Below is the finished implementation for performing gradient accumulation with Accelerate
```python
from accelerate import Accelerator
@ -171,7 +171,7 @@ To learn more about what magic this wraps around, read the [Gradient Synchroniza
## Self-contained example
Here is a self-contained example that you can run to see gradient accumulation in action with 🤗 Accelerate:
Here is a self-contained example that you can run to see gradient accumulation in action with Accelerate:
```python
import torch

View File

@ -40,7 +40,7 @@ Check more approaches for [IPEX installation](https://intel.github.io/intel-exte
## How It Works For Training optimization in CPU
🤗 Accelerate has integrated [IPEX](https://github.com/intel/intel-extension-for-pytorch), all you need to do is enabling it through the config.
Accelerate has integrated [IPEX](https://github.com/intel/intel-extension-for-pytorch); all you need to do is enable it through the config.
**Scenario 1**: Acceleration of non-distributed CPU training

View File

@ -13,12 +13,12 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Using Local SGD with 🤗 Accelerate
# Using Local SGD with Accelerate
Local SGD is a technique for distributed training where gradients are not synchronized every step. Thus, each process updates its own version of the model weights, and after a given number of steps these weights are synchronized by averaging across all processes. This improves communication efficiency and can lead to a substantial training speed-up, especially when a computer lacks a faster interconnect such as NVLink.
Unlike gradient accumulation (where improving communication efficiency requires increasing the effective batch size), Local SGD does not require changing a batch size or a learning rate / schedule. However, if necessary, Local SGD can be combined with gradient accumulation as well.
In this tutorial you will see how to quickly setup Local SGD 🤗 Accelerate. Compared to a standard Accelerate setup, this requires only two extra lines of code.
In this tutorial you will see how to quickly set up Local SGD with Accelerate. Compared to a standard Accelerate setup, this requires only two extra lines of code.
This example will use a very simplistic PyTorch training loop that performs gradient accumulation every two batches:
@ -42,9 +42,9 @@ for index, batch in enumerate(training_dataloader):
optimizer.zero_grad()
```
## Converting it to 🤗 Accelerate
## Converting it to Accelerate
First the code shown earlier will be converted to use 🤗 Accelerate with neither a LocalSGD or a gradient accumulation helper:
First the code shown earlier will be converted to use Accelerate with neither a `LocalSGD` nor a gradient accumulation helper:
```diff
+ from accelerate import Accelerator
@ -67,9 +67,9 @@ First the code shown earlier will be converted to use 🤗 Accelerate with neit
scheduler.step()
```
## Letting 🤗 Accelerate handle model synchronization
## Letting Accelerate handle model synchronization
All that is left now is to let 🤗 Accelerate handle model parameter synchronization **and** the gradient accumulation for us. For simplicity let us assume we need to synchronize every 8 steps. This is
All that is left now is to let Accelerate handle model parameter synchronization **and** the gradient accumulation for us. For simplicity, let us assume we need to synchronize every 8 steps. This is
achieved by adding one `with LocalSGD` statement and one call to `local_sgd.step()` after every optimizer step:
```diff
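For reference, a minimal sketch of the resulting pattern (again with a placeholder toy model and data):
```python
import torch
from accelerate import Accelerator
from accelerate.local_sgd import LocalSGD

accelerator = Accelerator()

# Toy model and data, purely illustrative
model = torch.nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
dataset = torch.utils.data.TensorDataset(torch.randn(32, 4), torch.randn(32, 1))
training_dataloader = torch.utils.data.DataLoader(dataset, batch_size=8)
loss_function = torch.nn.MSELoss()

model, optimizer, training_dataloader = accelerator.prepare(model, optimizer, training_dataloader)

with LocalSGD(accelerator=accelerator, model=model, local_sgd_steps=8, enabled=True) as local_sgd:
    for inputs, targets in training_dataloader:
        loss = loss_function(model(inputs), targets)
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
        # Every `local_sgd_steps` optimizer steps, the model weights are
        # averaged across all processes
        local_sgd.step()
```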

View File

@ -15,11 +15,11 @@ rendered properly in your Markdown viewer.
# Low Precision Training Methods
🤗 Accelerate provides integrations to train on lower precision methods using specified supported hardware through the `TransformersEngine` and `MS-AMP` packages. This documentation will help guide you through what hardware is supported, how to configure your [`Accelerator`] to leverage the low precision methods, and what you can expect when training.
Accelerate provides integrations for training with lower precision methods on specified supported hardware through the `TransformerEngine` and `MS-AMP` packages. This documentation will help guide you through what hardware is supported, how to configure your [`Accelerator`] to leverage the low precision methods, and what you can expect when training.
## What training on FP8 means
To explore more of the nitty-gritty in training in FP8 with PyTorch and 🤗 Accelerate, check out the [concept_guide](../concept_guides/low_precision_training) on why this can be difficult. But essentially rather than training in BF16, some (or all) aspects of training a model can be performed using 8 bits instead of 16. The challenge is doing so without degrading final performance.
To explore more of the nitty-gritty of training in FP8 with PyTorch and Accelerate, check out the [concept_guide](../concept_guides/low_precision_training) on why this can be difficult. But essentially, rather than training in BF16, some (or all) aspects of training a model can be performed using 8 bits instead of 16. The challenge is doing so without degrading final performance.
This is only enabled on specific NVIDIA hardware, namely:
@ -39,7 +39,7 @@ from accelerate import Accelerator
accelerator = Accelerator(mixed_precision="fp8")
```
By default, if `MS-AMP` is available in your environment, 🤗 Accelerate will automatically utilize it as a backend. To specify it yourself (and customize other parts of the FP8 mixed precision setup), you can utilize the [`utils.FP8RecipeKwargs`] or clarify it in your config `yaml`/during `accelerate launch`:
By default, if `MS-AMP` is available in your environment, Accelerate will automatically utilize it as a backend. To specify it yourself (and customize other parts of the FP8 mixed precision setup), you can utilize the [`utils.FP8RecipeKwargs`] or specify it in your config `yaml`/during `accelerate launch`:
```python
from accelerate import Accelerator
@ -56,7 +56,7 @@ fp8_config:
amax_compute_algorithm: max
amax_history_length: 1024
backend: TE
fp8_format: E4M3
fp8_format: HYBRID
interval: 1
margin: 0
override_linear_precision: false
@ -67,7 +67,7 @@ fp8_config:
Of the two, `MS-AMP` is traditionally the easier one to configure as there is only a single argument: the optimization level.
Currently two levels of optimization are supported in the 🤗 Accelerate integration, `"O1"` and `"O2"` (using the letter 'o', not zero).
Currently two levels of optimization are supported in the Accelerate integration, `"O1"` and `"O2"` (using the letter 'o', not zero).
* `"O1"` will cast the weight gradients and `all_reduce` communications to happen in 8-bit, while the rest are done in 16 bit. This reduces the general GPU memory usage and speeds up communication bandwidths.
* `"O2"` will also cast first-order optimizer states into 8 bit, while the second order states are in FP16. (Currently just the `Adam` optimizer is supported). This tries its best to minimize final accuracy degradation and will save the highest potential memory.
@ -96,7 +96,7 @@ fp8_config:
TransformerEngine has much more available for customizing how and what FP8 calculations are performed. A full list of supported arguments and what they mean is available in [NVIDIA's documentation](https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/api/common.html); however, they are restated as part of [`FP8KwargsHandler`]'s docstring for your convenience.
🤗 Accelerate tries to set sensible defaults, but exploring and tweaking the various parameters yourself can lead to better performance potentially.
Accelerate tries to set sensible defaults, but exploring and tweaking the various parameters yourself can potentially lead to better performance.
To use it, specify `backend="te"` and modify any of the arguments you want as part of your kwarg handler:
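For example, a sketch along these lines (the values shown mirror the `fp8_config` defaults elsewhere on this page; they are illustrative, not required):
```python
from accelerate import Accelerator
from accelerate.utils import FP8RecipeKwargs

kwargs = [
    FP8RecipeKwargs(
        backend="te",
        fp8_format="HYBRID",      # E4M3 during the forward pass, E5M2 during backward
        amax_history_len=1024,
        amax_compute_algo="max",
    )
]
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=kwargs)
```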
@ -117,7 +117,7 @@ fp8_config:
amax_compute_algorithm: max
amax_history_length: 1024
backend: TE
fp8_format: E4M3
fp8_format: HYBRID
interval: 1
margin: 0
override_linear_precision: false

View File

@ -32,7 +32,7 @@ independently and in parallel by each shard followed by syncing across all GPUs
In a simple transformer layer, this leads to 2 `all-reduces` in the forward path and 2 in the backward path.
For more details, please refer research paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using
Model Parallelism](https://arxiv.org/pdf/1909.08053.pdf) and
this section of 🤗 blogpost [The Technology Behind BLOOM Training](https://huggingface.co/blog/bloom-megatron-deepspeed#tensor-parallelism).
this section of the blogpost [The Technology Behind BLOOM Training](https://huggingface.co/blog/bloom-megatron-deepspeed#tensor-parallelism).
b. **Pipeline Parallelism (PP)**: Reduces memory footprint and enables large scale training via inter-node parallelization.
@ -41,7 +41,7 @@ Layers are distributed uniformly across PP stages. For example, if a model has `
pipeline parallelism, each GPU will have `6` layers (24/4). For more details on schedules to reduce the idle time of PP,
please refer to the research paper [Efficient Large-Scale Language Model Training on GPU Clusters
Using Megatron-LM](https://arxiv.org/pdf/2104.04473.pdf) and
this section of 🤗 blogpost [The Technology Behind BLOOM Training](https://huggingface.co/blog/bloom-megatron-deepspeed#pipeline-parallelism).
this section of the blogpost [The Technology Behind BLOOM Training](https://huggingface.co/blog/bloom-megatron-deepspeed#pipeline-parallelism).
c. **Sequence Parallelism (SP)**: Reduces memory footprint without any additional communication. Only applicable when using TP.
It reduces the activation memory required, as it prevents the same copies from being on the tensor parallel ranks
@ -57,7 +57,7 @@ d. **Data Parallelism (DP)** via Distributed Optimizer: Reduces the memory footp
For example, when using Adam optimizer with mixed-precision training, each parameter accounts for 12 bytes of memory.
This gets distributed equally across the GPUs, i.e., each parameter would account for 3 bytes (12/4) if we have 4 GPUs.
For more details, please refer the research paper [ZeRO: Memory Optimizations Toward Training Trillion
Parameter Models](https://arxiv.org/pdf/1910.02054.pdf) and following section of 🤗 blog
Parameter Models](https://arxiv.org/pdf/1910.02054.pdf) and the following section of the blog
[The Technology Behind BLOOM Training](https://huggingface.co/blog/bloom-megatron-deepspeed#zero-data-parallelism).
e. **Selective Activation Recomputation**: Reduces the memory footprint of activations significantly via smart activation checkpointing.
@ -72,9 +72,9 @@ PyTorch JIT compiled Fused GeLU and Fused Bias+Dropout+Residual addition.
g. **Support for Indexed datasets**: Efficient binary format of datasets for large scale training. Support for the `mmap`, `cached` index file and the `lazy` loader format.
h. **Checkpoint reshaping and interoperability**: Utility for reshaping Megatron-LM checkpoints of variable
tensor and pipeline parallel sizes to the beloved 🤗 Transformers sharded checkpoints as it has great support with plethora of tools
such as 🤗 Accelerate Big Model Inference, Megatron-DeepSpeed Inference etc.
Support is also available for converting 🤗 Transformers sharded checkpoints to Megatron-LM checkpoint of variable tensor and pipeline parallel sizes
tensor and pipeline parallel sizes to the beloved Transformers sharded checkpoints, as it has great support with a plethora of tools
such as Accelerate Big Model Inference, Megatron-DeepSpeed Inference, etc.
Support is also available for converting Transformers sharded checkpoints to a Megatron-LM checkpoint of variable tensor and pipeline parallel sizes
for large scale training.
@ -359,7 +359,7 @@ def main():
2. To use the Megatron-LM datasets, a few more changes are required. Dataloaders for these datasets
are available only on rank 0 of each tensor parallel group. As such, there are ranks where the dataloader won't be
available, and this requires tweaks to the training loop. Being able to do all this shows how
flexible and extensible 🤗 Accelerate is. The changes required are as follows.
flexible and extensible Accelerate is. The changes required are as follows.
a. For Megatron-LM indexed datasets, we need to use `MegatronLMDummyDataLoader`
and pass the required dataset args to it such as `data_path`, `seq_length` etc.
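A hedged sketch of what this looks like in the training script (the `args` attributes are assumptions carried over from the surrounding example):
```python
from accelerate.utils import MegatronLMDummyDataLoader

megatron_dataloader_config = {
    "data_path": args.data_path,                       # indexed dataset prefix(es)
    "splits_string": args.splits_string,               # e.g. "949,50,1"
    "seq_length": args.block_size,
    "micro_batch_size": args.per_device_train_batch_size,
}
megatron_dataloader = MegatronLMDummyDataLoader(**megatron_dataloader_config)
```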
@ -391,7 +391,7 @@ c. Changes to training and evaluation loops as dataloader is only available on t
So, we need to iterate only if the dataloader isn't `None`, else provide an empty dict.
As such, we loop using a `while` loop and break when `completed_steps` equals `args.max_train_steps`.
This is similar to the Megatron-LM setup wherein the user has to provide `max_train_steps` when using Megatron-LM indexed datasets.
This displays how flexible and extensible 🤗 Accelerate is.
This displays how flexible and extensible Accelerate is.
```python
while completed_steps < args.max_train_steps:
@ -414,10 +414,10 @@ while completed_steps < args.max_train_steps:
## Utility for Checkpoint reshaping and interoperability
1. The scripts for these are present in 🤗 Transformers library under respective models.
1. The scripts for these are present in the Transformers library under the respective models.
Currently, it is available for GPT model [checkpoint_reshaping_and_interoperability.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/megatron_gpt2/checkpoint_reshaping_and_interoperability.py)
2. Below is an example of conversion of checkpoint from Megatron-LM to universal 🤗 Transformers sharded checkpoint.
2. Below is an example of converting a Megatron-LM checkpoint to a universal Transformers sharded checkpoint.
```bash
python checkpoint_reshaping_and_interoperability.py \
--convert_checkpoint_from_megatron_to_transformers \
@ -569,18 +569,18 @@ setting is synonymous with gradient accumulation.
7. When using Megatron-LM, use `accelerator.save_state` and `accelerator.load_state` for saving and loading checkpoints.
8. Below are the mapping from Megatron-LM model architectures to the the equivalent 🤗 transformers model architectures.
Only these 🤗 transformers model architectures are supported.
8. Below is the mapping from Megatron-LM model architectures to the equivalent transformers model architectures.
Only these transformers model architectures are supported.
a. Megatron-LM [BertModel](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/bert_model.py) :
🤗 transformers models with `megatron-bert` in config's model type, e.g.,
transformers models with `megatron-bert` in the config's model type, e.g.,
[MegatronBERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)
b. Megatron-LM [GPTModel](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py) :
🤗 transformers models with `gpt2` in config's model type, e.g.,
transformers models with `gpt2` in the config's model type, e.g.,
[OpenAI GPT2](https://huggingface.co/docs/transformers/model_doc/gpt2)
c. Megatron-LM [T5Model](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/t5_model.py) :
🤗 transformers models with `t5` in config's model type, e.g.,
transformers models with `t5` in the config's model type, e.g.,
[T5](https://huggingface.co/docs/transformers/model_doc/t5) and
[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)

View File

@ -13,12 +13,12 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Understanding how big of a model can fit on your machine
# Model memory estimator
One very difficult aspect of exploring potential models to use on your machine is knowing just how big a model will *fit* into memory with your current graphics card (such as when loading the model onto CUDA).
To help alleviate this, 🤗 Accelerate has a CLI interface through `accelerate estimate-memory`. This tutorial will
help walk you through using it, what to expect, and at the end link to the interactive demo hosted on the 🤗 Hub which will
To help alleviate this, Accelerate has a CLI interface through `accelerate estimate-memory`. This tutorial will
help walk you through using it, what to expect, and at the end link to the interactive demo hosted on the Hub, which will
even let you post those results directly on the model repo!
Currently we support searching for models that can be used in `timm` and `transformers`.

View File

@ -50,5 +50,5 @@ Please refer to https://github.com/pytorch/pytorch/issues/82707 for more details
2. Distributed setups `gloo` and `nccl` are not working with the `mps` device.
This means that currently only a single GPU of the `mps` device type can be used.
Finally, please, remember that, 🤗 `Accelerate` only integrates MPS backend, therefore if you
Finally, please remember that `Accelerate` only integrates the MPS backend, therefore if you
have any problems or questions with regards to MPS backend usage, please, file an issue with [PyTorch GitHub](https://github.com/pytorch/pytorch/issues).

View File

@ -18,7 +18,7 @@ rendered properly in your Markdown viewer.
Profiler is a tool that allows the collection of performance metrics during training and inference. The Profiler's context manager API can be used to better understand what model operators are the most expensive, examine their input shapes and stack traces, study device kernel activity, and visualize the execution trace. It provides insights into the performance of your model, allowing you to optimize and improve it.
This guide explains how to use PyTorch Profiler to measure the time and memory consumption of the models operators and how to integrate this with 🤗 Accelerate. We will cover various use cases and provide examples for each.
This guide explains how to use PyTorch Profiler to measure the time and memory consumption of the model's operators and how to integrate this with Accelerate. We will cover various use cases and provide examples for each.
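As a quick, hedged sketch of the integration (via the `ProfileKwargs` handler; the toy model and inputs are placeholders):
```python
import torch
from accelerate import Accelerator
from accelerate.utils import ProfileKwargs

# Configure the profiler through a kwargs handler
profile_kwargs = ProfileKwargs(activities=["cpu"], record_shapes=True)
accelerator = Accelerator(kwargs_handlers=[profile_kwargs])

model = torch.nn.Linear(8, 2)
inputs = torch.randn(4, 8)
with accelerator.profile() as prof:
    with torch.no_grad():
        model(inputs)
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```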
## Using profiler to analyze execution time
@ -329,6 +329,6 @@ Self CUDA time total: 4.165ms
## Conclusion and Further Information
PyTorch Profiler is a powerful tool for analyzing the performance of your models. By integrating it with 🤗 Accelerate, you can easily profile your models and gain insights into their performance, helping you to optimize and improve them.
PyTorch Profiler is a powerful tool for analyzing the performance of your models. By integrating it with Accelerate, you can easily profile your models and gain insights into their performance, helping you to optimize and improve them.
For more detailed information, refer to the [PyTorch Profiler documentation](https://pytorch.org/docs/stable/profiler.html).

View File

@ -13,13 +13,13 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Quantization
# Model quantization
## `bitsandbytes` Integration
🤗 Accelerate brings `bitsandbytes` quantization to your model. You can now load any pytorch model in 8-bit or 4-bit with a few lines of code.
Accelerate brings `bitsandbytes` quantization to your model. You can now load any PyTorch model in 8-bit or 4-bit with a few lines of code.
If you want to use 🤗 Transformers models with `bitsandbytes`, you should follow this [documentation](https://huggingface.co/docs/transformers/main_classes/quantization).
If you want to use Transformers models with `bitsandbytes`, you should follow this [documentation](https://huggingface.co/docs/transformers/main_classes/quantization).
To learn more about how the `bitsandbytes` quantization works, check out the blog posts on [8-bit quantization](https://huggingface.co/blog/hf-bitsandbytes-integration) and [4-bit quantization](https://huggingface.co/blog/4bit-transformers-bitsandbytes).
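A minimal sketch of the workflow, assuming a model skeleton built with empty weights and a placeholder checkpoint location (`MyModel` and the path are hypothetical):
```python
from accelerate import init_empty_weights
from accelerate.utils import BnbQuantizationConfig, load_and_quantize_model

# Build the model without allocating real weights, then load the
# checkpoint quantized in 8-bit
with init_empty_weights():
    model = MyModel()  # hypothetical model class

bnb_config = BnbQuantizationConfig(load_in_8bit=True)
model = load_and_quantize_model(
    model,
    bnb_quantization_config=bnb_config,
    weights_location="path/to/checkpoint",  # placeholder path
    device_map="auto",
)
```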
@ -127,7 +127,7 @@ device_map = {
It is not possible to perform pure 8-bit or 4-bit training on these models. However, you can train these models by leveraging parameter-efficient fine-tuning methods (PEFT) and train, for example, adapters on top of them. Please have a look at the [peft](https://github.com/huggingface/peft) library for more details.
Currently, you can't add adapters on top of any quantized model. However, with the official support of adapters with 🤗 Transformers models, you can fine-tune quantized models. If you want to finetune a 🤗 Transformers model , follow this [documentation](https://huggingface.co/docs/transformers/main_classes/quantization) instead. Check out this [demo](https://colab.research.google.com/drive/1VoYNfYDKcKRQRor98Zbf2-9VQTtGJ24k?usp=sharing) on how to fine-tune a 4-bit 🤗 Transformers model.
Currently, you can't add adapters on top of any quantized model. However, with the official support of adapters with Transformers models, you can fine-tune quantized models. If you want to fine-tune a Transformers model, follow this [documentation](https://huggingface.co/docs/transformers/main_classes/quantization) instead. Check out this [demo](https://colab.research.google.com/drive/1VoYNfYDKcKRQRor98Zbf2-9VQTtGJ24k?usp=sharing) on how to fine-tune a 4-bit Transformers model.
Note that you don't need to pass `device_map` when loading the model for training. It will automatically load your model on your GPU. Please note that `device_map=auto` should be used for inference only.

View File

@ -23,17 +23,16 @@ make it easier than ever to train Hugging Face Transformer models in [Amazon Sag
### Setup & Installation
Before you can run your 🤗 Accelerate scripts on Amazon SageMaker you need to sign up for an AWS account. If you do not
Before you can run your Accelerate scripts on Amazon SageMaker, you need to sign up for an AWS account. If you do not
have an AWS account yet, learn more [here](https://docs.aws.amazon.com/sagemaker/latest/dg/gs-set-up.html).
After you have your AWS Account you need to install the `sagemaker` sdk for 🤗 Accelerate with:
After you have your AWS account, you need to install the `sagemaker` SDK for Accelerate with:
```bash
pip install "accelerate[sagemaker]" --upgrade
```
🤗 Accelerate currently uses the 🤗 DLCs, with `transformers`, `datasets` and `tokenizers` pre-installed. 🤗
Accelerate is not in the DLC yet (will soon be added!) so to use it within Amazon SageMaker you need to create a
Accelerate currently uses the DLCs, with `transformers`, `datasets` and `tokenizers` pre-installed. Accelerate is not in the DLC yet (will soon be added!), so to use it within Amazon SageMaker you need to create a
`requirements.txt` in the same directory where your training script is located and add it as a dependency:
```
@ -43,25 +42,25 @@ accelerate
You should also add any other dependencies you have to this `requirements.txt`.
### Configure 🤗 Accelerate
### Configure Accelerate
You can configure the launch configuration for Amazon SageMaker the same as you do for non-SageMaker training jobs with
the 🤗 Accelerate CLI:
the Accelerate CLI:
```bash
accelerate config
# In which compute environment are you running? ([0] This machine, [1] AWS (Amazon SageMaker)): 1
```
🤗 Accelerate will go through a questionnaire about your Amazon SageMaker setup and create a config file you can edit.
Accelerate will go through a questionnaire about your Amazon SageMaker setup and create a config file you can edit.
<Tip>
🤗 Accelerate is not saving any of your credentials.
Accelerate does not save any of your credentials.
</Tip>
### Prepare a 🤗 Accelerate fine-tuning script
### Prepare an Accelerate fine-tuning script
The training script is very similar to a training script you might run outside of SageMaker, but to save your model
after training you need to specify either `/opt/ml/model` or use `os.environ["SM_MODEL_DIR"]` as your save
@ -82,7 +81,7 @@ directory. After training, artifacts in this directory are uploaded to S3:
### Launch Training
You can launch your training with 🤗 Accelerate CLI with:
You can launch your training with the Accelerate CLI with:
```
accelerate launch path_to_script.py --args_to_the_script
@ -159,7 +158,7 @@ use_cpu: false
### Python packages and dependencies
🤗 Accelerate currently uses the 🤗 DLCs, with `transformers`, `datasets` and `tokenizers` pre-installed. If you
Accelerate currently uses the DLCs, with `transformers`, `datasets` and `tokenizers` pre-installed. If you
want to use different/other Python packages you can do this by adding them to the `requirements.txt`. These packages
will be installed before your training script is started.
@ -198,7 +197,7 @@ additional_args:
max_wait: 86400
```
*Note: Spot Instances are subject to be terminated and training to be continued from a checkpoint. This is not handled in 🤗 Accelerate out of the box. Contact us if you would like this feature.*
*Note: Spot Instances are subject to termination, and training would need to be continued from a checkpoint. This is not handled in Accelerate out of the box. Contact us if you would like this feature.*
### Remote scripts: Use scripts located on GitHub

View File

@ -13,10 +13,10 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.
-->
# Tracking
# Experiment trackers
There are a large number of experiment tracking APIs available; however, getting them all to work in a multi-processing environment can oftentimes be complex.
🤗 Accelerate provides a general tracking API that can be used to log useful items during your script through [`Accelerator.log`]
Accelerate provides a general tracking API that can be used to log useful items during your script through [`Accelerator.log`].
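A minimal sketch (the project name and logged values are placeholders):
```python
from accelerate import Accelerator

accelerator = Accelerator(log_with="all")
accelerator.init_trackers("my_project")  # hypothetical project name

for step in range(100):
    loss = 1.0 / (step + 1)  # placeholder metric
    accelerator.log({"train_loss": loss}, step=step)

# Close all open trackers when the run is finished
accelerator.end_training()
```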
## Integrated Trackers

View File

@ -15,7 +15,7 @@ rendered properly in your Markdown viewer.
# Example Zoo
Below contains a non-exhaustive list of tutorials and scripts showcasing 🤗 Accelerate
Below is a non-exhaustive list of tutorials and scripts showcasing Accelerate.
## Official Accelerate Examples:
@ -68,7 +68,7 @@ These examples showcase every feature in Accelerate at once that was shown in "F
## Integration Examples
These are tutorials from libraries that integrate with 🤗 Accelerate:
These are tutorials from libraries that integrate with Accelerate:
> Can't find your integration here? Make a PR to include it!
@ -85,7 +85,7 @@ These are tutorials from libraries that integrate with 🤗 Accelerate:
- [Fine-tuning DALLE2](https://github.com/lucidrains/DALLE2-pytorch#usage)
### 🤗 diffusers
### Diffusers
- [Performing textual inversion with diffusers](https://github.com/huggingface/diffusers/tree/main/examples/textual_inversion)
- [Training DreamBooth with diffusers](https://github.com/huggingface/diffusers/tree/main/examples/dreambooth)
@ -134,7 +134,7 @@ These are tutorials from libraries that integrate with 🤗 Accelerate:
## In Science
Below contains a non-exhaustive list of papers utilizing 🤗 Accelerate.
Below is a non-exhaustive list of papers utilizing Accelerate.
> Can't find your paper here? Make a PR to include it!

View File

@ -208,23 +208,13 @@ To run it in each of these various modes, use the following commands:
- [huggan project](https://github.com/huggingface/community-events/tree/main/huggan)
### Using AWS SageMaker integration
- [Examples showcasing AWS SageMaker integration of 🤗 Accelerate.](https://github.com/pacman100/accelerate-aws-sagemaker)
## Simple Multi-GPU Hardware Launcher
[multigpu_remote_launcher.py](./multigpu_remote_launcher.py) is a minimal script that demonstrates launching accelerate
on multiple remote GPUs, and with automatic hardware environment and dependency setup for reproducibility. You can
easily customize the training function used, training arguments, hyperparameters, and type of compute hardware, and then
run the script to automatically launch multi GPU training on remote hardware.
This script uses [Runhouse](https://github.com/run-house/runhouse) to launch on self-hosted hardware (e.g. in your own
cloud account or on-premise cluster) but there are other options for running remotely as well. Runhouse can be installed
with `pip install runhouse`, and you can refer to
[hardware setup](https://runhouse-docs.readthedocs-hosted.com/en/latest/api/python/cluster.html#hardware-setup)
for hardware setup instructions, or this
[Colab tutorial](https://colab.research.google.com/drive/1qVwYyLTCPYPSdz9ZX7BZl9Qm0A3j7RJe) for a more in-depth walkthrough.
## Configuration zoo
In [/config_yaml_templates](./config_yaml_templates/) we have a variety of *minimal* `config.yaml` templates and examples to help you learn
how to create your own configuration files depending on the scenario.
## SLURM Scripts
In [/slurm/submit_multigpu.sh](./slurm/submit_multigpu.sh) and [/slurm/submit_multinode.sh](./slurm/submit_multinode.sh) we present two scripts for running the examples on a machine with [SLURM](https://slurm.schedmd.com/documentation.html) workload manager.
@ -251,6 +241,20 @@ export PYTHONPATH=/home/nct01/nct01328/transformers-in-supercomputers:$PYTHONPAT
export GPUS_PER_NODE=4
```
## Simple Multi-GPU Hardware Launcher (using an external platform)
[multigpu_remote_launcher.py](./multigpu_remote_launcher.py) is a minimal script that demonstrates launching accelerate
on multiple remote GPUs, with automatic hardware environment and dependency setup for reproducibility. You can
easily customize the training function used, training arguments, hyperparameters, and type of compute hardware, and then
run the script to automatically launch multi GPU training on remote hardware.
This script uses [Runhouse](https://github.com/run-house/runhouse) to launch on self-hosted hardware (e.g. in your own
cloud account or on-premise cluster) but there are other options for running remotely as well. Runhouse can be installed
with `pip install runhouse`, and you can refer to
[hardware setup](https://runhouse-docs.readthedocs-hosted.com/en/latest/api/python/cluster.html#hardware-setup)
for hardware setup instructions, or this
[Colab tutorial](https://colab.research.google.com/drive/1qVwYyLTCPYPSdz9ZX7BZl9Qm0A3j7RJe) for a more in-depth walkthrough.
## Finer Examples
While the first two scripts are extremely barebones when it comes to what you can do with accelerate, more advanced features are documented in two other locations.

View File

@ -217,6 +217,7 @@ def training_function(config, args):
# And call it at the end with no arguments
# Note: You could also refactor this outside of your training loop function
inner_training_loop()
accelerator.end_training()
def main():

View File

@ -19,9 +19,10 @@ import torch
from datasets import load_dataset
from torch.optim import AdamW
from torch.utils.data import DataLoader
from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_linear_schedule_with_warmup, set_seed
from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_linear_schedule_with_warmup
from accelerate import Accelerator, DistributedType
from accelerate import Accelerator, DataLoaderConfiguration, DistributedType
from accelerate.utils import set_seed
########################################################################
@ -125,7 +126,8 @@ def training_function(config, args):
if os.environ.get("TESTING_MOCKED_DATALOADERS", None) == "1":
config["num_epochs"] = 2
# Initialize accelerator
accelerator = Accelerator(cpu=args.cpu, mixed_precision=args.mixed_precision)
dataloader_config = DataLoaderConfiguration(use_stateful_dataloader=args.use_stateful_dataloader)
accelerator = Accelerator(cpu=args.cpu, mixed_precision=args.mixed_precision, dataloader_config=dataloader_config)
# Sample hyper-parameters for learning rate, batch size, seed and a few other HPs
lr = config["lr"]
num_epochs = int(config["num_epochs"])
@ -217,8 +219,11 @@ def training_function(config, args):
model.train()
# New Code #
if args.resume_from_checkpoint and epoch == starting_epoch and resume_step is not None:
# We need to skip steps until we reach the resumed step
active_dataloader = accelerator.skip_first_batches(train_dataloader, resume_step)
# We need to skip steps until we reach the resumed step only if we are not using a stateful dataloader
if not args.use_stateful_dataloader:
active_dataloader = accelerator.skip_first_batches(train_dataloader, resume_step)
else:
active_dataloader = train_dataloader
overall_step += resume_step
else:
# After the first iteration though, we need to go back to the original dataloader
@ -248,7 +253,6 @@ def training_function(config, args):
if args.output_dir is not None:
output_dir = os.path.join(args.output_dir, output_dir)
accelerator.save_state(output_dir)
model.eval()
for step, batch in enumerate(eval_dataloader):
# We could avoid this line since we set the accelerator with `device_placement=True` (the default).
@ -261,7 +265,6 @@ def training_function(config, args):
predictions=predictions,
references=references,
)
eval_metric = metric.compute()
# Use accelerator.print to print only on the main process.
accelerator.print(f"epoch {epoch}:", eval_metric)
@ -276,6 +279,7 @@ def training_function(config, args):
if args.output_dir is not None:
output_dir = os.path.join(args.output_dir, output_dir)
accelerator.save_state(output_dir)
accelerator.end_training()
def main():
@ -308,6 +312,11 @@ def main():
default=None,
help="If the training should continue from a checkpoint folder.",
)
parser.add_argument(
"--use_stateful_dataloader",
action="store_true",
help="If the dataloader should be a resumable stateful dataloader.",
)
args = parser.parse_args()
config = {"lr": 2e-5, "num_epochs": 3, "seed": 42, "batch_size": 16}
training_function(config, args)

View File

@ -255,6 +255,7 @@ def training_function(config, args):
preds = torch.stack(test_predictions, dim=0).sum(dim=0).div(int(args.num_folds)).argmax(dim=-1)
test_metric = metric.compute(predictions=preds, references=test_references)
accelerator.print("Average test metrics from all folds:", test_metric)
accelerator.end_training()
def main():

View File

@ -192,6 +192,7 @@ def training_function(config, args):
eval_metric = metric.compute()
# Use accelerator.print to print only on the main process.
accelerator.print(f"epoch {epoch}:", eval_metric)
accelerator.end_training()
def main():

View File

@ -716,6 +716,7 @@ def main():
with open(os.path.join(args.output_dir, "all_results.json"), "w") as f:
json.dump({"perplexity": perplexity, "eval_loss": eval_loss.item()}, f)
accelerator.end_training()
if __name__ == "__main__":

View File

@ -222,6 +222,7 @@ def training_function(config, args):
# Use accelerator.print to print only on the main process.
accelerator.print(f"epoch {epoch}:", eval_metric)
accelerator.end_training()
def main():

View File

@ -399,8 +399,7 @@ def training_function(config, args):
step=epoch,
)
if args.with_tracking:
accelerator.end_training()
accelerator.end_training()
def main():

View File

@ -197,6 +197,7 @@ def training_function(config, args):
eval_metric = metric.compute()
# Use accelerator.print to print only on the main process.
accelerator.print(f"epoch {epoch}:", eval_metric)
accelerator.end_training()
def main():

View File

@ -202,6 +202,7 @@ def training_function(config, args):
eval_metric = metric.compute()
# Use accelerator.print to print only on the main process.
accelerator.print(f"epoch {epoch}:", eval_metric)
accelerator.end_training()
def main():

View File

@ -703,6 +703,7 @@ def main():
with open(os.path.join(args.output_dir, "all_results.json"), "w") as f:
json.dump({"perplexity": perplexity}, f)
accelerator.end_training()
if __name__ == "__main__":

View File

@ -210,6 +210,7 @@ def training_function(config, args):
# And call it at the end with no arguments
# Note: You could also refactor this outside of your training loop function
inner_training_loop()
accelerator.end_training()
def main():

View File

@ -214,6 +214,7 @@ def training_function(config, args):
eval_metric = metric.compute()
# Use accelerator.print to print only on the main process.
accelerator.print(f"epoch {epoch}:", eval_metric)
accelerator.end_training()
def main():

View File

@ -203,6 +203,7 @@ def training_function(config, args):
eval_metric = metric.compute()
# Use accelerator.print to print only on the main process.
accelerator.print(f"epoch {epoch}:", eval_metric)
accelerator.end_training()
def main():

View File

@ -202,6 +202,7 @@ def training_function(config, args):
eval_metric = metric.compute()
# Use accelerator.print to print only on the main process.
accelerator.print(f"epoch {epoch}:", eval_metric)
accelerator.end_training()
def main():

View File

@ -236,11 +236,7 @@ def training_function(config, args):
step=epoch,
)
# New Code #
# When a run is finished, you should call `accelerator.end_training()`
# to close all of the open trackers
if args.with_tracking:
accelerator.end_training()
accelerator.end_training()
def main():

View File

@ -23,7 +23,7 @@ from torch.optim.lr_scheduler import OneCycleLR
from torch.utils.data import DataLoader, Dataset
from torchvision.transforms import Compose, RandomResizedCrop, Resize, ToTensor
from accelerate import Accelerator
from accelerate import Accelerator, DataLoaderConfiguration
########################################################################
@ -72,12 +72,19 @@ class PetsDataset(Dataset):
def training_function(config, args):
# Initialize accelerator
dataloader_config = DataLoaderConfiguration(use_stateful_dataloader=args.use_stateful_dataloader)
if args.with_tracking:
accelerator = Accelerator(
cpu=args.cpu, mixed_precision=args.mixed_precision, log_with="all", project_dir=args.project_dir
cpu=args.cpu,
mixed_precision=args.mixed_precision,
log_with="all",
project_dir=args.project_dir,
dataloader_config=dataloader_config,
)
else:
accelerator = Accelerator(cpu=args.cpu, mixed_precision=args.mixed_precision)
accelerator = Accelerator(
cpu=args.cpu, mixed_precision=args.mixed_precision, dataloader_config=dataloader_config
)
# Sample hyper-parameters for learning rate, batch size, seed and a few other HPs
lr = config["lr"]
@ -262,8 +269,7 @@ def training_function(config, args):
output_dir = os.path.join(args.output_dir, output_dir)
accelerator.save_state(output_dir)
if args.with_tracking:
accelerator.end_training()
accelerator.end_training()
def main():
@ -298,6 +304,11 @@ def main():
default=None,
help="If the training should continue from a checkpoint folder.",
)
parser.add_argument(
"--use_stateful_dataloader",
action="store_true",
help="If the dataloader should be a resumable stateful dataloader.",
)
parser.add_argument(
"--with_tracking",
action="store_true",

View File

@ -21,7 +21,7 @@ from torch.optim import AdamW
from torch.utils.data import DataLoader
from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_linear_schedule_with_warmup, set_seed
from accelerate import Accelerator, DistributedType
from accelerate import Accelerator, DataLoaderConfiguration, DistributedType
########################################################################
@ -49,12 +49,19 @@ EVAL_BATCH_SIZE = 32
def training_function(config, args):
# Initialize accelerator
dataloader_config = DataLoaderConfiguration(use_stateful_dataloader=args.use_stateful_dataloader)
if args.with_tracking:
accelerator = Accelerator(
cpu=args.cpu, mixed_precision=args.mixed_precision, log_with="all", project_dir=args.project_dir
cpu=args.cpu,
mixed_precision=args.mixed_precision,
dataloader_config=dataloader_config,
log_with="all",
project_dir=args.project_dir,
)
else:
accelerator = Accelerator(cpu=args.cpu, mixed_precision=args.mixed_precision)
accelerator = Accelerator(
cpu=args.cpu, mixed_precision=args.mixed_precision, dataloader_config=dataloader_config
)
if hasattr(args.checkpointing_steps, "isdigit"):
if args.checkpointing_steps == "epoch":
@ -194,7 +201,10 @@ def training_function(config, args):
total_loss = 0
if args.resume_from_checkpoint and epoch == starting_epoch and resume_step is not None:
# We need to skip steps until we reach the resumed step
active_dataloader = accelerator.skip_first_batches(train_dataloader, resume_step)
if not args.use_stateful_dataloader:
active_dataloader = accelerator.skip_first_batches(train_dataloader, resume_step)
else:
active_dataloader = train_dataloader
overall_step += resume_step
else:
# After the first iteration though, we need to go back to the original dataloader
@ -256,8 +266,7 @@ def training_function(config, args):
output_dir = os.path.join(args.output_dir, output_dir)
accelerator.save_state(output_dir)
if args.with_tracking:
accelerator.end_training()
accelerator.end_training()
def main():
@ -284,6 +293,11 @@ def main():
default=None,
help="If the training should continue from a checkpoint folder.",
)
parser.add_argument(
"--use_stateful_dataloader",
action="store_true",
help="If the dataloader should be a resumable stateful dataloader.",
)
parser.add_argument(
"--with_tracking",
action="store_true",

View File

@ -0,0 +1,10 @@
# Config Zoo
This folder contains a variety of minimal configurations for `Accelerate` achieving certain goals. You can use these
config YAMLs directly, or build off of them for your own YAMLs.
These are highly annotated versions, aiming to teach you what each section does.
Each config can be run via `accelerate launch --config_file {file} run_me.py`
`run_me.py` will then print out how the current environment is set up (the contents of the `AcceleratorState`)

View File

@ -0,0 +1,15 @@
# Similar to FSDP, we set the distributed type as DEEPSPEED
distributed_type: DEEPSPEED
# With DeepSpeed, we utilize a deepspeed config file for the entire configuration
deepspeed_config:
# Can also be any of the config JSONs in accelerate/examples/deepspeed_config_templates
deepspeed_config_file: ../deepspeed_config_templates/zero_stage1_config.json
# If using ZeRO-3 and wanting to load big models in, this should be set to `true` so
# `transformers` uses the right `init` function
zero3_init_flag: false # true
# Finally we need to specify the number of GPUs to use
num_processes: 2
# Optionally we can set the mixed precision now instead of in the deepspeed config file,
# however this requires the `fp16` and `bf16` options to be set to `auto` in the deepspeed config file
# mixed_precision: "bf16"

View File

@ -0,0 +1,18 @@
# This config template simply sets up the TransformerEngine config (and a config for a single GPU),
# this can interop with the other configs in this folder
distributed_type: "NO"
mixed_precision: "fp8"
# Then we specify the fp8 configuration:
fp8_config:
backend: TE # Can be TE | MS-AMP
# The following are TE specific arguments.
# See https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/api/common.html#common-api for more details
amax_history_length: 1024
fp8_format: E4M3
interval: 1
margin: 0
override_linear_precision: false
# Generally this should always be set to `false` to have the most realistic fp8 eval performance
use_autocast_during_eval: false
# If using MS-AMP, we ignore all of the prior and set an opt_level
#opt_level: O1

View File

@ -0,0 +1,18 @@
# Since we are doing FSDP (even though it's multi-GPU), we need to specify the distributed type as FSDP
distributed_type: FSDP
# Can be one of "no", "fp16", or "bf16" (see `transformer_engine.yaml` for `fp8`, but it works for FSDP as well)
mixed_precision: 'bf16'
# Specify the number of GPUs to use
num_processes: 2
# Then we can specify the FSDP config
fsdp_config:
fsdp_activation_checkpointing: false
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_backward_prefetch: BACKWARD_PRE
fsdp_cpu_ram_efficient_loading: true
fsdp_forward_prefetch: false
fsdp_offload_params: false
fsdp_sharding_strategy: FULL_SHARD
fsdp_state_dict_type: SHARDED_STATE_DICT
fsdp_sync_module_states: true
fsdp_use_orig_params: true

View File

@ -0,0 +1,6 @@
# Specify distributed_type as `MULTI_GPU` for DDP
distributed_type: "MULTI_GPU"
# Can be one of "no", "fp16", or "bf16" (see `transformer_engine.yaml` for `fp8`)
mixed_precision: "bf16"
# Specify the number of GPUs to use
num_processes: 2

View File

@ -0,0 +1,16 @@
# This config template is for a multi-node setup. This assumes DDP, but can be interop'd with the other configs in this folder
# Generally it's recommended to look at the SLURM config template for a more robust multi-node setup
distributed_type: MULTI_GPU
# We need to specify the current machine's rank
machine_rank: 0
# We then need to specify the IP address and port of the main process
main_process_ip: '1234'
main_process_port: 9999
# We need to specify the number of machines
num_machines: 2
# We need to specify the *total* number of processes
num_processes: 8
# And then we need to specify how rdzv comms will be handled
rdzv_backend: static # or c10d
# Whether the compute nodes are on the same network (for cloud setups this will more than likely be false)
same_network: false

View File

@ -0,0 +1,27 @@
# Copyright 2024 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
A base script which outputs the accelerate config for the given environment
"""
from accelerate import Accelerator
accelerator = Accelerator()
accelerator.print(f"Accelerator state from the current environment:\n{accelerator.state}")
if accelerator.fp8_recipe_handler is not None:
accelerator.print(f"FP8 config:\n{accelerator.fp8_recipe_handler}")
accelerator.end_training()

View File

@ -0,0 +1,4 @@
# Since this is single GPU, we don't need distributed training
distributed_type: "NO"
# Can be one of "no", "fp16", or "bf16" (see `transformer_engine.yaml` for `fp8`)
mixed_precision: "bf16"

View File

@ -180,6 +180,7 @@ def training_function(config, args):
eval_metric = accurate.item() / num_elems
# Use accelerator.print to print only on the main process.
accelerator.print(f"epoch {epoch}: {100 * eval_metric:.2f}")
accelerator.end_training()
def main():

View File

@ -22,4 +22,4 @@ Or:
```bash
torchrun --nproc-per-node {NUM_GPUS} phi2.py
```
```

View File

@ -0,0 +1,117 @@
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Originally by jiwooya1000, put together by sayakpaul.
Documentation: https://huggingface.co/docs/diffusers/main/en/training/distributed_inference
Run:
accelerate launch distributed_image_generation.py --batch_size 8
# Enable memory optimizations for large models like SD3
accelerate launch distributed_image_generation.py --batch_size 8 --low_mem
"""
import os
import time
import fire
import torch
from datasets import load_dataset
from diffusers import DiffusionPipeline
from tqdm import tqdm
from accelerate import PartialState
from accelerate.utils import gather_object
START_TIME = time.strftime("%Y%m%d_%H%M%S")
DTYPE_MAP = {"fp32": torch.float32, "fp16": torch.float16, "bf16": torch.bfloat16}
def get_batches(items, batch_size):
num_batches = (len(items) + batch_size - 1) // batch_size
batches = []
for i in range(num_batches):
start_index = i * batch_size
end_index = min((i + 1) * batch_size, len(items))
batch = items[start_index:end_index]
batches.append(batch)
return batches
def main(
ckpt_id: str = "PixArt-alpha/PixArt-Sigma-XL-2-1024-MS",
save_dir: str = "./evaluation/examples",
seed: int = 1,
batch_size: int = 4,
num_inference_steps: int = 20,
guidance_scale: float = 4.5,
dtype: str = "fp16",
low_mem: bool = False,
):
pipeline = DiffusionPipeline.from_pretrained(ckpt_id, torch_dtype=DTYPE_MAP[dtype])
save_dir = save_dir + f"_{START_TIME}"
parti_prompts = load_dataset("nateraw/parti-prompts", split="train")
data_loader = get_batches(items=parti_prompts["Prompt"], batch_size=batch_size)
distributed_state = PartialState()
if low_mem:
pipeline.enable_model_cpu_offload(gpu_id=distributed_state.device.index)
else:
pipeline = pipeline.to(distributed_state.device)
if distributed_state.is_main_process:
if not os.path.exists(save_dir):
os.makedirs(save_dir)
print(f"Directory '{save_dir}' created successfully.")
else:
print(f"Directory '{save_dir}' already exists.")
count = 0
for _, prompts_raw in tqdm(enumerate(data_loader), total=len(data_loader)):
input_prompts = []
with distributed_state.split_between_processes(prompts_raw) as prompts:
generator = torch.manual_seed(seed)
images = pipeline(
prompts, num_inference_steps=num_inference_steps, guidance_scale=guidance_scale, generator=generator
).images
input_prompts.extend(prompts)
distributed_state.wait_for_everyone()
images = gather_object(images)
input_prompts = gather_object(input_prompts)
if distributed_state.is_main_process:
for image, prompt in zip(images, input_prompts):
count += 1
temp_dir = os.path.join(save_dir, f"example_{count}")
os.makedirs(temp_dir)
prompt = "_".join(prompt.split())
image.save(f"image_{prompt}.png")
if distributed_state.is_main_process:
print(f">>> Image Generation Finished. Saved in {save_dir}")
if __name__ == "__main__":
fire.Fire(main)

View File

@ -32,7 +32,7 @@ model.eval()
input = torch.randint(
low=0,
high=model.config.vocab_size,
size=(2, 512), # bs x seq_len
size=(1, 512), # bs x seq_len
device="cpu",
dtype=torch.int64,
requires_grad=False,
@ -49,6 +49,16 @@ model = prepare_pippy(model, split_points="auto", example_args=(input,))
# available on all GPUs
# model = prepare_pippy(model, split_points="auto", example_args=(input,), gather_output=True)
# Create new inputs of the expected size (n_processes)
input = torch.randint(
low=0,
high=model.config.vocab_size,
size=(2, 512), # bs x seq_len
device="cpu",
dtype=torch.int64,
requires_grad=False,
)
# Move the inputs to the first device
input = input.to("cuda:0")
@ -76,3 +86,4 @@ if PartialState().is_last_process:
output = torch.stack(tuple(output[0]))
print(f"Time of first pass: {first_batch}")
print(f"Average time per batch: {(end_time - start_time) / 5}")
PartialState().destroy_process_group()

View File

@ -32,7 +32,7 @@ model.eval()
input = torch.randint(
low=0,
high=model.config.vocab_size,
size=(2, 1024), # bs x seq_len
size=(1, 1024), # bs x seq_len
device="cpu",
dtype=torch.int64,
requires_grad=False,
@ -48,6 +48,16 @@ model = prepare_pippy(model, split_points="auto", example_args=(input,))
# available on all GPUs
# model = prepare_pippy(model, split_points="auto", example_args=(input,), gather_output=True)
# Create new inputs of the expected size (n_processes)
input = torch.randint(
low=0,
high=model.config.vocab_size,
size=(2, 1024), # bs x seq_len
device="cpu",
dtype=torch.int64,
requires_grad=False,
)
# Move the inputs to the first device
input = input.to("cuda:0")
@ -75,3 +85,4 @@ if PartialState().is_last_process:
output = torch.stack(tuple(output[0]))
print(f"Time of first pass: {first_batch}")
print(f"Average time per batch: {(end_time - start_time) / 5}")
PartialState().destroy_process_group()

View File

@ -27,7 +27,7 @@ model.eval()
# Input configs
# Create example inputs for the model
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
prompts = ("I would like to", "I really like to", "The weather is pretty") # bs = 3
prompts = ("I would like to", "I really like to") # bs = 2, sending 2 per process
tokenizer.pad_token = tokenizer.eos_token
inputs = tokenizer(prompts, return_tensors="pt", padding=True)
@ -43,6 +43,8 @@ model = prepare_pippy(model, split_points="auto", example_kwargs=inputs)
# currently we don't support `model.generate`
# output = model.generate(**inputs, max_new_tokens=1)
prompts = ("I would like to", "I really like to", "The weather is pretty") # bs = 3
inputs = tokenizer(prompts, return_tensors="pt", padding=True)
inputs = inputs.to(0)
with torch.no_grad():
output = model(**inputs)
@ -52,3 +54,4 @@ if PartialState().is_last_process:
next_token_logits = output[0][:, -1, :]
next_token = torch.argmax(next_token_logits, dim=-1)
print(tokenizer.batch_decode(next_token))
PartialState().destroy_process_group()

Some files were not shown because too many files have changed in this diff.