Compare commits


10 Commits

SHA1 Message Date
47b9ac1071 Patch: v0.20.3 2023-06-08 12:10:54 -04:00
d37b93e679 Reset dataloader end_of_datalaoder at each iter (#1562) 2023-06-08 12:09:50 -04:00
966b6b057d [doc build] Use secrets (#1551) 2023-06-08 09:18:30 -04:00
3915c3d754 Patch: v0.20.2 2023-06-08 09:02:50 -04:00
a1ff1ab076 [core] Fix possibility to pass NoneType objects in prepare (#1561)
* add possibility to pass nonetype objects

* adds nice test
2023-06-08 09:01:55 -04:00
99f63d56bf fix the typo when setting the "_accelerator_prepared" attribute (#1560)
* fix the typo when setting the "_accelerator_prepared" attribute

* use the name "_is_accelerate_prepared" instead
2023-06-08 09:01:45 -04:00
15384934d7 Patch: v0.20.1 2023-06-07 14:58:56 -04:00
5a41d49ad1 Fix load_state_dict when there is one device and disk (#1557) 2023-06-07 14:58:28 -04:00
d108a51aaf Avoid double wrapping of all accelerate.prepare objects (#1555)
* Add step reset to free memory

* Check if not Accelerated Optimizer

* Continue

* Another try

* Check the rest

* Try with just check on init

* Change logic based on review

* Update

* Oops very big logic issue!
2023-06-07 14:58:19 -04:00
9765b84f9c Release: v0.20.0 2023-06-07 10:05:15 -04:00
131 changed files with 1344 additions and 7424 deletions

View File

@ -1,12 +1,6 @@
name: "\U0001F41B Bug Report"
description: Submit a bug report to help us improve Accelerate
body:
- type: markdown
attributes:
value: |
Thanks for taking the time to submit a bug report! 🐛
If this is not a bug related to the Accelerate library directly, but instead a general question about your code or the library specifically please use the [forums](https://discuss.huggingface.co/c/accelerate/18).
- type: textarea
id: system-info
attributes:

View File

@ -1,47 +0,0 @@
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/accelerate/blob/main/CONTRIBUTING.md#submitting-a-pull-request-pr),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/accelerate/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/accelerate/tree/main/docs#writing-documentation---specification).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
- Big modeling: @SunMarc
- Fully-Sharded Data Parallelism: @pacman100
- DeepSpeed: @pacman100
- Command Line Interface: @muellerzr
- Documentation: @muellerzr
- Core parts of the library: @muellerzr @BenjaminBossan
- Maintained examples: @muellerzr or @pacman100
-->

View File

@ -21,40 +21,44 @@ jobs:
version-cpu:
name: "Latest Accelerate CPU [version]"
runs-on: [self-hosted, docker-gpu, multi-gpu]
runs-on: ubuntu-latest
needs: get-version
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
uses: docker/setup-buildx-action@v1
- name: Check out code
uses: actions/checkout@v2
- name: Login to DockerHub
uses: docker/login-action@v2
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Build and Push CPU
uses: docker/build-push-action@v4
uses: docker/build-push-action@v2
with:
file: docker/accelerate-cpu/Dockerfile
context: ./docker/accelerate-cpu
push: true
tags: huggingface/accelerate-cpu:${{needs.get-version.outputs.version}}
version-cuda:
name: "Latest Accelerate GPU [version]"
runs-on: [self-hosted, docker-gpu, multi-gpu]
runs-on: ubuntu-latest
needs: get-version
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
uses: docker/setup-buildx-action@v1
- name: Check out code
uses: actions/checkout@v2
- name: Login to DockerHub
uses: docker/login-action@v2
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Build and Push GPU
uses: docker/build-push-action@v4
uses: docker/build-push-action@v2
with:
file: docker/accelerate-gpu/Dockerfile
context: ./docker/accelerate-gpu
push: true
tags: huggingface/accelerate-gpu:${{needs.get-version.outputs.version}}
tags: huggingface/accelerate-gpu:${{needs.get-version.outputs.version}}

View File

@ -42,9 +42,4 @@ jobs:
run-merge-tests:
needs: build-docker-containers
if: always()
uses: ./.github/workflows/run_merge_tests.yml
run-integration-tests:
needs: run-merge-tests
if: always()
uses: ./.github/workflows/self_hosted_integration_tests.yml
uses: ./.github/workflows/run_merge_tests.yml

View File

@ -11,50 +11,44 @@ concurrency:
cancel-in-progress: false
jobs:
clean-storage:
name: "Clean docker image storage"
runs-on: [self-hosted, docker-gpu, multi-gpu]
steps:
- name: Clean storage
run: |
docker image prune --all -f --filter "until=48h"
docker system prune --all -f --filter "until=48h"
latest-cpu:
name: "Latest Accelerate CPU [dev]"
runs-on: [self-hosted, docker-gpu, multi-gpu]
needs: clean-storage
runs-on: ubuntu-latest
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
uses: docker/setup-buildx-action@v1
- name: Check out code
uses: actions/checkout@v2
- name: Login to DockerHub
uses: docker/login-action@v2
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Build and Push CPU
uses: docker/build-push-action@v4
uses: docker/build-push-action@v2
with:
file: docker/accelerate-cpu/Dockerfile
context: ./docker/accelerate-cpu
push: true
tags: huggingface/accelerate-cpu
latest-cuda:
name: "Latest Accelerate GPU [dev]"
runs-on: [self-hosted, docker-gpu, multi-gpu]
needs: clean-storage
runs-on: ubuntu-latest
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
uses: docker/setup-buildx-action@v1
- name: Check out code
uses: actions/checkout@v2
- name: Login to DockerHub
uses: docker/login-action@v2
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Build and Push GPU
uses: docker/build-push-action@v4
uses: docker/build-push-action@v2
with:
file: docker/accelerate-gpu/Dockerfile
context: ./docker/accelerate-gpu
push: true
tags: huggingface/accelerate-gpu
tags: huggingface/accelerate-gpu

View File

@ -1,64 +0,0 @@
# CI for specifically ensuring integrations work fine (`transformers` mainly)
# Useful tips:
# - New integrations to test should have its own job, and follow a strategy method where we check both
# the pypi and github versions.
# - When checking the latest release of the integration, use
# git checkout $(git describe --tags `git rev-list --tags --max-count=1`) to get the latest release.
name: Integration Tests
on:
pull_request:
paths:
- "src/**"
- "tests/**"
- ".github/**"
- "examples/**"
- "setup.py"
types: [opened, synchronize, reopened]
env:
HF_HOME: ~/hf_cache
jobs:
run-trainer-tests:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
transformers-version: [
pypi,
github
]
steps:
- uses: actions/checkout@v3.1.0
- name: Set up python 3.8
uses: actions/setup-python@v3
with:
python-version: 3.8
- name: Install Accelerate from source
run: |
pip install --upgrade pip
pip install -e .
- name: Clone and install transformers
run: |
cd ..
git clone https://github.com/huggingface/transformers
cd transformers
if [[ ${{ matrix.transformers-version }} = pypi ]]; then
git checkout $(git describe --tags `git rev-list --tags --max-count=1`)
fi
pip install .[torch,testing]
- name: Show installed libraries
run: |
pip freeze
- name: Run Trainer tests
env:
WANDB_DISABLED: true
run: |
cd ../transformers
pytest -sv tests/trainer

View File

@ -39,7 +39,6 @@ jobs:
make test
- name: Run examples on GPUs
if: always()
run: |
source activate accelerate
pip uninstall comet_ml -y
@ -80,13 +79,11 @@ jobs:
make test_cli
- name: Run Integration tests on GPUs
if: always()
run: |
source activate accelerate
make test_integrations
- name: Run examples on GPUs
if: always()
run: |
source activate accelerate
pip uninstall comet_ml -y
@ -96,10 +93,4 @@ jobs:
if: always()
run: |
pip install slack_sdk tabulate
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
run-integration-tests:
needs: [run_all_tests_single_gpu, run_all_tests_multi_gpu]
if: always()
uses: ./.github/workflows/self_hosted_integration_tests.yml
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY

View File

@ -7,16 +7,11 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Python 3.8
- name: Set up Python 3.7
uses: actions/setup-python@v3
with:
python-version: 3.8
python-version: 3.7
- name: Install Python dependencies
run: pip install -e .[quality]
- name: Run Quality check
run: make quality
- name: Check if failure
if: ${{ failure() }}
run: |
echo "Quality check failed. Please ensure the right dependency versions are installed with 'pip install -e .[quality]' and rerun 'make style; make quality;'" >> $GITHUB_STEP_SUMMARY
run: make quality

View File

@ -35,12 +35,10 @@ jobs:
make test_cli
- name: Run test on GPUs
if: always()
run: |
source activate accelerate
make test
- name: Run examples on GPUs
if: always()
run: |
source activate accelerate
pip uninstall comet_ml -y
@ -70,13 +68,17 @@ jobs:
pip install -e .[testing,test_trackers] -U
pip install pytest-reportlog tabulate
- name: Run CLI tests
run: |
source activate accelerate
make test_cli
- name: Run test on GPUs
run: |
source activate accelerate
make test
- name: Run examples on GPUs
if: always()
run: |
source activate accelerate
pip uninstall comet_ml -y

View File

@ -1,126 +0,0 @@
# CI for specifically ensuring integrations work fine (`transformers` mainly) on GPUs
# Useful tips:
# - `working-directory` should be set to the root of the repo, which is cloned on the actual CI runner.
# It follows the directory structure of `actions-runner/_work/{repo_name}/{repo_name}/{cloned_repo} on
# prem, but in Actions setting `working-directory` looks just in the `{repo_name}` level.
# - New integrations to test should have its own job, and follow a strategy method where we check both
# the pypi and github versions.
# - Workflow call lets this be called from `build_and_run_tests.yml`
# - When using a docker container, it's recommended to set `--shm-size`, we use 16gb.
name: Integration Tests (push to "main")
on:
workflow_call:
workflow_dispatch:
env:
HF_HOME: ~/hf_cache
defaults:
run:
shell: bash
jobs:
run-trainer-tests:
container:
image: huggingface/accelerate-gpu:latest
options: --gpus all --shm-size "16gb"
runs-on: [self-hosted, docker-gpu, multi-gpu]
strategy:
fail-fast: false
matrix:
transformers-version: [
pypi,
github
]
cuda_visible_devices: [
"0",
"0,1"
]
steps:
- name: Update accelerate clone and pip install
working-directory: accelerate/
run:
source activate accelerate;
git config --global --add safe.directory '*';
git checkout main && git fetch && git checkout ${{ github.sha }};
pip install -e .;
- name: Update transformers clone & pip install
working-directory: transformers/
run: |
source activate accelerate
git config --global --add safe.directory '*'
git checkout main && git pull
if [[ ${{ matrix.transformers-version }} = pypi ]]; then
git checkout $(git describe --tags `git rev-list --tags --max-count=1`)
fi
pip install .[torch,deepspeed-testing]
- name: Show installed libraries
run: |
source activate accelerate;
pip freeze
- name: Run trainer tests
working-directory: transformers/
env:
CUDA_VISIBLE_DEVICES: ${{ matrix.cuda_visible_devices }}
WANDB_DISABLED: true
run: |
source activate accelerate;
pytest -sv tests/trainer
- name: Run deepspeed tests
working-directory: transformers/
env:
CUDA_VISIBLE_DEVICES: ${{ matrix.cuda_visible_devices }}
WANDB_DISABLED: true
if: always()
run: |
source activate accelerate;
pytest -sv tests/deepspeed
run-skorch-tests:
container:
image: huggingface/accelerate-gpu:latest
options: --gpus all --shm-size "16gb"
runs-on: [self-hosted, docker-gpu, multi-gpu]
strategy:
fail-fast: false
matrix:
skorch-version: [
pypi,
github
]
steps:
- name: Update accelerate clone and pip install
working-directory: accelerate/
run:
source activate accelerate;
git config --global --add safe.directory '*';
git checkout main && git fetch && git checkout ${{ github.sha }};
pip install -e .;
- name: Update skorch clone & pip install
working-directory: skorch/
run: |
source activate accelerate
git config --global --add safe.directory '*'
git checkout master && git pull
if [[ ${{ matrix.skorch-version }} = pypi ]]; then
git checkout $(git describe --tags `git rev-list --tags --max-count=1`)
fi
pip install .[testing]
pip install flaky
- name: Show installed libraries
run: |
source activate accelerate;
pip freeze
- name: Run skorch tests
working-directory: skorch/
run: |
source activate accelerate;
pytest -sv -k TestAccelerate

View File

@ -18,7 +18,7 @@ jobs:
- name: Setup Python
uses: actions/setup-python@v1
with:
python-version: 3.8
python-version: 3.7
- name: Install requirements
run: |

View File

@ -23,7 +23,7 @@ jobs:
matrix:
pytorch-version: [
latest,
minimum,
minimum
]
test-kind: [
test_prod,
@ -39,10 +39,10 @@ jobs:
]
steps:
- uses: actions/checkout@v3.1.0
- name: Set up python 3.8
- name: Set up python 3.7
uses: actions/setup-python@v3
with:
python-version: 3.8
python-version: 3.7
- name: Activate python cache
uses: actions/cache@v3
@ -58,7 +58,7 @@ jobs:
if [[ ${{ matrix.test-kind }} = test_prod ]]; then pip install -e .[test_prod]; fi
if [[ ${{ matrix.test-kind }} != test_prod ]]; then pip install -e .[testing,test_trackers]; fi
if [[ ${{ matrix.test-kind }} = test_rest ]]; then pip uninstall comet_ml -y; fi
if [[ ${{ matrix.test-kind }} = minimum ]]; then pip install torch==1.10.0; fi
if [[ ${{ matrix.pytorch-version }} = minimum ]]; then pip install torch==1.6.0; fi
pip install pytest-reportlog tabulate
- name: Run Tests

View File

@ -27,7 +27,7 @@ test:
python -m pytest -s -v ./tests/ --ignore=./tests/test_examples.py $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_all.log",)
test_big_modeling:
python -m pytest -s -v ./tests/test_big_modeling.py ./tests/test_modeling_utils.py $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_big_modeling.log",)
python -m pytest -s -v ./tests/test_big_modeling.py $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_big_modeling.log",)
test_core:
python -m pytest -s -v ./tests/ --ignore=./tests/test_examples.py --ignore=./tests/deepspeed --ignore=./tests/test_big_modeling.py \

View File

@ -21,7 +21,7 @@ limitations under the License.
<p>
<p align="center">
<!-- Uncomment when CircleCI is set up
<!-- Uncomment when CircleCI is setup
<a href="https://circleci.com/gh/huggingface/accelerate">
<img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/master">
</a>
@ -91,7 +91,7 @@ Here is an example:
optimizer.step()
```
As you can see in this example, by adding 5-lines to any standard PyTorch training script you can now run on any kind of single or distributed node setting (single CPU, single GPU, multi-GPUs and TPUs) as well as with or without mixed precision (fp8, fp16, bf16).
As you can see in this example, by adding 5-lines to any standard PyTorch training script you can now run on any kind of single or distributed node setting (single CPU, single GPU, multi-GPUs and TPUs) as well as with or without mixed precision (fp16).
In particular, the same code can then be run without modification on your local machine for debugging or your training environment.
@ -132,7 +132,7 @@ In particular, the same code can then be run without modification on your local
optimizer.step()
```
Want to learn more? Check out the [documentation](https://huggingface.co/docs/accelerate) or have a look at our [examples](https://github.com/huggingface/accelerate/tree/main/examples).
Want to learn more? Check out the [documentation](https://huggingface.co/docs/accelerate) or have look at our [examples](https://github.com/huggingface/accelerate/tree/main/examples).
## Launching script
@ -155,17 +155,7 @@ For instance, here is how you would run the GLUE example on the MRPC task (from
accelerate launch examples/nlp_example.py
```
This CLI tool is **optional**, and you can still use `python my_script.py` or `python -m torchrun my_script.py` at your convenience.
You can also directly pass in the arguments you would to `torchrun` as arguments to `accelerate launch` if you wish to not run` accelerate config`.
For example, here is how to launch on two GPUs:
```bash
accelerate launch --multi_gpu --num_processes 2 examples/nlp_example.py
```
To learn more, check the CLI documentation available [here](https://huggingface.co/docs/accelerate/package_reference/cli).
This CLI tool is **optional**, and you can still use `python my_script.py` or `python -m torchrun my_script.py` at your convenance.
## Launching multi-CPU run using MPI
@ -178,12 +168,12 @@ mpirun -np 2 python examples/nlp_example.py
## Launching training using DeepSpeed
🤗 Accelerate supports training on single/multiple GPUs using DeepSpeed. To use it, you don't need to change anything in your training code; you can set everything using just `accelerate config`. However, if you desire to tweak your DeepSpeed related args from your Python script, we provide you the `DeepSpeedPlugin`.
🤗 Accelerate supports training on single/multiple GPUs using DeepSpeed. To use it, you don't need to change anything in your training code; you can set everything using just `accelerate config`. However, if you desire to tweak your DeepSpeed related args from your python script, we provide you the `DeepSpeedPlugin`.
```python
from accelerate import Accelerator, DeepSpeedPlugin
# deepspeed needs to know your gradient accumulation steps beforehand, so don't forget to pass it
# deepspeed needs to know your gradient accumulation steps before hand, so don't forget to pass it
# Remember you still need to do gradient accumulation by yourself, just like you would have done without deepspeed
deepspeed_plugin = DeepSpeedPlugin(zero_stage=2, gradient_accumulation_steps=2)
accelerator = Accelerator(mixed_precision='fp16', deepspeed_plugin=deepspeed_plugin)
@ -210,7 +200,7 @@ An example can be found in [this notebook](https://github.com/huggingface/notebo
## Why should I use 🤗 Accelerate?
You should use 🤗 Accelerate when you want to easily run your training scripts in a distributed environment without having to renounce full control over your training loop. This is not a high-level framework above PyTorch, just a thin wrapper so you don't have to learn a new library. In fact, the whole API of 🤗 Accelerate is in one class, the `Accelerator` object.
You should use 🤗 Accelerate when you want to easily run your training scripts in a distributed environment without having to renounce full control over your training loop. This is not a high-level framework above PyTorch, just a thin wrapper so you don't have to learn a new library, In fact the whole API of 🤗 Accelerate is in one class, the `Accelerator` object.
## Why shouldn't I use 🤗 Accelerate?
@ -221,21 +211,20 @@ You shouldn't use 🤗 Accelerate if you don't want to write a training loop you
If you like the simplicity of 🤗 Accelerate but would prefer a higher-level abstraction around its capabilities, some frameworks and libraries that are built on top of 🤗 Accelerate are listed below:
* [Animus](https://github.com/Scitator/animus) is a minimalistic framework to run machine learning experiments. Animus highlights common "breakpoints" in ML experiments and provides a unified interface for them within [IExperiment](https://github.com/Scitator/animus/blob/main/animus/core.py#L76).
* [Catalyst](https://github.com/catalyst-team/catalyst#getting-started) is a PyTorch framework for Deep Learning Research and Development. It focuses on reproducibility, rapid experimentation, and codebase reuse so you can create something new rather than write yet another train loop. Catalyst provides a [Runner](https://catalyst-team.github.io/catalyst/api/core.html#runner) to connect all parts of the experiment: hardware backend, data transformations, model training, and inference logic.
* [Catalyst](https://github.com/catalyst-team/catalyst#getting-started) is a PyTorch framework for Deep Learning Research and Development. It focuses on reproducibility, rapid experimentation, and codebase reuse so you can create something new rather than write yet another train loop. Catalyst provides a [Runner](https://catalyst-team.github.io/catalyst/api/core.html#runner) to connect all parts of the experiment: hardware backend, data transformations, model train, and inference logic.
* [fastai](https://github.com/fastai/fastai#installing) is a PyTorch framework for Deep Learning that simplifies training fast and accurate neural nets using modern best practices. fastai provides a [Learner](https://docs.fast.ai/learner.html#Learner) to handle the training, fine-tuning, and inference of deep learning algorithms.
* [Finetuner](https://github.com/jina-ai/finetuner) is a service that enables models to create higher-quality embeddings for semantic search, visual similarity search, cross-modal text<->image search, recommendation systems, clustering, duplication detection, anomaly detection, or other uses.
* [InvokeAI](https://github.com/invoke-ai/InvokeAI) is a creative engine for Stable Diffusion models, offering industry-leading WebUI, terminal usage support, and serves as the foundation for many commercial products.
* [Kornia](https://kornia.readthedocs.io/en/latest/get-started/introduction.html) is a differentiable library that allows classical computer vision to be integrated into deep learning models. Kornia provides a [Trainer](https://kornia.readthedocs.io/en/latest/x.html#kornia.x.Trainer) with the specific purpose to train and fine-tune the supported deep learning algorithms within the library.
* [Open Assistant](https://projects.laion.ai/Open-Assistant/) is a chat-based assistant that understands tasks, can interact with their party systems, and retrieve information dynamically to do so.
* [pytorch-accelerated](https://github.com/Chris-hughes10/pytorch-accelerated) is a lightweight training library, with a streamlined feature set centered around a general-purpose [Trainer](https://pytorch-accelerated.readthedocs.io/en/latest/trainer.html), that places a huge emphasis on simplicity and transparency; enabling users to understand exactly what is going on under the hood, but without having to write and maintain the boilerplate themselves!
* [pytorch-accelerated](https://github.com/Chris-hughes10/pytorch-accelerated) is a lightweight training library, with a streamlined feature set centred around a general-purpose [Trainer](https://pytorch-accelerated.readthedocs.io/en/latest/trainer.html), that places a huge emphasis on simplicity and transparency; enabling users to understand exactly what is going on under the hood, but without having to write and maintain the boilerplate themselves!
* [Stable Diffusion web UI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) is an open-source browser-based easy-to-use interface based on the Gradio library for Stable Diffusion.
* [torchkeras](https://github.com/lyhue1991/torchkeras) is a simple tool for training pytorch model just in a keras style, a dynamic and beautiful plot is provided in notebook to monitor your loss or metric.
* [transformers](https://github.com/huggingface/transformers) as a tool for helping train state-of-the-art machine learning models in PyTorch, Tensorflow, and JAX. (Accelerate is the backend for the PyTorch side).
* [torchkeras](https://github.com/lyhue1991/torchkeras) is a simple tool for training pytorch model jusk in a keras style, a dynamic and beautiful plot is provided in notebook to monitor your loss or metric.
## Installation
This repository is tested on Python 3.8+ and PyTorch 1.10.0+
This repository is tested on Python 3.7+ and PyTorch 1.4.0+
You should install 🤗 Accelerate in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're unfamiliar with Python virtual environments, check out the [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
@ -256,8 +245,7 @@ pip install accelerate
- multi-GPU on one node (machine)
- multi-GPU on several nodes (machines)
- TPU
- FP16/BFloat16 mixed precision
- FP8 mixed precision with [Transformer Engine](https://github.com/NVIDIA/TransformerEngine)
- FP16 with native AMP (apex on the roadmap)
- DeepSpeed support (Experimental)
- PyTorch Fully Sharded Data Parallel (FSDP) support (Experimental)
- Megatron-LM support (Experimental)
@ -269,7 +257,7 @@ If you use 🤗 Accelerate in your publication, please cite it by using the foll
```bibtex
@Misc{accelerate,
title = {Accelerate: Training and inference at scale made simple, efficient and adaptable.},
author = {Sylvain Gugger, Lysandre Debut, Thomas Wolf, Philipp Schmid, Zachary Mueller, Sourab Mangrulkar, Marc Sun, Benjamin Bossan},
author = {Sylvain Gugger, Lysandre Debut, Thomas Wolf, Philipp Schmid, Zachary Mueller, Sourab Mangrulkar},
howpublished = {\url{https://github.com/huggingface/accelerate}},
year = {2022}
}

View File

@ -1,7 +1,7 @@
# Builds CPU-only Docker image of PyTorch
# Uses multi-staged approach to reduce size
# Stage 1
FROM python:3.8-slim as compile-image
FROM python:3.7-slim as compile-image
ARG DEBIAN_FRONTEND=noninteractive
@ -25,7 +25,7 @@ RUN python3 -m pip install --no-cache-dir \
--extra-index-url https://download.pytorch.org/whl/cpu
# Stage 2
FROM python:3.8-slim AS build-image
FROM python:3.7-slim AS build-image
COPY --from=compile-image /opt/venv /opt/venv
RUN useradd -ms /bin/bash user
USER user

View File

@ -81,7 +81,7 @@ The `preview` command only works with existing doc files. When you add a complet
## Adding a new element to the navigation bar
Accepted files are Markdown (.md).
Accepted files are Markdown (.md or .mdx).
Create a file with its extension and put it in the source directory. You can then link it to the toc-tree by putting
the filename without the extension in the [`_toctree.yml`](https://github.com/huggingface/accelerate/blob/main/docs/source/_toctree.yml) file.

View File

@ -22,11 +22,7 @@
- local: usage_guides/training_zoo
title: Example Zoo
- local: usage_guides/big_modeling
title: How to perform inference on large models with small resources
- local: usage_guides/model_size_estimator
title: Knowing how big of a model you can fit into memory
- local: usage_guides/quantization
title: How to quantize model
title: How perform inference on large models with small resources
- local: usage_guides/distributed_inference
title: How to perform distributed inference with normal resources
- local: usage_guides/gradient_accumulation
@ -37,8 +33,6 @@
title: Saving and loading training states
- local: usage_guides/tracking
title: Using experiment trackers
- local: usage_guides/debug
title: Debugging timeout errors
- local: usage_guides/memory
title: How to avoid CUDA Out-of-Memory
- local: usage_guides/mps
@ -55,8 +49,6 @@
title: How to use 🤗 Accelerate with Intel® Extension for PyTorch for cpu
title: How-To Guides
- sections:
- local: concept_guides/big_model_inference
title: Loading big models into memory
- local: concept_guides/performance
title: Comparing performance across distributed setups
- local: concept_guides/deferring_execution
@ -91,6 +83,4 @@
title: Utility functions and classes
- local: package_reference/megatron_lm
title: Megatron-LM Utilities
- local: package_reference/fsdp
title: Fully Sharded Data Parallelism Utilities
title: "Reference"

View File

@ -8,14 +8,11 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Installation and Configuration
Before you start, you will need to setup your environment, install the appropriate packages, and configure 🤗 Accelerate. 🤗 Accelerate is tested on **Python 3.8+**.
Before you start, you will need to setup your environment, install the appropriate packages, and configure 🤗 Accelerate. 🤗 Accelerate is tested on **Python 3.7+**.
## Installing 🤗 Accelerate
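For quick reference, the base package itself installs with pip (the full steps are elided by this hunk):

```bash
pip install accelerate
```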

View File

@ -8,9 +8,6 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Launching your 🤗 Accelerate scripts
@ -39,7 +36,7 @@ for batch in training_dataloader:
But how do you run this code and have it utilize the special hardware available to it?
First, you should rewrite the above code into a function, and make it callable as a script. For example:
First you should rewrite the above code into a function, and make it callable as a script. For example:
```diff
from accelerate import Accelerator
@ -64,7 +61,7 @@ First, you should rewrite the above code into a function, and make it callable a
+ main()
```
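Pieced together, the launchable script looks roughly like the sketch below (the hunk above elides the function body, so the training code is a placeholder):

```python
from accelerate import Accelerator

def main():
    accelerator = Accelerator()
    # ... build the dataloader, model, and optimizer, then run the training loop ...

if __name__ == "__main__":
    main()
```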
Next, you need to launch it with `accelerate launch`.
Next you need to launch it with `accelerate launch`.
<Tip warning={true}>
@ -77,7 +74,7 @@ Next, you need to launch it with `accelerate launch`.
## Using accelerate launch
🤗 Accelerate has a special CLI command to help you launch your code in your system through `accelerate launch`.
This command wraps around all of the different commands needed to launch your script on various platforms, without you having to remember what each of them is.
This command wraps around all of the different commands needed to launch your script on various platforms, without you having to remember what each of them are.
<Tip>
@ -91,7 +88,7 @@ You can launch your script quickly by using:
accelerate launch {script_name.py} --arg1 --arg2 ...
```
Just put `accelerate launch` at the start of your command, and pass in additional arguments and parameters to your script afterward like normal!
Just put `accelerate launch` at the start of your command, and pass in additional arguments and parameters to your script afterwards like normal!
Since this runs the various torch spawn methods, all of the expected environment variables can be modified here as well.
For example, here is how to use `accelerate launch` with a single GPU:
@ -199,4 +196,4 @@ use_cpu: false
Launching a script from the location of that custom yaml file looks like the following:
```bash
accelerate launch --config_file {path/to/config/my_config_file.yaml} {script_name.py} {--arg1} {--arg2} ...
```
```

View File

@ -8,9 +8,6 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Migrating your code to 🤗 Accelerate
@ -124,6 +121,3 @@ for batch in training_dataloader:
scheduler.step()
```
## More Resources
To check out more ways on how to migrate to 🤗 Accelerate, check out our [interactive migration tutorial](https://huggingface.co/docs/accelerate/usage_guides/explore) which showcases other items that need to be watched for when using Accelerate and how to do so quickly.

View File

@ -8,9 +8,6 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Launching Multi-GPU Training from a Jupyter Environment
@ -401,26 +398,6 @@ args = ("fp16", 42, 64)
notebook_launcher(training_loop, args, num_processes=2)
```
In the case of running on multiple nodes, you need to set up a Jupyter session at each node and run the launching cell at the same time.
For an environment containing 2 nodes (computers) with 8 GPUs each and the main computer with an IP address of "172.31.43.8", it would look like so:
```python
notebook_launcher(training_loop, args, master_addr="172.31.43.8", node_rank=0, num_nodes=2, num_processes=8)
```
And in the second Jupyter session on the other machine:
<Tip>
Notice how the `node_rank` has changed
</Tip>
```python
notebook_launcher(training_loop, args, master_addr="172.31.43.8", node_rank=1, num_nodes=2, num_processes=8)
```
In the case of running on the TPU, it would look like so:
```python
@ -443,13 +420,6 @@ epoch 4: 94.71
And that's it!
## Debugging
A common issue when running the `notebook_launcher` is receiving a "CUDA has already been initialized" error. This usually stems
from an import or prior code in the notebook that makes a call to the PyTorch `torch.cuda` sublibrary. To help narrow down what went wrong,
you can launch the `notebook_launcher` with `ACCELERATE_DEBUG_MODE=yes` in your environment, and an additional check
will be made when spawning that a regular process can be created and utilize CUDA without issue. (Your CUDA code can still be run afterwards.)
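For example, in a notebook cell (a minimal sketch; `training_loop` and `args` are assumed to be defined as earlier in this guide):

```python
import os

# Enable the extra spawn-time CUDA check described above
os.environ["ACCELERATE_DEBUG_MODE"] = "yes"

from accelerate import notebook_launcher

notebook_launcher(training_loop, args, num_processes=2)
```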
## Conclusion
This notebook showed how to perform distributed training from inside of a Jupyter Notebook. Some key notes to remember:

View File

@ -8,9 +8,6 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Overview

View File

@ -8,9 +8,6 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Deferring Executions
@ -108,23 +105,3 @@ with accelerator.main_process_first():
remove_columns=["idx", "sentence1", "sentence2"],
)
```
## Applying checks such as Early Stopping
To have a check that works with a flag set by a particular process, the `set_trigger` and `check_trigger` API should be used. Useful examples
for doing so can include situations such as using early stopping and monitoring the loss (as each loss slightly differs on each process).
Call [`Accelerator.set_trigger`] when your condition has been met, and [`Accelerator.check_trigger`] when checking if that condition has been met in any process:
```python
for (x,y) in data_loader:
logits = model(x)
loss = loss_func(logits, y)
# Assume `should_do_early_stopping` is a custom defined function that returns a conditional
if should_do_early_stopping(loss):
accelerator.set_trigger()
# Later in the training script when we need to check for the breakpoint
if accelerator.check_trigger():
break
```

View File

@ -8,9 +8,6 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Gradient Synchronization
@ -48,30 +45,22 @@ training in a distributed setup. But how does this risk slowing down your code?
In DDP (distributed data parallel), the specific order in which processes are performed and ran are expected
at specific points and these must also occur at roughly the same time before moving on.
The most direct example is when you update model parameters through
`optimizer.step()`.
Without gradient accumulation, all instances of the model need to have
their gradients computed, collated, and updated before moving on to the next
batch of data.
When performing gradient accumulation, you accumulate `n` loss gradients and
skip `optimizer.step()` until `n` batches have been reached. As all training
processes only need to synchronize by the time `optimizer.step()` is called,
without any modification to your training step, this needless inter-process
communication can cause a significant slowdown.
How can you avoid this overhead?
The most direct example is when you update all of the parameters in a model through `.backward()`. All instances of the model
need to have updated their gradients, collated, and updated again before moving on to the next batch of data. But when performing
gradient accumulation, you accumulate `n` losses and skip `.backward()` until `n` batches have been reached. This
can cause a significant slowdown since all the processes need to communicate with them more times than needed. How
can you avoid this overhead?
## Solving the slowdown problem
Since you are skipping model parameter updates when training on these batches, their gradients do not need to be synchronized until the point where `optimizer.step()` is actually called.
Since you are skipping these batches, their gradients do not need to be synchronized until the point where `.backward()` is actually called.
PyTorch cannot automagically tell when you need to do this, but they do provide a tool to help through the [`no_sync`](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel.no_sync) context manager
that is added to your model after converting it to DDP.
Under this context manager, PyTorch will skip synchronizing the gradients when
`.backward()` is called, and the first call to `.backward()` outside this
Under this context manager, PyTorch will skip synchronizing the gradients when `.backward()` is called, and the first call to `.backward()` outside this
context manager will trigger the synchronization. See an example below:
```python
ddp_model, dataloader, optimizer = accelerator.prepare(model, dataloader, optimizer)
ddp_model, dataloader = accelerator.prepare(model, dataloader)
for index, batch in enumerate(dataloader):
inputs, targets = batch
@ -87,14 +76,13 @@ for index, batch in enumerate(dataloader):
outputs = ddp_model(inputs)
loss = loss_func(outputs)
accelerator.backward(loss)
optimizer.step()
```
In 🤗 Accelerate to make this an API that can be called no matter the training device (though it may not do anything if you are not in a distributed system!),
`ddp_model.no_sync` gets replaced with [`~Accelerator.no_sync`] and operates the same way:
```diff
ddp_model, dataloader, optimizer = accelerator.prepare(model, dataloader, optimizer)
ddp_model, dataloader = accelerator.prepare(model, dataloader)
for index, batch in enumerate(dataloader):
inputs, targets = batch
@ -111,15 +99,13 @@ In 🤗 Accelerate to make this an API that can be called no matter the training
outputs = ddp_model(inputs)
loss = loss_func(outputs)
accelerator.backward(loss)
optimizer.step()
optimizer.zero_grad()
```
As you may expect, the [`~Accelerator.accumulate`] function wraps around this conditional check by keeping track of the current batch number, leaving you with the final
gradient accumulation API:
```python
ddp_model, dataloader, optimizer = accelerator.prepare(model, dataloader, optimizer)
ddp_model, dataloader = accelerator.prepare(model, dataloader)
for batch in dataloader:
with accelerator.accumulate(model):
@ -128,8 +114,6 @@ for batch in dataloader:
outputs = model(inputs)
loss = loss_function(outputs, targets)
accelerator.backward(loss)
optimizer.step()
optimizer.zero_grad()
```
As a result, you should either use *`accelerator.accumulate` or `accelerator.no_sync`* when it comes to API choice.

View File

@ -8,9 +8,6 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Comparing performance between different device setups

View File

@ -8,14 +8,11 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Training on TPUs with 🤗 Accelerate
Training on TPUs can be slightly different from training on multi-gpu, even with 🤗 Accelerate. This guide aims to show you
Training on TPUs can be slightly different than training on multi-gpu, even with 🤗 Accelerate. This guide aims to show you
where you should be careful and why, as well as the best practices in general.
## Training in a Notebook
@ -27,8 +24,8 @@ While on a TPU that last part is not as important, a critical part to understand
When launching from the command-line, you perform **spawning**, where a python process is not currently running and you *spawn* a new process in. Since your Jupyter notebook is already
utilizing a python process, you need to *fork* a new process from it to launch your code.
Where this becomes important is in regard to declaring your model. On forked TPU processes, it is recommended that you instantiate your model *once* and pass this into your
training function. This is different than training on GPUs where you create `n` models that have their gradients synced and back-propagated at certain moments. Instead, one
Where this becomes important is in regards to declaring your model. On forked TPU processes, it is recommended that you instantiate your model *once* and pass this into your
training function. This is different than training on GPUs where you create `n` models that have their gradients synced and back-propagated at certain moments. Instead one
model instance is shared between all the nodes and it is passed back and forth. This is important especially when training on low-resource TPUs such as those provided in Kaggle kernels or
on Google Colaboratory.
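In practice, that pattern looks roughly like this sketch, where `create_model` and the body of `training_loop` are placeholders:

```python
from accelerate import notebook_launcher

model = create_model()  # instantiate once, in the notebook process

def training_loop(model):
    # the forked TPU processes all share this single model instance
    ...

notebook_launcher(training_loop, (model,), num_processes=8)
```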
@ -137,7 +134,7 @@ At the base level, this is enabled when passing `mixed_precision="bf16"` to `Acc
```python
accelerator = Accelerator(mixed_precision="bf16")
```
By default, this will cast `torch.float` and `torch.double` to `bfloat16` on TPUs.
By default this will cast `torch.float` and `torch.double` to `bfloat16` on TPUs.
Under the hood, this sets the `XLA_USE_BF16` environment variable to `1`.
There is a further configuration you can perform, which is setting the `XLA_DOWNCAST_BF16` environment variable. If set to `1`, then
@ -164,4 +161,4 @@ new batch size after the first few iterations.
Just because the memory is allocated does not mean it will be used or that the batch size will increase when going back to your training dataloader.
</Tip>
</Tip>

View File

@ -8,9 +8,6 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Accelerate

View File

@ -8,9 +8,6 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Accelerator
@ -120,51 +117,21 @@ Use [`~Accelerator.wait_for_everyone`] to make sure all processes join that poin
### Saving and loading
Use [`~Accelerator.unwrap_model`] before saving to remove all special model wrappers added during the distributed process.
```python
model = MyModel()
model = accelerator.prepare(model)
# Unwrap
model = accelerator.unwrap_model(model)
```
Use [`~Accelerator.save_model`] instead of `torch.save` to save a model. It will remove all model wrappers added during the distributed process, get the state_dict of the model and save it. The state_dict will be in the same precision as the model being trained.
Use [`~Accelerator.save`] instead of `torch.save`:
```diff
state_dict = model.state_dict()
- torch.save(state_dict, "my_state.pkl")
+ accelerator.save_model(model, save_directory)
```
[`~Accelerator.save_model`] can also save a model into sharded checkpoints or with safetensors format.
Here is an example:
```python
accelerator.save_model(model, save_directory, max_shard_size="1GB", safe_serialization=True)
```
#### 🤗 Transformers models
If you are using models from the [🤗 Transformers](https://huggingface.co/docs/transformers/) library, you can use the `.save_pretrained()` method.
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("bert-base-cased")
model = accelerator.prepare(model)
# ...fine-tune with PyTorch...
unwrapped_model = accelerator.unwrap_model(model)
unwrapped_model.save_pretrained(
"path/to/my_model_directory",
is_main_process=accelerator.is_main_process,
save_function=accelerator.save,
)
```
This will ensure your model stays compatible with other 🤗 Transformers functionality like the `.from_pretrained()` method.
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("path/to/my_model_directory")
+ accelerator.save(state_dict, "my_state.pkl")
```
### Operations

View File

@ -8,9 +8,6 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Working with large models
@ -22,8 +19,6 @@ rendered properly in your Markdown viewer.
[[autodoc]] big_modeling.disk_offload
[[autodoc]] big_modeling.dispatch_model
[[autodoc]] big_modeling.load_checkpoint_and_dispatch
[[autodoc]] big_modeling.load_checkpoint_in_model
[[autodoc]] utils.infer_auto_device_map
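As a usage sketch (the model class and checkpoint path are placeholders), these utilities typically combine like so:

```python
from accelerate import init_empty_weights, load_checkpoint_and_dispatch

with init_empty_weights():
    model = MyLargeModel()  # parameters live on the meta device, no memory used

model = load_checkpoint_and_dispatch(
    model, "path/to/checkpoint", device_map="auto"
)
```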
## Model Hooks

View File

@ -8,9 +8,6 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# The Command Line
@ -228,36 +225,6 @@ The following arguments are only useful when training in SageMaker
* `--aws_access_key_id AWS_ACCESS_KEY_ID` (`str`) -- The AWS_ACCESS_KEY_ID used to launch the Amazon SageMaker training job
* `--aws_secret_access_key AWS_SECRET_ACCESS_KEY` (`str`) -- The AWS_SECRET_ACCESS_KEY used to launch the Amazon SageMaker training job
## accelerate estimate-memory
**Command**:
`accelerate estimate-memory` or `accelerate-estimate-memory` or `python -m accelerate.commands.estimate`
Estimates the total vRAM a particular model hosted on the Hub needs to be loaded in with an estimate for training. Requires that `huggingface_hub` be installed.
<Tip>
When performing inference, typically add ≤20% to the result as overall allocation [as referenced here](https://blog.eleuther.ai/transformer-math/). We will have more extensive estimations in the future that will automatically be included in the calculation.
</Tip>
**Usage**:
```bash
accelerate estimate-memory {MODEL_NAME} --library_name {LIBRARY_NAME} --dtypes {dtype_1} {dtype_2} ...
```
**Required Arguments**:
* `MODEL_NAME` (`str`)-- The model name on the Hugging Face Hub
**Optional Arguments**:
* `--library_name {timm,transformers}` (`str`) -- The library the model has an integration with, such as `transformers`, needed only if this information is not stored on the Hub
* `--dtypes {float32,float16,int8,int4}` (`[{float32,float16,int8,int4} ...]`) -- The dtypes to use for the model, must be one (or many) of `float32`, `float16`, `int8`, and `int4`
* `--trust_remote_code` (`bool`) -- Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be passed for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
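A concrete invocation following the usage above might look like:

```bash
accelerate estimate-memory bert-base-cased --library_name transformers --dtypes float32 float16
```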
## accelerate tpu-config
`accelerate tpu-config`

View File

@ -8,9 +8,6 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Utilities for DeepSpeed

View File

@ -1,18 +0,0 @@
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Utilities for Fully Sharded Data Parallelism
[[autodoc]] utils.FullyShardedDataParallelPlugin
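For context, this plugin is handed to the [`Accelerator`] at construction time; a minimal sketch with default settings:

```python
from accelerate import Accelerator, FullyShardedDataParallelPlugin

fsdp_plugin = FullyShardedDataParallelPlugin()
accelerator = Accelerator(fsdp_plugin=fsdp_plugin)
```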

View File

@ -8,9 +8,6 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Kwargs Handlers
@ -18,18 +15,11 @@ rendered properly in your Markdown viewer.
The following objects can be passed to the main [`Accelerator`] to customize how some PyTorch objects
related to distributed training or mixed precision are created.
## AutocastKwargs
[[autodoc]] AutocastKwargs
## DistributedDataParallelKwargs
[[autodoc]] DistributedDataParallelKwargs
## FP8RecipeKwargs
[[autodoc]] utils.FP8RecipeKwargs
## GradScalerKwargs
[[autodoc]] GradScalerKwargs
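For example, a handler is passed through the `kwargs_handlers` argument; a minimal sketch:

```python
from accelerate import Accelerator, DistributedDataParallelKwargs

ddp_kwargs = DistributedDataParallelKwargs(find_unused_parameters=True)
accelerator = Accelerator(kwargs_handlers=[ddp_kwargs])
```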

View File

@ -8,9 +8,6 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Launchers

View File

@ -8,9 +8,6 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Logging with Accelerate

View File

@ -8,9 +8,6 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Utilities for Megatron-LM

View File

@ -8,9 +8,6 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Stateful Classes

View File

@ -8,9 +8,6 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Wrapper classes for torch Dataloaders, Optimizers, and Schedulers
@ -21,7 +18,6 @@ when calling [`~Accelerator.prepare`].
## Datasets and DataLoaders
[[autodoc]] data_loader.prepare_data_loader
[[autodoc]] data_loader.skip_first_batches
[[autodoc]] data_loader.BatchSamplerShard
[[autodoc]] data_loader.IterableDatasetShard

View File

@ -8,9 +8,6 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Experiment Tracking

View File

@ -8,64 +8,24 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Helpful Utilities
Below are a variety of utility functions that 🤗 Accelerate provides, broken down by use-case.
## Constants
Constants used throughout 🤗 Accelerate for reference
The following are constants used when utilizing [`Accelerator.save_state`]:
`utils.MODEL_NAME`: `"pytorch_model"`
`utils.OPTIMIZER_NAME`: `"optimizer"`
`utils.RNG_STATE_NAME`: `"random_states"`
`utils.SCALER_NAME`: `"scaler.pt"`
`utils.SCHEDULER_NAME`: `"scheduler"`
The following are constants used when utilizing [`Accelerator.save_model`]:
`utils.WEIGHTS_NAME`: `"pytorch_model.bin"`
`utils.SAFE_WEIGHTS_NAME`: `"model.safetensors"`
`utils.WEIGHTS_INDEX_NAME`: `"pytorch_model.bin.index.json"`
`utils.SAFE_WEIGHTS_INDEX_NAME`: `"model.safetensors.index.json"`
## Data Classes
These are basic dataclasses used throughout 🤗 Accelerate and they can be passed in as parameters.
[[autodoc]] utils.DistributedType
[[autodoc]] utils.DynamoBackend
[[autodoc]] utils.LoggerType
[[autodoc]] utils.PrecisionType
[[autodoc]] utils.ProjectConfiguration
## Plugins
These are plugins that can be passed to the [`Accelerator`] object. While they are defined elsewhere in the documentation,
for convenience all of them are available to see here:
[[autodoc]] utils.DeepSpeedPlugin
[[autodoc]] utils.FullyShardedDataParallelPlugin
[[autodoc]] utils.GradientAccumulationPlugin
[[autodoc]] utils.MegatronLMPlugin
[[autodoc]] utils.TorchDynamoPlugin
## Data Manipulation and Operations
These include data operations that mimic the same `torch` ops but can be used on distributed processes.
@ -88,23 +48,11 @@ These functionalities check the state of the current working environment includi
[[autodoc]] utils.is_bf16_available
[[autodoc]] utils.is_ipex_available
[[autodoc]] utils.is_mps_available
[[autodoc]] utils.is_npu_available
[[autodoc]] utils.is_torch_version
[[autodoc]] utils.is_tpu_available
[[autodoc]] utils.is_xpu_available
## Environment Manipulation
[[autodoc]] utils.patch_environment
[[autodoc]] utils.clear_environment
## Environment Configuration
[[autodoc]] utils.write_basic_config
@ -154,17 +102,3 @@ These utilities relate to setting and synchronizing of all the random states.
These include utilities that are useful while using PyTorch with XLA.
[[autodoc]] utils.install_xla
## Loading model weights
These include utilities that are useful to load checkpoints.
[[autodoc]] utils.load_checkpoint_in_model
## Quantization
These include utilities that are useful to quantize model.
[[autodoc]] utils.load_and_quantize_model
[[autodoc]] utils.BnbQuantizationConfig

View File

@ -8,27 +8,17 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Quick tour
This guide aims to help you get started with 🤗 Accelerate quickly. It covers the essential steps you need to take to
enable distributed training, as well as the adjustments that you need to make in some common scenarios.
Let's have a look at the 🤗 Accelerate main features and traps to avoid.
To help you navigate, the guide is split into two sections:
* [Getting Started with 🤗 Accelerate](#getting-started-with--accelerate): start here to learn how to modify your script to enable distributed training with 🤗 Accelerate
* [Common adaptations to the base case](#common-adaptations-to-the-base-case): check out this section for common deviations from the baseline scenario and what adjustments may need to be made to support them.
## Main use
## Getting started with 🤗 Accelerate
To use 🤗 Accelerate in your own script, you have to change four things:
### Enable distributed training in your script
To use 🤗 Accelerate in your own training script, you have to modify four things:
1. Import the [`Accelerator`] main class and instantiate one in an `accelerator` object.
1. Import the [`Accelerator`] main class and instantiate one in an `accelerator` object:
```python
from accelerate import Accelerator
@ -36,27 +26,27 @@ from accelerate import Accelerator
accelerator = Accelerator()
```
Add this at the beginning of your training script as it will initialize everything necessary for distributed training.
You don't need to indicate the kind of environment you are in (a single machine with a GPU, a machine with several GPUs,
or several machines with multiple GPUs or a TPU), the library will detect this automatically.
This should happen as early as possible in your training script as it will initialize everything necessary for
distributed training. You don't need to indicate the kind of environment you are in (just one machine with a GPU, one
machine with several GPUs, several machines with multiple GPUs or a TPU), the library will detect this automatically.
2. Remove the `.to(device)` or `.cuda()` calls for your model and input data.
2. Remove the call `.to(device)` or `.cuda()` for your model and input data. The `accelerator` object
will handle this for you and place all those objects on the right device for you. If you know what you're doing, you
can leave those `.to(device)` calls but you should use the device provided by the `accelerator` object:
`accelerator.device`.
The `accelerator` object will handle placing these objects on the right device for you.
If you choose to leave those `.to(device)` calls, make sure to use the device provided by the `accelerator` object: `accelerator.device`.
To fully deactivate the automatic device placement, pass along `device_placement=False` when initializing your
[`Accelerator`].
<Tip warning={true}>
You can fully deactivate the automatic device placement by passing along `device_placement=False` when
initializing [`Accelerator`].
However, if you place your objects manually on the proper device, be careful to create your optimizer after putting your
If you place your objects manually on the proper device, be careful to create your optimizer after putting your
model on `accelerator.device` or your training will fail on TPU.
</Tip>
3. Pass all objects relevant to training (optimizer, model, training dataloader, learning rate scheduler) to the
[`~Accelerator.prepare`] method as soon as these objects are created, before starting your actual
training loop:
[`~Accelerator.prepare`] method. This will make sure everything is ready for training.
```python
model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
@ -64,40 +54,58 @@ model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
)
```
**Important notes**:
* Only pass the learning rate scheduler to [`~Accelerator.prepare`] when the scheduler needs to be stepped at each optimizer step.
* While you can send your dataloader to [`~Accelerator.prepare`] on its own, it's best to send it to [`~Accelerator.prepare`] together with the model and optimizer.
* If you wish to run distributed evaluation, send your validation dataloader to [`~Accelerator.prepare`] as well. There are some nuances to distributed validation, check the [Distributed evaluation](#add-distributed-evaluation) section of the guide.
* Any instruction using your training dataloader length (for instance if you want to log the number of total training
steps) should go after the call to [`~Accelerator.prepare`].
Passing these objects to the [`~Accelerator.prepare`] method ensures that your training dataloader will be sharded across
all GPUs/TPU cores available so that each one sees a different portion of the training dataset. Also, the random states
of all processes will be synchronized at the beginning of each iteration through your dataloader, to make sure the data
is shuffled the same way (if you decided to use `shuffle=True` or any kind of random sampler).
In particular, your training dataloader will be sharded across all GPUs/TPU cores available so that each one sees a
different portion of the training dataset. Also, the random states of all processes will be synchronized at the
beginning of each iteration through your dataloader, to make sure the data is shuffled the same way (if you decided to
use `shuffle=True` or any kind of random sampler).
<Tip>
The actual batch size for your training will be the number of devices used multiplied by the batch size you set in
your script. For instance, training on 4 GPUs with a batch size of 16 set when creating the training dataloader will
train at an actual batch size of 64.
If you want the batch size remain the same regardless of how many GPUs the script is run on, you can use the
option `split_batches=True` when creating and initializing [`Accelerator`].
your script: for instance training on 4 GPUs with a batch size of 16 set when creating the training dataloader will
train at an actual batch size of 64.
</Tip>
Alternatively, you can use the option `split_batches=True` when creating and initializing your
[`Accelerator`], in which case the batch size will always stay the same, whether you run your
script on 1, 2, 4, or 64 GPUs.
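As a minimal sketch (the batch size of 16 and the count of 4 processes are only there for the arithmetic):

```python
from accelerate import Accelerator

# With split_batches=True, a dataloader created with batch_size=16 keeps an
# effective batch size of 16: each of the 4 processes sees 16 // 4 = 4 samples per step.
accelerator = Accelerator(split_batches=True)
```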
You should execute this instruction as soon as all objects for training are created, before starting your actual
training loop.
<Tip warning={true}>
You should only pass the learning rate scheduler to [`~Accelerator.prepare`] when the scheduler needs to be stepped
at each optimizer step.
</Tip>
<Tip warning={true}>
Your training dataloader may change length when going through this method: if you run on X GPUs, it will have its
length divided by X (since your actual batch size will be multiplied by X), unless you set
`split_batches=True`.
</Tip>
Any instruction using your training dataloader length (for instance if you want to log the number of total training
steps) should go after the call to [`~Accelerator.prepare`].
4. Replace the `loss.backward()` line with `accelerator.backward(loss)`.
You can perfectly send your dataloader to [`~Accelerator.prepare`] on its own, but it's best to send the
model and optimizer to [`~Accelerator.prepare`] together.
You may or may not want to send your validation dataloader to [`~Accelerator.prepare`], depending on
whether you want to run distributed evaluation or not (see below).
4. Replace the line `loss.backward()` by `accelerator.backward(loss)`.
And you're all set! With all these changes, your script will run on your local machine as well as on multiple GPUs or a
TPU! You can either use your favorite tool to launch the distributed training, or you can use the 🤗 Accelerate
launcher.
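Putting the four changes together, here is a minimal sketch of a converted training loop (assuming `model`, `optimizer`, `train_dataloader`, `lr_scheduler`, and a `loss_function` are defined as in any standard PyTorch script):

```python
from accelerate import Accelerator

accelerator = Accelerator()
model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
    model, optimizer, train_dataloader, lr_scheduler
)

model.train()
for inputs, targets in train_dataloader:
    optimizer.zero_grad()
    outputs = model(inputs)            # no .to(device) needed
    loss = loss_function(outputs, targets)
    accelerator.backward(loss)         # replaces loss.backward()
    optimizer.step()
    lr_scheduler.step()
```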
### Add distributed evaluation
## Distributed evaluation
You can perform regular evaluation in your training script, if you leave your validation dataloader out of the
[`~Accelerator.prepare`] method. In this case, you will need to put the input data on the
@ -110,9 +118,9 @@ method:
validation_dataloader = accelerator.prepare(validation_dataloader)
```
Same as with your training dataloader, each device will only see part of the evaluation data should you run your script
on multiple devices. This means you will need to group your predictions together which you can do with
the [`~Accelerator.gather_for_metrics`] method.
As for your training dataloader, it will mean that (should you run your script on multiple devices) each device will
only see part of the evaluation data. This means you will need to group your predictions together. This is very easy to
do with the [`~Accelerator.gather_for_metrics`] method.
```python
for inputs, targets in validation_dataloader:
@ -131,9 +139,11 @@ for inputs, targets in validation_dataloader:
</Tip>
Some data at the end of the dataset may be duplicated so the batch can be divided equally among all workers. As a result,
metrics should be calculated through the [`~Accelerator.gather_for_metrics`] method to automatically remove the duplicated
data while gathering.
Any instruction using your training dataloader length (for instance if you need the number of total training steps
to create a learning rate scheduler) should go after the call to [`~Accelerator.prepare`].
Some data at the end of the dataset may be duplicated so the batch can be divided equally among all workers. As a result, metrics
should be calculated through the [`~Accelerator.gather_for_metrics`] method to automatically remove the duplicated data while gathering.
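As a rough sketch, a distributed evaluation loop could look like this (the `metric` object with `add_batch`/`compute` methods is an assumption, standing in for e.g. a 🤗 Evaluate metric):

```python
import torch

model.eval()
for inputs, targets in validation_dataloader:
    with torch.no_grad():
        predictions = model(inputs).argmax(dim=-1)
    # Gathers from all processes and drops the samples that were duplicated
    # to round out the last batch.
    all_predictions, all_targets = accelerator.gather_for_metrics((predictions, targets))
    metric.add_batch(predictions=all_predictions, references=all_targets)
score = metric.compute()
```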
<Tip>
@ -152,35 +162,36 @@ data while gathering.
</Tip>
### Launch your distributed script
## Launching your distributed script
You can use the regular commands to launch your distributed training (like `torch.distributed.run` for
PyTorch) - they are fully compatible with 🤗 Accelerate.
PyTorch), they are fully compatible with 🤗 Accelerate.
Alternatively, 🤗 Accelerate provides a CLI tool that unifies all launchers, so you only have to remember one command. \
To use it, run a quick configuration setup first on your machine and answer the questions:
🤗 Accelerate also provides a CLI tool that unifies all launchers, so you only have to remember one command. To use it,
just run:
```bash
accelerate config
```
At the end of the setup, a *default_config.yaml* file will be saved in your cache folder for 🤗 Accelerate. That cache
folder is (with decreasing order of priority):
on your machine and reply to the questions asked. This will save a *default_config.yaml* file in your cache folder for
🤗 Accelerate. That cache folder is (with decreasing order of priority):
- The content of your environment variable `HF_HOME` suffixed with *accelerate*.
- If it does not exist, the content of your environment variable `XDG_CACHE_HOME` suffixed with
*huggingface/accelerate*.
- If this does not exist either, the folder *~/.cache/huggingface/accelerate*.
- If this does not exist either, the folder *~/.cache/huggingface/accelerate*
By passing the `--config_file` flag you can specify an alternative location for the configuration file.
Once the configuration setup is complete, you can test your setup by running:
You can also specify with the flag `--config_file` the location of the file you want to save.
Once this is done, you can test everything is going well on your setup by running:
```bash
accelerate test
```
This will launch a short script that will test the distributed environment. If it runs without issues, you are ready for
the next step!
This will launch a short script that will test the distributed environment. If it runs fine, you are ready for the next
step!
Note that if you specified a location for the config file in the previous step, you need to pass it here as well:
@ -200,23 +211,19 @@ If you stored the config file in a non-default location, you can indicate it to
accelerate launch --config_file path_to_config.yaml path_to_script.py --args_for_the_script
```
You can override any of the arguments determined by your config file. To see the complete list of parameters that you
can pass in, run `accelerate launch -h`.
You can also override any of the arguments determined by your config file.
To see the complete list of parameters that you can pass in, run `accelerate launch -h`.
Check out the [Launch tutorial](basic_tutorials/launch) for more information about launching your scripts.
Check out the [Launch tutorial](basic_tutorials/launch) for more information about launching your scripts.
## Common modifications of the base case
The previous section covers the minimal essential steps to move a training script into a distributed setup with 🤗 Accelerate.
Here we describe common modifications/deviations from the base case scenario and the adjustments you need to make to accommodate them.
## Launching training from a notebook
### Launch distributed training from a notebook
In Accelerate 0.3.0, a new [`notebook_launcher`] has been introduced to help you launch your training
function from a notebook. This launcher supports launching a training with TPUs on Colab or Kaggle, as well as training
on several GPUs (if the machine on which you are running your notebook has them).
In Accelerate 0.3.0, a new [`notebook_launcher`] has been introduced to help you launch your training function from a
notebook. This launcher supports launching a training with TPUs on Colab or Kaggle, as well as training on several GPUs
(if the machine on which you are running your notebook has them).
Define a function responsible for your whole training and/or evaluation in a cell of the notebook, then execute a
Just define a function responsible for your whole training and/or evaluation in a cell of the notebook, then execute a
cell with the following code:
```python
@ -232,9 +239,10 @@ notebook_launcher(training_function)
</Tip>
Check out the [Notebook Launcher tutorial](basic_tutorials/notebook) for more information about training on TPUs.
Check out the [Notebook Launcher tutorial](basic_tutorials/notebook) for more information about training on TPUs.
### Specifics of training on TPU
## Training on TPU
If you want to launch your script on TPUs, there are a few caveats you should be aware of. Behind the scenes, the TPUs
will create a graph of all the operations happening in your training step (forward pass, backward pass and optimizer
@ -273,7 +281,12 @@ passed your model to [`~Accelerator.prepare`]) will break the tying. You will ne
after. You can find an example of this in the [run_clm_no_trainer](https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_clm.py) script in
the Transformers repository.
Check out the [TPU tutorial](concept_guides/training_tpu) for more information about training on TPUs.
Check out the [TPU tutorial](concept_guides/training_tpu) for more information about training on TPUs.
## Other caveats
We list here all smaller issues you could have in your script conversion and how to resolve them.
### Execute a statement only on one process
@ -307,14 +320,14 @@ For printing statements you only want executed once per machine, you can just re
`accelerator.print`.
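Here is a short sketch of both patterns (`save_results` is a hypothetical stand-in for your own function):

```python
if accelerator.is_main_process:
    # Runs on exactly one process of the whole setup.
    save_results()

# Prints once per machine instead of once per process.
accelerator.print("Finished epoch")
```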
### Defer execution on multiple GPUs
### Defer execution
When you run your usual script, instructions are executed in order. Using 🤗 Accelerate to deploy your script on several
GPUs at the same time introduces a complication: while each process executes all instructions in order, some may be
faster than others.
You might need to wait for all processes to have reached a certain point before executing a given instruction. For
instance, you shouldn't save a model before making sure every process is done with training. To do this, add the
instance, you shouldn't save a model before being sure every process is done with training. To do this, just write the
following line in your code:
```
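accelerator.wait_for_everyone()
```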
@ -325,54 +338,37 @@ This instruction will block all the processes that arrive first until all the ot
point (if you run your script on just one GPU or CPU, this won't do anything).
### Save/load a model in a distributed setup
### Saving/loading a model
Saving the model you trained might need a bit of adjustment: first you should wait for all processes to reach that
point in the script as shown above, and then, you should unwrap your model before saving it. This is because when going
through the [`~Accelerator.prepare`] method, your model may have been placed inside a bigger model,
which deals with the distributed training. This in turn means that saving your model state dictionary without taking
any precaution will take that potential extra layer into account, and you will end up with weights you can't load back
in your base model. The [`~Accelerator.save_model`] method will help you to achieve that. It will unwrap your model and save
the model state dictionary.
in your base model.
Here is an example:
This is why it's recommended to *unwrap* your model first. Here is an example:
```
accelerator.wait_for_everyone()
accelerator.save_model(model, save_directory)
```
The [`~Accelerator.save_model`] method can also save a model into sharded checkpoints or with safetensors format:
```python
accelerator.wait_for_everyone()
accelerator.save_model(model, save_directory, max_shard_size="1GB", safe_serialization=True)
unwrapped_model = accelerator.unwrap_model(model)
accelerator.save(unwrapped_model.state_dict(), filename)
```
If your script contains logic to load a checkpoint, we also recommend you load your weights in the unwrapped model
(this is only useful if you use the load function after making your model go through
[`~Accelerator.prepare`]). Here is an example:
```python
```
unwrapped_model = accelerator.unwrap_model(model)
path_to_checkpoint = os.path.join(save_directory, "pytorch_model.bin")
unwrapped_model.load_state_dict(torch.load(path_to_checkpoint))
unwrapped_model.load_state_dict(torch.load(filename))
```
Note that since all the model parameters are references to tensors, this will load your weights inside `model`.
If you want to load a sharded checkpoint or a checkpoint with safetensors format into the model with a specific `device`,
we recommend you load it with the [`~utils.load_checkpoint_in_model`] function. Here's an example:
## Saving/loading entire states
```python
load_checkpoint_in_model(unwrapped_model, save_directory, device_map={"":device})
```
### Save/load entire states
When training your model, you may want to save the current state of the model, optimizer, random generators, and potentially
learning rate schedulers to be restored in the _same script_.
When training your model, you may want to save the current state of the model, optimizer, random generators, and potentially LR schedulers to be restored in the _same script_.
You can use [`~Accelerator.save_state`] and [`~Accelerator.load_state`] respectively to do so.
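For instance (the folder path is illustrative):

```python
# Saves model, optimizer, RNG and scheduler states into the given folder
accelerator.save_state("my/save/path")

# ...later, restore everything in the same script
accelerator.load_state("my/save/path")
```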
To further customize where and how states are saved through [`~Accelerator.save_state`], the [`~utils.ProjectConfiguration`] class can be used. For example
@ -387,19 +383,19 @@ If you have registered any other stateful items to be stored through [`~Accelera
</Tip>
### Use gradient clipping
### Gradient clipping
If you are using gradient clipping in your script, you should replace the calls to
`torch.nn.utils.clip_grad_norm_` or `torch.nn.utils.clip_grad_value_` with [`~Accelerator.clip_grad_norm_`]
and [`~Accelerator.clip_grad_value_`] respectively.
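For instance, the norm-based variant becomes the following (with `max_grad_norm` standing in for whatever threshold your script uses):

```python
accelerator.backward(loss)
# in place of torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
accelerator.clip_grad_norm_(model.parameters(), max_grad_norm)
optimizer.step()
```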
### Train with mixed precision
### Mixed Precision training
If you are running your training in Mixed Precision with 🤗 Accelerate, you will get the best result with your loss being
computed inside your model (like in Transformer models for instance). Every computation outside of the model will be
executed in full precision (which is generally what you want for loss computation, especially if it involves a
softmax). However, you might want to put your loss computation inside the [`~Accelerator.autocast`] context manager:
softmax). However, you might want to put your loss computation inside the *accelerator.autocast* context manager:
```
with accelerator.autocast():
@ -420,7 +416,7 @@ if not accelerator.optimizer_step_was_skipped:
lr_scheduler.step()
```
### Use gradient accumulation
### Gradient Accumulation
To perform gradient accumulation use [`~Accelerator.accumulate`] and specify a `gradient_accumulation_steps`.
This will also automatically ensure the gradients are synced or unsynced when on multi-device training, and check if the step should
@ -439,3 +435,70 @@ for input, label in training_dataloader:
scheduler.step()
optimizer.zero_grad()
```
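For context, here is a fuller sketch of the loop this snippet comes from (the value of 2 for `gradient_accumulation_steps` and the `loss_function` are placeholders):

```python
accelerator = Accelerator(gradient_accumulation_steps=2)
model, optimizer, training_dataloader, scheduler = accelerator.prepare(
    model, optimizer, training_dataloader, scheduler
)

for input, label in training_dataloader:
    # Gradient synchronization and the optimizer step only happen
    # every gradient_accumulation_steps batches.
    with accelerator.accumulate(model):
        output = model(input)
        loss = loss_function(output, label)
        accelerator.backward(loss)
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()
```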
### DeepSpeed
DeepSpeed support is experimental, so the underlying API will evolve in the near future and may have some slight
breaking changes. In particular, 🤗 Accelerate does not yet support a DeepSpeed config you have written yourself; this
will be added in a future version.
<Tip warning={true}>
The [`notebook_launcher`] does not support the DeepSpeed integration yet.
</Tip>
## Internal mechanism
Internally, the library works by first analyzing the environment in which the script is launched to determine which
kind of distributed setup is used, how many different processes there are and which one the current script is in. All
that information is stored in the [`~AcceleratorState`].
This class is initialized the first time you instantiate an [`~Accelerator`] and performs any
specific initialization your distributed setup needs. Its state is then uniquely shared through all instances of
[`~state.AcceleratorState`].
Then, when calling [`~Accelerator.prepare`], the library:
- wraps your model(s) in the container adapted for the distributed setup,
- wraps your optimizer(s) in a [`~optimizer.AcceleratedOptimizer`],
- creates a new version of your dataloader(s) in a [`~data_loader.DataLoaderShard`].
While the model(s) and optimizer(s) are just put in simple wrappers, the dataloader(s) are re-created. This is mostly
because PyTorch does not let the user change the `batch_sampler` of a dataloader once it's been created and the
library handles the sharding of your data between processes by changing that `batch_sampler` to yield every other
`num_processes` batches.
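As a small illustration of these wrappers (assuming the usual training objects already exist; the printed class paths are indicative of the module layout described above):

```python
from accelerate import Accelerator

accelerator = Accelerator()
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

print(type(optimizer))   # accelerate.optimizer.AcceleratedOptimizer
print(type(dataloader))  # accelerate.data_loader.DataLoaderShard
```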
The [`~data_loader.DataLoaderShard`] subclasses `DataLoader` to add the following functionality:
- it synchronizes the appropriate random number generator of all processes at each new iteration, to ensure any
randomization (like shuffling) is done the exact same way across processes.
- it puts the batches on the proper device before yielding them (unless you have opted out of
`device_placement=True`).
The random number generator synchronization will by default synchronize:
- the `generator` attribute of a given sampler (like the PyTorch `RandomSampler`) for PyTorch >= 1.6
- the main random number generator in PyTorch <=1.5.1
You can choose which random number generator(s) to synchronize with the `rng_types` argument of the main
[`Accelerator`]. In PyTorch >= 1.6, it is recommended to rely on a local `generator` to avoid
setting the same seed in the main random number generator in all processes.
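Here is a sketch of that recommendation (the dataset and batch size are placeholders):

```python
import torch
from torch.utils.data import DataLoader, RandomSampler
from accelerate import Accelerator

# Only synchronize the local `generator` of the sampler across processes,
# leaving the global torch RNG untouched.
accelerator = Accelerator(rng_types=["generator"])

generator = torch.Generator().manual_seed(42)
sampler = RandomSampler(train_dataset, generator=generator)
train_dataloader = DataLoader(train_dataset, sampler=sampler, batch_size=16)
train_dataloader = accelerator.prepare(train_dataloader)
```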
<Tip warning={true}>
Synchronization of the main torch (or CUDA or XLA) random number generator will affect any other potential random
artifacts you could have in your dataset (like random data augmentation) in the sense that all processes will get
the same random numbers from the torch random modules (so will apply the same random data augmentation if it's
controlled by torch).
</Tip>
<Tip>
The randomization part of your custom sampler, batch sampler or iterable dataset should be done using a local
`torch.Generator` object (in PyTorch >= 1.6); see the traditional `RandomSampler` as an example.
</Tip>
For more details about the internals, see the [Internals page](package_reference/torch_wrappers).

View File

@ -1,150 +0,0 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Handling big models for inference
One of the biggest advancements 🤗 Accelerate provides is the concept of [large model inference](../concept_guides/big_model_inference) wherein you can perform *inference* on models that cannot fully fit on your graphics card.
This tutorial will be broken down into two parts showcasing how to use both 🤗 Accelerate and 🤗 Transformers (a higher-level API) to make use of this idea.
## Using 🤗 Accelerate
For these tutorials, we'll assume a typical workflow for loading your model that looks like this:
```py
import torch
my_model = ModelClass(...)
state_dict = torch.load(checkpoint_file)
my_model.load_state_dict(state_dict)
```
Note that here we assume that `ModelClass` is a model that takes up more video-card memory than what can fit on your device (be it `mps` or `cuda`).
The first step is to init an empty skeleton of the model which won't take up any RAM using the [`init_empty_weights`] context manager:
```py
from accelerate import init_empty_weights
with init_empty_weights():
my_model = ModelClass(...)
```
With this, `my_model` is currently "parameterless", hence leaving a smaller footprint than what one would normally get by loading it directly onto the CPU.
Next we need to load in the weights to our model so we can perform inference.
For this we will use [`load_checkpoint_and_dispatch`], which as the name implies will load a checkpoint inside your empty model and dispatch the weights for each layer across all the devices you have available (GPU/MPS and CPU RAM).
To determine how this `dispatch` can be performed, generally specifying `device_map="auto"` will be good enough as 🤗 Accelerate
will attempt to fill all the space in your GPU(s), then load the remaining weights onto the CPU, and finally, if there is not enough RAM, onto the disk (the absolute slowest option).
<Tip>
For more details on designing your own device map, see this section of the [concept guide](../concept_guide/big_model_inference#desigining-a-device-map)
</Tip>
See an example below:
```py
from accelerate import load_checkpoint_and_dispatch
model = load_checkpoint_and_dispatch(
model, checkpoint=checkpoint_file, device_map="auto"
)
```
<Tip>
If there are certain "chunks" of layers that shouldn't be split, you can pass them in as `no_split_module_classes`. Read more about it [here](../concept_guides/big_model_inference#loading-weights)
</Tip>
<Tip>
Also to save on memory (such as if the `state_dict` will not fit in RAM), a model's weights can be divided and split into multiple checkpoint files. Read more about it [here](../concept_guides/big_model_inference#sharded-checkpoints)
</Tip>
Now that the model is dispatched fully, you can perform inference as normal with the model:
```py
input = torch.randn(2,3)
input = input.to("cuda")
output = model(input)
```
What will happen now is each time the input gets passed through a layer, it will be sent from the CPU to the GPU (or disk to CPU to GPU), the output is calculated, and then the layer is pulled back off the GPU going back down the line. While this adds some overhead to the inference being performed, through this method it is possible to run **any size model** on your system, as long as the largest layer is capable of fitting on your GPU.
<Tip>
Multiple GPUs can be utilized, however this is considered "model parallelism" and as a result only one GPU will be active at a given moment, waiting for the prior one to send it the output. You should launch your script normally with `python`
and will not need `torchrun`, `accelerate launch`, etc.
</Tip>
For a visual representation of this, check out the animation below:
<Youtube id="MWCSGj9jEAo" />
### Complete Example
Below is the full example showcasing what we performed above:
```py
import torch
from accelerate import init_empty_weights, load_checkpoint_and_dispatch
with init_empty_weights():
model = MyModel(...)
model = load_checkpoint_and_dispatch(
model, checkpoint=checkpoint_file, device_map="auto"
)
input = torch.randn(2,3)
input = input.to("cuda")
output = model(input)
```
## Using 🤗 Transformers, 🤗 Diffusers, and other 🤗 Open Source Libraries
Libraries that support 🤗 Accelerate big model inference include all of the earlier logic in their `from_pretrained` constructors.
These operate by specifying a string representing the model to download from the [🤗 Hub](https://hf.co/models) and then denoting `device_map="auto"` along with a few extra parameters.
As a brief example, we will look at using `transformers` and loading in Big Science's T0pp model.
```py
from transformers import AutoModelForSeq2SeqLM
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", device_map="auto")
```
After loading the model in, the initial steps from before to prepare a model have all been done and the model is fully
ready to make use of all the resources in your machine. Through these constructors, you can also save *more* memory by
specifying the precision the model is loaded into as well, through the `torch_dtype` parameter, such as:
```py
from transformers import AutoModelForSeq2SeqLM
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", device_map="auto", torch_dtype=torch.float16)
```
To learn more about this, check out the 🤗 Transformers documentation available [here](https://huggingface.co/docs/transformers/main/en/main_classes/model#large-model-loading).
## Where to go from here
For a much more detailed look at big model inference, be sure to check out the [Conceptual Guide on it](../concept_guides/big_model_inference)

View File

@ -8,14 +8,11 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Handling big models for inference
When loading a pre-trained model in PyTorch, the usual workflow looks like this:
When loading a pretrained model in PyTorch, the usual workflow looks like this:
```py
import torch
@ -30,11 +27,11 @@ In plain English, those steps are:
2. Load the model weights (in a dictionary usually called a state dict) from the disk
3. Load those weights inside the model
While this works very well for regularly sized models, this workflow has some clear limitations when we deal with a huge model: in step 1, we load a full version of the model in RAM, and spend some time randomly initializing the weights (which will be discarded in step 3). In step 2, we load another full version of the model in RAM, with the pre-trained weights. If you're loading a model with 6 billion parameters, this means you will need 24GB of RAM for each copy of the model, so 48GB in total (half of it to load the model in FP16).
While this works very well for regularly sized models, this workflow has some clear limitations when we deal with a huge model: in step 1, we load a full version of the model in RAM, and spend some time randomly initializing the weights (which will be discarded in step 3). In step 2, we load another full version of the model in RAM, with the pretrained weights. If you're loading a model with 6 billion parameters, this means you will need 24GB of RAM for each copy of the model, so 48GB in total (half of it to load the model in FP16).
<Tip warning={true}>
This API is quite new and still in its experimental stage. While we strive to provide a stable API, it's possible some small parts of the public API will change in the future.
This API is quite new and still in its experimental stage. While we strive to provide a stable API, it's possible some small parts of the public API will change in the future.
</Tip>
@ -46,7 +43,7 @@ This API is quite new and still in its experimental stage. While we strive to pr
### Instantiating an empty model
The first tool 🤗 Accelerate introduces to help with big models is a context manager [`init_empty_weights`] that helps you initialize a model without using any RAM so that step 1 can be done on models of any size. Here is how it works:
The first tool 🤗 Accelerate introduces to help with big models is a context manager [`init_empty_weights`] that helps you initialize a model without using any RAM, so that step 1 can be done on models of any size. Here is how it works:
```py
from accelerate import init_empty_weights
@ -62,7 +59,7 @@ with init_empty_weights():
model = nn.Sequential(*[nn.Linear(10000, 10000) for _ in range(1000)])
```
initializes an empty model with a bit more than 100B parameters. Behind the scenes, this relies on the meta device introduced in PyTorch 1.9. During the initialization under the context manager, each time a parameter is created, it is instantly moved to that device.
initializes an empty model with a bit more than 100B parameters. Behind the scenes, this relies on the meta device introduced in PyTorch 1.9. During the initialization under the context manager, each time a parameter is created, it is instantly moved on that device.
<Tip warning={true}>
@ -72,9 +69,9 @@ initializes an empty model with a bit more than 100B parameters. Behind the scen
### Sharded checkpoints
It's possible your model is so big that even a single copy won't fit in RAM. That doesn't mean it can't be loaded: if you have one or several GPUs, this is more memory available to store your model. In this case, it's better if your checkpoint is split into several smaller files that we call checkpoint shards.
It's possible your model is so big that even a single copy won't fit in RAM. That doesn't mean it can't be loaded: if you have one or several GPUs, this is more memory available to store your model. In this case, it's better if your checkpoint is split in several smaller files that we call checkpoint shards.
🤗 Accelerate will handle sharded checkpoints as long as you follow the following format: your checkpoint should be in a folder, with several files containing the partial state dicts, and there should be an index in the JSON format that contains a dictionary mapping parameter names to the file containing their weights. You can easily shard your model with [`~Accelerator.save_model`]. For instance, we could have a folder containing:
🤗 Accelerate will handle sharded checkpoints as long as you follow the following format: your checkpoint should be in a folder, with several files containing the partial state dicts, and there should be an index in the JSON format that contains a dictionary mapping parameter names to the file containing their weights. For instance we could have a folder containing:
```bash
first_state_dict.bin
@ -99,65 +96,52 @@ and `first_state_dict.bin` containing the weights for `"linear1.weight"` and `"l
The second tool 🤗 Accelerate introduces is a function [`load_checkpoint_and_dispatch`], that will allow you to load a checkpoint inside your empty model. This supports full checkpoints (a single file containing the whole state dict) as well as sharded checkpoints. It will also automatically dispatch those weights across the devices you have available (GPUs, CPU RAM), so if you are loading a sharded checkpoint, the maximum RAM usage will be the size of the biggest shard.
If you want to use big model inference with 🤗 Transformers models, check out this [documentation](https://huggingface.co/docs/transformers/main/en/main_classes/model#large-model-loading).
Here is how we can use this to load the [GPT2-1.5B](https://huggingface.co/marcsun13/gpt2-xl-linear-sharded) model.
Let's download the sharded version of this model.
Here is how we can use this to load the [GPT-J-6B](https://huggingface.co/EleutherAI/gpt-j-6B) model. You clone the sharded version of this model with:
```bash
pip install huggingface_hub
git clone https://huggingface.co/sgugger/sharded-gpt-j-6B
cd sharded-gpt-j-6B
git-lfs install
git lfs pull
```
```py
from huggingface_hub import snapshot_download
checkpoint = "marcsun13/gpt2-xl-linear-sharded"
weights_location = snapshot_download(repo_id=checkpoint)
```
In order to initialize the model, we will use the library minGPT.
```bash
git clone https://github.com/karpathy/minGPT.git
pip install minGPT/
```
then we can initialize the model with
```py
from accelerate import init_empty_weights
from mingpt.model import GPT
from transformers import AutoConfig, AutoModelForCausalLM
model_config = GPT.get_default_config()
model_config.model_type = 'gpt2-xl'
model_config.vocab_size = 50257
model_config.block_size = 1024
checkpoint = "EleutherAI/gpt-j-6B"
config = AutoConfig.from_pretrained(checkpoint)
with init_empty_weights():
model = GPT(model_config)
model = AutoModelForCausalLM.from_config(config)
```
Then, load the checkpoint we just downloaded with:
Note that loading the model with `from_config` in Transformers does not tie the weights, which may cause issues when
loading a checkpoint that does not contain duplicate keys for the tied weights. So you should tie the weights before
loading the checkpoint.
```py
model.tie_weights()
```
Then load the checkpoint we just downloaded with:
```py
from accelerate import load_checkpoint_and_dispatch
model = load_checkpoint_and_dispatch(
model, checkpoint=weights_location, device_map="auto", no_split_module_classes=['Block']
model, "sharded-gpt-j-6B", device_map="auto", no_split_module_classes=["GPTJBlock"]
)
```
By passing `device_map="auto"`, we tell 🤗 Accelerate to determine automatically where to put each layer of the model depending on the available resources:
- first, we use the maximum space available on the GPU(s)
- first we use the maximum space available on the GPU(s)
- if we still need space, we store the remaining weights on the CPU
- if there is not enough RAM, we store the remaining weights on the hard drive as memory-mapped tensors
#### `no_split_module_classes`
This parameter will indicate that some of the modules with the name `"Block"` should not be split across different devices. You should set here all blocks that
include a residual connection of some kind.
#### The `device_map`
`no_split_module_classes=["GPTJBlock"]` indicates that the modules that are `GPTJBlock` should not be split on different devices. You should set here all blocks that include a residual connection of some kind.
You can see the `device_map` that 🤗 Accelerate picked by accessing the `hf_device_map` attribute of your model:
@ -167,34 +151,43 @@ model.hf_device_map
```python out
{'transformer.wte': 0,
'transformer.wpe': 0,
'transformer.drop': 0,
'transformer.h.0': 0,
...
'transformer.h.21': 0,
'transformer.h.22': 1,
'transformer.h.23': 1,
'transformer.h.1': 0,
'transformer.h.2': 0,
'transformer.h.3': 0,
'transformer.h.4': 0,
'transformer.h.5': 0,
'transformer.h.6': 0,
'transformer.h.7': 0,
'transformer.h.8': 0,
'transformer.h.9': 0,
'transformer.h.10': 0,
'transformer.h.11': 0,
'transformer.h.12': 0,
'transformer.h.13': 0,
'transformer.h.14': 0,
'transformer.h.15': 0,
'transformer.h.16': 0,
'transformer.h.17': 0,
'transformer.h.18': 0,
'transformer.h.19': 0,
'transformer.h.20': 0,
'transformer.h.21': 0,
'transformer.h.22': 0,
'transformer.h.23': 0,
'transformer.h.24': 1,
...
'transformer.h.47': 1,
'transformer.ln_f': 1,
'transformer.h.25': 1,
'transformer.h.26': 1,
'transformer.h.27': 1,
'transformer.ln_f': 1,
'lm_head': 1}
```
It's fully possible to create your own device map for the layers to use as well, specifying the GPU device to use (a number), `"cpu"`, or `"disk"` and pass this in:
```python
device_map = {
"transformer.wte": "cpu",
"transformer.wpe": 0,
"transformer.drop": "cpu",
"transformer.h.0": "disk"
}
model = load_checkpoint_and_dispatch(
model, checkpoint=weights_location, device_map=device_map
)
You can also design your `device_map` yourself, if you prefer to explicitly decide where each layer should be. In this case, the command above becomes:
```py
model = load_checkpoint_and_dispatch(model, "sharded-gpt-j-6B", device_map=my_device_map)
```
### Run the model
@ -202,30 +195,31 @@ model = load_checkpoint_and_dispatch(
Now that we have done this, our model lies across several devices, and maybe the hard drive. But it can still be used as a regular PyTorch model:
```py
from mingpt.bpe import BPETokenizer
tokenizer = BPETokenizer()
inputs = tokenizer("Hello, my name is").to(0)
from transformers import AutoTokenizer
outputs = model.generate(inputs, max_new_tokens=10, do_sample=False)[0]
tokenizer.decode(outputs.cpu().squeeze())
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
inputs = tokenizer("Hello, my name is", return_tensors="pt")
inputs = inputs.to(0)
output = model.generate(inputs["input_ids"])
tokenizer.decode(output[0].tolist())
```
Behind the scenes, 🤗 Accelerate added hooks to the model, so that:
- at each layer, the inputs are put on the right device (so even if your model is spread across several GPUs, it works)
- for the weights offloaded on the CPU, they are put on a GPU just before the forward pass and cleaned up just after
- for the weights offloaded on the hard drive, they are loaded in RAM then put on a GPU just before the forward pass and cleaned up just after
- for the weights offloaded on the CPU, they are put on a GPU just before the forward pass, and cleaned up just after
- for the weights offloaded on the hard drive, they are loaded in RAM then put on a GPU just before the forward pass, and cleaned up just after
This way, your model can run for inference even if it doesn't fit on one of the GPUs or the CPU RAM!
This way, your model can run for inference even if it doesn't fit on one of the GPUs or the CPU RAM!
<Tip warning={true}>
This only supports the inference of your model, not training. Most of the computation happens behind `torch.no_grad()` context managers to avoid spending some GPU memory with intermediate activations.
This only supports inference of your model, not training. Most of the computation happens behind `torch.no_grad()` context managers to avoid spending some GPU memory with intermediate activations.
</Tip>
### Designing a device map
You can let 🤗 Accelerate handle the device map computation by setting `device_map` to one of the supported options (`"auto"`, `"balanced"`, `"balanced_low_0"`, `"sequential"`) or create one yourself if you want more control over where each layer should go.
You can let 🤗 Accelerate handle the device map computation by setting `device_map` to one of the supported options (`"auto"`, `"balanced"`, `"balanced_low_0"`, `"sequential"`) or create one yourself, if you want more control over where each layer should go.
<Tip>
@ -235,7 +229,7 @@ You can let 🤗 Accelerate handle the device map computation by setting `device
All the options will produce the same result when you don't have enough GPU memory to accommodate the whole model (which is to fit everything that can on the GPU, then offload weights on the CPU or even on the disk if there is not enough RAM).
When you have more GPU memory available than the model size, here is the difference between each option:
When you have more GPU memory available than the model size, here is the difference between each option:
- `"auto"` and `"balanced"` evenly split the model on all available GPUs, making it possible for you to use a batch size greater than 1.
- `"balanced_low_0"` evenly splits the model on all GPUs except the first one, and only puts on GPU 0 what does not fit on the others. This option is great when you need to use GPU 0 for some processing of the outputs, like when using the `generate` function for Transformers models
- `"sequential"` will fit what it can on GPU 0, then move on GPU 1 and so forth (so won't use the last GPUs if it doesn't need to).
@ -246,9 +240,9 @@ When you have more GPU memory available than the model size, here is the differe
</Tip>
First note that you can limit the memory used on each GPU by using the `max_memory` argument (available in [`infer_auto_device_map`] and in all functions using it). When setting `max_memory`, you should pass along a dictionary containing the GPU identifiers (for instance `0`, `1` etc.) and the `"cpu"` key for the maximum RAM you want to use for CPU offload. The values can either be an integer (in bytes) or a string representing a number with its unit, such as `"10GiB"` or `"10GB"`.
First note that you can limit the memory used on each GPU by using the `max_memory` argument (available in [`infer_auto_device_map`] and in all functions using it). When setting `max_memory`, you should pass along a dictionary containing the GPU identifiers (for instance `0`, `1` etc.) and the `"cpu"` key for the maximum RAM you want used for CPU offload. The values can either be an integer (in bytes) or a string representing a number with its unit, such as `"10GiB"` or `"10GB"`.
Here is an example where we don't want to use more than 10GiB on each of the two GPUs and no more than 30GiB of CPU RAM for the model weights:
Here is an example where we don't want to use more than 10GiB on each of two GPUs and no more than 30GiB of CPU RAM for the model weights:
```python
from accelerate import infer_auto_device_map
@ -260,18 +254,18 @@ device_map = infer_auto_device_map(my_model, max_memory={0: "10GiB", 1: "10GiB",
When a first allocation happens in PyTorch, it loads CUDA kernels which take about 1-2GB of memory depending on the GPU. Therefore you always have less usable memory than the actual size of the GPU. To see how much memory is actually used do `torch.ones(1).cuda()` and look at the memory usage.
Therefore when you create memory maps with `max_memory` make sure to adjust the available memory accordingly to avoid out-of-memory errors.
Therefore when you create memory maps with `max_memory` make sure to adjust the available memory accordingly to avoid out-of-memory errors.
</Tip>
Additionally, if you do some additional operations with your outputs without placing them back on the CPU (for instance inside the `generate` method of Transformers) and if you placed your inputs on a GPU, that GPU will consume more memory than the others (Accelerate always places the output back to the device of the input). Therefore if you would like to optimize the maximum batch size and you have many GPUs, give the first GPU less memory. For example, with BLOOM-176B on 8x80 A100 setup, the close-to-ideal map is:
Additionally, if you do some additional operations with your outputs without placing them back on the CPU (for instance inside the `generate` method of Transformers) and if you placed your inputs on a GPU, that GPU will consume more memory than the others (Accelerate always places the output back to the device of the input). Therefore if you would like to optimize the maximum batch size and you have many GPUs, give the first GPU less memory. For example, with BLOOM-176B on 8x80 A100 setup the close to ideal map is:
```python
max_memory = {0: "30GIB", 1: "46GIB", 2: "46GIB", 3: "46GIB", 4: "46GIB", 5: "46GIB", 6: "46GIB", 7: "46GIB"}
```
as you can see we gave the remaining 7 GPUs ~50% more memory than GPU 0.
If you opt to fully design the `device_map` yourself, it should be a dictionary with keys being module names of your model and values being a valid device identifier (for instance an integer for the GPUs) or `"cpu"` for CPU offload, `"disk"` for disk offload. The keys need to cover the whole model, you can then define your device map as you wish: for instance, if your model has two blocks (let's say `block1` and `block2`) which each contain three linear layers (let's say `linear1`, `linear2` and `linear3`), a valid device map can be:
If you opt to fully design the `device_map` yourself, it should be a dictionary with keys being module names of your model and values being a valid device identifier (for instance an integer for the GPUs) or `"cpu"` for CPU offload, `"disk"` for disk offload. The keys need to cover the whole model, you can then define your device map as you wish: for instance if your model has two blocks (let's say `block1` and `block2`) which each contain three linear layers (let's say `linear1`, `linear2` and `linear3`), a valid device map can be:
```python
device_map = {"block1": 0, "block2": 1}
@ -300,7 +294,7 @@ device_map = {"block1": 0, "block2.linear1": 1, "block2.linear2": 1}
We are aware of the current limitations in the API:
- While this could theoretically work on just one CPU with potential disk offload, you need at least one GPU to run this API. This will be fixed in further development.
- [`infer_auto_device_map`] (or `device_map="auto"` in [`load_checkpoint_and_dispatch`]) tries to maximize GPU and CPU RAM it sees available when you execute it. While PyTorch is very good at managing GPU RAM efficiently (and giving it back when not needed), it's not entirely true with Python and CPU RAM. Therefore, an automatically computed device map might be too intense on the CPU. Move a few modules to the disk device if you get crashes due to a lack of RAM.
- [`infer_auto_device_map`] (or `device_map="auto"` in [`load_checkpoint_and_dispatch`]) tries to maximize GPU and CPU RAM it sees available when you execute it. While PyTorch is very good at managing GPU RAM efficiently (and giving it back when not needed), it's not entirely true with Python and CPU RAM. Therefore, an automatically computed device map might be too intense on the CPU. Move a few modules to the disk device if you get crashes due to lack of RAM.
- [`infer_auto_device_map`] (or `device_map="auto"` in [`load_checkpoint_and_dispatch`]) attributes devices sequentially (to avoid moving things back and forth) so if your first layer is bigger than the size of the GPU you have, it will end up with everything on the CPU/Disk.
- [`load_checkpoint_and_dispatch`] and [`load_checkpoint_in_model`] do not perform any check on the correctness of your state dict compared to your model at the moment (this will be fixed in a future version), so you may get some weird errors if trying to load a checkpoint with mismatched or missing keys.
- The model parallelism used when your model is split on several GPUs is naive and not optimized, meaning that only one GPU works at a given time and the other sits idle.

View File

@ -8,9 +8,6 @@ http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Checkpointing
@ -20,7 +17,7 @@ saving and loading the model, optimizer, RNG generators, and the GradScaler. Ins
- Use [`~Accelerator.save_state`] for saving everything mentioned above to a folder location
- Use [`~Accelerator.load_state`] for loading everything stored from an earlier `save_state`
To further customize where and how states are saved through [`~Accelerator.save_state`], the [`~utils.ProjectConfiguration`] class can be used. For example
To further customize where and how states saved through [`~Accelerator.save_state`] the [`~utils.ProjectConfiguration`] class can be used. For example
if `automatic_checkpoint_naming` is enabled each saved checkpoint will be located then at `Accelerator.project_dir/checkpoints/checkpoint_{checkpoint_number}`.
It should be noted that the expectation is that those states come from the same training script; they should not be from two separate scripts.
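As a minimal sketch of that setup (the save path here is illustrative):

```python
from accelerate import Accelerator
from accelerate.utils import ProjectConfiguration

# With `automatic_checkpoint_naming` enabled, each `save_state()` call writes to
# `project_dir/checkpoints/checkpoint_{checkpoint_number}`
config = ProjectConfiguration(project_dir="my/save/path", automatic_checkpoint_naming=True)
accelerator = Accelerator(project_config=config)

accelerator.save_state()  # -> my/save/path/checkpoints/checkpoint_0
```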
@ -62,13 +59,13 @@ for epoch in range(num_epochs):
    my_optimizer.step()
    my_scheduler.step()

# Restore the previous state
accelerator.load_state("my/save/path/checkpointing/checkpoint_0")
```
## Restoring the state of the DataLoader
After resuming from a checkpoint, it may also be desirable to resume from a particular point in the active `DataLoader` if
the state was saved during the middle of an epoch. You can use [`~Accelerator.skip_first_batches`] to do so.
@ -93,4 +90,4 @@ for batch in skipped_dataloader:
```python
for batch in train_dataloader:
    # Do something
    pass
```
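Putting it together, a minimal sketch of resuming mid-epoch (assuming the objects were prepared with `accelerator.prepare` and two batches had already been consumed when the state was saved):

```python
accelerator.load_state("my/save/path/checkpointing/checkpoint_0")

# First epoch after resuming: skip the batches that were already seen
skipped_dataloader = accelerator.skip_first_batches(train_dataloader, num_batches=2)
for batch in skipped_dataloader:
    # Do something
    pass

# Subsequent epochs: iterate over the full dataloader again
for batch in train_dataloader:
    # Do something
    pass
```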
@ -1,93 +0,0 @@
# Debugging Distributed Operations
When running scripts in a distributed fashion, functions such as [`Accelerator.gather`] and [`Accelerator.reduce`] (and others) are often necessary to grab tensors across devices and perform certain operations on them. However, if the tensors being grabbed are not the proper shapes, your code will hang forever. The only sign that this is truly happening is hitting a timeout exception from `torch.distributed`, and this can get quite costly as the timeout is usually 10 minutes.
Accelerate now has a `debug` mode which adds a negligible amount of time to each operation, but allows it to verify that the inputs you are bringing in can *actually* perform the operation you want **without** hitting this timeout problem!
## Visualizing the problem
To have a tangible example of this issue, let's take the following setup (on 2 GPUs):
```python
import torch

from accelerate import PartialState
from accelerate.utils import broadcast

state = PartialState()
if state.process_index == 0:
    tensor = torch.tensor([[0.0, 1, 2, 3, 4]]).to(state.device)
else:
    tensor = torch.tensor([[[0.0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]]).to(state.device)

broadcast_tensor = broadcast(tensor)
print(broadcast_tensor)
```
We've created a single tensor on each device, with two radically different shapes. With this setup, if we want to perform an operation such as [`utils.broadcast`], we would forever hit a timeout, because `torch.distributed` requires that these operations have the **exact same shape** across all processes to work.
If you run this yourself, you will find that `broadcast_tensor` can be printed on the main process, but its results won't quite be right, and then it will simply hang, never printing on any of the other processes:
```
>>> tensor([[0, 1, 2, 3, 4]], device='cuda:0')
```
## The solution
By enabling Accelerate's operational debug mode, Accelerate will properly find and catch errors such as this and provide a very clear traceback immediately:
```
Traceback (most recent call last):
  File "/home/zach_mueller_huggingface_co/test.py", line 18, in <module>
    main()
  File "/home/zach_mueller_huggingface_co/test.py", line 15, in main
    broadcast_tensor = broadcast(tensor)
  File "/home/zach_mueller_huggingface_co/accelerate/src/accelerate/utils/operations.py", line 303, in wrapper
accelerate.utils.operations.DistributedOperationException: Cannot apply desired operation due to shape mismatches. All shapes across devices must be valid.

Operation: `accelerate.utils.operations.broadcast`
Input shapes:
  - Process 0: [1, 5]
  - Process 1: [1, 2, 5]
```
This explains that the shapes across our devices were *not* the same, and that we should ensure that they match properly to be compatible. Typically this means that there is either an extra dimension, or certain dimensions are incompatible with the operation.
To enable this, please do one of the following:
Enable it through the questionnaire during `accelerate config` (recommended)
From the CLI:
```
accelerate launch --debug {my_script.py} --arg1 --arg2
```
As an environment variable (which avoids the need for `accelerate config`):
```
ACCELERATE_DEBUG_MODE="1" accelerate launch {my_script.py} --arg1 --arg2
```
Manually changing the `config.yaml` file:
```diff
compute_environment: LOCAL_MACHINE
+debug: true
```
@ -8,9 +8,6 @@ http://www.apache.org/licenses/LICENSE-2.0
# DeepSpeed
@ -585,10 +582,8 @@ Mixed precision type: fp16
ds_config: {'bf16': {'enabled': False}, 'zero_optimization': {'stage': 3, 'stage3_gather_16bit_weights_on_model_save': True, 'offload_optimizer': {'device': 'nvme'}, 'offload_param': {'device': 'cpu'}}, 'gradient_clipping': 1.0, 'train_batch_size': 'auto', 'train_micro_batch_size_per_gpu': 'auto', 'gradient_accumulation_steps': 5, 'steps_per_print': inf, 'fp16': {'enabled': True, 'auto_cast': True}}
```
**Note**:
1. Remaining `"auto"` values are handled in the `accelerator.prepare()` call as explained in point 2 of
`Important code changes when using DeepSpeed Config File`.
2. Only when `gradient_accumulation_steps` is `auto` will the value passed while creating the `Accelerator` object via `Accelerator(gradient_accumulation_steps=k)` be used. When using the DeepSpeed Plugin, the value from it will be used and it will overwrite the value passed while creating the `Accelerator` object.
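For reference, a sketch of the kind of config entries these notes refer to, mirroring the `ds_config` dump above (values are illustrative; the `"auto"` fields are the ones filled in during `accelerator.prepare()`):

```python
ds_config = {
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
    "gradient_clipping": 1.0,
    "zero_optimization": {"stage": 3},
}
```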
## Saving and loading
@ -8,9 +8,6 @@ http://www.apache.org/licenses/LICENSE-2.0
# Distributed Inference with 🤗 Accelerate
@ -120,7 +117,7 @@ needs to be the same length. Basic inference does not require this.
For instance:
```python
import torch

from accelerate import PartialState  # Can also be Accelerator or AcceleratorState
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
```
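Building on this, a short sketch of how a list of inputs is typically split across processes with `split_between_processes` (the two prompts and the output filenames here are hypothetical; assumes the script is launched on 2 GPUs):

```python
distributed_state = PartialState()
pipe.to(distributed_state.device)

# Each process receives its own slice of the list
with distributed_state.split_between_processes(["a dog", "a cat"]) as prompt:
    result = pipe(prompt).images[0]
    result.save(f"result_{distributed_state.process_index}.png")
```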
@ -8,9 +8,6 @@ http://www.apache.org/licenses/LICENSE-2.0
# Learning how to incorporate 🤗 Accelerate features quickly!
@ -37,14 +34,14 @@ for batch in dataloader:
<div class="block dark:hidden">
<iframe
src="https://hf-accelerate-accelerate-examples.hf.space?__theme=light"
src="https://muellerzr-accelerate-examples.hf.space?__theme=light"
width="850"
height="1600"
></iframe>
</div>
<div class="hidden dark:block">
<iframe
src="https://hf-accelerate-accelerate-examples.hf.space?__theme=dark"
src="https://muellerzr-accelerate-examples.hf.space?__theme=dark"
width="850"
height="1600"
></iframe>
@ -8,9 +8,6 @@ http://www.apache.org/licenses/LICENSE-2.0
# Fully Sharded Data Parallel
@ -49,7 +46,7 @@ fsdp_config:
fsdp_offload_params: false
fsdp_sharding_strategy: 1
fsdp_state_dict_type: FULL_STATE_DICT
fsdp_transformer_layer_cls_to_wrap: BertLayer
machine_rank: 0
main_process_ip: null
main_process_port: null
@ -67,86 +64,19 @@ accelerate launch examples/nlp_example.py
Currently, `Accelerate` supports the following config through the CLI:
```bash
`Sharding Strategy`: [1] FULL_SHARD (shards optimizer states, gradients and parameters), [2] SHARD_GRAD_OP (shards optimizer states and gradients), [3] NO_SHARD (DDP), [4] HYBRID_SHARD (shards optimizer states, gradients and parameters within each node while each node has full copy), [5] HYBRID_SHARD_ZERO2 (shards optimizer states and gradients within each node while each node has full copy)
`Offload Params`: Decides Whether to offload parameters and gradients to CPU
`Auto Wrap Policy`: [1] TRANSFORMER_BASED_WRAP, [2] SIZE_BASED_WRAP, [3] NO_WRAP
`Transformer Layer Class to Wrap`: When using `TRANSFORMER_BASED_WRAP`, user specifies a comma-separated string of transformer layer class names (case-sensitive) to wrap, e.g.,
`BertLayer`, `GPTJBlock`, `T5Block`, `BertLayer,BertEmbeddings,BertSelfOutput`...
This is important because submodules that share weights (e.g., embedding layer) should not end up in different FSDP wrapped units.
Using this policy, wrapping happens for each block containing Multi-Head Attention followed by a couple of MLP layers.
Remaining layers, including the shared embeddings, are conveniently wrapped in the same outermost FSDP unit.
Therefore, use this for transformer-based models.
You can use the `model._no_split_modules` for 🤗 Transformers models by answering `yes` to
`Do you want to use the model's `_no_split_modules` to wrap. Only applicable for 🤗 Transformers`.
It will try to use `model._no_split_modules` when available.
`Min Num Params`: minimum number of parameters when using `SIZE_BASED_WRAP`
`Backward Prefetch`: [1] BACKWARD_PRE, [2] BACKWARD_POST, [3] NO_PREFETCH
`State Dict Type`: [1] FULL_STATE_DICT, [2] LOCAL_STATE_DICT, [3] SHARDED_STATE_DICT
`Forward Prefetch`: if True, then FSDP explicitly prefetches the next upcoming
all-gather while executing in the forward pass. Only use with static graphs.
`Use Orig Params`: If True, allows non-uniform `requires_grad` during init, which means support for interspersed frozen and trainable parameters.
Useful in cases such as parameter-efficient fine-tuning.
Please refer to this [blog](https://dev-discuss.pytorch.org/t/rethinking-pytorch-fully-sharded-data-parallel-fsdp-from-first-principles/1019)
`Sync Module States`: If True, each individually wrapped FSDP unit will broadcast module parameters from rank 0
```
For additional and more nuanced control, you can specify other FSDP parameters via `FullyShardedDataParallelPlugin`.
When creating the `FullyShardedDataParallelPlugin` object, pass it the parameters that weren't part of the accelerate config or that you want to override.
The FSDP parameters will be picked based on the accelerate config file or launch command arguments; any parameters that you pass directly through the `FullyShardedDataParallelPlugin` object will set/override those.
Below is an example:
```py
from accelerate import Accelerator, FullyShardedDataParallelPlugin
from torch.distributed.fsdp.fully_sharded_data_parallel import FullOptimStateDictConfig, FullStateDictConfig

fsdp_plugin = FullyShardedDataParallelPlugin(
    state_dict_config=FullStateDictConfig(offload_to_cpu=False, rank0_only=False),
    optim_state_dict_config=FullOptimStateDictConfig(offload_to_cpu=False, rank0_only=False),
)

accelerator = Accelerator(fsdp_plugin=fsdp_plugin)
```
## Saving and loading
The new recommended way of checkpointing when using FSDP models is to use `SHARDED_STATE_DICT` as `StateDictType` when setting up the accelerate config.
Below is the code snippet to save using the `save_state` utility of Accelerate.
```py
accelerator.save_state("ckpt")
```
Inspect the checkpoint folder to see the model and optimizer as shards per process:
```
ls ckpt
# optimizer_0 pytorch_model_0 random_states_0.pkl random_states_1.pkl scheduler.bin
cd ckpt
ls optimizer_0
# __0_0.distcp __1_0.distcp
ls pytorch_model_0
# __0_0.distcp __1_0.distcp
```
To load them back to resume training, use the `load_state` utility of Accelerate:
```py
accelerator.load_state("ckpt")
```
When using transformers `save_pretrained`, pass `state_dict=accelerator.get_state_dict(model)` to save the model state dict.
Below is an example:
```diff
  unwrapped_model = accelerator.unwrap_model(model)
  unwrapped_model.save_pretrained(
      args.output_dir,
      is_main_process=accelerator.is_main_process,
      save_function=accelerator.save,
+     state_dict=accelerator.get_state_dict(model),
  )
```
@ -8,9 +8,6 @@ http://www.apache.org/licenses/LICENSE-2.0
# Performing gradient accumulation with 🤗 Accelerate
@ -127,11 +124,6 @@ training on. 🤗 Accelerate automagically does this for you by default. Behind
Below is the finished implementation for performing gradient accumulation with 🤗 Accelerate
```python
from accelerate import Accelerator

accelerator = Accelerator(gradient_accumulation_steps=2)
model, optimizer, training_dataloader, scheduler = accelerator.prepare(
    model, optimizer, training_dataloader, scheduler
)

for batch in training_dataloader:
    with accelerator.accumulate(model):
        inputs, targets = batch
        outputs = model(inputs)
        loss = loss_function(outputs, targets)
        accelerator.backward(loss)
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()
```
<Tip warning={true}>
It's important that **only one forward/backward pass** is done inside the context manager `with accelerator.accumulate(model)`.
</Tip>
To learn more about what magic this wraps around, read the [Gradient Synchronization concept guide](../concept_guides/gradient_synchronization)
## Self-contained example
Here is a self-contained example that you can run to see gradient accumulation in action with 🤗 Accelerate:
```python
import torch
import copy

from accelerate import Accelerator
from accelerate.utils import set_seed
from torch.utils.data import TensorDataset, DataLoader

# seed
set_seed(0)

# define toy inputs and labels
x = torch.tensor([1., 2., 3., 4., 5., 6., 7., 8.])
y = torch.tensor([2., 4., 6., 8., 10., 12., 14., 16.])
gradient_accumulation_steps = 4
batch_size = len(x) // gradient_accumulation_steps

# define dataset and dataloader
dataset = TensorDataset(x, y)
dataloader = DataLoader(dataset, batch_size=batch_size)

# define model, optimizer and loss function
model = torch.zeros((1, 1), requires_grad=True)
model_clone = copy.deepcopy(model)
criterion = torch.nn.MSELoss()
model_optimizer = torch.optim.SGD([model], lr=0.02)
accelerator = Accelerator(gradient_accumulation_steps=gradient_accumulation_steps)
model, model_optimizer, dataloader = accelerator.prepare(model, model_optimizer, dataloader)
model_clone_optimizer = torch.optim.SGD([model_clone], lr=0.02)

print(f"initial model weight is {model.mean().item():.5f}")
print(f"initial model weight is {model_clone.mean().item():.5f}")

for i, (inputs, labels) in enumerate(dataloader):
    with accelerator.accumulate(model):
        inputs = inputs.view(-1, 1)
        print(i, inputs.flatten())
        labels = labels.view(-1, 1)
        outputs = inputs @ model
        loss = criterion(outputs, labels)
        accelerator.backward(loss)
        model_optimizer.step()
        model_optimizer.zero_grad()

loss = criterion(x.view(-1, 1) @ model_clone, y.view(-1, 1))
model_clone_optimizer.zero_grad()
loss.backward()
model_clone_optimizer.step()

print(f"w/ accumulation, the final model weight is {model.mean().item():.5f}")
print(f"w/o accumulation, the final model weight is {model_clone.mean().item():.5f}")
```
```
initial model weight is 0.00000
initial model weight is 0.00000
0 tensor([1., 2.])
1 tensor([3., 4.])
2 tensor([5., 6.])
3 tensor([7., 8.])
w/ accumulation, the final model weight is 2.04000
w/o accumulation, the final model weight is 2.04000
```
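Both runs end at the same weight (2.04000), confirming that the four accumulated micro-batches of size 2 produce the same update as the single full-batch step taken on `model_clone`.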
@ -8,9 +8,6 @@ http://www.apache.org/licenses/LICENSE-2.0
# Intel® Extension for PyTorch
@ -8,9 +8,6 @@ http://www.apache.org/licenses/LICENSE-2.0
# Using Local SGD with 🤗 Accelerate
@ -8,9 +8,6 @@ http://www.apache.org/licenses/LICENSE-2.0
# Memory Utilities
@ -1,121 +0,0 @@
# Understanding how big of a model can fit on your machine
One very difficult aspect when exploring potential models to use on your machine is knowing just how big of a model will *fit* into memory with your current graphics card (such as loading the model onto CUDA).
To help alleviate this, 🤗 Accelerate has a CLI interface through `accelerate estimate-memory`. This tutorial will
help walk you through using it, what to expect, and at the end link to the interactive demo hosted on the 🤗 Hub which will
even let you post those results directly on the model repo!
Currently we support searching for models that can be used in `timm` and `transformers`.
<Tip>
This API will load the model into memory on the `meta` device, so we are not actually downloading
and loading the full weights of the model into memory, nor do we need to. As a result it's
perfectly fine to measure 8 billion parameter models (or more), without having to worry about
whether your CPU can handle it!
</Tip>
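To make the mechanism concrete, here is a tiny sketch of what loading on the `meta` device looks like (a toy `torch.nn` model, not the CLI's internals):

```python
import torch.nn as nn

from accelerate import init_empty_weights

# No real memory is allocated for the weights inside this context
with init_empty_weights():
    model = nn.Sequential(*[nn.Linear(10_000, 10_000) for _ in range(10)])

print(model[0].weight.device)  # device(type='meta')
```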
## The Command
When using `accelerate estimate-memory`, you need to pass in the name of the model you want to use, potentially the framework
that model utilizes (if it can't be found automatically), and the data types you want the model to be loaded in with.
For example, here is how we can calculate the memory footprint for `bert-base-cased`:
```bash
accelerate estimate-memory bert-base-cased
```
This will download the `config.json` for `bert-base-cased`, load the model on the `meta` device, and report back how much space
it will use:
Memory Usage for loading `bert-base-cased`:
| dtype | Largest Layer | Total Size | Training using Adam |
|---------|---------------|------------|---------------------|
| float32 | 84.95 MB | 418.18 MB | 1.61 GB |
| float16 | 42.47 MB | 206.59 MB | 826.36 MB |
| int8 | 21.24 MB | 103.29 MB | 413.18 MB |
| int4 | 10.62 MB | 51.65 MB | 206.59 MB |
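As a rule of thumb, the `Training using Adam` column is 4x the `Total Size`, accounting for the gradients and the two Adam optimizer states kept alongside the weights.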
By default it will return all the supported dtypes (`int4` through `float32`), but if you are interested in specific ones these can be filtered.
### Specific libraries
If the source library cannot be determined automatically (like it could in the case of `bert-base-cased`), a library name can
be passed in.
```bash
accelerate estimate-memory HuggingFaceM4/idefics-80b-instruct --library_name transformers
```
Memory Usage for loading `HuggingFaceM4/idefics-80b-instruct`:
| dtype | Largest Layer | Total Size | Training using Adam |
|---------|---------------|------------|---------------------|
| float32 | 3.02 GB | 297.12 GB | 1.16 TB |
| float16 | 1.51 GB | 148.56 GB | 594.24 GB |
| int8 | 772.52 MB | 74.28 GB | 297.12 GB |
| int4 | 386.26 MB | 37.14 GB | 148.56 GB |
```bash
accelerate estimate-memory timm/resnet50.a1_in1k --library_name timm
```
Memory Usage for loading `timm/resnet50.a1_in1k`:
| dtype | Largest Layer | Total Size | Training using Adam |
|---------|---------------|------------|---------------------|
| float32 | 9.0 MB | 97.7 MB | 390.78 MB |
| float16 | 4.5 MB | 48.85 MB | 195.39 MB |
| int8 | 2.25 MB | 24.42 MB | 97.7 MB |
| int4 | 1.12 MB | 12.21 MB | 48.85 MB |
### Specific dtypes
As mentioned earlier, while we return `int4` through `float32` by default, any dtype can be used from `float32`, `float16`, `int8`, and `int4`.
To do so, pass them in after specifying `--dtypes`:
```bash
accelerate estimate-memory bert-base-cased --dtypes float32 float16
```
Memory Usage for loading `bert-base-cased`:
| dtype | Largest Layer | Total Size | Training using Adam |
|---------|---------------|------------|---------------------|
| float32 | 84.95 MB | 413.18 MB | 1.61 GB |
| float16 | 42.47 MB | 206.59 MB | 826.36 MB |
## Caveats with this calculator
This calculator will tell you how much memory is needed to purely load the model in, *not* to perform inference.
This calculation is accurate within a few % of the actual value, so it is a very good view of just how much memory it will take. For instance loading `bert-base-cased` actually takes `413.68 MB` when loaded on CUDA in full precision, and the calculator estimates `413.18 MB`.
When performing inference you can expect to add up to an additional 20% as found by [EleutherAI](https://blog.eleuther.ai/transformer-math/). We'll be conducting research into finding a more accurate estimate of these values, and will update
this calculator once done.
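For example, for `bert-base-cased` in full precision that would mean budgeting roughly 413 MB x 1.2 ≈ 496 MB for inference.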
## Live Gradio Demo
Lastly, we invite you to try the [live Gradio demo](https://huggingface.co/spaces/hf-accelerate/model-memory-usage) of this utility,
which includes an option to post a discussion thread on a model's repository with this data. Doing so will help provide access to these numbers in the community faster and help users know what you've learned!
@ -8,9 +8,6 @@ http://www.apache.org/licenses/LICENSE-2.0
# Accelerated PyTorch Training on Mac
@ -1,136 +0,0 @@
# Quantization
## `bitsandbytes` Integration
🤗 Accelerate brings `bitsandbytes` quantization to your model. You can now load any PyTorch model in 8-bit or 4-bit with a few lines of code.
If you want to use 🤗 Transformers models with `bitsandbytes`, you should follow this [documentation](https://huggingface.co/docs/transformers/main_classes/quantization).
To learn more about how the `bitsandbytes` quantization works, check out the blog posts on [8-bit quantization](https://huggingface.co/blog/hf-bitsandbytes-integration) and [4-bit quantization](https://huggingface.co/blog/4bit-transformers-bitsandbytes).
### Pre-Requisites
You will need to install the following requirements:
- Install `bitsandbytes` library
```bash
pip install bitsandbytes
```
- Install latest `accelerate` from source
```bash
pip install git+https://github.com/huggingface/accelerate.git
```
- Install `minGPT` and `huggingface_hub` to run examples
```bash
git clone https://github.com/karpathy/minGPT.git
pip install minGPT/
pip install huggingface_hub
```
### How it works
First, we need to initialize our model. To save memory, we can initialize an empty model using the context manager [`init_empty_weights`].
Let's take the GPT2 model from minGPT library.
```py
from accelerate import init_empty_weights
from mingpt.model import GPT
model_config = GPT.get_default_config()
model_config.model_type = 'gpt2-xl'
model_config.vocab_size = 50257
model_config.block_size = 1024
with init_empty_weights():
    empty_model = GPT(model_config)
```
Then, we need to get the path to the weights of your model. The path can be the state_dict file (e.g. "pytorch_model.bin") or a folder containing the sharded checkpoints.
```py
from huggingface_hub import snapshot_download
weights_location = snapshot_download(repo_id="marcsun13/gpt2-xl-linear-sharded")
```
Finally, you need to set your quantization configuration with [`~utils.BnbQuantizationConfig`].
Here's an example for 8-bit quantization:
```py
from accelerate.utils import BnbQuantizationConfig
bnb_quantization_config = BnbQuantizationConfig(load_in_8bit=True, llm_int8_threshold=6)
```
Here's an example for 4-bit quantization:
```py
import torch

from accelerate.utils import BnbQuantizationConfig

bnb_quantization_config = BnbQuantizationConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4")
```
To quantize your empty model with the selected configuration, you need to use [`~utils.load_and_quantize_model`].
```py
from accelerate.utils import load_and_quantize_model
quantized_model = load_and_quantize_model(empty_model, weights_location=weights_location, bnb_quantization_config=bnb_quantization_config, device_map="auto")
```
### Saving and loading 8-bit model
You can save your 8-bit model with accelerate using [`~Accelerator.save_model`].
```py
from accelerate import Accelerator
accelerator = Accelerator()
new_weights_location = "path/to/save_directory"
accelerator.save_model(quantized_model, new_weights_location)
quantized_model_from_saved = load_and_quantize_model(empty_model, weights_location=new_weights_location, bnb_quantization_config=bnb_quantization_config, device_map="auto")
```
Note that 4-bit model serialization is currently not supported.
### Offload modules to cpu and disk
You can offload some modules to CPU or disk if you don't have enough GPU memory to store the entire model on your GPUs.
This uses big model inference under the hood. Check this [documentation](https://huggingface.co/docs/accelerate/usage_guides/big_modeling) for more details.
For 8-bit quantization, the selected modules will be converted to 8-bit precision.
For 4-bit quantization, the selected modules will be kept in the `torch_dtype` that the user passed in `BnbQuantizationConfig`. We will add support to convert these offloaded modules in 4-bit when 4-bit serialization becomes possible.
You just need to pass a custom `device_map` in order to offload modules to CPU/disk. The offloaded modules will be dispatched to the GPU when needed. Here's an example:
```py
device_map = {
    "transformer.wte": 0,
    "transformer.wpe": 0,
    "transformer.drop": 0,
    "transformer.h": "cpu",
    "transformer.ln_f": "disk",
    "lm_head": "disk",
}
```
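With a map like this, the quantization call stays the same; a sketch reusing `empty_model`, `weights_location` and `bnb_quantization_config` from above (the `offload` folder name is arbitrary):

```py
quantized_model = load_and_quantize_model(
    empty_model,
    weights_location=weights_location,
    bnb_quantization_config=bnb_quantization_config,
    device_map=device_map,  # the custom CPU/disk map defined above
    offload_folder="offload",  # where the "disk" modules are stored
)
```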
### Fine-tune a quantized model
It is not possible to perform pure 8-bit or 4-bit training on these models. However, you can train these models by leveraging parameter-efficient fine-tuning methods (PEFT), for example training adapters on top of them. Please have a look at the [peft](https://github.com/huggingface/peft) library for more details.
Currently, you can't add adapters on top of any quantized model. However, with the official support of adapters with 🤗 Transformers models, you can fine-tune quantized models. If you want to finetune a 🤗 Transformers model, follow this [documentation](https://huggingface.co/docs/transformers/main_classes/quantization) instead. Check out this [demo](https://colab.research.google.com/drive/1VoYNfYDKcKRQRor98Zbf2-9VQTtGJ24k?usp=sharing) on how to fine-tune a 4-bit 🤗 Transformers model.
Note that you don't need to pass `device_map` when loading the model for training. It will automatically load your model on your GPU. Please note that `device_map="auto"` should be used for inference only.
### Example demo - running GPT2 1.5b on a Google Colab
Check out the Google Colab [demo](https://colab.research.google.com/drive/1T1pOgewAWVpR9gKpaEWw4orOrzPFb3yM?usp=sharing) for running quantized models on a GPT2 model. The GPT2-1.5B model checkpoint is in FP32, which uses 6GB of memory. After quantization, it uses 1.6GB with 8-bit modules and 1.2GB with 4-bit modules.
@ -8,9 +8,6 @@ http://www.apache.org/licenses/LICENSE-2.0
# Amazon SageMaker
@ -8,9 +8,6 @@ http://www.apache.org/licenses/LICENSE-2.0
# Tracking
@ -86,15 +83,11 @@ for iteration in config["num_iterations"]:
accelerator.end_training()
```
If a tracker requires a directory to save data to, such as `TensorBoard`, then pass the directory path to `project_dir`. The `project_dir` parameter is useful
when there are other configurations to be combined with in the [`~utils.ProjectConfiguration`] data class. For example, you can save the TensorBoard data to `project_dir` and everything else can be logged in the `logging_dir` parameter of [`~utils.ProjectConfiguration`]:
```python
accelerator = Accelerator(log_with="tensorboard", project_dir=".")
# use with ProjectConfiguration
config = ProjectConfiguration(project_dir=".", logging_dir="another/directory")
accelerator = Accelerator(log_with="tensorboard", project_config=config)
accelerator = Accelerator(log_with="tensorboard", logging_dir=".")
```
## Implementing Custom Trackers
@ -8,9 +8,6 @@ http://www.apache.org/licenses/LICENSE-2.0
# Example Zoo
@ -122,11 +119,6 @@ These are tutorials from libraries that integrate with 🤗 Accelerate:
- [How to implement a sentiment learning task with trlx](https://github.com/CarperAI/trlx#example-how-to-add-a-task)
### Comfy-UI
- [Enabling the use of large Stable Diffusion Models in low-vram settings using Accelerate](https://github.com/comfyanonymous/ComfyUI/blob/master/comfy/model_management.py#L291-L296)
## In Science
Below is a non-exhaustive list of papers utilizing 🤗 Accelerate.
@ -51,13 +51,13 @@ To run it in each of these various modes, use the following commands:
```bash
python ./nlp_example.py # from a server with a GPU
```
- with fp16 (mixed-precision)
* from any server by passing `mixed_precision=fp16` to the `Accelerator`.
```bash
python ./nlp_example.py --mixed_precision fp16
```
* from any server with Accelerate launcher
```bash
accelerate launch --mixed_precision fp16 ./nlp_example.py
```
- multi GPUs (using PyTorch distributed mode)
* With Accelerate config and launcher
```bash
accelerate config  # This will create a config file on your server
accelerate launch ./nlp_example.py  # This will run the script on your server
```
@ -139,13 +139,13 @@ To run it in each of these various modes, use the following commands:
```bash
python ./cv_example.py # from a server with a GPU
```
- with fp16 (mixed-precision)
* from any server by passing `mixed_precision=fp16` to the `Accelerator`.
```bash
python ./cv_example.py --data_dir path_to_data --mixed_precision fp16
```
* from any server with Accelerate launcher
```bash
accelerate launch --mixed_precision fp16 ./cv_example.py --data_dir path_to_data
```
- multi GPUs (using PyTorch distributed mode)
* With Accelerate config and launcher
```bash
accelerate config  # This will create a config file on your server
accelerate launch ./cv_example.py --data_dir path_to_data  # This will run the script on your server
```
@ -202,7 +202,7 @@ run the script to automatically launch multi GPU training on remote hardware.
This script uses [Runhouse](https://github.com/run-house/runhouse) to launch on self-hosted hardware (e.g. in your own
cloud account or on-premise cluster) but there are other options for running remotely as well. Runhouse can be installed
with `pip install runhouse`, and you can refer to
[hardware setup](https://runhouse-docs.readthedocs-hosted.com/en/latest/api/python/cluster.html#hardware-setup)
for hardware setup instructions, or this
[Colab tutorial](https://colab.research.google.com/drive/1qVwYyLTCPYPSdz9ZX7BZl9Qm0A3j7RJe) for a more in-depth walkthrough.
@ -243,6 +243,39 @@ def parse_args():
return args
# New Code #
def checkpoint_model(checkpoint_folder, ckpt_id, model, epoch, last_global_step, **kwargs):
    """Utility function for checkpointing model + optimizer dictionaries
    The main purpose for this is to be able to resume training from that instant again
    """
    checkpoint_state_dict = {
        "epoch": epoch,
        "last_global_step": last_global_step,
    }
    # Add extra kwargs too
    checkpoint_state_dict.update(kwargs)

    success = model.save_checkpoint(checkpoint_folder, ckpt_id, checkpoint_state_dict)
    status_msg = f"checkpointing: checkpoint_folder={checkpoint_folder}, ckpt_id={ckpt_id}"
    if success:
        logging.info(f"Success {status_msg}")
    else:
        logging.warning(f"Failure {status_msg}")
    return
# New Code #
def load_training_checkpoint(model, load_dir, tag=None, **kwargs):
    """Utility function for loading a model + optimizer checkpoint
    The main purpose for this is to be able to resume training from that instant again
    """
    _, checkpoint_state_dict = model.load_checkpoint(load_dir, tag=tag, **kwargs)
    epoch = checkpoint_state_dict["epoch"]
    last_global_step = checkpoint_state_dict["last_global_step"]
    del checkpoint_state_dict
    return (epoch, last_global_step)
# New Code #
def evaluate(args, model, eval_dataloader, accelerator, eval_dataset):
    model.eval()
@ -269,20 +302,9 @@ def main():
    # Initialize the accelerator. We will let the accelerator handle device placement for us in this example.
    # If we're using tracking, we also need to initialize it here and it will by default pick up all supported trackers
    # in the environment
    accelerator = (
        Accelerator(log_with=args.report_to, logging_dir=args.output_dir) if args.with_tracking else Accelerator()
    )
    # Make one log on every process with the configuration for debugging.
    logging.basicConfig(
        format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
@ -516,11 +538,17 @@ def main():
    model.tie_weights()

    # Scheduler and math around the number of training steps.
    # New Code #
    # Get gradient accumulation steps from deepspeed config if available
    if accelerator.state.deepspeed_plugin is not None:
        args.gradient_accumulation_steps = accelerator.state.deepspeed_plugin.deepspeed_config[
            "gradient_accumulation_steps"
        ]
    num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
    if args.max_train_steps is None:
        args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
    else:
        args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch)
@ -547,16 +575,16 @@ def main():
    )

    # We need to recalculate our total training steps as the size of the training dataloader may have changed.
    num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
    args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch

    # Figure out how many steps we should save the Accelerator states
    if hasattr(args.checkpointing_steps, "isdigit"):
        checkpointing_steps = args.checkpointing_steps
        if args.checkpointing_steps.isdigit():
            checkpointing_steps = int(args.checkpointing_steps)
    else:
        checkpointing_steps = None
    # We need to initialize the trackers we use, and also store our configuration.
    # The trackers initialize automatically on the main process.
@ -567,16 +595,14 @@ def main():
        accelerator.init_trackers("clm_no_trainer", experiment_config)

    # Train!
    total_batch_size = args.per_device_train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps

    logger.info("***** Running training *****")
    logger.info(f"  Num examples = {len(train_dataset)}")
    logger.info(f"  Num Epochs = {args.num_train_epochs}")
    logger.info(f"  Instantaneous batch size per device = {args.per_device_train_batch_size}")
    logger.info(f"  Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}")
    logger.info(f"  Gradient Accumulation steps = {args.gradient_accumulation_steps}")
    logger.info(f"  Total optimization steps = {args.max_train_steps}")
    # Only show the progress bar once on each machine.
    progress_bar = tqdm(range(args.max_train_steps), disable=not accelerator.is_local_main_process)
@ -587,61 +613,45 @@ def main():
    # Potentially load in the weights and states from a previous save
    if args.resume_from_checkpoint:
        # New Code #
        # Loads the DeepSpeed checkpoint from the specified path
        _, last_global_step = load_training_checkpoint(
            model,
            args.resume_from_checkpoint,
            **{"load_optimizer_states": True, "load_lr_scheduler_states": True},
        )
        accelerator.print(f"Resumed from checkpoint: {args.resume_from_checkpoint}")
        resume_step = last_global_step
        starting_epoch = resume_step // len(train_dataloader)
        resume_step -= starting_epoch * len(train_dataloader)
    for epoch in range(starting_epoch, args.num_train_epochs):
        model.train()
        if args.with_tracking:
            total_loss = 0
        for step, batch in enumerate(train_dataloader):
            # We need to skip steps until we reach the resumed step
            if args.resume_from_checkpoint and epoch == starting_epoch:
                if resume_step is not None and step < resume_step:
                    completed_steps += 1
                    continue
            outputs = model(**batch)
            loss = outputs.loss
            # We keep track of the loss at each epoch
            if args.with_tracking:
                total_loss += loss.detach().float()
            loss = loss / args.gradient_accumulation_steps
            accelerator.backward(loss)
            if (step + 1) % args.gradient_accumulation_steps == 0 or step == len(train_dataloader) - 1:
                optimizer.step()
                lr_scheduler.step()
                optimizer.zero_grad()
                progress_bar.update(1)
                completed_steps += 1

            if isinstance(checkpointing_steps, int):
                if completed_steps % checkpointing_steps == 0:
                    output_dir = f"step_{completed_steps}"
                    if args.output_dir is not None:
                        output_dir = os.path.join(args.output_dir, output_dir)
                    accelerator.save_state(output_dir)
@ -656,29 +666,34 @@ def main():
                {
                    "perplexity": perplexity,
                    "eval_loss": eval_loss,
                    "train_loss": total_loss.item() / len(train_dataloader),
                    "epoch": epoch,
                    "step": completed_steps,
                },
                step=completed_steps,
            )

        if isinstance(checkpointing_steps, str) and checkpointing_steps == "epoch":
            # New Code #
            # Save the DeepSpeed checkpoint to the specified path
            checkpoint_model(args.output_dir, epoch, model, epoch, completed_steps)

        # New Code #
        # Tracks the best checkpoint and best metric
        if best_metric is None or best_metric > perplexity:
            best_metric = perplexity
            best_metric_checkpoint = os.path.join(args.output_dir, str(epoch))
            accelerator.print(f"New best metric: {best_metric} at epoch {epoch}")
            accelerator.print(f"best_metric_checkpoint: {best_metric_checkpoint}")

    # New Code #
    # Loads the best checkpoint after the training is finished
    if args.load_best_model:
        _, last_global_step = load_training_checkpoint(
            model,
            "/".join(best_metric_checkpoint.split("/")[:-1]),
            tag=best_metric_checkpoint.split("/")[-1],
            **{"load_optimizer_states": True, "load_lr_scheduler_states": True},
        )

    # New Code #
    # Evaluates using the best checkpoint
@ -1,246 +0,0 @@
import argparse

import evaluate
import torch
from datasets import load_dataset
from torch.optim import AdamW
from torch.utils.data import DataLoader
from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_linear_schedule_with_warmup, set_seed

from accelerate import Accelerator, DistributedType
########################################################################
# This is a fully working simple example to use Accelerate
# specifically showcasing how to perform early stopping,
# and builds off the `nlp_example.py` script
#
# This example trains a Bert base model on GLUE MRPC
# in any of the following settings (with the same script):
# - single CPU or single GPU
# - multi GPUS (using PyTorch distributed mode)
# - (multi) TPUs
# - fp16 (mixed-precision) or fp32 (normal precision)
#
# To run it in each of these various modes, follow the instructions
# in the readme for examples:
# https://github.com/huggingface/accelerate/tree/main/examples
#
########################################################################
MAX_GPU_BATCH_SIZE = 16
EVAL_BATCH_SIZE = 32
def get_dataloaders(accelerator: Accelerator, batch_size: int = 16):
    """
    Creates a set of `DataLoader`s for the `glue` dataset,
    using "bert-base-cased" as the tokenizer.

    Args:
        accelerator (`Accelerator`):
            An `Accelerator` object
        batch_size (`int`, *optional*):
            The batch size for the train and validation DataLoaders.
    """
    tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
    datasets = load_dataset("glue", "mrpc")

    def tokenize_function(examples):
        # max_length=None => use the model max length (it's actually the default)
        outputs = tokenizer(examples["sentence1"], examples["sentence2"], truncation=True, max_length=None)
        return outputs

    # Apply the method we just defined to all the examples in all the splits of the dataset
    # starting with the main process first:
    with accelerator.main_process_first():
        tokenized_datasets = datasets.map(
            tokenize_function,
            batched=True,
            remove_columns=["idx", "sentence1", "sentence2"],
        )

    # We also rename the 'label' column to 'labels' which is the expected name for labels by the models of the
    # transformers library
    tokenized_datasets = tokenized_datasets.rename_column("label", "labels")

    def collate_fn(examples):
        # On TPU it's best to pad everything to the same length or training will be very slow.
        max_length = 128 if accelerator.distributed_type == DistributedType.TPU else None
        # When using mixed precision we want round multiples of 8/16
        if accelerator.mixed_precision == "fp8":
            pad_to_multiple_of = 16
        elif accelerator.mixed_precision != "no":
            pad_to_multiple_of = 8
        else:
            pad_to_multiple_of = None

        return tokenizer.pad(
            examples,
            padding="longest",
            max_length=max_length,
            pad_to_multiple_of=pad_to_multiple_of,
            return_tensors="pt",
        )

    # Instantiate dataloaders.
    train_dataloader = DataLoader(
        tokenized_datasets["train"], shuffle=True, collate_fn=collate_fn, batch_size=batch_size, drop_last=True
    )
    eval_dataloader = DataLoader(
        tokenized_datasets["validation"],
        shuffle=False,
        collate_fn=collate_fn,
        batch_size=EVAL_BATCH_SIZE,
        drop_last=(accelerator.mixed_precision == "fp8"),
    )

    return train_dataloader, eval_dataloader
# New code
class EarlyStoppingCallback:
    "A callback class that helps with early stopping"

    def __init__(self, min_delta=0, patience=5):
        self.min_delta = min_delta
        self.patience = patience
        self.counter = 0
        self.lowest_loss = float("inf")

    def check_early_stopping(self, eval_loss):
        delta = self.lowest_loss - eval_loss
        if delta >= self.min_delta:
            self.lowest_loss = eval_loss
            self.counter = 0
        else:
            self.counter += 1
            if self.counter >= self.patience:
                return True
        return False


callback = EarlyStoppingCallback()
def training_function(config, args):
    # Initialize accelerator
    accelerator = Accelerator(cpu=args.cpu, mixed_precision=args.mixed_precision)
    # Sample hyper-parameters for learning rate, batch size, seed and a few other HPs
    lr = config["lr"]
    num_epochs = int(config["num_epochs"])
    seed = int(config["seed"])
    batch_size = int(config["batch_size"])
    metric = evaluate.load("glue", "mrpc")

    # If the batch size is too big we use gradient accumulation
    gradient_accumulation_steps = 1
    if batch_size > MAX_GPU_BATCH_SIZE and accelerator.distributed_type != DistributedType.TPU:
        gradient_accumulation_steps = batch_size // MAX_GPU_BATCH_SIZE
        batch_size = MAX_GPU_BATCH_SIZE

    set_seed(seed)
    train_dataloader, eval_dataloader = get_dataloaders(accelerator, batch_size)

    # Instantiate the model (we build the model here so that the seed also controls new weights initialization)
    model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", return_dict=True)

    # We could avoid this line since the accelerator is set with `device_placement=True` (default value).
    # Note that if you are placing tensors on devices manually, this line absolutely needs to be before the optimizer
    # creation otherwise training will not work on TPU (`accelerate` will kindly throw an error to make us aware of that).
    model = model.to(accelerator.device)

    # Instantiate optimizer
    optimizer = AdamW(params=model.parameters(), lr=lr)

    # Instantiate scheduler
    lr_scheduler = get_linear_schedule_with_warmup(
        optimizer=optimizer,
        num_warmup_steps=100,
        num_training_steps=(len(train_dataloader) * num_epochs) // gradient_accumulation_steps,
    )

    # Prepare everything
    # There is no specific order to remember, we just need to unpack the objects in the same order we gave them to the
    # prepare method.
    model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
        model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
    )

    # Now we train the model
    for epoch in range(num_epochs):
        model.train()
        for step, batch in enumerate(train_dataloader):
            # We could avoid this line since we set the accelerator with `device_placement=True`.
            batch.to(accelerator.device)
            outputs = model(**batch)
            loss = outputs.loss
            loss = loss / gradient_accumulation_steps
            accelerator.backward(loss)
            if step % gradient_accumulation_steps == 0:
                optimizer.step()
                lr_scheduler.step()
                optimizer.zero_grad()

            # New code
            # Check if we should stop the training on any processes
            if callback.check_early_stopping(loss.item()):
                accelerator.set_trigger()

            # If so, we break the loop
            if accelerator.check_trigger():
                break

        model.eval()
        for step, batch in enumerate(eval_dataloader):
            # We could avoid this line since we set the accelerator with `device_placement=True`.
            batch.to(accelerator.device)
            with torch.no_grad():
                outputs = model(**batch)
            predictions = outputs.logits.argmax(dim=-1)
            predictions, references = accelerator.gather_for_metrics((predictions, batch["labels"]))
            metric.add_batch(
                predictions=predictions,
                references=references,
            )

        eval_metric = metric.compute()
        # Use accelerator.print to print only on the main process.
        accelerator.print(f"epoch {epoch}:", eval_metric)
def main():
    parser = argparse.ArgumentParser(description="Simple example of training script.")
    parser.add_argument(
        "--mixed_precision",
        type=str,
        default=None,
        choices=["no", "fp16", "bf16", "fp8"],
        help="Whether to use mixed precision. Choose"
        "between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
        "and an Nvidia Ampere GPU.",
    )
    parser.add_argument("--cpu", action="store_true", help="If passed, will train on the CPU.")
    args = parser.parse_args()
    config = {"lr": 2e-5, "num_epochs": 3, "seed": 42, "batch_size": 16}
    training_function(config, args)


if __name__ == "__main__":
    main()
@ -15,23 +15,14 @@
import argparse
import gc
import os

import evaluate
import psutil
import torch
from datasets import load_dataset
from torch.utils.data import DataLoader
from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_linear_schedule_with_warmup, set_seed

from accelerate import Accelerator, DistributedType
########################################################################
@ -69,65 +60,18 @@ def b2mb(x):
class TorchTracemalloc:
def __enter__(self):
gc.collect()
if torch.cuda.is_available():
torch.cuda.empty_cache()
torch.cuda.reset_max_memory_allocated() # reset the peak gauge to zero
self.begin = torch.cuda.memory_allocated()
elif is_xpu_available():
torch.xpu.empty_cache()
torch.xpu.reset_max_memory_allocated() # reset the peak gauge to zero
self.begin = torch.xpu.memory_allocated()
elif is_npu_available():
torch.npu.empty_cache()
torch.npu.reset_max_memory_allocated() # reset the peak gauge to zero
self.begin = torch.npu.memory_allocated()
self.process = psutil.Process()
self.cpu_begin = self.cpu_mem_used()
self.peak_monitoring = True
peak_monitor_thread = threading.Thread(target=self.peak_monitor_func)
peak_monitor_thread.daemon = True
peak_monitor_thread.start()
torch.cuda.empty_cache()
torch.cuda.reset_max_memory_allocated() # reset the peak gauge to zero
self.begin = torch.cuda.memory_allocated()
return self
def cpu_mem_used(self):
"""get resident set size memory for the current process"""
return self.process.memory_info().rss
def peak_monitor_func(self):
self.cpu_peak = -1
while True:
self.cpu_peak = max(self.cpu_mem_used(), self.cpu_peak)
# can't sleep or will not catch the peak right (this comment is here on purpose)
# time.sleep(0.001) # 1msec
if not self.peak_monitoring:
break
def __exit__(self, *exc):
self.peak_monitoring = False
gc.collect()
if torch.cuda.is_available():
torch.cuda.empty_cache()
self.end = torch.cuda.memory_allocated()
self.peak = torch.cuda.max_memory_allocated()
elif is_xpu_available():
torch.xpu.empty_cache()
self.end = torch.xpu.memory_allocated()
self.peak = torch.xpu.max_memory_allocated()
elif is_npu_available():
torch.npu.empty_cache()
self.end = torch.npu.memory_allocated()
self.peak = torch.npu.max_memory_allocated()
torch.cuda.empty_cache()
self.end = torch.cuda.memory_allocated()
self.peak = torch.cuda.max_memory_allocated()
self.used = b2mb(self.end - self.begin)
self.peaked = b2mb(self.peak - self.begin)
self.cpu_end = self.cpu_mem_used()
self.cpu_used = b2mb(self.cpu_end - self.cpu_begin)
self.cpu_peaked = b2mb(self.cpu_peak - self.cpu_begin)
# print(f"delta used/peak {self.used:4d}/{self.peaked:4d}")
@ -142,25 +86,13 @@ def training_function(config, args):
# For testing only
if os.environ.get("TESTING_MOCKED_DATALOADERS", None) == "1":
config["num_epochs"] = 2
# New Code #
# Pass the advanced FSDP settings not part of the accelerate config by creating fsdp_plugin
fsdp_plugin = FullyShardedDataParallelPlugin(
state_dict_config=FullStateDictConfig(offload_to_cpu=False, rank0_only=False),
optim_state_dict_config=FullOptimStateDictConfig(offload_to_cpu=False, rank0_only=False),
)
# Initialize accelerator
if args.with_tracking:
accelerator = Accelerator(
cpu=args.cpu,
mixed_precision=args.mixed_precision,
log_with="wandb",
project_dir=args.logging_dir,
fsdp_plugin=fsdp_plugin,
cpu=args.cpu, mixed_precision=args.mixed_precision, log_with="wandb", logging_dir=args.logging_dir
)
else:
accelerator = Accelerator(fsdp_plugin=fsdp_plugin)
accelerator = Accelerator()
accelerator.print(accelerator.distributed_type)
if hasattr(args.checkpointing_steps, "isdigit"):
@ -243,10 +175,7 @@ def training_function(config, args):
set_seed(seed)
# Instantiate the model (we build the model here so that the seed also controls new weight initialization)
model = AutoModelForSequenceClassification.from_pretrained(
args.model_name_or_path, return_dict=True, low_cpu_mem_usage=True
)
model = AutoModelForSequenceClassification.from_pretrained(args.model_name_or_path, return_dict=True)
# New Code #
# For FSDP feature, it is highly recommended and efficient to prepare the model before creating optimizer
model = accelerator.prepare(model)
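Concretely, the recommended FSDP ordering means wrapping the model first so the optimizer is built over the flattened, sharded parameters; a sketch reusing this script's variables:

```python
model = accelerator.prepare(model)  # shard/wrap the model first
optimizer = AdamW(params=model.parameters(), lr=lr)  # optimizer now sees the sharded params
# The remaining objects can then be prepared together as usual
optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
    optimizer, train_dataloader, eval_dataloader, lr_scheduler
)
```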
@ -316,6 +245,7 @@ def training_function(config, args):
batch.to(accelerator.device)
outputs = model(**batch)
loss = outputs.loss
loss = loss / gradient_accumulation_steps
# We keep track of the loss at each epoch
if args.with_tracking:
total_loss += loss.detach().float()
@ -456,7 +386,7 @@ def main():
required=True,
)
args = parser.parse_args()
config = {"lr": 2e-5, "num_epochs": 3, "seed": 42, "batch_size": 16}
config = {"lr": 2e-5, "num_epochs": 3, "seed": 1, "batch_size": 16}
training_function(config, args)

View File

@ -19,7 +19,7 @@ extras = {}
extras["quality"] = ["black ~= 23.1", "ruff >= 0.0.241", "hf-doc-builder >= 0.3.0", "urllib3 < 2.0.0"]
extras["docs"] = []
extras["test_prod"] = ["pytest", "pytest-xdist", "pytest-subtests", "parameterized"]
extras["test_dev"] = ["datasets", "evaluate", "transformers", "scipy", "scikit-learn", "deepspeed", "tqdm", "bitsandbytes", "timm"]
extras["test_dev"] = ["datasets", "evaluate", "transformers", "scipy", "scikit-learn", "deepspeed", "tqdm"]
extras["testing"] = extras["test_prod"] + extras["test_dev"]
extras["rich"] = ["rich"]
@ -32,7 +32,7 @@ extras["sagemaker"] = [
setup(
name="accelerate",
version="0.24.0.dev0",
version="0.20.3",
description="Accelerate",
long_description=open("README.md", "r", encoding="utf-8").read(),
long_description_content_type="text/markdown",
@ -47,12 +47,11 @@ setup(
"console_scripts": [
"accelerate=accelerate.commands.accelerate_cli:main",
"accelerate-config=accelerate.commands.config:main",
"accelerate-estimate-memory=accelerate.commands.estimate:main",
"accelerate-launch=accelerate.commands.launch:main",
]
},
python_requires=">=3.8.0",
install_requires=["numpy>=1.17", "packaging>=20.0", "psutil", "pyyaml", "torch>=1.10.0", "huggingface_hub"],
python_requires=">=3.7.0",
install_requires=["numpy>=1.17", "packaging>=20.0", "psutil", "pyyaml", "torch>=1.6.0"],
extras_require=extras,
classifiers=[
"Development Status :: 5 - Production/Stable",
@ -62,35 +61,27 @@ setup(
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.7",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
],
)
# Release checklist
# 1. Checkout the release branch (for a patch the current release branch, for a new minor version, create one):
# git checkout -b vXX.xx-release
# The -b is only necessary for creation (so remove it when doing a patch)
# 2. Change the version in __init__.py and setup.py to the proper value.
# 3. Commit these changes with the message: "Release: v<VERSION>"
# 4. Add a tag in git to mark the release:
# git tag v<VERSION> -m 'Adds tag v<VERSION> for pypi'
# Push the tag and release commit to git: git push --tags origin vXX.xx-release
# 5. Run the following commands in the top-level directory:
# rm -rf dist
# rm -rf build
# 1. Change the version in __init__.py and setup.py.
# 2. Commit these changes with the message: "Release: VERSION"
# 3. Add a tag in git to mark the release: "git tag VERSION -m 'Adds tag VERSION for pypi' "
# Push the tag to git: git push --tags origin main
# 4. Run the following commands in the top-level directory:
# python setup.py bdist_wheel
# python setup.py sdist
# 6. Upload the package to the pypi test server first:
# twine upload dist/* -r testpypi
# 7. Check that you can install it in a virtualenv by running:
# pip install accelerate
# pip uninstall accelerate
# 5. Upload the package to the pypi test server first:
# twine upload dist/* -r pypitest
# twine upload dist/* -r pypitest --repository-url=https://test.pypi.org/legacy/
# 6. Check that you can install it in a virtualenv by running:
# pip install -i https://testpypi.python.org/pypi accelerate
# accelerate env
# accelerate test
# 8. Upload the final version to actual pypi:
# 7. Upload the final version to actual pypi:
# twine upload dist/* -r pypi
# 9. Add release notes to the tag in github once everything is looking hunky-dory.
# 10. Go back to the main branch and update the version in __init__.py, setup.py to the new version ".dev" and push to
# main.
# 8. Add release notes to the tag in github once everything is looking hunky-dory.
# 9. Update the version in __init__.py, setup.py to the new version "-dev" and push to master

View File

@ -1,4 +1,4 @@
__version__ = "0.24.0.dev0"
__version__ = "0.20.3"
from .accelerator import Accelerator
from .big_modeling import (
@ -14,7 +14,6 @@ from .data_loader import skip_first_batches
from .launchers import debug_launcher, notebook_launcher
from .state import PartialState
from .utils import (
AutocastKwargs,
DeepSpeedPlugin,
DistributedDataParallelKwargs,
DistributedType,

File diff suppressed because it is too large

View File

@ -12,10 +12,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import os
from contextlib import contextmanager
from functools import wraps
from typing import Dict, List, Optional, Union
import torch
@ -36,25 +34,21 @@ from .utils import (
find_tied_parameters,
get_balanced_memory,
infer_auto_device_map,
is_torch_version,
load_checkpoint_in_model,
offload_state_dict,
parse_flag_from_env,
retie_parameters,
)
logger = logging.getLogger(__name__)
from .utils.versions import is_torch_version
@contextmanager
def init_empty_weights(include_buffers: bool = None):
def init_empty_weights(include_buffers: bool = False):
"""
A context manager under which models are initialized with all parameters on the meta device, therefore creating an
empty model. Useful when just initializing the model would blow the available RAM.
Args:
include_buffers (`bool`, *optional*):
include_buffers (`bool`, *optional*, defaults to `False`):
Whether or not to also put all buffers on the meta device while initializing.
Example:
@ -75,21 +69,21 @@ def init_empty_weights(include_buffers: bool = None):
</Tip>
"""
if include_buffers is None:
include_buffers = parse_flag_from_env("ACCELERATE_INIT_INCLUDE_BUFFERS", False)
if not is_torch_version(">=", "1.9.0"):
raise NotImplementedError("Initializing empty weights to a meta device requires torch >= 1.9.0")
with init_on_device(torch.device("meta"), include_buffers=include_buffers) as f:
yield f
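For reference, the canonical use of the context manager defined above (mirroring its docstring example) instantiates a large model without allocating any real storage:

```python
import torch.nn as nn

from accelerate import init_empty_weights

with init_empty_weights():
    model = nn.Sequential(*[nn.Linear(10000, 10000) for _ in range(10)])

# Every parameter lives on the meta device and holds no data yet
assert next(model.parameters()).is_meta
```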
@contextmanager
def init_on_device(device: torch.device, include_buffers: bool = None):
def init_on_device(device: torch.device, include_buffers: bool = False):
"""
A context manager under which models are initialized with all parameters on the specified device.
Args:
device (`torch.device`):
Device to initialize all parameters on.
include_buffers (`bool`, *optional*):
include_buffers (`bool`, *optional*, defaults to `False`):
Whether or not to also put all buffers on the meta device while initializing.
Example:
@ -102,15 +96,6 @@ def init_on_device(device: torch.device, include_buffers: bool = None):
tst = nn.Linear(100, 100) # on `cuda` device
```
"""
if include_buffers is None:
include_buffers = parse_flag_from_env("ACCELERATE_INIT_INCLUDE_BUFFERS", False)
# TODO(shingjan): remove the torch version check once older versions are deprecated
if is_torch_version(">=", "2.0") and include_buffers:
with device:
yield
return
old_register_parameter = nn.Module.register_parameter
if include_buffers:
old_register_buffer = nn.Module.register_buffer
@ -186,6 +171,8 @@ def cpu_offload(
called directly during the forward, for instance if a `dense` linear layer is registered, but at forward,
`dense.weight` and `dense.bias` are used in some operations instead of calling `dense` directly.
"""
if not is_torch_version(">=", "1.9.0"):
raise NotImplementedError("CPU offloading requires torch >= 1.9.0")
if execution_device is None:
execution_device = next(iter(model.parameters())).device
if state_dict is None:
@ -275,6 +262,8 @@ def disk_offload(
called directly during the forward, for instance if a `dense` linear layer is registered, but at forward,
`dense.weight` and `dense.bias` are used in some operations instead of calling `dense` directly.
"""
if not is_torch_version(">=", "1.9.0"):
raise NotImplementedError("Disk offloading requires torch >= 1.9.0")
if not os.path.isdir(offload_dir) or not os.path.isfile(os.path.join(offload_dir, "index.json")):
offload_state_dict(offload_dir, model.state_dict())
if execution_device is None:
@ -304,7 +293,6 @@ def dispatch_model(
offload_buffers: bool = False,
skip_keys: Optional[Union[str, List[str]]] = None,
preload_module_classes: Optional[List[str]] = None,
force_hooks: bool = False,
):
"""
Dispatches a model according to a given device map. Layers of the model might be spread across GPUs, offloaded on
@ -335,99 +323,64 @@ def dispatch_model(
of the forward. This should only be used for classes that have submodules which are registered but not
called directly during the forward, for instance if a `dense` linear layer is registered, but at forward,
`dense.weight` and `dense.bias` are used in some operations instead of calling `dense` directly.
force_hooks (`bool`, *optional*, defaults to `False`):
Whether or not to force device hooks to be attached to the model even if all layers are dispatched to a
single device.
"""
if not is_torch_version(">=", "1.9.0"):
raise NotImplementedError("Model dispatching requires torch >= 1.9.0")
# Error early if the device map is incomplete.
check_device_map(model, device_map)
# for backward compatibility
is_bnb_quantized = (
getattr(model, "is_quantized", False) or getattr(model, "is_loaded_in_8bit", False)
) and getattr(model, "quantization_method", "bitsandbytes") == "bitsandbytes"
# We attach hooks if the device_map has at least 2 different devices or if
# force_hooks is set to `True`. Otherwise, the model is already loaded
# on its unique device and the user can decide where to dispatch the model.
# If the model is quantized, we always force-dispatch the model
if (len(set(device_map.values())) > 1) or is_bnb_quantized or force_hooks:
if main_device is None:
if set(device_map.values()) == {"cpu"} or set(device_map.values()) == {"cpu", "disk"}:
main_device = "cpu"
else:
main_device = [d for d in device_map.values() if d not in ["cpu", "disk"]][0]
if main_device != "cpu":
cpu_modules = [name for name, device in device_map.items() if device == "cpu"]
if state_dict is None and len(cpu_modules) > 0:
state_dict = extract_submodules_state_dict(model.state_dict(), cpu_modules)
disk_modules = [name for name, device in device_map.items() if device == "disk"]
if offload_dir is None and offload_index is None and len(disk_modules) > 0:
raise ValueError(
"We need an `offload_dir` to dispatch this model according to this `device_map`, the following submodules "
f"need to be offloaded: {', '.join(disk_modules)}."
)
if (
len(disk_modules) > 0
and offload_index is None
and (not os.path.isdir(offload_dir) or not os.path.isfile(os.path.join(offload_dir, "index.json")))
):
disk_state_dict = extract_submodules_state_dict(model.state_dict(), disk_modules)
offload_state_dict(offload_dir, disk_state_dict)
execution_device = {
name: main_device if device in ["cpu", "disk"] else device for name, device in device_map.items()
}
execution_device[""] = main_device
offloaded_devices = ["disk"] if main_device == "cpu" or main_device == "mps" else ["cpu", "disk"]
offload = {name: device in offloaded_devices for name, device in device_map.items()}
save_folder = offload_dir if len(disk_modules) > 0 else None
if state_dict is not None or save_folder is not None or offload_index is not None:
device = main_device if offload_index is not None else None
weights_map = OffloadedWeightsLoader(
state_dict=state_dict, save_folder=save_folder, index=offload_index, device=device
)
if main_device is None:
if set(device_map.values()) == {"cpu"} or set(device_map.values()) == {"cpu", "disk"}:
main_device = "cpu"
else:
weights_map = None
main_device = [d for d in device_map.values() if d not in ["cpu", "disk"]][0]
tied_params = find_tied_parameters(model)
attach_align_device_hook_on_blocks(
model,
execution_device=execution_device,
offload=offload,
offload_buffers=offload_buffers,
weights_map=weights_map,
skip_keys=skip_keys,
preload_module_classes=preload_module_classes,
if main_device != "cpu":
cpu_modules = [name for name, device in device_map.items() if device == "cpu"]
if state_dict is None and len(cpu_modules) > 0:
state_dict = extract_submodules_state_dict(model.state_dict(), cpu_modules)
disk_modules = [name for name, device in device_map.items() if device == "disk"]
if offload_dir is None and offload_index is None and len(disk_modules) > 0:
raise ValueError(
"We need an `offload_dir` to dispatch this model according to this `device_map`, the following submodules "
f"need to be offloaded: {', '.join(disk_modules)}."
)
# Attaching the hook may break tied weights, so we retie them
retie_parameters(model, tied_params)
# add warning to cuda and to method
def add_warning(fn, model):
@wraps(fn)
def wrapper(*args, **kwargs):
logger.warning("You shouldn't move a model when it is dispatched on multiple devices.")
for param in model.parameters():
if param.device == torch.device("meta"):
raise RuntimeError("You can't move a model that has some modules offloaded to cpu or disk.")
return fn(*args, **kwargs)
return wrapper
model.to = add_warning(model.to, model)
model.cuda = add_warning(model.cuda, model)
if (
len(disk_modules) > 0
and offload_index is None
and (not os.path.isdir(offload_dir) or not os.path.isfile(os.path.join(offload_dir, "index.json")))
):
disk_state_dict = extract_submodules_state_dict(model.state_dict(), disk_modules)
offload_state_dict(offload_dir, disk_state_dict)
execution_device = {
name: main_device if device in ["cpu", "disk"] else device for name, device in device_map.items()
}
execution_device[""] = main_device
offloaded_devices = ["disk"] if main_device == "cpu" or main_device == "mps" else ["cpu", "disk"]
offload = {name: device in offloaded_devices for name, device in device_map.items()}
save_folder = offload_dir if len(disk_modules) > 0 else None
if state_dict is not None or save_folder is not None or offload_index is not None:
device = main_device if offload_index is not None else None
weights_map = OffloadedWeightsLoader(
state_dict=state_dict, save_folder=save_folder, index=offload_index, device=device
)
else:
device = list(device_map.values())[0]
if device != "disk":
model.to(device)
else:
raise ValueError(
"You are trying to offload the whole model to the disk. Please use the `disk_offload` function instead."
)
weights_map = None
tied_params = find_tied_parameters(model)
attach_align_device_hook_on_blocks(
model,
execution_device=execution_device,
offload=offload,
offload_buffers=offload_buffers,
weights_map=weights_map,
skip_keys=skip_keys,
preload_module_classes=preload_module_classes,
)
# Attaching the hook may break tied weights, so we retie them
retie_parameters(model, tied_params)
model.hf_device_map = device_map
return model
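A hedged sketch of calling `dispatch_model` directly; the module names in the `device_map` below are illustrative and must match the top-level modules of the actual model:

```python
from accelerate import dispatch_model

# Hypothetical split: embeddings offloaded to CPU, the rest across two GPUs
device_map = {"embed": "cpu", "block1": 0, "block2": 1, "lm_head": 1}
model = dispatch_model(model, device_map=device_map)
# The attached hooks now move inputs/outputs between devices at forward time
```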
@ -444,7 +397,6 @@ def load_checkpoint_and_dispatch(
offload_state_dict: Optional[bool] = None,
skip_keys: Optional[Union[str, List[str]]] = None,
preload_module_classes: Optional[List[str]] = None,
force_hooks: bool = False,
):
"""
Loads a (potentially sharded) checkpoint inside a model, potentially sending weights to a given device as they are
@ -487,9 +439,6 @@ def load_checkpoint_and_dispatch(
of the forward. This should only be used for classes that have submodules which are registered but not
called directly during the forward, for instance if a `dense` linear layer is registered, but at forward,
`dense.weight` and `dense.bias` are used in some operations instead of calling `dense` directly.
force_hooks (`bool`, *optional*, defaults to `False`):
Whether or not to force device hooks to be attached to the model even if all layers are dispatched to a
single device.
Example:
@ -513,6 +462,8 @@ def load_checkpoint_and_dispatch(
... )
```
"""
if not is_torch_version(">=", "1.9.0"):
raise NotImplementedError("Loading and dispatching requires torch >= 1.9.0")
if isinstance(device_map, str) and device_map not in ["auto", "balanced", "balanced_low_0", "sequential"]:
raise ValueError(
"If passing a string for `device_map`, please choose 'auto', 'balanced', 'balanced_low_0' or "
@ -550,5 +501,4 @@ def load_checkpoint_and_dispatch(
offload_buffers=offload_buffers,
skip_keys=skip_keys,
preload_module_classes=preload_module_classes,
force_hooks=force_hooks,
)
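Putting the pieces together, the usual pattern (sketched; `MyModel` and the checkpoint path are placeholders) is to build the skeleton on the meta device and then load and dispatch in one call:

```python
from accelerate import init_empty_weights, load_checkpoint_and_dispatch

with init_empty_weights():
    model = MyModel()  # placeholder model class

model = load_checkpoint_and_dispatch(
    model, checkpoint="path/to/checkpoint", device_map="auto"
)
```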

View File

@ -18,7 +18,6 @@ from argparse import ArgumentParser
from accelerate.commands.config import get_config_parser
from accelerate.commands.env import env_command_parser
from accelerate.commands.estimate import estimate_command_parser
from accelerate.commands.launch import launch_command_parser
from accelerate.commands.test import test_command_parser
from accelerate.commands.tpu import tpu_command_parser
@ -30,7 +29,6 @@ def main():
# Register commands
get_config_parser(subparsers=subparsers)
estimate_command_parser(subparsers=subparsers)
env_command_parser(subparsers=subparsers)
launch_command_parser(subparsers=subparsers)
tpu_command_parser(subparsers=subparsers)

View File

@ -21,7 +21,6 @@ from ...utils import (
DistributedType,
is_deepspeed_available,
is_mps_available,
is_npu_available,
is_transformers_available,
is_xpu_available,
)
@ -48,7 +47,7 @@ from .config_utils import (
def get_cluster_input():
distributed_type = _ask_options(
"Which type of machine are you using?",
["No distributed training", "multi-CPU", "multi-XPU", "multi-GPU", "multi-NPU", "TPU"],
["No distributed training", "multi-CPU", "multi-XPU", "multi-GPU", "TPU"],
_convert_distributed_mode,
)
@ -60,14 +59,8 @@ def get_cluster_input():
main_process_port = None
rdzv_backend = "static"
same_network = True
debug = False
if distributed_type in [
DistributedType.MULTI_GPU,
DistributedType.MULTI_NPU,
DistributedType.MULTI_XPU,
DistributedType.MULTI_CPU,
]:
if distributed_type in [DistributedType.MULTI_GPU, DistributedType.MULTI_XPU, DistributedType.MULTI_CPU]:
num_machines = _ask_field(
"How many different machines will you use (use more than 1 for multi-node training)? [1]: ",
int,
@ -96,16 +89,10 @@ def get_cluster_input():
rdzv_backend = _ask_field(
"What rendezvous backend will you use? ('static', 'c10d', ...): ", default="static"
)
debug = _ask_field(
"Should distributed operations be checked while running for errors? This can avoid timeout issues but will be slower. [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
if distributed_type == DistributedType.NO:
use_cpu = _ask_field(
"Do you want to run your training on CPU only (even if a GPU / Apple Silicon / Ascend NPU device is available)? [yes/NO]:",
"Do you want to run your training on CPU only (even if a GPU / Apple Silicon device is available)? [yes/NO]:",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
@ -123,11 +110,7 @@ def get_cluster_input():
default=False,
error_message="Please enter yes or no.",
)
if (
not use_cpu
and is_xpu_available()
and distributed_type not in [DistributedType.MULTI_GPU, DistributedType.MULTI_NPU, DistributedType.TPU]
):
if not use_cpu and is_xpu_available():
ipex_config["use_xpu"] = _ask_field(
"Do you want to use XPU plugin to speed up training on XPU? [yes/NO]:",
_convert_yes_no_to_bool,
@ -313,7 +296,7 @@ def get_cluster_input():
)
fsdp_config = {}
if distributed_type in [DistributedType.MULTI_GPU, DistributedType.MULTI_NPU, DistributedType.MULTI_XPU]:
if distributed_type in [DistributedType.MULTI_GPU]:
use_fsdp = _ask_field(
"Do you want to use FullyShardedDataParallel? [yes/NO]: ",
_convert_yes_no_to_bool,
@ -343,18 +326,11 @@ def get_cluster_input():
lambda x: FSDP_AUTO_WRAP_POLICY[int(x)],
)
if fsdp_config["fsdp_auto_wrap_policy"] == FSDP_AUTO_WRAP_POLICY[0]:
use_no_split_modules = _ask_field(
"Do you want to use the model's `_no_split_modules` to wrap. Only applicable for 🤗 Transformers [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
fsdp_config["fsdp_transformer_layer_cls_to_wrap"] = _ask_field(
"Specify the comma-separated list of transformer layer class names (case-sensitive) to wrap ,e.g, :"
"`BertLayer`, `GPTJBlock`, `T5Block`, `BertLayer,BertEmbeddings,BertSelfOutput` ...? : ",
str,
)
if not use_no_split_modules:
fsdp_config["fsdp_transformer_layer_cls_to_wrap"] = _ask_field(
"Specify the comma-separated list of transformer layer class names (case-sensitive) to wrap ,e.g, :"
"`BertLayer`, `GPTJBlock`, `T5Block`, `BertLayer,BertEmbeddings,BertSelfOutput` ...? : ",
str,
)
elif fsdp_config["fsdp_auto_wrap_policy"] == FSDP_AUTO_WRAP_POLICY[1]:
fsdp_config["fsdp_min_num_params"] = _ask_field(
"What should be your FSDP's minimum number of parameters for Default Auto Wrapping Policy? [1e8]: ",
@ -372,25 +348,6 @@ def get_cluster_input():
fsdp_state_dict_type_query,
FSDP_STATE_DICT_TYPE,
lambda x: FSDP_STATE_DICT_TYPE[int(x)],
default=2,
)
fsdp_config["fsdp_forward_prefetch"] = _ask_field(
"Do you want to enable FSDP's forward prefetch policy? [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
fsdp_config["fsdp_use_orig_params"] = _ask_field(
"Do you want to enable FSDP's `use_orig_params` feature? [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
fsdp_config["fsdp_sync_module_states"] = _ask_field(
"Do you want each individually wrapped FSDP unit to broadcast module parameters from rank 0 at the start? [YES/no]: ",
_convert_yes_no_to_bool,
default=True,
error_message="Please enter yes or no.",
)
megatron_lm_config = {}
@ -468,7 +425,6 @@ def get_cluster_input():
DistributedType.MULTI_CPU,
DistributedType.MULTI_XPU,
DistributedType.MULTI_GPU,
DistributedType.MULTI_NPU,
DistributedType.TPU,
]:
machine_type = str(distributed_type).split(".")[1].replace("MULTI_", "")
@ -492,28 +448,9 @@ def get_cluster_input():
else:
num_processes = 1
if (distributed_type == DistributedType.MULTI_GPU) and (num_machines == 1) and (num_processes == 1):
raise ValueError(
f"Specified distributed type {distributed_type} but only using 1 GPU on a single machine. Please select `No distributed training` for the type of machine you are using."
)
if (
distributed_type
in [
DistributedType.MULTI_GPU,
DistributedType.MULTI_NPU,
DistributedType.MULTI_XPU,
DistributedType.NO,
]
and not use_cpu
and not use_mps
):
if is_npu_available():
machine_type = "NPU(s)"
else:
machine_type = "GPU(s)"
if distributed_type in [DistributedType.MULTI_GPU, DistributedType.NO] and not use_cpu and not use_mps:
gpu_ids = _ask_field(
f"What {machine_type} (by id) should be used for training on this machine as a comma-seperated list? [all]:",
"What GPU(s) (by id) should be used for training on this machine as a comma-seperated list? [all]:",
default="all",
)
@ -641,5 +578,4 @@ def get_cluster_input():
tpu_use_sudo=tpu_use_sudo,
tpu_use_cluster=tpu_use_cluster,
dynamo_config=dynamo_config,
debug=debug,
)

View File

@ -78,7 +78,6 @@ class BaseConfig:
distributed_type: Union[DistributedType, SageMakerDistributedType]
mixed_precision: str
use_cpu: bool
debug: bool
def to_dict(self):
result = self.__dict__
@ -107,15 +106,6 @@ class BaseConfig:
config_dict["dynamo_config"] = {} if dynamo_backend == "NO" else {"dynamo_backend": dynamo_backend}
if "use_cpu" not in config_dict:
config_dict["use_cpu"] = False
if "debug" not in config_dict:
config_dict["debug"] = False
extra_keys = sorted(set(config_dict.keys()) - set(cls.__dataclass_fields__.keys()))
if len(extra_keys) > 0:
raise ValueError(
f"The config file at {json_file} had unknown keys ({extra_keys}), please try upgrading your `accelerate`"
" version or fix (and potentially remove) these keys from your config file."
)
return cls(**config_dict)
def to_json_file(self, json_file):
@ -130,6 +120,7 @@ class BaseConfig:
config_dict = yaml.safe_load(f)
if "compute_environment" not in config_dict:
config_dict["compute_environment"] = ComputeEnvironment.LOCAL_MACHINE
if "mixed_precision" not in config_dict:
config_dict["mixed_precision"] = "fp16" if ("fp16" in config_dict and config_dict["fp16"]) else None
if isinstance(config_dict["mixed_precision"], bool) and not config_dict["mixed_precision"]:
@ -141,14 +132,6 @@ class BaseConfig:
config_dict["dynamo_config"] = {} if dynamo_backend == "NO" else {"dynamo_backend": dynamo_backend}
if "use_cpu" not in config_dict:
config_dict["use_cpu"] = False
if "debug" not in config_dict:
config_dict["debug"] = False
extra_keys = sorted(set(config_dict.keys()) - set(cls.__dataclass_fields__.keys()))
if len(extra_keys) > 0:
raise ValueError(
f"The config file at {yaml_file} had unknown keys ({extra_keys}), please try upgrading your `accelerate`"
" version or fix (and potentially remove) these keys from your config file."
)
return cls(**config_dict)
def to_yaml_file(self, yaml_file):

View File

@ -66,7 +66,7 @@ def _convert_compute_environment(value):
def _convert_distributed_mode(value):
value = int(value)
return DistributedType(["NO", "MULTI_CPU", "MULTI_XPU", "MULTI_GPU", "MULTI_NPU", "TPU"][value])
return DistributedType(["NO", "MULTI_CPU", "MULTI_XPU", "MULTI_GPU", "TPU"][value])
def _convert_dynamo_backend(value):

View File

@ -18,7 +18,7 @@ from pathlib import Path
import torch
from ...utils import is_npu_available, is_xpu_available
from ...utils import is_xpu_available
from .config_args import ClusterConfig, default_json_config_file
from .config_utils import SubcommandHelpFormatter
@ -73,20 +73,11 @@ def write_basic_config(mixed_precision="no", save_location: str = default_json_c
config["distributed_type"] = "MULTI_XPU"
else:
config["distributed_type"] = "NO"
elif is_npu_available():
num_npus = torch.npu.device_count()
config["num_processes"] = num_npus
config["use_cpu"] = False
if num_npus > 1:
config["distributed_type"] = "MULTI_NPU"
else:
config["distributed_type"] = "NO"
else:
num_xpus = 0
config["use_cpu"] = True
config["num_processes"] = 1
config["distributed_type"] = "NO"
config["debug"] = False
config = ClusterConfig(**config)
config.to_json_file(path)
return path

View File

@ -221,15 +221,6 @@ def get_sagemaker_input():
ec2_instance_query += "? [ml.p3.2xlarge]:"
ec2_instance_type = _ask_field(ec2_instance_query, lambda x: str(x).lower(), default="ml.p3.2xlarge")
debug = False
if distributed_type != SageMakerDistributedType.NO:
debug = _ask_field(
"Should distributed operations be checked while running for errors? This can avoid timeout issues but will be slower. [yes/NO]: ",
_convert_yes_no_to_bool,
default=False,
error_message="Please enter yes or no.",
)
num_machines = 1
if distributed_type in (SageMakerDistributedType.DATA_PARALLEL, SageMakerDistributedType.MODEL_PARALLEL):
num_machines = _ask_field(
@ -263,5 +254,4 @@ def get_sagemaker_input():
num_machines=num_machines,
sagemaker_inputs_file=sagemaker_inputs_file,
sagemaker_metrics_file=sagemaker_metrics_file,
debug=debug,
)

View File

@ -25,7 +25,7 @@ import torch
from accelerate import __version__ as version
from accelerate.commands.config import default_config_file, load_config_from_file
from ..utils import is_npu_available, is_xpu_available
from ..utils import is_xpu_available
def env_command_parser(subparsers=None):
@ -47,7 +47,6 @@ def env_command(args):
pt_version = torch.__version__
pt_cuda_available = torch.cuda.is_available()
pt_xpu_available = is_xpu_available()
pt_npu_available = is_npu_available()
accelerate_config = "Not found"
# Get the default from the config file.
@ -61,7 +60,6 @@ def env_command(args):
"Numpy version": np.__version__,
"PyTorch version (GPU?)": f"{pt_version} ({pt_cuda_available})",
"PyTorch XPU available": str(pt_xpu_available),
"PyTorch NPU available": str(pt_npu_available),
"System RAM": f"{psutil.virtual_memory().total / 1024 ** 3:.2f} GB",
}
if pt_cuda_available:

View File

@ -1,270 +0,0 @@
#!/usr/bin/env python
# Copyright 2023 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
from huggingface_hub import model_info
from huggingface_hub.utils import GatedRepoError, RepositoryNotFoundError
from accelerate import init_empty_weights
from accelerate.utils import (
calculate_maximum_sizes,
convert_bytes,
is_timm_available,
is_transformers_available,
)
if is_transformers_available():
import transformers
from transformers import AutoConfig, AutoModel
if is_timm_available():
import timm
def verify_on_hub(repo: str, token: str = None):
"Verifies that the model is on the hub and returns the model info."
try:
return model_info(repo, token=token)
except GatedRepoError:
return "gated"
except RepositoryNotFoundError:
return "repo"
def check_has_model(error):
"""
Checks what library spawned `error` when a model is not found
"""
if is_timm_available() and isinstance(error, RuntimeError) and "Unknown model" in error.args[0]:
return "timm"
elif (
is_transformers_available()
and isinstance(error, OSError)
and "does not appear to have a file named" in error.args[0]
):
return "transformers"
else:
return "unknown"
def create_empty_model(model_name: str, library_name: str, trust_remote_code: bool = False, access_token: str = None):
"""
Creates an empty model from its parent library on the `Hub` to calculate the overall memory consumption.
Args:
model_name (`str`):
The model name on the Hub
library_name (`str`):
The library the model has an integration with, such as `transformers`. Will be used to determine the
library if `model_name` has no metadata on the Hub.
trust_remote_code (`bool`, *optional*, defaults to `False`):
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option
should only be set to `True` for repositories you trust and in which you have read the code, as it will
execute code present on the Hub on your local machine.
access_token (`str`, *optional*, defaults to `None`):
The access token to use to access private or gated models on the Hub. (for use on the Gradio app)
Returns:
`torch.nn.Module`: The torch model that has been initialized on the `meta` device.
"""
model_info = verify_on_hub(model_name, access_token)
# Simplified errors
if model_info == "gated":
raise GatedRepoError(
f"Repo for model `{model_name}` is gated. You must be authenticated to access it. Please run `huggingface-cli login`."
)
elif model_info == "repo":
raise RepositoryNotFoundError(
f"Repo for model `{model_name}` does not exist on the Hub. If you are trying to access a private repo,"
" make sure you are authenticated via `huggingface-cli login` and have access."
)
if library_name is None:
library_name = getattr(model_info, "library_name", False)
if not library_name:
raise ValueError(
f"Model `{model_name}` does not have any library metadata on the Hub, please manually pass in a `--library_name` to use (such as `transformers`)"
)
if library_name == "transformers":
if not is_transformers_available():
raise ImportError(
f"To check `{model_name}`, `transformers` must be installed. Please install it via `pip install transformers`"
)
print(f"Loading pretrained config for `{model_name}` from `transformers`...")
auto_map = model_info.config.get("auto_map", False)
config = AutoConfig.from_pretrained(model_name, trust_remote_code=trust_remote_code)
with init_empty_weights():
# remote code could specify a specific `AutoModel` class in the `auto_map`
constructor = AutoModel
if isinstance(auto_map, dict):
value = None
for key in auto_map.keys():
if key.startswith("AutoModelFor"):
value = key
break
if value is not None:
constructor = getattr(transformers, value)
model = constructor.from_config(config, trust_remote_code=trust_remote_code)
elif library_name == "timm":
if not is_timm_available():
raise ImportError(
f"To check `{model_name}`, `timm` must be installed. Please install it via `pip install timm`"
)
print(f"Loading pretrained config for `{model_name}` from `timm`...")
with init_empty_weights():
model = timm.create_model(model_name, pretrained=False)
else:
raise ValueError(
f"Library `{library_name}` is not supported yet, please open an issue on GitHub for us to add support."
)
return model
def create_ascii_table(headers: list, rows: list, title: str):
"Creates a pretty table from a list of rows, minimal version of `tabulate`."
sep_char, in_between = "│", "─"
column_widths = []
for i in range(len(headers)):
column_values = [row[i] for row in rows] + [headers[i]]
max_column_width = max(len(value) for value in column_values)
column_widths.append(max_column_width)
formats = [f"%{column_widths[i]}s" for i in range(len(rows[0]))]
pattern = f"{sep_char}{sep_char.join(formats)}{sep_char}"
diff = 0
def make_row(left_char, middle_char, right_char):
return f"{left_char}{middle_char.join([in_between * n for n in column_widths])}{in_between * diff}{right_char}"
separator = make_row("├", "┼", "┤")
if len(title) > sum(column_widths):
diff = abs(len(title) - len(separator))
column_widths[-1] += diff
# Update with diff
separator = make_row("├", "┼", "┤")
initial_rows = [
make_row("┌", in_between, "┐"),
f"{sep_char}{title.center(len(separator) - 2)}{sep_char}",
make_row("├", "┬", "┤"),
]
table = "\n".join(initial_rows) + "\n"
column_widths[-1] += diff
centered_line = [text.center(column_widths[i]) for i, text in enumerate(headers)]
table += f"{pattern % tuple(centered_line)}\n{separator}\n"
for i, line in enumerate(rows):
centered_line = [t.center(column_widths[i]) for i, t in enumerate(line)]
table += f"{pattern % tuple(centered_line)}\n"
table += f'{"".join([in_between * n for n in column_widths])}'
return table
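A quick sketch of calling the helper above with hand-made rows (the values are illustrative, not tool output):

```python
print(
    create_ascii_table(
        headers=["dtype", "Total Size"],
        rows=[["float32", "4.0 GB"], ["float16", "2.0 GB"]],
        title="Memory Usage for loading `my-model`",
    )
)
```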
def estimate_command_parser(subparsers=None):
if subparsers is not None:
parser = subparsers.add_parser("estimate-memory")
else:
parser = argparse.ArgumentParser(description="Model size estimator for fitting a model onto CUDA memory.")
parser.add_argument("model_name", type=str, help="The model name on the Hugging Face Hub.")
parser.add_argument(
"--library_name",
type=str,
help="The library the model has an integration with, such as `transformers`, needed only if this information is not stored on the Hub.",
choices=["timm", "transformers"],
)
parser.add_argument(
"--dtypes",
type=str,
nargs="+",
default=["float32", "float16", "int8", "int4"],
help="The dtypes to use for the model, must be one (or many) of `float32`, `float16`, `int8`, and `int4`",
choices=["float32", "float16", "int8", "int4"],
)
parser.add_argument(
"--trust_remote_code",
action="store_true",
help="""Whether or not to allow for custom models defined on the Hub in their own modeling files. This flag
should only be used for repositories you trust and in which you have read the code, as it will execute
code present on the Hub on your local machine.""",
)
if subparsers is not None:
parser.set_defaults(func=estimate_command)
return parser
def gather_data(args):
"Creates an empty model and gathers the data for the sizes"
try:
model = create_empty_model(
args.model_name, library_name=args.library_name, trust_remote_code=args.trust_remote_code
)
except (RuntimeError, OSError) as e:
library = check_has_model(e)
if library != "unknown":
raise RuntimeError(
f"Tried to load `{args.model_name}` with `{library}` but a possible model to load was not found inside the repo."
)
raise e
total_size, largest_layer = calculate_maximum_sizes(model)
data = []
for dtype in args.dtypes:
dtype_total_size = total_size
dtype_largest_layer = largest_layer[0]
if dtype == "float16":
dtype_total_size /= 2
dtype_largest_layer /= 2
elif dtype == "int8":
dtype_total_size /= 4
dtype_largest_layer /= 4
elif dtype == "int4":
dtype_total_size /= 8
dtype_largest_layer /= 8
dtype_training_size = dtype_total_size * 4
data.append([dtype, dtype_largest_layer, dtype_total_size, dtype_training_size])
return data
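To make the scaling in `gather_data` concrete: relative to float32, float16 halves the bytes, int8 quarters them, int4 takes an eighth, and the training column is a rule-of-thumb 4x of the inference size (Adam keeps extra optimizer state). A worked example for a hypothetical 4 GB float32 model:

```python
total_fp32_gb = 4.0  # hypothetical float32 model size
inference = {
    "float32": total_fp32_gb,      # 4.0 GB
    "float16": total_fp32_gb / 2,  # 2.0 GB
    "int8": total_fp32_gb / 4,     # 1.0 GB
    "int4": total_fp32_gb / 8,     # 0.5 GB
}
training = {dtype: size * 4 for dtype, size in inference.items()}  # e.g. float16 -> 8.0 GB
```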
def estimate_command(args):
data = gather_data(args)
for row in data:
for i, item in enumerate(row):
if isinstance(item, (int, float)):
row[i] = convert_bytes(item)
headers = ["dtype", "Largest Layer", "Total Size", "Training using Adam"]
title = f"Memory Usage for loading `{args.model_name}`"
table = create_ascii_table(headers, data, title)
print(table)
def main():
parser = estimate_command_parser()
args = parser.parse_args()
estimate_command(args)
if __name__ == "__main__":
main()
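On versions that ship this file, the subparser above registers the tool as an `accelerate` subcommand, so an invocation looks roughly like this (the model name is illustrative):

```
accelerate estimate-memory bert-base-cased --dtypes float32 float16
```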

View File

@ -34,14 +34,10 @@ from accelerate.utils import (
DistributedType,
PrepareForLaunch,
_filter_args,
is_bf16_available,
is_deepspeed_available,
is_npu_available,
is_rich_available,
is_sagemaker_available,
is_torch_version,
is_tpu_available,
is_xpu_available,
patch_environment,
prepare_deepspeed_cmd_env,
prepare_multi_gpu_env,
@ -510,27 +506,6 @@ def launch_command_parser(subparsers=None):
type=str,
help="FSDP's state dict type. (useful only when `use_fsdp` flag is passed).",
)
fsdp_args.add_argument(
"--fsdp_forward_prefetch",
default="false",
type=str,
help="If True, then FSDP explicitly prefetches the next upcoming "
"all-gather while executing in the forward pass (useful only when `use_fsdp` flag is passed).",
)
fsdp_args.add_argument(
"--fsdp_use_orig_params",
default="false",
type=str,
help="If True, allows non-uniform `requires_grad` during init, which means support for interspersed frozen and trainable paramteres."
" (useful only when `use_fsdp` flag is passed).",
)
fsdp_args.add_argument(
"--fsdp_sync_module_states",
default="true",
type=str,
help="If True, each individually wrapped FSDP unit will broadcast module parameters from rank 0."
" (useful only when `use_fsdp` flag is passed).",
)
# megatron_lm args
megatron_lm_args = parser.add_argument_group("Megatron-LM Arguments", "Arguments related to Megatron-LM.")
@ -631,7 +606,13 @@ def simple_launcher(args):
def multi_gpu_launcher(args):
import torch.distributed.run as distrib_run
if is_torch_version(">=", "1.9.1"):
import torch.distributed.run as distrib_run
else:
raise NotImplementedError(
"Native multi-GPU training through `accelerate launch` requires pytorch>=1.9.1. "
"Please call `torch.distributed.launch` directly instead."
)
current_env = prepare_multi_gpu_env(args)
@ -654,8 +635,8 @@ def multi_gpu_launcher(args):
def deepspeed_launcher(args):
import torch.distributed.run as distrib_run
if is_torch_version(">=", "1.9.1"):
import torch.distributed.run as distrib_run
if not is_deepspeed_available():
raise ImportError("DeepSpeed is not installed => run `pip3 install deepspeed` or build it from source.")
@ -676,6 +657,9 @@ def deepspeed_launcher(args):
else:
sys.exit(1)
else:
if is_torch_version("<", "1.9.1"):
raise NotImplementedError("Multi-node training requires pytorch>=1.9.1")
debug = getattr(args, "debug", False)
args = _filter_args(
args,
@ -828,12 +812,7 @@ def _validate_launch_command(args):
and not args.use_megatron_lm
):
args.use_deepspeed = defaults.distributed_type == DistributedType.DEEPSPEED
args.multi_gpu = (
True
if defaults.distributed_type
in (DistributedType.MULTI_GPU, DistributedType.MULTI_NPU, DistributedType.MULTI_XPU)
else False
)
args.multi_gpu = defaults.distributed_type == DistributedType.MULTI_GPU
args.tpu = defaults.distributed_type == DistributedType.TPU
args.use_fsdp = defaults.distributed_type == DistributedType.FSDP
args.use_megatron_lm = defaults.distributed_type == DistributedType.MEGATRON_LM
@ -877,44 +856,21 @@ def _validate_launch_command(args):
and getattr(args, name, None) is None
):
setattr(args, name, attr)
if not args.debug:
args.debug = defaults.debug
if not args.mixed_precision:
if defaults.mixed_precision is None:
args.mixed_precision = "no"
else:
args.mixed_precision = defaults.mixed_precision
mp_from_config_flag = True
else:
native_amp = False
err = "{mode} mixed precision requires {requirement}"
if args.use_cpu or (args.use_xpu and torch.xpu.is_available()):
native_amp = is_torch_version(">=", "1.10")
else:
native_amp = is_bf16_available(True)
if args.mixed_precision == "bf16" and not native_amp and not (args.tpu and is_tpu_available()):
raise ValueError(err.format(mode="bf16", requirement="PyTorch >= 1.10 and a supported device."))
# Silently set the default here
if args.dynamo_backend is None:
args.dynamo_backend = "no"
else:
if args.num_processes is None:
if args.use_xpu and is_xpu_available():
args.num_processes = torch.xpu.device_count()
elif is_npu_available():
args.num_processes = torch.npu.device_count()
else:
args.num_processes = torch.cuda.device_count()
args.num_processes = torch.cuda.device_count()
warned.append(f"\t`--num_processes` was set to a value of `{args.num_processes}`")
if args.debug is None:
args.debug = False
if not args.multi_gpu and (
(args.use_xpu and is_xpu_available() and torch.xpu.device_count() > 1)
or (is_npu_available() and torch.npu.device_count() > 1)
or (torch.cuda.device_count() > 1)
):
if torch.cuda.device_count() > 1 and not args.multi_gpu:
warned.append(
"\t\tMore than one GPU was found, enabling multi-GPU training.\n"
"\t\tIf this was unintended please pass in `--num_processes=1`."
@ -931,8 +887,6 @@ def _validate_launch_command(args):
if args.dynamo_backend is None:
warned.append("\t`--dynamo_backend` was set to a value of `'no'`")
args.dynamo_backend = "no"
if args.debug:
logger.debug("Running script in debug mode, expect distributed operations to be slightly slower.")
is_aws_env_disabled = defaults is None or (
defaults is not None and defaults.compute_environment != ComputeEnvironment.AMAZON_SAGEMAKER
@ -962,6 +916,7 @@ def _validate_launch_command(args):
def launch_command(args):
args, defaults, mp_from_config_flag = _validate_launch_command(args)
# Use the proper launcher
if args.use_deepspeed and not args.cpu:
args.deepspeed_fields_from_accelerate_config = list(defaults.deepspeed_config.keys()) if defaults else []

View File

@ -15,22 +15,13 @@
"""
Main driver for the selection menu, based on https://github.com/bchao1/bullet
"""
import builtins
import sys
from ...utils.imports import _is_package_available
from . import cursor, input
from .helpers import Direction, clear_line, forceWrite, linebreak, move_cursor, reset_cursor, writeColor
from .keymap import KEYMAP
in_colab = False
try:
in_colab = _is_package_available("google.colab")
except ModuleNotFoundError:
pass
@input.register
class BulletMenu:
"""
@ -116,10 +107,7 @@ class BulletMenu:
if self.prompt:
linebreak()
forceWrite(self.prompt, "\n")
if in_colab:
forceWrite("Please input a choice index (starting from 0), and press enter", "\n")
else:
forceWrite("Please select a choice using the arrow or number keys, and selecting with enter", "\n")
forceWrite("Please select a choice using the arrow or number keys, and selecting with enter", "\n")
self.position = default_choice
for i in range(len(self.choices)):
self.print_choice(i)
@ -127,13 +115,7 @@ class BulletMenu:
move_cursor(len(self.choices) - self.position, "UP")
with cursor.hide():
while True:
if in_colab:
try:
choice = int(builtins.input())
except ValueError:
choice = default_choice
else:
choice = self.handle_input()
choice = self.handle_input()
if choice is not None:
reset_cursor()
for _ in range(len(self.choices) + 1):

View File

@ -14,7 +14,7 @@
import math
from contextlib import suppress
from typing import Callable, List, Optional, Union
from typing import List, Optional, Union
import torch
from torch.utils.data import BatchSampler, DataLoader, IterableDataset
@ -52,12 +52,12 @@ _PYTORCH_DATALOADER_KWARGS = {
"worker_init_fn": None,
"multiprocessing_context": None,
"generator": None,
"prefetch_factor": 2,
"persistent_workers": False,
}
# kwargs added after by version
_PYTORCH_DATALOADER_ADDITIONAL_KWARGS = {}
_PYTORCH_DATALOADER_ADDITIONAL_KWARGS = {
"1.7.0": {"prefetch_factor": 2, "persistent_workers": False},
}
for v, additional_kwargs in _PYTORCH_DATALOADER_ADDITIONAL_KWARGS.items():
if is_torch_version(">=", v):
@ -320,18 +320,6 @@ class DataLoaderStateMixin:
self.end_of_dataloader = False
self.remainder = -1
def begin(self):
"Prepares the gradient state for the current dataloader"
self.reset()
with suppress(Exception):
length = getattr(self.dataset, "total_dataset_length", len(self.dataset))
self.remainder = length % self.total_batch_size
self.gradient_state._add_dataloader(self)
def end(self):
"Cleans up the gradient state after exiting the dataloader"
self.gradient_state._remove_dataloader(self)
class DataLoaderShard(DataLoader, DataLoaderStateMixin):
"""
@ -377,7 +365,12 @@ class DataLoaderShard(DataLoader, DataLoaderStateMixin):
def __iter__(self):
if self.rng_types is not None:
synchronize_rng_states(self.rng_types, self.synchronized_generator)
self.begin()
self.reset()
self.gradient_state._add_dataloader(self)
# We can safely pass because the default is -1
with suppress(Exception):
length = getattr(self.dataset, "total_dataset_length", len(self.dataset))
self.remainder = length % self.total_batch_size
dataloader_iter = super().__iter__()
# We iterate one batch ahead to check when we are at the end
try:
@ -401,7 +394,7 @@ class DataLoaderShard(DataLoader, DataLoaderStateMixin):
if batch_index >= self.skip_batches:
yield current_batch
break
self.end()
self.gradient_state._remove_dataloader(self)
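The "one batch ahead" trick referenced above can be sketched in isolation; this is a generic look-ahead generator, not the exact class code:

```python
def lookahead(iterable):
    "Yields (item, is_last) pairs by fetching one item ahead, as DataLoaderShard.__iter__ does."
    it = iter(iterable)
    try:
        current = next(it)
    except StopIteration:
        return
    for nxt in it:
        yield current, False  # more batches remain
        current = nxt
    yield current, True  # the last batch: end_of_dataloader would be set here
```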
@property
def total_batch_size(self):
@ -485,9 +478,7 @@ class DataLoaderDispatcher(DataLoader, DataLoaderStateMixin):
- **total_dataset_length** (`int`) -- Total length of the inner dataset across all processes.
"""
def __init__(
self, dataset, split_batches: bool = False, skip_batches=0, _drop_last: bool = False, slice_fn=None, **kwargs
):
def __init__(self, dataset, split_batches: bool = False, skip_batches=0, _drop_last: bool = False, **kwargs):
shuffle = False
if is_torch_version(">=", "1.11.0"):
from torch.utils.data.datapipes.iter.combinatorics import ShufflerIterDataPipe
@ -497,6 +488,10 @@ class DataLoaderDispatcher(DataLoader, DataLoaderStateMixin):
shuffle = dataset._shuffle_enabled
super().__init__(dataset, **kwargs)
self.split_batches = split_batches
if is_torch_version("<", "1.8.0"):
raise ImportError(
f"Using `DataLoaderDispatcher` requires PyTorch 1.8.0 minimum. You have {torch.__version__}."
)
if shuffle:
torch.utils.data.graph_settings.apply_shuffle_settings(dataset, shuffle=shuffle)
@ -504,8 +499,10 @@ class DataLoaderDispatcher(DataLoader, DataLoaderStateMixin):
self.state = AcceleratorState()
self._drop_last = _drop_last
self.skip_batches = skip_batches
self.slice_fn = slice_tensors if slice_fn is None else slice_fn
# We can safely pass because the default is -1
with suppress(Exception):
length = getattr(self.dataset, "total_dataset_length", len(self.dataset))
self.remainder = length % self.total_batch_size
def _fetch_batches(self, iterator):
batches, batch = None, None
@ -545,14 +542,10 @@ class DataLoaderDispatcher(DataLoader, DataLoaderStateMixin):
return batch, batch_info
def __iter__(self):
self.begin()
self.gradient_state._add_dataloader(self)
main_iterator = None
if is_torch_version(">=", "2.0.1"):
# NOTE PyTorch DataLoader adds forward compatibility for DataPipes, which broadcasts
# a shared seed to all dist processes. Thus, we need to create the iterator on all dist processes.
# But, we only iterate through the DataLoader on process 0.
main_iterator = super().__iter__()
elif self.state.process_index == 0:
if self.state.process_index == 0:
# We only iterate through the DataLoader on process 0.
main_iterator = super().__iter__()
stop_iteration = False
self._stop_iteration = False
@ -571,12 +564,7 @@ class DataLoaderDispatcher(DataLoader, DataLoaderStateMixin):
if not self._drop_last and first_batch is None:
# We keep at least num processes elements of the first batch to be able to complete the last batch
first_batch = self.slice_fn(
batch,
slice(0, self.state.num_processes),
process_index=self.state.process_index,
num_processes=self.state.num_processes,
)
first_batch = slice_tensors(batch, slice(0, self.state.num_processes))
if batch is None:
raise ValueError(
@ -602,12 +590,7 @@ class DataLoaderDispatcher(DataLoader, DataLoaderStateMixin):
batch_size += 1
data_slice = slice(self.state.process_index * batch_size, (self.state.process_index + 1) * batch_size)
batch = self.slice_fn(
batch,
data_slice,
process_index=self.state.process_index,
num_processes=self.state.num_processes,
)
batch = slice_tensors(batch, data_slice)
if stop_iteration:
self.end_of_dataloader = True
@ -615,7 +598,7 @@ class DataLoaderDispatcher(DataLoader, DataLoaderStateMixin):
if batch_index >= self.skip_batches:
yield batch
batch_index += 1
self.end()
self.gradient_state._remove_dataloader(self)
def __len__(self):
whole_length = super().__len__()
@ -647,7 +630,6 @@ def prepare_data_loader(
rng_types: Optional[List[Union[str, RNGType]]] = None,
dispatch_batches: Optional[bool] = None,
even_batches: bool = True,
slice_fn_for_dispatch: Optional[Callable] = None,
) -> DataLoader:
"""
Wraps a PyTorch `DataLoader` to generate batches for one of the processes only.
@ -697,10 +679,6 @@ def prepare_data_loader(
If set to `True`, in cases where the total batch size across all processes does not exactly divide the
dataset, samples at the start of the dataset will be duplicated so the batch can be divided equally among
all workers.
slice_fn_for_dispatch (`Callable`, *optional*):
If passed, this function will be used to slice tensors across `num_processes`. Will default to
[`~utils.slice_tensors`]. This argument is used only when `dispatch_batches` is set to `True` and will be
ignored otherwise.
Returns:
`torch.utils.data.dataloader.DataLoader`: A new data loader that will yield the portion of the batches
@ -713,7 +691,7 @@ def prepare_data_loader(
</Tip>
"""
if dispatch_batches is None:
if not put_on_device:
if is_torch_version("<", "1.8.0") or not put_on_device:
dispatch_batches = False
else:
dispatch_batches = isinstance(dataloader.dataset, IterableDataset)
@ -805,7 +783,6 @@ def prepare_data_loader(
split_batches=split_batches,
batch_sampler=new_batch_sampler,
_drop_last=dataloader.drop_last,
slice_fn=slice_fn_for_dispatch,
**kwargs,
)
elif sampler_is_batch_sampler:

View File

@ -155,17 +155,17 @@ def add_hook_to_module(module: nn.Module, hook: ModelHook, append: bool = False)
module = hook.init_hook(module)
module._hf_hook = hook
def new_forward(module, *args, **kwargs):
@functools.wraps(old_forward)
def new_forward(*args, **kwargs):
args, kwargs = module._hf_hook.pre_forward(module, *args, **kwargs)
if module._hf_hook.no_grad:
with torch.no_grad():
output = module._old_forward(*args, **kwargs)
output = old_forward(*args, **kwargs)
else:
output = module._old_forward(*args, **kwargs)
output = old_forward(*args, **kwargs)
return module._hf_hook.post_forward(module, output)
module.forward = functools.update_wrapper(functools.partial(new_forward, module), old_forward)
module.forward = new_forward
return module
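A minimal sketch of attaching a custom hook through this API (assuming `ModelHook` and `add_hook_to_module` are imported from `accelerate.hooks`, where this code lives):

```python
import torch
import torch.nn as nn

from accelerate.hooks import ModelHook, add_hook_to_module


class LoggingHook(ModelHook):
    def pre_forward(self, module, *args, **kwargs):
        print(f"entering {module.__class__.__name__}")
        return args, kwargs

    def post_forward(self, module, output):
        print("leaving forward")
        return output


layer = add_hook_to_module(nn.Linear(4, 4), LoggingHook())
_ = layer(torch.randn(2, 4))  # prints around the wrapped forward
```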
@ -242,7 +242,7 @@ class AlignDevicesHook(ModelHook):
def __repr__(self):
return (
f"AlignDevicesHook(execution_device={self.execution_device}, offload={self.offload}, "
f"AlignDeviceHook(execution_device={self.execution_device}, offload={self.offload}, "
f"io_same_device={self.io_same_device}, offload_buffers={self.offload_buffers}, "
f"place_submodules={self.place_submodules}, skip_keys={repr(self.skip_keys)})"
)
@ -279,13 +279,7 @@ class AlignDevicesHook(ModelHook):
for name, _ in named_module_tensors(
module, include_buffers=self.offload_buffers, recurse=self.place_submodules
):
fp16_statistics = None
if "weight" in name and name.replace("weight", "SCB") in self.weights_map.keys():
if self.weights_map[name].dtype == torch.int8:
fp16_statistics = self.weights_map[name.replace("weight", "SCB")]
set_module_tensor_to_device(
module, name, self.execution_device, value=self.weights_map[name], fp16_statistics=fp16_statistics
)
set_module_tensor_to_device(module, name, self.execution_device, value=self.weights_map[name])
return send_to_device(args, self.execution_device), send_to_device(
kwargs, self.execution_device, skip_keys=self.skip_keys
@ -297,9 +291,6 @@ class AlignDevicesHook(ModelHook):
module, include_buffers=self.offload_buffers, recurse=self.place_submodules
):
set_module_tensor_to_device(module, name, "meta")
if type(module).__name__ == "Linear8bitLt":
module.state.SCB = None
module.state.CxB = None
if self.io_same_device and self.input_device is not None:
output = send_to_device(output, self.input_device, skip_keys=self.skip_keys)
@ -311,7 +302,6 @@ class AlignDevicesHook(ModelHook):
for name, device in self.original_devices.items():
if device != torch.device("meta"):
set_module_tensor_to_device(module, name, device, value=self.weights_map.get(name, None))
return module
def attach_execution_device_hook(
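For orientation, the forward-patching above serves the hook protocol sketched here. `ModelHook`, `add_hook_to_module`, `pre_forward`, and `post_forward` are the interfaces this file defines; the shape-printing hook itself is a made-up example.

import torch
import torch.nn as nn
from accelerate.hooks import ModelHook, add_hook_to_module

class PrintShapesHook(ModelHook):
    # Hypothetical hook: report tensor shapes entering and leaving forward.
    def pre_forward(self, module, *args, **kwargs):
        print("in:", [tuple(a.shape) for a in args if torch.is_tensor(a)])
        return args, kwargs

    def post_forward(self, module, output):
        print("out:", tuple(output.shape))
        return output

layer = add_hook_to_module(nn.Linear(4, 2), PrintShapesHook())
_ = layer(torch.randn(3, 4))  # pre_forward -> original forward -> post_forward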

View File

@ -18,37 +18,20 @@ import tempfile
import torch
from .state import AcceleratorState, PartialState
from .state import AcceleratorState
from .utils import PrecisionType, PrepareForLaunch, is_mps_available, patch_environment
def test_launch():
"Verify a `PartialState` can be initialized."
_ = PartialState()
def notebook_launcher(
function,
args=(),
num_processes=None,
mixed_precision="no",
use_port="29500",
master_addr="127.0.0.1",
node_rank=0,
num_nodes=1,
):
def notebook_launcher(function, args=(), num_processes=None, mixed_precision="no", use_port="29500"):
"""
Launches a training function, using several processes or multiple nodes if it's possible in the current environment
(TPU with multiple cores for instance).
Launches a training function, using several processes if it's possible in the current environment (TPU with
multiple cores for instance).
<Tip warning={true}>
To use this function, absolutely zero calls to a CUDA device must be made in the notebook session before calling it. If
any have been made, you will need to restart the notebook and make sure no cells use any CUDA capability.
Setting `ACCELERATE_DEBUG_MODE="1"` in your environment will run a test before truly launching to ensure that none
of those calls have been made.
</Tip>
Args:
@ -64,12 +47,6 @@ def notebook_launcher(
If `fp16` or `bf16`, will use mixed precision training on multi-GPU.
use_port (`str`, *optional*, defaults to `"29500"`):
The port to use to communicate between processes when launching a multi-GPU training.
master_addr (`str`, *optional*, defaults to `"127.0.0.1"`):
The address to use for communication between processes.
node_rank (`int`, *optional*, defaults to 0):
The rank of the current node.
num_nodes (`int`, *optional*, defaults to 1):
The number of nodes to use for training.
Example:
@ -129,8 +106,7 @@ def notebook_launcher(
raise ValueError(
"You have to specify the number of GPUs you would like to use, add `num_processes=...` to your call."
)
if node_rank >= num_nodes:
raise ValueError("The node_rank must be less than the number of nodes.")
if num_processes > 1:
# Multi-GPU launch
from torch.multiprocessing import start_processes
@ -142,33 +118,19 @@ def notebook_launcher(
"inside your training function. Restart your notebook and make sure no cells initializes an "
"`Accelerator`."
)
if torch.cuda.is_initialized():
raise ValueError(
"To launch a multi-GPU training from your notebook, you need to avoid running any instruction "
"using `torch.cuda` in any cell. Restart your notebook and make sure no cells use any CUDA "
"function."
)
# torch.distributed will expect a few environment variables to be here. We set the ones common to each
# process here (the other ones will be set by the launcher).
with patch_environment(
nproc=num_processes,
node_rank=node_rank,
world_size=num_nodes * num_processes,
master_addr=master_addr,
master_port=use_port,
mixed_precision=mixed_precision,
world_size=num_processes, master_addr="127.0.0.1", master_port=use_port, mixed_precision=mixed_precision
):
# First dummy launch
if os.environ.get("ACCELERATE_DEBUG_MODE", "false").lower() == "true":
launcher = PrepareForLaunch(test_launch, distributed_type="MULTI_GPU")
try:
start_processes(launcher, args=(), nprocs=num_processes, start_method="fork")
except ProcessRaisedException as e:
err = "An issue was found when verifying a stable environment for the notebook launcher."
if "Cannot re-initialize CUDA in forked subprocess" in e.args[0]:
raise RuntimeError(
f"{err}"
"This likely stems from an outside import causing issues once the `notebook_launcher()` is called. "
"Please review your imports and test them when running the `notebook_launcher()` to identify "
"which one is problematic and causing CUDA to be initialized."
) from e
else:
raise RuntimeError(f"{err} The following error was raised: {e}") from e
# Now the actual launch
launcher = PrepareForLaunch(function, distributed_type="MULTI_GPU")
print(f"Launching training on {num_processes} GPUs.")
try:
@ -179,10 +141,8 @@ def notebook_launcher(
"CUDA has been initialized before the `notebook_launcher` could create a forked subprocess. "
"This likely stems from an outside import causing issues once the `notebook_launcher()` is called. "
"Please review your imports and test them when running the `notebook_launcher()` to identify "
"which one is problematic and causing CUDA to be initialized."
"which one is problematic."
) from e
else:
raise RuntimeError(f"An issue was found when launching the training: {e}") from e
else:
# No need for a distributed launch otherwise as it's either CPU, GPU or MPS.
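For reference, the multi-node arguments stripped out above were used roughly as follows; a sketch only, with the training function, per-node process count, and rank-0 address all placeholders.

from accelerate import notebook_launcher

def train():
    ...  # placeholder; must not touch CUDA before the launcher forks

# On the rank-0 machine of a hypothetical 2-node, 8-GPU-per-node cluster:
notebook_launcher(
    train,
    num_processes=8,          # processes per node
    master_addr="10.0.0.1",   # placeholder rank-0 address
    node_rank=0,
    num_nodes=2,
)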

View File

@ -108,5 +108,4 @@ def get_logger(name: str, log_level: str = None):
logger = logging.getLogger(name)
if log_level is not None:
logger.setLevel(log_level.upper())
logger.root.setLevel(log_level.upper())
return MultiProcessAdapter(logger, {})
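Usage of the adapter returned here, for context: `log_level` is any standard logging level name, and `main_process_only` (default `True` on `MultiProcessAdapter`) controls whether duplicate records are emitted on every rank.

from accelerate.logging import get_logger

logger = get_logger(__name__, log_level="INFO")
logger.info("Emitted once, from the main process only")
logger.info("Emitted on every process", main_process_only=False)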

View File

@ -18,7 +18,7 @@ import warnings
import torch
from .state import AcceleratorState, GradientState
from .utils import DistributedType, honor_type, is_tpu_available
from .utils import DistributedType, honor_type, is_torch_version, is_tpu_available
if is_tpu_available(check_device=False):
@ -60,11 +60,6 @@ class AcceleratedOptimizer(torch.optim.Optimizer):
self.device_placement = device_placement
self._is_overflow = False
if self.scaler is not None:
self._accelerate_step_called = False
self._optimizer_original_step_method = self.optimizer.step
self._optimizer_patched_step_method = patch_optimizer_step(self, self.optimizer.step)
# Handle device placement
if device_placement:
state_dict = self.optimizer.state_dict()
@ -111,15 +106,23 @@ class AcceleratedOptimizer(torch.optim.Optimizer):
def zero_grad(self, set_to_none=None):
if self.gradient_state.sync_gradients:
accept_arg = "set_to_none" in inspect.signature(self.optimizer.zero_grad).parameters
if accept_arg:
if set_to_none is None:
set_to_none = False
self.optimizer.zero_grad(set_to_none=set_to_none)
else:
if is_torch_version("<", "1.7.0"):
if set_to_none is not None:
raise ValueError("`set_to_none` for `Optimizer.zero_grad` is not supported by this optimizer.")
raise ValueError(
"`set_to_none` for `Optimizer.zero_grad` was introduced in PyTorch 1.7.0 and can't be used for "
f"earlier versions (found version {torch.__version__})."
)
self.optimizer.zero_grad()
else:
accept_arg = "set_to_none" in inspect.signature(self.optimizer.zero_grad).parameters
if accept_arg:
if set_to_none is None:
set_to_none = False
self.optimizer.zero_grad(set_to_none=set_to_none)
else:
if set_to_none is not None:
raise ValueError("`set_to_none` for `Optimizer.zero_grad` is not supported by this optimizer.")
self.optimizer.zero_grad()
def step(self, closure=None):
if self.gradient_state.sync_gradients:
@ -127,20 +130,12 @@ class AcceleratedOptimizer(torch.optim.Optimizer):
optimizer_args = {"closure": closure} if closure is not None else {}
xm.optimizer_step(self.optimizer, optimizer_args=optimizer_args)
elif self.scaler is not None:
self.optimizer.step = self._optimizer_patched_step_method
scale_before = self.scaler.get_scale()
self.scaler.step(self.optimizer, closure)
self.scaler.update()
if not self._accelerate_step_called:
# If the optimizer step was skipped, gradient overflow was detected.
self._is_overflow = True
else:
self._is_overflow = False
# Reset the step method to the original one
self.optimizer.step = self._optimizer_original_step_method
# Reset the indicator
self._accelerate_step_called = False
scale_after = self.scaler.get_scale()
# If we reduced the loss scale, it means the optimizer step was skipped because of gradient overflow.
self._is_overflow = scale_after < scale_before
else:
self.optimizer.step(closure)
@ -164,24 +159,7 @@ class AcceleratedOptimizer(torch.optim.Optimizer):
return self._is_overflow
def __getstate__(self):
_ignored_keys = [
"_accelerate_step_called",
"_optimizer_original_step_method",
"_optimizer_patched_step_method",
]
return {k: v for k, v in self.__dict__.items() if k not in _ignored_keys}
return self.__dict__.copy()
def __setstate__(self, state):
self.__dict__.update(state)
if self.scaler is not None:
self._accelerate_step_called = False
self._optimizer_original_step_method = self.optimizer.step
self._optimizer_patched_step_method = patch_optimizer_step(self, self.optimizer.step)
def patch_optimizer_step(accelerated_optimizer: AcceleratedOptimizer, method):
def patched_step(*args, **kwargs):
accelerated_optimizer._accelerate_step_called = True
return method(*args, **kwargs)
return patched_step
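The scale-comparison idiom the release branch keeps (in place of the patched step method above) can be reproduced standalone; a minimal sketch assuming a CUDA device, with the model, data, and learning rate as placeholders.

import torch

model = torch.nn.Linear(4, 1).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler()

with torch.cuda.amp.autocast():
    loss = model(torch.randn(8, 4, device="cuda")).mean()
scaler.scale(loss).backward()

scale_before = scaler.get_scale()
scaler.step(optimizer)  # silently skipped if inf/nan gradients are found
scaler.update()
# A reduced scale after update() means the step above was skipped.
step_was_skipped = scaler.get_scale() < scale_before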

View File

@ -35,7 +35,6 @@ from .utils import (
is_fp8_available,
is_ipex_available,
is_mps_available,
is_npu_available,
is_tpu_available,
is_xpu_available,
parse_choice_from_env,
@ -48,10 +47,6 @@ if is_tpu_available(check_device=False):
import torch_xla.core.xla_model as xm
if is_npu_available(check_device=False):
import torch_npu # noqa: F401
def is_initialized() -> bool:
"""
Checks if the `AcceleratorState` has been initialized from `Accelerator`. Same as `AcceleratorState.initialized`,
@ -110,13 +105,12 @@ class PartialState:
in use.
- **local_process_index** (`int`) -- The index of the current process on the current server.
- **mixed_precision** (`str`) -- Whether or not the current script will use mixed precision, and if so the type
of mixed precision being performed. (Choose from 'no', 'fp16', 'bf16', or 'fp8').
of mixed precision being performed.
- **num_processes** (`int`) -- The number of processes currently launched in parallel.
- **process_index** (`int`) -- The index of the current process.
- **is_last_process** (`bool`) -- Whether or not the current process is the last one.
- **is_main_process** (`bool`) -- Whether or not the current process is the main one.
- **is_local_main_process** (`bool`) -- Whether or not the current process is the main one on the local node.
- **debug** (`bool`) -- Whether or not the current script is being run in debug mode.
"""
_shared_state = SharedDict()
@ -128,7 +122,6 @@ class PartialState:
self.backend = None
env_device = os.environ.get("ACCELERATE_TORCH_DEVICE", None)
self.device = torch.device(env_device) if env_device is not None else None
self.debug = parse_flag_from_env("ACCELERATE_DEBUG_MODE")
use_sagemaker_dp = kwargs.pop("_use_sagemaker_dp", None)
if use_sagemaker_dp is None:
use_sagemaker_dp = (
@ -172,11 +165,7 @@ class PartialState:
# DeepSpeed always uses nccl
kwargs.pop("backend", None)
if is_xpu_available() and is_ccl_available():
# Set DeepSpeed backend to ccl for xpu
self.backend = "ccl"
else:
self.backend = "nccl"
self.backend = "nccl"
dist.init_distributed(dist_backend=self.backend, auto_mpi_discovery=False, **kwargs)
self.num_processes = torch.distributed.get_world_size()
@ -192,7 +181,7 @@ class PartialState:
if self.device is not None:
torch.cuda.set_device(self.device)
self._mixed_precision = "no" # deepspeed handles mixed_precision using deepspeed_config
elif int(os.environ.get("LOCAL_RANK", -1)) != -1 and not cpu and torch.cuda.is_available():
elif int(os.environ.get("LOCAL_RANK", -1)) != -1 and not cpu:
self.distributed_type = DistributedType.MULTI_GPU
if not torch.distributed.is_initialized():
self.backend = kwargs.pop("backend", "nccl")
@ -206,28 +195,12 @@ class PartialState:
if self.device is None:
self.device = torch.device("cuda", self.local_process_index)
torch.cuda.set_device(self.device)
elif is_npu_available() and not cpu and int(os.environ.get("LOCAL_RANK", -1)) != -1:
self.distributed_type = DistributedType.MULTI_NPU
if not torch.distributed.is_initialized():
# Backend is not set by the user, we set it here
kwargs.pop("backend", None)
self.backend = "hccl"
torch.distributed.init_process_group(backend=self.backend, **kwargs)
self.num_processes = torch.distributed.get_world_size()
self.process_index = torch.distributed.get_rank()
self.local_process_index = int(os.environ.get("LOCAL_RANK", -1))
if self.device is None:
self.device = torch.device("npu", self.local_process_index)
torch.npu.set_device(self.device)
elif get_int_from_env(["PMI_SIZE", "OMPI_COMM_WORLD_SIZE", "MV2_COMM_WORLD_SIZE", "WORLD_SIZE"], 1) > 1:
if not cpu and is_xpu_available():
self.distributed_type = DistributedType.MULTI_XPU
else:
self.distributed_type = DistributedType.MULTI_CPU
# Actually, CCL_WORKER_COUNT is a CPU only env var in CCL, no need to set it for XPU.
if is_ccl_available() and (
get_int_from_env(["CCL_WORKER_COUNT"], 0) > 0 or self.distributed_type == DistributedType.MULTI_XPU
):
if is_ccl_available() and get_int_from_env(["CCL_WORKER_COUNT"], 0) > 0:
if get_ccl_version() >= "1.12":
import oneccl_bindings_for_pytorch # noqa: F401
else:
@ -258,20 +231,6 @@ class PartialState:
"Looks like distributed multinode run but MASTER_ADDR env not set, "
"please try exporting rank 0's hostname as MASTER_ADDR"
)
if (
self.distributed_type == DistributedType.MULTI_CPU
and get_int_from_env(["OMP_NUM_THREADS", "MKL_NUM_THREADS"], 0) == 0
):
import psutil
num_cpu_threads_per_process = int(psutil.cpu_count(logical=False) / local_size)
if num_cpu_threads_per_process == 0:
num_cpu_threads_per_process = 1
torch.set_num_threads(num_cpu_threads_per_process)
warnings.warn(
f"OMP_NUM_THREADS/MKL_NUM_THREADS unset, we set it at {num_cpu_threads_per_process} to improve oob"
" performance."
)
if not torch.distributed.is_initialized():
# Backend is not set by the user, we set it here
kwargs.pop("backend", None)
@ -279,13 +238,9 @@ class PartialState:
torch.distributed.init_process_group(self.backend, rank=rank, world_size=size, **kwargs)
self.num_processes = torch.distributed.get_world_size()
self.process_index = torch.distributed.get_rank()
if cpu:
self.device = torch.device("cpu")
elif is_xpu_available():
self.device = torch.device("xpu", self.local_process_index)
torch.xpu.set_device(self.device)
else:
self.device = self.default_device
self.local_process_index = local_rank
if self.device is None:
self.device = torch.device("cpu") if cpu else self.default_device
else:
self.distributed_type = DistributedType.NO
self.num_processes = 1
@ -367,8 +322,6 @@ class PartialState:
"""
if self.distributed_type in (
DistributedType.MULTI_GPU,
DistributedType.MULTI_NPU,
DistributedType.MULTI_XPU,
DistributedType.MULTI_CPU,
DistributedType.DEEPSPEED,
DistributedType.FSDP,
@ -428,24 +381,23 @@ class PartialState:
if self.num_processes == 1:
yield inputs
return
length = len(inputs)
# Nested dictionary of any types
if isinstance(inputs, dict):
length = len(inputs[list(inputs.keys())[0]])
if not all(len(v) == length for v in inputs.values()):
raise ValueError("All values in the dictionary must have the same length")
num_samples_per_process = math.ceil(length / self.num_processes)
num_samples_per_process = math.ceil(len(inputs) / self.num_processes)
start_index = self.process_index * num_samples_per_process
end_index = start_index + num_samples_per_process
if (len(inputs) % self.num_processes != 0) and (self.process_index == self.num_processes - 1):
end_index = length
if isinstance(inputs, (list, tuple, torch.Tensor)):
end_index = len(inputs)
elif isinstance(inputs, dict):
end_index = len(inputs[list(inputs.keys())[0]])
def _split_values(inputs, start_index, end_index):
if isinstance(inputs, (list, tuple, torch.Tensor)):
if start_index >= len(inputs):
result = inputs[-1:]
else:
result = inputs[start_index:end_index]
result = inputs[start_index:end_index]
if apply_padding:
if isinstance(result, torch.Tensor):
from accelerate.utils import pad_across_processes, send_to_device
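The padding branch above is reached through the `split_between_processes` context manager; a small sketch of the list case under a two-process launch (the prompt strings are placeholders).

from accelerate import PartialState

state = PartialState()
prompts = ["a", "b", "c", "d", "e"]  # 5 items across 2 processes

with state.split_between_processes(prompts, apply_padding=True) as shard:
    # ceil(5 / 2) = 3 per rank: rank 0 -> ["a", "b", "c"],
    # rank 1 -> ["d", "e", "e"] (last element repeated as padding)
    print(f"rank {state.process_index}: {shard}")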
@ -675,7 +627,6 @@ class PartialState:
Returns the default device which is:
- MPS if `torch.backends.mps.is_available()` and `torch.backends.mps.is_built()` both return True.
- CUDA if `torch.cuda.is_available()`
- NPU if `is_npu_available()`
- CPU otherwise
"""
if is_mps_available():
@ -685,8 +636,6 @@ class PartialState:
return torch.device("cuda")
elif is_xpu_available():
return torch.device("xpu:0")
elif is_npu_available():
return torch.device("npu")
else:
return torch.device("cpu")
@ -703,13 +652,12 @@ class AcceleratorState:
- **initialized** (`bool`) -- Whether or not the `AcceleratorState` has been initialized from `Accelerator`.
- **local_process_index** (`int`) -- The index of the current process on the current server.
- **mixed_precision** (`str`) -- Whether or not the current script will use mixed precision, and if so the type
of mixed precision being performed. (Choose from 'no', 'fp16', 'bf16', or 'fp8').
of mixed precision being performed.
- **num_processes** (`int`) -- The number of processes currently launched in parallel.
- **process_index** (`int`) -- The index of the current process.
- **is_last_process** (`bool`) -- Whether or not the current process is the last one.
- **is_main_process** (`bool`) -- Whether or not the current process is the main one.
- **is_local_main_process** (`bool`) -- Whether or not the current process is the main one on the local node.
- **debug** (`bool`) -- Whether or not the current script is being run in debug mode.
"""
_shared_state = SharedDict()
@ -734,7 +682,6 @@ class AcceleratorState:
self._check_initialized(mixed_precision, cpu)
if not self.initialized:
self.deepspeed_plugin = None
self.use_ipex = None
mixed_precision = (
parse_choice_from_env("ACCELERATE_MIXED_PRECISION", "no")
if mixed_precision is None
@ -772,25 +719,12 @@ class AcceleratorState:
self.distributed_type = DistributedType.MEGATRON_LM
megatron_lm_plugin.set_mixed_precision(self._mixed_precision)
self.megatron_lm_plugin = megatron_lm_plugin
elif self.distributed_type == DistributedType.MULTI_NPU:
if os.environ.get("ACCELERATE_USE_FSDP", "false") == "true":
self.distributed_type = DistributedType.FSDP
if self._mixed_precision != "no":
fsdp_plugin.set_mixed_precision(self._mixed_precision)
self.fsdp_plugin = fsdp_plugin
elif self.distributed_type in [DistributedType.MULTI_CPU, DistributedType.MULTI_XPU, DistributedType.NO]:
if is_ipex_available():
"check if user disables it explicitly"
self.use_ipex = parse_flag_from_env("ACCELERATE_USE_IPEX", default=True)
else:
self.use_ipex = False
if self.distributed_type == DistributedType.MULTI_XPU:
if os.environ.get("ACCELERATE_USE_FSDP", "false") == "true":
self.distributed_type = DistributedType.FSDP
if self._mixed_precision != "no":
fsdp_plugin.set_mixed_precision(self._mixed_precision)
self.fsdp_plugin = fsdp_plugin
if (
self.dynamo_plugin.backend != DynamoBackend.NO
and self._mixed_precision == "no"
@ -955,12 +889,10 @@ class GradientState:
- **sync_gradients** (`bool`) -- Whether the gradients should be synced across all devices
- **active_dataloader** (`Optional[DataLoader]`) -- The dataloader that is currently being iterated over
- **dataloader_references** (`List[Optional[DataLoader]]`) -- A list of references to the dataloaders that are
being iterated over
being iterated over
- **num_steps** (`int`) -- The number of steps to accumulate over
- **adjust_scheduler** (`bool`) -- Whether the scheduler should be adjusted to account for the gradient
accumulation
- **sync_with_dataloader** (`bool`) -- Whether the gradients should be synced at the end of the dataloader
iteration and the number of total steps reset
accumulation
"""
_shared_state = SharedDict()
@ -989,11 +921,6 @@ class GradientState:
"Returns whether the scheduler should be adjusted"
return self.plugin_kwargs.get("adjust_scheduler", False)
@property
def sync_with_dataloader(self) -> bool:
"Returns whether the gradients should be synced at the end of the dataloader iteration and the number of total steps reset"
return self.plugin_kwargs.get("sync_with_dataloader", True)
@property
def initialized(self) -> bool:
"Returns whether the `GradientState` has been initialized"

View File

@ -1,8 +1,6 @@
from .testing import (
are_the_same_tensors,
assert_exception,
execute_subprocess_async,
require_bnb,
require_cpu,
require_cuda,
require_huggingface_suite,
@ -18,7 +16,7 @@ from .testing import (
skip,
slow,
)
from .training import RegressionDataset, RegressionModel, RegressionModel4XPU
from .training import RegressionDataset, RegressionModel
from .scripts import test_script, test_sync, test_ops # isort: skip

View File

@ -12,7 +12,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import math
import os
from copy import deepcopy
@ -22,11 +21,10 @@ import evaluate
import torch
import transformers
from datasets import load_dataset
from torch.utils.data import DataLoader, IterableDataset
from torch.utils.data import DataLoader
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from accelerate import Accelerator
from accelerate.data_loader import DataLoaderDispatcher
from accelerate.test_utils import RegressionDataset, RegressionModel
from accelerate.utils import is_tpu_available, set_seed
@ -34,15 +32,6 @@ from accelerate.utils import is_tpu_available, set_seed
os.environ["TRANSFORMERS_NO_ADVISORY_WARNINGS"] = "true"
class ListHandler(logging.Handler):
def __init__(self, *args, **kwargs):
super(ListHandler, self).__init__(*args, **kwargs)
self.logs = []
def emit(self, record):
self.logs.append(record)
def get_basic_setup(accelerator, num_samples=82, batch_size=16):
"Returns everything needed to perform basic training"
set_seed(42)
@ -149,76 +138,6 @@ def test_mrpc(dispatch_batches: bool = False, split_batches: bool = False):
), f"Baseline and Distributed are not the same for key {key}:\n\tBaseline: {baseline[key]}\n\tDistributed: {distributed[key]}\n"
def test_gather_for_metrics_with_non_tensor_objects_iterable_dataset():
class DummyIterableDataset(IterableDataset):
def __init__(self, data):
self.data = data
def __len__(self):
return len(self.data)
def __iter__(self):
for element in self.data:
yield element
iterable_dataset = DummyIterableDataset([n for n in range(30)])
dataloader = DataLoader(iterable_dataset, batch_size=4)
accelerator = Accelerator()
prepared_dataloader = accelerator.prepare(dataloader)
if accelerator.is_main_process:
logger = logging.root.manager.loggerDict["accelerate.accelerator"]
list_handler = ListHandler()
logger.addHandler(list_handler)
batches_for_metrics = []
for batch in prepared_dataloader:
batches_for_metrics.append(accelerator.gather_for_metrics(batch))
assert torch.cat(batches_for_metrics).size(0) == 30
if accelerator.is_main_process:
assert len(list_handler.logs) == 0
logger.removeHandler(list_handler)
def test_gather_for_metrics_with_iterable_dataset():
class DummyIterableDataset(IterableDataset):
def __init__(self, data):
self.data = data
def __len__(self):
return len(self.data)
def __iter__(self):
for element in self.data:
yield element
iterable_dataset = DummyIterableDataset(torch.as_tensor(range(30)))
dataloader = DataLoader(iterable_dataset, batch_size=4)
accelerator = Accelerator()
prepared_dataloader = accelerator.prepare(dataloader)
assert isinstance(prepared_dataloader, DataLoaderDispatcher)
if accelerator.is_main_process:
logger = logging.root.manager.loggerDict["accelerate.accelerator"]
list_handler = ListHandler()
logger.addHandler(list_handler)
batches_for_metrics = []
for batch in prepared_dataloader:
batches_for_metrics.append(accelerator.gather_for_metrics(batch))
assert torch.cat(batches_for_metrics).size(0) == 30
if accelerator.is_main_process:
assert len(list_handler.logs) == 0
logger.removeHandler(list_handler)
def main():
accelerator = Accelerator(split_batches=False, dispatch_batches=False)
if accelerator.is_local_main_process:
@ -237,10 +156,6 @@ def main():
print(f"With: `split_batches={split_batches}`, `dispatch_batches={dispatch_batches}`")
test_mrpc(dispatch_batches, split_batches)
accelerator.state._reset_state()
print("test_gather_for_metrics_with_iterable_dataset")
test_gather_for_metrics_with_iterable_dataset()
print("test gather_for_metrics_with_non_tensor_objects_iterable_dataset")
test_gather_for_metrics_with_non_tensor_objects_iterable_dataset()
if accelerator.is_local_main_process:
print("**Test torch metrics**")
for split_batches in [True, False]:
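The pattern the two removed tests exercised, written as user code; a self-contained sketch with a toy model and 30-sample dataset (both placeholders), showing that `gather_for_metrics` trims the duplicated samples a padded last batch would otherwise add.

import torch
from torch.utils.data import DataLoader
from accelerate import Accelerator

accelerator = Accelerator()
model = torch.nn.Linear(1, 1)
dataloader = DataLoader(torch.arange(30).float().unsqueeze(1), batch_size=4)
model, dataloader = accelerator.prepare(model, dataloader)

outputs = []
for batch in dataloader:
    with torch.no_grad():
        outputs.append(accelerator.gather_for_metrics(model(batch)))
# Exactly 30 rows survive, regardless of the number of processes.
assert torch.cat(outputs).size(0) == 30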

View File

@ -24,7 +24,6 @@ from torch.utils.data import DataLoader
from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_linear_schedule_with_warmup, set_seed
from accelerate import Accelerator, DistributedType
from accelerate.utils import is_npu_available, is_xpu_available
from accelerate.utils.deepspeed import DummyOptim, DummyScheduler
@ -41,34 +40,16 @@ def b2mb(x):
class TorchTracemalloc:
def __enter__(self):
gc.collect()
if torch.cuda.is_available():
torch.cuda.empty_cache()
torch.cuda.reset_max_memory_allocated() # reset the peak gauge to zero
self.begin = torch.cuda.memory_allocated()
elif is_npu_available():
torch.npu.empty_cache()
torch.npu.reset_max_memory_allocated() # reset the peak gauge to zero
self.begin = torch.npu.memory_allocated()
elif is_xpu_available():
torch.xpu.empty_cache()
torch.xpu.reset_max_memory_allocated() # reset the peak gauge to zero
self.begin = torch.xpu.memory_allocated()
torch.cuda.empty_cache()
torch.cuda.reset_max_memory_allocated() # reset the peak gauge to zero
self.begin = torch.cuda.memory_allocated()
return self
def __exit__(self, *exc):
gc.collect()
if torch.cuda.is_available():
torch.cuda.empty_cache()
self.end = torch.cuda.memory_allocated()
self.peak = torch.cuda.max_memory_allocated()
elif is_npu_available():
torch.npu.empty_cache()
self.end = torch.npu.memory_allocated()
self.peak = torch.npu.max_memory_allocated()
elif is_xpu_available():
torch.xpu.empty_cache()
self.end = torch.xpu.memory_allocated()
self.peak = torch.xpu.max_memory_allocated()
torch.cuda.empty_cache()
self.end = torch.cuda.memory_allocated()
self.peak = torch.cuda.max_memory_allocated()
self.used = b2mb(self.end - self.begin)
self.peaked = b2mb(self.peak - self.begin)
# print(f"delta used/peak {self.used:4d}/{self.peaked:4d}")
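For context, the helper above wraps a workload like so (the matmul is a placeholder, and a CUDA device is assumed); `used` and `peaked` come back already converted through `b2mb`.

import torch  # TorchTracemalloc, as defined above, is assumed in scope

with TorchTracemalloc() as tracemalloc:
    _ = torch.randn(1024, 1024, device="cuda") @ torch.randn(1024, 1024, device="cuda")
print(f"used: {tracemalloc.used} MB, peak delta: {tracemalloc.peaked} MB")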

View File

@ -1,17 +0,0 @@
# Test file to ensure that in general certain situational setups for notebooks work.
import argparse
from accelerate import PartialState, notebook_launcher
parser = argparse.ArgumentParser()
parser.add_argument("--num_processes", type=int, default=1)
args = parser.parse_args()
def function():
print(f"PartialState:\n{PartialState()}")
if __name__ == "__main__":
notebook_launcher(function, num_processes=int(args.num_processes))

View File

@ -17,16 +17,8 @@
import torch
from accelerate import PartialState
from accelerate.test_utils.testing import assert_exception
from accelerate.utils.dataclasses import DistributedType
from accelerate.utils.operations import (
DistributedOperationException,
broadcast,
gather,
gather_object,
pad_across_processes,
reduce,
)
from accelerate.utils.imports import is_torch_version
from accelerate.utils.operations import broadcast, gather, gather_object, pad_across_processes, reduce
def create_tensor(state):
@ -46,14 +38,6 @@ def test_gather_object(state):
assert gathered_obj == list(range(state.num_processes)), f"{gathered_obj} != {list(range(state.num_processes))}"
def test_gather_non_contiguous(state):
# Create a non-contiguous tensor
tensor = torch.arange(12).view(4, 3).t().to(state.device)
assert not tensor.is_contiguous()
# Shouldn't error out
_ = gather(tensor)
def test_broadcast(state):
tensor = create_tensor(state)
broadcasted_tensor = broadcast(tensor)
@ -94,41 +78,6 @@ def test_reduce_mean(state):
assert torch.allclose(reduced_tensor, truth_tensor), f"{reduced_tensor} != {truth_tensor}"
def test_op_checker(state):
# Must be in a distributed state
if state.distributed_type == DistributedType.NO:
return
state.debug = True
# `pad_across_processes`
if state.process_index == 0:
data = {"tensor": torch.tensor([[0.0, 1, 2, 3, 4]]).to(state.device)}
else:
data = {"tensor": torch.tensor([[[0.0, 1, 2, 3, 4, 5]]]).to(state.device)}
with assert_exception(DistributedOperationException):
pad_across_processes(data, dim=0)
# `reduce`
if state.process_index == 0:
data = {"tensor": torch.tensor([[0.0, 1, 2, 3, 4]]).to(state.device)}
else:
data = {"tensor": torch.tensor([[[0.0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]]).to(state.device)}
with assert_exception(DistributedOperationException):
reduce(data)
# `broadcast`
if state.process_index == 0:
data = {"tensor": torch.tensor([[0.0, 1, 2, 3, 4]]).to(state.device)}
else:
data = {"tensor": torch.tensor([[[0.0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]]).to(state.device)}
with assert_exception(DistributedOperationException):
broadcast(data)
state.debug = False
def _mp_fn(index):
# For xla_spawn (TPUs)
main()
@ -139,10 +88,9 @@ def main():
state.print(f"State: {state}")
state.print("testing gather")
test_gather(state)
state.print("testing gather_object")
test_gather_object(state)
state.print("testing gather non-contigous")
test_gather_non_contigous(state)
if is_torch_version(">=", "1.7.0"):
state.print("testing gather_object")
test_gather_object(state)
state.print("testing broadcast")
test_broadcast(state)
state.print("testing pad_across_processes")
@ -151,8 +99,6 @@ def main():
test_reduce_sum(state)
state.print("testing reduce_mean")
test_reduce_mean(state)
state.print("testing op_checker")
test_op_checker(state)
if __name__ == "__main__":
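What the removed `test_op_checker` exercises, from the user's side; a sketch assuming a launch with at least two processes, where setting `state.debug` (normally done via `ACCELERATE_DEBUG_MODE="1"`) makes the collective verify shapes before running.

import torch
from accelerate import PartialState
from accelerate.utils.operations import DistributedOperationException, broadcast

state = PartialState()
state.debug = True  # equivalent to launching with ACCELERATE_DEBUG_MODE="1"

# Deliberately mismatched shapes: each rank builds a different-sized tensor.
tensor = torch.ones(state.process_index + 1, device=state.device)
try:
    broadcast(tensor)
except DistributedOperationException as e:
    state.print(f"caught before the collective could hang: {e}")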

View File

@ -27,26 +27,19 @@ from torch.utils.data import DataLoader
from accelerate import Accelerator
from accelerate.data_loader import prepare_data_loader
from accelerate.state import AcceleratorState
from accelerate.test_utils import RegressionDataset, are_the_same_tensors
from accelerate.test_utils import RegressionDataset, RegressionModel, are_the_same_tensors
from accelerate.utils import (
DistributedType,
gather,
is_bf16_available,
is_ipex_available,
is_npu_available,
is_torch_version,
is_xpu_available,
set_seed,
synchronize_rng_states,
)
# TODO: remove RegressionModel4XPU once ccl supports empty buffers in broadcasting.
if is_xpu_available():
from accelerate.test_utils import RegressionModel4XPU as RegressionModel
else:
from accelerate.test_utils import RegressionModel
def print_main(state):
print(f"Printing from the main process {state.process_index}")
@ -66,6 +59,7 @@ def print_on(state, process_idx):
def process_execution_check():
accelerator = Accelerator()
num_processes = accelerator.num_processes
# Test main_process_first context manager
path = Path("check_main_process_first.txt")
with accelerator.main_process_first():
@ -77,7 +71,6 @@ def process_execution_check():
with open(path, "a+") as f:
f.write("Now on another process\n")
accelerator.wait_for_everyone()
if accelerator.is_main_process:
with open(path, "r") as f:
text = "".join(f.readlines())
@ -86,8 +79,8 @@ def process_execution_check():
if num_processes > 1:
assert text.endswith("Now on another process\n"), "Main process was not first"
assert (
text.count("Now on another process\n") == accelerator.num_processes - 1
), f"Only wrote to file {text.count('Now on another process') + 1} times, not {accelerator.num_processes}"
text.count("Now on another process\n") == num_processes - 1
), f"Only wrote to file {text.count('Now on another process') + 1} times, not {num_processes}"
except AssertionError:
path.unlink()
raise
@ -151,9 +144,6 @@ def rng_sync_check():
if state.distributed_type == DistributedType.MULTI_GPU:
synchronize_rng_states(["cuda"])
assert are_the_same_tensors(torch.cuda.get_rng_state()), "RNG states improperly synchronized on GPU."
elif state.distributed_type == DistributedType.MULTI_XPU:
synchronize_rng_states(["xpu"])
assert are_the_same_tensors(torch.xpu.get_rng_state()), "RNG states improperly synchronized on XPU."
generator = torch.Generator()
synchronize_rng_states(["generator"], generator=generator)
assert are_the_same_tensors(generator.get_state()), "RNG states improperly synchronized in generator."
@ -359,7 +349,7 @@ def training_check():
accelerator.print("Training yielded the same results on one CPU or distributes setup with batch split.")
if torch.cuda.is_available() or is_npu_available():
if torch.cuda.is_available():
# Mostly a test that FP16 doesn't crash as the operation inside the model is not converted to FP16
print("FP16 training check.")
AcceleratorState._reset_state()
@ -383,21 +373,6 @@ def training_check():
assert torch.allclose(old_model.a, model.a), "Did not obtain the same model on CPU or distributed training."
assert torch.allclose(old_model.b, model.b), "Did not obtain the same model on CPU or distributed training."
if torch.cuda.is_available():
# Mostly a test that model.forward will have autocast when running unwrap_model(model, keep_fp32_wrapper=True)
print("Keep fp32 wrapper check.")
AcceleratorState._reset_state()
accelerator = Accelerator(mixed_precision="fp16")
model = torch.nn.Linear(2, 4)
model = accelerator.prepare(model)
model_with_fp32_wrapper = accelerator.unwrap_model(model, keep_fp32_wrapper=True)
# Run forward with fp16 as input.
# When the model is with mixed precision wrapper, no error will be raised.
input_tensor = torch.Tensor([1, 2]).to(dtype=torch.float16, device=accelerator.device)
output = model_with_fp32_wrapper(input_tensor)
# BF16 support is only for CPU + TPU, and some GPU
if is_bf16_available():
# Mostly a test that BF16 doesn't crash as the operation inside the model is not converted to BF16
@ -480,7 +455,7 @@ def test_split_between_processes_list():
len(results) == 2
), f"Each process did not have two items. Process index: {state.process_index}; Length: {len(results)}"
data = list(range(0, (3 * state.num_processes) - 1))
data = list(range(0, (2 * state.num_processes) + 1))
with state.split_between_processes(data, apply_padding=True) as results:
if state.is_last_process:
# Test that the last process gets the extra item(s)
@ -488,45 +463,38 @@ def test_split_between_processes_list():
assert (
len(results) == num_samples_per_device
), f"Last process did not get the extra item(s). Process index: {state.process_index}; Length: {len(results)}"
state.wait_for_everyone()
def test_split_between_processes_nested_dict():
state = AcceleratorState()
a = [1, 2, 3, 4, 5, 6, 7, 8]
b = ["a", "b", "c", "d", "e", "f", "g", "h"]
c = torch.tensor([1, 2, 3, 4, 5, 6, 7, 8])
if state.num_processes in (1, 2, 4):
data = {"a": a, "b": b, "c": c}
data = {"a": [1, 2, 3, 4], "b": ["w", "x", "y", "z"], "c": torch.tensor([0, 1, 2, 3])}
data_copy = deepcopy(data)
with state.split_between_processes(data) as results:
if state.process_index == 0:
assert results["a"] == data_copy["a"][: 8 // state.num_processes]
assert results["a"] == data_copy["a"][: 4 // state.num_processes]
elif state.num_processes == 2:
assert results["a"] == data_copy["a"][4:]
elif state.process_index == 3:
# We return a list each time
assert results["a"] == data_copy["a"][-2:], f'Expected: {data_copy["a"][-2]}, Actual: {results["a"]}'
assert results["a"] == data_copy["a"][2:]
else:
assert results["a"] == data_copy["a"][-1]
if state.process_index == 0:
assert results["b"] == data_copy["b"][: 8 // state.num_processes]
assert results["b"] == data_copy["b"][: 4 // state.num_processes]
elif state.num_processes == 2:
assert results["b"] == data_copy["b"][4:]
elif state.process_index == 3:
assert results["b"] == data_copy["b"][-2:]
assert results["b"] == data_copy["b"][2:]
else:
assert results["b"] == data_copy["b"][-1]
if state.process_index == 0:
assert torch.allclose(
results["c"], data_copy["c"][: 8 // state.num_processes]
), f"Did not obtain expected values on process 0, expected `{data['c'][:8 // state.num_processes]}`, received: {results['c']}"
results["c"], data_copy["c"][: 4 // state.num_processes]
), f"Did not obtain expected values on process 0, expected `{data['c'][:4//state.num_processes]}`, received: {results['c']}"
elif state.num_processes == 2:
assert torch.allclose(
results["c"], data_copy["c"][4:]
), f"Did not obtain expected values on process 2, expected `{data['c'][4:]}`, received: {results['c']}"
results["c"], data_copy["c"][2:]
), f"Did not obtain expected values on process 2, expected `{data['c'][2:]}`, received: {results['c']}"
elif state.process_index == 3:
assert torch.allclose(
results["c"], data_copy["c"][-2:]
), f"Did not obtain expected values on process 4, expected `{data['c'][-2:]}`, received: {results['c']}"
state.wait_for_everyone()
results["c"], data_copy["c"][3]
), f"Did not obtain expected values on process 4, expected `{data['c'][3]}`, received: {results['c']}"
def test_split_between_processes_tensor():
@ -538,24 +506,6 @@ def test_split_between_processes_tensor():
assert torch.allclose(results, torch.tensor([0, 1, 2, 3]).to(state.device))
else:
assert torch.allclose(results, torch.tensor([4, 5, 6, 7]).to(state.device))
state.wait_for_everyone()
def test_trigger():
accelerator = Accelerator()
# should start with being false
assert accelerator.check_trigger() is False
# set a breakpoint on the main process
if accelerator.is_main_process:
accelerator.set_trigger()
# check it's been activated across all processes
# calls `all_reduce` and triggers a sync
assert accelerator.check_trigger() is True
# check it's been reset after the sync
assert accelerator.check_trigger() is False
def main():
@ -564,30 +514,21 @@ def main():
if state.local_process_index == 0:
print("**Initialization**")
init_state_check()
state.wait_for_everyone()
if state.local_process_index == 0:
print("\n**Test process execution**")
process_execution_check()
if state.distributed_type == DistributedType.MULTI_GPU:
num_processes_per_node = torch.cuda.device_count()
else:
num_processes_per_node = state.num_processes
if state.local_process_index == 0:
print("\n**Test split between processes as a list**")
test_split_between_processes_list()
# We only run this test on non-multinode
if num_processes_per_node == state.num_processes:
if state.process_index == 0:
print("\n**Test process execution**")
process_execution_check()
if state.local_process_index == 0:
print("\n**Test split between processes as a dict**")
test_split_between_processes_nested_dict()
if state.process_index == 0:
print("\n**Test split between processes as a list**")
test_split_between_processes_list()
if state.process_index == 0:
print("\n**Test split between processes as a dict**")
test_split_between_processes_nested_dict()
if state.process_index == 0:
print("\n**Test split between processes as a tensor**")
test_split_between_processes_tensor()
if state.local_process_index == 0:
print("\n**Test split between processes as a tensor**")
test_split_between_processes_tensor()
if state.local_process_index == 0:
print("\n**Test random number generator synchronization**")
@ -596,7 +537,7 @@ def main():
if state.local_process_index == 0:
print("\n**DataLoader integration test**")
dl_preparation_check()
if state.distributed_type != DistributedType.TPU:
if state.distributed_type != DistributedType.TPU and is_torch_version(">=", "1.8.0"):
central_dl_preparation_check()
# Trainings are not exactly the same in DeepSpeed and CPU mode
@ -607,10 +548,6 @@ def main():
print("\n**Training integration test**")
training_check()
if state.local_process_index == 0:
print("\n**Breakpoint trigger test**")
test_trigger()
if __name__ == "__main__":
main()
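The removed `test_trigger` maps directly onto user code; a minimal sketch mirroring its assertions (it works on one process too, where the check degenerates to a local read).

from accelerate import Accelerator

accelerator = Accelerator()
assert accelerator.check_trigger() is False  # starts unset

if accelerator.is_main_process:
    accelerator.set_trigger()                # any single rank may raise the flag

# check_trigger() all-reduces the flag, so every rank now observes True...
assert accelerator.check_trigger() is True
# ...and the synchronizing check resets it.
assert accelerator.check_trigger() is False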

View File

@ -150,60 +150,6 @@ def test_distributed_sync(accelerator):
ddp_input = ddp_input[torch.randperm(len(ddp_input))]
def test_distributed_sync_multiple_fwd(accelerator):
# Test on distributed setup that context manager behaves properly when used with multiple forwards followed by multiple backwards
model, ddp_model, dataloader = get_training_setup(accelerator)
# Do multiple forwards
losses = []
num_iterations = 3
for iteration in range(num_iterations):
ddp_input, ddp_target = next(iter(dataloader)).values()
# Gather the distributed inputs and targs for the base model
input, target = accelerator.gather((ddp_input, ddp_target))
input, target = input.to(accelerator.device), target.to(accelerator.device)
# Perform our initial ground truth step in non "DDP"
step_model(model, input, target, accelerator)
# Accumulate grads locally
with accelerator.no_sync(ddp_model):
ddp_output = ddp_model(ddp_input)
loss = F.mse_loss(ddp_output, ddp_target.to(ddp_output.device))
losses.append(loss)
# Do multiple backwards and sync only at the last backward
for iteration in range(num_iterations):
loss = losses[iteration]
if iteration < num_iterations - 1:
# Accumulate grads locally
accelerator.backward(loss)
# DDP model and model should only be in sync after last backward
for param, ddp_param in zip(model.parameters(), ddp_model.parameters()):
if not param.requires_grad:
continue
# Grads should not be in sync
assert (
torch.allclose(param.grad, ddp_param.grad) is False
), f"Gradients in sync when they should not be:\nModel grad ({param.grad}) == DDP grad ({ddp_param.grad})"
else:
# Sync grads if last backward
with accelerator.trigger_sync_in_backward(ddp_model):
accelerator.backward(loss)
# DDP model and model should only be in sync after last backward
for param, ddp_param in zip(model.parameters(), ddp_model.parameters()):
if not param.requires_grad:
continue
# Grads should be in sync
assert (
torch.allclose(param.grad, ddp_param.grad) is True
), f"Gradients not in sync when they should be:\nModel grad ({param.grad}) != DDP grad ({ddp_param.grad})"
def test_gradient_accumulation(split_batches=False, dispatch_batches=False):
accelerator = Accelerator(
split_batches=split_batches, dispatch_batches=dispatch_batches, gradient_accumulation_steps=2
@ -320,14 +266,11 @@ def main():
if state.local_process_index == 0:
print("**Test NOOP `no_sync` context manager**")
test_noop_sync(accelerator)
if state.distributed_type in (DistributedType.MULTI_GPU, DistributedType.MULTI_NPU, DistributedType.MULTI_CPU):
if state.distributed_type in (DistributedType.MULTI_GPU, DistributedType.MULTI_CPU):
if state.local_process_index == 0:
print("**Test Distributed `no_sync` context manager**")
test_distributed_sync(accelerator)
if state.local_process_index == 0:
print("**Test Distributed `no_sync` context manager with multiple forwards**")
test_distributed_sync_multiple_fwd(accelerator)
if state.distributed_type in (DistributedType.MULTI_GPU, DistributedType.MULTI_NPU):
if state.distributed_type == DistributedType.MULTI_GPU:
for split_batch in [True, False]:
for dispatch_batches in [True, False]:
if state.local_process_index == 0:
@ -345,7 +288,7 @@ def main():
"`split_batches=False`, `dispatch_batches=False`**",
)
test_gradient_accumulation_with_opt_and_scheduler()
if state.distributed_type in (DistributedType.MULTI_GPU, DistributedType.MULTI_NPU):
if state.distributed_type == DistributedType.MULTI_GPU:
for split_batch in [True, False]:
for dispatch_batches in [True, False]:
if not split_batch and not dispatch_batches:
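Outside the harness, the multiple-forward pattern that `test_distributed_sync_multiple_fwd` covered looks roughly like this; a sketch assuming a multi-GPU launch (model, data, and loss are toy placeholders), where only the final backward triggers the DDP all-reduce.

import torch
import torch.nn.functional as F
from accelerate import Accelerator

accelerator = Accelerator()
model = accelerator.prepare(torch.nn.Linear(4, 1))

losses = []
with accelerator.no_sync(model):  # forwards recorded with grad sync disabled
    for _ in range(3):
        x = torch.randn(8, 4, device=accelerator.device)
        losses.append(F.mse_loss(model(x), torch.zeros(8, 1, device=accelerator.device)))

for i, loss in enumerate(losses):
    if i < len(losses) - 1:
        accelerator.backward(loss)  # gradients accumulate locally
    else:
        with accelerator.trigger_sync_in_backward(model):
            accelerator.backward(loss)  # all-reduce fires on this backward only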

View File

@ -19,7 +19,7 @@ import subprocess
import sys
import tempfile
import unittest
from contextlib import contextmanager
from distutils.util import strtobool
from functools import partial
from pathlib import Path
from typing import List, Union
@ -30,20 +30,17 @@ import torch
from ..state import AcceleratorState, PartialState
from ..utils import (
gather,
is_bnb_available,
is_comet_ml_available,
is_datasets_available,
is_deepspeed_available,
is_mps_available,
is_safetensors_available,
is_tensorboard_available,
is_timm_available,
is_torch_version,
is_tpu_available,
is_transformers_available,
is_wandb_available,
is_xpu_available,
str_to_bool,
)
@ -56,7 +53,7 @@ def parse_flag_from_env(key, default=False):
else:
# KEY is set, convert it to True or False.
try:
_value = str_to_bool(value)
_value = strtobool(value)
except ValueError:
# More values are supported, but let's keep the message simple.
raise ValueError(f"If set, {key} must be yes or no.")
@ -117,27 +114,6 @@ def require_huggingface_suite(test_case):
)(test_case)
def require_transformers(test_case):
"""
Decorator marking a test that requires transformers. These tests are skipped when they are not.
"""
return unittest.skipUnless(is_transformers_available(), "test requires the transformers library")(test_case)
def require_timm(test_case):
"""
Decorator marking a test that requires timm. These tests are skipped when they are not.
"""
return unittest.skipUnless(is_timm_available(), "test requires the timm library")(test_case)
def require_bnb(test_case):
"""
Decorator marking a test that requires bitsandbytes. These tests are skipped when they are not.
"""
return unittest.skipUnless(is_bnb_available(), "test requires the bitsandbytes library")(test_case)
def require_tpu(test_case):
"""
Decorator marking a test that requires TPUs. These tests are skipped when there are no TPUs available.
@ -431,22 +407,3 @@ def run_command(command: List[str], return_stdout=False):
raise SubprocessCallException(
f"Command `{' '.join(command)}` failed with the following error:\n\n{e.output.decode()}"
) from e
@contextmanager
def assert_exception(exception_class: Exception, msg: str = None) -> bool:
"""
Context manager to assert that the right `Exception` class was raised.
If `msg` is provided, will check that the message is contained in the raised exception.
"""
was_ran = False
try:
yield
was_ran = True
except Exception as e:
assert isinstance(e, exception_class), f"Expected exception of type {exception_class} but got {type(e)}"
if msg is not None:
assert msg in str(e), f"Expected message '{msg}' to be in exception but got '{str(e)}'"
if was_ran:
raise AssertionError(f"Expected exception of type {exception_class} but ran without issue.")
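Usage of this removed helper, for reference:

from accelerate.test_utils.testing import assert_exception

with assert_exception(ValueError, msg="bad value"):
    raise ValueError("this is a bad value")  # passes: right type, message found

with assert_exception(ValueError):
    pass  # raises AssertionError, since nothing was raised inside the block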

View File

@ -33,20 +33,6 @@ class RegressionDataset:
return {"x": self.x[i], "y": self.y[i]}
class RegressionModel4XPU(torch.nn.Module):
def __init__(self, a=0, b=0, double_output=False):
super().__init__()
self.a = torch.nn.Parameter(torch.tensor([2, 3]).float())
self.b = torch.nn.Parameter(torch.tensor([2, 3]).float())
self.first_batch = True
def forward(self, x=None):
if self.first_batch:
print(f"Model dtype: {self.a.dtype}, {self.b.dtype}. Input dtype: {x.dtype}")
self.first_batch = False
return x * self.a[0] + self.b[0]
class RegressionModel(torch.nn.Module):
def __init__(self, a=0, b=0, double_output=False):
super().__init__()

View File

@ -32,25 +32,37 @@ from .utils import (
is_mlflow_available,
is_tensorboard_available,
is_wandb_available,
listify,
)
_available_trackers = []
if is_tensorboard_available():
try:
from torch.utils import tensorboard
except ModuleNotFoundError:
import tensorboardX as tensorboard
_available_trackers.append(LoggerType.TENSORBOARD)
if is_wandb_available():
import wandb
_available_trackers.append(LoggerType.WANDB)
if is_comet_ml_available():
from comet_ml import Experiment
_available_trackers.append(LoggerType.COMETML)
if is_aim_available():
from aim import Run
_available_trackers.append(LoggerType.AIM)
if is_mlflow_available():
import mlflow
_available_trackers.append(LoggerType.MLFLOW)
logger = get_logger(__name__)
@ -172,10 +184,6 @@ class TensorBoardTracker(GeneralTracker):
@on_main_process
def __init__(self, run_name: str, logging_dir: Union[str, os.PathLike], **kwargs):
try:
from torch.utils import tensorboard
except ModuleNotFoundError:
import tensorboardX as tensorboard
super().__init__()
self.run_name = run_name
self.logging_dir = os.path.join(logging_dir, run_name)
@ -228,7 +236,6 @@ class TensorBoardTracker(GeneralTracker):
Additional keyword arguments passed along to either the `SummaryWriter.add_scalar`,
`SummaryWriter.add_text`, or `SummaryWriter.add_scalars` method, based on the contents of `values`.
"""
values = listify(values)
for k, v in values.items():
if isinstance(v, (int, float)):
self.writer.add_scalar(k, v, global_step=step, **kwargs)
@ -284,9 +291,6 @@ class WandBTracker(GeneralTracker):
def __init__(self, run_name: str, **kwargs):
super().__init__()
self.run_name = run_name
import wandb
self.run = wandb.init(project=self.run_name, **kwargs)
logger.debug(f"Initialized WandB project {self.run_name}")
logger.debug(
@ -307,9 +311,7 @@ class WandBTracker(GeneralTracker):
Values to be stored as initial hyperparameters as key-value pairs. The values need to have type `bool`,
`str`, `float`, `int`, or `None`.
"""
import wandb
wandb.config.update(values, allow_val_change=True)
wandb.config.update(values)
logger.debug("Stored initial configuration hyperparameters to WandB")
@on_main_process
@ -342,8 +344,6 @@ class WandBTracker(GeneralTracker):
kwargs:
Additional keyword arguments passed along to the `wandb.log` method.
"""
import wandb
for k, v in values.items():
self.log({k: [wandb.Image(image) for image in v]}, step=step, **kwargs)
logger.debug("Successfully logged images to WandB")
@ -374,7 +374,6 @@ class WandBTracker(GeneralTracker):
step (`int`, *optional*):
The run step. If included, the log will be affiliated with this step.
"""
import wandb
values = {table_name: wandb.Table(columns=columns, data=data, dataframe=dataframe)}
self.log(values, step=step, **kwargs)
@ -408,9 +407,6 @@ class CometMLTracker(GeneralTracker):
def __init__(self, run_name: str, **kwargs):
super().__init__()
self.run_name = run_name
from comet_ml import Experiment
self.writer = Experiment(project_name=run_name, **kwargs)
logger.debug(f"Initialized CometML project {self.run_name}")
logger.debug(
@ -486,9 +482,6 @@ class AimTracker(GeneralTracker):
@on_main_process
def __init__(self, run_name: str, logging_dir: Optional[Union[str, os.PathLike]] = ".", **kwargs):
self.run_name = run_name
from aim import Run
self.writer = Run(repo=logging_dir, **kwargs)
self.writer.name = self.run_name
logger.debug(f"Initialized Aim project {self.run_name}")
@ -586,8 +579,6 @@ class MLflowTracker(GeneralTracker):
nested_run = os.getenv("MLFLOW_NESTED_RUN", nested_run)
import mlflow
exps = mlflow.search_experiments(filter_string=f"name = '{experiment_name}'")
if len(exps) > 0:
if len(exps) > 1:
@ -627,7 +618,6 @@ class MLflowTracker(GeneralTracker):
values (`dict`):
Values to be stored as initial hyperparameters as key-value pairs.
"""
import mlflow
for name, value in list(values.items()):
# internally, all values are converted to str in MLflow
@ -666,7 +656,6 @@ class MLflowTracker(GeneralTracker):
f'MLflowTracker is attempting to log a value of "{v}" of type {type(v)} for key "{k}" as a metric. '
"MLflow's log_metric() only accepts float and int types so we dropped this attribute."
)
import mlflow
mlflow.log_metrics(metrics, step=step)
logger.debug("Successfully logged to mlflow")
@ -676,8 +665,6 @@ class MLflowTracker(GeneralTracker):
"""
End the active MLflow run.
"""
import mlflow
mlflow.end_run()
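For orientation, these trackers are normally reached through the `Accelerator` logging API rather than instantiated directly; a minimal sketch assuming tensorboard is installed (run name, directory, and values are placeholders).

from accelerate import Accelerator

accelerator = Accelerator(log_with="tensorboard", project_dir="runs")
accelerator.init_trackers("my_experiment", config={"lr": 3e-4})

for step in range(10):
    accelerator.log({"train_loss": 1.0 / (step + 1)}, step=step)

accelerator.end_training()  # flushes and closes every active tracker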

View File

@ -1,21 +1,6 @@
from .constants import (
MODEL_NAME,
OPTIMIZER_NAME,
RNG_STATE_NAME,
SAFE_WEIGHTS_INDEX_NAME,
SAFE_WEIGHTS_NAME,
SCALER_NAME,
SCHEDULER_NAME,
TORCH_DISTRIBUTED_OPERATION_TYPES,
TORCH_LAUNCH_PARAMS,
WEIGHTS_INDEX_NAME,
WEIGHTS_NAME,
)
from .constants import MODEL_NAME, OPTIMIZER_NAME, RNG_STATE_NAME, SCALER_NAME, SCHEDULER_NAME, TORCH_LAUNCH_PARAMS
from .dataclasses import (
AutocastKwargs,
BnbQuantizationConfig,
ComputeEnvironment,
CustomDtype,
DeepSpeedPlugin,
DistributedDataParallelKwargs,
DistributedType,
@ -35,18 +20,14 @@ from .dataclasses import (
TensorInformation,
TorchDynamoPlugin,
)
from .environment import get_int_from_env, parse_choice_from_env, parse_flag_from_env, str_to_bool
from .environment import get_int_from_env, parse_choice_from_env, parse_flag_from_env
from .imports import (
get_ccl_version,
is_4bit_bnb_available,
is_8bit_bnb_available,
is_aim_available,
is_bf16_available,
is_bnb_available,
is_boto3_available,
is_ccl_available,
is_comet_ml_available,
is_cuda_available,
is_datasets_available,
is_deepspeed_available,
is_fp8_available,
@ -54,19 +35,17 @@ from .imports import (
is_megatron_lm_available,
is_mlflow_available,
is_mps_available,
is_npu_available,
is_rich_available,
is_safetensors_available,
is_sagemaker_available,
is_tensorboard_available,
is_timm_available,
is_tpu_available,
is_transformers_available,
is_wandb_available,
is_xpu_available,
)
from .modeling import (
calculate_maximum_sizes,
CustomDtype,
check_device_map,
check_tied_parameters_in_config,
check_tied_parameters_on_same_device,
@ -78,7 +57,6 @@ from .modeling import (
get_max_layer_size,
get_max_memory,
get_mixed_precision_context_manager,
id_tensor_storage,
infer_auto_device_map,
load_checkpoint_in_model,
load_offloaded_weights,
@ -86,7 +64,6 @@ from .modeling import (
named_module_tensors,
retie_parameters,
set_module_tensor_to_device,
shard_checkpoint,
)
from .offload import (
OffloadedWeightsLoader,
@ -113,7 +90,6 @@ from .operations import (
is_namedtuple,
is_tensor_information,
is_torch_tensor,
listify,
pad_across_processes,
recursively_apply,
reduce,
@ -133,11 +109,10 @@ if is_deepspeed_available():
HfDeepSpeedConfig,
)
from .bnb import has_4bit_bnb_layers, load_and_quantize_model
from .fsdp_utils import load_fsdp_model, load_fsdp_optimizer, save_fsdp_model, save_fsdp_optimizer
from .launch import (
PrepareForLaunch,
_filter_args,
get_launch_prefix,
prepare_deepspeed_cmd_env,
prepare_multi_gpu_env,
prepare_sagemager_args_inputs,
@ -164,11 +139,8 @@ from .megatron_lm import prepare_optimizer as megatron_lm_prepare_optimizer
from .megatron_lm import prepare_scheduler as megatron_lm_prepare_scheduler
from .memory import find_executable_batch_size, release_memory
from .other import (
clear_environment,
convert_bytes,
extract_model_from_parallel,
get_pretty_name,
is_port_in_use,
merge_dicts,
patch_environment,
save,

View File

@ -1,467 +0,0 @@
# Copyright 2023 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import os
from copy import deepcopy
from typing import Dict, List, Optional, Union
import torch
import torch.nn as nn
from accelerate.utils.imports import (
is_4bit_bnb_available,
is_8bit_bnb_available,
)
from ..big_modeling import dispatch_model, init_empty_weights
from .dataclasses import BnbQuantizationConfig
from .modeling import (
find_tied_parameters,
get_balanced_memory,
infer_auto_device_map,
load_checkpoint_in_model,
offload_weight,
set_module_tensor_to_device,
)
logger = logging.getLogger(__name__)
def load_and_quantize_model(
model: torch.nn.Module,
bnb_quantization_config: BnbQuantizationConfig,
weights_location: Union[str, os.PathLike] = None,
device_map: Optional[Dict[str, Union[int, str, torch.device]]] = None,
no_split_module_classes: Optional[List[str]] = None,
max_memory: Optional[Dict[Union[int, str], Union[int, str]]] = None,
offload_folder: Optional[Union[str, os.PathLike]] = None,
offload_state_dict: bool = False,
):
"""
This function will quantize the input model with the associated config passed in `bnb_quantization_config`. If the
model is on the meta device, we will load and dispatch the weights according to the `device_map` passed. If the
model is already loaded, we will quantize the model and put the model on the GPU.
Args:
model (`torch.nn.Module`):
Input model. The model can be already loaded or on the meta device
bnb_quantization_config (`BnbQuantizationConfig`):
The bitsandbytes quantization parameters
weights_location (`str` or `os.PathLike`):
The folder weights_location to load. It can be:
- a path to a file containing a whole model state dict
- a path to a `.json` file containing the index to a sharded checkpoint
- a path to a folder containing a unique `.index.json` file and the shards of a checkpoint.
- a path to a folder containing a unique pytorch_model.bin file.
device_map (`Dict[str, Union[int, str, torch.device]]`, *optional*):
A map that specifies where each submodule should go. It doesn't need to be refined to each parameter/buffer
name; once a given module name is inside, every submodule of it will be sent to the same device.
no_split_module_classes (`List[str]`, *optional*):
A list of layer class names that should never be split across devices (for instance any layer that has a
residual connection).
max_memory (`Dict`, *optional*):
A dictionary mapping device identifiers to maximum memory. Will default to the maximum memory available if unset.
offload_folder (`str` or `os.PathLike`, *optional*):
If the `device_map` contains any value `"disk"`, the folder where we will offload weights.
offload_state_dict (`bool`, *optional*, defaults to `False`):
If `True`, will temporarily offload the CPU state dict on the hard drive to avoid getting out of CPU RAM if
the weight of the CPU state dict + the biggest shard does not fit.
Returns:
`torch.nn.Module`: The quantized model
"""
load_in_4bit = bnb_quantization_config.load_in_4bit
load_in_8bit = bnb_quantization_config.load_in_8bit
if load_in_8bit and not is_8bit_bnb_available():
raise ImportError(
"You have a version of `bitsandbytes` that is not compatible with 8bit quantization,"
" make sure you have the latest version of `bitsandbytes` installed."
)
if load_in_4bit and not is_4bit_bnb_available():
raise ImportError(
"You have a version of `bitsandbytes` that is not compatible with 4bit quantization,"
" make sure you have the latest version of `bitsandbytes` installed."
)
modules_on_cpu = []
# custom device map
if isinstance(device_map, dict) and len(device_map.keys()) > 1:
modules_on_cpu = [key for key, value in device_map.items() if value in ["disk", "cpu"]]
# We keep some modules such as the lm_head in their original dtype for numerical stability reasons
if bnb_quantization_config.skip_modules is None:
bnb_quantization_config.skip_modules = get_keys_to_not_convert(model)
# add cpu modules to skip modules only for 4-bit modules
if load_in_4bit:
bnb_quantization_config.skip_modules.extend(modules_on_cpu)
modules_to_not_convert = bnb_quantization_config.skip_modules
# We add the modules we want to keep in full precision
if bnb_quantization_config.keep_in_fp32_modules is None:
bnb_quantization_config.keep_in_fp32_modules = []
keep_in_fp32_modules = bnb_quantization_config.keep_in_fp32_modules
modules_to_not_convert.extend(keep_in_fp32_modules)
# compatibility with peft
model.is_loaded_in_4bit = load_in_4bit
model.is_loaded_in_8bit = load_in_8bit
model_device = get_parameter_device(model)
if model_device.type != "meta":
# quantization of an already loaded model
logger.warning(
"It is not recommended to quantize a loaded model. "
"The model should be instantiated under the `init_empty_weights` context manager."
)
model = replace_with_bnb_layers(model, bnb_quantization_config, modules_to_not_convert=modules_to_not_convert)
# convert param to the right dtype
dtype = bnb_quantization_config.torch_dtype
for name, param in model.state_dict().items():
if any(module_to_keep_in_fp32 in name for module_to_keep_in_fp32 in keep_in_fp32_modules):
param.to(torch.float32)
if param.dtype != torch.float32:
name = name.replace(".weight", "").replace(".bias", "")
param = getattr(model, name, None)
if param is not None:
param.to(torch.float32)
elif torch.is_floating_point(param):
param.to(dtype)
if model_device.type == "cuda":
# bnb quantization happens when the weights are transferred to the GPU, so call `.cuda()` to trigger it even though the weights already live on cuda
model.cuda(torch.cuda.current_device())
torch.cuda.empty_cache()
elif torch.cuda.is_available():
model.to(torch.cuda.current_device())
else:
raise RuntimeError("No GPU found. A GPU is needed for quantization.")
logger.info(
f"The model device type is {model_device.type}. However, cuda is needed for quantization."
"We move the model to cuda."
)
return model
elif weights_location is None:
raise RuntimeError(
f"`weights_location` needs to be the folder path containing the weights of the model, but we found {weights_location} "
)
else:
with init_empty_weights():
model = replace_with_bnb_layers(
model, bnb_quantization_config, modules_to_not_convert=modules_to_not_convert
)
device_map = get_quantized_model_device_map(
model,
bnb_quantization_config,
device_map,
max_memory=max_memory,
no_split_module_classes=no_split_module_classes,
)
if offload_state_dict is None and device_map is not None and "disk" in device_map.values():
offload_state_dict = True
offload = any(x in list(device_map.values()) for x in ["cpu", "disk"])
load_checkpoint_in_model(
model,
weights_location,
device_map,
dtype=bnb_quantization_config.torch_dtype,
offload_folder=offload_folder,
offload_state_dict=offload_state_dict,
keep_in_fp32_modules=bnb_quantization_config.keep_in_fp32_modules,
offload_8bit_bnb=load_in_8bit and offload,
)
return dispatch_model(model, device_map=device_map, offload_dir=offload_folder)
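A minimal usage sketch of `load_and_quantize_model` (hedged: the checkpoint name, the local "checkpoint_dir" folder, and the use of `transformers` are illustrative and not part of this module):

import torch
from accelerate import init_empty_weights
from accelerate.utils import BnbQuantizationConfig, load_and_quantize_model
from transformers import AutoConfig, AutoModelForCausalLM

# Build the skeleton on the meta device so no full-precision weights are allocated.
config = AutoConfig.from_pretrained("facebook/opt-350m")
with init_empty_weights():
    empty_model = AutoModelForCausalLM.from_config(config)

bnb_config = BnbQuantizationConfig(load_in_8bit=True, torch_dtype=torch.float16)
quantized_model = load_and_quantize_model(
    empty_model,
    bnb_quantization_config=bnb_config,
    weights_location="checkpoint_dir",  # folder holding pytorch_model.bin or shards
    device_map="auto",
)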
def get_quantized_model_device_map(
model, bnb_quantization_config, device_map=None, max_memory=None, no_split_module_classes=None
):
if device_map is None:
if torch.cuda.is_available():
device_map = {"": torch.cuda.current_device()}
else:
raise RuntimeError("No GPU found. A GPU is needed for quantization.")
logger.info("The device_map was not initialized." "Setting device_map to `{'':torch.cuda.current_device()}`.")
if isinstance(device_map, str):
if device_map not in ["auto", "balanced", "balanced_low_0", "sequential"]:
raise ValueError(
"If passing a string for `device_map`, please choose 'auto', 'balanced', 'balanced_low_0' or "
"'sequential'."
)
special_dtypes = {}
special_dtypes.update(
{
name: bnb_quantization_config.torch_dtype
for name, _ in model.named_parameters()
if any(m in name for m in bnb_quantization_config.skip_modules)
}
)
special_dtypes.update(
{
name: torch.float32
for name, _ in model.named_parameters()
if any(m in name for m in bnb_quantization_config.keep_in_fp32_modules)
}
)
kwargs = {}
kwargs["special_dtypes"] = special_dtypes
kwargs["no_split_module_classes"] = no_split_module_classes
kwargs["dtype"] = bnb_quantization_config.target_dtype
# get max_memory for each device.
if device_map != "sequential":
max_memory = get_balanced_memory(
model,
low_zero=(device_map == "balanced_low_0"),
max_memory=max_memory,
**kwargs,
)
kwargs["max_memory"] = max_memory
device_map = infer_auto_device_map(model, **kwargs)
if isinstance(device_map, dict):
# check that we don't have any quantized modules on the CPU or the disk
modules_not_to_convert = bnb_quantization_config.skip_modules + bnb_quantization_config.keep_in_fp32_modules
device_map_without_some_modules = {
key: device_map[key] for key in device_map.keys() if key not in modules_not_to_convert
}
for device in ["cpu", "disk"]:
if device in device_map_without_some_modules.values():
if bnb_quantization_config.load_in_4bit:
raise ValueError(
"""
Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit
the quantized model. If you want to dispatch the model on the CPU or the disk while keeping
these modules in `torch_dtype`, you need to pass a custom `device_map` to
`load_and_quantize_model`. Check
https://huggingface.co/docs/accelerate/main/en/usage_guides/quantization#offload-modules-to-cpu-and-disk
for more details.
"""
)
else:
logger.info(
"Some modules are are offloaded to the CPU or the disk. Note that these modules will be converted to 8-bit"
)
del device_map_without_some_modules
return device_map
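The custom `device_map` referred to in the error above is just a module-to-device dict; a hedged sketch (the module names are illustrative, not taken from a real checkpoint):

device_map = {
    "transformer.word_embeddings": 0,
    "transformer.h": 0,
    "transformer.ln_f": 0,
    "lm_head": "cpu",  # a skipped module kept in `torch_dtype` and offloaded to CPU
}
model = load_and_quantize_model(
    model, bnb_quantization_config, weights_location, device_map=device_map
)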
def replace_with_bnb_layers(model, bnb_quantization_config, modules_to_not_convert=None, current_key_name=None):
"""
A helper function to replace all `torch.nn.Linear` modules with `bnb.nn.Linear8bitLt` or `bnb.nn.Linear4bit`
modules from the `bitsandbytes` library. The function is run recursively and replaces every `torch.nn.Linear` module.
Parameters:
model (`torch.nn.Module`):
Input model or `torch.nn.Module` as the function is run recursively.
modules_to_not_convert (`List[str]`):
Names of the modules to not convert. In practice we keep the `lm_head` in full precision for
numerical stability reasons.
current_key_name (`List[str]`, *optional*):
An array to track the current key of the recursion. This is used to check whether the current key (or part of
it) is in the list of modules to not convert.
"""
if modules_to_not_convert is None:
modules_to_not_convert = []
model, has_been_replaced = _replace_with_bnb_layers(
model, bnb_quantization_config, modules_to_not_convert, current_key_name
)
if not has_been_replaced:
logger.warning(
"You are loading your model in 8bit or 4bit but no linear modules were found in your model."
" this can happen for some architectures such as gpt2 that uses Conv1D instead of Linear layers."
" Please double check your model architecture, or submit an issue on github if you think this is"
" a bug."
)
return model
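A self-contained sketch of the replacement on a toy model (assumes `bitsandbytes` and a CUDA build of PyTorch are installed, since importing `bitsandbytes` initializes CUDA; the child name "2" is just the index of the head in the toy `nn.Sequential`):

import torch.nn as nn
from accelerate.utils import BnbQuantizationConfig
from accelerate.utils.bnb import replace_with_bnb_layers

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
cfg = BnbQuantizationConfig(load_in_8bit=True)
# Child "2" (the head) is left untouched; the first Linear becomes bnb.nn.Linear8bitLt.
model = replace_with_bnb_layers(model, cfg, modules_to_not_convert=["2"])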
def _replace_with_bnb_layers(
model,
bnb_quantization_config,
modules_to_not_convert=None,
current_key_name=None,
):
"""
Private method that wraps the recursion for module replacement.
Returns the converted model and a boolean that indicates if the conversion has been successful or not.
"""
# bitsandbytes will initialize CUDA on import, so it needs to be imported lazily
import bitsandbytes as bnb
has_been_replaced = False
for name, module in model.named_children():
if current_key_name is None:
current_key_name = []
current_key_name.append(name)
if isinstance(module, nn.Linear) and name not in modules_to_not_convert:
# Check if the current key is not in the `modules_to_not_convert`
current_key_name_str = ".".join(current_key_name)
proceed = True
for key in modules_to_not_convert:
if (
(key in current_key_name_str) and (key + "." in current_key_name_str)
) or key == current_key_name_str:
proceed = False
break
if proceed:
# Load the bnb module with empty weights and replace the `nn.Linear` module
if bnb_quantization_config.load_in_8bit:
bnb_module = bnb.nn.Linear8bitLt(
module.in_features,
module.out_features,
module.bias is not None,
has_fp16_weights=False,
threshold=bnb_quantization_config.llm_int8_threshold,
)
elif bnb_quantization_config.load_in_4bit:
bnb_module = bnb.nn.Linear4bit(
module.in_features,
module.out_features,
module.bias is not None,
bnb_quantization_config.bnb_4bit_compute_dtype,
compress_statistics=bnb_quantization_config.bnb_4bit_use_double_quant,
quant_type=bnb_quantization_config.bnb_4bit_quant_type,
)
else:
raise ValueError("load_in_8bit and load_in_4bit can't be both False")
bnb_module.weight.data = module.weight.data
if module.bias is not None:
bnb_module.bias.data = module.bias.data
bnb_module.requires_grad_(False)
setattr(model, name, bnb_module)
has_been_replaced = True
if len(list(module.children())) > 0:
_, _has_been_replaced = _replace_with_bnb_layers(
module, bnb_quantization_config, modules_to_not_convert, current_key_name
)
has_been_replaced = has_been_replaced | _has_been_replaced
# Remove the last key for recursion
current_key_name.pop(-1)
return model, has_been_replaced
def get_keys_to_not_convert(model):
r"""
A utility function to get the keys of the modules to keep in full precision, if any. For example, for CausalLM
modules we may want to keep the lm_head in full precision for numerical stability reasons. For other architectures,
we want to keep the tied weights of the model. The function will return a list of the keys of the modules to not
convert in int8.
Parameters:
model (`torch.nn.Module`):
Input model
"""
# Create a copy of the model
with init_empty_weights():
tied_model = deepcopy(model)  # this has 0 cost since it is done inside the `init_empty_weights` context manager
tied_params = find_tied_parameters(tied_model)
# For compatibility with Accelerate < 0.18
if isinstance(tied_params, dict):
tied_keys = sum(list(tied_params.values()), []) + list(tied_params.keys())
else:
tied_keys = sum(tied_params, [])
has_tied_params = len(tied_keys) > 0
# Check if it is a base model
is_base_model = False
if hasattr(model, "base_model_prefix"):
is_base_model = not hasattr(model, model.base_model_prefix)
# Ignore this for base models (BertModel, GPT2Model, etc.)
if (not has_tied_params) and is_base_model:
return []
# otherwise they have an attached head
list_modules = list(model.named_children())
list_last_module = [list_modules[-1][0]]
# add the last module together with the tied weights
untied_last_module = set(list_last_module) - set(tied_keys)
list_untouched = list(set(tied_keys)) + list(untied_last_module)
# remove ".weight" from the keys
names_to_remove = [".weight", ".bias"]
filtered_module_names = []
for name in list_untouched:
for name_to_remove in names_to_remove:
if name_to_remove in name:
name = name.replace(name_to_remove, "")
filtered_module_names.append(name)
return filtered_module_names
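A quick illustration on a toy tied-weight LM (the class is hypothetical; ordering and possible duplicates in the returned list vary because sets are involved):

import torch.nn as nn
from accelerate.utils.bnb import get_keys_to_not_convert

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(100, 8)
        self.body = nn.Linear(8, 8)
        self.lm_head = nn.Linear(8, 100, bias=False)
        self.lm_head.weight = self.embed.weight  # tie the head to the embeddings

print(get_keys_to_not_convert(TinyLM()))  # contains 'embed' and 'lm_head'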
def has_4bit_bnb_layers(model):
"""Check if we have `bnb.nn.Linear4bit` or `bnb.nn.Linear8bitLt` layers inside our model"""
# bitsandbytes will initialize CUDA on import, so it needs to be imported lazily
import bitsandbytes as bnb
for m in model.modules():
if isinstance(m, bnb.nn.Linear4bit):
return True
return False
def get_parameter_device(parameter: nn.Module):
return next(parameter.parameters()).device
def quantize_and_offload_8bit(model, param, param_name, new_dtype, offload_folder, offload_index, fp16_statistics):
# if it is not quantized, we quantize and offload the quantized weights and the SCB stats
if fp16_statistics is None:
set_module_tensor_to_device(model, param_name, 0, dtype=new_dtype, value=param)
tensor_name = param_name
module = model
if "." in tensor_name:
splits = tensor_name.split(".")
for split in splits[:-1]:
new_module = getattr(module, split)
if new_module is None:
raise ValueError(f"{module} has no attribute {split}.")
module = new_module
tensor_name = splits[-1]
# offload weights
module._parameters[tensor_name].requires_grad = False
offload_weight(module._parameters[tensor_name], param_name, offload_folder, index=offload_index)
if hasattr(module._parameters[tensor_name], "SCB"):
offload_weight(
module._parameters[tensor_name].SCB,
param_name.replace("weight", "SCB"),
offload_folder,
index=offload_index,
)
else:
offload_weight(param, param_name, offload_folder, index=offload_index)
offload_weight(fp16_statistics, param_name.replace("weight", "SCB"), offload_folder, index=offload_index)
set_module_tensor_to_device(model, param_name, "meta", dtype=new_dtype, value=torch.empty(*param.size()))

View File

@@ -20,20 +20,15 @@ MODEL_NAME = "pytorch_model"
RNG_STATE_NAME = "random_states"
OPTIMIZER_NAME = "optimizer"
SCHEDULER_NAME = "scheduler"
WEIGHTS_NAME = "pytorch_model.bin"
WEIGHTS_INDEX_NAME = "pytorch_model.bin.index.json"
SAFE_WEIGHTS_NAME = "model.safetensors"
SAFE_WEIGHTS_INDEX_NAME = "model.safetensors.index.json"
SAGEMAKER_PYTORCH_VERSION = "1.10.2"
SAGEMAKER_PYTHON_VERSION = "py38"
SAGEMAKER_TRANSFORMERS_VERSION = "4.17.0"
SAGEMAKER_PARALLEL_EC2_INSTANCES = ["ml.p3.16xlarge", "ml.p3dn.24xlarge", "ml.p4dn.24xlarge"]
FSDP_SHARDING_STRATEGY = ["FULL_SHARD", "SHARD_GRAD_OP", "NO_SHARD", "HYBRID_SHARD", "HYBRID_SHARD_ZERO2"]
FSDP_SHARDING_STRATEGY = ["FULL_SHARD", "SHARD_GRAD_OP", "NO_SHARD"]
FSDP_AUTO_WRAP_POLICY = ["TRANSFORMER_BASED_WRAP", "SIZE_BASED_WRAP", "NO_WRAP"]
FSDP_BACKWARD_PREFETCH = ["BACKWARD_PRE", "BACKWARD_POST", "NO_PREFETCH"]
FSDP_STATE_DICT_TYPE = ["FULL_STATE_DICT", "LOCAL_STATE_DICT", "SHARDED_STATE_DICT"]
FSDP_PYTORCH_VERSION = "2.0.1"
DEEPSPEED_MULTINODE_LAUNCHERS = ["pdsh", "standard", "openmpi", "mvapich", "mpich"]
DEEPSPEED_MULTINODE_LAUNCHERS = ["pdsh", "standard", "openmpi", "mvapich"]
TORCH_DYNAMO_MODES = ["default", "reduce-overhead", "max-autotune"]
STR_OPERATION_TO_FUNC = {">": op.gt, ">=": op.ge, "==": op.eq, "!=": op.ne, "<=": op.le, "<": op.lt}
@@ -66,4 +61,4 @@ TORCH_LAUNCH_PARAMS = [
]
CUDA_DISTRIBUTED_TYPES = ["DEEPSPEED", "MULTI_GPU", "FSDP", "MEGATRON_LM"]
TORCH_DISTRIBUTED_OPERATION_TYPES = CUDA_DISTRIBUTED_TYPES + ["MULTI_NPU", "MULTI_XPU", "MULTI_CPU"]
XPU_DISTRIBUTED_TYPES = ["DEEPSPEED", "MULTI_XPU", "FSDP"]

View File

@@ -26,13 +26,13 @@ import warnings
from contextlib import contextmanager
from dataclasses import dataclass, field
from datetime import timedelta
from distutils.util import strtobool
from typing import Any, Callable, Dict, Iterable, List, Optional, Tuple
import torch
from .constants import FSDP_AUTO_WRAP_POLICY, FSDP_BACKWARD_PREFETCH, FSDP_STATE_DICT_TYPE
from .environment import str_to_bool
from .versions import compare_versions
from .constants import FSDP_AUTO_WRAP_POLICY, FSDP_BACKWARD_PREFETCH, FSDP_STATE_DICT_TYPE, MODEL_NAME, OPTIMIZER_NAME
from .versions import is_torch_version
class KwargsHandler:
@@ -47,37 +47,11 @@ class KwargsHandler:
"""
Returns a dictionary containing the attributes with values different from the default of this class.
"""
# import clear_environment here to avoid circular import problem
from .other import clear_environment
with clear_environment():
default_dict = self.__class__().to_dict()
default_dict = self.__class__().to_dict()
this_dict = self.to_dict()
return {k: v for k, v in this_dict.items() if default_dict[k] != v}
@dataclass
class AutocastKwargs(KwargsHandler):
"""
Use this object in your [`Accelerator`] to customize how `torch.autocast` behaves. Please refer to the
documentation of this [context manager](https://pytorch.org/docs/stable/amp.html#torch.autocast) for more
information on each argument.
Example:
```python
from accelerate import Accelerator
from accelerate.utils import AutocastKwargs
kwargs = AutocastKwargs(cache_enabled=True)
accelerator = Accelerator(kwargs_handlers=[kwargs])
```
"""
enabled: bool = True
cache_enabled: bool = None
@dataclass
class DistributedDataParallelKwargs(KwargsHandler):
"""
@@ -209,7 +183,6 @@ class DistributedType(str, enum.Enum):
- **NO** -- Not a distributed environment, just a single process.
- **MULTI_CPU** -- Distributed on multiple CPU nodes.
- **MULTI_GPU** -- Distributed on multiple GPUs.
- **MULTI_NPU** -- Distributed on multiple NPUs.
- **MULTI_XPU** -- Distributed on multiple XPUs.
- **DEEPSPEED** -- Using DeepSpeed.
- **TPU** -- Distributed on TPUs.
@@ -219,7 +192,6 @@ class DistributedType(str, enum.Enum):
NO = "NO"
MULTI_CPU = "MULTI_CPU"
MULTI_GPU = "MULTI_GPU"
MULTI_NPU = "MULTI_NPU"
MULTI_XPU = "MULTI_XPU"
DEEPSPEED = "DEEPSPEED"
FSDP = "FSDP"
@@ -364,20 +336,11 @@ class PrecisionType(BaseEnum):
class RNGType(BaseEnum):
TORCH = "torch"
CUDA = "cuda"
NPU = "npu"
XLA = "xla"
XPU = "xpu"
GENERATOR = "generator"
class CustomDtype(enum.Enum):
r"""
An enum that contains multiple custom dtypes that can be used for `infer_auto_device_map`.
"""
FP8 = "fp8"
INT4 = "int4"
# data classes
@@ -415,14 +378,9 @@ class ProjectConfiguration:
metadata={"help": "The current save iteration."},
)
def set_directories(self, project_dir: str = None):
"Sets `self.project_dir` and `self.logging_dir` to the appropriate values."
self.project_dir = project_dir
if self.logging_dir is None:
self.logging_dir = project_dir
def __post_init__(self):
self.set_directories(self.project_dir)
if self.logging_dir is None:
self.logging_dir = self.project_dir
@dataclass
@@ -438,12 +396,6 @@ class GradientAccumulationPlugin(KwargsHandler):
"help": "Whether to adjust the scheduler steps to account for the number of steps being accumulated. Should be `True` if the used scheduler was not adjusted for gradient accumulation."
},
)
sync_with_dataloader: bool = field(
default=True,
metadata={
"help": "Whether to synchronize setting the gradients when at the end of the dataloader. Should only be set to `False` if you know what you're doing."
},
)
@dataclass
@@ -472,9 +424,9 @@ class TorchDynamoPlugin(KwargsHandler):
if self.mode is None:
self.mode = os.environ.get(prefix + "MODE", "default")
if self.fullgraph is None:
self.fullgraph = str_to_bool(os.environ.get(prefix + "USE_FULLGRAPH", "False")) == 1
self.fullgraph = strtobool(os.environ.get(prefix + "USE_FULLGRAPH", "False")) == 1
if self.dynamic is None:
self.dynamic = str_to_bool(os.environ.get(prefix + "USE_DYNAMIC", "False")) == 1
self.dynamic = strtobool(os.environ.get(prefix + "USE_DYNAMIC", "False")) == 1
def to_dict(self):
dynamo_config = copy.deepcopy(self.__dict__)
@@ -495,10 +447,7 @@ class DeepSpeedPlugin:
},
)
gradient_accumulation_steps: int = field(
default=None,
metadata={
"help": "Number of steps to accumulate gradients before updating optimizer states. If not set, will use the value from the `Accelerator` directly."
},
default=None, metadata={"help": "Number of steps to accumulate gradients before updating optimizer states"}
)
gradient_clipping: float = field(default=None, metadata={"help": "Enable gradient clipping with value"})
zero_stage: int = field(
@@ -541,8 +490,7 @@ class DeepSpeedPlugin:
from .deepspeed import HfDeepSpeedConfig
if self.gradient_accumulation_steps is None:
gas = os.environ.get("ACCELERATE_GRADIENT_ACCUMULATION_STEPS", "auto")
self.gradient_accumulation_steps = int(gas) if gas.isdigit() else gas
self.gradient_accumulation_steps = int(os.environ.get("ACCELERATE_GRADIENT_ACCUMULATION_STEPS", 1))
if self.gradient_clipping is None:
gradient_clipping = os.environ.get("ACCELERATE_GRADIENT_CLIPPING", "none")
@@ -635,7 +583,7 @@ class DeepSpeedPlugin:
self.deepspeed_config["steps_per_print"] = float("inf") # this will stop deepspeed from logging @ stdout
if self.zero3_init_flag is None:
self.zero3_init_flag = (
str_to_bool(os.environ.get("ACCELERATE_DEEPSPEED_ZERO3_INIT", str(self.hf_ds_config.is_zero3()))) == 1
strtobool(os.environ.get("ACCELERATE_DEEPSPEED_ZERO3_INIT", str(self.hf_ds_config.is_zero3()))) == 1
)
if self.zero3_init_flag and not self.hf_ds_config.is_zero3():
warnings.warn("DeepSpeed Zero3 Init flag is only applicable for ZeRO Stage 3. Setting it to False.")
@@ -730,10 +678,7 @@ class DeepSpeedPlugin:
if ds_config["train_batch_size"] == "auto":
del ds_config["train_batch_size"]
if compare_versions("transformers", "<", "4.33"):
from transformers.deepspeed import HfDeepSpeedConfig
else:
from transformers.integrations import HfDeepSpeedConfig
from transformers.deepspeed import HfDeepSpeedConfig
self.dschf = HfDeepSpeedConfig(ds_config) # keep this object alive # noqa
@@ -824,24 +769,21 @@ class FullyShardedDataParallelPlugin:
default=None,
metadata={"help": "A list of modules to ignore for FSDP."},
)
state_dict_type: "typing.Any" = field(
default=None,
metadata={
"help": "FSDP State Dict Type of type `torch.distributed.fsdp.fully_sharded_data_parallel.StateDictType`"
},
)
state_dict_config: "typing.Any" = field(
default=None,
metadata={
"help": "FSDP State Dict Config of type `torch.distributed.fsdp.fully_sharded_data_parallel.StateDictConfig`"
},
)
optim_state_dict_config: "typing.Any" = field(
default=None,
metadata={
"help": "FSDP Optimizer State Dict Config of type `torch.distributed.fsdp.fully_sharded_data_parallel.OptimStateDictConfig`"
},
)
limit_all_gathers: bool = field(
default=False,
metadata={
@@ -851,72 +793,41 @@ class FullyShardedDataParallelPlugin:
"Enabling this can help lower the number of CUDA malloc retries."
},
)
use_orig_params: bool = field(
default=False,
metadata={
"help": "If True, allows non-uniform `requires_grad` during init, which means support for interspersed frozen and trainable paramteres. "
"Useful in cases such as parameter-efficient fine-tuning. "
"Please refer this [blog](https://dev-discuss.pytorch.org/t/rethinking-pytorch-fully-sharded-data-parallel-fsdp-from-first-principles/1019)"
},
)
param_init_fn: Optional[Callable[[torch.nn.Module], None]] = field(
default=None,
metadata={
"help": "A Callable[torch.nn.Module] -> None that specifies how modules "
"that are currently on the meta device should be initialized onto an actual device."
},
)
sync_module_states: bool = field(
default=True,
metadata={
"help": "If True, each individually wrapped FSDP unit will broadcast module parameters from rank 0 "
"to ensure they are the same across all ranks after initialization"
},
)
forward_prefetch: bool = field(
default=False,
metadata={
"help": "If True, then FSDP explicitly prefetches the next upcoming "
"all-gather while executing in the forward pass. only use with Static graphs."
},
)
activation_checkpointing: bool = field(
default=False,
metadata={
"help": "If True, activation checkpointing is a technique to reduce memory usage by clearing activations of "
"certain layers and recomputing them during a backward pass. Effectively, this trades extra computation time "
"for reduced memory usage."
},
metadata={"help": "If True, enables parameter-efficient fine-tuning"},
)
def __post_init__(self):
from torch.distributed.fsdp.fully_sharded_data_parallel import BackwardPrefetch, CPUOffload, ShardingStrategy
from torch.distributed.fsdp.fully_sharded_data_parallel import (
BackwardPrefetch,
CPUOffload,
FullStateDictConfig,
ShardingStrategy,
StateDictType,
)
prefix = "FSDP_"
if self.sharding_strategy is None:
self.sharding_strategy = ShardingStrategy(int(os.environ.get(prefix + "SHARDING_STRATEGY", 1)))
self.sharding_strategy = ShardingStrategy(int(os.environ.get("FSDP_SHARDING_STRATEGY", 1)))
if self.cpu_offload is None:
if str_to_bool(os.environ.get(prefix + "OFFLOAD_PARAMS", "False")) == 1:
if os.environ.get("FSDP_OFFLOAD_PARAMS", "false") == "true":
self.cpu_offload = CPUOffload(offload_params=True)
else:
self.cpu_offload = CPUOffload(offload_params=False)
if self.backward_prefetch is None:
prefetch_policy = os.environ.get(prefix + "BACKWARD_PREFETCH", "NO_PREFETCH")
prefetch_policy = os.environ.get("FSDP_BACKWARD_PREFETCH", "NO_PREFETCH")
if prefetch_policy != FSDP_BACKWARD_PREFETCH[-1]:
self.backward_prefetch = BackwardPrefetch(FSDP_BACKWARD_PREFETCH.index(prefetch_policy) + 1)
if self.state_dict_type is None:
state_dict_type_policy = os.environ.get(prefix + "STATE_DICT_TYPE", "FULL_STATE_DICT")
self.set_state_dict_type(state_dict_type_policy)
self.use_orig_params = str_to_bool(os.environ.get(prefix + "USE_ORIG_PARAMS", "False")) == 1
self.sync_module_states = str_to_bool(os.environ.get(prefix + "SYNC_MODULE_STATES", "True")) == 1
self.forward_prefetch = str_to_bool(os.environ.get(prefix + "FORWARD_PREFETCH", "False")) == 1
self.activation_checkpointing = str_to_bool(os.environ.get(prefix + "ACTIVATION_CHECKPOINTING", "False")) == 1
state_dict_type_policy = os.environ.get("FSDP_STATE_DICT_TYPE", "FULL_STATE_DICT")
self.state_dict_type = StateDictType(FSDP_STATE_DICT_TYPE.index(state_dict_type_policy) + 1)
if self.sync_module_states:
self.param_init_fn = lambda x: x.to_empty(device=torch.cuda.current_device(), recurse=False)
if self.state_dict_type == StateDictType.FULL_STATE_DICT and self.state_dict_config is None:
self.state_dict_config = FullStateDictConfig(offload_to_cpu=True, rank0_only=True)
@staticmethod
def get_module_class_from_name(module, name):
@@ -941,15 +852,10 @@ class FullyShardedDataParallelPlugin:
def set_auto_wrap_policy(self, model):
from torch.distributed.fsdp.wrap import size_based_auto_wrap_policy, transformer_auto_wrap_policy
default_transformer_cls_names_to_wrap = (
",".join(model._no_split_modules) if getattr(model, "_no_split_modules", None) is not None else ""
)
if self.auto_wrap_policy is None:
auto_wrap_policy = os.environ.get("FSDP_AUTO_WRAP_POLICY", "NO_WRAP")
if auto_wrap_policy == FSDP_AUTO_WRAP_POLICY[0]:
transformer_cls_names_to_wrap = os.environ.get(
"FSDP_TRANSFORMER_CLS_TO_WRAP", default_transformer_cls_names_to_wrap
).split(",")
transformer_cls_names_to_wrap = os.environ.get("FSDP_TRANSFORMER_CLS_TO_WRAP", "").split(",")
transformer_cls_to_wrap = set()
for layer_class in transformer_cls_names_to_wrap:
transformer_cls = FullyShardedDataParallelPlugin.get_module_class_from_name(model, layer_class)
@@ -982,20 +888,94 @@ class FullyShardedDataParallelPlugin:
if self.mixed_precision_policy is None:
self.mixed_precision_policy = MixedPrecision(param_dtype=dtype, reduce_dtype=dtype, buffer_dtype=dtype)
def set_state_dict_type(self, state_dict_type_policy):
from torch.distributed.fsdp.fully_sharded_data_parallel import (
FullOptimStateDictConfig,
FullStateDictConfig,
StateDictType,
)
def save_model(self, accelerator, model, output_dir, model_index=0):
from torch.distributed.fsdp.fully_sharded_data_parallel import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp.fully_sharded_data_parallel import StateDictType
self.state_dict_type = StateDictType(FSDP_STATE_DICT_TYPE.index(state_dict_type_policy) + 1)
if is_torch_version("<=", "1.13.5"):
with FSDP.state_dict_type(model, self.state_dict_type, self.state_dict_config):
state_dict = model.state_dict()
else:
FSDP.set_state_dict_type(model, self.state_dict_type, self.state_dict_config)
state_dict = model.state_dict()
if self.state_dict_type == StateDictType.FULL_STATE_DICT:
if self.state_dict_config is None:
self.state_dict_config = FullStateDictConfig(offload_to_cpu=True, rank0_only=True)
if self.optim_state_dict_config is None:
self.optim_state_dict_config = FullOptimStateDictConfig(offload_to_cpu=True, rank0_only=True)
weights_name = f"{MODEL_NAME}.bin" if model_index == 0 else f"{MODEL_NAME}_{model_index}.bin"
output_model_file = os.path.join(output_dir, weights_name)
if accelerator.process_index == 0:
print(f"Saving model to {output_model_file}")
torch.save(state_dict, output_model_file)
print(f"Model saved to {output_model_file}")
else:
weights_name = (
f"{MODEL_NAME}_rank{accelerator.process_index}.bin"
if model_index == 0
else f"{MODEL_NAME}_{model_index}_rank{accelerator.process_index}.bin"
)
output_model_file = os.path.join(output_dir, weights_name)
print(f"Saving model to {output_model_file}")
torch.save(state_dict, output_model_file)
print(f"Model saved to {output_model_file}")
def load_model(self, accelerator, model, input_dir, model_index=0):
from torch.distributed.fsdp.fully_sharded_data_parallel import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp.fully_sharded_data_parallel import StateDictType
accelerator.wait_for_everyone()
if self.state_dict_type == StateDictType.FULL_STATE_DICT:
weights_name = f"{MODEL_NAME}.bin" if model_index == 0 else f"{MODEL_NAME}_{model_index}.bin"
input_model_file = os.path.join(input_dir, weights_name)
accelerator.print(f"Loading model from {input_model_file}")
state_dict = torch.load(input_model_file)
accelerator.print(f"Model loaded from {input_model_file}")
else:
weights_name = (
f"{MODEL_NAME}_rank{accelerator.process_index}.bin"
if model_index == 0
else f"{MODEL_NAME}_{model_index}_rank{accelerator.process_index}.bin"
)
input_model_file = os.path.join(input_dir, weights_name)
print(f"Loading model from {input_model_file}")
state_dict = torch.load(input_model_file)
print(f"Model loaded from {input_model_file}")
if is_torch_version("<=", "1.13.5"):
with FSDP.state_dict_type(model, self.state_dict_type, self.state_dict_config):
model.load_state_dict(state_dict)
else:
FSDP.set_state_dict_type(model, self.state_dict_type, self.state_dict_config)
model.load_state_dict(state_dict)
def save_optimizer(self, accelerator, optimizer, model, output_dir, optimizer_index=0, optim_input=None):
from torch.distributed.fsdp.fully_sharded_data_parallel import FullyShardedDataParallel as FSDP
optim_state = FSDP.full_optim_state_dict(model, optimizer, optim_input=optim_input)
if accelerator.process_index == 0:
optim_state_name = (
f"{OPTIMIZER_NAME}.bin" if optimizer_index == 0 else f"{OPTIMIZER_NAME}_{optimizer_index}.bin"
)
output_optimizer_file = os.path.join(output_dir, optim_state_name)
print(f"Saving Optimizer state to {output_optimizer_file}")
torch.save(optim_state, output_optimizer_file)
print(f"Optimizer state saved in {output_optimizer_file}")
def load_optimizer(self, accelerator, optimizer, model, input_dir, optimizer_index=0):
from torch.distributed.fsdp.fully_sharded_data_parallel import FullyShardedDataParallel as FSDP
accelerator.wait_for_everyone()
full_osd = None
if accelerator.process_index == 0:
optimizer_name = (
f"{OPTIMIZER_NAME}.bin" if optimizer_index == 0 else f"{OPTIMIZER_NAME}_{optimizer_index}.bin"
)
input_optimizer_file = os.path.join(input_dir, optimizer_name)
print(f"Loading Optimizer state from {input_optimizer_file}")
full_osd = torch.load(input_optimizer_file)
print(f"Optimizer state loaded from {input_optimizer_file}")
# called from all ranks, though only rank0 has a valid param for full_osd
sharded_osd = FSDP.scatter_full_optim_state_dict(full_osd, model)
optimizer.load_state_dict(sharded_osd)
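For context, a hedged sketch of how this plugin is typically wired into an `Accelerator` (the training objects it would later prepare are not shown):

from accelerate import Accelerator, FullyShardedDataParallelPlugin

fsdp_plugin = FullyShardedDataParallelPlugin(limit_all_gathers=True)
accelerator = Accelerator(fsdp_plugin=fsdp_plugin)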
@dataclass
@@ -1169,13 +1149,13 @@ class MegatronLMPlugin:
if self.gradient_clipping is None:
self.gradient_clipping = float(os.environ.get(prefix + "GRADIENT_CLIPPING", 1.0))
if self.recompute_activation is None:
self.recompute_activation = str_to_bool(os.environ.get(prefix + "RECOMPUTE_ACTIVATION", "False")) == 1
self.recompute_activation = strtobool(os.environ.get(prefix + "RECOMPUTE_ACTIVATION", "False")) == 1
if self.use_distributed_optimizer is None:
self.use_distributed_optimizer = (
str_to_bool(os.environ.get(prefix + "USE_DISTRIBUTED_OPTIMIZER", "False")) == 1
strtobool(os.environ.get(prefix + "USE_DISTRIBUTED_OPTIMIZER", "False")) == 1
)
if self.sequence_parallelism is None:
self.sequence_parallelism = str_to_bool(os.environ.get(prefix + "SEQUENCE_PARALLELISM", "False")) == 1
self.sequence_parallelism = strtobool(os.environ.get(prefix + "SEQUENCE_PARALLELISM", "False")) == 1
if self.pp_degree > 1 or self.use_distributed_optimizer:
self.DDP_impl = "local"
@@ -1371,134 +1351,3 @@ class MegatronLMPlugin:
self.megatron_lm_default_args[key] = True
elif key.startswith("no_log_"):
self.megatron_lm_default_args[key.replace("no_", "")] = True
@dataclass
class BnbQuantizationConfig:
"""
A plugin to enable BitsAndBytes 4bit and 8bit quantization
"""
load_in_8bit: bool = field(default=False, metadata={"help": "enable 8bit quantization."})
llm_int8_threshold: float = field(
default=6.0, metadata={"help": "value of the outlier threshold. Only relevant when load_in_8bit=True"}
)
load_in_4bit: bool = field(default=False, metadata={"help": "enable 4bit quantization."})
bnb_4bit_quant_type: str = field(
default="fp4",
metadata={
"help": "set the quantization data type in the `bnb.nn.Linear4Bit` layers. Options are {'fp4','np4'}."
},
)
bnb_4bit_use_double_quant: bool = field(
default=False,
metadata={
"help": "enable nested quantization where the quantization constants from the first quantization are quantized again."
},
)
bnb_4bit_compute_dtype: str = field(
default="fp16",
metadata={
"help": "This sets the computational type which might be different than the input time. For example, inputs might be "
"fp32, but computation can be set to bf16 for speedups. Options are {'fp32','fp16','bf16'}."
},
)
torch_dtype: torch.dtype = field(
default=None,
metadata={
"help": "this sets the dtype of the remaining non quantized layers. `bitsandbytes` library suggests to set the value"
"to `torch.float16` for 8 bit model and use the same dtype as the compute dtype for 4 bit model "
},
)
skip_modules: List[str] = field(
default=None,
metadata={
"help": "an explicit list of the modules that we don't quantize. The dtype of these modules will be `torch_dtype`."
},
)
keep_in_fp32_modules: List[str] = field(
default=None,
metadata={"help": "an explicit list of the modules that we don't quantize. We keep them in `torch.float32`."},
)
def __post_init__(self):
"""
Safety checker that arguments are correct - also replaces some NoneType arguments with their default values.
"""
if not isinstance(self.load_in_8bit, bool):
raise ValueError("load_in_8bit must be a boolean")
if not isinstance(self.load_in_4bit, bool):
raise ValueError("load_in_4bit must be a boolean")
if self.load_in_4bit and self.load_in_8bit:
raise ValueError("load_in_4bit and load_in_8bit can't both be True")
if not self.load_in_4bit and not self.load_in_8bit:
raise ValueError("load_in_4bit and load_in_8bit can't both be False")
if not isinstance(self.llm_int8_threshold, (int, float)):
raise ValueError("llm_int8_threshold must be a float or an int")
if not isinstance(self.bnb_4bit_quant_type, str):
raise ValueError("bnb_4bit_quant_type must be a string")
elif self.bnb_4bit_quant_type not in ["fp4", "nf4"]:
raise ValueError(f"bnb_4bit_quant_type must be in ['fp4','nf4'] but found {self.bnb_4bit_quant_type}")
if not isinstance(self.bnb_4bit_use_double_quant, bool):
raise ValueError("bnb_4bit_use_double_quant must be a boolean")
if isinstance(self.bnb_4bit_compute_dtype, str):
if self.bnb_4bit_compute_dtype == "fp32":
self.bnb_4bit_compute_dtype = torch.float32
elif self.bnb_4bit_compute_dtype == "fp16":
self.bnb_4bit_compute_dtype = torch.float16
elif self.bnb_4bit_compute_dtype == "bf16":
self.bnb_4bit_compute_dtype = torch.bfloat16
else:
raise ValueError(
f"bnb_4bit_compute_dtype must be in ['fp32','fp16','bf16'] but found {self.bnb_4bit_compute_dtype}"
)
elif not isinstance(self.bnb_4bit_compute_dtype, torch.dtype):
raise ValueError("bnb_4bit_compute_dtype must be a string or a torch.dtype")
if self.skip_modules is not None and not isinstance(self.skip_modules, list):
raise ValueError("skip_modules must be a list of strings")
if self.keep_in_fp32_modules is not None and not isinstance(self.keep_in_fp32_modules, list):
raise ValueError("keep_in_fp_32_modules must be a list of strings")
if self.load_in_4bit:
self.target_dtype = CustomDtype.INT4
if self.load_in_8bit:
self.target_dtype = torch.int8
if self.load_in_4bit and self.llm_int8_threshold != 6.0:
warnings.warn("llm_int8_threshold can only be used for model loaded in 8bit")
if isinstance(self.torch_dtype, str):
if self.torch_dtype == "fp32":
self.torch_dtype = torch.float32
elif self.torch_dtype == "fp16":
self.torch_dtype = torch.float16
elif self.torch_dtype == "bf16":
self.torch_dtype = torch.bfloat16
else:
raise ValueError(f"torch_dtype must be in ['fp32','fp16','bf16'] but found {self.torch_dtype}")
if self.load_in_8bit and self.torch_dtype is None:
self.torch_dtype = torch.float16
if self.load_in_4bit and self.torch_dtype is None:
self.torch_dtype = self.bnb_4bit_compute_dtype
if not isinstance(self.torch_dtype, torch.dtype):
raise ValueError("torch_dtype must be a torch.dtype")

View File

@@ -254,19 +254,16 @@ class DummyScheduler:
Args:
optimizer (`torch.optim.optimizer.Optimizer`):
The optimizer to wrap.
total_num_steps (int, *optional*):
total_num_steps (int):
Total number of steps.
warmup_num_steps (int, *optional*):
warmup_num_steps (int):
Number of steps for warmup.
lr_scheduler_callable (callable, *optional*):
A callable function that creates an LR Scheduler. It accepts only one argument `optimizer`.
**kwargs:
Other arguments.
"""
def __init__(self, optimizer, total_num_steps=None, warmup_num_steps=0, lr_scheduler_callable=None, **kwargs):
def __init__(self, optimizer, total_num_steps=None, warmup_num_steps=0, **kwargs):
self.optimizer = optimizer
self.total_num_steps = total_num_steps
self.warmup_num_steps = warmup_num_steps
self.lr_scheduler_callable = lr_scheduler_callable
self.kwargs = kwargs
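A hedged sketch of the wrapper in use when the DeepSpeed config file defines the scheduler (the `model` and the surrounding `accelerator` setup are illustrative):

from accelerate.utils import DummyOptim, DummyScheduler

optimizer = DummyOptim(model.parameters(), lr=3e-4)
scheduler = DummyScheduler(optimizer, total_num_steps=1000, warmup_num_steps=100)
model, optimizer, scheduler = accelerator.prepare(model, optimizer, scheduler)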

View File

@@ -13,21 +13,7 @@
# limitations under the License.
import os
def str_to_bool(value) -> int:
"""
Converts a string representation of truth to `True` (1) or `False` (0).
True values are `y`, `yes`, `t`, `true`, `on`, and `1`; False values are `n`, `no`, `f`, `false`, `off`, and `0`.
"""
value = value.lower()
if value in ("y", "yes", "t", "true", "on", "1"):
return 1
elif value in ("n", "no", "f", "false", "off", "0"):
return 0
else:
raise ValueError(f"invalid truth value {value}")
from distutils.util import strtobool
def get_int_from_env(env_keys, default):
@@ -42,7 +28,7 @@ def get_int_from_env(env_keys, default):
def parse_flag_from_env(key, default=False):
"""Returns truthy value for `key` from the env if available else the default."""
value = os.environ.get(key, str(default))
return str_to_bool(value) == 1 # As its name indicates `str_to_bool` actually returns an int...
return strtobool(value) == 1  # Despite its name, `strtobool` actually returns an int...
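A tiny illustration of the flag parsing (the env var names are hypothetical):

os.environ["MY_FLAG"] = "yes"
parse_flag_from_env("MY_FLAG")     # True  ("yes" is a truthy value)
parse_flag_from_env("OTHER_FLAG")  # False (unset, falls back to the default)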
def parse_choice_from_env(key, default="no"):

Some files were not shown because too many files have changed in this diff.