Mirror of https://github.com/huggingface/accelerate.git (synced 2025-11-13 06:15:13 +08:00)

Compare commits: make-versi... → v1.11.0 (341 commits)

[Commit list: 341 commits between make-versi... and v1.11.0, shown only as abbreviated SHAs (9a81156b4b, 5998f8625b, ..., 7ec8eab955); the author and date columns of the table are empty in this view.]

.github/PULL_REQUEST_TEMPLATE.md (12 changed lines)

@@ -37,11 +37,11 @@ members/contributors who may be interested in your PR.
 If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.

 - Big modeling: @SunMarc
-- Fully-Sharded Data Parallism: @muellerzr
-- DeepSpeed: @muellerzr
-- Command Line Interface: @muellerzr
-- Documentation: @muellerzr
-- Core parts of the library: @muellerzr @BenjaminBossan @SunMarc
-- Maintained examples: @muellerzr or @SunMarc
+- Fully-Sharded Data Parallism: @SunMarc @zach-huggingface
+- DeepSpeed: @SunMarc @zach-huggingface
+- Command Line Interface: @SunMarc @zach-huggingface
+- Documentation: @SunMarc @zach-huggingface
+- Core parts of the library: @BenjaminBossan @SunMarc @zach-huggingface
+- Maintained examples: @SunMarc or @zach-huggingface

 -->

(workflow file, name not shown in this extract)

@@ -15,7 +15,7 @@ jobs:
     outputs:
       version: ${{ steps.step1.outputs.version }}
     steps:
-      - uses: actions/checkout@v3.1.0
+      - uses: actions/checkout@v4
       - id: step1
         run: echo "version=$(python setup.py --version)" >> $GITHUB_OUTPUT

@@ -82,3 +82,23 @@ jobs:
           push: true
           tags: huggingface/accelerate:gpu-deepspeed-release-${{needs.get-version.outputs.version}}
+
+  version-cuda-fp8-transformerengine:
+    name: "Latest Accelerate GPU FP8 TransformerEngine [version]"
+    runs-on:
+      group: aws-g6-4xlarge-plus
+    needs: get-version
+    steps:
+      - name: Set up Docker Buildx
+        uses: docker/setup-buildx-action@v2
+      - name: Login to DockerHub
+        uses: docker/login-action@v2
+        with:
+          username: ${{ secrets.DOCKERHUB_USERNAME }}
+          password: ${{ secrets.DOCKERHUB_PASSWORD }}
+
+      - name: Build and Push GPU
+        uses: docker/build-push-action@v4
+        with:
+          file: docker/accelerate-gpu/Dockerfile
+          push: true
+          tags: huggingface/accelerate:gpu-fp8-transformerengine-release-${{needs.get-version.outputs.version}}

.github/workflows/build_and_run_tests.yml (6 changed lines)

@@ -16,13 +16,13 @@ jobs:
     outputs:
       changed: ${{ steps.was_changed.outputs.changed }}
     steps:
-      - uses: actions/checkout@v3.1.0
+      - uses: actions/checkout@v4
        with:
          fetch-depth: "2"

      - name: Get changed files
        id: changed-files
-        uses: tj-actions/changed-files@v41
+        uses: tj-actions/changed-files@3f54ebb830831fc121d3263c1857cfbdc310cdb9 #v42

      - name: Was setup changed
        id: was_changed

@@ -47,4 +47,4 @@ jobs:
   run-integration-tests:
     needs: build-docker-containers
     if: always()
-    uses: ./.github/workflows/self_hosted_integration_tests.yml
+    uses: ./.github/workflows/self_hosted_integration_tests.yml

.github/workflows/build_docker_images.yml (28 changed lines)

@@ -86,3 +86,31 @@ jobs:
             huggingface/accelerate:gpu-deepspeed-nightly
             huggingface/accelerate:gpu-deepspeed-nightly-${{ env.date }}
+
+  latest-cuda-fp8-transformerengine:
+    name: "Latest Accelerate GPU FP8 TransformerEngine [dev]"
+    runs-on:
+      group: aws-g6-4xlarge-plus
+    steps:
+      - name: Set up Docker Buildx
+        uses: docker/setup-buildx-action@v2
+      - name: Login to DockerHub
+        uses: docker/login-action@v2
+        with:
+          username: ${{ secrets.DOCKERHUB_USERNAME }}
+          password: ${{ secrets.DOCKERHUB_PASSWORD }}
+      - name: Get current date
+        id: date
+        run: |
+          echo "date=$(date '+%Y-%m-%d')" >> $GITHUB_ENV
+          # Get the previous month
+          echo "base_year=$(date -d 'last month' '+%y')" >> $GITHUB_ENV
+          echo "base_month=$(date -d 'last month' '+%m')" >> $GITHUB_ENV
+      - name: Build and Push GPU
+        uses: docker/build-push-action@v4
+        with:
+          file: benchmarks/fp8/transformer_engine/Dockerfile
+          push: true
+          tags: huggingface/accelerate:gpu-fp8-transformerengine-nightly-${{ env.date }}
+          build-args: |
+            BASE_YEAR=${{ env.base_year }}
+            BASE_MONTH=${{ env.base_month }}

.github/workflows/fp8_runner.yml (new file, 37 lines)

name: Test FP8 Runner

on:
  workflow_dispatch:

env:
  GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
jobs:
  set-prev-day:
    runs-on: ubuntu-latest
    outputs:
      prev-day: ${{ steps.set-prev-day.outputs.prev-day }}
    steps:
      - name: Set PREV_DAY
        id: set-prev-day
        run: |
          PREV_DAY=$(date -d "yesterday" '+%Y-%m-%d')
          echo "prev-day=$PREV_DAY" >> $GITHUB_OUTPUT
  run-fp8-tests:
    needs: set-prev-day
    runs-on:
      group: aws-g6e-12xlarge
    container:
      image: huggingface/accelerate:gpu-fp8-transformerengine-nightly-${{ needs.set-prev-day.outputs.prev-day }}
      options: --gpus all --shm-size "16gb"
    steps:
      - uses: actions/checkout@v3
      - name: Install the library
        run: |
          pip install -e .[test_prod,test_fp8]
      - name: Show installed libraries
        run: |
          pip freeze
      - name: Run TE FP8 tests
        run: |
          python -m pytest -s -v ./tests/test_fp8.py

.github/workflows/gaudi3_scheduled.yml (new file, 87 lines)

name: Gaudi3 tests (scheduled)

on:
  workflow_dispatch:
  schedule: # every day at 6 AM UTC
    - cron: "0 6 * * *"

concurrency:
  group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  run-gaudi3-tests:
    runs-on:
      group: itac-bm-emr-gaudi3-dell-2gaudi

    container:
      image: docker://vault.habana.ai/gaudi-docker/1.21.1/ubuntu22.04/habanalabs/pytorch-installer-2.6.0:latest
      options: --runtime=habana --shm-size=64G --cap-add=sys_nice --env HABANA_VISIBLE_DEVICES
      env:
        OMPI_MCA_btl_vader_single_copy_mechanism: none
        PT_ENABLE_INT64_SUPPORT: 1
        PT_HPU_LAZY_MODE: 0
        RUN_SLOW: 1

    steps:
      - name: HL-SMI (1)
        run: |
          hl-smi
          echo "HABANA_VISIBLE_DEVICES=${HABANA_VISIBLE_DEVICES}"
          echo "HABANA_VISIBLE_MODULES=${HABANA_VISIBLE_MODULES}"

      - name: Extract HPU visible modules
        id: add-modules
        run: |
          export HABANA_VISIBLE_MODULES=$(hl-smi -Q module_id -f csv,noheader | tr '\n' ',' | sed 's/,$//')
          echo "HABANA_VISIBLE_MODULES=${HABANA_VISIBLE_MODULES}" >> $GITHUB_ENV

      - name: HL-SMI (2)
        run: |
          hl-smi
          echo "HABANA_VISIBLE_DEVICES=${HABANA_VISIBLE_DEVICES}"
          echo "HABANA_VISIBLE_MODULES=${HABANA_VISIBLE_MODULES}"

      - name: Checkout to Accelerate
        uses: actions/checkout@v4

      - name: Install Accelerate with Transformers & DeepSpeed
        run: |
          pip install -e .[testing] \
            git+https://github.com/HabanaAI/DeepSpeed.git@1.20.0 \
            git+https://github.com/huggingface/transformers.git

      - name: Run CLI tests
        if: ${{ !cancelled() && (success() || failure()) }}
        run: |
          make test_cli

      - name: Run Core tests
        if: ${{ !cancelled() && (success() || failure()) }}
        run: |
          make test_core

      - name: Run Big Modeling tests
        if: ${{ !cancelled() && (success() || failure()) }}
        run: |
          make test_big_modeling

      - name: Run DeepSpeed integration tests
        if: ${{ !cancelled() && (success() || failure()) }}
        run: |
          make test_deepspeed

      - name: Run FSDP integration tests
        if: ${{ !cancelled() && (success() || failure()) }}
        run: |
          make test_fsdp

      - name: Run TP integration tests
        if: ${{ !cancelled() && (success() || failure()) }}
        run: |
          make test_tp

      - name: Run Examples tests
        if: ${{ !cancelled() && (success() || failure()) }}
        run: |
          make test_examples

.github/workflows/integration_tests.yml (8 changed lines)

@@ -26,11 +26,11 @@ jobs:
     strategy:
       fail-fast: false
     steps:
-      - uses: actions/checkout@v3.1.0
-      - name: Set up python 3.8
-        uses: actions/setup-python@v3
+      - uses: actions/checkout@v4
+      - name: Set up python 3.10
+        uses: actions/setup-python@v5
         with:
-          python-version: 3.8
+          python-version: '3.10'
           cache: 'pip'
           cache-dependency-path: 'setup.py'

.github/workflows/pr_style_bot.yml (new file, 19 lines)

# To run this bot, comment "@bot /style" on a PR
name: Style Bot

on:
  issue_comment:
    types: [created]

permissions:
  contents: write
  pull-requests: write

jobs:
  style:
    uses: huggingface/huggingface_hub/.github/workflows/style-bot-action.yml@main
    with:
      python_quality_dependencies: "[quality]"
      style_command_type: "default"
    secrets:
      bot_token: ${{ secrets.GITHUB_TOKEN }}

.github/workflows/quality.yml (8 changed lines)

@@ -6,11 +6,11 @@ jobs:
   quality:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v3.1.0
-      - name: Set up Python 3.8
-        uses: actions/setup-python@v3
+      - uses: actions/checkout@v4
+      - name: Set up Python 3.10
+        uses: actions/setup-python@v5
         with:
-          python-version: 3.8
+          python-version: '3.10'
           cache: 'pip'
           cache-dependency-path: 'setup.py'
       - name: Install Python dependencies

(workflow file, name not shown in this extract)

@@ -112,7 +112,7 @@ jobs:
           cd skorch;
           git config --global --add safe.directory '*'
           git checkout master && git pull
-          pip install .[testing]
+          pip install .[test]
           pip install flaky

       - name: Show installed libraries

.github/workflows/stale.yml (9 changed lines)

@@ -10,15 +10,18 @@ jobs:
     name: Close Stale Issues
     if: github.repository == 'huggingface/accelerate'
     runs-on: ubuntu-latest
+    permissions:
+      issues: write
+      pull-requests: write
     env:
       GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
     steps:
-      - uses: actions/checkout@v3.1.0
+      - uses: actions/checkout@v4

       - name: Setup Python
-        uses: actions/setup-python@v3
+        uses: actions/setup-python@v5
         with:
-          python-version: 3.8
+          python-version: '3.10'
           cache: 'pip'
           cache-dependency-path: 'setup.py'

.github/workflows/test.yml (10 changed lines)

@@ -38,11 +38,11 @@ jobs:
           test_rest
         ]
     steps:
-      - uses: actions/checkout@v3.1.0
-      - name: Set up python 3.8
-        uses: actions/setup-python@v3
+      - uses: actions/checkout@v4
+      - name: Set up python 3.10
+        uses: actions/setup-python@v5
         with:
-          python-version: 3.8
+          python-version: '3.10'
           cache: 'pip'
           cache-dependency-path: 'setup.py'

@@ -52,7 +52,7 @@ jobs:
           if [[ ${{ matrix.test-kind }} != test_prod ]]; then pip install -e .[testing,test_trackers]; fi
           if [[ ${{ matrix.test-kind }} = test_rest ]]; then pip uninstall comet_ml -y; fi
           if [[ ${{ matrix.pytorch-version }} = minimum ]]; then pip install torchvision==0.18.1 torch==2.3.1; fi
-          pip install pytest-reportlog tabulate setuptools
+          pip install pytest-reportlog tabulate setuptools importlib_metadata

       - name: Show installed libraries
         run: |

.github/workflows/test_imports.yml (8 changed lines)

@@ -26,11 +26,11 @@ jobs:
           minimum,
         ]
     steps:
-      - uses: actions/checkout@v3.1.0
-      - name: Set up python 3.8
-        uses: actions/setup-python@v3
+      - uses: actions/checkout@v4
+      - name: Set up python 3.10
+        uses: actions/setup-python@v5
         with:
-          python-version: 3.8
+          python-version: '3.10'
           cache: 'pip'
           cache-dependency-path: 'setup.py'

(contributing guide, file name not shown in this extract)

@@ -123,12 +123,15 @@ Follow these steps to start contributing:
 4. Set up a development environment by running the following command in a conda or a virtual environment you've created for working on this library:

    ```bash
-   $ pip install -e ".[quality]"
+   $ pip install -e ".[dev]"
    ```

+   This will install all testing and linting/code quality dependencies for the library (see `quality`, `test_dev`,
+   `test_prod` targets in [`setup.py`](./setup.py)).
+
    (If accelerate was already installed in the virtual environment, remove
    it with `pip uninstall accelerate` before reinstalling it in editable
-   mode with the `-e` flag.)
+   mode with the `-e` flag).

    Alternatively, if you are using [Visual Studio Code](https://code.visualstudio.com/Download), the fastest way to get set up is by using
    the provided Dev Container. Documentation on how to get started with dev containers is available [here](https://code.visualstudio.com/docs/remote/containers).

Makefile (44 changed lines)

@@ -8,37 +8,44 @@ extra_quality_checks:
 	python utils/check_copies.py
 	python utils/check_dummies.py
 	python utils/check_repo.py
 	doc-builder style src/accelerate docs/source --max_len 119

 # this target runs checks on all files
 quality:
 	ruff check $(check_dirs)
 	ruff format --check $(check_dirs)
 	doc-builder style src/accelerate docs/source --max_len 119 --check_only

 # Format source code automatically and check is there are any problems left that need manual fixing
 style:
 	ruff check $(check_dirs) --fix
 	ruff format $(check_dirs)
 	doc-builder style src/accelerate docs/source --max_len 119

 # Run tests for the library
-test_big_modeling:
-	python -m pytest -s -v ./tests/test_big_modeling.py ./tests/test_modeling_utils.py $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_big_modeling.log",)
-
 test_core:
-	python -m pytest -s -v ./tests/ --ignore=./tests/test_examples.py --ignore=./tests/deepspeed --ignore=./tests/test_big_modeling.py \
-	--ignore=./tests/fsdp --ignore=./tests/test_cli.py $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_core.log",)
+	python -m pytest -s -v ./tests/ \
+	--ignore=./tests/test_big_modeling.py \
+	--ignore=./tests/test_modeling_utils.py \
+	--ignore=./tests/test_examples.py \
+	--ignore=./tests/test_cli.py \
+	--ignore=./tests/deepspeed \
+	--ignore=./tests/fsdp \
+	--ignore=./tests/tp \
+	$(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_core.log",)

 test_cli:
 	python -m pytest -s -v ./tests/test_cli.py $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_cli.log",)

+test_big_modeling:
+	python -m pytest -s -v ./tests/test_big_modeling.py ./tests/test_modeling_utils.py $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_big_modeling.log",)
+
 test_deepspeed:
 	python -m pytest -s -v ./tests/deepspeed $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_deepspeed.log",)

 test_fsdp:
 	python -m pytest -s -v ./tests/fsdp $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_fsdp.log",)

+test_tp:
+	python -m pytest -s -v ./tests/tp $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_tp.log",)
+
 # Since the new version of pytest will *change* how things are collected, we need `deepspeed` to
 # run after test_core and test_cli
 test:

@@ -47,13 +54,14 @@ test:
 	$(MAKE) test_big_modeling
 	$(MAKE) test_deepspeed
 	$(MAKE) test_fsdp
+	$(MAKE) test_tp

 test_examples:
 	python -m pytest -s -v ./tests/test_examples.py $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_examples.log",)

 # Broken down example tests for the CI runners
 test_integrations:
-	python -m pytest -s -v ./tests/deepspeed ./tests/fsdp $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_integrations.log",)
+	python -m pytest -s -v ./tests/fsdp ./tests/tp ./tests/deepspeed $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_integrations.log",)

 test_example_differences:
 	python -m pytest -s -v ./tests/test_examples.py::ExampleDifferenceTests $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_example_diff.log",)

@@ -70,3 +78,21 @@ test_prod:

 test_rest:
 	python -m pytest -s -v ./tests/test_examples.py::FeatureExamplesTests -k "not by_step and not by_epoch" $(if $(IS_GITHUB_CI),--report-log "$(PYTORCH_VERSION)_rest.log",)
+
+# For developers to prepare a release
+prepare_release:
+	rm -rf dist build
+	python setup.py bdist_wheel sdist
+
+# Make sure this is ran in a fresh venv of some form
+install_test_release:
+	pip uninstall accelerate -y
+	pip install -i https://testpypi.python.org/pypi --extra-index-url https://pypi.org/simple accelerate$(if $(version),==$(version),)
+
+# Run as `make target=testpypi upload_release`
+upload_release:
+	@if [ "$(target)" != "testpypi" ] && [ "$(target)" != "pypi" ]; then \
+		echo "Error: target must be either 'testpypi' or 'pypi'"; \
+		exit 1; \
+	fi
+	twine upload dist/* -r $(target)

(project README, file name not shown in this extract)

@@ -157,6 +157,8 @@ accelerate launch --multi_gpu --num_processes 2 examples/nlp_example.py

 To learn more, check the CLI documentation available [here](https://huggingface.co/docs/accelerate/package_reference/cli).

+Or view the configuration zoo [here](https://github.com/huggingface/accelerate/blob/main/examples/config_yaml_templates/)
+
 ## Launching multi-CPU run using MPI

 🤗 Here is another way to launch multi-CPU run using MPI. You can learn how to install Open MPI on [this page](https://www.open-mpi.org/faq/?category=building#easy-build). You can use Intel MPI or MVAPICH as well.

@@ -256,7 +258,7 @@ pip install accelerate
 - multi-GPU on several nodes (machines)
 - TPU
 - FP16/BFloat16 mixed precision
-- FP8 mixed precision with [Transformer Engine](https://github.com/NVIDIA/TransformerEngine)
+- FP8 mixed precision with [Transformer Engine](https://github.com/NVIDIA/TransformerEngine) or [MS-AMP](https://github.com/Azure/MS-AMP/)
 - DeepSpeed support (Experimental)
 - PyTorch Fully Sharded Data Parallel (FSDP) support (Experimental)
 - Megatron-LM support (Experimental)

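(Aside, not part of the diff: the FP8 entries above are exactly what the benchmark scripts added later in this compare exercise. A minimal sketch of turning them on through `Accelerator`, assuming a CUDA machine with MS-AMP installed; the toy model and data here are invented purely for illustration.)

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

from accelerate import Accelerator
from accelerate.utils import FP8RecipeKwargs

# Toy model/data for illustration only; the FP8 recipe handler mirrors the
# MS-AMP benchmark scripts in this compare (backend="msamp", opt_level="O2").
model = torch.nn.Linear(64, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
dataset = TensorDataset(torch.randn(256, 64), torch.randint(0, 2, (256,)))
dataloader = DataLoader(dataset, batch_size=16)

accelerator = Accelerator(
    mixed_precision="fp8",  # or "fp16" / "bf16" for the other entries above
    kwargs_handlers=[FP8RecipeKwargs(backend="msamp", opt_level="O2")],
)
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

loss_fn = torch.nn.CrossEntropyLoss()
for inputs, labels in dataloader:
    loss = loss_fn(model(inputs), labels)
    accelerator.backward(loss)  # replaces loss.backward(); handles any scaling
    optimizer.step()
    optimizer.zero_grad()
```
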
(big model inference benchmark README, file name not shown in this extract)

@@ -13,7 +13,7 @@ pip install transformers
 To reproduce or test a new setup, run

 ```py
-python inference_acc.py model_name
+python big_model_inference.py model_name
 ```

 This script supports `gpt-j-6b`, `gpt-neox`, `opt` (30B version) and `T0pp` out of the box, but you can specify any valid checkpoint for `model_name`.

@@ -43,4 +43,4 @@ Note on the results:

 You will also note that Accelerate does not use anymore GPU and CPU RAM than necessary:
 - peak GPU memory is exactly the size of the model put on a given GPU
-- peak CPU memory is either the size of the biggest checkpoint shard or the part of the model offloaded on CPU, whichever is bigger.
+- peak CPU memory is either the size of the biggest checkpoint shard or the part of the model offloaded on CPU, whichever is bigger.

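(Aside, not part of the diff: the memory behaviour described above comes from Accelerate's big model inference path, where the checkpoint is dispatched across devices and partially offloaded. A minimal sketch of that pattern, assuming `transformers` is installed; the checkpoint id and prompt are placeholders rather than anything taken from the benchmark script itself.)

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint; the README above mentions gpt-j-6b, gpt-neox, opt and T0pp.
model_name = "EleutherAI/gpt-j-6b"

tokenizer = AutoTokenizer.from_pretrained(model_name)
# device_map="auto" lets Accelerate split the weights across the available GPUs
# and offload the remainder to CPU, which is why peak GPU memory stays at roughly
# the size of the shards placed on each device.
model = AutoModelForCausalLM.from_pretrained(
    model_name, device_map="auto", torch_dtype=torch.float16
)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
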
(benchmark measurement utilities, file name not shown in this extract)

@@ -18,6 +18,12 @@ import time
 import psutil
 import torch

+from accelerate.test_utils.testing import get_backend
+
+
+torch_device_type, _, _ = get_backend()
+torch_accelerator_module = getattr(torch, torch_device_type, torch.cuda)
+

 class PeakCPUMemory:
     def __init__(self):

@@ -54,16 +60,16 @@ def start_measure():
     measures = {"time": time.time()}

     gc.collect()
-    torch.cuda.empty_cache()
+    torch_accelerator_module.empty_cache()

     # CPU mem
     measures["cpu"] = psutil.Process().memory_info().rss
     cpu_peak_tracker.start()

     # GPU mem
-    for i in range(torch.cuda.device_count()):
-        measures[str(i)] = torch.cuda.memory_allocated(i)
-    torch.cuda.reset_peak_memory_stats()
+    for i in range(torch_accelerator_module.device_count()):
+        measures[str(i)] = torch_accelerator_module.memory_allocated(i)
+    torch_accelerator_module.reset_peak_memory_stats()

     return measures

@@ -73,16 +79,16 @@ def end_measure(start_measures):
     measures = {"time": time.time() - start_measures["time"]}

     gc.collect()
-    torch.cuda.empty_cache()
+    torch_accelerator_module.empty_cache()

     # CPU mem
     measures["cpu"] = (psutil.Process().memory_info().rss - start_measures["cpu"]) / 2**20
     measures["cpu-peak"] = (cpu_peak_tracker.stop() - start_measures["cpu"]) / 2**20

     # GPU mem
-    for i in range(torch.cuda.device_count()):
-        measures[str(i)] = (torch.cuda.memory_allocated(i) - start_measures[str(i)]) / 2**20
-        measures[f"{i}-peak"] = (torch.cuda.max_memory_allocated(i) - start_measures[str(i)]) / 2**20
+    for i in range(torch_accelerator_module.device_count()):
+        measures[str(i)] = (torch_accelerator_module.memory_allocated(i) - start_measures[str(i)]) / 2**20
+        measures[f"{i}-peak"] = (torch_accelerator_module.max_memory_allocated(i) - start_measures[str(i)]) / 2**20

     return measures

@@ -90,9 +96,9 @@ def end_measure(start_measures):
 def log_measures(measures, description):
     print(f"{description}:")
     print(f"- Time: {measures['time']:.2f}s")
-    for i in range(torch.cuda.device_count()):
-        print(f"- GPU {i} allocated: {measures[str(i)]:.2f}MiB")
+    for i in range(torch_accelerator_module.device_count()):
+        print(f"- {torch_device_type} {i} allocated: {measures[str(i)]:.2f}MiB")
         peak = measures[f"{i}-peak"]
-        print(f"- GPU {i} peak: {peak:.2f}MiB")
+        print(f"- {torch_device_type} {i} peak: {peak:.2f}MiB")
     print(f"- CPU RAM allocated: {measures['cpu']:.2f}MiB")
     print(f"- CPU RAM peak: {measures['cpu-peak']:.2f}MiB")

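(Aside, not part of the diff: the helpers above are meant to be used as a start/stop pair around the code being profiled. A minimal sketch; the `measures_util` import path is hypothetical, since the file's name is not visible in this extract.)

```python
import torch

# Hypothetical import path; point it at wherever the helpers above live.
from measures_util import end_measure, log_measures, start_measure

model = torch.nn.Linear(512, 512)
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

start_measures = start_measure()        # snapshot time, CPU RSS, per-device memory
for _ in range(10):
    model(torch.randn(64, 512, device=device))
measures = end_measure(start_measures)  # deltas: seconds for "time", MiB otherwise
log_measures(measures, "10 forward passes")
```
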
benchmarks/fp8/ms_amp/Dockerfile (new file, 12 lines)

FROM ghcr.io/azure/msamp

RUN pip install transformers evaluate datasets
RUN git clone https://github.com/huggingface/accelerate

RUN cd accelerate && \
    pip install -e . && \
    cd benchmarks/fp8

CMD ["bash"]

benchmarks/fp8/ms_amp/ddp.py (new file, 123 lines)

# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
This script tests to ensure that `accelerate` performs at the same level as raw `MS-AMP`.

This particular script verifies this for DDP training.
"""

import evaluate
import msamp
import torch
from fp8_utils import evaluate_model, get_training_utilities
from torch.nn.parallel import DistributedDataParallel as DDP

from accelerate import Accelerator
from accelerate.state import AcceleratorState
from accelerate.utils import FP8RecipeKwargs, get_grad_scaler, set_seed


MODEL_NAME = "bert-base-cased"
METRIC = evaluate.load("glue", "mrpc")


def train_baseline(opt_level="O2"):
    set_seed(42)
    scaler = get_grad_scaler()
    model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(MODEL_NAME)
    accelerator = Accelerator()
    device = accelerator.device

    model, optimizer = msamp.initialize(model, optimizer, opt_level=opt_level)

    model.to(device)

    # Convert the model to DDP
    device_ids, output_device = [accelerator.local_process_index], accelerator.local_process_index
    model = DDP(model, device_ids=device_ids, output_device=output_device)

    base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
    model.train()

    for i, batch in enumerate(train_dataloader):
        with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
            outputs = model(**batch)
            loss = outputs.loss
        scaler.scale(loss).backward()
        optimizer.step()
        optimizer.zero_grad()
        lr_scheduler.step()

    trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)

    assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
        f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
    )
    assert trained_model_results["f1"] > base_model_results["f1"], (
        f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
    )

    return base_model_results, trained_model_results


def train_integration(opt_level="O2"):
    kwargs_handlers = [FP8RecipeKwargs(backend="msamp", opt_level=opt_level)]
    AcceleratorState()._reset_state(True)
    accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=kwargs_handlers)
    set_seed(42)
    model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
        MODEL_NAME, accelerator=accelerator
    )

    model, optimizer = accelerator.prepare(model, optimizer)
    base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
    model.train()
    for i, batch in enumerate(train_dataloader):
        with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
            outputs = model(**batch)
            loss = outputs.loss
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
        lr_scheduler.step()

    trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)

    assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
        f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
    )
    assert trained_model_results["f1"] > base_model_results["f1"], (
        f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
    )

    return base_model_results, trained_model_results


if __name__ == "__main__":
    for opt_level in ["O1", "O2"]:
        baseline_not_trained, baseline_trained = train_baseline(opt_level)
        accelerator_not_trained, accelerator_trained = train_integration(opt_level)
        assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
            f"Accuracy not the same for untrained baseline and accelerator using opt_level={opt_level}: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
        )
        assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
            f"F1 not the same for untrained baseline and accelerator using opt_level={opt_level}: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
        )
        assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
            f"Accuracy not the same for trained baseline and accelerator using opt_level={opt_level}: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
        )
        assert baseline_trained["f1"] == accelerator_trained["f1"], (
            f"F1 not the same for trained baseline and accelerator using opt_level={opt_level}: {baseline_trained['f1']} == {accelerator_trained['f1']}"
        )

benchmarks/fp8/ms_amp/distrib_deepspeed.py (new file, 161 lines)

# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
# Licensed under the Apache License, Version 2.0 (full header identical to ddp.py above).

"""
This script tests to ensure that `accelerate` performs at the same level as raw `MS-AMP`.

This particular script verifies this for DeepSpeed training.

NOTE: MS-AMP does *not* support ZeRO-3.
"""

# import msamp.deepspeed as msamp_deepspeed
import evaluate
import torch
from fp8_utils import evaluate_model, get_training_utilities
from msamp import deepspeed as msamp_deepspeed

from accelerate import Accelerator, DeepSpeedPlugin
from accelerate.state import AcceleratorState
from accelerate.utils import set_seed


MODEL_NAME = "bert-base-cased"
METRIC = evaluate.load("glue", "mrpc")


def train_baseline(zero_stage: int = 1, opt_level: str = "O1"):
    set_seed(42)
    accelerator = Accelerator()
    model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
        MODEL_NAME, accelerator=accelerator
    )

    import numpy as np

    config = {
        "train_batch_size": 32,
        "train_micro_batch_size_per_gpu": 16,
        "gradient_accumulation_steps": 1,
        "zero_optimization": {
            "stage": zero_stage,
            "offload_optimizer": {"device": "none", "nvme_path": None},
            "offload_param": {"device": "none", "nvme_path": None},
        },
        "gradient_clipping": 1.0,
        "steps_per_print": np.inf,
        "bf16": {"enabled": True},
        "fp16": {"enabled": False},
        "zero_allow_untested_optimizer": True,
        "msamp": {
            "enabled": True,
            "opt_level": opt_level,
        },
    }
    (
        model,
        optimizer,
        _,
        _,
    ) = msamp_deepspeed.initialize(
        model=model,
        optimizer=optimizer,
        config_params=config,
    )

    base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
    model.train()

    for _ in range(2):
        for batch in train_dataloader:
            outputs = model(**batch)
            loss = outputs.loss
            model.backward(loss)
            model.step()
            for _ in range(accelerator.num_processes):
                lr_scheduler.step()

    trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
    model.destroy()
    torch.cuda.empty_cache()
    AcceleratorState()._reset_state(True)
    assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
        f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
    )
    assert trained_model_results["f1"] > base_model_results["f1"], (
        f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
    )

    return base_model_results, trained_model_results


def train_integration(zero_stage: int = 1, opt_level: str = "O1"):
    set_seed(42)
    deepspeed_plugin = DeepSpeedPlugin(
        zero_stage=zero_stage,
        enable_msamp=True,
        msamp_opt_level=opt_level,
    )
    accelerator = Accelerator(mixed_precision="fp8", deepspeed_plugin=deepspeed_plugin)
    accelerator.state.deepspeed_plugin.deepspeed_config["train_micro_batch_size_per_gpu"] = 16

    model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
        MODEL_NAME, accelerator=accelerator
    )

    model, optimizer, lr_scheduler = accelerator.prepare(model, optimizer, lr_scheduler)
    base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
    model.train()
    for _ in range(2):
        for batch in train_dataloader:
            outputs = model(**batch)
            loss = outputs.loss
            accelerator.backward(loss)
            optimizer.step()
            lr_scheduler.step()
            optimizer.zero_grad()

    trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
    model.destroy()
    torch.cuda.empty_cache()
    assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
        f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
    )
    assert trained_model_results["f1"] > base_model_results["f1"], (
        f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
    )

    AcceleratorState()._reset_state(True)
    return base_model_results, trained_model_results


if __name__ == "__main__":
    for zero_stage in [1, 2]:
        for opt_level in ["O1", "O2", "O3"]:
            baseline_not_trained, baseline_trained = train_baseline(zero_stage, opt_level)
            accelerator_not_trained, accelerator_trained = train_integration(zero_stage, opt_level)
            assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
                f"ZERO stage {zero_stage}, opt_level={opt_level}:\nAccuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
            )
            assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
                f"ZERO stage {zero_stage}, opt_level={opt_level}:\nF1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
            )
            assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
                f"ZERO stage {zero_stage}, opt_level={opt_level}:\nAccuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
            )
            assert baseline_trained["f1"] == accelerator_trained["f1"], (
                f"ZERO stage {zero_stage}, opt_level={opt_level}:\nF1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
            )

    torch.distributed.destroy_process_group()

benchmarks/fp8/ms_amp/fp8_utils.py (new file, 118 lines)

# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
# Licensed under the Apache License, Version 2.0 (full header identical to ddp.py above).

import torch


def get_dataloaders(model_name: str, batch_size: int = 16):
    from datasets import load_dataset
    from torch.utils.data import DataLoader
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    datasets = load_dataset("glue", "mrpc")

    def tokenize_function(examples):
        # max_length=None => use the model max length (it's actually the default)
        outputs = tokenizer(examples["sentence1"], examples["sentence2"], truncation=True, max_length=None)
        return outputs

    # Apply the method we just defined to all the examples in all the splits of the dataset
    # starting with the main process first:
    tokenized_datasets = datasets.map(
        tokenize_function,
        batched=True,
        remove_columns=["idx", "sentence1", "sentence2"],
    )

    # We also rename the 'label' column to 'labels' which is the expected name for labels by the models of the
    # transformers library
    tokenized_datasets = tokenized_datasets.rename_column("label", "labels")

    def collate_fn(examples):
        return tokenizer.pad(
            examples,
            padding="longest",
            pad_to_multiple_of=16,  # Specific for FP8
            return_tensors="pt",
        )

    # Instantiate dataloaders.
    train_dataloader = DataLoader(
        tokenized_datasets["train"], shuffle=True, collate_fn=collate_fn, batch_size=batch_size, drop_last=True
    )
    eval_dataloader = DataLoader(
        tokenized_datasets["validation"],
        shuffle=False,
        collate_fn=collate_fn,
        batch_size=16,
        drop_last=True,
    )

    return train_dataloader, eval_dataloader


def get_training_utilities(model_name: str, batch_size: int = 16, accelerator=None):
    """
    Returns a tuple of:
    - Model
    - Optimizer
    - Train dataloader (prepared)
    - Eval dataloader (prepared)
    - LR Scheduler
    Suitable for training on the MRPC dataset
    """
    from torch.optim import AdamW
    from transformers import AutoModelForSequenceClassification, get_linear_schedule_with_warmup

    from accelerate import Accelerator

    if accelerator is None:
        accelerator = Accelerator()
    model = AutoModelForSequenceClassification.from_pretrained(model_name)
    train_dataloader, eval_dataloader = get_dataloaders(model_name, batch_size)
    optimizer = AdamW(model.parameters(), lr=0.0001)
    lr_scheduler = get_linear_schedule_with_warmup(
        optimizer=optimizer,
        num_warmup_steps=100,
        num_training_steps=len(train_dataloader) * 2,
    )
    train_dataloader, eval_dataloader = accelerator.prepare(train_dataloader, eval_dataloader)
    return model, optimizer, train_dataloader, eval_dataloader, lr_scheduler


def get_named_parameters(model):
    """
    Same thing as `Accelerator.get_named_parameters` Returns a list of the named parameters of the model (extracted
    from parallel)
    """
    from accelerate.utils import extract_model_from_parallel

    model = extract_model_from_parallel(model)
    return {n: p for n, p in model.named_parameters()}


def evaluate_model(model, dataloader, metric, accelerator=None):
    "Turns model to .eval(), runs dataloader, calculates metric, then turns eval back on"
    model.eval()
    for step, batch in enumerate(dataloader):
        with torch.no_grad():
            # W/ MS-AMP, we need to cast while evaluating
            with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
                outputs = model(**batch)
        predictions = outputs.logits.argmax(dim=-1)
        references = batch["labels"]
        if accelerator is not None and accelerator.num_processes > 1:
            predictions, references = accelerator.gather_for_metrics((predictions, references))
        metric.add_batch(predictions=predictions, references=references)
    return metric.compute()

benchmarks/fp8/ms_amp/non_distributed.py (new file, 118 lines)

# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
# Licensed under the Apache License, Version 2.0 (full header identical to ddp.py above).

"""
This script tests to ensure that `accelerate` performs at the same level as raw `MS-AMP`.

This particular script verifies this for single GPU training.
"""

import evaluate
import msamp
import torch
from fp8_utils import evaluate_model, get_training_utilities

from accelerate import Accelerator
from accelerate.state import AcceleratorState
from accelerate.utils import FP8RecipeKwargs, get_grad_scaler, set_seed


MODEL_NAME = "bert-base-cased"
METRIC = evaluate.load("glue", "mrpc")


def train_baseline(opt_level="O2"):
    set_seed(42)
    model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(MODEL_NAME)

    model, optimizer = msamp.initialize(model, optimizer, opt_level=opt_level)
    model.to("cuda")

    base_model_results = evaluate_model(model, eval_dataloader, METRIC)
    model.train()
    scaler = get_grad_scaler()

    for batch in train_dataloader:
        batch = batch.to("cuda")
        with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
            outputs = model(**batch)
            loss = outputs.loss
        loss = scaler.scale(loss)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        lr_scheduler.step()

    trained_model_results = evaluate_model(model, eval_dataloader, METRIC)

    assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
        f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
    )
    assert trained_model_results["f1"] > base_model_results["f1"], (
        f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
    )

    return base_model_results, trained_model_results


def train_integration(opt_level="O2"):
    kwargs_handlers = [FP8RecipeKwargs(backend="msamp", opt_level=opt_level)]
    AcceleratorState()._reset_state(True)
    accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=kwargs_handlers)
    set_seed(42)
    model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
        MODEL_NAME, accelerator=accelerator
    )

    model, optimizer, lr_scheduler = accelerator.prepare(model, optimizer, lr_scheduler)
    base_model_results = evaluate_model(model, eval_dataloader, METRIC)
    model.train()

    for batch in train_dataloader:
        outputs = model(**batch)
        loss = outputs.loss
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
        lr_scheduler.step()

    trained_model_results = evaluate_model(model, eval_dataloader, METRIC)

    assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
        f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
    )
    assert trained_model_results["f1"] > base_model_results["f1"], (
        f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
    )

    return base_model_results, trained_model_results


if __name__ == "__main__":
    for opt_level in ["O1", "O2"]:
        baseline_not_trained, baseline_trained = train_baseline(opt_level)
        accelerator_not_trained, accelerator_trained = train_integration(opt_level)

        assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
            f"Accuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
        )
        assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
            f"F1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
        )
        assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
            f"Accuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
        )
        assert baseline_trained["f1"] == accelerator_trained["f1"], (
            f"F1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
        )

benchmarks/fp8/torchao/README.md (new file, 32 lines)

# FP8 Benchmarks

Comparing and running [torchao](https://github.com/pytorch/ao/tree/main/torchao/float8) FP8 with accelerate

## Overview

This repo provides scripts which compare native `torchao` model training against `accelerate`'s own integration. Each modeling type is segmented out via a script, supporting the following:

* Single GPU training (`non_distributed.py`)
* Multi-GPU training via DistributedDataParallelism (`ddp.py`)
* Fully Sharded Data Parallelism (`fsdp.py`)
* DeepSpeed ZeRO 1-3 (`deepspeed.py`)

To run them, it's recommended to use a docker image (see the attached `Dockerfile`) and not install `torchao` manually.

## Running:

There are official Docker images located at `huggingface/accelerate:gpu-fp8-torchao-nightly` which can be used.

You can run all scripts using the core `accelerate launch` command without any `accelerate config` being needed.

For single GPU, run it via `python`:

```bash
python non_distributed.py
```

For the rest, run it via `accelerate launch`:

```bash
accelerate launch ddp.py # or distrib_deepspeed.py, ddp.py
```

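(Aside, not part of the README: on the `accelerate` side, the integration these scripts benchmark amounts to passing `AORecipeKwargs()` to `Accelerator`, exactly as the `ddp.py` script below does. A minimal sketch with a toy model, assuming a GPU that supports FP8 and `torchao` installed; real runs use the BERT/MRPC setup from `fp8_utils.py`.)

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

from accelerate import Accelerator
from accelerate.utils import AORecipeKwargs

# Toy stand-in for the BERT/MRPC setup used by the benchmark scripts;
# linear dimensions are kept multiples of 16, as torchao FP8 requires.
model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
data = DataLoader(TensorDataset(torch.randn(256, 64), torch.randint(0, 2, (256,))), batch_size=16)

accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=[AORecipeKwargs()])
model, optimizer, data = accelerator.prepare(model, optimizer, data)

loss_fn = torch.nn.CrossEntropyLoss()
for inputs, labels in data:
    loss = loss_fn(model(inputs), labels)
    accelerator.backward(loss)
    optimizer.step()
    optimizer.zero_grad()
```
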
158
benchmarks/fp8/torchao/ddp.py
Normal file
158
benchmarks/fp8/torchao/ddp.py
Normal file
@ -0,0 +1,158 @@
|
||||
# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
"""
|
||||
This script tests to ensure that `accelerate` performs at the same level as raw `torchao`.
|
||||
|
||||
This particular script verifies this for DDP training.
|
||||
"""
|
||||
|
||||
from functools import partial
|
||||
|
||||
import evaluate
|
||||
import torch
|
||||
from fp8_utils import get_training_utilities
|
||||
from torch.nn.parallel import DistributedDataParallel as DDP
|
||||
from torchao.float8 import convert_to_float8_training
|
||||
|
||||
from accelerate import Accelerator
|
||||
from accelerate.state import AcceleratorState
|
||||
from accelerate.utils import AORecipeKwargs, set_seed
|
||||
|
||||
|
||||
MODEL_NAME = "bert-base-cased"
|
||||
METRIC = evaluate.load("glue", "mrpc")
|
||||
|
||||
|
||||
def evaluate_model(model, dataloader, metric, accelerator=None):
|
||||
"Turns model to .eval(), runs dataloader, calculates metric, then turns eval back on"
|
||||
model.eval()
|
||||
for step, batch in enumerate(dataloader):
|
||||
with torch.no_grad():
|
||||
outputs = model(**batch)
|
||||
predictions = outputs.logits.argmax(dim=-1)
|
||||
references = batch["labels"]
|
||||
if accelerator is not None and accelerator.num_processes > 1:
|
||||
predictions, references = accelerator.gather_for_metrics((predictions, references))
|
||||
metric.add_batch(predictions=predictions, references=references)
|
||||
return metric.compute()
|
||||
|
||||
|
||||
def filter_linear_layers(module, fqn, first_layer_name=None, last_layer_name=None):
|
||||
if isinstance(module, torch.nn.Linear):
|
||||
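# torchao's float8 kernels require in/out features divisible by 16, so skip any Linear that does not satisfy this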
if module.in_features % 16 != 0 or module.out_features % 16 != 0:
|
||||
return False
|
||||
# For stability reasons, we skip the first and last linear layers
|
||||
# Otherwise can lead to the model not training or converging properly
|
||||
if fqn in (first_layer_name, last_layer_name):
|
||||
return False
|
||||
return True
|
||||
|
||||
|
||||
def train_baseline():
|
||||
set_seed(42)
|
||||
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(MODEL_NAME)
|
||||
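# Record the names of the first and last Linear layers so filter_linear_layers can skip them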
first_linear = None
|
||||
last_linear = None
|
||||
for name, module in model.named_modules():
|
||||
if isinstance(module, torch.nn.Linear):
|
||||
if first_linear is None:
|
||||
first_linear = name
|
||||
last_linear = name
|
||||
func = partial(filter_linear_layers, first_layer_name=first_linear, last_layer_name=last_linear)
|
||||
accelerator = Accelerator()
|
||||
device = accelerator.device
|
||||
model.to(device)
|
||||
|
||||
convert_to_float8_training(model, module_filter_fn=func)
|
||||
|
||||
# Convert the model to DDP
|
||||
device_ids, output_device = [accelerator.local_process_index], accelerator.local_process_index
|
||||
model = DDP(model, device_ids=device_ids, output_device=output_device)
|
||||
|
||||
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
|
||||
model.train()
|
||||
|
||||
for batch in train_dataloader:
|
||||
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
|
||||
batch = batch.to(device)
|
||||
outputs = model(**batch)
|
||||
loss = outputs.loss
|
||||
loss.backward()
|
||||
optimizer.step()
|
||||
optimizer.zero_grad()
|
||||
lr_scheduler.step()
|
||||
|
||||
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
|
||||
|
||||
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
|
||||
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
|
||||
)
|
||||
assert trained_model_results["f1"] > base_model_results["f1"], (
|
||||
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
|
||||
)
|
||||
|
||||
return base_model_results, trained_model_results
|
||||
|
||||
|
||||
def train_integration():
|
||||
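# Reset any state left over from the baseline run so the integration starts from a fresh Accelerator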
AcceleratorState()._reset_state(True)
|
||||
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=[AORecipeKwargs()])
|
||||
set_seed(42)
|
||||
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
|
||||
MODEL_NAME, accelerator=accelerator
|
||||
)
|
||||
|
||||
model, optimizer = accelerator.prepare(model, optimizer)
|
||||
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
|
||||
model.train()
|
||||
|
||||
for batch in train_dataloader:
|
||||
outputs = model(**batch)
|
||||
loss = outputs.loss
|
||||
accelerator.backward(loss)
|
||||
optimizer.step()
|
||||
optimizer.zero_grad()
|
||||
lr_scheduler.step()
|
||||
|
||||
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
|
||||
|
||||
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
|
||||
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
|
||||
)
|
||||
assert trained_model_results["f1"] > base_model_results["f1"], (
|
||||
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
|
||||
)
|
||||
|
||||
return base_model_results, trained_model_results
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
baseline_not_trained, baseline_trained = train_baseline()
|
||||
accelerator_not_trained, accelerator_trained = train_integration()
|
||||
|
||||
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
|
||||
f"Accuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
|
||||
)
|
||||
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
|
||||
f"F1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
|
||||
)
|
||||
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
|
||||
f"Accuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
|
||||
)
|
||||
assert baseline_trained["f1"] == accelerator_trained["f1"], (
|
||||
f"F1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
|
||||
)
|
||||
|
||||
torch.distributed.destroy_process_group()
|
||||
213
benchmarks/fp8/torchao/distrib_deepspeed.py
Normal file
@ -0,0 +1,213 @@
|
||||
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
"""
|
||||
This script tests to ensure that `accelerate` performs at the same level as raw `torchao`.
|
||||
|
||||
This particular script verifies this for DeepSpeed training.
|
||||
"""
|
||||
|
||||
from functools import partial
|
||||
from unittest.mock import patch
|
||||
|
||||
import deepspeed
|
||||
import evaluate
|
||||
import torch
|
||||
from fp8_utils import evaluate_model, get_training_utilities
|
||||
from torchao.float8 import convert_to_float8_training
|
||||
from transformers.integrations import HfDeepSpeedConfig
|
||||
|
||||
from accelerate import Accelerator, DeepSpeedPlugin
|
||||
from accelerate.state import AcceleratorState
|
||||
from accelerate.utils import AORecipeKwargs, set_seed
|
||||
|
||||
|
||||
MODEL_NAME = "bert-base-cased"
|
||||
METRIC = evaluate.load("glue", "mrpc")
|
||||
|
||||
|
||||
def filter_linear_layers(module, fqn, first_layer_name=None, last_layer_name=None):
|
||||
if isinstance(module, torch.nn.Linear):
|
||||
if module.in_features % 16 != 0 or module.out_features % 16 != 0:
|
||||
return False
|
||||
# For stability reasons, we skip the first and last linear layers
|
||||
# Otherwise can lead to the model not training or converging properly
|
||||
if fqn in (first_layer_name, last_layer_name):
|
||||
return False
|
||||
return True
|
||||
|
||||
|
||||
def train_baseline(zero_stage: int = 1):
|
||||
set_seed(42)
|
||||
# This forces transformers to think Zero-3 Init should be used
|
||||
with patch("transformers.integrations.deepspeed.is_deepspeed_zero3_enabled") as mock:
|
||||
mock.return_value = zero_stage == 3
|
||||
|
||||
config = HfDeepSpeedConfig(
|
||||
{
|
||||
"train_micro_batch_size_per_gpu": 16,
|
||||
"gradient_accumulation_steps": 1,
|
||||
"zero_optimization": {"stage": zero_stage},
|
||||
}
|
||||
)
|
||||
plugin = DeepSpeedPlugin(hf_ds_config=config)
|
||||
accelerator = Accelerator(deepspeed_plugin=plugin)
|
||||
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
|
||||
MODEL_NAME, accelerator=accelerator
|
||||
)
|
||||
first_linear = None
|
||||
last_linear = None
|
||||
for name, module in model.named_modules():
|
||||
if isinstance(module, torch.nn.Linear):
|
||||
if first_linear is None:
|
||||
first_linear = name
|
||||
last_linear = name
|
||||
func = partial(filter_linear_layers, first_layer_name=first_linear, last_layer_name=last_linear)
|
||||
|
||||
convert_to_float8_training(model, module_filter_fn=func)
|
||||
|
||||
import numpy as np
|
||||
|
||||
config = {
|
||||
"train_batch_size": 32,
|
||||
"train_micro_batch_size_per_gpu": 16,
|
||||
"gradient_accumulation_steps": 1,
|
||||
"zero_optimization": {
|
||||
"stage": zero_stage,
|
||||
"offload_optimizer": {"device": "none", "nvme_path": None},
|
||||
"offload_param": {"device": "none", "nvme_path": None},
|
||||
"stage3_gather_16bit_weights_on_model_save": False,
|
||||
},
|
||||
"gradient_clipping": 1.0,
|
||||
"steps_per_print": np.inf,
|
||||
"bf16": {"enabled": True},
|
||||
"fp16": {"enabled": False},
|
||||
"zero_allow_untested_optimizer": True,
|
||||
}
|
||||
|
||||
(
|
||||
model,
|
||||
optimizer,
|
||||
_,
|
||||
lr_scheduler,
|
||||
) = deepspeed.initialize(
|
||||
model=model,
|
||||
optimizer=optimizer,
|
||||
lr_scheduler=lr_scheduler,
|
||||
config_params=config,
|
||||
)
|
||||
|
||||
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
|
||||
model.train()
|
||||
|
||||
model_outputs = []
|
||||
data = []
|
||||
|
||||
for batch in train_dataloader:
|
||||
outputs = model(**batch)
|
||||
data.append(batch.to("cpu"))
|
||||
model_outputs.append(outputs.logits.to("cpu"))
|
||||
loss = outputs.loss
|
||||
model.backward(loss)
|
||||
model.step()
|
||||
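# The scheduler prepared by accelerate steps the wrapped scheduler once per process by default, so the baseline mirrors that here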
for _ in range(accelerator.num_processes):
|
||||
lr_scheduler.step()
|
||||
|
||||
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
|
||||
model.destroy()
|
||||
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
|
||||
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
|
||||
)
|
||||
assert trained_model_results["f1"] > base_model_results["f1"], (
|
||||
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
|
||||
)
|
||||
|
||||
del config
|
||||
return base_model_results, trained_model_results, model_outputs, data
|
||||
|
||||
|
||||
def train_integration(zero_stage: int = 1):
|
||||
set_seed(42)
|
||||
AcceleratorState()._reset_state(True)
|
||||
config = HfDeepSpeedConfig(
|
||||
{
|
||||
"train_micro_batch_size_per_gpu": 16,
|
||||
"gradient_accumulation_steps": 1,
|
||||
"zero_optimization": {"stage": zero_stage},
|
||||
}
|
||||
)
|
||||
deepspeed_plugin = DeepSpeedPlugin(
|
||||
hf_ds_config=config,
|
||||
)
|
||||
# This forces transformers to think Zero-3 Init should be used
|
||||
with patch("transformers.integrations.deepspeed.is_deepspeed_zero3_enabled") as mock:
|
||||
mock.return_value = zero_stage == 3
|
||||
accelerator = Accelerator(
|
||||
mixed_precision="fp8", kwargs_handlers=[AORecipeKwargs()], deepspeed_plugin=deepspeed_plugin
|
||||
)
|
||||
|
||||
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
|
||||
MODEL_NAME, accelerator=accelerator
|
||||
)
|
||||
|
||||
model, optimizer, lr_scheduler, train_dataloader, eval_dataloader = accelerator.prepare(
|
||||
model, optimizer, lr_scheduler, train_dataloader, eval_dataloader
|
||||
)
|
||||
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
|
||||
model.train()
|
||||
model_outputs = []
|
||||
data = []
|
||||
for batch in train_dataloader:
|
||||
outputs = model(**batch)
|
||||
data.append(batch.to("cpu"))
|
||||
model_outputs.append(outputs.logits.to("cpu"))
|
||||
loss = outputs.loss
|
||||
accelerator.backward(loss)
|
||||
optimizer.step()
|
||||
lr_scheduler.step()
|
||||
optimizer.zero_grad()
|
||||
|
||||
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
|
||||
model.destroy()
|
||||
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
|
||||
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
|
||||
)
|
||||
assert trained_model_results["f1"] > base_model_results["f1"], (
|
||||
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
|
||||
)
|
||||
|
||||
del config
|
||||
return base_model_results, trained_model_results, model_outputs, data
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
for zero_stage in [1, 2, 3]:
|
||||
baseline_not_trained, baseline_trained, baseline_outputs, baseline_data = train_baseline(zero_stage)
|
||||
accelerator_not_trained, accelerator_trained, accelerator_outputs, accelerator_data = train_integration(
|
||||
zero_stage
|
||||
)
|
||||
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
|
||||
f"ZERO stage {zero_stage}: Accuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
|
||||
)
|
||||
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
|
||||
f"ZERO stage {zero_stage}: F1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
|
||||
)
|
||||
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
|
||||
f"ZERO stage {zero_stage}: Accuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
|
||||
)
|
||||
assert baseline_trained["f1"] == accelerator_trained["f1"], (
|
||||
f"ZERO stage {zero_stage}: F1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
|
||||
)
|
||||
AcceleratorState()._reset_state(True)
|
||||
torch.distributed.destroy_process_group()
|
||||
116
benchmarks/fp8/torchao/fp8_utils.py
Normal file
@ -0,0 +1,116 @@
|
||||
# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
import torch
|
||||
|
||||
|
||||
def get_dataloaders(model_name: str, batch_size: int = 16):
|
||||
from datasets import load_dataset
|
||||
from torch.utils.data import DataLoader
|
||||
from transformers import AutoTokenizer
|
||||
|
||||
tokenizer = AutoTokenizer.from_pretrained(model_name)
|
||||
datasets = load_dataset("glue", "mrpc")
|
||||
|
||||
def tokenize_function(examples):
|
||||
# max_length=None => use the model max length (it's actually the default)
|
||||
outputs = tokenizer(examples["sentence1"], examples["sentence2"], truncation=True, max_length=None)
|
||||
return outputs
|
||||
|
||||
# Apply the method we just defined to all the examples in all the splits of the dataset
|
||||
# starting with the main process first:
|
||||
tokenized_datasets = datasets.map(
|
||||
tokenize_function,
|
||||
batched=True,
|
||||
remove_columns=["idx", "sentence1", "sentence2"],
|
||||
)
|
||||
|
||||
# We also rename the 'label' column to 'labels' which is the expected name for labels by the models of the
|
||||
# transformers library
|
||||
tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
|
||||
|
||||
def collate_fn(examples):
|
||||
return tokenizer.pad(
|
||||
examples,
|
||||
padding="longest",
|
||||
pad_to_multiple_of=16, # Specific for FP8
|
||||
return_tensors="pt",
|
||||
)
|
||||
|
||||
# Instantiate dataloaders.
|
||||
train_dataloader = DataLoader(
|
||||
tokenized_datasets["train"], shuffle=True, collate_fn=collate_fn, batch_size=batch_size, drop_last=True
|
||||
)
|
||||
eval_dataloader = DataLoader(
|
||||
tokenized_datasets["validation"],
|
||||
shuffle=False,
|
||||
collate_fn=collate_fn,
|
||||
batch_size=16,
|
||||
drop_last=True,
|
||||
)
|
||||
|
||||
return train_dataloader, eval_dataloader
|
||||
|
||||
|
||||
def get_training_utilities(model_name: str, batch_size: int = 16, accelerator=None, prepare=True):
|
||||
"""
|
||||
Returns a tuple of:
|
||||
- Model
|
||||
- Optimizer
|
||||
- Train dataloader (prepared)
|
||||
- Eval dataloader (prepared)
|
||||
- LR Scheduler
|
||||
Suitable for training on the MRPC dataset
|
||||
"""
|
||||
from torch.optim import AdamW
|
||||
from transformers import AutoModelForSequenceClassification, get_linear_schedule_with_warmup
|
||||
|
||||
from accelerate import Accelerator
|
||||
|
||||
if accelerator is None:
|
||||
accelerator = Accelerator()
|
||||
model = AutoModelForSequenceClassification.from_pretrained(model_name)
|
||||
train_dataloader, eval_dataloader = get_dataloaders(model_name, batch_size)
|
||||
optimizer = AdamW(model.parameters(), lr=0.0001)
|
||||
lr_scheduler = get_linear_schedule_with_warmup(
|
||||
optimizer=optimizer,
|
||||
num_warmup_steps=100,
|
||||
num_training_steps=len(train_dataloader) * 2,
|
||||
)
|
||||
train_dataloader, eval_dataloader = accelerator.prepare(train_dataloader, eval_dataloader)
|
||||
return model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
|
||||
|
||||
|
||||
def get_named_parameters(model):
|
||||
"""
|
||||
Same as `Accelerator.get_named_parameters`: returns a dict of the named parameters of the model (extracted
|
||||
from parallel)
|
||||
"""
|
||||
from accelerate.utils import extract_model_from_parallel
|
||||
|
||||
model = extract_model_from_parallel(model)
|
||||
return {n: p for n, p in model.named_parameters()}
|
||||
|
||||
|
||||
def evaluate_model(model, dataloader, metric, accelerator=None):
|
||||
"Turns model to .eval(), runs dataloader, calculates metric, then turns eval back on"
|
||||
model.eval()
|
||||
for step, batch in enumerate(dataloader):
|
||||
with torch.no_grad():
|
||||
outputs = model(**batch)
|
||||
predictions = outputs.logits.argmax(dim=-1)
|
||||
references = batch["labels"]
|
||||
if accelerator is not None and accelerator.num_processes > 1:
|
||||
predictions, references = accelerator.gather_for_metrics((predictions, references))
|
||||
metric.add_batch(predictions=predictions, references=references)
|
||||
return metric.compute()
|
||||
173
benchmarks/fp8/torchao/fsdp.py
Normal file
@ -0,0 +1,173 @@
|
||||
# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
"""
|
||||
This script tests to ensure that `accelerate` performs at the same level as raw `torchao`.
|
||||
|
||||
This particular script verifies this for FSDP training.
|
||||
"""
|
||||
|
||||
from functools import partial
|
||||
|
||||
import evaluate
|
||||
import torch
|
||||
from fp8_utils import get_training_utilities
|
||||
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
|
||||
from torch.distributed.fsdp import MixedPrecision
|
||||
from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy
|
||||
from torchao.float8 import convert_to_float8_training
|
||||
from transformers.models.bert import BertLayer
|
||||
|
||||
from accelerate import Accelerator
|
||||
from accelerate import FullyShardedDataParallelPlugin as FSDPPlugin
|
||||
from accelerate.state import AcceleratorState
|
||||
from accelerate.utils import AORecipeKwargs, set_seed
|
||||
|
||||
|
||||
MODEL_NAME = "bert-base-cased"
|
||||
METRIC = evaluate.load("glue", "mrpc")
|
||||
|
||||
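# Wrap each BertLayer in its own FSDP unit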
FSDP_WRAP_POLICY = partial(transformer_auto_wrap_policy, transformer_layer_cls={BertLayer})
|
||||
|
||||
|
||||
def filter_linear_layers(module, fqn, first_layer_name=None, last_layer_name=None):
|
||||
if isinstance(module, torch.nn.Linear):
|
||||
if module.in_features % 16 != 0 or module.out_features % 16 != 0:
|
||||
return False
|
||||
# For stability reasons, we skip the first and last linear layers
|
||||
# Otherwise can lead to the model not training or converging properly
|
||||
if fqn in (first_layer_name, last_layer_name):
|
||||
return False
|
||||
return True
|
||||
|
||||
|
||||
def evaluate_model(model, dataloader, metric, accelerator=None):
|
||||
"Turns model to .eval(), runs dataloader, calculates metric, then turns eval back on"
|
||||
model.eval()
|
||||
for step, batch in enumerate(dataloader):
|
||||
with torch.no_grad():
|
||||
outputs = model(**batch)
|
||||
predictions = outputs.logits.argmax(dim=-1)
|
||||
references = batch["labels"]
|
||||
if accelerator is not None and accelerator.num_processes > 1:
|
||||
predictions, references = accelerator.gather_for_metrics((predictions, references))
|
||||
metric.add_batch(predictions=predictions, references=references)
|
||||
return metric.compute()
|
||||
|
||||
|
||||
def train_baseline():
|
||||
set_seed(42)
|
||||
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(MODEL_NAME)
|
||||
first_linear = None
|
||||
last_linear = None
|
||||
for name, module in model.named_modules():
|
||||
if isinstance(module, torch.nn.Linear):
|
||||
if first_linear is None:
|
||||
first_linear = name
|
||||
last_linear = name
|
||||
func = partial(filter_linear_layers, first_layer_name=first_linear, last_layer_name=last_linear)
|
||||
accelerator = Accelerator()
|
||||
device = accelerator.device
|
||||
model.to(device)
|
||||
|
||||
convert_to_float8_training(model, module_filter_fn=func)
|
||||
|
||||
# Convert the model to FSDP
|
||||
model = FSDP(
|
||||
model,
|
||||
use_orig_params=True,
|
||||
mixed_precision=MixedPrecision(param_dtype=torch.bfloat16, reduce_dtype=torch.float32),
|
||||
auto_wrap_policy=FSDP_WRAP_POLICY,
|
||||
)
|
||||
|
||||
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
|
||||
model.train()
|
||||
|
||||
for batch in train_dataloader:
|
||||
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
|
||||
batch = batch.to(device)
|
||||
outputs = model(**batch)
|
||||
loss = outputs.loss
|
||||
loss.backward()
|
||||
optimizer.step()
|
||||
optimizer.zero_grad()
|
||||
lr_scheduler.step()
|
||||
|
||||
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
|
||||
|
||||
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
|
||||
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
|
||||
)
|
||||
assert trained_model_results["f1"] > base_model_results["f1"], (
|
||||
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
|
||||
)
|
||||
|
||||
return base_model_results, trained_model_results
|
||||
|
||||
|
||||
def train_integration():
|
||||
AcceleratorState()._reset_state(True)
|
||||
fsdp_plugin = FSDPPlugin(
|
||||
auto_wrap_policy=FSDP_WRAP_POLICY,
|
||||
use_orig_params=True,
|
||||
mixed_precision_policy=MixedPrecision(param_dtype=torch.bfloat16, reduce_dtype=torch.float32),
|
||||
)
|
||||
accelerator = Accelerator(mixed_precision="fp8", fsdp_plugin=fsdp_plugin, kwargs_handlers=[AORecipeKwargs()])
|
||||
set_seed(42)
|
||||
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
|
||||
MODEL_NAME, accelerator=accelerator
|
||||
)
|
||||
|
||||
model, optimizer = accelerator.prepare(model, optimizer)
|
||||
base_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
|
||||
model.train()
|
||||
|
||||
for batch in train_dataloader:
|
||||
outputs = model(**batch)
|
||||
loss = outputs.loss
|
||||
accelerator.backward(loss)
|
||||
optimizer.step()
|
||||
optimizer.zero_grad()
|
||||
lr_scheduler.step()
|
||||
|
||||
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
|
||||
|
||||
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
|
||||
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
|
||||
)
|
||||
assert trained_model_results["f1"] > base_model_results["f1"], (
|
||||
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
|
||||
)
|
||||
|
||||
return base_model_results, trained_model_results
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
baseline_not_trained, baseline_trained = train_baseline()
|
||||
accelerator_not_trained, accelerator_trained = train_integration()
|
||||
|
||||
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
|
||||
f"Accuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
|
||||
)
|
||||
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
|
||||
f"F1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
|
||||
)
|
||||
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
|
||||
f"Accuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
|
||||
)
|
||||
assert baseline_trained["f1"] == accelerator_trained["f1"], (
|
||||
f"F1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
|
||||
)
|
||||
|
||||
torch.distributed.destroy_process_group()
|
||||
145
benchmarks/fp8/torchao/non_distributed.py
Normal file
@ -0,0 +1,145 @@
|
||||
# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
"""
|
||||
This script tests to ensure that `accelerate` performs at the same level as raw `torchao`.
|
||||
|
||||
This particular script verifies this for single GPU training.
|
||||
"""
|
||||
|
||||
from functools import partial
|
||||
|
||||
import evaluate
|
||||
import torch
|
||||
from fp8_utils import get_training_utilities
|
||||
from torchao.float8 import convert_to_float8_training
|
||||
|
||||
from accelerate import Accelerator
|
||||
from accelerate.state import AcceleratorState
|
||||
from accelerate.utils import AORecipeKwargs, set_seed
|
||||
|
||||
|
||||
MODEL_NAME = "bert-base-cased"
|
||||
METRIC = evaluate.load("glue", "mrpc")
|
||||
|
||||
|
||||
def evaluate_model(model, dataloader, metric, accelerator=None):
|
||||
"Turns model to .eval(), runs dataloader, calculates metric, then turns eval back on"
|
||||
model.eval()
|
||||
for step, batch in enumerate(dataloader):
|
||||
with torch.no_grad():
|
||||
outputs = model(**batch)
|
||||
predictions = outputs.logits.argmax(dim=-1)
|
||||
references = batch["labels"]
|
||||
if accelerator is not None and accelerator.num_processes > 1:
|
||||
predictions, references = accelerator.gather_for_metrics((predictions, references))
|
||||
metric.add_batch(predictions=predictions, references=references)
|
||||
return metric.compute()
|
||||
|
||||
|
||||
def filter_linear_layers(module, fqn, first_layer_name=None, last_layer_name=None):
|
||||
if isinstance(module, torch.nn.Linear):
|
||||
if module.in_features % 16 != 0 or module.out_features % 16 != 0:
|
||||
return False
|
||||
# For stability reasons, we skip the first and last linear layers
|
||||
# Otherwise can lead to the model not training or converging properly
|
||||
if fqn in (first_layer_name, last_layer_name):
|
||||
return False
|
||||
return True
|
||||
|
||||
|
||||
def train_baseline():
|
||||
set_seed(42)
|
||||
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(MODEL_NAME)
|
||||
first_linear = None
|
||||
last_linear = None
|
||||
for name, module in model.named_modules():
|
||||
if isinstance(module, torch.nn.Linear):
|
||||
if first_linear is None:
|
||||
first_linear = name
|
||||
last_linear = name
|
||||
|
||||
func = partial(filter_linear_layers, first_layer_name=first_linear, last_layer_name=last_linear)
|
||||
model.to("cuda")
|
||||
convert_to_float8_training(model, module_filter_fn=func)
|
||||
base_model_results = evaluate_model(model, eval_dataloader, METRIC)
|
||||
model.train()
|
||||
|
||||
for batch in train_dataloader:
|
||||
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
|
||||
outputs = model(**batch)
|
||||
loss = outputs.loss
|
||||
loss.backward()
|
||||
optimizer.step()
|
||||
optimizer.zero_grad()
|
||||
lr_scheduler.step()
|
||||
|
||||
trained_model_results = evaluate_model(model, eval_dataloader, METRIC)
|
||||
|
||||
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
|
||||
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
|
||||
)
|
||||
assert trained_model_results["f1"] > base_model_results["f1"], (
|
||||
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
|
||||
)
|
||||
|
||||
return base_model_results, trained_model_results
|
||||
|
||||
|
||||
def train_integration():
|
||||
set_seed(42)
|
||||
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=[AORecipeKwargs()])
|
||||
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = get_training_utilities(
|
||||
MODEL_NAME, accelerator=accelerator
|
||||
)
|
||||
model = accelerator.prepare(model)
|
||||
base_model_results = evaluate_model(model, eval_dataloader, METRIC)
|
||||
model.train()
|
||||
|
||||
for batch in train_dataloader:
|
||||
outputs = model(**batch)
|
||||
loss = outputs.loss
|
||||
loss.backward()
|
||||
optimizer.step()
|
||||
optimizer.zero_grad()
|
||||
lr_scheduler.step()
|
||||
|
||||
trained_model_results = evaluate_model(model, eval_dataloader, METRIC)
|
||||
|
||||
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
|
||||
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
|
||||
)
|
||||
assert trained_model_results["f1"] > base_model_results["f1"], (
|
||||
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
|
||||
)
|
||||
|
||||
return base_model_results, trained_model_results
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
baseline_not_trained, baseline_trained = train_baseline()
|
||||
AcceleratorState._reset_state(True)
|
||||
accelerator_not_trained, accelerator_trained = train_integration()
|
||||
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
|
||||
f"Accuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
|
||||
)
|
||||
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
|
||||
f"F1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
|
||||
)
|
||||
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
|
||||
f"Accuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
|
||||
)
|
||||
assert baseline_trained["f1"] == accelerator_trained["f1"], (
|
||||
f"F1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
|
||||
)
|
||||
15
benchmarks/fp8/transformer_engine/Dockerfile
Normal file
@ -0,0 +1,15 @@
|
||||
ARG BASE_YEAR=25
|
||||
ARG BASE_MONTH=03
|
||||
|
||||
FROM nvcr.io/nvidia/pytorch:${BASE_YEAR}.${BASE_MONTH}-py3
|
||||
|
||||
RUN pip install transformers evaluate datasets
|
||||
RUN git clone https://github.com/huggingface/accelerate.git
|
||||
|
||||
RUN cd accelerate && \
|
||||
pip install -e .[deepspeed] && \
|
||||
cd benchmarks/fp8
|
||||
|
||||
RUN /bin/bash
|
||||
|
||||
|
||||
@ -15,6 +15,8 @@ To run them, it's recommended to use a docker image (see the attached `Dockerfil
|
||||
|
||||
## Running:
|
||||
|
||||
There are official Docker images located at `huggingface/accelerate:gpu-fp8-transformerengine-nightly` which can be used.
|
||||
|
||||
You can run all scripts using the core `accelerate launch` command without any `accelerate config` being needed.
|
||||
|
||||
For single GPU, run it via `python`:
|
||||
@ -17,6 +17,7 @@ This script tests to ensure that `accelerate` performs at the same level as raw
|
||||
|
||||
This particular script verifies this for DDP training.
|
||||
"""
|
||||
|
||||
import evaluate
|
||||
import torch
|
||||
import transformer_engine.common.recipe as te_recipe
|
||||
@ -78,12 +79,12 @@ def train_baseline():
|
||||
|
||||
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
|
||||
|
||||
assert (
|
||||
trained_model_results["accuracy"] > base_model_results["accuracy"]
|
||||
), f'Accuracy should be higher for the trained model: {trained_model_results["accuracy"]} > {base_model_results["accuracy"]}'
|
||||
assert (
|
||||
trained_model_results["f1"] > base_model_results["f1"]
|
||||
), f'F1 score should be higher for the trained model: {trained_model_results["f1"]} > {base_model_results["f1"]}'
|
||||
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
|
||||
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
|
||||
)
|
||||
assert trained_model_results["f1"] > base_model_results["f1"], (
|
||||
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
|
||||
)
|
||||
|
||||
return base_model_results, trained_model_results
|
||||
|
||||
@ -113,12 +114,12 @@ def train_integration():
|
||||
|
||||
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
|
||||
|
||||
assert (
|
||||
trained_model_results["accuracy"] > base_model_results["accuracy"]
|
||||
), f'Accuracy should be higher for the trained model: {trained_model_results["accuracy"]} > {base_model_results["accuracy"]}'
|
||||
assert (
|
||||
trained_model_results["f1"] > base_model_results["f1"]
|
||||
), f'F1 score should be higher for the trained model: {trained_model_results["f1"]} > {base_model_results["f1"]}'
|
||||
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
|
||||
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
|
||||
)
|
||||
assert trained_model_results["f1"] > base_model_results["f1"], (
|
||||
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
|
||||
)
|
||||
|
||||
return base_model_results, trained_model_results
|
||||
|
||||
@ -127,17 +128,17 @@ if __name__ == "__main__":
|
||||
baseline_not_trained, baseline_trained = train_baseline()
|
||||
accelerator_not_trained, accelerator_trained = train_integration()
|
||||
|
||||
assert (
|
||||
baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"]
|
||||
), f'Accuracy should be the same for the baseline and accelerator: {baseline_not_trained["accuracy"]} == {accelerator_not_trained["accuracy"]}'
|
||||
assert (
|
||||
baseline_not_trained["f1"] == accelerator_not_trained["f1"]
|
||||
), f'F1 score should be the same for the baseline and accelerator: {baseline_not_trained["f1"]} == {accelerator_not_trained["f1"]}'
|
||||
assert (
|
||||
baseline_trained["accuracy"] == accelerator_trained["accuracy"]
|
||||
), f'Accuracy should be the same for the baseline and accelerator: {baseline_trained["accuracy"]} == {accelerator_trained["accuracy"]}'
|
||||
assert (
|
||||
baseline_trained["f1"] == accelerator_trained["f1"]
|
||||
), f'F1 score should be the same for the baseline and accelerator: {baseline_trained["f1"]} == {accelerator_trained["f1"]}'
|
||||
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
|
||||
f"Accuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
|
||||
)
|
||||
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
|
||||
f"F1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
|
||||
)
|
||||
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
|
||||
f"Accuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
|
||||
)
|
||||
assert baseline_trained["f1"] == accelerator_trained["f1"], (
|
||||
f"F1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
|
||||
)
|
||||
|
||||
torch.distributed.destroy_process_group()
|
||||
@ -17,6 +17,7 @@ This script tests to ensure that `accelerate` performs at the same level as raw
|
||||
|
||||
This particular script verifies this for DDP training.
|
||||
"""
|
||||
|
||||
from unittest.mock import patch
|
||||
|
||||
import deepspeed
|
||||
@ -65,7 +66,7 @@ def train_baseline(zero_stage: int = 1):
|
||||
import numpy as np
|
||||
|
||||
config = {
|
||||
"train_batch_size": 32,
|
||||
"train_batch_size": 16,
|
||||
"train_micro_batch_size_per_gpu": 16,
|
||||
"gradient_accumulation_steps": 1,
|
||||
"zero_optimization": {
|
||||
@ -112,12 +113,12 @@ def train_baseline(zero_stage: int = 1):
|
||||
|
||||
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
|
||||
model.destroy()
|
||||
assert (
|
||||
trained_model_results["accuracy"] > base_model_results["accuracy"]
|
||||
), f'Accuracy should be higher for the trained model: {trained_model_results["accuracy"]} > {base_model_results["accuracy"]}'
|
||||
assert (
|
||||
trained_model_results["f1"] > base_model_results["f1"]
|
||||
), f'F1 score should be higher for the trained model: {trained_model_results["f1"]} > {base_model_results["f1"]}'
|
||||
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
|
||||
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
|
||||
)
|
||||
assert trained_model_results["f1"] > base_model_results["f1"], (
|
||||
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
|
||||
)
|
||||
|
||||
return base_model_results, trained_model_results, model_outputs, data
|
||||
|
||||
@ -158,32 +159,33 @@ def train_integration(zero_stage: int = 1):
|
||||
|
||||
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
|
||||
model.destroy()
|
||||
assert (
|
||||
trained_model_results["accuracy"] > base_model_results["accuracy"]
|
||||
), f'Accuracy should be higher for the trained model: {trained_model_results["accuracy"]} > {base_model_results["accuracy"]}'
|
||||
assert (
|
||||
trained_model_results["f1"] > base_model_results["f1"]
|
||||
), f'F1 score should be higher for the trained model: {trained_model_results["f1"]} > {base_model_results["f1"]}'
|
||||
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
|
||||
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
|
||||
)
|
||||
assert trained_model_results["f1"] > base_model_results["f1"], (
|
||||
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
|
||||
)
|
||||
|
||||
return base_model_results, trained_model_results, model_outputs, data
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
# for zero_stage in [1, 2, 3]:
|
||||
zero_stage = 1
|
||||
baseline_not_trained, baseline_trained, baseline_outputs, baseline_data = train_baseline(zero_stage)
|
||||
accelerator_not_trained, accelerator_trained, accelerator_outputs, accelerator_data = train_integration(zero_stage)
|
||||
assert (
|
||||
baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"]
|
||||
), f'ZERO stage {zero_stage}: Accuracy should be the same for the baseline and accelerator: {baseline_not_trained["accuracy"]} == {accelerator_not_trained["accuracy"]}'
|
||||
assert (
|
||||
baseline_not_trained["f1"] == accelerator_not_trained["f1"]
|
||||
), f'ZERO stage {zero_stage}: F1 score should be the same for the baseline and accelerator: {baseline_not_trained["f1"]} == {accelerator_not_trained["f1"]}'
|
||||
assert (
|
||||
baseline_trained["accuracy"] == accelerator_trained["accuracy"]
|
||||
), f'ZERO stage {zero_stage}: Accuracy should be the same for the baseline and accelerator: {baseline_trained["accuracy"]} == {accelerator_trained["accuracy"]}'
|
||||
assert (
|
||||
baseline_trained["f1"] == accelerator_trained["f1"]
|
||||
), f'ZERO stage {zero_stage}: F1 score should be the same for the baseline and accelerator: {baseline_trained["f1"]} == {accelerator_trained["f1"]}'
|
||||
for zero_stage in [1, 2, 3]:
|
||||
baseline_not_trained, baseline_trained, baseline_outputs, baseline_data = train_baseline(zero_stage)
|
||||
accelerator_not_trained, accelerator_trained, accelerator_outputs, accelerator_data = train_integration(
|
||||
zero_stage
|
||||
)
|
||||
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
|
||||
f"ZERO stage {zero_stage}: Accuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
|
||||
)
|
||||
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
|
||||
f"ZERO stage {zero_stage}: F1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
|
||||
)
|
||||
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
|
||||
f"ZERO stage {zero_stage}: Accuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
|
||||
)
|
||||
assert baseline_trained["f1"] == accelerator_trained["f1"], (
|
||||
f"ZERO stage {zero_stage}: F1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
|
||||
)
|
||||
|
||||
torch.distributed.destroy_process_group()
|
||||
torch.distributed.destroy_process_group()
|
||||
@ -109,7 +109,8 @@ def evaluate_model(model, dataloader, metric, accelerator=None):
|
||||
with torch.no_grad():
|
||||
outputs = model(**batch)
|
||||
predictions = outputs.logits.argmax(dim=-1)
|
||||
references = batch["labels"]
|
||||
if accelerator is not None and accelerator.num_processes > 1:
|
||||
predictions, references = accelerator.gather_for_metrics((predictions, batch["labels"]))
|
||||
predictions, references = accelerator.gather_for_metrics((predictions, references))
|
||||
metric.add_batch(predictions=predictions, references=references)
|
||||
return metric.compute()
|
||||
@ -17,6 +17,7 @@ This script tests to ensure that `accelerate` performs at the same level as raw
|
||||
|
||||
This particular script verifies this for FSDP training.
|
||||
"""
|
||||
|
||||
from functools import partial
|
||||
|
||||
import evaluate
|
||||
@ -90,12 +91,12 @@ def train_baseline():
|
||||
|
||||
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
|
||||
|
||||
assert (
|
||||
trained_model_results["accuracy"] > base_model_results["accuracy"]
|
||||
), f'Accuracy should be higher for the trained model: {trained_model_results["accuracy"]} > {base_model_results["accuracy"]}'
|
||||
assert (
|
||||
trained_model_results["f1"] > base_model_results["f1"]
|
||||
), f'F1 score should be higher for the trained model: {trained_model_results["f1"]} > {base_model_results["f1"]}'
|
||||
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
|
||||
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
|
||||
)
|
||||
assert trained_model_results["f1"] > base_model_results["f1"], (
|
||||
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
|
||||
)
|
||||
|
||||
return base_model_results, trained_model_results
|
||||
|
||||
@ -130,12 +131,12 @@ def train_integration():
|
||||
|
||||
trained_model_results = evaluate_model(model, eval_dataloader, METRIC, accelerator=accelerator)
|
||||
|
||||
assert (
|
||||
trained_model_results["accuracy"] > base_model_results["accuracy"]
|
||||
), f'Accuracy should be higher for the trained model: {trained_model_results["accuracy"]} > {base_model_results["accuracy"]}'
|
||||
assert (
|
||||
trained_model_results["f1"] > base_model_results["f1"]
|
||||
), f'F1 score should be higher for the trained model: {trained_model_results["f1"]} > {base_model_results["f1"]}'
|
||||
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
|
||||
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
|
||||
)
|
||||
assert trained_model_results["f1"] > base_model_results["f1"], (
|
||||
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
|
||||
)
|
||||
|
||||
return base_model_results, trained_model_results
|
||||
|
||||
@ -144,17 +145,17 @@ if __name__ == "__main__":
|
||||
baseline_not_trained, baseline_trained = train_baseline()
|
||||
accelerator_not_trained, accelerator_trained = train_integration()
|
||||
|
||||
assert (
|
||||
baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"]
|
||||
), f'Accuracy should be the same for the baseline and accelerator: {baseline_not_trained["accuracy"]} == {accelerator_not_trained["accuracy"]}'
|
||||
assert (
|
||||
baseline_not_trained["f1"] == accelerator_not_trained["f1"]
|
||||
), f'F1 score should be the same for the baseline and accelerator: {baseline_not_trained["f1"]} == {accelerator_not_trained["f1"]}'
|
||||
assert (
|
||||
baseline_trained["accuracy"] == accelerator_trained["accuracy"]
|
||||
), f'Accuracy should be the same for the baseline and accelerator: {baseline_trained["accuracy"]} == {accelerator_trained["accuracy"]}'
|
||||
assert (
|
||||
baseline_trained["f1"] == accelerator_trained["f1"]
|
||||
), f'F1 score should be the same for the baseline and accelerator: {baseline_trained["f1"]} == {accelerator_trained["f1"]}'
|
||||
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
|
||||
f"Accuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
|
||||
)
|
||||
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
|
||||
f"F1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
|
||||
)
|
||||
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
|
||||
f"Accuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
|
||||
)
|
||||
assert baseline_trained["f1"] == accelerator_trained["f1"], (
|
||||
f"F1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
|
||||
)
|
||||
|
||||
torch.distributed.destroy_process_group()
|
||||
@ -17,6 +17,7 @@ This script tests to ensure that `accelerate` performs at the same level as raw
|
||||
|
||||
This particular script verifies this for single GPU training.
|
||||
"""
|
||||
|
||||
import evaluate
|
||||
import torch
|
||||
import transformer_engine.common.recipe as te_recipe
|
||||
@ -69,12 +70,12 @@ def train_baseline():
|
||||
|
||||
trained_model_results = evaluate_model(model, eval_dataloader, METRIC)
|
||||
|
||||
assert (
|
||||
trained_model_results["accuracy"] > base_model_results["accuracy"]
|
||||
), f'Accuracy should be higher for the trained model: {trained_model_results["accuracy"]} > {base_model_results["accuracy"]}'
|
||||
assert (
|
||||
trained_model_results["f1"] > base_model_results["f1"]
|
||||
), f'F1 score should be higher for the trained model: {trained_model_results["f1"]} > {base_model_results["f1"]}'
|
||||
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
|
||||
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
|
||||
)
|
||||
assert trained_model_results["f1"] > base_model_results["f1"], (
|
||||
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
|
||||
)
|
||||
|
||||
return base_model_results, trained_model_results
|
||||
|
||||
@ -103,12 +104,12 @@ def train_integration():
|
||||
|
||||
trained_model_results = evaluate_model(model, eval_dataloader, METRIC)
|
||||
|
||||
assert (
|
||||
trained_model_results["accuracy"] > base_model_results["accuracy"]
|
||||
), f'Accuracy should be higher for the trained model: {trained_model_results["accuracy"]} > {base_model_results["accuracy"]}'
|
||||
assert (
|
||||
trained_model_results["f1"] > base_model_results["f1"]
|
||||
), f'F1 score should be higher for the trained model: {trained_model_results["f1"]} > {base_model_results["f1"]}'
|
||||
assert trained_model_results["accuracy"] > base_model_results["accuracy"], (
|
||||
f"Accuracy should be higher for the trained model: {trained_model_results['accuracy']} > {base_model_results['accuracy']}"
|
||||
)
|
||||
assert trained_model_results["f1"] > base_model_results["f1"], (
|
||||
f"F1 score should be higher for the trained model: {trained_model_results['f1']} > {base_model_results['f1']}"
|
||||
)
|
||||
|
||||
return base_model_results, trained_model_results
|
||||
|
||||
@ -117,15 +118,15 @@ if __name__ == "__main__":
|
||||
baseline_not_trained, baseline_trained = train_baseline()
|
||||
accelerator_not_trained, accelerator_trained = train_integration()
|
||||
|
||||
assert (
|
||||
baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"]
|
||||
), f'Accuracy should be the same for the baseline and accelerator: {baseline_not_trained["accuracy"]} == {accelerator_not_trained["accuracy"]}'
|
||||
assert (
|
||||
baseline_not_trained["f1"] == accelerator_not_trained["f1"]
|
||||
), f'F1 score should be the same for the baseline and accelerator: {baseline_not_trained["f1"]} == {accelerator_not_trained["f1"]}'
|
||||
assert (
|
||||
baseline_trained["accuracy"] == accelerator_trained["accuracy"]
|
||||
), f'Accuracy should be the same for the baseline and accelerator: {baseline_trained["accuracy"]} == {accelerator_trained["accuracy"]}'
|
||||
assert (
|
||||
baseline_trained["f1"] == accelerator_trained["f1"]
|
||||
), f'F1 score should be the same for the baseline and accelerator: {baseline_trained["f1"]} == {accelerator_trained["f1"]}'
|
||||
assert baseline_not_trained["accuracy"] == accelerator_not_trained["accuracy"], (
|
||||
f"Accuracy should be the same for the baseline and accelerator: {baseline_not_trained['accuracy']} == {accelerator_not_trained['accuracy']}"
|
||||
)
|
||||
assert baseline_not_trained["f1"] == accelerator_not_trained["f1"], (
|
||||
f"F1 score should be the same for the baseline and accelerator: {baseline_not_trained['f1']} == {accelerator_not_trained['f1']}"
|
||||
)
|
||||
assert baseline_trained["accuracy"] == accelerator_trained["accuracy"], (
|
||||
f"Accuracy should be the same for the baseline and accelerator: {baseline_trained['accuracy']} == {accelerator_trained['accuracy']}"
|
||||
)
|
||||
assert baseline_trained["f1"] == accelerator_trained["f1"], (
|
||||
f"F1 score should be the same for the baseline and accelerator: {baseline_trained['f1']} == {accelerator_trained['f1']}"
|
||||
)
|
||||
74
benchmarks/fsdp2/README.md
Normal file
@ -0,0 +1,74 @@
|
||||
# FSDP2 Benchmarks
|
||||
|
||||
This benchmark showcases `FSDP2` in 🤗 `accelerate` and compares it to a raw `torch` baseline.
|
||||
|
||||
## Overview
|
||||
|
||||
This benchmark consists of two parts:
|
||||
- `main.py` is the main script that runs the benchmark
|
||||
- `visualize.py` is the script that visualizes the results (if `--output_dir` was specified for the previous command)
|
||||
|
||||
## Motivation
|
||||
|
||||
We want to showcase that 🤗 `accelerate`'s integration of `FSDP2` is on par with raw PyTorch, and highlight a "broken" part of PyTorch: creating an optimizer before applying `FSDP2` **doesn't result in a working training loop** (more on this later).
|
||||
This script showcases **matching memory usage and convergence between `accelerate` and `torch`'s baseline.**
|
||||
To deal with this breaking change (and keep the API backward compatible with FSDP1), `accelerate` had to come up with a workaround, since `accelerate` assumes that the user will nearly always create the model, optimizer, scheduler, etc. beforehand and bring them themselves. Without a workaround, creating the optimizer beforehand led to a stark increase in memory and to the model not training at all.
|
||||
To work around this, we replace the parameters inside the optimizer with the newly created FSDP2 sharded ones. More about this can be found in this [blog post (TBD)](TODO).
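The gist of that swap, as a minimal sketch (it assumes parameter names are unchanged by sharding and that the optimizer has no accumulated state yet; the helper below is illustrative, not `accelerate`'s actual implementation):

```python
import torch


def swap_optimizer_params(model: torch.nn.Module, optimizer: torch.optim.Optimizer, old_named_params: dict):
    """Point a pre-created optimizer at the parameters that replaced the originals after sharding."""
    new_named_params = dict(model.named_parameters())
    # Match old and new parameters by name, then swap them by object identity inside each param group
    old_to_new = {old_named_params[name]: new_named_params[name] for name in old_named_params}
    for group in optimizer.param_groups:
        group["params"] = [old_to_new.get(p, p) for p in group["params"]]


# Usage: capture old_params = dict(model.named_parameters()) before applying FSDP2,
# apply fully_shard(...), then call swap_optimizer_params(model, optimizer, old_params)
# before the first optimizer.step().
```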
|
||||
> [!WARNING]
|
||||
> This script is intended to fit on 2x 24GB GPUs, though with so few GPUs the memory difference is not visible (discrepancies in gradient allocation result in lower memory usage in the non-fixed case), only the difference in convergence is. Results from 8x H100 GPUs, where the memory difference is visible, are attached below.
|
||||
> TLDR: more GPUs = bigger memory difference between fixed and non-fixed cases.
|
||||
|
||||
## Results
|
||||
|
||||
Here are the results from running the benchmark on 8x H100 GPUs:
|
||||
|
||||
<p align="center">
|
||||
<img src="imgs/allocated_memory.png" width="80%" alt="Allocated Memory Usage">
|
||||
</p>
|
||||
<p align="center">
|
||||
<img src="imgs/reserved_memory.png" width="80%" alt="Reserved Memory Usage">
|
||||
</p>
|
||||
|
||||
As you can see, the memory usage of `accelerate` and `torch_post_shard` (the **intended** way) are very similar, while `torch_pre_shard_not_fixed` uses significantly more memory. Our fix in `torch_pre_shard_fixed` brings the memory usage back in line with the **intended** approach.
|
||||
|
||||
> [!WARNING]
|
||||
> Timing discrepancies are due to all benchmarks being run in a single script.
|
||||
|
||||
|
||||
## Running
|
||||
|
||||
To run the benchmark, you can either use `accelerate launch` or `torchrun`:
|
||||
```bash
|
||||
accelerate launch main.py
|
||||
```
|
||||
```bash
|
||||
# For two GPUs
|
||||
torchrun --nproc_per_node 2 main.py
|
||||
```
|
||||
|
||||
The script supports multiple configurable options; you can learn about them by running:
|
||||
```bash
|
||||
python3 main.py --help
|
||||
```
|
||||
|
||||
This script will run 4 different benchmarks:
|
||||
- `torch_optimizer_after_fsdp`: `torch` baseline where optimizer is created after applying `FSDP2`, this is the **intended** way to do it
|
||||
- `torch_optimizer_before_fsdp_not_fixed`: `torch` baseline where optimizer is created before applying `FSDP2` without fixing the optimizer parameters
|
||||
- `torch_optimizer_before_fsdp_fixed`: `torch` baseline where optimizer is created before applying `FSDP2` with our fix to the optimizer
|
||||
- `accelerate`: `accelerate`'s own integration of `FSDP2` where optimizer is created before applying `FSDP2`, but we apply our fix to the optimizer
|
||||
|
||||
Memory results are saved in a folder specified by `--output_dir` argument.
|
||||
Optionally, you can specify `--save_memory_snapshot` to save the torch memory snapshot, which can then be viewed using [`torch memory viz`](https://pytorch.org/memory_viz)
|
||||
|
||||
## Visualizing results
|
||||
|
||||
To visualize the results, you can run:
|
||||
|
||||
```bash
|
||||
python3 visualize.py --dir <path_to_output_dir>
|
||||
```
|
||||
|
||||
This will then create two plots, showcasing allocated and reserved memory usage between all the different benchmarks discussed above.
|
||||
|
||||
|
||||
|
||||
BIN  benchmarks/fsdp2/imgs/allocated_memory.png  Normal file  (binary file not shown; After Width: | Height: | Size: 124 KiB)
BIN  benchmarks/fsdp2/imgs/reserved_memory.png  Normal file  (binary file not shown; After Width: | Height: | Size: 56 KiB)
122  benchmarks/fsdp2/main.py  Normal file
@ -0,0 +1,122 @@
# Copyright 2025 The HuggingFace Team. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
import functools
|
||||
from typing import Callable
|
||||
|
||||
import torch
|
||||
|
||||
from accelerate import Accelerator
|
||||
from utils import parse_args, prepare_accelerate, prepare_torch
|
||||
|
||||
|
||||
MODEL_NAME = "Qwen/Qwen2.5-1.5B-Instruct"
|
||||
LEARNING_RATE = 3e-5
|
||||
|
||||
CONFIG = {
|
||||
"model_name": MODEL_NAME,
|
||||
"learning_rate": LEARNING_RATE,
|
||||
}
|
||||
|
||||
|
||||
def train(
|
||||
model: torch.nn.Module,
|
||||
optimizer: torch.optim.Optimizer,
|
||||
train_dataloader: torch.utils.data.DataLoader,
|
||||
accelerator: Accelerator,
|
||||
) -> torch.Tensor:
|
||||
losses = []
|
||||
for batch in train_dataloader:
|
||||
optimizer.zero_grad()
|
||||
outputs = model(**batch, use_cache=False)
|
||||
|
||||
loss = outputs.loss
|
||||
losses.append(loss.item())
|
||||
accelerator.backward(loss)
|
||||
optimizer.step()
|
||||
|
||||
return torch.tensor(losses)
|
||||
|
||||
|
||||
def evaluate(args, config: dict, init_fn: Callable, run_name: str) -> torch.Tensor:
|
||||
model, optimizer, dataloader, accelerator, memory_tracker = init_fn(args, config)
|
||||
|
||||
loss = train(model, optimizer, dataloader, accelerator)
|
||||
|
||||
memory_tracker.stop()
|
||||
msg = f"""Results for {run_name} (rank 0):
|
||||
Loss: {loss[-1].item()}
|
||||
Peak Allocated Memory: {float(memory_tracker.peak_allocated_memory):.2f} MB
|
||||
Peak Reserved Memory: {float(memory_tracker.peak_reserved_memory):.2f} MB
|
||||
{"-" * 34}"""
|
||||
accelerator.print(msg)
|
||||
return loss
|
||||
|
||||
|
||||
def main():
|
||||
args = parse_args()
|
||||
evaluations = [
|
||||
functools.partial(
|
||||
evaluate,
|
||||
init_fn=functools.partial(prepare_torch, post_shard_optimizer=False, apply_optimizer_fix=True),
|
||||
run_name="Optimizer Before FSDP (w/ fix)",
|
||||
),
|
||||
functools.partial(
|
||||
evaluate,
|
||||
init_fn=functools.partial(prepare_torch, post_shard_optimizer=False, apply_optimizer_fix=False),
|
||||
run_name="Optimizer Before FSDP (w/o fix)",
|
||||
),
|
||||
functools.partial(
|
||||
evaluate,
|
||||
init_fn=functools.partial(prepare_torch, post_shard_optimizer=True),
|
||||
run_name="Optimizer After FSDP",
|
||||
),
|
||||
functools.partial(evaluate, init_fn=prepare_accelerate, run_name="Accelerate"),
|
||||
]
|
||||
labels = [
|
||||
"Optimizer Before FSDP (w/ fix)",
|
||||
"Optimizer Before FSDP (w/o fix)",
|
||||
"Optimizer After FSDP",
|
||||
"Accelerate",
|
||||
]
|
||||
|
||||
results = {}
|
||||
torch.use_deterministic_algorithms(True)
|
||||
|
||||
for evaluation, label in zip(evaluations, labels):
|
||||
results[label] = evaluation(args, CONFIG)
|
||||
|
||||
torch.testing.assert_close(
|
||||
results["Optimizer After FSDP"],
|
||||
results["Optimizer Before FSDP (w/ fix)"],
|
||||
msg="Optimizer After FSDP and Optimizer Before FSDP (w/ fix) should be the same",
|
||||
)
|
||||
|
||||
torch.testing.assert_close(
|
||||
results["Optimizer After FSDP"],
|
||||
results["Accelerate"],
|
||||
msg="Optimizer After FSDP and Accelerate should be the same",
|
||||
)
|
||||
|
||||
torch.testing.assert_close(
|
||||
results["Accelerate"],
|
||||
results["Optimizer Before FSDP (w/ fix)"],
|
||||
msg="Accelerate and Optimizer Before FSDP (w/ fix) should be the same",
|
||||
)
|
||||
|
||||
torch.distributed.destroy_process_group()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
130  benchmarks/fsdp2/measure_utils.py  Normal file
@ -0,0 +1,130 @@
# Copyright 2025 The HuggingFace Team. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
import gc
|
||||
import json
|
||||
import os
|
||||
import threading
|
||||
import time
|
||||
|
||||
import psutil
|
||||
import torch
|
||||
|
||||
from accelerate import PartialState
|
||||
|
||||
|
||||
class MemoryTracker:
|
||||
def __init__(
|
||||
self,
|
||||
device: torch.device,
|
||||
output_directory: str,
|
||||
run_name: str,
|
||||
save_memory_snapshot: bool,
|
||||
log_interval: float = 0.01,
|
||||
):
|
||||
"""Class for tracking gpu and cpu memory usage of the process.
|
||||
|
||||
Args:
|
||||
device (`torch.device`):
|
||||
PyTorch device to monitor.
|
||||
output_directory (`str`):
|
||||
Directory to save the memory usage data to, will be created if it doesn't exist.
|
||||
run_name (`str`):
|
||||
Name of the run, will be used to name the output files.
|
||||
save_memory_snapshot (`bool`):
|
||||
Whether to also save `torch.cuda.memory._dump_snapshot` to the output directory.
|
||||
log_interval (`float`, *optional*):
|
||||
Interval in seconds between memory measurements. Defaults to 0.01.
|
||||
"""
|
||||
self.log_interval = log_interval
|
||||
self.save_memory_snapshot = save_memory_snapshot
|
||||
self.output_directory = output_directory
|
||||
self.run_name = run_name
|
||||
|
||||
self.timestamps = []
|
||||
self.allocated_memory = []
|
||||
self.reserved_memory = []
|
||||
self.virtual_memory = []
|
||||
|
||||
self.start_time = None
|
||||
self.running = False
|
||||
|
||||
self._thread = None
|
||||
self._state = PartialState()
|
||||
self._process = psutil.Process()
|
||||
self._device = device
|
||||
self.torch_accelerator_module = getattr(torch, device.type, torch.cuda)
|
||||
|
||||
def _monitor(self):
|
||||
self.start_time = time.time()
|
||||
|
||||
while self.running:
|
||||
allocated = self.torch_accelerator_module.memory_allocated(self._device) / (1024 * 1024)
|
||||
reserved = self.torch_accelerator_module.memory_reserved(self._device) / (1024 * 1024)
|
||||
virtual_memory = self._process.memory_info().rss / (1024 * 1024)
|
||||
|
||||
self.allocated_memory.append(allocated)
|
||||
self.reserved_memory.append(reserved)
|
||||
self.virtual_memory.append(virtual_memory)
|
||||
self.timestamps.append(time.time() - self.start_time)
|
||||
|
||||
time.sleep(self.log_interval)
|
||||
|
||||
def start(self):
|
||||
gc.collect()
|
||||
self.torch_accelerator_module.empty_cache()
|
||||
|
||||
if self.output_directory:
|
||||
os.makedirs(self.output_directory, exist_ok=True)
|
||||
|
||||
if self.save_memory_snapshot:
|
||||
self.torch_accelerator_module.memory._record_memory_history()
|
||||
|
||||
self.running = True
|
||||
self._thread = threading.Thread(target=self._monitor)
|
||||
self._thread.daemon = True
|
||||
self._thread.start()
|
||||
|
||||
def stop(self):
|
||||
self.running = False
|
||||
if self._thread:
|
||||
self._thread.join()
|
||||
|
||||
if self.save_memory_snapshot and self._state.is_main_process and self.output_directory:
|
||||
output_file = os.path.join(self.output_directory, f"{self.run_name}_memory_snapshot.pkl")
|
||||
self.torch_accelerator_module.memory._dump_snapshot(output_file)
|
||||
|
||||
if self._state.is_main_process and self.output_directory:
|
||||
path = os.path.join(self.output_directory, f"{self.run_name}_memory_usage.json")
|
||||
with open(path, "w") as f:
|
||||
json.dump(
|
||||
{
|
||||
"timestamps": self.timestamps,
|
||||
"allocated_memory": self.allocated_memory,
|
||||
"reserved_memory": self.reserved_memory,
|
||||
"virtual_memory": self.virtual_memory,
|
||||
},
|
||||
f,
|
||||
)
|
||||
if self.save_memory_snapshot:
|
||||
self.torch_accelerator_module.memory._record_memory_history(False)
|
||||
self.torch_accelerator_module.empty_cache()
|
||||
|
||||
@property
|
||||
def peak_allocated_memory(self):
|
||||
return max(self.allocated_memory)
|
||||
|
||||
@property
|
||||
def peak_reserved_memory(self):
|
||||
return max(self.reserved_memory)
|
||||
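# Example usage of `MemoryTracker` (illustrative only, not part of the benchmark):
#   tracker = MemoryTracker(
#       device=torch.device("cuda:0"),
#       output_directory="results",
#       run_name="demo",
#       save_memory_snapshot=False,
#   )
#   tracker.start()
#   ...  # run the workload to profile
#   tracker.stop()
#   print(tracker.peak_allocated_memory, tracker.peak_reserved_memory)  # peak values in MB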
290  benchmarks/fsdp2/utils.py  Normal file
@ -0,0 +1,290 @@
# Copyright 2025 The HuggingFace Team. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
import argparse
|
||||
from types import MethodType
|
||||
from typing import Union
|
||||
|
||||
import torch
|
||||
from datasets import load_dataset
|
||||
from measure_utils import MemoryTracker
|
||||
from torch.distributed.fsdp import MixedPrecisionPolicy, fully_shard
|
||||
from torch.optim import AdamW
|
||||
from torch.utils.data import DataLoader
|
||||
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer, DataCollatorForLanguageModeling
|
||||
from transformers.models.qwen2.modeling_qwen2 import Qwen2DecoderLayer
|
||||
|
||||
from accelerate import Accelerator, FullyShardedDataParallelPlugin
|
||||
from accelerate.state import AcceleratorState, is_initialized
|
||||
from accelerate.utils import convert_outputs_to_fp32, set_seed
|
||||
|
||||
|
||||
SEED = 421
|
||||
|
||||
|
||||
def get_named_parameters(model: torch.nn.Module, drop_refs: bool = False) -> dict[str, Union[torch.Tensor, int]]:
|
||||
"""
|
||||
This function returns a dictionary mapping the parameter names to their data pointers or
|
||||
the original parameters if `drop_refs` is `False`.
|
||||
It is used to get the original parameter names before `fully_shard` is applied.
|
||||
|
||||
We only return the data pointers, so we drop the references to the original parameters
|
||||
and `fully_shard` will then trigger a new allocation for the sharded ones.
|
||||
|
||||
Args:
|
||||
model (`torch.nn.Module`): Model instance to get the named parameters from
|
||||
drop_refs (`bool`, *optional*, defaults to `False`): Whether to drop the references to the original parameters
|
||||
|
||||
Returns:
|
||||
`dict[str, Union[torch.Tensor, int]]`: Dictionary mapping the parameter names to their data pointers or the original parameters if `drop_refs` is `False`
|
||||
"""
|
||||
named_parameters = {}
|
||||
for n, p in model.named_parameters():
|
||||
# We only preserve the data pointers to have the unique 1:1 mapping between the original and the sharded parameters
|
||||
named_parameters[n] = p.data_ptr() if drop_refs else p
|
||||
return named_parameters
|
||||
|
||||
|
||||
def replace_optimizer_params(optimizer: torch.optim.Optimizer):
|
||||
"""
|
||||
This function is called before using `fully_shard` on the model. It replaces the parameters of the optimizer with
|
||||
empty tensors, so `fully_shard` can trigger a new allocation for the sharded ones. After this, we swap the parameters
|
||||
`data_ptr` to the original one, so we can reuse that later to map the sharded parameters to the original ones.
|
||||
This function modifies the optimizer in-place.
|
||||
|
||||
Args:
|
||||
optimizer (torch.optim.Optimizer): Optimizer instance which contains the original model parameters
|
||||
"""
|
||||
|
||||
for param_group in optimizer.param_groups:
|
||||
for i, p in enumerate(param_group["params"]):
|
||||
# We drop a reference to the original param here, so that _move_states_to_device triggers a reallocation
|
||||
# This is required or else the `fully_shard` -> `_move_states_to_device` uses the original memory address
|
||||
# for the sharded parameters, and we get a weird/undefined behavior.
|
||||
param_group["params"][i] = torch.empty_like(p)
|
||||
|
||||
# We save the original data_ptr, so we can swap back the parameters later
|
||||
param_group["params"][i].data_ptr = p.data_ptr()
|
||||
|
||||
|
||||
def swap_back_optimizer_params(
|
||||
model: torch.nn.Module, optimizer: torch.optim.Optimizer, old_named_parameter_pointers: dict[str, int]
|
||||
):
|
||||
"""
|
||||
This function is the counterpart of `replace_optimizer_params`. It is called after `fully_shard` being applied to
|
||||
the model. It swaps the parameters of the optimizer to their sharded counterparts.
|
||||
It is done using the `data_ptr` mapping prepared in `replace_optimizer_params` and `get_named_parameters`.
|
||||
|
||||
Args:
|
||||
model (`torch.nn.Module`): Model instance to get the new named parameters from
|
||||
optimizer (`torch.optim.Optimizer`): Optimizer instance to swap the parameters of
|
||||
old_named_parameter_pointers (`dict[str, int]`): Dictionary mapping the original parameter names: data_ptrs to the new ones
|
||||
"""
|
||||
# We get the new named parameters after `fully_shard` being applied
|
||||
# We don't drop the references as we need the sharded parameters now
|
||||
new_named_parameters = get_named_parameters(model, drop_refs=False)
|
||||
|
||||
# We create a mapping from the original data_ptr to the new sharded param corresponding to it
|
||||
mapping = {p: new_named_parameters[n] for n, p in old_named_parameter_pointers.items()}
|
||||
|
||||
for param_group in optimizer.param_groups:
|
||||
# We swap the parameters of the optimizer to the new sharded ones
|
||||
param_group["params"] = [mapping[p.data_ptr] for p in param_group["params"]]
|
||||
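# Hypothetical sketch of the call order for the fix above (it mirrors `prepare_torch` further below):
#   old_ptrs = get_named_parameters(model, drop_refs=True)        # 1) remember the original data_ptrs
#   replace_optimizer_params(optimizer)                           # 2) detach the optimizer from the original storage
#   fully_shard(model, mp_policy=...)                             # 3) shard the model, allocating new parameters
#   swap_back_optimizer_params(model, optimizer, old_ptrs)        # 4) point the optimizer at the sharded parameters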
|
||||
|
||||
def parse_args():
|
||||
parser = argparse.ArgumentParser()
|
||||
parser.add_argument(
|
||||
"--output_dir",
|
||||
type=str,
|
||||
help="Directory to save the benchmarking results.",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--save_memory_snapshot",
|
||||
action="store_true",
|
||||
default=False,
|
||||
help="If True, `torch.cuda.memory._dump_snapshot` will be used to additionaly save the memory trace.",
|
||||
)
|
||||
######################
|
||||
# Training arguments #
|
||||
######################
|
||||
parser.add_argument(
|
||||
"--batch_size",
|
||||
type=int,
|
||||
default=2,
|
||||
help="Batch size for the training loop.",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--block_size",
|
||||
type=int,
|
||||
default=128,
|
||||
help="The maximum sequence length to use with the model.",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--dataset_fraction",
|
||||
type=float,
|
||||
default=1.0,
|
||||
help="Fraction of the dataset to use.",
|
||||
)
|
||||
return parser.parse_args()
|
||||
|
||||
|
||||
def prepare_dataloader(tokenizer, args, accelerator: Accelerator) -> DataLoader:
|
||||
dataset = load_dataset("tiny_shakespeare", split="train", trust_remote_code=True)
|
||||
|
||||
def tokenize_function(example):
|
||||
return tokenizer(
|
||||
example["text"],
|
||||
)
|
||||
|
||||
dataset = dataset.map(
|
||||
tokenize_function,
|
||||
batched=True,
|
||||
remove_columns=["text"],
|
||||
)
|
||||
|
||||
block_size = min(tokenizer.model_max_length, args.block_size)
|
||||
|
||||
def group_texts(examples):
|
||||
concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
|
||||
total_length = len(concatenated_examples[list(examples.keys())[0]])
|
||||
|
||||
total_length = (total_length // block_size) * block_size
|
||||
|
||||
result = {
|
||||
k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
|
||||
for k, t in concatenated_examples.items()
|
||||
}
|
||||
|
||||
result["labels"] = result["input_ids"].copy()
|
||||
return result
|
||||
|
||||
dataset = dataset.map(group_texts, batched=True)
|
||||
dataset = dataset.select(range(int(len(dataset) * args.dataset_fraction)))
|
||||
|
||||
def collate_fn(examples):
|
||||
return DataCollatorForLanguageModeling(
|
||||
tokenizer=tokenizer,
|
||||
mlm=False,
|
||||
)(examples)
|
||||
|
||||
dataloader = DataLoader(
|
||||
dataset,
|
||||
batch_size=args.batch_size,
|
||||
collate_fn=collate_fn,
|
||||
)
|
||||
dataloader = accelerator.prepare(dataloader)
|
||||
return dataloader
|
||||
|
||||
|
||||
def get_model(model_name: str):
|
||||
# We require the model to be loaded in fp32, otherwise the benchmarks don't match, as accelerate upcasts parameters to fp32
|
||||
config = AutoConfig.from_pretrained(model_name, trust_remote_code=True, torch_dtype=torch.float32)
|
||||
model = AutoModelForCausalLM.from_config(config)
|
||||
return model
|
||||
|
||||
|
||||
def get_tokenizer(model_name: str):
|
||||
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
|
||||
tokenizer.pad_token = tokenizer.eos_token
|
||||
return tokenizer
|
||||
|
||||
|
||||
def prepare_torch(
|
||||
args, config: dict, post_shard_optimizer: bool = False, apply_optimizer_fix: bool = False
|
||||
) -> tuple[torch.nn.Module, torch.optim.Optimizer, torch.utils.data.DataLoader, Accelerator]:
|
||||
mp_policy = MixedPrecisionPolicy(
|
||||
param_dtype=torch.bfloat16,
|
||||
reduce_dtype=torch.bfloat16,
|
||||
output_dtype=torch.bfloat16,
|
||||
)
|
||||
|
||||
accelerator = Accelerator(mixed_precision="bf16")
|
||||
set_seed(SEED)
|
||||
is_fixed = "fixed" if apply_optimizer_fix else "not_fixed"
|
||||
is_post_shard = "optimizer_after_fsdp" if post_shard_optimizer else "optimizer_before_fsdp"
|
||||
run_name = f"torch_{is_post_shard}" if post_shard_optimizer else f"torch_{is_post_shard}_{is_fixed}"
|
||||
|
||||
tokenizer = get_tokenizer(config["model_name"])
|
||||
train_dataloader = prepare_dataloader(tokenizer, args, accelerator)
|
||||
|
||||
memory_tracker = MemoryTracker(accelerator.device, args.output_dir, run_name, args.save_memory_snapshot)
|
||||
memory_tracker.start()
|
||||
|
||||
model = get_model(config["model_name"])
|
||||
optimizer = None
|
||||
|
||||
if not post_shard_optimizer:
|
||||
optimizer = AdamW(model.parameters(), lr=config["learning_rate"])
|
||||
|
||||
if apply_optimizer_fix:
|
||||
# We drop the references to the original parameters, so that `fully_shard` can trigger a new allocation
|
||||
# Then we get the `module_name: data_ptr` mapping, so we can swap back the parameters later
|
||||
old_named_parameters = get_named_parameters(model, drop_refs=True)
|
||||
|
||||
# We replace the parameters of the optimizer with empty tensors, so that `fully_shard` can trigger a new allocation
|
||||
# We also change the `data_ptr` of the parameters to the original ones, so we can swap back the parameters later
|
||||
replace_optimizer_params(optimizer)
|
||||
|
||||
for module in model.modules():
|
||||
if isinstance(module, Qwen2DecoderLayer):
|
||||
fully_shard(module, mp_policy=mp_policy)
|
||||
fully_shard(model, mp_policy=mp_policy)
|
||||
|
||||
# We do this to imitate how accelerate forces outputs to be in fp32 via `convert_outputs_to_fp32`
|
||||
autocast_context = torch.autocast(device_type=accelerator.state.device.type, dtype=torch.bfloat16)
|
||||
model_forward_func = model.forward.__func__
|
||||
new_forward = autocast_context(model_forward_func)
|
||||
model.forward = MethodType(new_forward, model)
|
||||
model.forward = MethodType(convert_outputs_to_fp32(model.forward.__func__), model)
|
||||
|
||||
if post_shard_optimizer:
|
||||
optimizer = AdamW(model.parameters(), lr=config["learning_rate"])
|
||||
|
||||
if not post_shard_optimizer and apply_optimizer_fix:
|
||||
# We swap back the parameters of the optimizer to the original ones
|
||||
swap_back_optimizer_params(model, optimizer, old_named_parameters)
|
||||
|
||||
return model, optimizer, train_dataloader, accelerator, memory_tracker
|
||||
|
||||
|
||||
def prepare_accelerate(
|
||||
args, config: dict
|
||||
) -> tuple[torch.nn.Module, torch.optim.Optimizer, torch.utils.data.DataLoader, Accelerator]:
|
||||
if is_initialized():
|
||||
AcceleratorState()._reset_state(True)
|
||||
|
||||
fsdp_plugin = FullyShardedDataParallelPlugin(
|
||||
fsdp_version=2,
|
||||
auto_wrap_policy="transformer_based_wrap",
|
||||
transformer_cls_names_to_wrap=["Qwen2DecoderLayer"],
|
||||
)
|
||||
accelerator = Accelerator(
|
||||
fsdp_plugin=fsdp_plugin,
|
||||
mixed_precision="bf16",
|
||||
)
|
||||
set_seed(SEED)
|
||||
|
||||
tokenizer = get_tokenizer(config["model_name"])
|
||||
train_dataloader = prepare_dataloader(tokenizer, args, accelerator)
|
||||
|
||||
memory_tracker = MemoryTracker(accelerator.device, args.output_dir, "accelerate", args.save_memory_snapshot)
|
||||
memory_tracker.start()
|
||||
|
||||
model = get_model(config["model_name"])
|
||||
optimizer = AdamW(model.parameters(), lr=config["learning_rate"])
|
||||
|
||||
model, optimizer = accelerator.prepare(model, optimizer)
|
||||
|
||||
return model, optimizer, train_dataloader, accelerator, memory_tracker
|
||||
114  benchmarks/fsdp2/visualize.py  Normal file
@ -0,0 +1,114 @@
# Copyright 2025 The HuggingFace Team. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
import argparse
|
||||
import json
|
||||
|
||||
import matplotlib.pyplot as plt
|
||||
|
||||
|
||||
def parse_args():
|
||||
parser = argparse.ArgumentParser()
|
||||
parser.add_argument("--dir", type=str, help="Directory containing the memory usage data")
|
||||
parser.add_argument(
|
||||
"--memory_threshold",
|
||||
type=int,
|
||||
default=0,
|
||||
help="Memory threshold to filter data that is below this value (only filters 1st `--filter_partition` of the points which should roughtly correspond to the model loading)",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--filter_partition",
|
||||
type=float,
|
||||
default=1 / 3,
|
||||
help="Partition to drop data from that are below the memory threshold",
|
||||
)
|
||||
return parser.parse_args()
|
||||
|
||||
|
||||
def filter_data(data, memory_threshold, filter_partition, key):
|
||||
timestamps = data["timestamps"]
|
||||
memory = data[key]
|
||||
|
||||
mid_point = int(len(timestamps) * filter_partition)
|
||||
filtered_times = []
|
||||
filtered_memory = []
|
||||
for i, (t, m) in enumerate(zip(timestamps, memory)):
|
||||
if i < mid_point and m < memory_threshold:
|
||||
continue
|
||||
filtered_times.append(t)
|
||||
filtered_memory.append(m)
|
||||
return filtered_times, filtered_memory
|
||||
|
||||
|
||||
def compare_memory_usage(data, labels, memory_threshold, filter_partition):
|
||||
plt.style.use("seaborn-v0_8")
|
||||
colors = ["#2ecc71", "#e74c3c", "#3498db", "#f1c40f"]
|
||||
|
||||
fig1, ax1 = plt.subplots(figsize=(15, 5))
|
||||
for data_item, label, color in zip(data, labels, colors):
|
||||
timestamps, allocated = filter_data(data_item, memory_threshold, filter_partition, "allocated_memory")
|
||||
ax1.plot(timestamps, allocated, label=label, color=color, linewidth=2)
|
||||
|
||||
ax1.set_xlabel("Time (s)", fontsize=12)
|
||||
ax1.set_ylabel("Allocated Memory (GB)", fontsize=12)
|
||||
ax1.set_title("Allocated Memory Usage Over Time", fontsize=14, pad=15)
|
||||
ax1.grid(True, linestyle="--", alpha=0.7)
|
||||
ax1.legend(frameon=True, fancybox=True, shadow=True, fontsize=10)
|
||||
ax1.spines["top"].set_visible(False)
|
||||
ax1.spines["right"].set_visible(False)
|
||||
plt.tight_layout()
|
||||
|
||||
fig2, ax2 = plt.subplots(figsize=(15, 5))
|
||||
for data_item, label, color in zip(data, labels, colors):
|
||||
timestamps, reserved = filter_data(data_item, memory_threshold, filter_partition, "reserved_memory")
|
||||
ax2.plot(timestamps, reserved, label=label, color=color, linewidth=2)
|
||||
|
||||
ax2.set_xlabel("Time (s)", fontsize=12)
|
||||
ax2.set_ylabel("Reserved Memory (GB)", fontsize=12)
|
||||
ax2.set_title("Reserved Memory Usage Over Time", fontsize=14, pad=15)
|
||||
ax2.grid(True, linestyle="--", alpha=0.7)
|
||||
ax2.legend(frameon=True, fancybox=True, shadow=True, fontsize=10)
|
||||
ax2.spines["top"].set_visible(False)
|
||||
ax2.spines["right"].set_visible(False)
|
||||
plt.tight_layout()
|
||||
|
||||
return fig1, fig2
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
args = parse_args()
|
||||
DIR = args.dir
|
||||
with open(f"{DIR}/torch_optimizer_before_fsdp_not_fixed_memory_usage.json") as f:
|
||||
optimizer_before_fsdp_not_fixed = json.load(f)
|
||||
|
||||
with open(f"{DIR}/torch_optimizer_after_fsdp_memory_usage.json") as f:
|
||||
optimizer_after_fsdp = json.load(f)
|
||||
|
||||
with open(f"{DIR}/torch_optimizer_before_fsdp_fixed_memory_usage.json") as f:
|
||||
optimizer_before_fsdp_fixed = json.load(f)
|
||||
|
||||
with open(f"{DIR}/accelerate_memory_usage.json") as f:
|
||||
accelerate = json.load(f)
|
||||
|
||||
data = [optimizer_before_fsdp_not_fixed, optimizer_before_fsdp_fixed, optimizer_after_fsdp, accelerate]
|
||||
labels = [
|
||||
"Optimizer Before FSDP (w/o fix)",
|
||||
"Optimizer Before FSDP (w/ fix)",
|
||||
"Optimizer After FSDP",
|
||||
"Accelerate",
|
||||
]
|
||||
|
||||
fig1, fig2 = compare_memory_usage(data, labels, args.memory_threshold, args.filter_partition)
|
||||
fig1.savefig(f"{DIR}/allocated_memory.png")
|
||||
fig2.savefig(f"{DIR}/reserved_memory.png")
|
||||
111  benchmarks/torch.compile/README.md  Normal file
@ -0,0 +1,111 @@
# Regional Compilation Benchmark

This benchmark compares different compilation strategies using PyTorch's `torch.compile` and Accelerate's `compile_regions` utility, which is based on the recipe in the [PyTorch documentation](https://pytorch.org/tutorials/recipes/regional_compilation.html).

## Overview

The benchmark evaluates three approaches:

- **Baseline**: No compilation, standard PyTorch eager execution.
- **Full compilation**: Using PyTorch's `torch.compile()` on the entire model.
- **Regional compilation**: Using `accelerate.utils.compile_regions()`, which targets specific blocks of the model to optimize compilation time.

Each approach is tested with different batch sizes (1 and 4) and sequence lengths (128) on various LLaMA-based models ranging from 1B to 13B parameters. We purposefully run the forward pass outside of the `torch.no_grad()` context to simulate performance in a training environment, where gradients are needed.
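As a quick reference, here is a minimal sketch of how the two compiled variants are built (the model name is illustrative; the full benchmark script is `regional_compilation.py` below):

```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM

from accelerate.utils import compile_regions

config = AutoConfig.from_pretrained("NousResearch/Llama-3.2-1B")
model = AutoModelForCausalLM.from_config(config).eval()

full_model = torch.compile(model)        # compiles the whole model on the first forward pass
regional_model = compile_regions(model)  # compiles repeated blocks (e.g. decoder layers) individually

input_ids = torch.randint(0, 1000, (1, 128), dtype=torch.int64)
_ = regional_model(input_ids, use_cache=False)  # the first call triggers the (much cheaper) compilation
```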
## Usage

To run this benchmark:

```bash
python regional_compilation.py
```

The script will automatically download the model configurations, create models, and benchmark both compilation and inference times across different scenarios.

## Requirements

- Suitable GPU memory for the models being tested.
- PyTorch with CUDA support.
- Transformers library.
- Accelerate library.

## Results

The benchmark results are summarized in the following figures:

- Compilation time is how long it takes to run the first forward pass.
- Speedup factor is the ratio of non-compiled baseline inference time to the fully/regionally compiled inference time, as illustrated below.
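For example, for `NousResearch/Llama-3.2-1B` at batch size 1, full compilation gives a speedup factor of roughly 18.3 / 6.3 ≈ 2.9x and regional compilation roughly 18.3 / 9.7 ≈ 1.9x (values taken from the tables below).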
<p align="center">
  <img src="imgs/compilation_time.png" width="80%" alt="Compilation Time">
</p>
<p align="center">
  <img src="imgs/speedup_factor.png" width="80%" alt="Speedup Factor">
</p>

Full results are available in the tables below:

```markdown
[-------------------------------------------------- NousResearch/Llama-3.2-1B ---------------------------------------------------]
                            |  Inference time (1x128)  |  Inference time (4x128)  |  Compile time (1x128)  |  Compile time (4x128)
1 threads: -----------------------------------------------------------------------------------------------------------------------
      Baseline              |          18.3            |          18.4            |                        |
      Full compilation      |           6.3            |          10.0            |        10696.4         |        10248.0
      Regional compilation  |           9.7            |          10.0            |         1952.7         |         2903.9

Times are in milliseconds (ms).

[---------------------------------------------- NousResearch/Hermes-3-Llama-3.2-3B ----------------------------------------------]
                            |  Inference time (1x128)  |  Inference time (4x128)  |  Compile time (1x128)  |  Compile time (4x128)
1 threads: -----------------------------------------------------------------------------------------------------------------------
      Baseline              |          33.4            |          33.6            |                        |
      Full compilation      |          11.2            |          23.9            |        17857.5         |        17736.5
      Regional compilation  |          17.3            |          23.7            |         2993.2         |         2478.8

Times are in milliseconds (ms).

[---------------------------------------------- NousResearch/Hermes-3-Llama-3.1-8B ----------------------------------------------]
                            |  Inference time (1x128)  |  Inference time (4x128)  |  Compile time (1x128)  |  Compile time (4x128)
1 threads: -----------------------------------------------------------------------------------------------------------------------
      Baseline              |          40.3            |          59.5            |                        |
      Full compilation      |          18.9            |          54.4            |        20437.8         |        20152.3
      Regional compilation  |          19.7            |          54.0            |         2903.1         |         2438.0

Times are in milliseconds (ms).

[--------------------------------------------- NousResearch/Nous-Hermes-Llama2-13b ----------------------------------------------]
                            |  Inference time (1x128)  |  Inference time (4x128)  |  Compile time (1x128)  |  Compile time (4x128)
1 threads: -----------------------------------------------------------------------------------------------------------------------
      Baseline              |          45.5            |         100.4            |                        |
      Full compilation      |          29.4            |          89.7            |        23099.4         |        22885.9
      Regional compilation  |          29.4            |          87.5            |         2945.5         |         2526.2

Times are in milliseconds (ms).
```

## Results Summary

### Compilation Time

Regional compilation provides significantly faster compilation times compared to full model compilation:

- **Full compilation**: Takes ~10-23 seconds depending on model size.
- **Regional compilation**: Takes only ~2-3 seconds across all model sizes.
- **Speed improvement**: Regional compilation is **5-9x faster** to compile.

### Inference Time

Regional compilation delivers inference performance close to full compilation:

- For batch size 1:
  - For smaller models (1B-3B): Full compilation has a slight edge over regional compilation.
  - For larger models (8B-13B): Regional compilation performs similarly to full compilation.
- For batch size 4: Regional compilation performs similarly to full compilation across all models.

## Key Takeaways

1. **Comparable Performance**: Regional compilation delivers performance speedups similar to full compilation, especially for larger models.
2. **Faster Compilation**: Regional compilation significantly reduces the time taken to compile models, making it a more efficient choice for deployment.
3. **Batch Size Impact**: At batch size 4, full compilation and regional compilation perform nearly identically.
4. **Model Size Impact**: Even with a small batch size, full compilation and regional compilation perform similarly for larger models (8B-13B).
5. **Practical Application**: For real-world applications, regional compilation is a practical choice for optimizing training cold start times, especially when working with large models.
||||
BIN  benchmarks/torch.compile/imgs/compilation_time.png  Normal file  (binary file not shown; After Width: | Height: | Size: 242 KiB)
BIN  benchmarks/torch.compile/imgs/speedup_factor.png  Normal file  (binary file not shown; After Width: | Height: | Size: 218 KiB)
77  benchmarks/torch.compile/regional_compilation.py  Normal file
@ -0,0 +1,77 @@
# Copyright 2025 The HuggingFace Team. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
import torch
|
||||
from torch.utils.benchmark import Compare, Timer
|
||||
from transformers import AutoConfig, AutoModelForCausalLM
|
||||
|
||||
from accelerate.test_utils.testing import get_backend
|
||||
from accelerate.utils import compile_regions
|
||||
|
||||
|
||||
torch.set_float32_matmul_precision("high")
|
||||
|
||||
COMPILE_ITERS = 2
|
||||
INFERENCE_ITERS = 100
|
||||
|
||||
BASELINE = "Baseline"
|
||||
COMPILE_TIME = "Compile time"
|
||||
INFRENCE_TIME = "Inference time"
|
||||
FULL_COMPILATION = "Full compilation"
|
||||
REGIONAL_COMPILATION = "Regional compilation"
|
||||
|
||||
INFRENCE_STMT = "model(input_ids, use_cache=False)"
|
||||
COMPILE_STMT = f"torch._dynamo.reset(); torch._inductor.utils.clear_inductor_caches(); {INFRENCE_STMT}"
|
||||
|
||||
torch_device_type, _, _ = get_backend()
|
||||
|
||||
results = []
|
||||
for model_id in [
|
||||
# non-gated llama models
|
||||
"NousResearch/Llama-3.2-1B",
|
||||
"NousResearch/Hermes-3-Llama-3.2-3B",
|
||||
"NousResearch/Hermes-3-Llama-3.1-8B",
|
||||
"NousResearch/Nous-Hermes-Llama2-13b",
|
||||
]:
|
||||
with torch.device(torch_device_type):
|
||||
config = AutoConfig.from_pretrained(model_id)
|
||||
model = AutoModelForCausalLM.from_config(config).to(dtype=torch.float16).eval()
|
||||
|
||||
full_compilation_model = torch.compile(model)
|
||||
regional_compilation_model = compile_regions(model)
|
||||
|
||||
for model, sub_label, description, stmt, iters in [
|
||||
(model, BASELINE, INFRENCE_TIME, INFRENCE_STMT, INFERENCE_ITERS),
|
||||
(full_compilation_model, FULL_COMPILATION, COMPILE_TIME, COMPILE_STMT, COMPILE_ITERS),
|
||||
(full_compilation_model, FULL_COMPILATION, INFRENCE_TIME, INFRENCE_STMT, INFERENCE_ITERS),
|
||||
(regional_compilation_model, REGIONAL_COMPILATION, COMPILE_TIME, COMPILE_STMT, COMPILE_ITERS),
|
||||
(regional_compilation_model, REGIONAL_COMPILATION, INFRENCE_TIME, INFRENCE_STMT, INFERENCE_ITERS),
|
||||
]:
|
||||
for batch_size, sequence_length in [(1, 128), (4, 128)]:
|
||||
input_ids = torch.randint(
|
||||
0, 1000, size=(batch_size, sequence_length), dtype=torch.int64, device=torch_device_type
|
||||
)
|
||||
results.append(
|
||||
Timer(
|
||||
label=model_id,
|
||||
sub_label=sub_label,
|
||||
description=f"{description} ({batch_size}x{sequence_length})",
|
||||
globals={"model": model, "input_ids": input_ids},
|
||||
stmt=stmt,
|
||||
).timeit(number=iters)
|
||||
)
|
||||
|
||||
compare = Compare(results)
|
||||
compare.colorize()
|
||||
compare.print()
|
||||
@ -33,6 +33,7 @@ huggingface/accelerate:{accelerator}-{nightly,release}
|
||||
* `cpu`: Comes compiled off of `python:3.9-slim` and is designed for non-CUDA based workloads.
|
||||
* More to come soon
|
||||
* `gpu-deepspeed`: Comes compiled off of the `nvidia/cuda` image and includes core parts like `bitsandbytes` as well as the latest `deepspeed` version. Runs off python 3.10.
|
||||
* `gpu-fp8-transformerengine`: Comes compiled off of `nvcr.io/nvidia/pytorch` and is specifically for running the `benchmarks/fp8` scripts on devices which support FP8 operations using the `TransformerEngine` library (RTX 4090, H100, etc)
|
||||
|
||||
## Nightlies vs Releases
|
||||
|
||||
|
||||
@ -1,7 +1,7 @@
|
||||
# Builds CPU-only Docker image of PyTorch
|
||||
# Uses multi-staged approach to reduce size
|
||||
# Stage 1
|
||||
FROM python:3.8-slim as compile-image
|
||||
FROM python:3.10-slim as compile-image
|
||||
|
||||
ARG DEBIAN_FRONTEND=noninteractive
|
||||
|
||||
@ -25,7 +25,7 @@ RUN python3 -m pip install --no-cache-dir \
|
||||
--extra-index-url https://download.pytorch.org/whl/cpu
|
||||
|
||||
# Stage 2
|
||||
FROM python:3.8-slim AS build-image
|
||||
FROM python:3.10-slim AS build-image
|
||||
COPY --from=compile-image /opt/venv /opt/venv
|
||||
RUN useradd -ms /bin/bash user
|
||||
USER user
|
||||
|
||||
@ -4,7 +4,6 @@
|
||||
# Use base conda image to reduce time
|
||||
FROM continuumio/miniconda3:latest AS compile-image
|
||||
# Specify py version
|
||||
# Note: DeepSpeed beyond v0.12.6 requires py 3.10
|
||||
ENV PYTHON_VERSION=3.10
|
||||
# Install apt libs
|
||||
RUN apt-get update && \
|
||||
@ -25,12 +24,12 @@ RUN source activate accelerate && conda install -c conda-forge mpi4py
|
||||
RUN source activate accelerate && \
|
||||
python3 -m pip install --no-cache-dir \
|
||||
git+https://github.com/huggingface/accelerate#egg=accelerate[testing,test_trackers,deepspeed] \
|
||||
--extra-index-url https://download.pytorch.org/whl/cu117
|
||||
--extra-index-url https://download.pytorch.org/whl/cu126
|
||||
|
||||
RUN python3 -m pip install --no-cache-dir bitsandbytes
|
||||
|
||||
# Stage 2
|
||||
FROM nvidia/cuda:12.1.0-cudnn8-devel-ubuntu20.04 AS build-image
|
||||
FROM nvidia/cuda:12.6.3-cudnn-devel-ubuntu22.04 AS build-image
|
||||
COPY --from=compile-image /opt/conda /opt/conda
|
||||
ENV PATH /opt/conda/bin:$PATH
|
||||
|
||||
|
||||
@ -4,7 +4,7 @@
|
||||
# Use base conda image to reduce time
|
||||
FROM continuumio/miniconda3:latest AS compile-image
|
||||
# Specify py version
|
||||
ENV PYTHON_VERSION=3.9
|
||||
ENV PYTHON_VERSION=3.10
|
||||
# Install apt libs
|
||||
RUN apt-get update && \
|
||||
apt-get install -y curl git wget && \
|
||||
@ -24,12 +24,12 @@ RUN source activate accelerate && conda install -c conda-forge mpi4py
|
||||
RUN source activate accelerate && \
|
||||
python3 -m pip install --no-cache-dir \
|
||||
git+https://github.com/huggingface/accelerate#egg=accelerate[testing,test_trackers] \
|
||||
--extra-index-url https://download.pytorch.org/whl/cu117
|
||||
--extra-index-url https://download.pytorch.org/whl/cu126
|
||||
|
||||
RUN python3 -m pip install --no-cache-dir bitsandbytes
|
||||
|
||||
# Stage 2
|
||||
FROM nvidia/cuda:12.1.0-cudnn8-devel-ubuntu20.04 AS build-image
|
||||
FROM nvidia/cuda:12.6.3-cudnn-devel-ubuntu22.04 AS build-image
|
||||
COPY --from=compile-image /opt/conda /opt/conda
|
||||
ENV PATH /opt/conda/bin:$PATH
|
||||
|
||||
|
||||
@ -16,7 +16,7 @@
|
||||
- local: basic_tutorials/tpu
|
||||
title: TPU training
|
||||
- local: basic_tutorials/launch
|
||||
title: Launching distributed code
|
||||
title: Launching Accelerate scripts
|
||||
- local: basic_tutorials/notebook
|
||||
title: Launching distributed training from Jupyter Notebooks
|
||||
title: Tutorials
|
||||
@ -34,7 +34,7 @@
|
||||
- local: usage_guides/profiler
|
||||
title: Profiler
|
||||
- local: usage_guides/checkpoint
|
||||
title: Save and load training states
|
||||
title: Checkpointing
|
||||
- local: basic_tutorials/troubleshooting
|
||||
title: Troubleshoot
|
||||
- local: usage_guides/training_zoo
|
||||
@ -50,18 +50,24 @@
|
||||
title: Low precision (FP8) training
|
||||
- local: usage_guides/deepspeed
|
||||
title: DeepSpeed
|
||||
- local: usage_guides/deepspeed_multiple_model
|
||||
title: Using multiple models with DeepSpeed
|
||||
- local: usage_guides/ddp_comm_hook
|
||||
title: DDP Communication Hooks
|
||||
- local: usage_guides/fsdp
|
||||
title: Fully Sharded Data Parallelism
|
||||
title: Fully Sharded Data Parallel
|
||||
- local: usage_guides/megatron_lm
|
||||
title: Megatron-LM
|
||||
- local: usage_guides/sagemaker
|
||||
title: Amazon SageMaker
|
||||
- local: usage_guides/mps
|
||||
title: Apple M1 GPUs
|
||||
- local: usage_guides/ipex
|
||||
title: IPEX training with CPU
|
||||
- local: usage_guides/intel_cpu
|
||||
title: Intel CPU
|
||||
- local: usage_guides/gaudi
|
||||
title: Intel Gaudi
|
||||
- local: usage_guides/compilation
|
||||
title: Compilation
|
||||
title: Training
|
||||
- isExpanded: true
|
||||
sections:
|
||||
@ -73,7 +79,7 @@
|
||||
title: How to guides
|
||||
- sections:
|
||||
- local: concept_guides/internal_mechanism
|
||||
title: 🤗 Accelerate's internal mechanism
|
||||
title: Accelerate's internal mechanism
|
||||
- local: concept_guides/big_model_inference
|
||||
title: Loading big models into memory
|
||||
- local: concept_guides/performance
|
||||
@ -84,24 +90,28 @@
|
||||
title: Gradient synchronization
|
||||
- local: concept_guides/fsdp_and_deepspeed
|
||||
title: FSDP vs DeepSpeed
|
||||
- local: concept_guides/fsdp1_vs_fsdp2
|
||||
title: FSDP1 vs FSDP2
|
||||
- local: concept_guides/context_parallelism
|
||||
title: Context parallelism
|
||||
- local: concept_guides/low_precision_training
|
||||
title: How training in low-precision environments is possible (FP8)
|
||||
title: Low precision training methods
|
||||
- local: concept_guides/training_tpu
|
||||
title: TPU best practices
|
||||
title: Training on TPUs
|
||||
title: Concepts and fundamentals
|
||||
- sections:
|
||||
- sections:
|
||||
- local: package_reference/accelerator
|
||||
title: Accelerator
|
||||
- local: package_reference/state
|
||||
title: Stateful configuration classes
|
||||
title: Stateful classes
|
||||
- local: package_reference/cli
|
||||
title: The Command Line
|
||||
- local: package_reference/torch_wrappers
|
||||
title: Torch wrapper classes
|
||||
title: DataLoaders, Optimizers, Schedulers
|
||||
- local: package_reference/tracking
|
||||
title: Experiment trackers
|
||||
- local: package_reference/launchers
|
||||
title: Distributed launchers
|
||||
title: Launchers
|
||||
- local: package_reference/deepspeed
|
||||
title: DeepSpeed utilities
|
||||
- local: package_reference/logging
|
||||
@ -109,15 +119,15 @@
|
||||
- local: package_reference/big_modeling
|
||||
title: Working with large models
|
||||
- local: package_reference/inference
|
||||
title: Distributed inference with big models
|
||||
title: Pipeline parallelism
|
||||
- local: package_reference/kwargs
|
||||
title: Kwargs handlers
|
||||
- local: package_reference/fp8
|
||||
title: FP8 Functionality
|
||||
title: FP8
|
||||
- local: package_reference/utilities
|
||||
title: Utility functions and classes
|
||||
- local: package_reference/megatron_lm
|
||||
title: Megatron-LM Utilities
|
||||
title: Megatron-LM utilities
|
||||
- local: package_reference/fsdp
|
||||
title: Fully Sharded Data Parallelism Utilities
|
||||
title: Fully Sharded Data Parallel utilities
|
||||
title: "Reference"
|
||||
|
||||
@ -13,31 +13,29 @@ specific language governing permissions and limitations under the License.
|
||||
rendered properly in your Markdown viewer.
|
||||
-->
|
||||
|
||||
# Installation and Configuration
|
||||
# Installation
|
||||
|
||||
Before you start, you will need to setup your environment, install the appropriate packages, and configure 🤗 Accelerate. 🤗 Accelerate is tested on **Python 3.8+**.
|
||||
Before you start, you will need to setup your environment, install the appropriate packages, and configure Accelerate. Accelerate is tested on **Python 3.8+**.
|
||||
|
||||
## Installing 🤗 Accelerate
|
||||
Accelerate is available on pypi and conda, as well as on GitHub. Details to install from each are below:
|
||||
|
||||
🤗 Accelerate is available on pypi and conda, as well as on GitHub. Details to install from each are below:
|
||||
## pip
|
||||
|
||||
### pip
|
||||
|
||||
To install 🤗 Accelerate from pypi, perform:
|
||||
To install Accelerate from pypi, perform:
|
||||
|
||||
```bash
|
||||
pip install accelerate
|
||||
```
|
||||
|
||||
### conda
|
||||
## conda
|
||||
|
||||
🤗 Accelerate can also be installed with conda with:
|
||||
Accelerate can also be installed with conda with:
|
||||
|
||||
```bash
|
||||
conda install -c conda-forge accelerate
|
||||
```
|
||||
|
||||
### Source
|
||||
## Source
|
||||
|
||||
New features are added every day that haven't been released yet. To try them out yourself, install
|
||||
from the GitHub repository:
|
||||
@ -56,9 +54,9 @@ cd accelerate
|
||||
pip install -e .
|
||||
```
|
||||
|
||||
## Configuring 🤗 Accelerate
|
||||
## Configuration
|
||||
|
||||
After installing, you need to configure 🤗 Accelerate for how the current system is setup for training.
|
||||
After installing, you need to configure Accelerate for how the current system is setup for training.
|
||||
To do so run the following and answer the questions prompted to you:
|
||||
|
||||
```bash
|
||||
@ -70,7 +68,8 @@ To write a barebones configuration that doesn't include options such as DeepSpee
|
||||
```bash
|
||||
python -c "from accelerate.utils import write_basic_config; write_basic_config(mixed_precision='fp16')"
|
||||
```
|
||||
🤗 Accelerate will automatically utilize the maximum number of GPUs available and set the mixed precision mode.
|
||||
|
||||
Accelerate will automatically utilize the maximum number of GPUs available and set the mixed precision mode.
|
||||
|
||||
To check that your configuration looks fine, run:
|
||||
|
||||
@ -80,23 +79,36 @@ accelerate env
|
||||
|
||||
An example output is shown below, which describes two GPUs on a single machine with no mixed precision being used:
|
||||
|
||||
|
||||
```bash
|
||||
- `Accelerate` version: 0.11.0.dev0
|
||||
- Platform: Linux-5.10.0-15-cloud-amd64-x86_64-with-debian-11.3
|
||||
- Python version: 3.7.12
|
||||
- Numpy version: 1.19.5
|
||||
- PyTorch version (GPU?): 1.12.0+cu102 (True)
|
||||
- `Accelerate` version: 1.2.0.dev0
|
||||
- Platform: Linux-6.8.0-47-generic-x86_64-with-glibc2.35
|
||||
- `accelerate` bash location: /home/zach/miniconda3/envs/accelerate/bin/accelerate
|
||||
- Python version: 3.10.13
|
||||
- Numpy version: 1.26.4
|
||||
- PyTorch version (GPU?): 2.5.1+cu124 (True)
|
||||
- PyTorch XPU available: False
|
||||
- PyTorch NPU available: False
|
||||
- PyTorch MLU available: False
|
||||
- PyTorch MUSA available: False
|
||||
- System RAM: 187.91 GB
|
||||
- GPU type: NVIDIA GeForce RTX 4090
|
||||
- `Accelerate` default config:
|
||||
- compute_environment: LOCAL_MACHINE
|
||||
- distributed_type: MULTI_GPU
|
||||
- mixed_precision: no
|
||||
- use_cpu: False
|
||||
- debug: False
|
||||
- num_processes: 2
|
||||
- machine_rank: 0
|
||||
- num_machines: 1
|
||||
- main_process_ip: None
|
||||
- main_process_port: None
|
||||
- gpu_ids: all
|
||||
- rdzv_backend: static
|
||||
- same_network: True
|
||||
- main_training_function: main
|
||||
- deepspeed_config: {}
|
||||
- fsdp_config: {}
|
||||
```
|
||||
- enable_cpu_affinity: False
|
||||
- downcast_bf16: no
|
||||
- tpu_use_cluster: False
|
||||
- tpu_use_sudo: False
|
||||
- tpu_env: []
|
||||
```
|
||||
|
||||
@ -13,9 +13,9 @@ specific language governing permissions and limitations under the License.
|
||||
rendered properly in your Markdown viewer.
|
||||
-->
|
||||
|
||||
# Launching your 🤗 Accelerate scripts
|
||||
# Launching Accelerate scripts
|
||||
|
||||
In the previous tutorial, you were introduced to how to modify your current training script to use 🤗 Accelerate.
|
||||
In the previous tutorial, you were introduced to how to modify your current training script to use Accelerate.
|
||||
The final version of that code is shown below:
|
||||
|
||||
```python
|
||||
@ -69,14 +69,14 @@ Next, you need to launch it with `accelerate launch`.
|
||||
<Tip warning={true}>
|
||||
|
||||
It's recommended you run `accelerate config` before using `accelerate launch` to configure your environment to your liking.
|
||||
Otherwise 🤗 Accelerate will use very basic defaults depending on your system setup.
|
||||
Otherwise Accelerate will use very basic defaults depending on your system setup.
|
||||
|
||||
</Tip>
|
||||
|
||||
|
||||
## Using accelerate launch
|
||||
|
||||
🤗 Accelerate has a special CLI command to help you launch your code in your system through `accelerate launch`.
|
||||
Accelerate has a special CLI command to help you launch your code in your system through `accelerate launch`.
|
||||
This command wraps around all of the different commands needed to launch your script on various platforms, without you having to remember what each of them is.
|
||||
|
||||
<Tip>
|
||||
@ -97,11 +97,14 @@ Since this runs the various torch spawn methods, all of the expected environment
|
||||
For example, here is how to use `accelerate launch` with a single GPU:
|
||||
|
||||
```bash
|
||||
# for cuda device:
|
||||
CUDA_VISIBLE_DEVICES="0" accelerate launch {script_name.py} --arg1 --arg2 ...
|
||||
# for xpu device:
|
||||
ZE_AFFINITY_MASK="0" accelerate launch {script_name.py} --arg1 --arg2 ...
|
||||
```
|
||||
|
||||
You can also use `accelerate launch` without performing `accelerate config` first, but you may need to manually pass in the right configuration parameters.
|
||||
In this case, 🤗 Accelerate will make some hyperparameter decisions for you, e.g., if GPUs are available, it will use all of them by default without the mixed precision.
|
||||
In this case, Accelerate will make some hyperparameter decisions for you, e.g., if GPUs are available, it will use all of them by default without the mixed precision.
|
||||
Here is how you would use all GPUs and train with mixed precision disabled:
|
||||
|
||||
```bash
|
||||
@ -129,14 +132,14 @@ accelerate launch -h
|
||||
|
||||
<Tip>
|
||||
|
||||
Even if you are not using 🤗 Accelerate in your code, you can still use the launcher for starting your scripts!
|
||||
Even if you are not using Accelerate in your code, you can still use the launcher for starting your scripts!
|
||||
|
||||
</Tip>
|
||||
|
||||
For a visualization of this difference, that earlier `accelerate launch` on multi-gpu would look something like so with `torchrun`:
|
||||
|
||||
```bash
|
||||
MIXED_PRECISION="fp16" torchrun --nproc_per_node=2 --num_machines=1 {script_name.py} {--arg1} {--arg2} ...
|
||||
MIXED_PRECISION="fp16" torchrun --nproc_per_node=2 --nnodes=1 {script_name.py} {--arg1} {--arg2} ...
|
||||
```
|
||||
|
||||
You can also launch your script utilizing the launch CLI as a python module itself, enabling the ability to pass in other python-specific
|
||||
@ -178,7 +181,7 @@ accelerate launch {script_name.py} {--arg1} {--arg2} ...
|
||||
## Custom Configurations
|
||||
|
||||
As briefly mentioned earlier, `accelerate launch` should be mostly used through combining set configurations
|
||||
made with the `accelerate config` command. These configs are saved to a `default_config.yaml` file in your cache folder for 🤗 Accelerate.
|
||||
made with the `accelerate config` command. These configs are saved to a `default_config.yaml` file in your cache folder for Accelerate.
|
||||
This cache folder is located at (with decreasing order of priority):
|
||||
|
||||
- The content of your environment variable `HF_HOME` suffixed with `accelerate`.
|
||||
@ -211,7 +214,7 @@ accelerate launch --config_file {path/to/config/my_config_file.yaml} {script_nam
|
||||
```
|
||||
|
||||
## Multi-node training
|
||||
Multi-node training with 🤗Accelerate is similar to [multi-node training with torchrun](https://pytorch.org/tutorials/intermediate/ddp_series_multinode.html). The simplest way to launch a multi-node training run is to do the following:
|
||||
Multi-node training with Accelerate is similar to [multi-node training with torchrun](https://pytorch.org/tutorials/intermediate/ddp_series_multinode.html). The simplest way to launch a multi-node training run is to do the following:
|
||||
|
||||
- Copy your codebase and data to all nodes. (or place them on a shared filesystem)
|
||||
- Setup your python packages on all nodes.
|
||||
|
||||
@ -145,7 +145,7 @@ Set the mixed precision type to use in the [`Accelerator`], and then use the [`~
|
||||
```diff
|
||||
+ accelerator = Accelerator(mixed_precision="fp16")
|
||||
+ with accelerator.autocast():
|
||||
loss = complex_loss_function(outputs, target):
|
||||
loss = complex_loss_function(outputs, target)
|
||||
```
|
||||
|
||||
## Save and load
|
||||
@ -219,3 +219,6 @@ During training, you may want to save the current state of the model, optimizer,
|
||||
To further customize where and how states are saved through [`~Accelerator.save_state`], use the [`~utils.ProjectConfiguration`] class. For example, if `automatic_checkpoint_naming` is enabled, each saved checkpoint is stored at `Accelerator.project_dir/checkpoints/checkpoint_{checkpoint_number}`.
|
||||
|
||||
Any other stateful items to be stored should be registered with the [`~Accelerator.register_for_checkpointing`] method so they can be saved and loaded. Every object passed to this method to be stored must have a `load_state_dict` and `state_dict` function.
|
||||
|
||||
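For example, a minimal sketch of a custom stateful object (the class and checkpoint folder name are illustrative):

```python
from accelerate import Accelerator


class StepCounter:
    # Toy stateful object: anything exposing `state_dict` and `load_state_dict` can be registered.
    def __init__(self):
        self.steps = 0

    def state_dict(self):
        return {"steps": self.steps}

    def load_state_dict(self, state_dict):
        self.steps = state_dict["steps"]


accelerator = Accelerator()
counter = StepCounter()
accelerator.register_for_checkpointing(counter)

accelerator.save_state("my_checkpoint")  # `counter` is saved alongside the model/optimizer states
accelerator.load_state("my_checkpoint")  # ...and restored from the same folder
```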
> [!TIP]
|
||||
> If you have [`torchdata>=0.8.0`](https://github.com/pytorch/data/tree/main) installed, you can additionally pass `use_stateful_dataloader=True` into your [`~utils.DataLoaderConfiguration`]. This extends Accelerate's DataLoader classes with a `load_state_dict` and `state_dict` function, and makes it so `Accelerator.save_state` and `Accelerator.load_state` also track how far into the training dataset it has read when persisting the model.
|
||||
|
||||
@ -13,7 +13,7 @@ specific language governing permissions and limitations under the License.
|
||||
rendered properly in your Markdown viewer.
|
||||
-->
|
||||
|
||||
# Launching Multi-GPU Training from a Jupyter Environment
|
||||
# Launching distributed training from Jupyter Notebooks
|
||||
|
||||
This tutorial teaches you how to fine tune a computer vision model with 🤗 Accelerate from a Jupyter Notebook on a distributed system.
|
||||
You will also learn how to setup a few requirements needed for ensuring your environment is configured properly, your data has been prepared properly, and finally how to launch training.
|
||||
@ -26,13 +26,13 @@ You will also learn how to setup a few requirements needed for ensuring your env
|
||||
|
||||
## Configuring the Environment
|
||||
|
||||
Before any training can be performed, a 🤗 Accelerate config file must exist in the system. Usually this can be done by running the following in a terminal and answering the prompts:
|
||||
Before any training can be performed, an Accelerate config file must exist in the system. Usually this can be done by running the following in a terminal and answering the prompts:
|
||||
|
||||
```bash
|
||||
accelerate config
|
||||
```
|
||||
|
||||
However, if general defaults are fine and you are *not* running on a TPU, 🤗Accelerate has a utility to quickly write your GPU configuration into a config file via [`utils.write_basic_config`].
|
||||
However, if general defaults are fine and you are *not* running on a TPU, Accelerate has a utility to quickly write your GPU configuration into a config file via [`utils.write_basic_config`].
|
||||
|
||||
The following code will restart Jupyter after writing the configuration, as CUDA code was called to perform this.
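That cell looks roughly like this (a sketch; `write_basic_config` also accepts options such as `mixed_precision` if the defaults aren't what you want):

```python
import os
from accelerate.utils import write_basic_config

write_basic_config()  # writes a default single-node GPU config to the Accelerate cache folder
os._exit(00)  # restart the notebook so the new process starts without CUDA initialized
```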
|
||||
|
||||
@ -52,7 +52,7 @@ os._exit(00) # Restart the notebook
|
||||
|
||||
## Preparing the Dataset and Model
|
||||
|
||||
Next you should prepare your dataset. As mentioned at earlier, great care should be taken when preparing the `DataLoaders` and model to make sure that **nothing** is put on *any* GPU.
|
||||
Next you should prepare your dataset. As mentioned earlier, great care should be taken when preparing the `DataLoaders` and model to make sure that **nothing** is put on *any* GPU.
|
||||
|
||||
If you do, it is recommended to put that specific code into a function and call that from within the notebook launcher interface, which will be shown later.
|
||||
|
||||
@ -327,7 +327,7 @@ def training_loop(mixed_precision="fp16", seed: int = 42, batch_size: int = 64):
|
||||
# Build dataloaders
|
||||
train_dataloader, eval_dataloader = get_dataloaders(batch_size)
|
||||
|
||||
# Instantiate the model (you build the model here so that the seed also controls new weight initaliziations)
|
||||
# Instantiate the model (you build the model here so that the seed also controls new weight initializations)
|
||||
model = create_model("resnet50d", pretrained=True, num_classes=len(label_to_id))
|
||||
|
||||
# Freeze the base model
|
||||
@ -454,7 +454,7 @@ epoch 4: 94.71
|
||||
|
||||
And that's it!
|
||||
|
||||
Please note that [`notebook_launcher`] ignores the 🤗 Accelerate config file, to launch based on the config use:
|
||||
Please note that [`notebook_launcher`] ignores the Accelerate config file; to launch based on the config, use:
|
||||
|
||||
```bash
|
||||
accelerate launch
|
||||
|
||||
@ -15,10 +15,10 @@ rendered properly in your Markdown viewer.
|
||||
|
||||
# Overview
|
||||
|
||||
Welcome to the 🤗 Accelerate tutorials! These introductory guides will help catch you up to speed on working with 🤗 Accelerate.
|
||||
Welcome to the Accelerate tutorials! These introductory guides will help catch you up to speed on working with Accelerate.
|
||||
You'll learn how to modify your code to have it work with the API seamlessly, how to launch your script properly,
|
||||
and more!
|
||||
|
||||
These tutorials assume some basic knowledge of Python and familiarity with the PyTorch framework.
|
||||
|
||||
If you have any questions about 🤗 Accelerate, feel free to join and ask the community on our [forum](https://discuss.huggingface.co/c/accelerate/18).
|
||||
If you have any questions about Accelerate, feel free to join and ask the community on our [forum](https://discuss.huggingface.co/c/accelerate/18).
|
||||
@ -111,17 +111,17 @@ Input shapes:
|
||||
|
||||
For early stopping in distributed training, if each process has a specific stopping condition (e.g. validation loss), it may not be synchronized across all processes. As a result, a break can happen on process 0 but not on process 1 which will cause your code to hang indefinitely until a timeout occurs.
|
||||
|
||||
If you have early stopping conditionals, use the `set_breakpoint` and `check_breakpoint` methods to make sure all the processes
|
||||
If you have early stopping conditionals, use the `set_trigger` and `check_trigger` methods to make sure all the processes
|
||||
are ended correctly.
|
||||
|
||||
```py
|
||||
# Assume `should_do_breakpoint` is a custom defined function that returns a conditional,
|
||||
# and that conditional might be true only on process 1
|
||||
if should_do_breakpoint(loss):
|
||||
accelerator.set_breakpoint()
|
||||
accelerator.set_trigger()
|
||||
|
||||
# Later in the training script when we need to check for the breakpoint
|
||||
if accelerator.check_breakpoint():
|
||||
if accelerator.check_trigger():
|
||||
break
|
||||
```
|
||||
|
||||
@ -142,9 +142,9 @@ hostnames for each of the nodes.
|
||||
mpirun -f hostfile -n {number of nodes} -ppn 1 hostname
|
||||
```
|
||||
|
||||
## CUDA Out-of-Memory
|
||||
## Out-of-Memory
|
||||
|
||||
One of the most frustrating errors when it comes to running training scripts is hitting "CUDA Out-of-Memory". The entire script needs to be restarted and any progress is lost.
|
||||
One of the most frustrating errors when it comes to running training scripts is hitting "Out-of-Memory" on devices like CUDA, XPU or CPU. The entire script needs to be restarted and any progress is lost.
|
||||
|
||||
To address this problem, Accelerate provides the [`find_executable_batch_size`] utility that is heavily based on [toma](https://github.com/BlackHC/toma).
|
||||
This utility retries code that fails due to OOM (out-of-memory) conditions and automatically lowers batch sizes. For each OOM condition, the algorithm decreases the batch size by half and retries the code until it succeeds.
|
||||
@ -153,7 +153,7 @@ To use [`find_executable_batch_size`], restructure your training function to inc
|
||||
|
||||
<Tip warning={true}>
|
||||
|
||||
The inner function **must** take batch size as the first parameter, but we do not pass one to it when called. The wrapper will handles this for you. Any object (models, optimizers) that consumes CUDA memory and is passed to the [`Accelerator`] also **must** be declared inside the inner function.
|
||||
The inner function **must** take batch size as the first parameter, but we do not pass one to it when called. The wrapper will handle this for you. Any object (models, optimizers) that consumes device memory and is passed to the [`Accelerator`] also **must** be declared inside the inner function.
|
||||
|
||||
</Tip>
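Putting those rules together, the pattern looks roughly like this (a sketch: `MyModel` and `build_dataloader` are hypothetical helpers standing in for your own model and dataloader construction):

```python
import torch
from accelerate import Accelerator
from accelerate.utils import find_executable_batch_size


def training_function():
    accelerator = Accelerator()

    @find_executable_batch_size(starting_batch_size=128)
    def inner_training_loop(batch_size):
        nonlocal accelerator  # reuse the same Accelerator across retries
        accelerator.free_memory()  # release references left over from a failed attempt
        model = MyModel()  # hypothetical: build the model *inside* the inner function
        optimizer = torch.optim.AdamW(model.parameters())
        train_dataloader = build_dataloader(batch_size)  # hypothetical dataloader factory
        model, optimizer, train_dataloader = accelerator.prepare(model, optimizer, train_dataloader)
        for batch in train_dataloader:
            ...

    inner_training_loop()  # called without a batch size; the decorator supplies and lowers it on OOM
```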
|
||||
|
||||
@ -204,8 +204,8 @@ Vastly different GPUs within the same setup can lead to performance bottlenecks.
|
||||
|
||||
If none of the solutions and advice here helped resolve your issue, you can always reach out to the community and Accelerate team for help.
|
||||
|
||||
- Ask for help on the Hugging Face forums by posting your question in the [🤗 Accelerate category](https://discuss.huggingface.co/c/accelerate/18). Make sure to write a descriptive post with relevant context about your setup and reproducible code to maximize the likelihood that your problem is solved!
|
||||
- Ask for help on the Hugging Face forums by posting your question in the [Accelerate category](https://discuss.huggingface.co/c/accelerate/18). Make sure to write a descriptive post with relevant context about your setup and reproducible code to maximize the likelihood that your problem is solved!
|
||||
|
||||
- Post a question on [Discord](http://hf.co/join/discord), and let the team and the community help you.
|
||||
|
||||
- Create an Issue on the 🤗 Accelerate [GitHub repository](https://github.com/huggingface/accelerate/issues) if you think you've found a bug related to the library. Include context regarding the bug and details about your distributed setup to help us better figure out what's wrong and how we can fix it.
|
||||
- Create an Issue on the Accelerate [GitHub repository](https://github.com/huggingface/accelerate/issues) if you think you've found a bug related to the library. Include context regarding the bug and details about your distributed setup to help us better figure out what's wrong and how we can fix it.
|
||||
|
||||
@ -13,7 +13,7 @@ specific language governing permissions and limitations under the License.
|
||||
rendered properly in your Markdown viewer.
|
||||
-->
|
||||
|
||||
# Handling big models for inference
|
||||
# Loading big models into memory
|
||||
|
||||
When loading a pre-trained model in PyTorch, the usual workflow looks like this:
|
||||
|
||||
@ -46,7 +46,7 @@ This API is quite new and still in its experimental stage. While we strive to pr
|
||||
|
||||
### Instantiating an empty model
|
||||
|
||||
The first tool 🤗 Accelerate introduces to help with big models is a context manager [`init_empty_weights`] that helps you initialize a model without using any RAM so that step 1 can be done on models of any size. Here is how it works:
|
||||
The first tool Accelerate introduces to help with big models is a context manager [`init_empty_weights`] that helps you initialize a model without using any RAM so that step 1 can be done on models of any size. Here is how it works:
|
||||
|
||||
```py
|
||||
from accelerate import init_empty_weights
|
||||
@ -74,7 +74,7 @@ initializes an empty model with a bit more than 100B parameters. Behind the scen
|
||||
|
||||
It's possible your model is so big that even a single copy won't fit in RAM. That doesn't mean it can't be loaded: if you have one or several GPUs, this is more memory available to store your model. In this case, it's better if your checkpoint is split into several smaller files that we call checkpoint shards.
|
||||
|
||||
🤗 Accelerate will handle sharded checkpoints as long as you follow the following format: your checkpoint should be in a folder, with several files containing the partial state dicts, and there should be an index in the JSON format that contains a dictionary mapping parameter names to the file containing their weights. You can easily shard your model with [`~Accelerator.save_model`]. For instance, we could have a folder containing:
|
||||
Accelerate will handle sharded checkpoints as long as you follow the following format: your checkpoint should be in a folder, with several files containing the partial state dicts, and there should be an index in the JSON format that contains a dictionary mapping parameter names to the file containing their weights. You can easily shard your model with [`~Accelerator.save_model`]. For instance, we could have a folder containing:
|
||||
|
||||
```bash
|
||||
first_state_dict.bin
|
||||
@ -97,9 +97,9 @@ and `first_state_dict.bin` containing the weights for `"linear1.weight"` and `"l
|
||||
|
||||
### Loading weights
|
||||
|
||||
The second tool 🤗 Accelerate introduces is a function [`load_checkpoint_and_dispatch`], that will allow you to load a checkpoint inside your empty model. This supports full checkpoints (a single file containing the whole state dict) as well as sharded checkpoints. It will also automatically dispatch those weights across the devices you have available (GPUs, CPU RAM), so if you are loading a sharded checkpoint, the maximum RAM usage will be the size of the biggest shard.
|
||||
The second tool Accelerate introduces is a function [`load_checkpoint_and_dispatch`], that will allow you to load a checkpoint inside your empty model. This supports full checkpoints (a single file containing the whole state dict) as well as sharded checkpoints. It will also automatically dispatch those weights across the devices you have available (GPUs, CPU RAM), so if you are loading a sharded checkpoint, the maximum RAM usage will be the size of the biggest shard.
|
||||
|
||||
If you want to use big model inference with 🤗 Transformers models, check out this [documentation](https://huggingface.co/docs/transformers/main/en/main_classes/model#large-model-loading).
|
||||
If you want to use big model inference with Transformers models, check out this [documentation](https://huggingface.co/docs/transformers/main/en/main_classes/model#large-model-loading).
|
||||
|
||||
Here is how we can use this to load the [GPT2-1.5B](https://huggingface.co/marcsun13/gpt2-xl-linear-sharded) model.
|
||||
|
||||
@ -145,7 +145,7 @@ model = load_checkpoint_and_dispatch(
|
||||
)
|
||||
```
|
||||
|
||||
By passing `device_map="auto"`, we tell 🤗 Accelerate to determine automatically where to put each layer of the model depending on the available resources:
|
||||
By passing `device_map="auto"`, we tell Accelerate to determine automatically where to put each layer of the model depending on the available resources:
|
||||
- first, we use the maximum space available on the GPU(s)
|
||||
- if we still need space, we store the remaining weights on the CPU
|
||||
- if there is not enough RAM, we store the remaining weights on the hard drive as memory-mapped tensors
|
||||
@ -159,7 +159,7 @@ include a residual connection of some kind.
|
||||
|
||||
#### The `device_map`
|
||||
|
||||
You can see the `device_map` that 🤗 Accelerate picked by accessing the `hf_device_map` attribute of your model:
|
||||
You can see the `device_map` that Accelerate picked by accessing the `hf_device_map` attribute of your model:
|
||||
|
||||
```py
|
||||
model.hf_device_map
|
||||
@ -210,7 +210,7 @@ outputs = model.generate(x1, max_new_tokens=10, do_sample=False)[0]
|
||||
tokenizer.decode(outputs.cpu().squeeze())
|
||||
```
|
||||
|
||||
Behind the scenes, 🤗 Accelerate added hooks to the model, so that:
|
||||
Behind the scenes, Accelerate added hooks to the model, so that:
|
||||
- at each layer, the inputs are put on the right device (so even if your model is spread across several GPUs, it works)
|
||||
- for the weights offloaded on the CPU, they are put on a GPU just before the forward pass and cleaned up just after
|
||||
- for the weights offloaded on the hard drive, they are loaded in RAM then put on a GPU just before the forward pass and cleaned up just after
|
||||
@ -225,7 +225,7 @@ This way, your model can run for inference even if it doesn't fit on one of the
|
||||
|
||||
### Designing a device map
|
||||
|
||||
You can let 🤗 Accelerate handle the device map computation by setting `device_map` to one of the supported options (`"auto"`, `"balanced"`, `"balanced_low_0"`, `"sequential"`) or create one yourself if you want more control over where each layer should go.
|
||||
You can let Accelerate handle the device map computation by setting `device_map` to one of the supported options (`"auto"`, `"balanced"`, `"balanced_low_0"`, `"sequential"`) or create one yourself if you want more control over where each layer should go.
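If you write one yourself, a device map is just a dictionary mapping module names to a device (a GPU index, `"cpu"`, or `"disk"`). A minimal sketch, assuming `model` was created under `init_empty_weights` as above and using illustrative layer names that you would replace with the ones reported by `model.named_modules()`:

```python
from accelerate import load_checkpoint_and_dispatch

# Illustrative module names; use the ones from your own model
device_map = {
    "transformer.wte": 0,       # embedding layer on GPU 0
    "transformer.h": 1,         # transformer blocks on GPU 1
    "transformer.ln_f": "cpu",  # final norm offloaded to CPU RAM
    "lm_head": "disk",          # LM head offloaded to disk
}

model = load_checkpoint_and_dispatch(model, checkpoint="path/to/checkpoint", device_map=device_map)
```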
|
||||
|
||||
<Tip>
|
||||
|
||||
|
||||
204 docs/source/concept_guides/context_parallelism.md Normal file
@ -0,0 +1,204 @@
|
||||
<!--Copyright 2025 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
|
||||
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
|
||||
rendered properly in your Markdown viewer.
|
||||
-->
|
||||
|
||||
# Context Parallel in 🤗`accelerate`
|
||||
|
||||
This guide will cover the basics of using context parallelism in 🤗`accelerate`; for the more curious readers, we will also cover some technicalities in the later sections.
|
||||
|
||||
## Why context parallelism?
|
||||
|
||||
With the advent of large language models, and more recently reasoning models, sequence lengths have been growing rapidly. This, combined with the quadratic memory complexity of attention, has led to a need for more efficient ways to train models with long sequences.
With a sequence length of 128k, the memory requirement of the attention matrix is `128k * 128k * 2 bytes * num_heads = ~32 GB * num_heads` in `bf16` precision, given a vanilla attention implementation. Granted, with `flash attention` or `SDPA`, which do not materialize these attention weights, this decreases drastically, but the growth in memory requirements is still considerable.
|
||||
|
||||
Context parallelism allows us to shard the inputs to the attention computation along the sequence dimension and compute the attention in parallel on multiple GPUs. With this, we can train models with long sequences, scaling potentially to 1M+ sequence length.
|
||||
|
||||
## How to use context parallelism?
|
||||
|
||||
```diff
|
||||
from accelerate.utils import ParallelismConfig, TorchContextParallelConfig
|
||||
|
||||
+ cp_config = TorchContextParallelConfig(
|
||||
+ cp_comm_strategy="alltoall", # no need to use cp_config at all, if you want to use the default "allgather"
|
||||
+ )
|
||||
|
||||
+ parallelism_config = ParallelismConfig(
|
||||
+ cp_size=8,
|
||||
+ cp_handler=cp_config, # or just cp_size=8, if you want to use the default "allgather"
|
||||
+ )
|
||||
|
||||
accelerator = Accelerator(
|
||||
...,
|
||||
parallelism_config=parallelism_config,
|
||||
)
|
||||
```
|
||||
|
||||
As with any other feature in 🤗`accelerate`, you can enable context parallelism also by passing the corresponding flags to `accelerate launch`.
|
||||
In this case, it's no different:
|
||||
|
||||
```bash
|
||||
accelerate launch --parallelism-config-cp-size 8 --parallelism-config-cp-comm-strategy [allgather|alltoall] ...
|
||||
```
|
||||
|
||||
> [!Tip]
|
||||
> You can also set the `cp_size` and `cp_comm_strategy` in the `accelerate config` command, which will save them in your `accelerate` configuration file, so you don't have to pass them every time you launch your script.
|
||||
|
||||
> [!Tip]
|
||||
> Context parallelism is compatible with other parallelism strategies, such as data parallelism, tensor parallelism and FSDP2.
|
||||
> You can simply combine them by setting your parallelism sizes to the desired values, e.g. `--parallelism-config-dp-size 8 --parallelism-config-tp-size 2 --parallelism-config-cp-size 8`. Or you can use the `ParallelismConfig` class to set them programmatically.
|
||||
|
||||
> [!Warning]
|
||||
> Context parallelism is tightly coupled with `FSDP2`, which you can learn more about in the [FSDP2 introduction](fsdp1_vs_fsdp2.md). This means context parallelism only works if you pass a `FullyShardedDataParallelPlugin` or `--use-fsdp` with the version set to 2 to your
> program. If `FSDP2` is not used, an error will be raised.
|
||||
|
||||
> [!Warning]
|
||||
> Context parallelism works only with [SDPA](https://docs.pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) and only with no mask or causal mask. We can't properly detect this for you, so it's your responsibility to ensure that you are using `SDPA` with no mask or causal mask. If you use any other attention implementation, it will raise an error.
|
||||
|
||||
After enabling context parallelism with the methods mentioned above, you can apply it to your training loop. We provide a thin wrapper around [`torch.distributed.tensor.experimental.context_parallel`](https://docs.pytorch.org/docs/stable/distributed.tensor.html#torch.distributed.tensor.experimental.context_parallel) that you can use in your training loop and that abstracts some of the complexity of using it (more on this later). To minimize the changes you have to make in your training loop, we provide a context manager that is a `noop` if context parallelism is not enabled and that applies context parallelism if it is. This way, you can use it in your training loop without changing any code based on your parallelism configuration.
|
||||
You can use it as follows:
|
||||
|
||||
```python
|
||||
for batch in dataloader:
|
||||
with accelerator.maybe_context_parallel(
|
||||
buffers=[batch["input_ids"], batch["attention_mask"]],
|
||||
buffer_seq_dims=[1, 1],
|
||||
no_restore_buffers={batch["input_ids"], batch["labels"]},
|
||||
):
|
||||
outputs = model(**batch)
|
||||
...
|
||||
```
|
||||
|
||||
> [!Warning]
|
||||
> This context manager has to be recreated with each training step, as shown in the example above. It's crucial to do so.
|
||||
|
||||
This can potentially scale your context size to 1M+ sequence length. Below, we showcase the speed and memory usage of context parallelism for context sizes up to 256k. We can see that when we double the context size and the number of GPUs, memory usage stays consistent, potentially enabling endless context length scaling.
|
||||
|
||||
<p align="center">
|
||||
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/accelerate/examples/fsdp2/cp_perf.png" alt="context parallelism memory usage" />
|
||||
<br>
|
||||
<em>Figure 1: Memory usage and speed of context parallelism for up to 256k context size.</em>
|
||||
</p>
|
||||
|
||||
> [!Tip]
|
||||
> These examples were created with a script you can find [in the examples folder](https://github.com/huggingface/accelerate/blob/main/examples/fsdp2/nd_parallel.py). To run the example on 8 H100 GPUs (128k sequence length), you can use the following command:
|
||||
> ```bash
|
||||
> accelerate launch --use-fsdp --fsdp-activation-checkpointing=TRUE examples/fsdp2/nd_parallel.py --cp-size=8 --sequence-length=128000
|
||||
> ```
|
||||
|
||||
|
||||
## Accelerate's interface
|
||||
|
||||
The context manager takes a few arguments, that are used to configure the context parallelism.
|
||||
|
||||
- `buffers`: This is a list of tensors that are to be sharded across the sequence dimension. These tensors are usually input ids, labels and attention mask.
|
||||
- `buffer_seq_dims`: This is a list of integers that specify the sequence dimension of each buffer, in the order of the `buffers` list. If you pass `buffers=[input_ids, shift_labels]` with both having shape `[batch_size, sequence_length]`, you would pass `buffer_seq_dims=[1, 1]`, as the sequence dimension is the second dimension of the tensors. This is required for correct computation of the model outputs.
|
||||
- `no_restore_buffers`: The implementation of context parallelism modifies the buffers in-place, converting them to `torch.distributed.tensor.Dtensor`s. After the context manager exits, a communication kernel would need to be launched to restore the buffers to their original state (usually all-gather). This takes some time, so it is recommended to pass the same tensors as in the `buffers` argument, to avoid unnecessary communication, unless you are sure that you need to use the buffers after the context manager exits.
|
||||
|
||||
|
||||
> [!Warning]
|
||||
> Context parallelism is not compatible with `labels` that are a copy of `input_ids`, which models from 🤗 transformers can shift to enable causal language modeling themselves.
|
||||
> Imagine this case:
|
||||
> labels = [l1, l2, l3, l4, ... li]
|
||||
> if we apply context parallelism, each rank would end up with a part of labels, such as this:
|
||||
> labels_rank_0 = [l1, l2], labels_rank_1 = [l3, l4], ...
|
||||
> after transformers modelling code shifts the labels, it would end up with:
|
||||
> labels_rank_0 = [l2, PAD], labels_rank_1 = [l3, PAD], ...
|
||||
> where `PAD` is a padding token. This would result in incorrect loss computation, as the labels are not aligned with the inputs anymore.
|
||||
> Because of this, you need to manually shift the labels before passing them into the model, as shown in the sketch below.
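A minimal sketch of what that can look like, assuming a causal LM whose forward call returns `.logits` and reusing the `accelerator`, `model` and `dataloader` from the earlier example; padding the shifted labels with `-100` keeps the sequence length (and therefore the shards) aligned:

```python
import torch.nn.functional as F

for batch in dataloader:
    input_ids = batch["input_ids"]                          # [batch, seq_len]
    shift_labels = input_ids[:, 1:].contiguous()            # label for position t is token t+1
    shift_labels = F.pad(shift_labels, (0, 1), value=-100)  # keep seq_len identical so shards line up

    with accelerator.maybe_context_parallel(
        buffers=[input_ids, shift_labels],
        buffer_seq_dims=[1, 1],
        no_restore_buffers={input_ids, shift_labels},
    ):
        logits = model(input_ids=input_ids).logits
        # compute the loss from the logits directly instead of passing `labels` to the model
        loss = F.cross_entropy(
            logits.view(-1, logits.size(-1)), shift_labels.view(-1), ignore_index=-100
        )
        accelerator.backward(loss)
        # optimizer step etc. as usual
```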
|
||||
|
||||
|
||||
## Configurable options
|
||||
Accelerate provides only a single option to configure context parallelism (besides `cp_size`):
|
||||
|
||||
- `cp_comm_strategy`: The rotation method to use for the shards. We strongly recommend keeping this as `"allgather"`, as it's very likely it will outperform `"alltoall"` in most cases.
|
||||
|
||||
Context parallel size is rather self-explanatory: it's the number of ranks across which the inputs are sharded.
Context parallel shard rotation defines how the shards of the inputs are rotated across ranks. We'll cover the two options in more detail in the next section.
|
||||
|
||||
You can see an end-to-end example in the [ND parallel example](https://github.com/huggingface/accelerate/blob/main/examples/fsdp2/nd_parallel.py) file, where you can train an 8B model with up to 128k context length on a single 8xH100 node. Using multi-node training, you can scale this to 1M+ sequence length on multiple GPUs. You can also seamlessly combine it with other parallelism strategies to fit your needs.
|
||||
|
||||
## Technical details
|
||||
|
||||
> [!Tip]
|
||||
> This section is fairly technical, so if you don't need to learn the internals of context parallelism, you can skip it and start building 🚀
|
||||
|
||||
We're going to be using the word `shard` extensively in the following sections, so let's define it first. If we say a tensor is `sharded` across its `D`-th dimension across `N` ranks, we mean that the tensor is split into `N` parts, where each part has shape `[..., D//N, ...]`.
|
||||
|
||||
|
||||
## So how does it work?
|
||||
|
||||
Context parallelism works by sharding the `Q`, `K` and `V` matrices across the sequence dimension. Each rank has its assigned shard of `Q`, let's call it `Q_i`. This matrix stays only on this rank during the whole computation. Similarly, each rank has its own shard of `K` and `V`, let's call them `K_i` and `V_i`. Then, each rank calculates attention with its own shards `Q_i`, `K_i` and `V_i`, let's call it `attn_i`. During this computation, a communication kernel is launched to gather the `K`s and `V`s from all other ranks. Which communication primitive is used depends on the `cp_comm_strategy` option.
|
||||
This way, each rank gets to calculate local attention, first with `Q_i`, `K_i` and `V_i`, then with `K_j` and `V_j` from all other ranks. As each rank holds `Q, K and V` matrices that are sharded across the sequence dimension, the resulting matrices are smaller and can fit on a single GPU.
|
||||
|
||||
We can formalize this in the following pseudocode:
|
||||
```python
|
||||
comm_kernel = {"allgather": allgather, "alltoall": alltoall}[cp_comm_strategy]
Qi, Ki, Vi = shard(Q, K, V, seq_dim)  # each rank keeps only its own shard
attn[i] = attn(Qi, Ki, Vi)            # local attention with this rank's shards
for j in other_ranks:                 # the remaining context_parallel_size - 1 ranks
    Kj, Vj = comm_kernel()            # all-gathered up front, or rotated ring-style each step
    attn[j] = attn(Qi, Kj, Vj)        # [batch, num_heads, seq_len // context_parallel_size, head_dim]

final_attn = combine(attn)            # combine the partial attention results for Qi
|
||||
```
|
||||
|
||||
## all-to-all vs all-gather
|
||||
|
||||
### all-gather
|
||||
So what's the difference between all-to-all and all-gather? With all-gather, the communication is very simple. Before computing the local attention `attn_i` (the communication usually takes longer than the computation), we launch an all-gather to collect the `K`s and `V`s from all other ranks. Once this communication is done, each rank has all the `K`s and `V`s from all other ranks and can compute the attention with them sequentially.
In an ideal scenario, the all-gather finishes at the exact moment the calculation of `attn_i` is done. In practice this never happens, so the best realistic overlap is when the full computation of `attn_i` is overlapped with part of the communication; we then wait for the all-gather to finish before starting the computation with `K_j` and `V_j`.
|
||||
|
||||
### all-to-all
|
||||
All-to-all, sometimes called `ring-rotation`, utilizes a ring-like communication pattern. After concluding the `attn_i` computation, an all-to-all is launched to send `K_i` and `V_i` to the neighbouring ranks. We then repeat this `context_parallel_size-1` times, so that each rank sees all the shards of `K` and `V` from all other ranks once. In an ideal scenario, we prefetch shards `K_i+1` and `V_i+1` from the neighbouring rank and this communication is exactly overlapped with the computation of our current `attn_i`. Realistically, this perfect overlap never happens, and given the nature of this approach, if we don't achieve perfect overlap, the penalty is much larger than with all-gather.
|
||||
|
||||
## How to choose the right rotation method?
|
||||
In theory, all-to-all should be the better choice. Though in practice, it rarely is. Therefore, we default to all-gather, as it's more likely to achieve better performance. Extensive [benchmarks](https://discuss.pytorch.org/t/distributed-w-torchtitan-breaking-barriers-training-long-context-llms-with-1m-sequence-length-in-pytorch-using-context-parallel/215082) from the `torchtitan` team also show that all-to-all rarely outperforms all-gather. Though, we still provide both options, as you might find one to be better for your use case.
|
||||
|
||||
You can directly see this issue in the profiler output in the image below:
|
||||
<p align="center">
|
||||
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/accelerate/examples/fsdp2/cp_all_to_all.png" alt="all-to-all profiler output" />
|
||||
<br>
|
||||
<em>Figure 1: In red you can see the idle time, while we wait for the all-to-all kernel to finish. Highlighted in the first blue bar, you can see that it takes ~250us to finish, which is repeated N-1 times for each attention call, where N is the context parallel size.</em>
|
||||
</p>
|
||||
|
||||
|
||||
## Why only FSDP2?
|
||||
|
||||
We only support context parallelism with `FSDP2`, as we create a joint mesh of `context_parallel_size` and `dp_shard_size` to
|
||||
utilize its full potential.
|
||||
How it works is: we shard the model across the joint mesh of size `cp_size*dp_shard_size`, which maximizes the memory savings.
|
||||
This is a "free lunch" of sorts, as `FSDP` communication is fully overlapped with the computation of attention, as shown in the images below.
|
||||
|
||||
<p align="center">
|
||||
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/accelerate/examples/fsdp2/cp_why_fsdp2.png" alt="why FSDP2+CP" />
|
||||
<br>
|
||||
<em>Figure 2: In blue rectangles (Stream 23), you can see that the pre-fetch of `FSDP` shard is fully overlapped with the computation of attention (Stream 7), while in red rectangles (Stream 24), you can see that the all-gather kernel results in a bubble of idle time, in which our compute stream (7) is idle.</em>
|
||||
</p>
|
||||
|
||||
In the figure above, you can also note the difference between all-to-all and all-gather. While in all-to-all (Figure 1), we launch a communication kernel N-1 times for each attention call, in all-gather (Figure 2), we launch a communication kernel only once. This results in a bigger bubble, but it only happens once per attention call, while in all-to-all, it happens N-1 times.
|
||||
|
||||
## Data dispatching in joint mesh
|
||||
|
||||
We make sure to dispatch the same batch of data to the whole `cp` subgroup, so that the results are correct. (Meaning each rank in `cp` subgroup gets the same batch of data.) However, we also dispatch different batches to each rank of `dp_shard` group.
|
||||
Imagine it like this:
|
||||
```
|
||||
# 8 GPUS, --dp_shard_size 4, --cp_size 2
|
||||
# mesh = [[0, 1], [2, 3], [4, 5], [6, 7]]
|
||||
# model is sharded across the whole mesh (each GPU holds 1/8 of the model)
|
||||
# GPUs 0,1 = batch 0
|
||||
# GPUs 2,3 = batch 1
|
||||
... and so on.
|
||||
```
|
||||
|
||||
@ -13,9 +13,9 @@ specific language governing permissions and limitations under the License.
|
||||
rendered properly in your Markdown viewer.
|
||||
-->
|
||||
|
||||
# Deferring Executions
|
||||
# Executing and deferring jobs
|
||||
|
||||
When you run your usual script, instructions are executed in order. Using 🤗 Accelerate to deploy your script on several
|
||||
When you run your usual script, instructions are executed in order. Using Accelerate to deploy your script on several
|
||||
GPUs at the same time introduces a complication: while each process executes all instructions in order, some may be
|
||||
faster than others.
|
||||
|
||||
@ -127,4 +127,4 @@ for (x,y) in data_loader:
|
||||
# Later in the training script when we need to check for the breakpoint
|
||||
if accelerator.check_trigger():
|
||||
break
|
||||
```
|
||||
```
|
||||
|
||||
105 docs/source/concept_guides/fsdp1_vs_fsdp2.md Normal file
@ -0,0 +1,105 @@
|
||||
<!--Copyright 2025 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
|
||||
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
|
||||
rendered properly in your Markdown viewer.
|
||||
-->
|
||||
|
||||
# FSDP1 vs FSDP2
|
||||
|
||||
This guide explains the key differences between `FSDP1` and `FSDP2` and helps you migrate your existing code to use `FSDP2` with minimal changes.
|
||||
|
||||
## How is FSDP2 better than FSDP1?
|
||||
|
||||
First, we want to understand how `FSDP1` and `FSDP2` work internally to understand the differences between them. This also helps us understand the limitations of `FSDP1` and how `FSDP2` solves them.
|
||||
|
||||
We'll be discussing a scenario where we have a single `Layer` that contains 3 `Linear` layers and is wrapped using `FSDP` to be sharded across 2 GPUs.
|
||||
|
||||
<div align="center">
|
||||
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/accelerate/layer.png" alt="Layer">
|
||||
</div>
|
||||
|
||||
### FSDP1
|
||||
First, we have to understand the original `FSDP1` and the limitations it brings. It represents each `FSDP` module as a single `FlatParameter` which is a single 1D tensor that contains all of the module parameters, which then get sharded across ranks. I.e. if you wrap the `Layer` with `FSDP1`, you'd achieve something as such:
|
||||
|
||||
<div align="center">
|
||||
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/accelerate/fsdp1.png" alt="FSDP1">
|
||||
</div>
|
||||
|
||||
You might notice a problem. The whole `Layer` gets flattened into a single `FlatParameter`, which then gets sharded across ranks. But if it's a single `FlatParameter` object, how do we store metadata? That is one of the limitations. Properly storing per-parameter metadata such as `dtype`, `requires_grad`, etc. is not possible without some ugly hacks.
|
||||
|
||||
### FSDP2
|
||||
This is why `FSDP2` was introduced. It doesn't use `FlatParameter`, instead it uses `DTensor` which is short for "Distributed Tensor". Each `DTensor` basically represents a vanilla `torch.Tensor` that has been sharded across ranks. It contains metadata about the original `torch.Tensor` and how it's sharded, what is the [placement type](https://pytorch.org/docs/stable/distributed.tensor.html#module-torch.distributed.tensor.placement_types) and so on. This is why it's called `per-parameter sharding`. The following figure shows the difference:
|
||||
|
||||
<div align="center">
|
||||
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/accelerate/fsdp2.png" alt="FSDP2">
|
||||
</div>
|
||||
|
||||
Each Parameter of the original `Layer` is sharded across the 0th dimension, and split between 2 GPUs. Now, each `Linear` layer is a separate `DTensor` and storing metadata per-parameter is possible and straightforward.
|
||||
|
||||
|
||||
> [!TIP]
|
||||
> In the image above, the tensors were sharded across the 1st dimension only to make the image fit on the screen; in reality, they are sharded across the 0th dimension, as stated above.
|
||||
|
||||
## What does FSDP2 offer?
|
||||
|
||||
`FSDP2` is a new and improved version of PyTorch's fully-sharded data parallel training API. Its main advantage is using `DTensor` to represent sharded parameters. Compared to `FSDP1`, it offers:
|
||||
- Simpler internal implementation, where each `Parameter` is a separate `DTensor`
|
||||
- Enables simple partial parameter freezing because of the above, which makes methods such as [`LoRA`](https://arxiv.org/abs/2106.09685) work out of the box
|
||||
- With `DTensor`, `FSDP2` supports mixing `fp8` and other parameter types in the same model out of the box
|
||||
- Faster and simpler checkpointing without extra communication across ranks using `SHARDED_STATE_DICT` and [`torch.distributed.checkpoint`](https://pytorch.org/docs/stable/distributed.checkpoint.html), this way, each rank only saves its own shard and corresponding metadata
|
||||
- For loading, it uses a `state_dict` of the sharded model to directly load the sharded parameters
|
||||
- Support for asynchronous checkpointing, where parameters are first copied to CPU memory; after this, the main thread continues training while another thread stores the parameters on disk
|
||||
- Memory efficiency and deterministic memory usage, `FSDP2` doesn't use `recordStream` anymore and uses stream-to-stream synchronization (for more technical details see [this forum post](https://dev-discuss.pytorch.org/t/fsdp-cudacachingallocator-an-outsider-newb-perspective/1486) and [this issue](https://github.com/pytorch/pytorch/issues/114299))
|
||||
- In the future, optimizations of the communication patterns via `torch.compile` are planned, further improving the performance and memory efficiency
|
||||
|
||||
|
||||
## API Differences
|
||||
|
||||
We have already discussed the internal differences; now let's discuss the differences that you, as a user, will need to know.
|
||||
|
||||
Here are the main changes in configuration options when using `FSDP2` through the `accelerate` CLI:
|
||||
|
||||
Previous (`FSDP1`) | New (`FSDP2`) | What Changed
|
||||
-- | -- | --
|
||||
`--fsdp_sharding_strategy` | `--fsdp_reshard_after_forward` | replaces `--fsdp_sharding_strategy`, changed to `true` (previously `FULL_SHARD`) or `false` (previously `SHARD_GRAD_OP`)
|
||||
`--fsdp_backward_prefetch` | \*\***REMOVED**\*\* | `FSDP2` uses previous `BACKWARD_PRE` option by default, as only this allows communication and computation overlap
|
||||
`--fsdp_forward_prefetch` | \*\***NOT YET IMPLEMENTED**\*\* | How to implement this is under active discussion, for now it is not supported in `FSDP2`
|
||||
`--fsdp_sync_module_states` | \*\***REMOVED**\*\* | with `FSDP2`, this parameter becomes redundant
|
||||
`--fsdp_cpu_ram_efficient_loading` | `--fsdp_cpu_ram_efficient_loading` | if `true`, `FSDP2` will similarly load the model only on rank 0, and then parameters get synced to other ranks, this is the same behavior as `FSDP1`, however, setting `--fsdp_sync_module_states` isn't required anymore
|
||||
`--fsdp_state_dict_type` | `--fsdp_state_dict_type` | `LOCAL_STATE_DICT` becomes obsolete and with `FSDP2` `SHARDED_STATE_DICT` is the default option, which results in no extra communication and each rank saving its own shard, other possible option is `FULL_STATE_DICT` which results in extra communication and spike in memory usage but saves the full model from rank 0.
|
||||
`--fsdp_use_orig_params` | \*\***REMOVED**\*\* | `FSDP2` uses a `DTensor` class under the hood, which means it *always* uses the original parameters by default
|
||||
\*\***NEW**\*\* | `--fsdp_version` | `1` is the default option, to not break existing code, set to `2` to use `FSDP2`
|
||||
|
||||
For all other options that remain unchanged, see the [`FSDP` documentation](../usage_guides/fsdp.md).
|
||||
|
||||
## How to Switch to FSDP2
|
||||
|
||||
### If using Python code:
|
||||
Simply set `fsdp_version=2` when creating your plugin and replace options according to the table above.
|
||||
|
||||
```python
|
||||
from accelerate import FullyShardedDataParallelPlugin, Accelerator
|
||||
|
||||
fsdp_plugin = FullyShardedDataParallelPlugin(
|
||||
fsdp_version=2
|
||||
# other options...
|
||||
)
|
||||
accelerator = Accelerator(fsdp_plugin=fsdp_plugin)
|
||||
```
|
||||
|
||||
### If using YAML config:
|
||||
Use our conversion tool:
|
||||
```bash
|
||||
accelerate to-fsdp2 --config_file config.yaml --output_file new_config.yaml
|
||||
```
|
||||
|
||||
This will automatically convert all FSDP1 settings to their FSDP2 equivalents. Use `--overwrite` to update the existing file instead of creating a new one.
|
||||
@ -13,15 +13,15 @@ specific language governing permissions and limitations under the License.
|
||||
rendered properly in your Markdown viewer.
|
||||
-->
|
||||
|
||||
# Moving between FSDP And DeepSpeed
|
||||
# FSDP vs DeepSpeed
|
||||
|
||||
🤗 Accelerate offers flexibilty of training frameworks, by integrating two extremely powerful tools for distributed training, namely [Pytorch FSDP](../usage_guides/fsdp) and [Microsoft DeepSpeed](../usage_guides/deepspeed). The aim of this tutorial is to draw parallels, as well as to outline potential differences, to empower the user to switch seamlessly between these two frameworks.
|
||||
Accelerate offers flexibility of training frameworks by integrating two extremely powerful tools for distributed training, namely [Pytorch FSDP](../usage_guides/fsdp) and [Microsoft DeepSpeed](../usage_guides/deepspeed). The aim of this tutorial is to draw parallels, as well as to outline potential differences, to empower the user to switch seamlessly between these two frameworks.
|
||||
|
||||
<Tip>
|
||||
|
||||
To switch between the frameworks, we recommend launching code 🤗 `accelerate launch` passing in the correct config file with `--config_file`, or passing in the respective arguments directly for [FSDP and DeepSpeed](../package_reference/cli#accelerate-launch) .
|
||||
To switch between the frameworks, we recommend launching code `accelerate launch` passing in the correct config file with `--config_file`, or passing in the respective arguments directly for [FSDP and DeepSpeed](../package_reference/cli#accelerate-launch) .
|
||||
|
||||
Example 🤗 Accelerate configurations can be found here for [DeepSpeed](../usage_guides/deepspeed#accelerate-deepspeed-plugin) and [FSDP](../usage_guides/fsdp#how-it-works-out-of-the-box), or in the [example zoo under "Launch Configurations"](../usage_guides/explore)
|
||||
Example Accelerate configurations can be found here for [DeepSpeed](../usage_guides/deepspeed#accelerate-deepspeed-plugin) and [FSDP](../usage_guides/fsdp#how-it-works-out-of-the-box), or in the [example zoo under "Launch Configurations"](../usage_guides/explore)
|
||||
|
||||
</Tip>
|
||||
|
||||
@ -47,7 +47,7 @@ parameters summoning | FSDP<br>DeepSpeed | `--fsdp_use_orig_params`<br>None | `t
|
||||
parameters syncing | FSDP<br>DeepSpeed | `--fsdp_sync_module_states`<br>None | `true` |
|
||||
training | FSDP<br>DeepSpeed | None<br>`--gradient_accumulation_steps`<br>`--gradient_clipping` | <br>`auto`<br>`auto` | Transparent to user
|
||||
|
||||
For detailed descriptions of the above, refer to [🤗 `Accelerate` launch documentation](../package_reference/cli#accelerate-launch).
|
||||
For detailed descriptions of the above, refer to [`Accelerate` launch documentation](../package_reference/cli#accelerate-launch).
|
||||
|
||||
<Tip>
|
||||
|
||||
@ -94,7 +94,7 @@ FSDP only allows *all-or-nothing* offload (i.e., either offload parameters, grad
|
||||
### Prefetching
|
||||
|
||||
FSDP allows two prefetching configurations `--fsdp_forward_prefetch` and `--fsdp_backward_prefetch` to improve overlap of comms / computation at a cost of extra memory, see [FSDP documentation](https://pytorch.org/docs/stable/fsdp.html).
|
||||
For DeepSpeed, the prefetching will be turned on when needed, and it turns on depending on certain hyper-params like `stage3_param_persistence_threshold`, `stage3_max_reuse_distance`, etc, [that can be configured for Zero3](https://www.deepspeed.ai/docs/config-json/#parameter-offloading); 🤗 `accelerate` may set these hyper-params automatically if you don't set those explicitly in the deepspeed config file.
|
||||
For DeepSpeed, the prefetching will be turned on when needed, and it turns on depending on certain hyper-params like `stage3_param_persistence_threshold`, `stage3_max_reuse_distance`, etc, [that can be configured for Zero3](https://www.deepspeed.ai/docs/config-json/#parameter-offloading); `accelerate` may set these hyper-params automatically if you don't set those explicitly in the deepspeed config file.
|
||||
|
||||
<Tip>
|
||||
|
||||
@ -104,12 +104,12 @@ For DeepSpeed, the prefetching will be turned on when needed, and it turns on de
|
||||
|
||||
### Model Loading
|
||||
|
||||
While FSDP require an explicit `--fsdp_cpu_ram_efficient_loading true` to activate efficient model loading, 🤗 `transformers` will activate the similar feature whenever DeepSpeed Zero3 is used.
|
||||
While FSDP require an explicit `--fsdp_cpu_ram_efficient_loading true` to activate efficient model loading, `transformers` will activate the similar feature whenever DeepSpeed Zero3 is used.
|
||||
|
||||
<Tip>
|
||||
|
||||
For FSDP, whenever setting `--fsdp_cpu_ram_efficient_loading true`, 🤗 `accelerate` will automatically set `sync_module_states` to true.
|
||||
For RAM efficient loading the weights will be loaded only in a singe rank, and thus requires `sync_module_states` to broadcast weights to other ranks.
|
||||
For FSDP, whenever setting `--fsdp_cpu_ram_efficient_loading true`, `accelerate` will automatically set `sync_module_states` to true.
|
||||
For RAM efficient loading the weights will be loaded only in a single rank, and thus requires `sync_module_states` to broadcast weights to other ranks.
|
||||
|
||||
</Tip>
|
||||
|
||||
@ -125,7 +125,7 @@ FSDP requires an explicit `--fsdp_auto_wrap_policy` for the algorithm to decide
|
||||
|
||||
### Parameters Summoning
|
||||
|
||||
FSDP requires an explicit `--fsdp_use_orig_params` flag if using `torch.compile`, see [the pytorch documenation](https://pytorch.org/docs/stable/fsdp.html#module-torch.distributed.fsdp). For DeepSpeed this is transparent to the user.
|
||||
FSDP requires an explicit `--fsdp_use_orig_params` flag if using `torch.compile`, see [the pytorch documentation](https://pytorch.org/docs/stable/fsdp.html#module-torch.distributed.fsdp). For DeepSpeed this is transparent to the user.
|
||||
|
||||
<Tip>
|
||||
|
||||
@ -147,7 +147,7 @@ Deepspeed requires explicit `--gradient_accumulation_steps` and `--gradient_clip
|
||||
|
||||
## On Differences in Data Precision Handling
|
||||
|
||||
To discuss the how data precision is handled in both FSDP and Deepspeed, it is instructive to first give an overview of how model parameters are handled in these frameworks. Before the model / optimizer parameters are distributed across GPUs, parameter preparation is involved to first "flatten" them to one-dimensional [`torch.Tensor`](https://pytorch.org/docs/stable/tensors.html#torch-tensor). The implementation of FSDP / DeepSpeed varies in the respect of the `dtype` in which these "flattened" parameters are stored, and there are ramifications with regards to how [`torch.Optimizer`](https://pytorch.org/docs/stable/optim.html#module-torch.optim) allocate their `dtype`s. The table below outlines the processes for both frameworks; the "Local" column indicates the process occurring at a per-gpu level, therefore any memory overheads by upcasting should be understood to be amortized by the number of gpus used.
|
||||
To discuss how data precision is handled in both FSDP and Deepspeed, it is instructive to first give an overview of how model parameters are handled in these frameworks. Before the model / optimizer parameters are distributed across GPUs, parameter preparation is involved to first "flatten" them to one-dimensional [`torch.Tensor`](https://pytorch.org/docs/stable/tensors.html#torch-tensor). The implementation of FSDP / DeepSpeed varies in the respect of the `dtype` in which these "flattened" parameters are stored, and there are ramifications with regards to how [`torch.Optimizer`](https://pytorch.org/docs/stable/optim.html#module-torch.optim) allocate their `dtype`s. The table below outlines the processes for both frameworks; the "Local" column indicates the process occurring at a per-gpu level, therefore any memory overheads by upcasting should be understood to be amortized by the number of gpus used.
|
||||
|
||||
<Tip>
|
||||
|
||||
@ -166,7 +166,7 @@ Optimizer (Actual Step) | ✅ | FSDP<br>DeepSpeed | occurs in `torch_dtype` <br
|
||||
|
||||
<Tip warning={true}>
|
||||
|
||||
Therefore when using DeepSpeed a small number of GPUs, be aware of potentially significant memory overheads due to the upcasting during preperation.
|
||||
Therefore when using DeepSpeed a small number of GPUs, be aware of potentially significant memory overheads due to the upcasting during preparation.
|
||||
|
||||
</Tip>
|
||||
|
||||
|
||||
@ -13,7 +13,7 @@ specific language governing permissions and limitations under the License.
|
||||
rendered properly in your Markdown viewer.
|
||||
-->
|
||||
|
||||
# Gradient Synchronization
|
||||
# Gradient synchronization
|
||||
|
||||
PyTorch's distributed module operates by communicating back and forth between all of the GPUs in your system.
|
||||
This communication takes time, and ensuring all processes know the states of each other happens at particular triggerpoints
|
||||
@ -28,7 +28,7 @@ from torch.nn.parallel import DistributedDataParallel
|
||||
model = nn.Linear(10, 10)
|
||||
ddp_model = DistributedDataParallel(model)
|
||||
```
|
||||
In 🤗 Accelerate this conversion happens automatically when calling [`~Accelerator.prepare`] and passing in your model.
|
||||
In Accelerate this conversion happens automatically when calling [`~Accelerator.prepare`] and passing in your model.
|
||||
|
||||
```diff
|
||||
+ from accelerate import Accelerator
|
||||
@ -90,7 +90,7 @@ for index, batch in enumerate(dataloader):
|
||||
optimizer.step()
|
||||
```
|
||||
|
||||
In 🤗 Accelerate to make this an API that can be called no matter the training device (though it may not do anything if you are not in a distributed system!),
|
||||
In Accelerate to make this an API that can be called no matter the training device (though it may not do anything if you are not in a distributed system!),
|
||||
`ddp_model.no_sync` gets replaced with [`~Accelerator.no_sync`] and operates the same way:
|
||||
|
||||
```diff
|
||||
|
||||
@ -13,9 +13,9 @@ specific language governing permissions and limitations under the License.
|
||||
rendered properly in your Markdown viewer.
|
||||
-->
|
||||
|
||||
# 🤗 Accelerate's internal mechanisms
|
||||
# Accelerate's internal mechanisms
|
||||
|
||||
Internally, 🤗 Accelerate works by first analyzing the environment in which the script is launched to determine which
|
||||
Internally, Accelerate works by first analyzing the environment in which the script is launched to determine which
|
||||
kind of distributed setup is used, how many different processes there are and which one the current script is in. All
|
||||
that information is stored in the [`~AcceleratorState`].
|
||||
|
||||
@ -69,4 +69,6 @@ setting the same seed in the main random number generator in all processes.
|
||||
|
||||
</Tip>
|
||||
|
||||
For more details about the internals, see the [Internals page](package_reference/torch_wrappers).
|
||||
If you have [`torchdata>=0.8.0`](https://github.com/pytorch/data/tree/main) installed, and you have passed `use_stateful_dataloader=True` into your [`~utils.DataLoaderConfiguration`], these classes will directly inherit from `StatefulDataLoader` instead, and maintain a `state_dict`.
|
||||
|
||||
For more details about the internals, see the [Internals page](../package_reference/torch_wrappers).
|
||||
|
||||
@ -13,7 +13,7 @@ specific language governing permissions and limitations under the License.
|
||||
rendered properly in your Markdown viewer.
|
||||
-->
|
||||
|
||||
# Low Precision Training Methods
|
||||
# Low precision training methods
|
||||
|
||||
The release of new kinds of hardware led to the emergence of new training paradigms that better utilize them. Currently, this is in the form of training
|
||||
in 8-bit precision using packages such as [TransformersEngine](https://github.com/NVIDIA/TransformerEngine) (TE) or [MS-AMP](https://github.com/Azure/MS-AMP/tree/main).
|
||||
@ -36,7 +36,7 @@ MS-AMP O3 | FP8 | FP8 | FP8 | FP16 | FP8 | FP8+FP16
|
||||
|
||||
`TransformersEngine` is the first solution to trying to train in 8-bit floating point. It works by using drop-in replacement layers for certain ones in a model that utilizes their FP8-engine to reduce the number of bits (such as 32 to 8) without degrading the final accuracy of the model.
|
||||
|
||||
Specifically, 🤗 Accelerate will find and replace the following layers with `TransformersEngine` versions:
|
||||
Specifically, Accelerate will find and replace the following layers with `TransformersEngine` versions:
|
||||
|
||||
* `nn.LayerNorm` for `te.LayerNorm`
|
||||
* `nn.Linear` for `te.Linear`
|
||||
@ -50,7 +50,7 @@ The `TransformerEngine` can receive many different arguments that customize how
|
||||
|
||||
* `margin`: The margin to use for the gradient scaling.
|
||||
* `interval`: The interval to use for how often the scaling factor is recomputed.
|
||||
* `fp8_format``: The format to use for the FP8 recipe. Must be one of `E4M3` or `HYBRID`.
|
||||
* `fp8_format``: The format to use for the FP8 recipe. Must be one of `HYBRID` or `E4M3`. (Generally `HYBRID` for training, `E4M3` for evaluation)
|
||||
* `amax_history_len`: The length of the history to use for the scaling factor computation
|
||||
* `amax_compute_algo`: The algorithm to use for the scaling factor computation. Must be one of `max` or `most_recent`.
|
||||
* `override_linear_precision`: Whether or not to execute `fprop`, `dgrad`, and `wgrad` GEMMS in higher precision.
|
||||
@ -67,7 +67,7 @@ MS-AMP takes a different approach to `TransformersEngine` by providing three dif
|
||||
|
||||
* The second optimization level (`O2`) improves upon this by also reducing the precision of the optimizer states. One is in FP8 while the other is in FP16. Generally it's been shown that this will only provide a net-gain of no degraded end accuracy, increased training speed, and reduced memory as now every state is either in FP16 or FP8.
|
||||
|
||||
* Finally, MS-AMP has a third optimization level (`O3`) which helps during DDP scenarios such as DeepSpeed. The weights of the model in memory are fully cast to FP8, and the master weights are now stored in FP16. This fully reduces memory by the highest factor as now not only is almost everything in FP8, only two states are left in FP16. Currently, only DeepSpeed versions up through 0.9.2 are supported, so this capability is not included in the 🤗 Accelerate integration
|
||||
* Finally, MS-AMP has a third optimization level (`O3`) which helps during DDP scenarios such as DeepSpeed. The weights of the model in memory are fully cast to FP8, and the master weights are now stored in FP16. This fully reduces memory by the highest factor as now not only is almost everything in FP8, only two states are left in FP16. Currently, only DeepSpeed versions up through 0.9.2 are supported, so this capability is not included in the Accelerate integration
|
||||
|
||||
## Combining the two
|
||||
|
||||
|
||||
@ -13,7 +13,7 @@ specific language governing permissions and limitations under the License.
|
||||
rendered properly in your Markdown viewer.
|
||||
-->
|
||||
|
||||
# Comparing performance between different device setups
|
||||
# Comparing performance across distributed setups
|
||||
|
||||
Evaluating and comparing the performance from different setups can be quite tricky if you don't know what to look for.
|
||||
For example, you cannot run the same script with the same batch size across TPU, multi-GPU, and single-GPU with Accelerate
|
||||
@ -43,13 +43,13 @@ Why is this important? Under the hood this will set **5** different seed setting
|
||||
random.seed(seed)
|
||||
np.random.seed(seed)
|
||||
torch.manual_seed(seed)
|
||||
torch.cuda.manual_seed_all(seed)
|
||||
torch.cuda.manual_seed_all(seed) # or torch.xpu.manual_seed_all, etc
|
||||
# ^^ safe to call this function even if cuda is not available
|
||||
if is_torch_xla_available():
|
||||
xm.set_rng_state(seed)
|
||||
```
|
||||
|
||||
The random state, numpy's state, torch, torch's cuda state, and if TPUs are available torch_xla's cuda state.
|
||||
These are the Python `random` state, NumPy's state, torch's state, torch's device state, and, if TPUs are available, torch_xla's state.
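
In practice this just means calling Accelerate's `set_seed` once near the start of your script, for example:

```python
from accelerate.utils import set_seed

# Seeds python's `random`, numpy, torch, torch's device, and (if available) torch_xla in one call
set_seed(42)
```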
|
||||
|
||||
## Observed Batch Sizes
|
||||
|
||||
|
||||
@ -13,9 +13,9 @@ specific language governing permissions and limitations under the License.
|
||||
rendered properly in your Markdown viewer.
|
||||
-->
|
||||
|
||||
# Training on TPUs with 🤗 Accelerate
|
||||
# Training on TPUs
|
||||
|
||||
Training on TPUs can be slightly different from training on multi-gpu, even with 🤗 Accelerate. This guide aims to show you
|
||||
Training on TPUs can be slightly different from training on multi-gpu, even with Accelerate. This guide aims to show you
|
||||
where you should be careful and why, as well as the best practices in general.
|
||||
|
||||
## Training in a Notebook
|
||||
@ -81,7 +81,7 @@ notebook_launcher(training_function)
|
||||
|
||||
<Tip>
|
||||
|
||||
The `notebook_launcher` will default to 8 processes if 🤗 Accelerate has been configured for a TPU
|
||||
The `notebook_launcher` will default to 8 processes if Accelerate has been configured for a TPU
|
||||
|
||||
</Tip>
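
If you want a different process count, it can be requested explicitly. A minimal sketch, reusing the `training_function` defined earlier in this guide:

```python
from accelerate import notebook_launcher

# Explicitly request 8 processes (the TPU default) instead of relying on the detected configuration
notebook_launcher(training_function, args=(), num_processes=8)
```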
|
||||
|
||||
@ -128,10 +128,10 @@ And finally calling the training function with:
|
||||
|
||||
## Mixed Precision and Global Variables
|
||||
|
||||
As mentioned in the [mixed precision tutorial](../usage_guides/mixed_precision), 🤗 Accelerate supports fp16 and bf16, both of which can be used on TPUs.
|
||||
As mentioned in the [mixed precision tutorial](../usage_guides/mixed_precision), Accelerate supports fp16 and bf16, both of which can be used on TPUs.
|
||||
That being said, ideally `bf16` should be utilized as it is extremely efficient to use.
|
||||
|
||||
There are two "layers" when using `bf16` and 🤗 Accelerate on TPUs, at the base level and at the operation level.
|
||||
There are two "layers" when using `bf16` and Accelerate on TPUs, at the base level and at the operation level.
|
||||
|
||||
At the base level, this is enabled when passing `mixed_precision="bf16"` to `Accelerator`, such as:
|
||||
```python
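# Sketch (assumption): enable bf16 at the base level
accelerator = Accelerator(mixed_precision="bf16")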
|
||||
|
||||
@ -15,7 +15,7 @@ rendered properly in your Markdown viewer.
|
||||
|
||||
# Accelerate
|
||||
|
||||
🤗 Accelerate is a library that enables the same PyTorch code to be run across any distributed configuration by adding just four lines of code! In short, training and inference at scale made simple, efficient and adaptable.
|
||||
Accelerate is a library that enables the same PyTorch code to be run across any distributed configuration by adding just four lines of code! In short, training and inference at scale made simple, efficient and adaptable.
|
||||
|
||||
```diff
|
||||
+ from accelerate import Accelerator
|
||||
@ -37,7 +37,7 @@ rendered properly in your Markdown viewer.
|
||||
scheduler.step()
|
||||
```
|
||||
|
||||
Built on `torch_xla` and `torch.distributed`, 🤗 Accelerate takes care of the heavy lifting, so you don't have to write any custom code to adapt to these platforms.
|
||||
Built on `torch_xla` and `torch.distributed`, Accelerate takes care of the heavy lifting, so you don't have to write any custom code to adapt to these platforms.
|
||||
Convert existing codebases to utilize [DeepSpeed](usage_guides/deepspeed), perform [fully sharded data parallelism](usage_guides/fsdp), and have automatic support for mixed-precision training!
|
||||
|
||||
<Tip>
|
||||
@ -56,11 +56,11 @@ accelerate launch {my_script.py}
|
||||
<div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5">
|
||||
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./basic_tutorials/overview"
|
||||
><div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Tutorials</div>
|
||||
<p class="text-gray-700">Learn the basics and become familiar with using 🤗 Accelerate. Start here if you are using 🤗 Accelerate for the first time!</p>
|
||||
<p class="text-gray-700">Learn the basics and become familiar with using Accelerate. Start here if you are using Accelerate for the first time!</p>
|
||||
</a>
|
||||
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./usage_guides/explore"
|
||||
><div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">How-to guides</div>
|
||||
<p class="text-gray-700">Practical guides to help you achieve a specific goal. Take a look at these guides to learn how to use 🤗 Accelerate to solve real-world problems.</p>
|
||||
<p class="text-gray-700">Practical guides to help you achieve a specific goal. Take a look at these guides to learn how to use Accelerate to solve real-world problems.</p>
|
||||
</a>
|
||||
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./concept_guides/gradient_synchronization"
|
||||
><div class="w-full text-center bg-gradient-to-br from-pink-400 to-pink-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Conceptual guides</div>
|
||||
@ -68,7 +68,7 @@ accelerate launch {my_script.py}
|
||||
</a>
|
||||
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./package_reference/accelerator"
|
||||
><div class="w-full text-center bg-gradient-to-br from-purple-400 to-purple-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Reference</div>
|
||||
<p class="text-gray-700">Technical descriptions of how 🤗 Accelerate classes and methods work.</p>
|
||||
<p class="text-gray-700">Technical descriptions of how Accelerate classes and methods work.</p>
|
||||
</a>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
@ -15,33 +15,96 @@ rendered properly in your Markdown viewer.
|
||||
|
||||
# Working with large models
|
||||
|
||||
## Dispatching and Offloading Models
|
||||
## Dispatch and offload
|
||||
|
||||
### init_empty_weights
|
||||
|
||||
[[autodoc]] big_modeling.init_empty_weights
|
||||
|
||||
### cpu_offload
|
||||
|
||||
[[autodoc]] big_modeling.cpu_offload
|
||||
|
||||
### cpu_offload_with_hook
|
||||
|
||||
[[autodoc]] big_modeling.cpu_offload_with_hook
|
||||
|
||||
### disk_offload
|
||||
|
||||
[[autodoc]] big_modeling.disk_offload
|
||||
|
||||
### dispatch_model
|
||||
|
||||
[[autodoc]] big_modeling.dispatch_model
|
||||
|
||||
### load_checkpoint_and_dispatch
|
||||
|
||||
[[autodoc]] big_modeling.load_checkpoint_and_dispatch
|
||||
|
||||
### load_checkpoint_in_model
|
||||
|
||||
[[autodoc]] big_modeling.load_checkpoint_in_model
|
||||
|
||||
### infer_auto_device_map
|
||||
|
||||
[[autodoc]] utils.infer_auto_device_map
|
||||
|
||||
## Model Hooks
|
||||
## Hooks
|
||||
|
||||
### Hook Classes
|
||||
### ModelHook
|
||||
|
||||
[[autodoc]] hooks.ModelHook
|
||||
|
||||
### AlignDevicesHook
|
||||
|
||||
[[autodoc]] hooks.AlignDevicesHook
|
||||
|
||||
### SequentialHook
|
||||
|
||||
[[autodoc]] hooks.SequentialHook
|
||||
|
||||
### Adding Hooks
|
||||
### LayerwiseCastingHook
|
||||
|
||||
[[autodoc]] hooks.LayerwiseCastingHook
|
||||
|
||||
## Adding Hooks
|
||||
|
||||
### add_hook_to_module
|
||||
|
||||
[[autodoc]] hooks.add_hook_to_module
|
||||
|
||||
### attach_execution_device_hook
|
||||
|
||||
[[autodoc]] hooks.attach_execution_device_hook
|
||||
|
||||
### attach_align_device_hook
|
||||
|
||||
[[autodoc]] hooks.attach_align_device_hook
|
||||
|
||||
### attach_align_device_hook_on_blocks
|
||||
|
||||
[[autodoc]] hooks.attach_align_device_hook_on_blocks
|
||||
|
||||
### Removing Hooks
|
||||
### attach_layerwise_casting_hooks
|
||||
|
||||
[[autodoc]] big_modeling.attach_layerwise_casting_hooks
|
||||
|
||||
## Removing Hooks
|
||||
|
||||
### remove_hook_from_module
|
||||
|
||||
[[autodoc]] hooks.remove_hook_from_module
|
||||
[[autodoc]] hooks.remove_hook_from_submodules
|
||||
|
||||
### remove_hook_from_submodules
|
||||
|
||||
[[autodoc]] hooks.remove_hook_from_submodules
|
||||
|
||||
## Utilities
|
||||
|
||||
### has_offloaded_params
|
||||
|
||||
[[autodoc]] utils.has_offloaded_params
|
||||
|
||||
### align_module_device
|
||||
|
||||
[[autodoc]] utils.align_module_device
|
||||
|
||||
@ -139,7 +139,7 @@ values. They can also be passed in manually.
|
||||
* `--cpu` (`bool`) -- Whether or not to force the training on the CPU.
|
||||
* `--multi_gpu` (`bool`) -- Whether or not this should launch a distributed GPU training.
|
||||
* `--tpu` (`bool`) -- Whether or not this should launch a TPU training.
|
||||
* `--ipex` (`bool`) -- Whether or not this should launch an Intel Pytorch Extension (IPEX) training.
|
||||
* `--ipex` (`bool`) -- Whether or not this should launch an Intel Pytorch Extension (IPEX) training. **This argument is deprecated, will be removed in Accelerate v1.10**
|
||||
|
||||
**Resource Selection Arguments**:
|
||||
|
||||
@ -158,13 +158,13 @@ The following arguments are useful for selecting which training paradigm to use.
|
||||
* `--use_deepspeed` (`bool`) -- Whether or not to use DeepSpeed for training.
|
||||
* `--use_fsdp` (`bool`) -- Whether or not to use FullyShardedDataParallel for training.
|
||||
* `--use_megatron_lm` (`bool`) -- Whether or not to use Megatron-LM for training.
|
||||
* `--use_xpu` (`bool`) -- Whether to use IPEX plugin to speed up training on XPU specifically.
|
||||
* `--use_xpu` (`bool`) -- Whether to use IPEX plugin to speed up training on XPU specifically. **This argument is deprecated and ignored, will be removed in Accelerate v1.10**
|
||||
|
||||
**Distributed GPU Arguments**:
|
||||
|
||||
The following arguments are only useful when `multi_gpu` is passed or multi-gpu training is configured through `accelerate config`:
|
||||
|
||||
* `--gpu_ids` (`str`) -- What GPUs (by id) should be used for training on this machine as a comma-seperated list
|
||||
* `--gpu_ids` (`str`) -- What GPUs (by id) should be used for training on this machine as a comma-separated list
|
||||
* `--same_network` (`bool`) -- Whether all machines used for multinode training exist on the same local network.
|
||||
* `--machine_rank` (`int`) -- The rank of the machine on which this script is launched.
|
||||
* `--main_process_ip` (`str`) -- The IP address of the machine of rank 0.
|
||||
@ -202,8 +202,8 @@ The following arguments are only useful when `use_deepspeed` is passed or `deeps
|
||||
* `--zero3_init_flag` (`str`) -- Decides Whether (true|false) to enable `deepspeed.zero.Init` for constructing massive models. Only applicable with DeepSpeed ZeRO Stage-3.
|
||||
* `--zero3_save_16bit_model` (`str`) -- Decides Whether (true|false) to save 16-bit model weights when using ZeRO Stage-3. Only applicable with DeepSpeed ZeRO Stage-3.
|
||||
* `--deepspeed_hostfile` (`str`) -- DeepSpeed hostfile for configuring multi-node compute resources.
|
||||
* `--deepspeed_exclusion_filter` (`str`) -- DeepSpeed exclusion filter string when using mutli-node setup.
|
||||
* `--deepspeed_inclusion_filter` (`str`) -- DeepSpeed inclusion filter string when using mutli-node setup.
|
||||
* `--deepspeed_exclusion_filter` (`str`) -- DeepSpeed exclusion filter string when using multi-node setup.
|
||||
* `--deepspeed_inclusion_filter` (`str`) -- DeepSpeed inclusion filter string when using multi-node setup.
|
||||
* `--deepspeed_multinode_launcher` (`str`) -- DeepSpeed multi-node launcher to use.
|
||||
* `--deepspeed_moe_layer_cls_names` (`str`) -- comma-separated list of transformer MoE layer class names (case-sensitive) to wrap, e.g, `MixtralSparseMoeBlock` `Qwen2MoeSparseMoeBlock`, `JetMoEAttention,JetMoEBlock`
|
||||
|
||||
|
||||
@ -13,16 +13,32 @@ specific language governing permissions and limitations under the License.
|
||||
rendered properly in your Markdown viewer.
|
||||
-->
|
||||
|
||||
# Utilities for DeepSpeed
|
||||
# DeepSpeed utilities
|
||||
|
||||
## DeepSpeedPlugin
|
||||
|
||||
## get_active_deepspeed_plugin
|
||||
|
||||
[[autodoc]] utils.get_active_deepspeed_plugin
|
||||
|
||||
[[autodoc]] utils.DeepSpeedPlugin
|
||||
|
||||
[[autodoc]] utils.deepspeed.DummyOptim
|
||||
|
||||
[[autodoc]] utils.deepspeed.DummyScheduler
|
||||
|
||||
## DeepSpeedEngineWrapper
|
||||
|
||||
[[autodoc]] utils.deepspeed.DeepSpeedEngineWrapper
|
||||
|
||||
## DeepSpeedOptimizerWrapper
|
||||
|
||||
[[autodoc]] utils.deepspeed.DeepSpeedOptimizerWrapper
|
||||
|
||||
## DeepSpeedSchedulerWrapper
|
||||
|
||||
[[autodoc]] utils.deepspeed.DeepSpeedSchedulerWrapper
|
||||
|
||||
## DummyOptim
|
||||
|
||||
[[autodoc]] utils.deepspeed.DummyOptim
|
||||
|
||||
## DummyScheduler
|
||||
@ -13,16 +13,26 @@ specific language governing permissions and limitations under the License.
|
||||
rendered properly in your Markdown viewer.
|
||||
-->
|
||||
|
||||
# FP8 Functionality
|
||||
# FP8
|
||||
|
||||
Below are functions and classes related to the underlying FP8 implementation.
|
||||
|
||||
## FP8RecipeKwargs
|
||||
|
||||
[[autodoc]] utils.FP8RecipeKwargs
|
||||
|
||||
## convert_model
|
||||
|
||||
[[autodoc]] utils.convert_model
|
||||
|
||||
## has_transformer_engine_layers
|
||||
|
||||
[[autodoc]] utils.has_transformer_engine_layers
|
||||
|
||||
## contextual_fp8_autocast
|
||||
|
||||
[[autodoc]] utils.contextual_fp8_autocast
|
||||
|
||||
## apply_fp8_autowrap
|
||||
|
||||
[[autodoc]] utils.apply_fp8_autowrap
|
||||
|
||||
@ -13,12 +13,34 @@ specific language governing permissions and limitations under the License.
|
||||
rendered properly in your Markdown viewer.
|
||||
-->
|
||||
|
||||
# Utilities for Fully Sharded Data Parallelism
|
||||
# Fully Sharded Data Parallel utilities
|
||||
|
||||
## enable_fsdp_ram_efficient_loading
|
||||
|
||||
[[autodoc]] utils.enable_fsdp_ram_efficient_loading
|
||||
|
||||
## disable_fsdp_ram_efficient_loading
|
||||
|
||||
[[autodoc]] utils.disable_fsdp_ram_efficient_loading
|
||||
|
||||
## merge_fsdp_weights
|
||||
|
||||
[[autodoc]] utils.merge_fsdp_weights
|
||||
|
||||
## FullyShardedDataParallelPlugin
|
||||
|
||||
[[autodoc]] utils.FullyShardedDataParallelPlugin
|
||||
|
||||
## fsdp2_load_full_state_dict
|
||||
|
||||
[[autodoc]] utils.fsdp2_load_full_state_dict
|
||||
|
||||
## fsdp2_switch_optimizer_parameters
|
||||
|
||||
[[autodoc]] utils.fsdp2_switch_optimizer_parameters
|
||||
|
||||
## fsdp2_prepare_model
|
||||
|
||||
[[autodoc]] utils.fsdp2_prepare_model
|
||||
|
||||
## fsdp2_prepare_auto_wrap_policy
|
||||
|
||||
@ -13,8 +13,10 @@ specific language governing permissions and limitations under the License.
|
||||
rendered properly in your Markdown viewer.
|
||||
-->
|
||||
|
||||
# The inference API
|
||||
# Pipeline parallelism
|
||||
|
||||
These docs refer to the [PiPPy](https://github.com/PyTorch/PiPPy) integration.
|
||||
Accelerate supports pipeline parallelism for large-scale training with the PyTorch [torch.distributed.pipelining](https://pytorch.org/docs/stable/distributed.pipelining.html) API.
|
||||
|
||||
## prepare_pippy
|
||||
|
||||
[[autodoc]] inference.prepare_pippy
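
A hedged sketch of typical usage; `model` and `example_inputs` are placeholders for your own model and a sample batch, and the keyword arguments shown are assumptions about the signature documented above:

```python
from accelerate.inference import prepare_pippy

# `model` and `example_inputs` are placeholders for your own model and a sample input batch
model = prepare_pippy(model, example_args=(example_inputs,), gather_output=True)
output = model(example_inputs)
```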
|
||||
|
||||
@ -13,7 +13,7 @@ specific language governing permissions and limitations under the License.
|
||||
rendered properly in your Markdown viewer.
|
||||
-->
|
||||
|
||||
# Kwargs Handlers
|
||||
# Kwargs handlers
|
||||
|
||||
The following objects can be passed to the main [`Accelerator`] to customize how some PyTorch objects
|
||||
related to distributed training or mixed precision are created.
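
For example, a minimal sketch of passing one of these handlers (here `DistributedDataParallelKwargs`, with an illustrative flag) to the [`Accelerator`]:

```python
from accelerate import Accelerator
from accelerate.utils import DistributedDataParallelKwargs

# Customize how the underlying DistributedDataParallel wrapper gets created
ddp_kwargs = DistributedDataParallelKwargs(find_unused_parameters=True)
accelerator = Accelerator(kwargs_handlers=[ddp_kwargs])
```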
|
||||
|
||||
@ -17,6 +17,10 @@ rendered properly in your Markdown viewer.
|
||||
|
||||
Functions for launching training on distributed processes.
|
||||
|
||||
## notebook_launcher
|
||||
|
||||
[[autodoc]] accelerate.notebook_launcher
|
||||
|
||||
## debug_launcher
|
||||
|
||||
[[autodoc]] accelerate.debug_launcher
|
||||
@ -13,9 +13,9 @@ specific language governing permissions and limitations under the License.
|
||||
rendered properly in your Markdown viewer.
|
||||
-->
|
||||
|
||||
# Logging with Accelerate
|
||||
# Logging
|
||||
|
||||
Refer to the [Troubleshooting guide](../usage_guides/troubleshooting#logging) or to the example below to learn
|
||||
how to use 🤗 Accelerate's logger.
|
||||
how to use Accelerate's logger.
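
A minimal sketch of that pattern; the logger behaves like a standard `logging` logger with an extra `main_process_only` switch:

```python
from accelerate import Accelerator
from accelerate.logging import get_logger

logger = get_logger(__name__)

accelerator = Accelerator()
# Logged once, on the main process only
logger.info("My log", main_process_only=True)
# Logged on every process
logger.debug("My second log", main_process_only=False)
```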
|
||||
|
||||
[[autodoc]] logging.get_logger
|
||||
@ -13,20 +13,36 @@ specific language governing permissions and limitations under the License.
|
||||
rendered properly in your Markdown viewer.
|
||||
-->
|
||||
|
||||
# Utilities for Megatron-LM
|
||||
# Megatron-LM utilities
|
||||
|
||||
## MegatronLMPlugin
|
||||
|
||||
[[autodoc]] utils.MegatronLMPlugin
|
||||
|
||||
## MegatronLMDummyScheduler
|
||||
|
||||
[[autodoc]] utils.MegatronLMDummyScheduler
|
||||
|
||||
## MegatronLMDummyDataLoader
|
||||
|
||||
[[autodoc]] utils.MegatronLMDummyDataLoader
|
||||
|
||||
## AbstractTrainStep
|
||||
|
||||
[[autodoc]] utils.AbstractTrainStep
|
||||
|
||||
## GPTTrainStep
|
||||
|
||||
[[autodoc]] utils.GPTTrainStep
|
||||
|
||||
## BertTrainStep
|
||||
|
||||
[[autodoc]] utils.BertTrainStep
|
||||
|
||||
## T5TrainStep
|
||||
|
||||
[[autodoc]] utils.T5TrainStep
|
||||
|
||||
## avg_losses_across_data_parallel_group
|
||||
|
||||
[[autodoc]] utils.avg_losses_across_data_parallel_group
|
||||
|
||||
@ -21,8 +21,14 @@ instances share the same state, which is initialized on the first instantiation.
|
||||
These classes are immutable and store information about certain configurations or
|
||||
states.
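
As a quick sketch of how they are typically queried:

```python
from accelerate import PartialState

state = PartialState()
print(state.device, state.process_index, state.num_processes)
if state.is_main_process:
    print("Running on the main process")
```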
|
||||
|
||||
## PartialState
|
||||
|
||||
[[autodoc]] state.PartialState
|
||||
|
||||
## AcceleratorState
|
||||
|
||||
[[autodoc]] state.AcceleratorState
|
||||
|
||||
## GradientState
|
||||
|
||||
[[autodoc]] state.GradientState
|
||||
@ -13,25 +13,36 @@ specific language governing permissions and limitations under the License.
|
||||
rendered properly in your Markdown viewer.
|
||||
-->
|
||||
|
||||
# Wrapper classes for torch Dataloaders, Optimizers, and Schedulers
|
||||
# DataLoaders, Optimizers, and Schedulers
|
||||
|
||||
The internal classes Accelerate uses to prepare objects for distributed training
|
||||
when calling [`~Accelerator.prepare`].
|
||||
|
||||
## Datasets and DataLoaders
|
||||
## DataLoader utilities
|
||||
|
||||
[[autodoc]] data_loader.prepare_data_loader
|
||||
[[autodoc]] data_loader.skip_first_batches
|
||||
|
||||
## BatchSamplerShard
|
||||
|
||||
[[autodoc]] data_loader.BatchSamplerShard
|
||||
|
||||
## IterableDatasetShard
|
||||
|
||||
[[autodoc]] data_loader.IterableDatasetShard
|
||||
|
||||
## DataLoaderShard
|
||||
|
||||
[[autodoc]] data_loader.DataLoaderShard
|
||||
|
||||
## DataLoaderDispatcher
|
||||
|
||||
[[autodoc]] data_loader.DataLoaderDispatcher
|
||||
|
||||
## Optimizers
|
||||
## AcceleratedOptimizer
|
||||
|
||||
[[autodoc]] optimizer.AcceleratedOptimizer
|
||||
|
||||
## Schedulers
|
||||
## AcceleratedScheduler
|
||||
|
||||
[[autodoc]] scheduler.AcceleratedScheduler
|
||||
@ -13,23 +13,48 @@ specific language governing permissions and limitations under the License.
|
||||
rendered properly in your Markdown viewer.
|
||||
-->
|
||||
|
||||
# Experiment Tracking
|
||||
# Experiment Trackers
|
||||
|
||||
## The Base Tracker Class
|
||||
## GeneralTracker
|
||||
|
||||
[[autodoc]] tracking.GeneralTracker
|
||||
|
||||
## Integrated Trackers
|
||||
## TensorBoardTracker
|
||||
|
||||
[[autodoc]] tracking.TensorBoardTracker
|
||||
- __init__
|
||||
|
||||
## WandBTracker
|
||||
|
||||
[[autodoc]] tracking.WandBTracker
|
||||
- __init__
|
||||
|
||||
## Trackio
|
||||
|
||||
[[autodoc]] tracking.TrackioTracker
|
||||
- __init__
|
||||
|
||||
## CometMLTracker
|
||||
|
||||
[[autodoc]] tracking.CometMLTracker
|
||||
- __init__
|
||||
|
||||
## AimTracker
|
||||
|
||||
[[autodoc]] tracking.AimTracker
|
||||
- __init__
|
||||
|
||||
## MLflowTracker
|
||||
|
||||
[[autodoc]] tracking.MLflowTracker
|
||||
- __init__
|
||||
|
||||
## ClearMLTracker
|
||||
|
||||
[[autodoc]] tracking.ClearMLTracker
|
||||
- __init__
|
||||
|
||||
## SwanLabTracker
|
||||
|
||||
[[autodoc]] tracking.SwanLabTracker
|
||||
- __init__
|
||||
|
||||
@ -13,7 +13,7 @@ specific language governing permissions and limitations under the License.
|
||||
rendered properly in your Markdown viewer.
|
||||
-->
|
||||
|
||||
# Helpful Utilities
|
||||
# Utility functions and classes
|
||||
|
||||
Below are a variety of utility functions that 🤗 Accelerate provides, broken down by use-case.
|
||||
|
||||
@ -126,6 +126,10 @@ These include data operations that mimic the same `torch` ops but can be used on
|
||||
|
||||
[[autodoc]] utils.gather_object
|
||||
|
||||
[[autodoc]] utils.get_grad_scaler
|
||||
|
||||
[[autodoc]] utils.get_mixed_precision_context_manager
|
||||
|
||||
[[autodoc]] utils.listify
|
||||
|
||||
[[autodoc]] utils.pad_across_processes
|
||||
@ -170,6 +174,8 @@ When setting up 🤗 Accelerate for the first time, rather than running `acceler
|
||||
|
||||
[[autodoc]] utils.environment.override_numa_affinity
|
||||
|
||||
[[autodoc]] utils.purge_accelerate_environment
|
||||
|
||||
## Memory
|
||||
|
||||
[[autodoc]] utils.find_executable_batch_size
|
||||
@ -202,8 +208,7 @@ These utilities relate to interacting with PyTorch models
|
||||
|
||||
[[autodoc]] utils.set_module_tensor_to_device
|
||||
|
||||
[[autodoc]] utils.shard_checkpoint
|
||||
|
||||
[[autodoc]] utils.get_module_children_bottom_up
|
||||
|
||||
## Parallel
|
||||
|
||||
@ -213,6 +218,8 @@ These include general utilities that should be used when working in parallel.
|
||||
|
||||
[[autodoc]] utils.save
|
||||
|
||||
[[autodoc]] utils.load
|
||||
|
||||
[[autodoc]] utils.wait_for_everyone
|
||||
|
||||
|
||||
|
||||
@ -53,6 +53,8 @@ accelerate launch path_to_script.py --args_for_the_script
|
||||
|
||||
To learn more, check out the [Launch distributed code](basic_tutorials/launch) tutorial for more information about launching your scripts.
|
||||
|
||||
We also have a [configuration zoo](https://github.com/huggingface/accelerate/blob/main/examples/config_yaml_templates) which showcases a number of premade **minimal** example configurations for a variety of setups you can run.
|
||||
|
||||
## Adapt training code
|
||||
|
||||
The next main feature of Accelerate is the [`Accelerator`] class which adapts your PyTorch code to run on different distributed setups.
|
||||
@ -166,13 +168,14 @@ with init_empty_weights():
|
||||
|
||||
The [`~accelerate.load_checkpoint_and_dispatch`] function loads full or sharded checkpoints into the empty model, and automatically distributes weights across all available devices.
|
||||
|
||||
The `device_map` parameter determines where to place each model layer, and specifiying `"auto"` places them on the GPU first, then the CPU, and finally the hard drive as memory-mapped tensors if there's still not enough memory. Use the `no_split_module_classes` parameter to indicate which modules shouldn't be split across devices (typically those with a residual connection).
|
||||
The `device_map` parameter determines where to place each model layer, and specifying `"auto"` places them on the GPU first, then the CPU, and finally the hard drive as memory-mapped tensors if there's still not enough memory. Use the `no_split_module_classes` parameter to indicate which modules shouldn't be split across devices (typically those with a residual connection).
|
||||
|
||||
```py
|
||||
from accelerate import load_checkpoint_and_dispatch
|
||||
|
||||
model_checkpoint = "your-local-model-folder"
|
||||
model = load_checkpoint_and_dispatch(
|
||||
model, checkpoint="mistralai/Mixtral-8x7B-Instruct-v0.1", device_map="auto", no_split_module_classes=['Block']
|
||||
model, checkpoint=model_checkpoint, device_map="auto", no_split_module_classes=['Block']
|
||||
)
|
||||
```
|
||||
|
||||
|
||||
@ -13,15 +13,15 @@ specific language governing permissions and limitations under the License.
|
||||
rendered properly in your Markdown viewer.
|
||||
-->
|
||||
|
||||
# Handling big models for inference
|
||||
# Big Model Inference
|
||||
|
||||
One of the biggest advancements 🤗 Accelerate provides is the concept of [large model inference](../concept_guides/big_model_inference) wherein you can perform *inference* on models that cannot fully fit on your graphics card.
|
||||
One of the biggest advancements Accelerate provides is [Big Model Inference](../concept_guides/big_model_inference), which allows you to perform inference with models that don't fully fit on your graphics card.
|
||||
|
||||
This tutorial will be broken down into two parts showcasing how to use both 🤗 Accelerate and 🤗 Transformers (a higher API-level) to make use of this idea.
|
||||
This tutorial will show you how to use Big Model Inference in Accelerate and the Hugging Face ecosystem.
|
||||
|
||||
## Using 🤗 Accelerate
|
||||
## Accelerate
|
||||
|
||||
For these tutorials, we'll assume a typical workflow for loading your model in such that:
|
||||
A typical workflow for loading a PyTorch model is shown below. `ModelClass` is a model that exceeds the GPU memory of your device (mps or cuda or xpu).
|
||||
|
||||
```py
|
||||
import torch
|
||||
@ -31,9 +31,7 @@ state_dict = torch.load(checkpoint_file)
|
||||
my_model.load_state_dict(state_dict)
|
||||
```
|
||||
|
||||
Note that here we assume that `ModelClass` is a model that takes up more video-card memory than what can fit on your device (be it `mps` or `cuda`).
|
||||
|
||||
The first step is to init an empty skeleton of the model which won't take up any RAM using the [`init_empty_weights`] context manager:
|
||||
With Big Model Inference, the first step is to init an empty skeleton of the model with the `init_empty_weights` context manager. This doesn't require any memory because `my_model` is "parameterless".
|
||||
|
||||
```py
|
||||
from accelerate import init_empty_weights
|
||||
@ -41,22 +39,14 @@ with init_empty_weights():
|
||||
my_model = ModelClass(...)
|
||||
```
|
||||
|
||||
With this `my_model` currently is "parameterless", hence leaving the smaller footprint than what one would normally get loading this onto the CPU directly.
|
||||
Next, the weights are loaded into the model for inference.
|
||||
|
||||
Next we need to load in the weights to our model so we can perform inference.
|
||||
The [`load_checkpoint_and_dispatch`] method loads a checkpoint inside your empty model and dispatches the weights for each layer across all available devices, starting with the fastest devices (GPU, MPS, XPU, NPU, MLU, SDAA, MUSA) first before moving to the slower ones (CPU and hard drive).
|
||||
|
||||
For this we will use [`load_checkpoint_and_dispatch`], which as the name implies will load a checkpoint inside your empty model and dispatch the weights for each layer across all the devices you have available (GPU/MPS and CPU RAM).
|
||||
Setting `device_map="auto"` automatically fills all available space on the GPU(s) first, then the CPU, and finally, the hard drive (the absolute slowest option) if there is still not enough memory.
|
||||
|
||||
To determine how this `dispatch` can be performed, generally specifying `device_map="auto"` will be good enough as 🤗 Accelerate
|
||||
will attempt to fill all the space in your GPU(s), then loading them to the CPU, and finally if there is not enough RAM it will be loaded to the disk (the absolute slowest option).
|
||||
|
||||
<Tip>
|
||||
|
||||
For more details on designing your own device map, see this section of the [concept guide](../concept_guides/big_model_inference#designing-a-device-map)
|
||||
|
||||
</Tip>
|
||||
|
||||
See an example below:
|
||||
> [!TIP]
|
||||
> Refer to the [Designing a device map](../concept_guides/big_model_inference#designing-a-device-map) guide for more details on how to design your own device map.
|
||||
|
||||
```py
|
||||
from accelerate import load_checkpoint_and_dispatch
|
||||
@ -66,42 +56,29 @@ model = load_checkpoint_and_dispatch(
|
||||
)
|
||||
```
|
||||
|
||||
<Tip>
|
||||
If there are certain “chunks” of layers that shouldn’t be split, pass them to `no_split_module_classes` (see [here](../concept_guides/big_model_inference#loading-weights) for more details).
|
||||
|
||||
If there are certain "chunks" of layers that shouldn't be split, you can pass them in as `no_split_module_classes`. Read more about it [here](../concept_guides/big_model_inference#loading-weights)
|
||||
A model's weights can also be sharded into multiple checkpoints to save memory, such as when the `state_dict` doesn't fit in memory (see [here](../concept_guides/big_model_inference#sharded-checkpoints) for more details).
|
||||
|
||||
</Tip>
|
||||
|
||||
<Tip>
|
||||
|
||||
Also to save on memory (such as if the `state_dict` will not fit in RAM), a model's weights can be divided and split into multiple checkpoint files. Read more about it [here](../concept_guides/big_model_inference#sharded-checkpoints)
|
||||
|
||||
</Tip>
|
||||
|
||||
Now that the model is dispatched fully, you can perform inference as normal with the model:
|
||||
Now that the model is fully dispatched, you can perform inference.
|
||||
|
||||
```py
|
||||
input = torch.randn(2,3)
|
||||
input = input.to("cuda")
|
||||
device_type = next(iter(model.parameters())).device.type
|
||||
input = input.to(device_type)
|
||||
output = model(input)
|
||||
```
|
||||
|
||||
What will happen now is each time the input gets passed through a layer, it will be sent from the CPU to the GPU (or disk to CPU to GPU), the output is calculated, and then the layer is pulled back off the GPU going back down the line. While this adds some overhead to the inference being performed, through this method it is possible to run **any size model** on your system, as long as the largest layer is capable of fitting on your GPU.
|
||||
Each time an input is passed through a layer, it is sent from the CPU to the GPU (or disk to CPU to GPU), the output is calculated, and the layer is removed from the GPU going back down the line. While this adds some overhead to inference, it enables you to run any size model on your system, as long as the largest layer fits on your GPU.
|
||||
|
||||
<Tip>
|
||||
Multiple GPUs, or "model parallelism", can be utilized but only one GPU will be active at any given moment. This forces the GPU to wait for the previous GPU to send it the output. You should launch your script normally with Python instead of other tools like torchrun and accelerate launch.
|
||||
|
||||
Multiple GPUs can be utilized, however this is considered "model parallelism" and as a result only one GPU will be active at a given moment, waiting for the prior one to send it the output. You should launch your script normally with `python`
|
||||
and not need `torchrun`, `accelerate launch`, etc.
|
||||
> [!TIP]
|
||||
> You may also be interested in *pipeline parallelism* which utilizes all available GPUs at once, instead of only having one GPU active at a time. This approach is less flexible though. For more details, refer to the [Memory-efficient pipeline parallelism](./distributed_inference#memory-efficient-pipeline-parallelism-experimental) guide.
|
||||
|
||||
</Tip>
|
||||
<Youtube id="MWCSGj9jEAo"/>
|
||||
|
||||
For a visual representation of this, check out the animation below:
|
||||
|
||||
<Youtube id="MWCSGj9jEAo" />
|
||||
|
||||
### Complete Example
|
||||
|
||||
Below is the full example showcasing what we performed above:
|
||||
Take a look at a full example of Big Model Inference below.
|
||||
|
||||
```py
|
||||
import torch
|
||||
@ -115,17 +92,18 @@ model = load_checkpoint_and_dispatch(
|
||||
)
|
||||
|
||||
input = torch.randn(2,3)
|
||||
input = input.to("cuda")
|
||||
device_type = next(iter(model.parameters())).device.type
|
||||
input = input.to(device_type)
|
||||
output = model(input)
|
||||
```
|
||||
|
||||
## Using 🤗 Transformers, 🤗 Diffusers, and other 🤗 Open Source Libraries
|
||||
## Hugging Face ecosystem
|
||||
|
||||
Libraries that support 🤗 Accelerate big model inference include all of the earlier logic in their `from_pretrained` constructors.
|
||||
Other libraries in the Hugging Face ecosystem, like Transformers or Diffusers, support Big Model Inference in their [`~transformers.PreTrainedModel.from_pretrained`] constructors.
|
||||
|
||||
These operate by specifying a string representing the model to download from the [🤗 Hub](https://hf.co/models) and then denoting `device_map="auto"` along with a few extra parameters.
|
||||
You just need to add `device_map="auto"` in [`~transformers.PreTrainedModel.from_pretrained`] to enable Big Model Inference.
|
||||
|
||||
As a brief example, we will look at using `transformers` and loading in Big Science's T0pp model.
|
||||
For example, load Big Science's T0pp, an 11 billion parameter model, with Big Model Inference.
|
||||
|
||||
```py
|
||||
from transformers import AutoModelForSeq2SeqLM
|
||||
@ -133,9 +111,7 @@ from transformers import AutoModelForSeq2SeqLM
|
||||
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", device_map="auto")
|
||||
```
|
||||
|
||||
After loading the model in, the initial steps from before to prepare a model have all been done and the model is fully
|
||||
ready to make use of all the resources in your machine. Through these constructors, you can also save *more* memory by
|
||||
specifying the precision the model is loaded into as well, through the `torch_dtype` parameter, such as:
|
||||
After loading the model, the empty init and smart dispatch steps from before are executed and the model is fully ready to make use of all the resources in your machine. Through these constructors, you can also save more memory by specifying the `torch_dtype` parameter to load a model in a lower precision.
|
||||
|
||||
```py
|
||||
from transformers import AutoModelForSeq2SeqLM
|
||||
@ -143,8 +119,6 @@ from transformers import AutoModelForSeq2SeqLM
|
||||
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", device_map="auto", torch_dtype=torch.float16)
|
||||
```
|
||||
|
||||
To learn more about this, check out the 🤗 Transformers documentation available [here](https://huggingface.co/docs/transformers/main/en/main_classes/model#large-model-loading).
|
||||
## Next steps
|
||||
|
||||
## Where to go from here
|
||||
|
||||
For a much more detailed look at big model inference, be sure to check out the [Conceptual Guide on it](../concept_guides/big_model_inference)
|
||||
For a more detailed explanation of Big Model Inference, make sure to check out the [conceptual guide](../concept_guides/big_model_inference)!
|
||||
|
||||
@ -15,8 +15,8 @@ rendered properly in your Markdown viewer.
|
||||
|
||||
# Checkpointing
|
||||
|
||||
When training a PyTorch model with 🤗 Accelerate, you may often want to save and continue a state of training. Doing so requires
|
||||
saving and loading the model, optimizer, RNG generators, and the GradScaler. Inside 🤗 Accelerate are two convenience functions to achieve this quickly:
|
||||
When training a PyTorch model with Accelerate, you may often want to save and continue a state of training. Doing so requires
|
||||
saving and loading the model, optimizer, RNG generators, and the GradScaler. Inside Accelerate are two convenience functions to achieve this quickly:
|
||||
- Use [`~Accelerator.save_state`] for saving everything mentioned above to a folder location
|
||||
- Use [`~Accelerator.load_state`] for loading everything stored from an earlier `save_state`
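
A minimal sketch of the pattern; the model, optimizer, and folder name are arbitrary examples:

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()
model = torch.nn.Linear(8, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
model, optimizer = accelerator.prepare(model, optimizer)

# Save the model, optimizer, RNG generators, and GradScaler to a folder
accelerator.save_state("my_checkpoint")

# ...later, restore everything from that folder
accelerator.load_state("my_checkpoint")
```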
|
||||
|
||||
|
||||
docs/source/usage_guides/compilation.md (new file, 76 lines)
@ -0,0 +1,76 @@
|
||||
# Compilation
|
||||
|
||||
## Overview
|
||||
|
||||
Pytorch 2.0 introduced `torch.compile`, a powerful feature that makes PyTorch code run faster by JIT-compiling PyTorch code into optimized kernels. Key features of `torch.compile` include:
|
||||
|
||||
- **Performance Improvement**: Significantly speeds up model execution by optimizing the computation graph.
|
||||
- **Ease of Use**: Requires minimal code changes to implement, making it highly accessible.
|
||||
- **Compatibility**: Works seamlessly with existing PyTorch code and models.
|
||||
|
||||
When used with Accelerate, `torch.compile` integrates smoothly into distributed training workflows, allowing you to benefit from both distributed execution and compilation optimizations simultaneously.
|
||||
|
||||
The first execution of compiled code typically takes longer as it includes the compilation time, but subsequent runs are significantly faster. For optimal performance in different scenarios, `torch.compile` offers various modes like `"default"`, `"reduce-overhead"` (which uses CUDA graphs to further reduce overhead), and `"max-autotune"` (which performs extensive autotuning to find the best kernels for your model).
|
||||
|
||||
## Using `torch.compile` with Accelerate
|
||||
|
||||
Accelerate provides `TorchDynamoPlugin` for easy and seamless integration of `torch.compile` into your training scripts.
|
||||
|
||||
```python
|
||||
from accelerate import Accelerator
|
||||
from accelerate.utils import TorchDynamoPlugin
|
||||
|
||||
# Configure the compilation backend
|
||||
dynamo_plugin = TorchDynamoPlugin(
|
||||
backend="inductor", # Options: "inductor", "aot_eager", "aot_nvfuser", etc.
|
||||
mode="default", # Options: "default", "reduce-overhead", "max-autotune"
|
||||
fullgraph=True,
|
||||
dynamic=False
|
||||
)
|
||||
|
||||
# Initialize accelerator with the plugin
|
||||
accelerator = Accelerator(dynamo_plugin=dynamo_plugin)
|
||||
# This will apply torch.compile to your model
|
||||
model = accelerator.prepare(model)
|
||||
```
|
||||
|
||||
It is compatible with all other Accelerate features and plugins, including mixed precision and distributed training (DDP, FSDP, DeepSpeed).
|
||||
|
||||
## Regional Compilation
|
||||
|
||||
Instead of trying to compile the whole model, which usually has a big problem space for optimization, regional compilation targets repeated blocks of the same class and compiles them sequentially to hit the compiler's cache. For example, in `GPT2LMHeadModel`, the repeated block/class is `GPT2Block`, which can be accessed as `model.transformer.h[0]`. The rest of the model (e.g. `model.lm_head`) is compiled separately.
|
||||
|
||||
This reduces the compilation overhead/cold start time of models like LLMs and Transformers in general.
|
||||
See <https://pytorch.org/tutorials/recipes/regional_compilation.html> for more details.
|
||||
|
||||
### How to Use Regional Compilation
|
||||
|
||||
It can be enabled by setting `use_regional_compilation=True` in the `TorchDynamoPlugin` configuration:
|
||||
|
||||
```python
|
||||
# Configure the compilation backend
|
||||
dynamo_plugin = TorchDynamoPlugin(
|
||||
use_regional_compilation=True,
|
||||
... # other parameters
|
||||
)
|
||||
# Initialize accelerator with the plugin
|
||||
accelerator = Accelerator(dynamo_plugin=dynamo_plugin)
|
||||
# This will apply compile_regions to your model
|
||||
model = accelerator.prepare(model)
|
||||
```
|
||||
|
||||
You can also use the `accelerate.utils.compile_regions` utility directly, the same way you would use `torch.compile`, as shown in the sketch below.
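
A hedged sketch, assuming `compile_regions` forwards the same keyword arguments as `torch.compile`:

```python
import torch
from accelerate.utils import compile_regions

model = torch.nn.Sequential(*[torch.nn.Linear(16, 16) for _ in range(4)])
# Compile the repeated blocks individually instead of the whole model at once
model = compile_regions(model, mode="reduce-overhead")
```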
|
||||
|
||||
### Benefits of Regional Compilation
|
||||
|
||||
We have conducted extensive benchmarks comparing full compilation and regional compilation using the `torch.compile` feature in PyTorch. The full results are available in the [accelerate repository](https://github.com/huggingface/accelerate/tree/main/benchmarks/torch.compile/regional_compilation). The key findings from our benchmarks are:
|
||||
|
||||
1. **Comparable Performance**: Regional compilation delivers performance speedups similar to full compilation, especially for larger models.
|
||||
2. **Faster Compilation**: Regional compilation significantly reduces the time taken to compile models, making it a more efficient choice for deployment.
|
||||
3. **Batch Size Impact**: The performance difference between compilation strategies diminishes with larger batch sizes, indicating that the overhead of compilation is less impactful in those scenarios.
|
||||
4. **Model Size Consideration**: The benefits of regional compilation are more pronounced in larger models, where the compilation time savings can be substantial.
|
||||
5. **Practical Application**: For real-world applications, regional compilation is a practical choice for optimizing training cold start times, especially when working with large models.
|
||||
|
||||
## Conclusion
|
||||
|
||||
Both full and regional compilation can significantly speed up your models. Regional compilation offers a practical balance between compilation time and runtime performance, especially for training large models with substantial batch sizes.
|
||||
@ -23,7 +23,7 @@ Distributed Data Parallel (DDP) communication hooks provide a generic interface
|
||||
- **BF16 Compression Hook**: Similar to FP16, but uses the Brain Floating Point format (`torch.bfloat16`), which can be more efficient on certain hardware.
|
||||
- **PowerSGD Hook**: An advanced gradient compression algorithm that provides high compression rates and can accelerate bandwidth-bound distributed training.
|
||||
|
||||
In this tutorial, you will see how to quickly set up DDP communication hooks and perform training with the utilities provided in 🤗 Accelerate, which can be as simple as adding just one new line of code! This demonstrates how to use DDP communication hooks to optimize gradient communication in distributed training with the 🤗 Accelerate library.
|
||||
In this tutorial, you will see how to quickly set up DDP communication hooks and perform training with the utilities provided in Accelerate, which can be as simple as adding just one new line of code! This demonstrates how to use DDP communication hooks to optimize gradient communication in distributed training with the Accelerate library.
|
||||
|
||||
## FP16 Compression Hook
|
||||
|
||||
@ -34,6 +34,10 @@ In this tutorial, you will see how to quickly set up DDP communication hooks and
|
||||
import torch
|
||||
from torch.nn.parallel import DistributedDataParallel as DDP
|
||||
from torch.distributed.algorithms.ddp_comm_hooks import default_hooks
|
||||
from accelerate.test_utils.testing import get_backend
|
||||
|
||||
device_type, _, _ = get_backend()
|
||||
device_id = getattr(torch, device_type, torch.cuda).current_device()
|
||||
|
||||
class MyModel(torch.nn.Module):
|
||||
def __init__(self):
|
||||
@ -44,7 +48,7 @@ class MyModel(torch.nn.Module):
|
||||
return self.layer(x)
|
||||
|
||||
model = MyModel()
|
||||
model = DDP(model, device_ids=[torch.cuda.current_device()])
|
||||
model = DDP(model, device_ids=[device_id])
|
||||
model.register_comm_hook(state=None, hook=default_hooks.fp16_compress_hook)
|
||||
|
||||
# Training loop
|
||||
@ -108,6 +112,10 @@ BF16 Compression Hook API is experimental, and it requires NCCL version later th
|
||||
import torch
|
||||
from torch.nn.parallel import DistributedDataParallel as DDP
|
||||
from torch.distributed.algorithms.ddp_comm_hooks import default_hooks
|
||||
from accelerate.test_utils.testing import get_backend
|
||||
|
||||
device_type, _, _ = get_backend()
|
||||
device_id = getattr(torch, device_type, torch.cuda).current_device()
|
||||
|
||||
class MyModel(torch.nn.Module):
|
||||
def __init__(self):
|
||||
@ -118,7 +126,7 @@ class MyModel(torch.nn.Module):
|
||||
return self.layer(x)
|
||||
|
||||
model = MyModel()
|
||||
model = DDP(model, device_ids=[torch.cuda.current_device()])
|
||||
model = DDP(model, device_ids=[device_id])
|
||||
model.register_comm_hook(state=None, hook=default_hooks.bf16_compress_hook)
|
||||
|
||||
# Training loop
|
||||
@ -182,6 +190,10 @@ PowerSGD typically requires extra memory of the same size as the model’s gradi
|
||||
import torch
|
||||
from torch.nn.parallel import DistributedDataParallel as DDP
|
||||
from torch.distributed.algorithms.ddp_comm_hooks import powerSGD_hook
|
||||
from accelerate.test_utils.testing import get_backend
|
||||
|
||||
device_type, _, _ = get_backend()
|
||||
device_id = getattr(torch, device_type, torch.cuda).current_device()
|
||||
|
||||
class MyModel(torch.nn.Module):
|
||||
def __init__(self):
|
||||
@ -192,7 +204,7 @@ class MyModel(torch.nn.Module):
|
||||
return self.layer(x)
|
||||
|
||||
model = MyModel()
|
||||
model = DDP(model, device_ids=[torch.cuda.current_device()])
|
||||
model = DDP(model, device_ids=[device_id])
|
||||
state = powerSGD_hook.PowerSGDState(process_group=None)
|
||||
model.register_comm_hook(state=state, hook=powerSGD_hook.powerSGD_hook)
|
||||
|
||||
|
||||
@ -15,7 +15,7 @@ rendered properly in your Markdown viewer.
|
||||
|
||||
# DeepSpeed
|
||||
|
||||
[DeepSpeed](https://github.com/microsoft/DeepSpeed) implements everything described in the [ZeRO paper](https://arxiv.org/abs/1910.02054). Some of the salient optimizations are:
|
||||
[DeepSpeed](https://github.com/deepspeedai/DeepSpeed) implements everything described in the [ZeRO paper](https://arxiv.org/abs/1910.02054). Some of the salient optimizations are:
|
||||
|
||||
1. Optimizer state partitioning (ZeRO stage 1)
|
||||
2. Gradient partitioning (ZeRO stage 2)
|
||||
@ -33,7 +33,7 @@ DeepSpeed ZeRO-2 is primarily used only for training, as its features are of no
|
||||
DeepSpeed ZeRO-3 can be used for inference as well since it allows huge models to be loaded on multiple GPUs, which
|
||||
won't be possible on a single GPU.
|
||||
|
||||
🤗 Accelerate integrates [DeepSpeed](https://github.com/microsoft/DeepSpeed) via 2 options:
|
||||
Accelerate integrates [DeepSpeed](https://github.com/deepspeedai/DeepSpeed) via 2 options:
|
||||
|
||||
1. Integration of the DeepSpeed features via `deepspeed config file` specification in `accelerate config`. You just supply your custom config file or use our template. Most of
this document is focused on this feature. This supports all the core features of DeepSpeed and gives the user a lot of flexibility.
|
||||
@ -45,7 +45,7 @@ won't be possible on a single GPU.
|
||||
|
||||
Training:
|
||||
|
||||
1. 🤗 Accelerate integrates all features of DeepSpeed ZeRO. This includes all the ZeRO stages 1, 2 and 3 as well as ZeRO-Offload, ZeRO-Infinity (which can offload to disk/NVMe) and ZeRO++.
|
||||
1. Accelerate integrates all features of DeepSpeed ZeRO. This includes all the ZeRO stages 1, 2 and 3 as well as ZeRO-Offload, ZeRO-Infinity (which can offload to disk/NVMe) and ZeRO++.
|
||||
Below is a short description of Data Parallelism using ZeRO - Zero Redundancy Optimizer along with diagram from this [blog post](https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/)
|
||||

|
||||
|
||||
@ -74,7 +74,7 @@ Inference:
|
||||
|
||||
## How it works?
|
||||
|
||||
**Pre-Requisites**: Install DeepSpeed version >=0.6.5. Please refer to the [DeepSpeed Installation details](https://github.com/microsoft/DeepSpeed#installation)
|
||||
**Pre-Requisites**: Install DeepSpeed version >=0.6.5. Please refer to the [DeepSpeed Installation details](https://github.com/deepspeedai/DeepSpeed#installation)
|
||||
for more information.
|
||||
|
||||
We will first look at the easy-to-use integration via `accelerate config`.
|
||||
@ -167,7 +167,7 @@ Currently, `Accelerate` supports following config through the CLI:
|
||||
`deepspeed_hostfile`: DeepSpeed hostfile for configuring multi-node compute resources.
|
||||
`deepspeed_exclusion_filter`: DeepSpeed exclusion filter string when using multi-node setup.
`deepspeed_inclusion_filter`: DeepSpeed inclusion filter string when using multi-node setup.
|
||||
`deepspeed_multinode_launcher`: DeepSpeed multi-node launcher to use. If unspecified, will default to `pdsh`.
|
||||
`deepspeed_multinode_launcher`: DeepSpeed multi-node launcher to use, e.g. `pdsh`, `standard`, `openmpi`, `mvapich`, `mpich`, `slurm`, `nossh` (requires DeepSpeed >= 0.14.5). If unspecified, will default to `pdsh`.
|
||||
`deepspeed_config_file`: path to the DeepSpeed config file in `json` format. See the next section for more details on this.
|
||||
```
|
||||
To be able to tweak more options, you will need to use a DeepSpeed config file.
|
||||
@ -194,7 +194,7 @@ For instance, here is how you would run the NLP example `examples/by_feature/dee
|
||||
```bash
|
||||
compute_environment: LOCAL_MACHINE
|
||||
deepspeed_config:
|
||||
deepspeed_config_file: /home/ubuntu/accelerate/examples/configs/deepspeed_config_templates/zero_stage2_config.json
|
||||
deepspeed_config_file: /home/ubuntu/accelerate/examples/deepspeed_config_templates/zero_stage2_config.json
|
||||
zero3_init_flag: true
|
||||
distributed_type: DEEPSPEED
|
||||
fsdp_config: {}
|
||||
@ -275,7 +275,7 @@ accelerate launch examples/by_feature/deepspeed_with_config_support.py \
|
||||
```bash
|
||||
compute_environment: LOCAL_MACHINE
|
||||
deepspeed_config:
|
||||
deepspeed_config_file: /home/ubuntu/accelerate/examples/configs/deepspeed_config_templates/zero_stage3_offload_config.json
|
||||
deepspeed_config_file: /home/ubuntu/accelerate/examples/deepspeed_config_templates/zero_stage3_offload_config.json
|
||||
zero3_init_flag: true
|
||||
distributed_type: DEEPSPEED
|
||||
fsdp_config: {}
|
||||
@ -710,11 +710,18 @@ model, eval_dataloader = accelerator.prepare(model, eval_dataloader)
|
||||
2. Current integration doesn’t support `mpu`, limiting the tensor parallelism which is supported in Megatron-LM.
|
||||
3. Current integration doesn’t support multiple models.
|
||||
|
||||
## Multi-node DeepSpeed
|
||||
DeepSpeed supports multi-node inference and training over a variety of different launchers. You can specify a different launcher by setting the `deepspeed_multinode_launcher` config in the CLI or in the DeepSpeed config file.
|
||||
|
||||
Currently, accelerate supports passing configuration for the following DeepSpeed multi-node launchers: `pdsh` (default), `standard`, `openmpi`, `mvapich`, `mpich`, `slurm`, `nossh` (requires DeepSpeed >= 0.14.5).
|
||||
|
||||
Please read the [DeepSpeed documentation](https://www.deepspeed.ai/getting-started/#resource-configuration-multi-node) for more information on the different launchers. By default, DeepSpeed will attempt to use passwordless SSH from the main machine node to the other nodes to perform the launcher command. In this configuration, the accelerate launch command only needs to be run on the main node. If using the `nossh` launcher, you will need to run the accelerate launch command on every node using copied configuration.
|
||||
|
||||
## DeepSpeed Resources
|
||||
|
||||
The documentation for the internals related to deepspeed can be found [here](../package_reference/deepspeed).
|
||||
|
||||
- [Project's github](https://github.com/microsoft/deepspeed)
|
||||
- [Project's github](https://github.com/deepspeedai/DeepSpeed)
|
||||
- [Usage docs](https://www.deepspeed.ai/getting-started/)
|
||||
- [API docs](https://deepspeed.readthedocs.io/en/latest/index.html)
|
||||
- [Blog posts](https://www.microsoft.com/en-us/research/search/?q=deepspeed)
|
||||
@ -727,8 +734,8 @@ Papers:
|
||||
- [ZeRO++: Extremely Efficient Collective Communication for Giant Model Training](https://arxiv.org/abs/2306.10209)
|
||||
|
||||
|
||||
Finally, please, remember that 🤗 `Accelerate` only integrates DeepSpeed, therefore if you
|
||||
have any problems or questions with regards to DeepSpeed usage, please, file an issue with [DeepSpeed GitHub](https://github.com/microsoft/DeepSpeed/issues).
|
||||
Finally, please, remember that `Accelerate` only integrates DeepSpeed, therefore if you
|
||||
have any problems or questions with regards to DeepSpeed usage, please, file an issue with [DeepSpeed GitHub](https://github.com/deepspeedai/DeepSpeed/issues).
|
||||
|
||||
|
||||
<Tip>
|
||||
|
||||
docs/source/usage_guides/deepspeed_multiple_model.md (new file, 246 lines)
@ -0,0 +1,246 @@
|
||||
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
|
||||
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
|
||||
rendered properly in your Markdown viewer.
|
||||
-->
|
||||
|
||||
# Using multiple models with DeepSpeed
|
||||
|
||||
<Tip warning={true}>
|
||||
|
||||
This guide assumes that you have read and understood the [DeepSpeed usage guide](./deepspeed.md).
|
||||
|
||||
</Tip>
|
||||
|
||||
Running multiple models with Accelerate and DeepSpeed is useful for:
|
||||
|
||||
* Knowledge distillation
|
||||
* Post-training techniques like RLHF (see the [TRL](https://github.com/huggingface/trl) library for more examples)
|
||||
* Training multiple models at once
|
||||
|
||||
Currently, Accelerate has a **very experimental API** to help you use multiple models.
|
||||
|
||||
This tutorial will focus on two common use cases:
|
||||
|
||||
1. Knowledge distillation, where a smaller student model is trained to mimic a larger, better-performing teacher. If the student model fits on a single GPU, we can use ZeRO-2 for training and ZeRO-3 to shard the teacher for inference. This is significantly faster than using ZeRO-3 for both models.
|
||||
2. Training multiple *disjoint* models at once.
|
||||
|
||||
## Knowledge distillation
|
||||
|
||||
Knowledge distillation is a good example of using multiple models, but only training one of them.
|
||||
|
||||
Normally, you would use a single [`utils.DeepSpeedPlugin`] for both models. However, in this case, there are two separate configurations. Accelerate allows you to create and use multiple plugins **if and only if** they are in a `dict` so that you can reference and enable the proper plugin when needed.
|
||||
|
||||
```python
|
||||
from accelerate.utils import DeepSpeedPlugin
|
||||
|
||||
zero2_plugin = DeepSpeedPlugin(hf_ds_config="zero2_config.json")
|
||||
zero3_plugin = DeepSpeedPlugin(hf_ds_config="zero3_config.json")
|
||||
|
||||
deepspeed_plugins = {"student": zero2_plugin, "teacher": zero3_plugin}
|
||||
```
|
||||
|
||||
The `zero2_config.json` should be configured for full training (so specify `scheduler` and `optimizer` if you are not utilizing your own), while `zero3_config.json` should only be configured for the inference model, as shown in the example below.
|
||||
|
||||
```json
|
||||
{
|
||||
"bf16": {
|
||||
"enabled": "auto"
|
||||
},
|
||||
"zero_optimization": {
|
||||
"stage": 3,
|
||||
"overlap_comm": true,
|
||||
"reduce_bucket_size": "auto",
|
||||
"stage3_prefetch_bucket_size": "auto",
|
||||
"stage3_param_persistence_threshold": "auto",
|
||||
"stage3_max_live_parameters": "auto",
|
||||
"stage3_max_reuse_distance": "auto",
|
||||
},
|
||||
"train_micro_batch_size_per_gpu": 1
|
||||
}
|
||||
```
|
||||
|
||||
An example `zero2_config.json` configuration is shown below.
|
||||
|
||||
```json
|
||||
{
|
||||
"bf16": {
|
||||
"enabled": "auto"
|
||||
},
|
||||
"optimizer": {
|
||||
"type": "AdamW",
|
||||
"params": {
|
||||
"lr": "auto",
|
||||
"weight_decay": "auto",
|
||||
"torch_adam": true,
|
||||
"adam_w_mode": true
|
||||
}
|
||||
},
|
||||
"scheduler": {
|
||||
"type": "WarmupLR",
|
||||
"params": {
|
||||
"warmup_min_lr": "auto",
|
||||
"warmup_max_lr": "auto",
|
||||
"warmup_num_steps": "auto"
|
||||
}
|
||||
},
|
||||
"zero_optimization": {
|
||||
"stage": 2,
|
||||
"offload_optimizer": {
|
||||
"device": "cpu",
|
||||
"pin_memory": true
|
||||
}
|
||||
},
|
||||
"gradient_accumulation_steps": 1,
|
||||
"gradient_clipping": "auto",
|
||||
"train_batch_size": "auto",
|
||||
"train_micro_batch_size_per_gpu": "auto",
|
||||
}
|
||||
```
|
||||
|
||||
<Tip>
|
||||
|
||||
DeepSpeed will raise an error if `train_micro_batch_size_per_gpu` isn't specified, even if this particular model isn't being trained.
|
||||
|
||||
</Tip>
|
||||
|
||||
From here, create a single [`Accelerator`] and pass in both configurations.
|
||||
|
||||
```python
|
||||
from accelerate import Accelerator
|
||||
|
||||
accelerator = Accelerator(deepspeed_plugins=deepspeed_plugins)
|
||||
```
|
||||
|
||||
Now let's see how to use them.
|
||||
|
||||
### Student model
|
||||
|
||||
By default, Accelerate sets the first item in the `dict` as the default or enabled plugin (`"student"` plugin). Verify this by using the [`utils.deepspeed.get_active_deepspeed_plugin`] function to see which plugin is enabled.
|
||||
|
||||
```python
|
||||
active_plugin = get_active_deepspeed_plugin(accelerator.state)
|
||||
assert active_plugin is deepspeed_plugins["student"]
|
||||
```
|
||||
|
||||
[`AcceleratorState`] also keeps the active DeepSpeed plugin saved in `state.deepspeed_plugin`.
|
||||
```python
|
||||
assert active_plugin is accelerator.deepspeed_plugin
|
||||
```
|
||||
|
||||
Since `student` is the currently active plugin, let's go ahead and prepare the model, optimizer, and scheduler.
|
||||
|
||||
```python
|
||||
student_model, optimizer, scheduler = ...
|
||||
student_model, optimizer, scheduler, train_dataloader = accelerator.prepare(student_model, optimizer, scheduler, train_dataloader)
|
||||
```
|
||||
|
||||
Now it's time to deal with the teacher model.
|
||||
|
||||
### Teacher model
|
||||
|
||||
First, you need to specify in [`Accelerator`] that the `zero3_config.json` configuration should be used.
|
||||
|
||||
```python
|
||||
accelerator.state.select_deepspeed_plugin("teacher")
|
||||
```
|
||||
|
||||
This disables the `"student"` plugin and enables the `"teacher"` plugin instead. The
|
||||
DeepSpeed stateful config inside of Transformers is updated, and it changes which plugin configuration gets called when using
|
||||
`deepspeed.initialize()`. This allows you to use the automatic `deepspeed.zero.Init` context manager integration Transformers provides.
|
||||
|
||||
```python
|
||||
teacher_model = AutoModel.from_pretrained(...)
|
||||
teacher_model = accelerator.prepare(teacher_model)
|
||||
```
|
||||
|
||||
Otherwise, you should manually initialize the model with `deepspeed.zero.Init`.
|
||||
```python
|
||||
with deepspeed.zero.Init(accelerator.deepspeed_plugin.config):
|
||||
model = MyModel(...)
|
||||
```
|
||||
|
||||
### Training
|
||||
|
||||
From here, your training loop can be whatever you like, as long as `teacher_model` is never trained.
|
||||
|
||||
```python
|
||||
teacher_model.eval()
|
||||
student_model.train()
|
||||
for batch in train_dataloader:
|
||||
with torch.no_grad():
|
||||
output_teacher = teacher_model(**batch)
|
||||
output_student = student_model(**batch)
|
||||
# Combine the losses or modify it in some way
|
||||
loss = output_teacher.loss + output_student.loss
|
||||
accelerator.backward(loss)
|
||||
optimizer.step()
|
||||
scheduler.step()
|
||||
optimizer.zero_grad()
|
||||
```
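The combined loss above simply adds the two losses together. A more typical distillation objective compares the student's predictions to the teacher's; below is a minimal sketch of how the loss line could be replaced, assuming both models return `logits` and using an illustrative temperature of 2.0:

```python
import torch.nn.functional as F

temperature = 2.0
# Soft-target KL term between the student and teacher output distributions
distill_loss = F.kl_div(
    F.log_softmax(output_student.logits / temperature, dim=-1),
    F.softmax(output_teacher.logits / temperature, dim=-1),
    reduction="batchmean",
) * (temperature**2)
# Keep the student's own task loss alongside the distillation term
loss = output_student.loss + distill_loss
```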
|
||||
|
||||
## Train multiple disjoint models
|
||||
|
||||
Training multiple models is a more complicated scenario.
|
||||
In its current state, we assume each model is **completely disjoint** from the other during training.
|
||||
|
||||
This scenario still requires two [`utils.DeepSpeedPlugin`]'s to be made. However, you also need a second [`Accelerator`], since different `deepspeed` engines are being called at different times. A single [`Accelerator`] can only carry one DeepSpeed engine instance at a time.
|
||||
|
||||
Since the [`state.AcceleratorState`] is a stateful object though, it is already aware of both [`utils.DeepSpeedPlugin`]'s available. You can just instantiate a second [`Accelerator`] with no extra arguments.
|
||||
|
||||
```python
|
||||
first_accelerator = Accelerator(deepspeed_plugins=deepspeed_plugins)
|
||||
second_accelerator = Accelerator()
|
||||
```
|
||||
|
||||
You can call `first_accelerator.state.select_deepspeed_plugin()` to enable or disable
|
||||
a particular plugin, and then call [`prepare`].
|
||||
|
||||
```python
|
||||
# can be called on either `first_accelerator` or `second_accelerator`, or by calling `AcceleratorState().select_deepspeed_plugin(...)`
|
||||
first_accelerator.state.select_deepspeed_plugin("first_model")
|
||||
first_model = AutoModel.from_pretrained(...)
|
||||
# For this example, `get_training_items` is a nonexistent function that gets the setup we need for training
|
||||
first_optimizer, first_scheduler, train_dl, eval_dl = get_training_items(first_model)
|
||||
first_model, first_optimizer, first_scheduler, train_dl, eval_dl = first_accelerator.prepare(
|
||||
first_model, first_optimizer, first_scheduler, train_dl, eval_dl
|
||||
)
|
||||
|
||||
second_accelerator.state.select_deepspeed_plugin("second_model")
|
||||
second_model = AutoModel.from_pretrained(...)
|
||||
# For this example, `get_training_items` is a nonexistent function that gets the setup we need for training
|
||||
second_optimizer, second_scheduler, _, _ = get_training_items(second_model)
|
||||
second_model, second_optimizer, second_scheduler = second_accelerator.prepare(
|
||||
second_model, second_optimizer, second_scheduler
|
||||
)
|
||||
```
|
||||
|
||||
And now you can train:
|
||||
|
||||
```python
|
||||
for batch in dl:
|
||||
outputs1 = first_model(**batch)
|
||||
first_accelerator.backward(outputs1.loss)
|
||||
first_optimizer.step()
|
||||
first_scheduler.step()
|
||||
first_optimizer.zero_grad()
|
||||
|
||||
    outputs2 = second_model(**batch)
|
||||
second_accelerator.backward(outputs2.loss)
|
||||
second_optimizer.step()
|
||||
second_scheduler.step()
|
||||
second_optimizer.zero_grad()
|
||||
```
|
||||
|
||||
## Resources
|
||||
|
||||
To see more examples, please check out the [related tests](https://github.com/huggingface/accelerate/blob/main/src/accelerate/test_utils/scripts/external_deps/test_ds_multiple_model.py) currently in Accelerate.
|
||||
@ -13,7 +13,7 @@ specific language governing permissions and limitations under the License.
|
||||
rendered properly in your Markdown viewer.
|
||||
-->
|
||||
|
||||
# Distributed Inference with 🤗 Accelerate
|
||||
# Distributed inference
|
||||
|
||||
Distributed inference can fall into three brackets:
|
||||
|
||||
@ -56,19 +56,20 @@ def run_inference(rank, world_size):
|
||||
```
|
||||
One will notice how we have to check the rank to know what prompt to send, which can be a bit tedious.
|
||||
|
||||
A user might then also think that with 🤗 Accelerate, using the `Accelerator` to prepare a dataloader for such a task might also be
|
||||
A user might then also think that with Accelerate, using the `Accelerator` to prepare a dataloader for such a task might also be
|
||||
a simple way to manage this. (To learn more, check out the relevant section in the [Quick Tour](../quicktour#distributed-evaluation))
|
||||
|
||||
Can it manage it? Yes. Does it add unneeded extra code, however? Also yes.
|
||||
|
||||
|
||||
With 🤗 Accelerate, we can simplify this process by using the [`Accelerator.split_between_processes`] context manager (which also exists in `PartialState` and `AcceleratorState`).
|
||||
With Accelerate, we can simplify this process by using the [`Accelerator.split_between_processes`] context manager (which also exists in `PartialState` and `AcceleratorState`).
|
||||
This function will automatically split whatever data you pass to it (be it a prompt, a set of tensors, a dictionary of the prior data, etc.) across all the processes (with a potential
|
||||
to be padded) for you to use right away.
|
||||
|
||||
Let's rewrite the above example using this context manager:
|
||||
|
||||
```python
|
||||
import torch
|
||||
from accelerate import PartialState # Can also be Accelerator or AcceleratorState
|
||||
from diffusers import DiffusionPipeline
|
||||
|
||||
@ -82,7 +83,7 @@ with distributed_state.split_between_processes(["a dog", "a cat"]) as prompt:
|
||||
result.save(f"result_{distributed_state.process_index}.png")
|
||||
```
|
||||
|
||||
And then to launch the code, we can use the 🤗 Accelerate:
|
||||
And then to launch the code, we can use Accelerate:
|
||||
|
||||
If you have generated a config file to be used using `accelerate config`:
|
||||
|
||||
@ -125,6 +126,7 @@ needs to be the same length. Basic inference does not require this.
|
||||
For instance:
|
||||
|
||||
```python
|
||||
import torch
|
||||
from accelerate import PartialState # Can also be Accelerator or AcceleratorState
|
||||
from diffusers import DiffusionPipeline
|
||||
|
||||
@ -144,22 +146,20 @@ You can find more complex examples [here](https://github.com/huggingface/acceler
|
||||
|
||||
## Memory-efficient pipeline parallelism (experimental)
|
||||
|
||||
This next part will discuss using *pipeline parallelism*. This is an **experimental** API utilizing the [PiPPy library by PyTorch](https://github.com/pytorch/PiPPy/) as a native solution.
|
||||
This next part will discuss using *pipeline parallelism*. This is an **experimental** API that utilizes [torch.distributed.pipelining](https://pytorch.org/docs/stable/distributed.pipelining.html#) as a native solution.
|
||||
|
||||
The general idea with pipeline parallelism is: say you have 4 GPUs and a model big enough that it can be *split* across all four using `device_map="auto"`. With this method you can send in 4 inputs at a time (here, for example; any amount works) and each model chunk will work on an input, then receive the next input once the prior chunk has finished, making it *much* more efficient **and faster** than the method described earlier. Here's a visual taken from the PyTorch repository:
|
||||
|
||||

|
||||

|
||||
|
||||
To illustrate how you can use this with Accelerate, we have created an [example zoo](https://github.com/huggingface/accelerate/tree/main/examples/inference) showcasing a number of different models and situations. In this tutorial, we'll show this method for GPT2 across two GPUs.
|
||||
|
||||
Before you proceed, please make sure you have the latest pippy installed by running the following:
|
||||
Before you proceed, please make sure you have the latest PyTorch version installed by running the following:
|
||||
|
||||
```bash
|
||||
pip install torchpippy
|
||||
pip install torch
|
||||
```
|
||||
|
||||
We require at least version 0.2.0. To confirm that you have the correct version, run `pip show torchpippy`.
|
||||
|
||||
Start by creating the model on the CPU:
|
||||
|
||||
```{python}
|
||||
@ -170,7 +170,7 @@ model = GPT2ForSequenceClassification(config)
|
||||
model.eval()
|
||||
```
|
||||
|
||||
Next you'll need to create some example inputs to use. These help PiPPy trace the model.
|
||||
Next you'll need to create some example inputs to use. These help `torch.distributed.pipelining` trace the model.
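For instance, a minimal sketch of such example inputs for the GPT2 model above (the batch size, sequence length, and vocabulary bound here are purely illustrative) could look like this:

```python
import torch

# Two dummy sequences of token ids; only the shape and dtype matter for tracing
example_inputs = torch.randint(low=0, high=1024, size=(2, 1024), dtype=torch.int64)
```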
|
||||
|
||||
<Tip warning={true}>
|
||||
How you construct this example will determine the relative batch size that will be used/passed
|
||||
|
||||
@ -13,14 +13,14 @@ specific language governing permissions and limitations under the License.
|
||||
rendered properly in your Markdown viewer.
|
||||
-->
|
||||
|
||||
# Learning how to incorporate 🤗 Accelerate features quickly!
|
||||
# Start Here!
|
||||
|
||||
Please use the interactive tool below to help you get started with learning about a particular
|
||||
feature of 🤗 Accelerate and how to utilize it! It will provide you with a code diff, an explanation
|
||||
feature of Accelerate and how to utilize it! It will provide you with a code diff, an explanation
|
||||
towards what is going on, as well as provide you with some useful links to explore more within
|
||||
the documentation!
|
||||
|
||||
Most code examples start from the following python code before integrating 🤗 Accelerate in some way:
|
||||
Most code examples start from the following python code before integrating Accelerate in some way:
|
||||
|
||||
```python
|
||||
for batch in dataloader:
|
||||
|
||||
@ -79,7 +79,7 @@ Currently, `Accelerate` supports the following config through the CLI:
|
||||
|
||||
`fsdp_auto_wrap_policy`: [1] TRANSFORMER_BASED_WRAP, [2] SIZE_BASED_WRAP, [3] NO_WRAP
|
||||
|
||||
`fsdp_transformer_layer_cls_to_wrap`: Only applicable for 🤗 Transformers. When using `fsdp_auto_wrap_policy=TRANSFORMER_BASED_WRAP`, a user may provide a comma-separated string of transformer layer class names (case-sensitive) to wrap, e.g., `BertLayer`, `GPTJBlock`, `T5Block`, `BertLayer,BertEmbeddings,BertSelfOutput`. This is important because submodules that share weights (e.g., embedding layers) should not end up in different FSDP wrapped units. Using this policy, wrapping happens for each block containing Multi-Head Attention followed by a couple of MLP layers. Remaining layers including the shared embeddings are conveniently wrapped in same outermost FSDP unit. Therefore, use this for transformer-based models. You can use the `model._no_split_modules` for 🤗 Transformer models by answering `yes` to `Do you want to use the model's `_no_split_modules` to wrap. It will try to use `model._no_split_modules` when possible.
|
||||
`fsdp_transformer_layer_cls_to_wrap`: Only applicable for Transformers. When using `fsdp_auto_wrap_policy=TRANSFORMER_BASED_WRAP`, a user may provide a comma-separated string of transformer layer class names (case-sensitive) to wrap, e.g., `BertLayer`, `GPTJBlock`, `T5Block`, `BertLayer,BertEmbeddings,BertSelfOutput`. This is important because submodules that share weights (e.g., embedding layers) should not end up in different FSDP wrapped units. Using this policy, wrapping happens for each block containing Multi-Head Attention followed by a couple of MLP layers. The remaining layers, including the shared embeddings, are conveniently wrapped in the same outermost FSDP unit. Therefore, use this for transformer-based models. You can rely on `model._no_split_modules` for Transformers models by answering `yes` to `Do you want to use the model's _no_split_modules to wrap`; Accelerate will then try to use `model._no_split_modules` when possible.
|
||||
|
||||
`fsdp_min_num_params`: minimum number of parameters when using `fsdp_auto_wrap_policy=SIZE_BASED_WRAP`.
|
||||
|
||||
@ -91,7 +91,7 @@ Currently, `Accelerate` supports the following config through the CLI:
|
||||
|
||||
`fsdp_use_orig_params`: If True, allows non-uniform `requires_grad` during init, which means support for interspersed frozen and trainable parameters. This setting is useful in cases such as parameter-efficient fine-tuning as discussed in [this post](https://dev-discuss.pytorch.org/t/rethinking-pytorch-fully-sharded-data-parallel-fsdp-from-first-principles/1019). This option also allows one to have multiple optimizer param groups. This should be `True` when creating an optimizer before preparing/wrapping the model with FSDP.
|
||||
|
||||
`fsdp_cpu_ram_efficient_loading`: Only applicable for 🤗 Transformers models. If True, only the first process loads the pretrained model checkpoint while all other processes have empty weights. This should be set to False if you experience errors when loading the pretrained 🤗 Transformers model via `from_pretrained` method. When this setting is True `fsdp_sync_module_states` also must to be True, otherwise all the processes except the main process would have random weights leading to unexpected behaviour during training. For this to work, make sure the distributed process group is initialized before calling Transformers `from_pretrained` method. When using 🤗 Trainer API, the distributed process group is initialized when you create an instance of `TrainingArguments` class.
|
||||
`fsdp_cpu_ram_efficient_loading`: Only applicable for Transformers models. If True, only the first process loads the pretrained model checkpoint while all other processes have empty weights. Set this to False if you experience errors when loading the pretrained Transformers model via the `from_pretrained` method. When this setting is True, `fsdp_sync_module_states` must also be True, otherwise all the processes except the main process would have random weights, leading to unexpected behaviour during training. For this to work, make sure the distributed process group is initialized before calling the Transformers `from_pretrained` method. When using the Trainer API, the distributed process group is initialized when you create an instance of the `TrainingArguments` class.
|
||||
|
||||
`fsdp_sync_module_states`: If True, each individually wrapped FSDP unit will broadcast module parameters from rank 0.
|
||||
|
||||
@ -187,7 +187,7 @@ accelerate merge-weights pytorch_model_fsdp_0/ output_path
|
||||
## A few caveats to be aware of
|
||||
|
||||
- In case of multiple models, pass the optimizers to the prepare call in the same order as the corresponding models, otherwise `accelerator.save_state()` and `accelerator.load_state()` will result in wrong/unexpected behaviour (see the sketch after this list).
|
||||
- This feature is incompatible with `--predict_with_generate` in the `run_translation.py` script of 🤗 `Transformers` library.
|
||||
- This feature is incompatible with `--predict_with_generate` in the `run_translation.py` script of `Transformers` library.
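A minimal sketch of the ordering requirement mentioned above (the model and optimizer names here are illustrative):

```python
# Optimizers must appear in the same order as their corresponding models
model_a, model_b, optimizer_a, optimizer_b = accelerator.prepare(
    model_a, model_b, optimizer_a, optimizer_b
)
```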
|
||||
|
||||
For more control, users can leverage the `FullyShardedDataParallelPlugin`. After creating an instance of this class, users can pass it to the Accelerator class instantiation.
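A minimal sketch of that workflow (the argument shown is illustrative; any field of `FullyShardedDataParallelPlugin` can be set this way):

```python
from accelerate import Accelerator, FullyShardedDataParallelPlugin

# Configure FSDP programmatically instead of via `accelerate config`
fsdp_plugin = FullyShardedDataParallelPlugin(use_orig_params=True)
accelerator = Accelerator(fsdp_plugin=fsdp_plugin)
```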
|
||||
For more information on these options, please refer to the PyTorch [FullyShardedDataParallel](https://github.com/pytorch/pytorch/blob/0df2e863fbd5993a7b9e652910792bd21a516ff3/torch/distributed/fsdp/fully_sharded_data_parallel.py#L236) code.
|
||||
|
||||
38
docs/source/usage_guides/gaudi.md
Normal file
38
docs/source/usage_guides/gaudi.md
Normal file
@ -0,0 +1,38 @@
|
||||
<!--Copyright 2025 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
|
||||
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
|
||||
rendered properly in your Markdown viewer.
|
||||
-->
|
||||
|
||||
# Intel Gaudi
|
||||
|
||||
Users can take advantage of Intel Gaudi AI accelerators for significantly faster and more cost-effective model training and inference.
|
||||
The Intel Gaudi AI accelerator family currently includes three product generations: [Intel Gaudi 1](https://habana.ai/products/gaudi/), [Intel Gaudi 2](https://habana.ai/products/gaudi2/), and [Intel Gaudi 3](https://habana.ai/products/gaudi3/). Each server is equipped with 8 devices, known as Habana Processing Units (HPUs), providing 128GB of memory on Gaudi 3, 96GB on Gaudi 2, and 32GB on the first-gen Gaudi. For more details on the underlying hardware architecture, check out the [Gaudi Architecture Overview](https://docs.habana.ai/en/latest/Gaudi_Overview/Gaudi_Architecture.html).
|
||||
|
||||
## How it works out of the box
|
||||
|
||||
Accelerate's Gaudi support is enabled by default if an Intel Gaudi device is detected.
|
||||
To disable it, pass the `--cpu` flag to the `accelerate launch` command, or answer the corresponding question in the `accelerate config` questionnaire.
|
||||
|
||||
You can directly run the following script to test it out on Intel Gaudi:
|
||||
|
||||
```bash
|
||||
accelerate launch /examples/cv_example.py --data_dir images
|
||||
```
|
||||
|
||||
## Limitations
|
||||
|
||||
The following features are not part of the Accelerate library and require [Optimum for Intel Gaudi](https://huggingface.co/docs/optimum/main/en/habana/index):
|
||||
|
||||
- `fast_ddp` which implements DDP by applying an all-reduce on gradients instead of the Torch DDP wrapper.
|
||||
- `minimize_memory` which is used for fp8 training and enables keeping fp8 weights in memory between the forward and backward passes, leading to a smaller memory footprint at the cost of additional fp8 casts.
|
||||
- `context_parallel_size` which is used for Context/Sequence Parallelism (CP/SP) and partitions the network inputs and activations along sequence dimension to reduce memory footprint and increase throughput.
|
||||
@ -13,7 +13,7 @@ specific language governing permissions and limitations under the License.
|
||||
rendered properly in your Markdown viewer.
|
||||
-->
|
||||
|
||||
# Performing gradient accumulation with 🤗 Accelerate
|
||||
# Performing gradient accumulation with Accelerate
|
||||
|
||||
Gradient accumulation is a technique where you can train on bigger batch sizes than
|
||||
your machine would normally be able to fit into memory. This is done by accumulating gradients over
|
||||
@ -22,7 +22,7 @@ several batches, and only stepping the optimizer after a certain number of batch
|
||||
While technically standard gradient accumulation code would work fine in a distributed setup, it is not the most efficient
|
||||
method for doing so and you may experience considerable slowdowns!
|
||||
|
||||
In this tutorial you will see how to quickly setup gradient accumulation and perform it with the utilities provided in 🤗 Accelerate,
|
||||
In this tutorial you will see how to quickly set up gradient accumulation and perform it with the utilities provided in Accelerate,
|
||||
which can total to adding just one new line of code!
|
||||
|
||||
This example will use a very simplistic PyTorch training loop that performs gradient accumulation every two batches:
|
||||
@ -47,9 +47,9 @@ for index, batch in enumerate(training_dataloader):
|
||||
optimizer.zero_grad()
|
||||
```
|
||||
|
||||
## Converting it to 🤗 Accelerate
|
||||
## Converting it to Accelerate
|
||||
|
||||
First the code shown earlier will be converted to utilize 🤗 Accelerate without the special gradient accumulation helper:
|
||||
First the code shown earlier will be converted to utilize Accelerate without the special gradient accumulation helper:
|
||||
|
||||
```diff
|
||||
+ from accelerate import Accelerator
|
||||
@ -79,9 +79,9 @@ First the code shown earlier will be converted to utilize 🤗 Accelerate withou
|
||||
|
||||
</Tip>
|
||||
|
||||
## Letting 🤗 Accelerate handle gradient accumulation
|
||||
## Letting Accelerate handle gradient accumulation
|
||||
|
||||
All that is left now is to let 🤗 Accelerate handle the gradient accumulation for us. To do so you should pass in a `gradient_accumulation_steps` parameter to [`Accelerator`], dictating the number
|
||||
All that is left now is to let Accelerate handle the gradient accumulation for us. To do so you should pass in a `gradient_accumulation_steps` parameter to [`Accelerator`], dictating the number
|
||||
of steps to perform before each call to `step()` and how to automatically adjust the loss during the call to [`~Accelerator.backward`]:
|
||||
|
||||
```diff
|
||||
@ -120,7 +120,7 @@ As you can see the [`Accelerator`] is able to keep track of the batch number you
|
||||
<Tip>
|
||||
|
||||
Typically with gradient accumulation, you would need to adjust the number of steps to reflect the change in total batches you are
|
||||
training on. 🤗 Accelerate automagically does this for you by default. Behind the scenes we instantiate a [`GradientAccumulationPlugin`] configured to do this.
|
||||
training on. Accelerate automagically does this for you by default. Behind the scenes we instantiate a [`GradientAccumulationPlugin`] configured to do this.
|
||||
|
||||
</Tip>
|
||||
|
||||
@ -140,7 +140,7 @@ accelerator = Accelerator(..., gradient_accumulation_plugin=plugin)
|
||||
|
||||
## The finished code
|
||||
|
||||
Below is the finished implementation for performing gradient accumulation with 🤗 Accelerate
|
||||
Below is the finished implementation for performing gradient accumulation with Accelerate
|
||||
|
||||
```python
|
||||
from accelerate import Accelerator
|
||||
@ -171,7 +171,7 @@ To learn more about what magic this wraps around, read the [Gradient Synchroniza
|
||||
|
||||
## Self-contained example
|
||||
|
||||
Here is a self-contained example that you can run to see gradient accumulation in action with 🤗 Accelerate:
|
||||
Here is a self-contained example that you can run to see gradient accumulation in action with Accelerate:
|
||||
|
||||
```python
|
||||
import torch
|
||||
@ -187,38 +187,46 @@ set_seed(0)
|
||||
x = torch.tensor([1., 2., 3., 4., 5., 6., 7., 8.])
|
||||
y = torch.tensor([2., 4., 6., 8., 10., 12., 14., 16.])
|
||||
gradient_accumulation_steps = 4
|
||||
batch_size = len(x) // gradient_accumulation_steps
|
||||
per_device_batch_size = len(x) // gradient_accumulation_steps
|
||||
|
||||
# define dataset and dataloader
|
||||
dataset = TensorDataset(x, y)
|
||||
dataloader = DataLoader(dataset, batch_size=batch_size)
|
||||
dataloader = DataLoader(dataset, batch_size=per_device_batch_size)
|
||||
|
||||
# define model, optimizer and loss function
|
||||
model = torch.zeros((1, 1), requires_grad=True)
|
||||
class SimpleLinearModel(torch.nn.Module):
|
||||
def __init__(self):
|
||||
super(SimpleLinearModel, self).__init__()
|
||||
self.weight = torch.nn.Parameter(torch.zeros((1, 1)))
|
||||
|
||||
def forward(self, inputs):
|
||||
return inputs @ self.weight
|
||||
|
||||
model = SimpleLinearModel()
|
||||
model_clone = copy.deepcopy(model)
|
||||
criterion = torch.nn.MSELoss()
|
||||
model_optimizer = torch.optim.SGD([model], lr=0.02)
|
||||
model_optimizer = torch.optim.SGD(model.parameters(), lr=0.02)
|
||||
accelerator = Accelerator(gradient_accumulation_steps=gradient_accumulation_steps)
|
||||
model, model_optimizer, dataloader = accelerator.prepare(model, model_optimizer, dataloader)
|
||||
model_clone_optimizer = torch.optim.SGD([model_clone], lr=0.02)
|
||||
print(f"initial model weight is {model.mean().item():.5f}")
|
||||
print(f"initial model weight is {model_clone.mean().item():.5f}")
|
||||
model_clone_optimizer = torch.optim.SGD(model_clone.parameters(), lr=0.02)
|
||||
print(f"initial model weight is {model.weight.mean().item():.5f}")
|
||||
print(f"initial model weight is {model_clone.weight.mean().item():.5f}")
|
||||
for i, (inputs, labels) in enumerate(dataloader):
|
||||
with accelerator.accumulate(model):
|
||||
inputs = inputs.view(-1, 1)
|
||||
print(i, inputs.flatten())
|
||||
labels = labels.view(-1, 1)
|
||||
outputs = inputs @ model
|
||||
outputs = model(inputs)
|
||||
loss = criterion(outputs, labels)
|
||||
accelerator.backward(loss)
|
||||
model_optimizer.step()
|
||||
model_optimizer.zero_grad()
|
||||
loss = criterion(x.view(-1, 1) @ model_clone, y.view(-1, 1))
|
||||
loss = criterion(x.view(-1, 1) @ model_clone.weight, y.view(-1, 1))
|
||||
model_clone_optimizer.zero_grad()
|
||||
loss.backward()
|
||||
model_clone_optimizer.step()
|
||||
print(f"w/ accumulation, the final model weight is {model.mean().item():.5f}")
|
||||
print(f"w/o accumulation, the final model weight is {model_clone.mean().item():.5f}")
|
||||
print(f"w/ accumulation, the final model weight is {model.weight.mean().item():.5f}")
|
||||
print(f"w/o accumulation, the final model weight is {model_clone.weight.mean().item():.5f}")
|
||||
```
|
||||
```
|
||||
initial model weight is 0.00000
|
||||
@ -230,3 +238,233 @@ initial model weight is 0.00000
|
||||
w/ accumulation, the final model weight is 2.04000
|
||||
w/o accumulation, the final model weight is 2.04000
|
||||
```
|
||||
|
||||
## Gradient accumulation on training samples of variable size
|
||||
|
||||
As pointed out in this [blog post](https://huggingface.co/blog/gradient_accumulation), a common error occurs when performing gradient accumulation on training samples of variable size:
|
||||
|
||||
> [...] for gradient accumulation across token-level tasks like causal LM training, the correct loss should be computed by the **total loss across all batches in a gradient accumulation step** divided by the **total number of all non padding tokens in those batches**. This is not the same as the average of the per-batch loss values.
|
||||
|
||||
In other words, some adjustments must be made on losses that operate on a token-level basis.
|
||||
|
||||
### Skeleton code
|
||||
|
||||
```python
|
||||
from accelerate import Accelerator
|
||||
import math
|
||||
import contextlib
|
||||
|
||||
gradient_accumulation_steps = 2
|
||||
accelerator = Accelerator(gradient_accumulation_steps=gradient_accumulation_steps)
|
||||
model, optimizer, training_dataloader, scheduler = accelerator.prepare(
|
||||
model, optimizer, training_dataloader, scheduler
|
||||
)
|
||||
|
||||
training_iterator = iter(training_dataloader)
|
||||
num_samples_in_epoch = len(training_dataloader)
|
||||
remainder = num_samples_in_epoch % gradient_accumulation_steps
|
||||
remainder = remainder if remainder != 0 else gradient_accumulation_steps
|
||||
total_updates = math.ceil(num_samples_in_epoch / gradient_accumulation_steps)
|
||||
|
||||
|
||||
total_batched_samples = 0
|
||||
for update_step in range(total_updates):
|
||||
    # In order to correctly compute the total number of non-padded tokens on which we'll compute the cross-entropy loss
|
||||
    # we need to pre-load the full local batch - i.e. the next per_device_batch_size * accumulation_steps samples
|
||||
batch_samples = []
|
||||
num_batches_in_step = gradient_accumulation_steps if update_step != (total_updates - 1) else remainder
|
||||
for _ in range(num_batches_in_step):
|
||||
batch_samples += [next(training_iterator)]
|
||||
|
||||
# get local num items in batch
|
||||
num_items_in_batch = sum([(batch["labels"].ne(-100)).sum() for batch in batch_samples])
|
||||
# to compute it correctly in a multi-device DDP training, we need to gather the total number of items in the full batch.
|
||||
num_items_in_batch = accelerator.gather(num_items_in_batch).sum().item()
|
||||
|
||||
for i, batch in enumerate(batch_samples):
|
||||
# if we perform gradient accumulation in a multi-devices set-up, we want to avoid unnecessary communications when accumulating
|
||||
# cf: https://muellerzr.github.io/blog/gradient_accumulation.html
|
||||
if (i < len(batch_samples) - 1 and accelerator.num_processes > 1):
|
||||
ctx = model.no_sync
|
||||
else:
|
||||
ctx = contextlib.nullcontext
|
||||
|
||||
total_batched_samples += 1
|
||||
|
||||
with ctx():
|
||||
inputs, targets = batch
|
||||
outputs = model(inputs)
|
||||
loss = loss_function(outputs, targets) # the loss function should sum over samples rather than averaging
|
||||
|
||||
# We multiply by num_processes because the DDP calculates the average gradient across all devices whereas dividing by num_items_in_batch already takes into account all devices
|
||||
            # Same reason for gradient_accumulation_steps, but this time it's Accelerate that calculates the average gradient across the accumulated steps
|
||||
loss = (loss * gradient_accumulation_steps * accelerator.num_processes) / num_items_in_batch
|
||||
|
||||
accelerator.backward(loss)
|
||||
|
||||
# Sync gradients and perform optimization steps once every gradient_accumulation_steps
|
||||
optimizer.step()
|
||||
scheduler.step()
|
||||
optimizer.zero_grad()
|
||||
```
|
||||
|
||||
### Self-contained causal LM example
|
||||
|
||||
```py
|
||||
import torch
|
||||
import copy
|
||||
from accelerate import Accelerator
|
||||
from accelerate.utils import set_seed
|
||||
from accelerate.logging import get_logger
|
||||
from torch.utils.data import Dataset, DataLoader
|
||||
import math
|
||||
import contextlib
|
||||
|
||||
# seed
|
||||
set_seed(0)
|
||||
logger = get_logger(__name__)
|
||||
|
||||
class MyDataset(Dataset):
|
||||
def __init__(self, num_samples):
|
||||
super().__init__()
|
||||
self.len = num_samples
|
||||
|
||||
def __getitem__(self, index):
|
||||
input_ids = torch.arange(1, index+2, dtype=torch.float32)
|
||||
labels = torch.remainder(input_ids, 2)
|
||||
return {"input_ids": input_ids, "labels": labels}
|
||||
|
||||
def __len__(self):
|
||||
return self.len
|
||||
|
||||
def collate_fn(features):
|
||||
input_ids = torch.nn.utils.rnn.pad_sequence([f["input_ids"] for f in features], batch_first=True, padding_value=-100)
|
||||
labels = torch.nn.utils.rnn.pad_sequence([f["labels"] for f in features], batch_first=True, padding_value=-100)
|
||||
return {"input_ids": input_ids[..., None], "labels": labels[..., None]}
|
||||
|
||||
# define toy inputs and labels
|
||||
gradient_accumulation_steps = 2
|
||||
per_device_batch_size = 4
|
||||
|
||||
# define accelerator
|
||||
accelerator = Accelerator(gradient_accumulation_steps=gradient_accumulation_steps)
|
||||
|
||||
# define dataset and dataloader
|
||||
# for this toy example, we'll compute gradient descent over one single global batch
|
||||
dataset = MyDataset(per_device_batch_size*gradient_accumulation_steps*accelerator.num_processes)
|
||||
dataloader = DataLoader(dataset, batch_size=per_device_batch_size, collate_fn=collate_fn)
|
||||
|
||||
# define model, model_optimizer and loss function
|
||||
model = torch.nn.Linear(1, 2, bias=False)
|
||||
model_clone = copy.deepcopy(model)
|
||||
criterion = torch.nn.CrossEntropyLoss(reduction="sum") # must sum over samples rather than averaging
|
||||
model_optimizer = torch.optim.SGD(model.parameters(), lr=0.08)
|
||||
|
||||
|
||||
logger.warning(f"initial model weight is {model.weight.detach().cpu().squeeze()}")
|
||||
logger.warning(f"initial model clone weight is {model_clone.weight.detach().cpu().squeeze()}")
|
||||
|
||||
# prepare artifacts - accelerator handles device placement and dataloader splitting
|
||||
model, model_optimizer = accelerator.prepare(model, model_optimizer)
|
||||
dataloader = accelerator.prepare_data_loader(dataloader, device_placement=True)
|
||||
training_iterator = iter(dataloader)
|
||||
|
||||
num_samples_in_epoch = len(dataloader)
|
||||
remainder = num_samples_in_epoch % gradient_accumulation_steps
|
||||
remainder = remainder if remainder != 0 else gradient_accumulation_steps
|
||||
total_gradient_updates = math.ceil(num_samples_in_epoch / gradient_accumulation_steps)
|
||||
|
||||
total_batched_samples = 0
|
||||
for update_step in range(total_gradient_updates):
|
||||
    # In order to correctly compute the total number of non-padded tokens on which we'll compute the cross-entropy loss
|
||||
    # we need to pre-load the full local batch - i.e. the next per_device_batch_size * accumulation_steps samples
|
||||
batch_samples = []
|
||||
num_batches_in_step = gradient_accumulation_steps if update_step != (total_gradient_updates - 1) else remainder
|
||||
for _ in range(num_batches_in_step):
|
||||
batch_samples += [next(training_iterator)]
|
||||
|
||||
# get local num items in batch
|
||||
local_num_items_in_batch = sum([(batch["labels"].ne(-100)).sum() for batch in batch_samples])
|
||||
logger.warning(f"Step {update_step} - Device {accelerator.process_index} - num items in the local batch {local_num_items_in_batch}", main_process_only=False)
|
||||
|
||||
# to compute it correctly in a multi-device DDP training, we need to gather the total number of items in the full batch.
|
||||
num_items_in_batch = accelerator.gather(local_num_items_in_batch).sum().item()
|
||||
logger.warning(f"Total num items {num_items_in_batch}")
|
||||
|
||||
for i, batch in enumerate(batch_samples):
|
||||
inputs, labels = batch["input_ids"], batch["labels"]
|
||||
total_batched_samples += 1
|
||||
# if we perform gradient accumulation in a multi-devices set-up, we want to avoid unnecessary communications when accumulating
|
||||
# cf: https://muellerzr.github.io/blog/gradient_accumulation.html
|
||||
if (i < len(batch_samples) - 1 and accelerator.num_processes > 1):
|
||||
ctx = model.no_sync
|
||||
else:
|
||||
ctx = contextlib.nullcontext
|
||||
with ctx():
|
||||
|
||||
outputs = model(inputs)
|
||||
loss = criterion(outputs.view(-1, 2), labels.view(-1).to(torch.int64))
|
||||
|
||||
# We multiply by num_processes because the DDP calculates the average gradient across all devices whereas dividing by num_items_in_batch already takes into account all devices
|
||||
            # Same reason for gradient_accumulation_steps, but this time it's Accelerate that calculates the average gradient across the accumulated steps
|
||||
loss = (loss * gradient_accumulation_steps * accelerator.num_processes) / num_items_in_batch
|
||||
accelerator.backward(loss)
|
||||
model_optimizer.step()
|
||||
model_optimizer.zero_grad()
|
||||
|
||||
|
||||
logger.warning(f"Device {accelerator.process_index} - w/ accumulation, the final model weight is {accelerator.unwrap_model(model).weight.detach().cpu().squeeze()}", main_process_only=False)
|
||||
|
||||
# We now do the same operation but on a single device and without gradient accumulation
|
||||
|
||||
if accelerator.is_main_process:
|
||||
# prepare one single entire batch
|
||||
dataloader = DataLoader(dataset, batch_size=len(dataset), collate_fn=collate_fn)
|
||||
full_batch_without_accum = next(iter(dataloader))
|
||||
total_inputs, total_labels = full_batch_without_accum["input_ids"], full_batch_without_accum["labels"]
|
||||
model_clone_optimizer = torch.optim.SGD(model_clone.parameters(), lr=0.08)
|
||||
|
||||
# train the cloned model
|
||||
loss = torch.nn.CrossEntropyLoss(reduction="mean")(model_clone(total_inputs).view(-1, 2), total_labels.view(-1).to(torch.int64))
|
||||
model_clone_optimizer.zero_grad()
|
||||
loss.backward()
|
||||
model_clone_optimizer.step()
|
||||
|
||||
# We should have the same final weights.
|
||||
logger.warning(f"w/o accumulation, the final model weight is {model_clone.weight.detach().cpu().squeeze()}")
|
||||
|
||||
```
|
||||
|
||||
Results on a single device - gradient accumulation steps set to 1 and batch_size set to 8:
|
||||
```
|
||||
initial model weight is tensor([-0.0075, 0.5364])
|
||||
initial model clone weight is tensor([-0.0075, 0.5364])
|
||||
Step 0 - Device 0 - num items in the local batch 36
|
||||
Total num items 36
|
||||
Device 0 - w/ accumulation, the final model weight is tensor([0.0953, 0.4337])
|
||||
w/o accumulation, the final model weight is tensor([0.0953, 0.4337])
|
||||
```
|
||||
|
||||
Results on a two-device setup - gradient accumulation steps set to 2 and batch_size set to 4.
|
||||
```
|
||||
initial model weight is tensor([-0.0075, 0.5364])
|
||||
initial model clone weight is tensor([-0.0075, 0.5364])
|
||||
Step 0 - Device 0 - num items in the local batch 52
|
||||
Step 0 - Device 1 - num items in the local batch 84
|
||||
Total num items 136
|
||||
Device 1 - w/ accumulation, the final model weight is tensor([0.2117, 0.3172])
|
||||
Device 0 - w/ accumulation, the final model weight is tensor([0.2117, 0.3172])
|
||||
w/o accumulation, the final model weight is tensor([0.2117, 0.3172])
|
||||
```
|
||||
|
||||
### To go further:
|
||||
|
||||
Please find a complete example script on a real world training run in the examples folder at the path [`accelerate/examples/by_feature/gradient_accumulation_for_autoregressive_models.py`](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/gradient_accumulation_for_autoregressive_models.py).
|
||||
|
||||
Running it on several training configurations with constant global batch size equal to 32 gives the following graph:
|
||||
|
||||
<div style="text-align: center">
|
||||
<img src="https://huggingface.co/datasets/hf-audio/gradient_accumulation_example/resolve/main/training_losses.png">
|
||||
</div>
|
||||
|
||||
Note that the training losses are exactly the same up to training step 20. The small deviation after this training step occurs at the very end of the first epoch, because, by [default](https://huggingface.co/docs/accelerate/en/package_reference/torch_wrappers#accelerate.data_loader.prepare_data_loader.even_batches), the dataloader duplicates the samples at the beginning of the dataset when the total batch size doesn't exactly divide the dataset.
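If you want to avoid that duplication at the epoch boundary, one option is to turn off `even_batches` when building the [`Accelerator`]; a minimal sketch is below (be aware that processes may then receive final batches of different sizes):

```python
from accelerate import Accelerator
from accelerate.utils import DataLoaderConfiguration

accelerator = Accelerator(
    gradient_accumulation_steps=2,
    dataloader_config=DataLoaderConfiguration(even_batches=False),
)
```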
|
||||
|
||||
@ -13,34 +13,11 @@ specific language governing permissions and limitations under the License.
|
||||
rendered properly in your Markdown viewer.
|
||||
-->
|
||||
|
||||
# Intel® Extension for PyTorch
|
||||
|
||||
[IPEX](https://github.com/intel/intel-extension-for-pytorch) is optimized for CPUs with AVX-512 or above, and functionally works for CPUs with only AVX2. So, it is expected to bring performance benefit for Intel CPU generations with AVX-512 or above while CPUs with only AVX2 (e.g., AMD CPUs or older Intel CPUs) might result in a better performance under IPEX, but not guaranteed. IPEX provides performance optimizations for CPU training with both Float32 and BFloat16. The usage of BFloat16 is the main focus of the following sections.
|
||||
|
||||
Low precision data type BFloat16 has been natively supported on the 3rd Generation Xeon® Scalable Processors (aka Cooper Lake) with AVX512 instruction set and will be supported on the next generation of Intel® Xeon® Scalable Processors with Intel® Advanced Matrix Extensions (Intel® AMX) instruction set with further boosted performance. The Auto Mixed Precision for CPU backend has been enabled since PyTorch-1.10. At the same time, the support of Auto Mixed Precision with BFloat16 for CPU and BFloat16 optimization of operators has been massively enabled in Intel® Extension for PyTorch, and partially upstreamed to PyTorch master branch. Users can get better performance and user experience with IPEX Auto Mixed Precision.
|
||||
|
||||
## IPEX installation:
|
||||
|
||||
IPEX release is following PyTorch, to install via pip:
|
||||
|
||||
| PyTorch Version | IPEX version |
|
||||
| :---------------: | :----------: |
|
||||
| 2.0 | 2.0.0 |
|
||||
| 1.13 | 1.13.0 |
|
||||
| 1.12 | 1.12.300 |
|
||||
| 1.11 | 1.11.200 |
|
||||
| 1.10 | 1.10.100 |
|
||||
|
||||
```
|
||||
pip install intel_extension_for_pytorch==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu
|
||||
```
|
||||
|
||||
Check more approaches for [IPEX installation](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/installation.html).
|
||||
|
||||
# Training on Intel CPU
|
||||
|
||||
## How It Works For Training optimization in CPU
|
||||
|
||||
🤗 Accelerate has integrated [IPEX](https://github.com/intel/intel-extension-for-pytorch), all you need to do is enabling it through the config.
|
||||
Accelerate has full support for Intel CPU; all you need to do is enable it through the config.
|
||||
|
||||
**Scenario 1**: Acceleration of No distributed CPU training
|
||||
|
||||
@ -55,7 +32,6 @@ This machine
|
||||
Which type of machine are you using?
|
||||
No distributed training
|
||||
Do you want to run your training on CPU only (even if a GPU / Apple Silicon device is available)? [yes/NO]:yes
|
||||
Do you want to use Intel PyTorch Extension (IPEX) to speed up training on CPU? [yes/NO]:yes
|
||||
Do you wish to optimize your script with torch dynamo?[yes/NO]:NO
|
||||
Do you want to use DeepSpeed? [yes/NO]: NO
|
||||
-----------------------------------------------------------------------------------------------------------------------------------------------------------
|
||||
@ -69,15 +45,12 @@ default options when doing
|
||||
accelerate launch my_script.py --args_to_my_script
|
||||
```
|
||||
|
||||
For instance, here is how you would run the NLP example `examples/nlp_example.py` (from the root of the repo) with IPEX enabled.
|
||||
default_config.yaml that is generated after `accelerate config`
|
||||
For instance, here is how you would run the NLP example `examples/nlp_example.py` (from the root of the repo) with `default_config.yaml` which is generated by `accelerate config`
|
||||
|
||||
```bash
|
||||
compute_environment: LOCAL_MACHINE
|
||||
distributed_type: 'NO'
|
||||
downcast_bf16: 'no'
|
||||
ipex_config:
|
||||
ipex: true
|
||||
machine_rank: 0
|
||||
main_training_function: main
|
||||
mixed_precision: bf16
|
||||
@ -94,6 +67,9 @@ use_cpu: true
|
||||
accelerate launch examples/nlp_example.py
|
||||
```
|
||||
|
||||
> [!CAUTION]
|
||||
> `accelerator.prepare` can currently only handle simultaneously preparing multiple models (and no optimizer) OR a single model-optimizer pair for training. Other attempts (e.g., two model-optimizer pairs) will raise a verbose error. To work around this limitation, consider separately using `accelerator.prepare` for each model-optimizer pair.
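For instance, a minimal sketch of the suggested workaround (the model and optimizer names are illustrative):

```python
# Prepare each model-optimizer pair in its own call
model_a, optimizer_a = accelerator.prepare(model_a, optimizer_a)
model_b, optimizer_b = accelerator.prepare(model_b, optimizer_b)
```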
|
||||
|
||||
**Scenario 2**: Acceleration of distributed CPU training
|
||||
We use Intel oneCCL for communication, combined with the Intel® MPI library, to deliver flexible, efficient, scalable cluster messaging on Intel® architecture. You can refer to [this guide](https://huggingface.co/docs/transformers/perf_train_cpu_many) for the installation instructions.
|
||||
|
||||
@ -114,7 +90,6 @@ What is the rank of this machine?
|
||||
What is the IP address of the machine that will host the main process? 36.112.23.24
|
||||
What is the port you will use to communicate with the main process? 29500
|
||||
Are all the machines on the same local network? Answer `no` if nodes are on the cloud and/or on different network hosts [YES/no]: yes
|
||||
Do you want to use Intel PyTorch Extension (IPEX) to speed up training on CPU? [yes/NO]:yes
|
||||
Do you want accelerate to launch mpirun? [yes/NO]: yes
|
||||
Please enter the path to the hostfile to use with mpirun [~/hostfile]: ~/hostfile
|
||||
Enter the number of oneCCL worker threads [1]: 1
|
||||
@ -126,13 +101,11 @@ bf16
|
||||
```
|
||||
For instance, here is how you would run the NLP example `examples/nlp_example.py` (from the root of the repo) with IPEX enabled for distributed CPU training.
|
||||
|
||||
default_config.yaml that is generated after `accelerate config`
|
||||
`default_config.yaml` which is generated by `accelerate config`
|
||||
```bash
|
||||
compute_environment: LOCAL_MACHINE
|
||||
distributed_type: MULTI_CPU
|
||||
downcast_bf16: 'no'
|
||||
ipex_config:
|
||||
ipex: true
|
||||
machine_rank: 0
|
||||
main_process_ip: 36.112.23.24
|
||||
main_process_port: 29500
|
||||
@ -153,8 +126,10 @@ use_cpu: true
|
||||
|
||||
Set the following environment variables and use Intel MPI to launch the training.
|
||||
|
||||
In node0, you need to create a configuration file which contains the IP addresses of each node (for example hostfile) and pass that configuration file path as an argument.
|
||||
If you selected to have Accelerate launch `mpirun`, ensure that the location of your hostfile matches the path in the config.
|
||||
In `node0`, you need to create a configuration file which contains the IP addresses of each node (for example hostfile) and pass that configuration file path as an argument.
|
||||
|
||||
If you selected to let Accelerate launch `mpirun`, ensure that the location of your hostfile matches the path in the config.
|
||||
|
||||
```bash
|
||||
$ cat hostfile
|
||||
xxx.xxx.xxx.xxx #node0 ip
|
||||
@ -162,18 +137,18 @@ xxx.xxx.xxx.xxx #node1 ip
|
||||
xxx.xxx.xxx.xxx #node2 ip
|
||||
xxx.xxx.xxx.xxx #node3 ip
|
||||
```
|
||||
When Accelerate is launching `mpirun`, source the oneCCL bindings setvars.sh to get your Intel MPI environment, and then
|
||||
run your script using `accelerate launch`. Note that the python script and environment needs to exist on all of the
|
||||
machines being used for multi-CPU training.
|
||||
|
||||
Before executing the `accelerate launch` command, you need to source the oneCCL bindings `setvars.sh` to set up your Intel MPI environment properly. Note that both the python script and the environment need to be available on all of the machines being used for multi-CPU training.
|
||||
|
||||
```bash
|
||||
oneccl_bindings_for_pytorch_path=$(python -c "from oneccl_bindings_for_pytorch import cwd; print(cwd)")
|
||||
source $oneccl_bindings_for_pytorch_path/env/setvars.sh
|
||||
|
||||
accelerate launch examples/nlp_example.py
|
||||
```
|
||||
Otherwise, if you selected not to have Accelerate launch `mpirun`, run the following command in node0 and **16DDP** will
|
||||
be enabled in node0,node1,node2,node3 with BF16 mixed precision. When using this method, the python script, python
|
||||
environment, and accelerate config file need to be present on all of the machines used for multi-CPU training.
|
||||
|
||||
You can also launch distributed training directly with the `mpirun` command. Run the following command in node0, and **16DDP** will be enabled in node0, node1, node2, and node3 with BF16 mixed precision. When using this method, the python script, python environment, and accelerate config file need to be available on all of the machines used for multi-CPU training.
|
||||
|
||||
```bash
|
||||
oneccl_bindings_for_pytorch_path=$(python -c "from oneccl_bindings_for_pytorch import cwd; print(cwd)")
|
||||
source $oneccl_bindings_for_pytorch_path/env/setvars.sh
|
||||
@ -182,11 +157,3 @@ export MASTER_ADDR=xxx.xxx.xxx.xxx #node0 ip
|
||||
export CCL_ATL_TRANSPORT=ofi
|
||||
mpirun -f hostfile -n 16 -ppn 4 accelerate launch examples/nlp_example.py
|
||||
```
|
||||
|
||||
## Related Resources
|
||||
|
||||
- [Project's github](https://github.com/intel/intel-extension-for-pytorch)
|
||||
- [API docs](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/api_doc.html)
|
||||
- [Tuning guide](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/performance_tuning/tuning_guide.html)
|
||||
- [Blogs & Publications](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/blogs_publications.html)
|
||||
|
||||
@ -13,12 +13,12 @@ specific language governing permissions and limitations under the License.
|
||||
rendered properly in your Markdown viewer.
|
||||
-->
|
||||
|
||||
# Using Local SGD with 🤗 Accelerate
|
||||
# Using Local SGD with Accelerate
|
||||
|
||||
Local SGD is a technique for distributed training where gradients are not synchronized every step. Thus, each process updates its own version of the model weights and, after a given number of steps, these weights are synchronized by averaging across all processes. This improves communication efficiency and can lead to a substantial training speed-up, especially when a computer lacks a faster interconnect such as NVLink.
|
||||
Unlike gradient accumulation (where improving communication efficiency requires increasing the effective batch size), Local SGD does not require changing a batch size or a learning rate / schedule. However, if necessary, Local SGD can be combined with gradient accumulation as well.
|
||||
|
||||
In this tutorial you will see how to quickly setup Local SGD 🤗 Accelerate. Compared to a standard Accelerate setup, this requires only two extra lines of code.
|
||||
In this tutorial you will see how to quickly set up Local SGD with Accelerate. Compared to a standard Accelerate setup, this requires only two extra lines of code.
|
||||
|
||||
This example will use a very simplistic PyTorch training loop that performs gradient accumulation every two batches:
|
||||
|
||||
@ -42,9 +42,9 @@ for index, batch in enumerate(training_dataloader):
|
||||
optimizer.zero_grad()
|
||||
```
|
||||
|
||||
## Converting it to 🤗 Accelerate
|
||||
## Converting it to Accelerate
|
||||
|
||||
First the code shown earlier will be converted to use 🤗 Accelerate with neither a LocalSGD or a gradient accumulation helper:
|
||||
First, the code shown earlier will be converted to use Accelerate without either a LocalSGD or a gradient accumulation helper:
|
||||
|
||||
```diff
|
||||
+ from accelerate import Accelerator
|
||||
@ -67,9 +67,9 @@ First the code shown earlier will be converted to use 🤗 Accelerate with neit
|
||||
scheduler.step()
|
||||
```
|
||||
|
||||
## Letting 🤗 Accelerate handle model synchronization
|
||||
## Letting Accelerate handle model synchronization
|
||||
|
||||
All that is left now is to let 🤗 Accelerate handle model parameter synchronization **and** the gradient accumulation for us. For simplicity let us assume we need to synchronize every 8 steps. This is
|
||||
All that is left now is to let Accelerate handle model parameter synchronization **and** the gradient accumulation for us. For simplicity let us assume we need to synchronize every 8 steps. This is
|
||||
achieved by adding one `with LocalSGD` statement and one call `local_sgd.step()` after every optimizer step:
|
||||
|
||||
```diff
|
||||
@ -92,7 +92,7 @@ Under the hood, the Local SGD code **disables** automatic gradient synchronizati
|
||||
|
||||
## Limitations
|
||||
|
||||
The current implementation works only with basic multi-GPU (or multi-CPU) training without, e.g., [DeepSpeed.](https://github.com/microsoft/DeepSpeed).
|
||||
The current implementation works only with basic multi-GPU (or multi-CPU) training without, e.g., [DeepSpeed](https://github.com/deepspeedai/DeepSpeed).
|
||||
|
||||
## References
|
||||
|
||||
|
||||
@ -15,22 +15,22 @@ rendered properly in your Markdown viewer.
|
||||
|
||||
# Low Precision Training Methods
|
||||
|
||||
🤗 Accelerate provides integrations to train on lower precision methods using specified supported hardware through the `TransformersEngine` and `MS-AMP` packages. This documentation will help guide you through what hardware is supported, how to configure your [`Accelerator`] to leverage the low precision methods, and what you can expect when training.
|
||||
Accelerate provides integrations to train on lower precision methods using specified supported hardware through the `TransformersEngine`, `MS-AMP`, and `torchao` packages. This documentation will help guide you through what hardware is supported, how to configure your [`Accelerator`] to leverage the low precision methods, and what you can expect when training.
|
||||
|
||||
## What training on FP8 means
|
||||
|
||||
To explore more of the nitty-gritty in training in FP8 with PyTorch and 🤗 Accelerate, check out the [concept_guide](../concept_guides/low_precision_training) on why this can be difficult. But essentially rather than training in BF16, some (or all) aspects of training a model can be performed using 8 bits instead of 16. The challenge is doing so without degrading final performance.
|
||||
To explore more of the nitty-gritty in training in FP8 with PyTorch and Accelerate, check out the [concept_guide](../concept_guides/low_precision_training) on why this can be difficult. But essentially rather than training in BF16, some (or all) aspects of training a model can be performed using 8 bits instead of 16. The challenge is doing so without degrading final performance.
|
||||
|
||||
This is only enabled on specific NVIDIA hardware, namely:
|
||||
|
||||
* Anything after the 3000 series consumer graphics cards (such as the 4090)
|
||||
* Hopper-based GPU architectures (such as the `H100` and `H200`)
|
||||
|
||||
What this will result in is some gain in the memory used (as we've cut the needed memory in half for some parts of training) and an increase in throughput *should* be seen as well for larger models that can replace certain layers with FP8-enabled ones.
|
||||
What this will result in is some reduction in the memory used (as we've cut the needed memory in half for some parts of training) and an increase in throughput *should* be seen as well for larger models that can replace certain layers with FP8-enabled ones.
|
||||
|
||||
## Configuring the Accelerator
|
||||
|
||||
Currently two different backends for FP8 are supported (`TransformersEngine` and `MS-AMP`), each with different capabilities and configurations.
|
||||
Currently three different backends for FP8 are supported (`TransformersEngine`, `torchao`, and `MS-AMP`), each with different capabilities and configurations.
|
||||
|
||||
To use either, the same core API is used. Just pass `mixed_precision="fp8"` to either the [`Accelerator`], during `accelerate config` when prompted about mixed precision, or as part of your `config.yaml` file in the `mixed_precision` key:
|
||||
|
||||
@ -39,27 +39,29 @@ from accelerate import Accelerator
|
||||
accelerator = Accelerator(mixed_precision="fp8")
|
||||
```

By default, if `MS-AMP` is available in your environment, Accelerate will automatically use it as the backend. To specify the backend yourself (and customize other parts of the FP8 mixed precision setup), pass one of the `RecipeKwargs` dataclasses such as [`utils.AORecipeKwargs`], [`utils.TERecipeKwargs`], or [`utils.MSAMPRecipeKwargs`], or specify the backend in your config `yaml`/during `accelerate launch`:

```{python}
from accelerate import Accelerator
from accelerate.utils import AORecipeKwargs, MSAMPRecipeKwargs, TERecipeKwargs

kwargs = [MSAMPRecipeKwargs()]
# Or to specify the backend as `TransformersEngine` even if MS-AMP is installed
# kwargs = [TERecipeKwargs()]
# Or to use torchao
# kwargs = [AORecipeKwargs()]
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=kwargs)
```

```{yaml}
mixed_precision: fp8
fp8_config:
  amax_compute_algo: max
  amax_history_len: 1024
  backend: TE
  fp8_format: HYBRID
  interval: 1
  margin: 0
  override_linear_precision: (false, false, false)
  use_autocast_during_eval: false
```

## Configuring MS-AMP

Compared to `TransformersEngine`, `MS-AMP` is traditionally the easier backend to configure, as there is only a single argument: the optimization level.

Currently two levels of optimization are supported in the Accelerate integration, `"O1"` and `"O2"` (using the letter 'o', not zero); a short sketch of selecting a level follows the list below.

* `"O1"` will cast the weight gradients and `all_reduce` communications to happen in 8-bit, while the rest are done in 16 bit. This reduces the general GPU memory usage and speeds up communication bandwidths.
|
||||
* `"O2"` will also cast first-order optimizer states into 8 bit, while the second order states are in FP16. (Currently just the `Adam` optimizer is supported). This tries its best to minimize final accuracy degradation and will save the highest potential memory.
## Configuring TransformersEngine

TransformersEngine has many options for customizing how and what FP8 calculations are performed. A full list of supported arguments and what they mean is available in [NVIDIA's documentation](https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/api/common.html); they are also restated as part of [`FP8KwargsHandler`]'s docstring for your convenience.

Accelerate tries to set sensible defaults, but exploring and tweaking the various parameters yourself can potentially lead to better performance.

To use it, specify `backend="te"` and modify any of the arguments you want as part of your kwarg handler:
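
For example, here is a minimal sketch, assuming [`TERecipeKwargs`] accepts the same fields that appear in the `fp8_config` example below (such as `fp8_format`, `amax_history_len`, and `amax_compute_algo`):

```{python}
from accelerate import Accelerator
from accelerate.utils import TERecipeKwargs

# These field names mirror the `fp8_config` keys shown below and are assumptions
# about the handler's signature, not an exhaustive or authoritative list.
kwargs = [TERecipeKwargs(fp8_format="HYBRID", amax_history_len=1024, amax_compute_algo="max")]
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=kwargs)
```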

Similarly, this can be set in your `config.yaml`:

```{yaml}
mixed_precision: fp8
fp8_config:
  amax_compute_algo: max
  amax_history_len: 1024
  backend: TE
  fp8_format: HYBRID
  interval: 1
  margin: 0
  override_linear_precision: (false, false, false)
  use_autocast_during_eval: false
```
## Configuring `torchao`

`torchao` is a [PyTorch-driven](https://github.com/pytorch/ao/tree/main/torchao/float8) hackable FP8 backend, aiming to be more approachable than the prior two engines. One of the core differences with `ao` compared to the prior two is that, for numerical stability, it's generally better to keep the first *and* last layers of the model in regular precision (be it FP32 or BF16) and quantize the other layers down to FP8. As a result, a config for `ao` looks a bit different:

> Note: this API is experimental and is subject to change

```{python}
from accelerate import Accelerator
from accelerate.utils import AORecipeKwargs

kwargs = [AORecipeKwargs()]
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=kwargs)
```
To learn more about the specific parameters to be used, please see the official `torchao` repo.
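
To make the first/last-layer point concrete, here is a minimal sketch at the `torchao` level (not the Accelerate integration) using `convert_to_float8_training` and its `module_filter_fn`. The toy model and the `"0"`/`"2"` module names are illustrative only, and actually training with it requires a GPU with FP8 support:

```{python}
import torch
from torchao.float8 import convert_to_float8_training

# A toy model; in practice this would be your transformer.
model = torch.nn.Sequential(
    torch.nn.Linear(64, 64),  # imagine this is the "first" layer
    torch.nn.Linear(64, 64),
    torch.nn.Linear(64, 64),  # imagine this is the "last" layer
)

# Keep the first and last Linear layers in their original precision and convert
# everything else to FP8 training layers. The "0"/"2" names come from
# nn.Sequential's numbering and are specific to this toy example.
def module_filter_fn(module: torch.nn.Module, fqn: str) -> bool:
    return fqn not in ("0", "2")

convert_to_float8_training(model, module_filter_fn=module_filter_fn)
```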
## Example Zoo

We have examples showcasing training with FP8 with both Accelerate and its underlying implementation, available in the Accelerate repo.

To learn more about training in FP8, please check out the following resources:

* [Our concept guide](../concept_guides/low_precision_training), which goes into more detail about both TransformersEngine and MS-AMP
* [The `transformers-engine` documentation](https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/api/common.html)
* [The `MS-AMP` documentation](https://azure.github.io/MS-AMP/docs/)
* [The `torchao` documentation](https://github.com/pytorch/ao/tree/main/torchao/float8)

# Megatron-LM

[Megatron-LM](https://github.com/NVIDIA/Megatron-LM) enables training large transformer language models at scale.
It provides efficient tensor, pipeline and sequence-based model parallelism for pre-training transformer-based
language models such as [GPT](https://arxiv.org/abs/2005.14165) (Decoder Only), [BERT](https://arxiv.org/pdf/1810.04805.pdf) (Encoder Only) and [T5](https://arxiv.org/abs/1910.10683) (Encoder-Decoder).
For detailed information and how things work behind the scenes, please refer to the GitHub [repo](https://github.com/NVIDIA/Megatron-LM).

## What is integrated?

a. **Tensor Parallelism (TP)**: Reduces memory footprint without much additional communication.
Each tensor is split into multiple chunks, with each shard residing on a separate GPU. At each step, the same mini-batch of data is processed
independently and in parallel by each shard, followed by syncing across all GPUs (an `all-reduce` operation).
In a simple transformer layer, this leads to 2 `all-reduces` in the forward path and 2 in the backward path.
For more details, please refer to the research paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using
Model Parallelism](https://arxiv.org/pdf/1909.08053.pdf) and
this section of the blogpost [The Technology Behind BLOOM Training](https://huggingface.co/blog/bloom-megatron-deepspeed#tensor-parallelism).

b. **Pipeline Parallelism (PP)**: Reduces memory footprint and enables large scale training via inter-node parallelization.
Layers are distributed uniformly across PP stages. For example, if a model has `24` layers and `4` GPUs are used for
pipeline parallelism, each GPU will have `6` layers (24/4). For more details on schedules to reduce the idle time of PP,
please refer to the research paper [Efficient Large-Scale Language Model Training on GPU Clusters
Using Megatron-LM](https://arxiv.org/pdf/2104.04473.pdf) and
this section of the blogpost [The Technology Behind BLOOM Training](https://huggingface.co/blog/bloom-megatron-deepspeed#pipeline-parallelism).

c. **Sequence Parallelism (SP)**: Reduces memory footprint without any additional communication. Only applicable when using TP.
It reduces the activation memory required, as it prevents the same copies from residing on the tensor parallel ranks
post `all-reduce`, by replacing the `all-reduce` with `reduce-scatter` and the `no-op` operation with an `all-gather`.
As `all-reduce = reduce-scatter + all-gather`, this saves a ton of activation memory at no added communication cost.
To put it simply, it shards the outputs of each transformer layer along the sequence dimension, e.g.,
if the sequence length is `1024` and the TP size is `4`, each GPU will have `256` tokens (1024/4) for each sample.

d. **Data Parallelism (DP)** via Distributed Optimizer: Reduces the memory footprint by sharding the optimizer state across the data parallel ranks
(versus the traditional method of replicating the optimizer state across data parallel ranks).
For example, when using the Adam optimizer with mixed-precision training, each parameter accounts for 12 bytes of memory.
This gets distributed equally across the GPUs, i.e., each parameter would account for 3 bytes (12/4) if we have 4 GPUs.
For more details, please refer to the research paper [ZeRO: Memory Optimizations Toward Training Trillion
Parameter Models](https://arxiv.org/pdf/1910.02054.pdf) and the following section of the blog
[The Technology Behind BLOOM Training](https://huggingface.co/blog/bloom-megatron-deepspeed#zero-data-parallelism).

e. **Selective Activation Recomputation**: Reduces the memory footprint of activations significantly via smart activation checkpointing.
For example, for GPT-3, this leads to a 70% reduction in the required memory for activations at the expense of
only 2.7% FLOPs overhead for recomputation of activations. For more details, please refer to the research paper
[Reducing Activation Recomputation in Large Transformer Models](https://arxiv.org/pdf/2205.05198.pdf).

f. **Fused Kernels**: Fused Softmax, Mixed Precision Fused Layer Norm, and fused gradient accumulation into the weight gradient computation of the linear layer.
PyTorch JIT compiled Fused GeLU and Fused Bias+Dropout+Residual addition.

g. **Support for Indexed datasets**: Efficient binary format of datasets for large scale training. Support for the `mmap`, `cached` index file and the `lazy` loader format.

h. **Checkpoint reshaping and interoperability**: Utility for reshaping Megatron-LM checkpoints of variable
tensor and pipeline parallel sizes into the beloved Transformers sharded checkpoints, as these have great support from a plethora of tools
such as Accelerate Big Model Inference, Megatron-DeepSpeed Inference, etc.
Support is also available for converting Transformers sharded checkpoints to Megatron-LM checkpoints of variable tensor and pipeline parallel sizes
for large-scale training.

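To connect these features to the Accelerate integration, here is a minimal sketch of enabling a few of them through [`MegatronLMPlugin`]. The argument names are assumptions based on the plugin's options and should be double-checked against its docstring:

```python
from accelerate import Accelerator
from accelerate.utils import MegatronLMPlugin

# Assumed argument names: 2-way TP x 2-way PP, with sequence parallelism and
# the distributed optimizer enabled.
megatron_lm_plugin = MegatronLMPlugin(
    tp_degree=2,
    pp_degree=2,
    num_micro_batches=2,
    sequence_parallelism=True,
    use_distributed_optimizer=True,
)
accelerator = Accelerator(megatron_lm_plugin=megatron_lm_plugin)
```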

2. For using the Megatron-LM datasets, a few more changes are required. Dataloaders for these datasets
are available only on rank 0 of each tensor parallel group. As such, there are ranks where the dataloader won't be
available, and this requires tweaks to the training loop. Being able to do all this shows how
flexible and extensible Accelerate is. The changes required are as follows.

a. For Megatron-LM indexed datasets, we need to use `MegatronLMDummyDataLoader`
and pass the required dataset args to it such as `data_path`, `seq_length` etc.
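
A minimal sketch of what that can look like is below; the values are placeholders, and the exact set of required args (beyond the `data_path` and `seq_length` mentioned above) should be checked against the Megatron-LM dataset arguments:

```python
from accelerate.utils import MegatronLMDummyDataLoader

# Placeholder values; `data_path` points at Megatron-LM indexed dataset files.
megatron_dataset_args = {
    "data_path": ["my-gpt2_text_document"],
    "seq_length": 1024,
}
megatron_dataloader = MegatronLMDummyDataLoader(**megatron_dataset_args)
```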

c. Changes to the training and evaluation loops, as the dataloader is only available on rank 0 of each tensor parallel group.
So, we need to iterate only if the dataloader isn't `None`, else provide an empty dict.
As such, we loop using a `while` loop and break when `completed_steps` is equal to `args.max_train_steps`.
This is similar to the Megatron-LM setup wherein the user has to provide `max_train_steps` when using Megatron-LM indexed datasets.
This displays how flexible and extensible Accelerate is.

```python
while completed_steps < args.max_train_steps:
    # Iterate only when the dataloader is available on this rank (otherwise an
    # empty dict is passed), and break once `completed_steps` reaches
    # `args.max_train_steps`; the full loop body is not shown here.
    ...
```

## Utility for Checkpoint reshaping and interoperability

1. The scripts for these are present in the Transformers library under the respective models.
Currently, it is available for the GPT model: [checkpoint_reshaping_and_interoperability.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/megatron_gpt2/checkpoint_reshaping_and_interoperability.py)

2. Below is an example of converting a checkpoint from Megatron-LM to a universal Transformers sharded checkpoint.

```bash
python checkpoint_reshaping_and_interoperability.py \
--convert_checkpoint_from_megatron_to_transformers \
...
```

## Megatron-LM GPT models support returning logits and `megatron_generate` function for text generation

1. Returning logits requires setting `return_logits=True` in `MegatronLMPlugin`, as shown below.
These will be available in the last stage of the pipeline.

```python
from accelerate.utils import MegatronLMPlugin

megatron_lm_plugin = MegatronLMPlugin(return_logits=True)
```

7. When using Megatron-LM, use `accelerator.save_state` and `accelerator.load_state` for saving and loading checkpoints.
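
For example, given an `accelerator` that was configured with the Megatron-LM plugin (the directory name below is a placeholder):

```python
from accelerate import Accelerator

accelerator = Accelerator()  # in practice, configured with a MegatronLMPlugin as above

# Save everything tracked by Accelerate (model, optimizer, scheduler, RNG states).
accelerator.save_state("megatron_ckpt")

# ...later, resume from the same directory.
accelerator.load_state("megatron_ckpt")
```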

8. Below is the mapping from Megatron-LM model architectures to the equivalent Transformers model architectures.
Only these Transformers model architectures are supported.

a. Megatron-LM [BertModel](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/bert_model.py) :
Transformers models with `megatron-bert` in the config's model type, e.g.,
[MegatronBERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)

b. Megatron-LM [GPTModel](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py) :
Transformers models with `gpt2` in the config's model type, e.g.,
[OpenAI GPT2](https://huggingface.co/docs/transformers/model_doc/gpt2)

c. Megatron-LM [T5Model](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/t5_model.py) :
Transformers models with `t5` in the config's model type, e.g.,
[T5](https://huggingface.co/docs/transformers/model_doc/t5) and
[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)