Mirror of https://github.com/huggingface/accelerate.git (synced 2025-11-14 14:14:32 +08:00)

Compare commits: v0.23.0...better-err (12 commits)

| SHA1 |
|---|
| 9ed6000f8f |
| 956114ac92 |
| 76ee7f211d |
| 420743af22 |
| 206ab491ed |
| 936d2f4f5c |
| da98d601b5 |
| 658492fb41 |
| 80da9cfb09 |
| 03deec2a01 |
| 629d02c844 |
| a87c95da9e |
@@ -55,6 +55,8 @@
    title: How to use 🤗 Accelerate with Intel® Extension for PyTorch for cpu
  title: How-To Guides
- sections:
  - local: concept_guides/internal_mechanism
    title: 🤗 Accelerate's internal mechanism
  - local: concept_guides/big_model_inference
    title: Loading big models into memory
  - local: concept_guides/performance
@@ -153,6 +153,15 @@ the below example enabling unbuffered stdout and stderr:

python -u -m accelerate.commands.launch --num_processes=2 {script_name.py} {--arg1} {--arg2}
```

<Tip>

You can run your code on CPU as well! This is helpful for debugging and testing purposes on toy models and datasets.

```bash
accelerate launch --cpu {script_name.py} {--arg1} {--arg2}
```

</Tip>

## Why you should always use `accelerate config`

@@ -200,3 +209,24 @@ Launching a script from the location of that custom yaml file looks like the fol

```bash
accelerate launch --config_file {path/to/config/my_config_file.yaml} {script_name.py} {--arg1} {--arg2} ...
```
## Multi-node training

Multi-node training with 🤗 Accelerate is similar to [multi-node training with torchrun](https://pytorch.org/tutorials/intermediate/ddp_series_multinode.html). The simplest way to launch a multi-node training run is to do the following:

- Copy your codebase and data to all nodes (or place them on a shared filesystem).
- Set up your Python packages on all nodes.
- Run `accelerate config` on the main node first. After specifying the number of nodes, you will be asked to specify the rank of each node (this will be 0 for the main/master node), along with the IP address and port for the main process. This is required for the worker nodes to communicate with the main process. Afterwards, you can copy or send this config file to all of your nodes, changing the `machine_rank` to 1, 2, 3, etc. to avoid having to run the command on each node (or just follow their directions for launching with `torchrun` directly).

Once you have done this, you can start your multi-node training run by running `accelerate launch` (or `torchrun`) on all nodes.

<Tip>

The command must be run on all nodes for everything to start, not just from the main node. You can use something like SLURM or a different process executor to wrap this requirement and call everything from a single command.

</Tip>

<Tip>

It is recommended to use the intranet IP of your main node over the public IP for better latency. This is the `192.168.x.x` or the `172.x.x.x` address you see when you run `hostname -I` on the main node.

</Tip>

To get a better idea about multi-node training, check out our example for [multi-node training with FSDP](https://huggingface.co/blog/ram-efficient-pytorch-fsdp).
docs/source/concept_guides/internal_mechanism.md (new file, 72 lines)

@@ -0,0 +1,72 @@
<!--Copyright 2021 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->

# 🤗 Accelerate's internal mechanisms

Internally, 🤗 Accelerate works by first analyzing the environment in which the script is launched to determine which
kind of distributed setup is used, how many different processes there are and which one the current script is in. All
that information is stored in the [`~AcceleratorState`].

This class is initialized the first time you instantiate an [`~Accelerator`] and performs any
specific initialization your distributed setup needs. Its state is then uniquely shared through all instances of
[`~state.AcceleratorState`]. (The same can also be done with the [`PartialState`], a more barebones version that it inherits from.)

Then, when calling [`~Accelerator.prepare`], the library:

- wraps your model(s) in the container adapted for the distributed setup,
- wraps your optimizer(s) in an [`~optimizer.AcceleratedOptimizer`],
- wraps your scheduler(s) in an [`~scheduler.AcceleratedScheduler`],
- creates a new version of your dataloader(s) in a [`~data_loader.DataLoaderShard`] or [`~data_loader.DataLoaderDispatcher`].

While the model(s), optimizer(s), and scheduler(s) are just put in simple wrappers, the dataloader(s) are re-created. This is mostly
because PyTorch does not let the user change the `batch_sampler` of a dataloader once it's been created and the
library handles the sharding of your data between processes by changing that `batch_sampler` to yield every other
`num_processes` batches (if enabled).
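To make that `batch_sampler` idea concrete, here is a minimal, illustrative sketch (an editorial aside, not part of the file in this diff, and much simpler than the library's real implementation) of a wrapper that lets each process see only its own slice of batches:

```python
from torch.utils.data import BatchSampler, SequentialSampler


class NaiveShardedBatchSampler:
    """Illustrative only: yield every `num_processes`-th batch, offset by the process index."""

    def __init__(self, batch_sampler, num_processes, process_index):
        self.batch_sampler = batch_sampler
        self.num_processes = num_processes
        self.process_index = process_index

    def __iter__(self):
        for i, batch in enumerate(self.batch_sampler):
            if i % self.num_processes == self.process_index:
                yield batch


# 8 batches of 4 indices; process 1 of 2 sees batches 1, 3, 5 and 7
base = BatchSampler(SequentialSampler(range(32)), batch_size=4, drop_last=False)
print(list(NaiveShardedBatchSampler(base, num_processes=2, process_index=1)))
```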
The [`~data_loader.DataLoaderShard`] subclasses `DataLoader` to add the following functionality:

- it synchronizes the appropriate random number generator of all processes at each new iteration, to ensure any
randomization (like shuffling) is done the exact same way across processes.
- it puts the batches on the proper device before yielding them (unless you have opted out of
`device_placement=True`).

The [`~data_loader.DataLoaderDispatcher`] subclass differs from the [`~data_loader.DataLoaderShard`] in that when iterating through the `DataLoader`, the data all starts on process 0 and is *then* split and sent off to each process, rather than the split happening at the dataset level.

The random number generator synchronization will by default synchronize:

- the `generator` attribute of a given sampler (like the PyTorch `RandomSampler`) for PyTorch >= 1.6
- the main random number generator in PyTorch <=1.5.1

You can choose which random number generator(s) to synchronize with the `rng_types` argument of the main
[`Accelerator`]. In PyTorch >= 1.6, it is recommended to rely on a local `generator` to avoid
setting the same seed in the main random number generator in all processes.
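As a quick sketch of that recommendation (an editorial illustration; the toy dataset and batch size are made up), give the sampler its own `torch.Generator` instead of seeding the global RNG:

```python
import torch
from torch.utils.data import DataLoader, RandomSampler, TensorDataset

dataset = TensorDataset(torch.arange(100, dtype=torch.float32))

# A local generator that can be synchronized across processes,
# instead of calling torch.manual_seed(...) in every process.
generator = torch.Generator().manual_seed(42)
sampler = RandomSampler(dataset, generator=generator)
dataloader = DataLoader(dataset, batch_size=8, sampler=sampler)
```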
<Tip warning={true}>

Synchronization of the main torch (or CUDA or XLA) random number generator will affect any other potential random
artifacts you could have in your dataset (like random data augmentation) in the sense that all processes will get
the same random numbers from the torch random modules (so will apply the same random data augmentation if it's
controlled by torch).

</Tip>

<Tip>

The randomization part of your custom sampler, batch sampler or iterable dataset should be done using a local
`torch.Generator` object (in PyTorch >= 1.6); see the traditional `RandomSampler` as an example.

</Tip>

For more details about the internals, see the [Internals page](package_reference/torch_wrappers).
@@ -15,13 +15,20 @@ rendered properly in your Markdown viewer.

# Quick tour

Let's have a look at the 🤗 Accelerate main features and traps to avoid.
This guide aims to help you get started with 🤗 Accelerate quickly. It covers the essential steps you need to take to
enable distributed training, as well as the adjustments that you need to make in some common scenarios.

## Main use
To help you navigate, the guide is split into two sections:
* [Getting Started with 🤗 Accelerate](#getting-started-with--accelerate): start here to learn how to modify your script to enable distributed training with 🤗 Accelerate
* [Common adaptations to the base case](#common-adaptations-to-the-base-case): check out this section for common deviations from the baseline scenario and what adjustments may need to be made to support them.

To use 🤗 Accelerate in your own script, you have to change four things:
## Getting started with 🤗 Accelerate

1. Import the [`Accelerator`] main class and instantiate one in an `accelerator` object:
### Enable distributed training in your script

To use 🤗 Accelerate in your own training script, you have to modify four things:

1. Import the [`Accelerator`] main class and instantiate one in an `accelerator` object.

```python
from accelerate import Accelerator
@@ -29,27 +36,27 @@ from accelerate import Accelerator

accelerator = Accelerator()
```

This should happen as early as possible in your training script as it will initialize everything necessary for
distributed training. You don't need to indicate the kind of environment you are in (just one machine with a GPU, one
machine with several GPUs, several machines with multiple GPUs or a TPU), the library will detect this automatically.
Add this at the beginning of your training script as it will initialize everything necessary for distributed training.
You don't need to indicate the kind of environment you are in (a single machine with a GPU, a machine with several GPUs,
or several machines with multiple GPUs or a TPU), the library will detect this automatically.

2. Remove the call `.to(device)` or `.cuda()` for your model and input data. The `accelerator` object
will handle this for you and place all those objects on the right device for you. If you know what you're doing, you
can leave those `.to(device)` calls but you should use the device provided by the `accelerator` object:
`accelerator.device`.
2. Remove the `.to(device)` or `.cuda()` calls for your model and input data.

To fully deactivate the automatic device placement, pass along `device_placement=False` when initializing your
[`Accelerator`].
The `accelerator` object will handle placing these objects on the right device for you.
If you choose to leave those `.to(device)` calls, make sure to use the device provided by the `accelerator` object: `accelerator.device`.

<Tip warning={true}>

If you place your objects manually on the proper device, be careful to create your optimizer after putting your
You can fully deactivate the automatic device placement by passing along `device_placement=False` when
initializing the [`Accelerator`].
However, if you place your objects manually on the proper device, be careful to create your optimizer after putting your
model on `accelerator.device` or your training will fail on TPU.

</Tip>
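For illustration (an editorial sketch, not text from the diff; the batch unpacking is an assumption about your data format), keeping manual placement while staying device-agnostic could look like this:

```python
device = accelerator.device  # instead of hard-coding torch.device("cuda")

model.to(device)  # only needed if you keep manual device placement
for batch in train_dataloader:
    inputs, targets = batch
    inputs, targets = inputs.to(device), targets.to(device)
    ...
```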
3. Pass all objects relevant to training (optimizer, model, training dataloader, learning rate scheduler) to the
[`~Accelerator.prepare`] method. This will make sure everything is ready for training.
3. Pass all PyTorch objects relevant to training (optimizer, model, dataloader(s), learning rate scheduler) to the
[`~Accelerator.prepare`] method as soon as these objects are created, before starting your actual
training loop:

```python
model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
@@ -57,60 +64,42 @@ model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
)
```

In particular, your training dataloader will be sharded across all GPUs/TPU cores available so that each one sees a
different portion of the training dataset. Also, the random states of all processes will be synchronized at the
beginning of each iteration through your dataloader, to make sure the data is shuffled the same way (if you decided to
use `shuffle=True` or any kind of random sampler).
**Important notes**:

* You should always pass the learning rate scheduler to [`~Accelerator.prepare`], however if the scheduler should *not* be stepped at each optimization step, pass `step_with_optimizer=False` to the [`Accelerator`] init.
* While you can send your dataloader to [`~Accelerator.prepare`] on its own (and there are cases for doing so, such as distributed inference), it's best to send it to [`~Accelerator.prepare`] together with the model and optimizer.
* If you wish to run distributed evaluation, send your validation dataloader to [`~Accelerator.prepare`] as well. There are some nuances to distributed validation, check the [Distributed evaluation](#add-distributed-evaluation) section of the guide.
* Any instruction using your training dataloader length (for instance if you want to log the number of total training
steps) should go after the call to [`~Accelerator.prepare`].

Passing `DataLoader` objects to the [`~Accelerator.prepare`] method ensures that your dataloader will be sharded across
all GPUs/TPU cores available so that each one sees a different portion of the training dataset. In other words, if there are 8 processes and a dataset of 64 items, each process will see 8 of these items per iteration. Also, the random states
of all processes will be synchronized at the beginning of each iteration through your dataloader, to make sure the data
is shuffled the same way (if you decided to use `shuffle=True` or any kind of random sampler).

<Tip>

The actual batch size for your training will be the number of devices used multiplied by the batch size you set in
your script: for instance training on 4 GPUs with a batch size of 16 set when creating the training dataloader will
train at an actual batch size of 64.

</Tip>
Alternatively, you can use the option `split_batches=True` when creating and initializing your
[`Accelerator`], in which case the batch size will always stay the same, whether you run your
script on 1, 2, 4, or 64 GPUs.

You should execute this instruction as soon as all objects for training are created, before starting your actual
training loop.

<Tip warning={true}>

You should only pass the learning rate scheduler to [`~Accelerator.prepare`] when the scheduler needs to be stepped
at each optimizer step.

</Tip>

<Tip warning={true}>

your script. For instance, training on 4 GPUs with a batch size of 16 set when creating the training dataloader will
train at an actual batch size of 64 (4 * 16).
If you want the batch size to remain the same regardless of how many GPUs the script is run on, you can use the
option `split_batches=True` when creating and initializing [`Accelerator`].
Your training dataloader may change length when going through this method: if you run on X GPUs, it will have its
length divided by X (since your actual batch size will be multiplied by X), unless you set
`split_batches=True`.

</Tip>

Any instruction using your training dataloader length (for instance if you want to log the number of total training
steps) should go after the call to [`~Accelerator.prepare`].

You can perfectly send your dataloader to [`~Accelerator.prepare`] on its own, but it's best to send the
model and optimizer to [`~Accelerator.prepare`] together.

You may or may not want to send your validation dataloader to [`~Accelerator.prepare`], depending on
whether you want to run distributed evaluation or not (see below).

4. Replace the line `loss.backward()` by `accelerator.backward(loss)`.
4. Replace the `loss.backward()` line with `accelerator.backward(loss)`.

And you're all set! With all these changes, your script will run on your local machine as well as on multiple GPUs or a
TPU! You can either use your favorite tool to launch the distributed training, or you can use the 🤗 Accelerate
launcher.
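Putting the four changes together, a minimal training loop could look like the sketch below (an editorial illustration: `loss_fn` and the model, optimizer, dataloader and scheduler definitions are assumed, not shown in the diff above):

```python
from accelerate import Accelerator

accelerator = Accelerator()

# model, optimizer, train_dataloader, lr_scheduler and loss_fn are assumed to be defined already
model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
    model, optimizer, train_dataloader, lr_scheduler
)

model.train()
for inputs, targets in train_dataloader:
    optimizer.zero_grad()
    outputs = model(inputs)           # batches already live on accelerator.device
    loss = loss_fn(outputs, targets)
    accelerator.backward(loss)        # replaces loss.backward()
    optimizer.step()
    lr_scheduler.step()
```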
### Add distributed evaluation

## Distributed evaluation

You can perform regular evaluation in your training script, if you leave your validation dataloader out of the
You can perform regular evaluation in your training script if you leave your validation dataloader out of the
[`~Accelerator.prepare`] method. In this case, you will need to put the input data on the
`accelerator.device` manually.

@@ -121,9 +110,9 @@ method:
validation_dataloader = accelerator.prepare(validation_dataloader)
```

As for your training dataloader, it will mean that (should you run your script on multiple devices) each device will
only see part of the evaluation data. This means you will need to group your predictions together. This is very easy to
do with the [`~Accelerator.gather_for_metrics`] method.
Same as with your training dataloader, each device will only see part of the evaluation data should you run your script
on multiple devices. This means you will need to group your predictions together, which you can do with
the [`~Accelerator.gather_for_metrics`] method.

```python
for inputs, targets in validation_dataloader:
@@ -142,11 +131,9 @@ for inputs, targets in validation_dataloader:

</Tip>

Any instruction using your training dataloader length (for instance if you need the number of total training steps
to create a learning rate scheduler) should go after the call to [`~Accelerator.prepare`].

Some data at the end of the dataset may be duplicated so the batch can be divided equally among all workers. As a result, metrics
should be calculated through the [`~Accelerator.gather_for_metrics`] method to automatically remove the duplicated data while gathering.
Some data at the end of the dataset may be duplicated so the batch can be divided equally among all workers. As a result,
metrics should be calculated through the [`~Accelerator.gather_for_metrics`] method to automatically remove the duplicated
data while gathering and provide a more accurate metric.
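Since the full evaluation loop is cut off by the hunk above, here is a hedged sketch of what it can look like once the validation dataloader has been prepared (the `argmax` post-processing and the `metric` object, in the style of the 🤗 Evaluate library, are assumptions):

```python
import torch

model.eval()
for inputs, targets in validation_dataloader:   # prepared, so batches are already on device
    with torch.no_grad():
        predictions = model(inputs).argmax(dim=-1)
    # gather predictions and targets from all processes, dropping the duplicated tail samples
    all_predictions, all_targets = accelerator.gather_for_metrics((predictions, targets))
    metric.add_batch(predictions=all_predictions, references=all_targets)
```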
<Tip>

@@ -165,36 +152,35 @@ should be calculated through the [`~Accelerator.gather_for_metrics`] method to a

</Tip>
## Launching your distributed script
### Launch your distributed script

You can use the regular commands to launch your distributed training (like `torch.distributed.run` for
PyTorch), they are fully compatible with 🤗 Accelerate.
PyTorch) - they are fully compatible with 🤗 Accelerate.

🤗 Accelerate also provides a CLI tool that unifies all launchers, so you only have to remember one command. To use it,
just run:
Alternatively, 🤗 Accelerate provides a CLI tool that unifies all launchers, so you only have to remember one command. \
To use it, run a quick configuration setup first on your machine and answer the questions:

```bash
accelerate config
```

on your machine and reply to the questions asked. This will save a *default_config.yaml* file in your cache folder for
🤗 Accelerate. That cache folder is (with decreasing order of priority):
At the end of the setup, a *default_config.yaml* file will be saved in your cache folder for 🤗 Accelerate. That cache
folder is (with decreasing order of priority):

- The content of your environment variable `HF_HOME` suffixed with *accelerate*.
- If it does not exist, the content of your environment variable `XDG_CACHE_HOME` suffixed with
*huggingface/accelerate*.
- If this does not exist either, the folder *~/.cache/huggingface/accelerate*
- If this does not exist either, the folder *~/.cache/huggingface/accelerate*.

You can also specify with the flag `--config_file` the location of the file you want to save.

Once this is done, you can test everything is going well on your setup by running:
By passing the `--config_file` flag you can specify an alternative location for the configuration file.
Once the configuration setup is complete, you can test your setup by running:

```bash
accelerate test
```

This will launch a short script that will test the distributed environment. If it runs fine, you are ready for the next
step!
This will launch a short script that will test the distributed environment. If it runs without issues, you are ready for
the next step!

Note that if you specified a location for the config file in the previous step, you need to pass it here as well:

@@ -214,19 +200,23 @@ If you stored the config file in a non-default location, you can indicate it to
accelerate launch --config_file path_to_config.yaml path_to_script.py --args_for_the_script
```

You can also override any of the arguments determined by your config file.
To see the complete list of parameters that you can pass in, run `accelerate launch -h`.
You can override any of the arguments determined by your config file. To see the complete list of parameters that you
can pass in, run `accelerate launch -h`. (You can also get help for more niche arguments by passing in partial commands, such as `accelerate launch --multi_gpu -h` for all `multi_gpu` args.)

Check out the [Launch tutorial](basic_tutorials/launch) for more information about launching your scripts.
Check out the [Launch tutorial](basic_tutorials/launch) for more information about launching your scripts.

## Common modifications of the base case
## Launching training from a notebook
The previous section covers the minimal essential steps to move a training script into a distributed setup with 🤗 Accelerate.
Here we describe common modifications/deviations from the base case scenario and the adjustments you need to make to accommodate them.

In Accelerate 0.3.0, a new [`notebook_launcher`] has been introduced to help you launch your training
function from a notebook. This launcher supports launching a training with TPUs on Colab or Kaggle, as well as training
on several GPUs (if the machine on which you are running your notebook has them).
### Launch distributed training from a notebook

Just define a function responsible for your whole training and/or evaluation in a cell of the notebook, then execute a
Accelerate has a [`notebook_launcher`] to help you launch your training function from a
notebook. This launcher supports launching a training with TPUs on Colab or Kaggle, as well as training on several GPUs and machines
(if the machine on which you are running your notebook has them).

Define a function responsible for your whole training and/or evaluation in a cell of the notebook, then execute a
cell with the following code:

```python
@@ -242,10 +232,9 @@ notebook_launcher(training_function)

</Tip>

Check out the [Notebook Launcher tutorial](basic_tutorials/notebook) for more information about training on TPUs.
Check out the [Notebook Launcher tutorial](basic_tutorials/notebook) for more information about training on TPUs.

## Training on TPU
### Specifics of training on TPU

If you want to launch your script on TPUs, there are a few caveats you should be aware of. Behind the scenes, the TPUs
will create a graph of all the operations happening in your training step (forward pass, backward pass and optimizer
@@ -284,12 +273,7 @@ passed your model to [`~Accelerator.prepare`]) will break the tying. You will ne
after. You can find an example of this in the [run_clm_no_trainer](https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_clm.py) script in
the Transformers repository.

Check out the [TPU tutorial](concept_guides/training_tpu) for more information about training on TPUs.

## Other caveats

We list here all smaller issues you could have in your script conversion and how to resolve them.
Check out the [TPU tutorial](concept_guides/training_tpu) for more information about training on TPUs.

### Execute a statement only on one process

@@ -323,14 +307,14 @@ For printing statements you only want executed once per machine, you can just re
`accelerator.print`.
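Because the body of this section is mostly elided by the hunk above, here is a short, hedged sketch of the pattern it describes (the Hub push and disk write are just assumed examples of process-specific work):

```python
if accelerator.is_main_process:
    # runs on the global main process only, e.g. pushing the final model to the Hub
    ...

if accelerator.is_local_main_process:
    # runs once per machine, e.g. writing to a node-local disk
    ...

accelerator.print("printed once per machine instead of once per process")
```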
### Defer execution
### Defer execution on multiple GPUs

When you run your usual script, instructions are executed in order. Using 🤗 Accelerate to deploy your script on several
GPUs at the same time introduces a complication: while each process executes all instructions in order, some may be
faster than others.

You might need to wait for all processes to have reached a certain point before executing a given instruction. For
instance, you shouldn't save a model before being sure every process is done with training. To do this, just write the
instance, you shouldn't save a model before making sure every process is done with training. To do this, add the
following line in your code:

```
@@ -341,7 +325,7 @@ This instruction will block all the processes that arrive first until all the ot
point (if you run your script on just one GPU or CPU, this won't do anything).
### Saving/loading a model
### Save/load a model in a distributed setup

Saving the model you trained might need a bit of adjustment: first you should wait for all processes to reach that
point in the script as shown above, and then, you should unwrap your model before saving it. This is because when going
@@ -349,15 +333,16 @@ through the [`~Accelerator.prepare`] method, your model may have been placed ins
which deals with the distributed training. This in turn means that saving your model state dictionary without taking
any precaution will take that potential extra layer into account, and you will end up with weights you can't load back
in your base model. The [`~Accelerator.save_model`] method will help you to achieve that. It will unwrap your model and save
the model state dictionnary.
the model state dictionary.

Here is an example:

```
accelerator.wait_for_everyone()
accelerator.save_model(model, save_directory)
```
The [`~Accelerator.save_model`] method can also save a model into sharded checkpoints or with safetensors format.
Here is an example:

The [`~Accelerator.save_model`] method can also save a model into sharded checkpoints or with safetensors format:

```python
accelerator.wait_for_everyone()
@@ -376,15 +361,18 @@ unwrapped_model.load_state_dict(torch.load(path_to_checkpoint))

Note that since all the model parameters are references to tensors, this will load your weights inside `model`.

If you want to load a sharded checkpoint or a checkpoint with safetensors format into the model with a specific `device`, we recommend you to load it with [`~utils.load_checkpoint_in_model`] function. Here's an example:
If you want to load a sharded checkpoint or a checkpoint with safetensors format into the model with a specific `device`,
we recommend loading it with the [`~utils.load_checkpoint_in_model`] function. Here's an example:

```python
load_checkpoint_in_model(unwrapped_model, save_directory, device_map={"":device})
```

## Saving/loading entire states

When training your model, you may want to save the current state of the model, optimizer, random generators, and potentially LR schedulers to be restored in the _same script_.
### Save/load entire states

When training your model, you may want to save the current state of the model, optimizer, random generators, and potentially
learning rate schedulers to be restored in the _same script_.
You can use [`~Accelerator.save_state`] and [`~Accelerator.load_state`] respectively to do so.

To further customize where and how states are saved through [`~Accelerator.save_state`], the [`~utils.ProjectConfiguration`] class can be used. For example:
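The concrete example is cut off by the next hunk, so here is a hedged sketch of the pattern (the project directory name is made up; the keyword arguments mirror the `ProjectConfiguration` usage visible in the test diff further down):

```python
from accelerate import Accelerator
from accelerate.utils import ProjectConfiguration

# checkpoints land under my_project/checkpoints/checkpoint_<iteration>
project_config = ProjectConfiguration(automatic_checkpoint_naming=True)
accelerator = Accelerator(project_dir="my_project", project_config=project_config)

accelerator.save_state()   # save model, optimizer, RNG and scheduler states
accelerator.load_state()   # restore the most recent automatically named checkpoint
```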
@@ -399,19 +387,19 @@ If you have registered any other stateful items to be stored through [`~Accelera

</Tip>
### Gradient clipping
### Use gradient clipping

If you are using gradient clipping in your script, you should replace the calls to
`torch.nn.utils.clip_grad_norm_` or `torch.nn.utils.clip_grad_value_` with [`~Accelerator.clip_grad_norm_`]
and [`~Accelerator.clip_grad_value_`] respectively.
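A short, hedged sketch of that replacement inside a training step (the `max_norm` value is an arbitrary assumption):

```python
accelerator.backward(loss)
# instead of torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0):
accelerator.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
```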
### Mixed Precision training
### Train with mixed precision

If you are running your training in Mixed Precision with 🤗 Accelerate, you will get the best result with your loss being
computed inside your model (like in Transformer models for instance). Every computation outside of the model will be
executed in full precision (which is generally what you want for loss computation, especially if it involves a
softmax). However you might want to put your loss computation inside the [`~Accelerator.autocast`] context manager:
softmax). However, you might want to put your loss computation inside the [`~Accelerator.autocast`] context manager:

```
with accelerator.autocast():
@@ -432,7 +420,7 @@ if not accelerator.optimizer_step_was_skipped:
    lr_scheduler.step()
```
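The surrounding explanation is trimmed by the hunks above, so here is a hedged sketch of the two patterns they refer to: computing the loss under `autocast`, and skipping the learning-rate scheduler step when the mixed-precision gradient scaler skipped the optimizer step (the loss helper is an assumption):

```python
with accelerator.autocast():
    loss = complex_loss_function(outputs, target)  # assumed helper, executed under mixed precision

accelerator.backward(loss)
optimizer.step()
# under mixed precision the optimizer step may be skipped when gradients overflow;
# keep the LR schedule aligned by only stepping it when the optimizer actually stepped
if not accelerator.optimizer_step_was_skipped:
    lr_scheduler.step()
```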
### Gradient Accumulation
### Use gradient accumulation

To perform gradient accumulation use [`~Accelerator.accumulate`] and specify a `gradient_accumulation_steps`.
This will also automatically ensure the gradients are synced or unsynced when on multi-device training, check if the step should
@@ -451,70 +439,3 @@ for input, label in training_dataloader:
        scheduler.step()
        optimizer.zero_grad()
```
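Only the tail of the accumulation example survives the hunk above, so here is a fuller, hedged sketch of the same loop (the value of `gradient_accumulation_steps` and the `loss_function` helper are assumptions):

```python
accelerator = Accelerator(gradient_accumulation_steps=2)
model, optimizer, training_dataloader, scheduler = accelerator.prepare(
    model, optimizer, training_dataloader, scheduler
)

for input, label in training_dataloader:
    with accelerator.accumulate(model):
        output = model(input)
        loss = loss_function(output, label)
        accelerator.backward(loss)   # gradients are only synced on the real optimizer steps
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()
```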
### DeepSpeed

DeepSpeed support is experimental, so the underlying API will evolve in the near future and may have some slight
breaking changes. In particular, 🤗 Accelerate does not yet support a DeepSpeed config you have written yourself; this
will be added in a future version.

<Tip warning={true}>

The [`notebook_launcher`] does not support the DeepSpeed integration yet.

</Tip>
## Internal mechanism

Internally, the library works by first analyzing the environment in which the script is launched to determine which
kind of distributed setup is used, how many different processes there are and which one the current script is in. All
that information is stored in the [`~AcceleratorState`].

This class is initialized the first time you instantiate an [`~Accelerator`] as well as performing any
specific initialization your distributed setup needs. Its state is then uniquely shared through all instances of
[`~state.AcceleratorState`].

Then, when calling [`~Accelerator.prepare`], the library:

- wraps your model(s) in the container adapted for the distributed setup,
- wraps your optimizer(s) in a [`~optimizer.AcceleratedOptimizer`],
- creates a new version of your dataloader(s) in a [`~data_loader.DataLoaderShard`].

While the model(s) and optimizer(s) are just put in simple wrappers, the dataloader(s) are re-created. This is mostly
because PyTorch does not let the user change the `batch_sampler` of a dataloader once it's been created and the
library handles the sharding of your data between processes by changing that `batch_sampler` to yield every other
`num_processes` batches.

The [`~data_loader.DataLoaderShard`] subclasses `DataLoader` to add the following functionality:

- it synchronizes the appropriate random number generator of all processes at each new iteration, to ensure any
randomization (like shuffling) is done the exact same way across processes.
- it puts the batches on the proper device before yielding them (unless you have opted out of
`device_placement=True`).

The random number generator synchronization will by default synchronize:

- the `generator` attribute of a given sampler (like the PyTorch `RandomSampler`) for PyTorch >= 1.6
- the main random number generator in PyTorch <=1.5.1

You can choose which random number generator(s) to synchronize with the `rng_types` argument of the main
[`Accelerator`]. In PyTorch >= 1.6, it is recommended to rely on a local `generator` to avoid
setting the same seed in the main random number generator in all processes.

<Tip warning={true}>

Synchronization of the main torch (or CUDA or XLA) random number generator will affect any other potential random
artifacts you could have in your dataset (like random data augmentation) in the sense that all processes will get
the same random numbers from the torch random modules (so will apply the same random data augmentation if it's
controlled by torch).

</Tip>

<Tip>

The randomization part of your custom sampler, batch sampler or iterable dataset should be done using a local
`torch.Generator` object (in PyTorch >= 1.6), see the traditional `RandomSampler`, as an example.

</Tip>

For more details about the internals, see the [Internals page](package_reference/torch_wrappers).
@@ -130,7 +130,7 @@ As a brief example, we will look at using `transformers` and loading in Big Scie

```py
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM("bigscience/T0pp", device_map="auto")
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", device_map="auto")
```

After loading the model in, the initial steps from before to prepare a model have all been done and the model is fully

@@ -140,11 +140,11 @@ specifying the precision the model is loaded into as well, through the `torch_dt

```py
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM("bigscience/T0pp", device_map="auto", torch_dtype=torch.float16)
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", device_map="auto", torch_dtype=torch.float16)
```

To learn more about this, check out the 🤗 Transformers documentation available [here](https://huggingface.co/docs/transformers/main/en/main_classes/model#large-model-loading).

## Where to go from here

For a much more detailed look at big model inference, be sure to check out the [Conceptual Guide on it](../concept_guides/big_model_inference)
For a much more detailed look at big model inference, be sure to check out the [Conceptual Guide on it](../concept_guides/big_model_inference)

@@ -154,7 +154,7 @@ When using transformers `save_pretrained`, pass `state_dict=accelerator.get_stat
    args.output_dir,
    is_main_process=accelerator.is_main_process,
    save_function=accelerator.save,
+ state_dict=accelerator.get_state_dict(model),
+ state_dict=accelerator.get_state_dict(model, unwrap=False),
)
```
@ -602,15 +602,22 @@ def main():
|
||||
resume_step -= starting_epoch * num_update_steps_per_epoch
|
||||
completed_steps = resume_step
|
||||
|
||||
# update progress bar if resumed from checkpoint
|
||||
progress_bar.update(completed_steps)
|
||||
|
||||
for epoch in range(starting_epoch, args.num_train_epochs):
|
||||
model.train()
|
||||
if args.with_tracking:
|
||||
total_loss = 0
|
||||
|
||||
# skip new `skip_first_batches` to skip the batches when resuming from ckpt
|
||||
if args.resume_from_checkpoint:
|
||||
train_dataloader = accelerator.skip_first_batches(train_dataloader, num_batches=resume_step)
|
||||
for step, batch in enumerate(train_dataloader):
|
||||
if args.resume_from_checkpoint and epoch == starting_epoch and resume_step is not None:
|
||||
# We need to skip steps until we reach the resumed step
|
||||
active_dataloader = accelerator.skip_first_batches(train_dataloader, resume_step)
|
||||
else:
|
||||
# After the first iteration though, we need to go back to the original dataloader
|
||||
active_dataloader = train_dataloader
|
||||
for step, batch in enumerate(active_dataloader):
|
||||
# In particular, DeepSpeed handles `gradient_accumulation` via `DeepSpeedEngine`.
|
||||
# Below, we use `accelerator.accumulate` if the user
|
||||
# wants to switch to other approaches such as plain DDP, PyTorch FSDP ...
|
||||
|
||||
setup.py (2 lines changed)

@@ -32,7 +32,7 @@ extras["sagemaker"] = [
|
||||
|
||||
setup(
|
||||
name="accelerate",
|
||||
version="0.23.0.dev0",
|
||||
version="0.24.0.dev0",
|
||||
description="Accelerate",
|
||||
long_description=open("README.md", "r", encoding="utf-8").read(),
|
||||
long_description_content_type="text/markdown",
|
||||
|
||||
@ -1,4 +1,4 @@
|
||||
__version__ = "0.23.0.dev0"
|
||||
__version__ = "0.24.0.dev0"
|
||||
|
||||
from .accelerator import Accelerator
|
||||
from .big_modeling import (
|
||||
|
||||
@ -2508,6 +2508,10 @@ class Accelerator:
|
||||
f (`str` or `os.PathLike`): Where to save the content of `obj`.
|
||||
safe_serialization (`bool`, *optional*, defaults to `False`): Whether to save `obj` using `safetensors`
|
||||
|
||||
Note:
|
||||
If `save_on_each_node` was passed in as a `ProjectConfiguration`, will save the object once per node,
|
||||
rather than only once on the main node.
|
||||
|
||||
Example:
|
||||
|
||||
```python
|
||||
@ -2518,7 +2522,12 @@ class Accelerator:
|
||||
>>> accelerator.save(arr, "array.pkl")
|
||||
```
|
||||
"""
|
||||
save(obj, f, safe_serialization=safe_serialization)
|
||||
save(
|
||||
obj,
|
||||
f,
|
||||
save_on_each_node=self.project_configuration.save_on_each_node,
|
||||
safe_serialization=safe_serialization,
|
||||
)
|
||||
|
||||
def save_model(
|
||||
self,
|
||||
@ -2793,10 +2802,16 @@ class Accelerator:
|
||||
hook(self._models, weights, output_dir)
|
||||
|
||||
save_location = save_accelerator_state(
|
||||
output_dir, weights, optimizers, schedulers, self.state.process_index, self.scaler
|
||||
output_dir,
|
||||
weights,
|
||||
optimizers,
|
||||
schedulers,
|
||||
self.state.process_index,
|
||||
self.scaler,
|
||||
save_on_each_node=self.project_configuration.save_on_each_node,
|
||||
)
|
||||
for i, obj in enumerate(self._custom_objects):
|
||||
save_custom_state(obj, output_dir, i)
|
||||
save_custom_state(obj, output_dir, i, save_on_each_node=self.project_configuration.save_on_each_node)
|
||||
self.project_configuration.iteration += 1
|
||||
return save_location
|
||||
|
||||
@ -2876,7 +2891,7 @@ class Accelerator:
|
||||
return list(map(int, re.findall(r"[\/]?([0-9]+)(?=[^\/]*$)", folder)))[0]
|
||||
|
||||
folders.sort(key=_inner)
|
||||
input_dir = os.path.join(input_dir, folders[-1])
|
||||
input_dir = folders[-1]
|
||||
else:
|
||||
raise ValueError("No input_dir provided and automatic checkpoint naming is disabled.")
|
||||
logger.info(f"Loading states from {input_dir}")
|
||||
|
||||
@ -51,6 +51,7 @@ def save_accelerator_state(
|
||||
schedulers: list,
|
||||
process_index: int,
|
||||
scaler: GradScaler = None,
|
||||
save_on_each_node: bool = False,
|
||||
):
|
||||
"""
|
||||
Saves the current states of the models, optimizers, scaler, and RNG generators to a given directory.
|
||||
@ -68,32 +69,34 @@ def save_accelerator_state(
|
||||
The current process index in the Accelerator state
|
||||
scaler (`torch.cuda.amp.GradScaler`, *optional*):
|
||||
An optional gradient scaler instance to save
|
||||
save_on_each_node (`bool`, *optional*):
|
||||
Whether to save on every node, or only the main node.
|
||||
"""
|
||||
# Model states
|
||||
for i, state in enumerate(model_states):
|
||||
weights_name = f"{MODEL_NAME}.bin" if i == 0 else f"{MODEL_NAME}_{i}.bin"
|
||||
output_model_file = os.path.join(output_dir, weights_name)
|
||||
save(state, output_model_file)
|
||||
save(state, output_model_file, save_on_each_node=save_on_each_node)
|
||||
logger.info(f"Model weights saved in {output_model_file}")
|
||||
# Optimizer states
|
||||
for i, opt in enumerate(optimizers):
|
||||
state = opt.state_dict()
|
||||
optimizer_name = f"{OPTIMIZER_NAME}.bin" if i == 0 else f"{OPTIMIZER_NAME}_{i}.bin"
|
||||
output_optimizer_file = os.path.join(output_dir, optimizer_name)
|
||||
save(state, output_optimizer_file)
|
||||
save(state, output_optimizer_file, save_on_each_node=save_on_each_node)
|
||||
logger.info(f"Optimizer state saved in {output_optimizer_file}")
|
||||
# Scheduler states
|
||||
for i, scheduler in enumerate(schedulers):
|
||||
state = scheduler.state_dict()
|
||||
scheduler_name = f"{SCHEDULER_NAME}.bin" if i == 0 else f"{SCHEDULER_NAME}_{i}.bin"
|
||||
output_scheduler_file = os.path.join(output_dir, scheduler_name)
|
||||
save(state, output_scheduler_file)
|
||||
save(state, output_scheduler_file, save_on_each_node=save_on_each_node)
|
||||
logger.info(f"Scheduler state saved in {output_scheduler_file}")
|
||||
# GradScaler state
|
||||
if scaler is not None:
|
||||
state = scaler.state_dict()
|
||||
output_scaler_file = os.path.join(output_dir, SCALER_NAME)
|
||||
torch.save(state, output_scaler_file)
|
||||
torch.save(state, output_scaler_file, save_on_each_node=save_on_each_node)
|
||||
logger.info(f"Gradient scaler state saved in {output_scaler_file}")
|
||||
# Random number generator states
|
||||
states = {}
|
||||
@ -197,14 +200,14 @@ def load_accelerator_state(
|
||||
logger.info("Could not load random states")
|
||||
|
||||
|
||||
def save_custom_state(obj, path, index: int = 0):
|
||||
def save_custom_state(obj, path, index: int = 0, save_on_each_node: bool = False):
|
||||
"""
|
||||
Saves the state of `obj` to `{path}/custom_checkpoint_{index}.pkl`
|
||||
"""
|
||||
# Should this be the right way to get a qual_name type value from `obj`?
|
||||
save_location = Path(path) / f"custom_checkpoint_{index}.pkl"
|
||||
logger.info(f"Saving the state of {get_pretty_name(obj)} to {save_location}")
|
||||
torch.save(obj.state_dict(), save_location)
|
||||
save(obj.state_dict(), save_location, save_on_each_node=save_on_each_node)
|
||||
|
||||
|
||||
def load_custom_state(obj, path, index: int = 0):
|
||||
|
||||
@ -30,13 +30,15 @@ DYNAMO_BACKENDS = [
|
||||
"EAGER",
|
||||
"AOT_EAGER",
|
||||
"INDUCTOR",
|
||||
"NVFUSER",
|
||||
"AOT_NVFUSER",
|
||||
"AOT_CUDAGRAPHS",
|
||||
"AOT_TS_NVFUSER",
|
||||
"NVPRIMS_NVFUSER",
|
||||
"CUDAGRAPHS",
|
||||
"OFI",
|
||||
"FX2TRT",
|
||||
"ONNXRT",
|
||||
"TENSORRT",
|
||||
"IPEX",
|
||||
"TVM",
|
||||
]
|
||||
|
||||
|
||||
|
||||
@ -580,7 +580,7 @@ class DataLoaderDispatcher(DataLoader, DataLoaderStateMixin):
|
||||
|
||||
if batch is None:
|
||||
raise ValueError(
|
||||
f"Batch does not contain any data (`{batch}`). At the end of all iterable data available before expected stop iteration."
|
||||
f"Batch does not contain any data (`{batch}`). At the end of all iterable data available ({batch_index-1} batches) before expected stop iteration."
|
||||
)
|
||||
|
||||
observed_batch_size = find_batch_size(batch)
|
||||
|
||||
@ -155,17 +155,17 @@ def add_hook_to_module(module: nn.Module, hook: ModelHook, append: bool = False)
|
||||
module = hook.init_hook(module)
|
||||
module._hf_hook = hook
|
||||
|
||||
@functools.wraps(old_forward)
|
||||
def new_forward(*args, **kwargs):
|
||||
def new_forward(module, *args, **kwargs):
|
||||
args, kwargs = module._hf_hook.pre_forward(module, *args, **kwargs)
|
||||
if module._hf_hook.no_grad:
|
||||
with torch.no_grad():
|
||||
output = old_forward(*args, **kwargs)
|
||||
output = module._old_forward(*args, **kwargs)
|
||||
else:
|
||||
output = old_forward(*args, **kwargs)
|
||||
output = module._old_forward(*args, **kwargs)
|
||||
return module._hf_hook.post_forward(module, output)
|
||||
|
||||
module.forward = new_forward
|
||||
module.forward = functools.update_wrapper(functools.partial(new_forward, module), old_forward)
|
||||
|
||||
return module
|
||||
|
||||
|
||||
|
||||
@ -32,6 +32,7 @@ import torch
|
||||
|
||||
from .constants import FSDP_AUTO_WRAP_POLICY, FSDP_BACKWARD_PREFETCH, FSDP_STATE_DICT_TYPE
|
||||
from .environment import str_to_bool
|
||||
from .imports import is_xpu_available
|
||||
from .versions import compare_versions
|
||||
|
||||
|
||||
@ -200,6 +201,29 @@ class FP8RecipeKwargs(KwargsHandler):
|
||||
raise ValueError("`amax_compute_algo` must be 'max' or 'most_recent'")
|
||||
|
||||
|
||||
class EnumWithContains(enum.EnumMeta):
|
||||
"A metaclass that adds the ability to check if `self` contains an item with the `in` operator"
|
||||
|
||||
def __contains__(cls, item):
|
||||
try:
|
||||
cls(item)
|
||||
except ValueError:
|
||||
return False
|
||||
return True
|
||||
|
||||
|
||||
class BaseEnum(enum.Enum, metaclass=EnumWithContains):
|
||||
"An enum class that can get the value of an item with `str(Enum.key)`"
|
||||
|
||||
def __str__(self):
|
||||
return self.value
|
||||
|
||||
@classmethod
|
||||
def list(cls):
|
||||
"Method to list all the possible items in `cls`"
|
||||
return list(map(str, cls))
|
||||
|
||||
|
||||
class DistributedType(str, enum.Enum):
|
||||
"""
|
||||
Represents a type of distributed environment.
|
||||
@ -259,7 +283,7 @@ class ComputeEnvironment(str, enum.Enum):
|
||||
AMAZON_SAGEMAKER = "AMAZON_SAGEMAKER"
|
||||
|
||||
|
||||
class DynamoBackend(str, enum.Enum):
|
||||
class DynamoBackend(str, BaseEnum):
|
||||
"""
|
||||
Represents a dynamo backend (see https://github.com/pytorch/torchdynamo).
|
||||
|
||||
@ -273,19 +297,21 @@ class DynamoBackend(str, enum.Enum):
|
||||
- **INDUCTOR** -- Uses TorchInductor backend with AotAutograd and cudagraphs by leveraging codegened Triton
|
||||
kernels. [Read
|
||||
more](https://dev-discuss.pytorch.org/t/torchinductor-a-pytorch-native-compiler-with-define-by-run-ir-and-symbolic-shapes/747)
|
||||
- **NVFUSER** -- nvFuser with TorchScript. [Read
|
||||
- **AOT_TS_NVFUSER** -- nvFuser with AotAutograd/TorchScript. [Read
|
||||
more](https://dev-discuss.pytorch.org/t/tracing-with-primitives-update-1-nvfuser-and-its-primitives/593)
|
||||
- **AOT_NVFUSER** -- nvFuser with AotAutograd. [Read
|
||||
- **NVPRIMS_NVFUSER** -- nvFuser with PrimTorch. [Read
|
||||
more](https://dev-discuss.pytorch.org/t/tracing-with-primitives-update-1-nvfuser-and-its-primitives/593)
|
||||
- **AOT_CUDAGRAPHS** -- cudagraphs with AotAutograd. [Read
|
||||
more](https://github.com/pytorch/torchdynamo/pull/757)
|
||||
- **CUDAGRAPHS** -- cudagraphs with AotAutograd. [Read more](https://github.com/pytorch/torchdynamo/pull/757)
|
||||
- **OFI** -- Uses Torchscript optimize_for_inference. Inference only. [Read
|
||||
more](https://pytorch.org/docs/stable/generated/torch.jit.optimize_for_inference.html)
|
||||
- **FX2TRT** -- Uses Nvidia TensorRT for inference optimizations. Inference only. [Read
|
||||
more](https://github.com/pytorch/TensorRT/blob/master/docsrc/tutorials/getting_started_with_fx_path.rst)
|
||||
- **ONNXRT** -- Uses ONNXRT for inference on CPU/GPU. Inference only. [Read more](https://onnxruntime.ai/)
|
||||
- **TENSORRT** -- Uses ONNXRT to run TensorRT for inference optimizations. [Read
|
||||
more](https://github.com/onnx/onnx-tensorrt)
|
||||
- **IPEX** -- Uses IPEX for inference on CPU. Inference only. [Read
|
||||
more](https://github.com/intel/intel-extension-for-pytorch).
|
||||
- **TVM** -- Uses Apach TVM for inference optimizations. [Read more](https://tvm.apache.org/)
|
||||
|
||||
"""
|
||||
|
||||
@ -294,36 +320,15 @@ class DynamoBackend(str, enum.Enum):
|
||||
EAGER = "EAGER"
|
||||
AOT_EAGER = "AOT_EAGER"
|
||||
INDUCTOR = "INDUCTOR"
|
||||
NVFUSER = "NVFUSER"
|
||||
AOT_NVFUSER = "AOT_NVFUSER"
|
||||
AOT_CUDAGRAPHS = "AOT_CUDAGRAPHS"
|
||||
AOT_TS_NVFUSER = "AOT_TS_NVFUSER"
|
||||
NVPRIMS_NVFUSER = "NVPRIMS_NVFUSER"
|
||||
CUDAGRAPHS = "CUDAGRAPHS"
|
||||
OFI = "OFI"
|
||||
FX2TRT = "FX2TRT"
|
||||
ONNXRT = "ONNXRT"
|
||||
TENSORRT = "TENSORRT"
|
||||
IPEX = "IPEX"
|
||||
|
||||
|
||||
class EnumWithContains(enum.EnumMeta):
|
||||
"A metaclass that adds the ability to check if `self` contains an item with the `in` operator"
|
||||
|
||||
def __contains__(cls, item):
|
||||
try:
|
||||
cls(item)
|
||||
except ValueError:
|
||||
return False
|
||||
return True
|
||||
|
||||
|
||||
class BaseEnum(enum.Enum, metaclass=EnumWithContains):
|
||||
"An enum class that can get the value of an item with `str(Enum.key)`"
|
||||
|
||||
def __str__(self):
|
||||
return self.value
|
||||
|
||||
@classmethod
|
||||
def list(cls):
|
||||
"Method to list all the possible items in `cls`"
|
||||
return list(map(str, cls))
|
||||
TVM = "TVM"
|
||||
|
||||
|
||||
class LoggerType(BaseEnum):
|
||||
@ -415,6 +420,16 @@ class ProjectConfiguration:
|
||||
metadata={"help": "The current save iteration."},
|
||||
)
|
||||
|
||||
save_on_each_node: bool = field(
|
||||
default=False,
|
||||
metadata={
|
||||
"help": (
|
||||
"When doing multi-node distributed training, whether to save models and checkpoints on each node, or"
|
||||
" only on the main one"
|
||||
)
|
||||
},
|
||||
)
|
||||
|
||||
def set_directories(self, project_dir: str = None):
|
||||
"Sets `self.project_dir` and `self.logging_dir` to the appropriate values."
|
||||
self.project_dir = project_dir
|
||||
@ -916,7 +931,8 @@ class FullyShardedDataParallelPlugin:
|
||||
self.activation_checkpointing = str_to_bool(os.environ.get(prefix + "ACTIVATION_CHECKPOINTING", "False")) == 1
|
||||
|
||||
if self.sync_module_states:
|
||||
self.param_init_fn = lambda x: x.to_empty(device=torch.cuda.current_device(), recurse=False)
|
||||
device = torch.cuda.current_device() if not is_xpu_available() else torch.xpu.current_device()
|
||||
self.param_init_fn = lambda x: x.to_empty(device=device, recurse=False)
|
||||
|
||||
@staticmethod
|
||||
def get_module_class_from_name(module, name):
|
||||
|
||||
@ -21,7 +21,6 @@ from typing import Any, Dict, List, Tuple
|
||||
import torch
|
||||
|
||||
from ..commands.config.config_args import SageMakerConfig
|
||||
from ..commands.config.config_utils import DYNAMO_BACKENDS
|
||||
from ..utils import (
|
||||
DynamoBackend,
|
||||
PrecisionType,
|
||||
@ -89,7 +88,9 @@ def prepare_simple_launcher_cmd_env(args: argparse.Namespace) -> Tuple[List[str]
|
||||
try:
|
||||
dynamo_backend = DynamoBackend(args.dynamo_backend.upper())
|
||||
except ValueError:
|
||||
raise ValueError(f"Unknown dynamo backend: {args.dynamo_backend.upper()}. Choose between {DYNAMO_BACKENDS}.")
|
||||
raise ValueError(
|
||||
f"Unknown dynamo backend: {args.dynamo_backend.upper()}. Choose between {DynamoBackend.list()}."
|
||||
)
|
||||
current_env["ACCELERATE_DYNAMO_BACKEND"] = dynamo_backend.value
|
||||
current_env["ACCELERATE_DYNAMO_MODE"] = args.dynamo_mode
|
||||
current_env["ACCELERATE_DYNAMO_USE_FULLGRAPH"] = str(args.dynamo_use_fullgraph)
|
||||
@ -163,7 +164,9 @@ def prepare_multi_gpu_env(args: argparse.Namespace) -> Dict[str, str]:
|
||||
try:
|
||||
dynamo_backend = DynamoBackend(args.dynamo_backend.upper())
|
||||
except ValueError:
|
||||
raise ValueError(f"Unknown dynamo backend: {args.dynamo_backend.upper()}. Choose between {DYNAMO_BACKENDS}.")
|
||||
raise ValueError(
|
||||
f"Unknown dynamo backend: {args.dynamo_backend.upper()}. Choose between {DynamoBackend.list()}."
|
||||
)
|
||||
current_env["ACCELERATE_DYNAMO_BACKEND"] = dynamo_backend.value
|
||||
current_env["ACCELERATE_DYNAMO_MODE"] = args.dynamo_mode
|
||||
current_env["ACCELERATE_DYNAMO_USE_FULLGRAPH"] = str(args.dynamo_use_fullgraph)
|
||||
@ -419,7 +422,9 @@ def prepare_sagemager_args_inputs(
|
||||
try:
|
||||
dynamo_backend = DynamoBackend(args.dynamo_backend.upper())
|
||||
except ValueError:
|
||||
raise ValueError(f"Unknown dynamo backend: {args.dynamo_backend.upper()}. Choose between {DYNAMO_BACKENDS}.")
|
||||
raise ValueError(
|
||||
f"Unknown dynamo backend: {args.dynamo_backend.upper()}. Choose between {DynamoBackend.list()}."
|
||||
)
|
||||
|
||||
# Environment variables to be set for use during training job
|
||||
environment = {
|
||||
|
||||
@ -15,6 +15,7 @@
|
||||
import os
|
||||
import socket
|
||||
from contextlib import contextmanager
|
||||
from functools import partial
|
||||
from types import MethodType
|
||||
|
||||
import torch
|
||||
@ -109,22 +110,27 @@ def wait_for_everyone():
|
||||
PartialState().wait_for_everyone()
|
||||
|
||||
|
||||
def save(obj, f, safe_serialization=False):
|
||||
def save(obj, f, save_on_each_node: bool = False, safe_serialization: bool = False):
|
||||
"""
|
||||
Save the data to disk. Use in place of `torch.save()`.
|
||||
|
||||
Args:
|
||||
obj: The data to save
|
||||
f: The file (or file-like object) to use to save the data
|
||||
safe_serialization (`bool`, *optional*, defaults to `False`): Whether to save `obj` using `safetensors`
|
||||
obj:
|
||||
The data to save
|
||||
f:
|
||||
The file (or file-like object) to use to save the data
|
||||
save_on_each_node (`bool`, *optional*, defaults to `False`):
|
||||
Whether to only save on the global main process
|
||||
safe_serialization (`bool`, *optional*, defaults to `False`):
|
||||
Whether to save `obj` using `safetensors`
|
||||
"""
|
||||
save_func = torch.save if not safe_serialization else partial(safe_save_file, metadata={"format": "pt"})
|
||||
if PartialState().distributed_type == DistributedType.TPU:
|
||||
xm.save(obj, f)
|
||||
elif PartialState().local_process_index == 0:
|
||||
if safe_serialization:
|
||||
safe_save_file(obj, f, metadata={"format": "pt"})
|
||||
else:
|
||||
torch.save(obj, f)
|
||||
elif PartialState().is_main_process and not save_on_each_node:
|
||||
save_func(obj, f)
|
||||
elif PartialState().is_local_main_process and save_on_each_node:
|
||||
save_func(obj, f)
|
||||
|
||||
|
||||
@contextmanager
|
||||
|
||||
@ -11,7 +11,7 @@
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
import copy
|
||||
import os
|
||||
import unittest
|
||||
from tempfile import TemporaryDirectory
|
||||
@ -45,6 +45,18 @@ class ModelForTest(nn.Module):
|
||||
return self.linear2(self.batchnorm(self.linear1(x)))
|
||||
|
||||
|
||||
class ModelForTestCopy(nn.Module):
|
||||
def __init__(self, id: int):
|
||||
super().__init__()
|
||||
self.id = id
|
||||
self.linear1 = nn.Linear(3, 4)
|
||||
self.batchnorm = nn.BatchNorm1d(4)
|
||||
self.linear2 = nn.Linear(4, 5)
|
||||
|
||||
def forward(self, x):
|
||||
return self.linear2(self.batchnorm(self.linear1(x))), self.id
|
||||
|
||||
|
||||
class ModelForTestTiedWeights(nn.Module):
|
||||
def __init__(self):
|
||||
super().__init__()
|
||||
@ -325,6 +337,25 @@ class BigModelingTester(unittest.TestCase):
|
||||
output = model(x)
|
||||
self.assertTrue(torch.allclose(expected, output.cpu(), atol=1e-5))
|
||||
|
||||
@require_cuda
|
||||
def test_dispatch_model_copy(self):
|
||||
original_model = ModelForTestCopy(id=1)
|
||||
device_map = {"linear1": 0, "batchnorm": "cpu", "linear2": 0}
|
||||
|
||||
x = torch.randn(2, 3)
|
||||
expected, original_output_id = original_model(x)
|
||||
|
||||
dispatch_model(original_model, device_map)
|
||||
|
||||
copied_model = copy.deepcopy(original_model)
|
||||
copied_model.id = 2
|
||||
output, copied_output_id = copied_model(x)
|
||||
|
||||
self.assertEqual(original_model.id, original_output_id)
|
||||
self.assertEqual(copied_model.id, copied_output_id)
|
||||
self.assertFalse(copied_model.linear1.forward is original_model.linear1.forward)
|
||||
self.assertTrue(torch.allclose(expected, output.cpu(), atol=1e-5))
|
||||
|
||||
@require_cuda
|
||||
def test_dispatch_model_move_offloaded_model(self):
|
||||
model = ModelForTest()
|
||||
|
||||
@ -92,11 +92,11 @@ class KwargsHandlerTester(unittest.TestCase):
|
||||
prefix = "ACCELERATE_DYNAMO_"
|
||||
# nvfuser's dynamo backend name is "nvprims_nvfuser"
|
||||
# use "nvfuser" here to cause exception if this test causes os.environ changed permanently
|
||||
os.environ[prefix + "BACKEND"] = "nvfuser"
|
||||
os.environ[prefix + "BACKEND"] = "aot_ts_nvfuser"
|
||||
os.environ[prefix + "MODE"] = "reduce-overhead"
|
||||
|
||||
dynamo_plugin_kwargs = TorchDynamoPlugin().to_kwargs()
|
||||
self.assertEqual(dynamo_plugin_kwargs, {"backend": "nvfuser", "mode": "reduce-overhead"})
|
||||
self.assertEqual(dynamo_plugin_kwargs, {"backend": "aot_ts_nvfuser", "mode": "reduce-overhead"})
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
|
||||
@ -19,6 +19,8 @@ import random
|
||||
import shutil
|
||||
import tempfile
|
||||
import unittest
|
||||
import uuid
|
||||
from contextlib import contextmanager
|
||||
|
||||
import pytest
|
||||
import torch
|
||||
@ -201,6 +203,71 @@ class CheckpointTest(unittest.TestCase):
|
||||
self.assertEqual(opt_state1, opt_state3)
|
||||
self.assertEqual(ground_truth_rands, test_rands)
|
||||
|
||||
def test_can_resume_training_checkpoints_relative_path(self):
|
||||
# See #1983
|
||||
# This test is like test_can_resume_training but uses a relative path for the checkpoint and automatically
|
||||
# infers the checkpoint path when loading.
|
||||
@contextmanager
|
||||
def temporary_relative_directory():
|
||||
# This is equivalent to tempfile.TemporaryDirectory() except that it returns a relative path
|
||||
rand_dir = f"test_path_{uuid.uuid4()}"
|
||||
os.mkdir(rand_dir)
|
||||
try:
|
||||
yield rand_dir
|
||||
finally:
|
||||
shutil.rmtree(rand_dir)
|
||||
|
||||
with temporary_relative_directory() as tmpdir:
|
||||
set_seed(42)
|
||||
model = DummyModel()
|
||||
optimizer = torch.optim.Adam(params=model.parameters(), lr=1e-3)
|
||||
train_dataloader, valid_dataloader = dummy_dataloaders()
|
||||
project_config = ProjectConfiguration(automatic_checkpoint_naming=True)
|
||||
|
||||
# Train baseline
|
||||
accelerator = Accelerator(project_dir=tmpdir, project_config=project_config)
|
||||
model, optimizer, train_dataloader, valid_dataloader = accelerator.prepare(
|
||||
model, optimizer, train_dataloader, valid_dataloader
|
||||
)
|
||||
# Save initial
|
||||
accelerator.save_state()
|
||||
(a, b) = model.a.item(), model.b.item()
|
||||
opt_state = optimizer.state_dict()
|
||||
ground_truth_rands = train(3, model, train_dataloader, optimizer, accelerator)
|
||||
(a1, b1) = model.a.item(), model.b.item()
|
||||
opt_state1 = optimizer.state_dict()
|
||||
|
||||
# Train partially
|
||||
set_seed(42)
|
||||
model = DummyModel()
|
||||
optimizer = torch.optim.Adam(params=model.parameters(), lr=1e-3)
|
||||
train_dataloader, valid_dataloader = dummy_dataloaders()
|
||||
project_config = ProjectConfiguration(iteration=1, automatic_checkpoint_naming=True)
|
||||
accelerator = Accelerator(project_dir=tmpdir, project_config=project_config)
|
||||
model, optimizer, train_dataloader, valid_dataloader = accelerator.prepare(
|
||||
model, optimizer, train_dataloader, valid_dataloader
|
||||
)
|
||||
accelerator.load_state() # <= infer the directory automatically
|
||||
(a2, b2) = model.a.item(), model.b.item()
|
||||
opt_state2 = optimizer.state_dict()
|
||||
self.assertEqual(a, a2)
|
||||
self.assertEqual(b, b2)
|
||||
self.assertEqual(opt_state, opt_state2)
|
||||
|
||||
test_rands = train(2, model, train_dataloader, optimizer, accelerator)
|
||||
# Save everything
|
||||
accelerator.save_state()
|
||||
|
||||
# Load everything back in and make sure all states work
|
||||
accelerator.load_state(os.path.join(tmpdir, "checkpoints", "checkpoint_1"))
|
||||
test_rands += train(1, model, train_dataloader, optimizer, accelerator)
|
||||
(a3, b3) = model.a.item(), model.b.item()
|
||||
opt_state3 = optimizer.state_dict()
|
||||
self.assertEqual(a1, a3)
|
||||
self.assertEqual(b1, b3)
|
||||
self.assertEqual(opt_state1, opt_state3)
|
||||
self.assertEqual(ground_truth_rands, test_rands)
|
||||
|
||||
def test_invalid_registration(self):
|
||||
t = torch.tensor([1, 2, 3])
|
||||
t1 = torch.tensor([2, 3, 4])
|
||||
|
||||