
Run your *raw* PyTorch training script on any kind of device
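
Adapting an existing training loop usually only takes a few added lines. The snippet below is a minimal sketch of the pattern; the toy model, data, and hyperparameters are illustrative and not taken from this repository's examples:

import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()

# Illustrative toy model and data, just to make the sketch self-contained.
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
dataset = TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,)))
dataloader = DataLoader(dataset, batch_size=8)

# prepare() wraps the model, optimizer, and dataloader so the same script runs
# on CPU, one or several GPUs, or a TPU, depending on how it is launched.
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for inputs, targets in dataloader:
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(inputs), targets)
    accelerator.backward(loss)  # replaces loss.backward()
    optimizer.step()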

Installation

Install PyTorch first, then install Accelerate from source with

git clone https://github.com/huggingface/accelerate.git
cd accelerate
pip install -e .
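
To check that the installation worked, you can, for instance, print the installed version:

python -c "import accelerate; print(accelerate.__version__)"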

Running the examples

Using the accelerate CLI

Create a default config for your environment with

accelerate config

then launch the GLUE example with

accelerate launch examples/glue_example.py --task_name mrpc --model_name_or_path bert-base-cased

Traditional launchers

To run the example script on multiple GPUs:

python -m torch.distributed.launch --nproc_per_node 2 --use_env examples/glue_example.py \
    --task_name mrpc --model_name_or_path bert-base-cased

To run the example script on TPUs:

python tests/xla_spawn.py --num_cores 8 examples/glue_example.py \
    --task_name mrpc --model_name_or_path bert-base-cased