Run your *raw* PyTorch training script on any kind of device
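Under the hood, the library wraps a plain PyTorch training loop with an Accelerator object that takes care of device placement and distributed setup. A minimal sketch of the idea, using a toy model and random data as stand-ins for your own training objects:

import torch
from accelerate import Accelerator

accelerator = Accelerator()  # detects CPU, single/multi GPU, or TPU from the environment

# toy placeholders for your own model, optimizer, and data
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
dataset = torch.utils.data.TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,)))
dataloader = torch.utils.data.DataLoader(dataset, batch_size=8)

# prepare() moves everything to the right device and wraps it for distributed training
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

model.train()
for inputs, targets in dataloader:
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(inputs), targets)
    accelerator.backward(loss)  # used in place of loss.backward()
    optimizer.step()

The same script then runs unchanged whether it is started with plain python, the accelerate CLI, or the traditional launchers shown below.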
Installation
Install PyTorch, then install accelerate from source:
git clone https://github.com/huggingface/accelerate.git
cd accelerate
pip install -e .
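To check that the editable install is picked up, you can, for example, print the installed version:

python -c "import accelerate; print(accelerate.__version__)"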
Tests
Using the accelerate CLI
Create a default config for your environment with
accelerate config
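This writes your answers to a small YAML file (by default in the Hugging Face cache directory) that accelerate launch reads back. An illustrative sketch of what a two-GPU config might contain; the exact keys and file location depend on your accelerate version:

compute_environment: LOCAL_MACHINE
distributed_type: MULTI_GPU
num_processes: 2
mixed_precision: 'no'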
then launch the GLUE example with
accelerate launch examples/glue_example.py --task_name mrpc --model_name_or_path bert-base-cased
Traditional launchers
To run the example script on multi-GPU:
python -m torch.distributed.launch --nproc_per_node 2 --use_env examples/glue_example.py \
    --task_name mrpc --model_name_or_path bert-base-cased
To run the example script on TPUs:
python tests/xla_spawn.py --num_cores 8 examples/glue_example.py \
    --task_name mrpc --model_name_or_path bert-base-cased
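Since the training script itself is launcher-agnostic, it should also run as a plain single-process job on CPU or a single GPU (assuming the example supports it):

python examples/glue_example.py --task_name mrpc --model_name_or_path bert-base-cased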
Description
🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, with automatic mixed precision (including fp8) and easy-to-configure FSDP and DeepSpeed support.
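Mixed precision is opted into when constructing the Accelerator; a minimal sketch (in current releases the mixed_precision argument accepts values such as "fp16" or "bf16"):

from accelerate import Accelerator

# autocast and gradient scaling are then handled for you around accelerator.backward(loss)
accelerator = Accelerator(mixed_precision="fp16")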