68 Commits

e2cc537db8 trackio (#3669)
* trackio

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: Abubakar Abid <abubakar@huggingface.co>

* seven -> eight

* Add trackio as a real tracker instead

* Sort

* Style

* Style

* Remove step

* Disable trackio on Python < 3.10

* Update src/accelerate/tracking.py

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* More style

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: Abubakar Abid <abubakar@huggingface.co>
2025-07-15 17:17:49 +02:00
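Per the commit trail above, trackio registers as an ordinary tracker backend and is disabled on Python < 3.10. A minimal sketch of the expected usage (project name and metric values are illustrative):

```python
from accelerate import Accelerator

# Sketch: trackio selected by name like any other tracker (Python >= 3.10 only).
accelerator = Accelerator(log_with="trackio")
accelerator.init_trackers("accelerate-demo")  # illustrative project name
accelerator.log({"train_loss": 0.42})         # no explicit step, per the "Remove step" commit
accelerator.end_training()
```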
6597dae780 Integrate SwanLab for offline/online experiment tracking for Accelerate (#3605)
* add support for SwanLabTracker and update related documentation

* add emoji in FRAMEWORK

* apply the style corrections and quality control

* add support for SwanLabTracker in tests

* fix bug in test_tracking
2025-06-18 15:42:29 +02:00
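SwanLab should likewise be selectable by name once the package is installed; a sketch assuming the tracker registered under "swanlab" (passing a list to `log_with` is standard Accelerate behavior):

```python
from accelerate import Accelerator

# Sketch: SwanLab alongside another backend; `log_with` accepts a list.
accelerator = Accelerator(log_with=["swanlab", "tensorboard"])
accelerator.init_trackers("accelerate-demo", config={"lr": 3e-4})  # illustrative run config
accelerator.log({"loss": 1.23}, step=1)
accelerator.end_training()
```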
2eaf5cdbbc remove ipex.optimize in accelerate (#3608)
* remove ipex.optimize in accelerate

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

* fix style issues

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

* Update intel_cpu.md

* Update launch.py

* fix comments

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

* fix style

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

* add logging

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

* Update launch.py

* Apply style fixes

---------

Signed-off-by: YAO Matrix <matrix.yao@intel.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-06-17 11:08:19 +02:00
7013365791 fix typos (#3549) 2025-05-08 14:10:12 +02:00
d02e51cc21 Update big_modeling.md for layerwise casting (#3548)
* Update big_modeling.md for layerwise casting

* doc fix
2025-05-06 09:50:53 +02:00
67a768be07 remove use_xpu to fix ut issues, we don't need this since XPU is OOB supported now (#3460)
* remove use_xpu to fix ut issues, we don't need this since XPU is OOB supported now

Signed-off-by: Yao, Matrix <matrix.yao@intel.com>

* fix style

Signed-off-by: Yao, Matrix <matrix.yao@intel.com>

* add deprecate warnings

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

* fix

Signed-off-by: YAO Matrix <matrix.yao@intel.com>

---------

Signed-off-by: Yao, Matrix <matrix.yao@intel.com>
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
2025-04-01 11:55:37 +02:00
d7c741a6bc Initial FSDP2 support (#3394)
* Feat: initial conversion tool draft

* Feat: add value mapping to conversion tool

* Refactor: move from os to pathlib

* Feat: add first tests

* Feat: more tests

* Feat: minor fixes + dataclass conversions

* Feat: more remapping

* Fix: namespace has no attribute version + style

* Fix: offload params behavior

* Feat: add option to only rename keys in the config file to

* Fix: wrong attr name

* Fix: partially resolve comments

* Feat: work on config command + minor fixes to reflect changes

* Refactor: style + quality

* Feat: fsdp2 initial work

* Feat: some cleanups and first running fsdp2

* Fix: version checks + mixed precision policy

* Refactor: style + quality

* Remove obsolete todos

* Feat: grad norm clipping

* Fix: tests + rename attrs

* Refactor: style + quality

* Fix: None object is not iterable

* Fix: default cpu_offload for fsdp2

* Fix: cpu offload now behaves correctly

* Feat: apply_activation_checkpointing

* Fix: append to models

* Feat: start on concept guide

* wip: concept guide

* Fix: toctree

* cleanup of the concept guide

* Fix: minor fixes + mp

* Fix: quality + | to union

* Feat: backwards compatibility + args cleanup

* Fix: style + quality

* Feat: enable dropping refs when getting named params

* Fix: memory footprint with fsdp2

* Feat: cpu ram efficient loading

* Fix: mp

* Fix: not warn about sync_modules if fsdp version is 1

* Refactor: minor changes

* Small fixes + refactors

* Feat: docs + cleanup

* Feat: saving works (not sure about optim)

* More loading/saving work

* Feat: disable local_state_dict for fsdp2

* Fix: fsdp2 convergence

* Feat: working comparison script

* Feat: memory tracking fsdp2

* Feat: memory visualizer

* Feat: more work on benchmark

* Fix: raise error if model+optimizer aren't prepared together

* Minor fixes

* Style

* More warnings

* Fix: reshard_after_forward vs sharding_strategy conflict

* Refactor: clean up accelerator

* Feat: more testing in fsdp2 benchmark

* Fix: memory visualizer

* Untested: support load/save_state

* Feat: concept guide improvements

* Refactor: concept guide

* Feat: benchmark works

* Feat: more work on fsdp2 benchmark

* Fix: note syntax

* Fix: small fixes + make original tests work

* Fix: grad scaling

* Feat: reshard after forward tests

* Feat: backward prefetch tests

* Feat: tests for fsdp2

* Refactor: minor fixes

* Feat: fsdp_utils docstrings

* Feat: autodoc fsdp.md

* Docs: get_module_children_bottom_up

* Fix: remove unused images

* Refactor: benchmark cleanup

* Fix: docs

* Feat: final doc changes

* Fix: torch.distributed has no attribute tensor

* Fix: style

* Feat: tests include version in failures

* Fix: benchmark force model to load in fp32

* Fix: rename runs

* Feat: last minor fixes

* Feat: new benchmark images
2025-03-27 15:01:18 -04:00
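The user-facing switch for the new backend is the FSDP version on the plugin; a minimal sketch, assuming the `fsdp_version` field landed on `FullyShardedDataParallelPlugin` as the config work above suggests:

```python
import torch
from accelerate import Accelerator
from accelerate.utils import FullyShardedDataParallelPlugin

# Sketch: opt into FSDP2 via the plugin (run with `accelerate launch`).
fsdp_plugin = FullyShardedDataParallelPlugin(fsdp_version=2)  # field name assumed from this PR
accelerator = Accelerator(fsdp_plugin=fsdp_plugin)

model = torch.nn.Linear(8, 8)
optimizer = torch.optim.AdamW(model.parameters())
# Per the "raise error if model+optimizer aren't prepared together" commit,
# prepare both in a single call under FSDP2.
model, optimizer = accelerator.prepare(model, optimizer)
```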
5cc99e6e02 fix: typos in documentation files (#3388)
* Update test_scheduler.py

* Update test_big_modeling.py

* Update test_state_checkpointing.py

* Update test_script.py

* Update cli.md

* Update quicktour.md
2025-02-10 13:11:50 -05:00
29be478862 [WIP] FEAT Decorator to purge accelerate env vars (#3252)
* [WIP] FEAT Decorator to purge accelerate env vars

In some circumstances, calling certain classes or functions can result
in accelerate env vars being set and not being cleaned up afterwards. As
an example, when calling:

TrainingArguments(fp16=True, ...)

The following env var will be set:

ACCELERATE_MIXED_PRECISION=fp16

This can affect subsequent code, since the env var takes precedence over
TrainingArguments(fp16=False). This is especially relevant for unit
testing, where we want to keep individual tests from having side
effects on one another. Decorate the unit test function or whole class
with this decorator to ensure that after each test, the env vars are
cleaned up. This works for both unittest.TestCase and normal
classes (pytest); it also works when decorating the parent class.

In its current state, this PR adds the new decorator and tests it, but
the decorator is not yet applied to potentially problematic functions or
classes.

* Linter

* Refactor code to be more readable

---------

Co-authored-by: [[ -z $EMAIL ]] && read -e -p "Enter your email (for git configuration): " EMAIL <muellerzr@gmail.com>
2024-11-25 12:04:56 -05:00
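A sketch of the decorator described above; the exported name `purge_accelerate_environment` is an assumption, since the PR text does not spell out the final symbol:

```python
import os

from accelerate.utils import purge_accelerate_environment  # name assumed from this PR

@purge_accelerate_environment
def test_fp16_does_not_leak():
    # Code under test may set ACCELERATE_* variables as a side effect...
    os.environ["ACCELERATE_MIXED_PRECISION"] = "fp16"

test_fp16_does_not_leak()
# ...but the decorator restores the environment afterwards, so nothing
# leaks into subsequent tests.
assert "ACCELERATE_MIXED_PRECISION" not in os.environ
```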
bf4572b6ce [Utils] align_module_device (#3204)
* implement align_module

* add docs

* move to modeling utils, integrate into existing source code

* update source, expose through utils

* Suggested docstring

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Rewrite for readability, add try finally

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* Use try-finally when aligning with hook

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* apply style

* improve get_state_dict_from_offload readability

* Update docstring

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* rename to align_module_device, update docstring

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2024-11-01 09:05:50 -04:00
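The utility is a context manager that temporarily gathers a (possibly offloaded) module's parameters onto one execution device and restores them on exit (the try/finally mentioned above); a small sketch:

```python
import torch
from accelerate.utils import align_module_device

model = torch.nn.Linear(4, 4)

# Inside the context, parameters live on the requested device; on exit they
# are moved back (or re-offloaded if the module had offloading hooks).
with align_module_device(model, execution_device="cpu"):
    out = model(torch.randn(1, 4))
```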
735dfa3018 [Utils] has_offloaded_params (#3188)
* implement has_offloaded_params

* update docstring

* expose to utils

* add docs

* apply style, quality

* add tests
2024-10-23 16:44:02 +02:00
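A quick sketch of the predicate, which reports whether a module's weights are held off-device by Accelerate's hook mechanism:

```python
import torch
from accelerate.utils import has_offloaded_params

model = torch.nn.Linear(4, 4)
# True only when the module carries an AlignDevicesHook with offloading enabled.
print(has_offloaded_params(model))  # False for a plain, non-offloaded module
```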
6f79b63b86 Trigger weights_only=True by default for all compatible objects (#3036)
* rebase

* Update torch v

* Rename

* Prop to docs

* Actually reverse states

* Rebase fully

* Restore old state

* Keep as load()

* No need for explicit anymore

* Check numpy version, dtypes was added in 1.25

* Clean up diff

* Fix hang
2024-10-10 14:08:24 -04:00
fb68cb9d0e Refactor scaler to util (#3142)
* Refactor scaler to util

* Document

* Use the distributed_type directly
2024-10-08 11:07:01 -04:00
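The refactor suggests scaler construction became a helper keyed on the distributed type; assuming the utility is exposed as `get_grad_scaler`, a sketch:

```python
from accelerate.utils import get_grad_scaler  # name assumed from this PR

# Sketch: returns the appropriate GradScaler variant for the current setup
# (e.g. the sharded scaler under FSDP, the plain CUDA scaler otherwise).
scaler = get_grad_scaler()
```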
e9e5a73fcc POC: multiple model/configuration DeepSpeed support (#3097)
* Bookmark

* Migratory

* Uncomment

* Rm name to model for now

* Rm container

* Left: test

* Allow only wrapping one model

* Add warning but only ref once

* Refine

* Update src/accelerate/accelerator.py

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

* Finish stas nits

* Clean

* Fixup test + test writing

* Fully working

* Fin

* Nit

* Quality

* Update src/accelerate/accelerator.py

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

* Actionable error

* Make note of when its enabled

* Apply suggestions from code review

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

* Merge tests

* Merge

* Add currently broken test script

* Push the working implementation

* Fin

* Add guards for user behavior

* Test nits

* TODO: finish knowledge distillation example

* Update tests/deepspeed/test_deepspeed_multiple_model.py

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Allow for dict-like interface

* Get rid of disable

* Uncomment

* Complete rewrite to force a dict to be used

* Working tests/fin

* Use name as stas suggestion

* Clean

* docnit

* toctree

* toctree

* Missing ref

* Put in break

* Smaller diff

* Make note on how to use zeroinit

* Make note about accelerator ds plugin

* More docnits

* Apply suggestions from code review

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Limit users to not pass in another ds plugin to another accelerator

* not implemented err + Make a note about why no params

* Apply suggestions from code review from Stas

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

* Add deepspeed_plugins arg + update doc

* Plugin -> plugins

* Change enable() -> select()

* Update ref properly + test

* Be consistent, model1,model2...

* first_, second_

* A few more auto values

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>

---------

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
2024-09-13 07:28:06 -04:00
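Piecing together the commit trail ("Complete rewrite to force a dict to be used", "Change enable() -> select()"), the final interface looks roughly like the sketch below; the `select_deepspeed_plugin` name and config paths are assumptions:

```python
from accelerate import Accelerator
from accelerate.utils import DeepSpeedPlugin

# Sketch: one plugin per model configuration, keyed by name.
deepspeed_plugins = {
    "student": DeepSpeedPlugin(hf_ds_config="student_zero3.json"),  # illustrative paths
    "teacher": DeepSpeedPlugin(hf_ds_config="teacher_zero2.json"),
}
accelerator = Accelerator(deepspeed_plugins=deepspeed_plugins)

# The first plugin is active by default; switch before preparing the other model.
accelerator.state.select_deepspeed_plugin("teacher")  # selector name assumed
```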
79a8426416 🚨🚨🚨 The Great Deprecation 🚨🚨🚨 (#3098)
* The great purge

* Clean

* Some more fixings

* Some more deprecations Benjamin found

* Fix kwarghandler test
2024-09-12 21:12:32 -04:00
fc52fa969e [docs] Doc sprint (#3099)
* docs sprint

* youtube id

* feedback
2024-09-11 13:31:47 -04:00
a452327e8e Enable FSDP & Deepspeed + FP8 (#2983)
* Working version rebased from main

* kwargs

* Clean

* Fix more nits

* Fin

* Delay autocast flag

* Enable FP8 autocast during eval only if specified

* Fin

* Rm comment

* All done

* Zero3 works!

* Let the wrapper come off during unwrap_model

* Add import check

* Migrate all to benchmarks folder and make TE import check work

* Add readme

* Add README to benchmarks folder

* Update CLI to now include fp8 args

* Add test config for 0_34

* Finish adding to config yaml

* Write docs

* Expound docs w/ FP8

* Add to toctree
2024-08-14 14:57:01 -04:00
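A sketch of the combination this PR enables, layering FP8 (via the existing kwargs handler) on top of an FSDP plugin; it assumes TransformerEngine is installed and a distributed launch, with all settings illustrative:

```python
from accelerate import Accelerator
from accelerate.utils import FP8RecipeKwargs, FullyShardedDataParallelPlugin

# Sketch: FP8 through the TransformerEngine backend combined with FSDP.
accelerator = Accelerator(
    mixed_precision="fp8",
    kwargs_handlers=[FP8RecipeKwargs(backend="te")],
    fsdp_plugin=FullyShardedDataParallelPlugin(),
)
```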
90d5023901 Add small util to enable FSDP offloading quickly (#3006)
* Wrap up util

* Add small util

* Update doc

* Don't req

* Clean
2024-08-12 11:53:02 -04:00
5d5d07abfc Add Profiler Support for Performance Analysis (#2883)
* Add torch profiler

* Add example

* Fix rank 0 saving

* Add docstring

* Add profile readme

* Fix minor

* Fix example path

* Add exp test code

* Rename profile dir

* Change readme

* Change save format

* Minor

* Enhance docstring example

* Add user guide

* Add memory profile guide

* Enhance error msg

* Fix type hinting

* Minor refactor

* Fix hf tag

* Fix copyright year

* Mv toctree

* Fix image path

* Fix license year

* Change profiler pattern name

* Update package reference

* Add slow decorator

* Check output value
2024-07-01 18:01:09 -04:00
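The profiler rides the kwargs-handler mechanism plus an `Accelerator.profile` context manager; a sketch (activity names follow `torch.profiler`):

```python
import torch
from accelerate import Accelerator
from accelerate.utils import ProfileKwargs

profile_kwargs = ProfileKwargs(activities=["cpu"], record_shapes=True)
accelerator = Accelerator(kwargs_handlers=[profile_kwargs])

model = accelerator.prepare(torch.nn.Linear(16, 16))

with accelerator.profile() as prof:
    model(torch.randn(4, 16))

print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```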
4ba436eccc Introduce shard-merging util for FSDP (#2772)
* Initial commit

* Now to test

* Store false

* Slight tweaks

* Fix naming

* Got it all working with tests

* Use not for safetensors arg

* rm change

* Add docs

* Adjust based on Marc's feedback

* Specify just weights

* Update tests to include CLI and swap namings

* Fin

* Rm unused

* Rm again
2024-05-16 13:49:50 -04:00
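The merger is available programmatically and, per the CLI tests above, as an `accelerate merge-weights` subcommand; a sketch with illustrative paths:

```python
from accelerate.utils import merge_fsdp_weights

# Sketch: fold sharded FSDP checkpoint files into a single checkpoint.
merge_fsdp_weights(
    checkpoint_dir="ckpt/pytorch_model_fsdp_0",  # illustrative paths
    output_path="merged_model",
    safe_serialization=True,  # the boolean toggle behind "Use not for safetensors arg"
)
```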
2c767338f2 Fix Documentation in FSDP and DeepSpeed Concept Guide (#2725)
* address part of @stas00's comments

* automatically set sync_module_states if low_cpu_mem is set

* Apply suggestions from @stas00

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

* add links from fsdp and deepspeed docs. fix deepspeed imports

* replace raise in accelerate.launch

---------

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
2024-05-01 09:25:18 -04:00
9557598c45 Add Upcasting for FSDP in Mixed Precision. Add Concept Guide for FSDP and DeepSpeed. (#2674)
* draft fsdp vs ds

* reframe to migration doc

* updated functionality section

* cast to float32

* improvements to float32 casting

* some cleanup

* addressed @pacman100's comments

* Apply some of @muellerzr's suggestions

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* change to subsections

* changed the manner upcasting warnings are surfaced

* update document to discuss fsdp and ds plugins. minor fixes.

* @muellerzr's new suggestions

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

* explain all-or-nothing

* add @pacman100's comments

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>

* minor fix

---------

Co-authored-by: Yu Chin Fabian Lim <flim@sg.ibm.com>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
2024-04-29 11:19:03 -04:00
b2fc3a3b0e Refactor affinity and make it stateful (#2579)
* Move under initialized check

* One more

* Numa affinity

* Docs

* Import

* Add verbosity

* Apply suggestions from code review

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Improve import err

* Test + fix bug

* Update src/accelerate/utils/environment.py

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

* Clean

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
2024-03-26 09:51:37 -04:00
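The stateful affinity control is applied when Accelerate's state initializes and sits behind an environment variable; a sketch, assuming the toggle is named `ACCELERATE_CPU_AFFINITY`:

```python
import os

# Sketch: opt in before Accelerate state initializes (env var name assumed).
os.environ["ACCELERATE_CPU_AFFINITY"] = "1"

from accelerate import Accelerator

accelerator = Accelerator()  # NUMA affinity is set during state initialization
```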
85a75d4c3d [docs] Missing functions from API (#2580) 2024-03-22 13:40:21 -04:00
c0b16b684f [docs] Accelerator API (#2465)
* update

* make style

* align toctree title

* feedback
2024-02-28 08:55:36 -08:00
3fb9a3a231 DOC: Fixes to Accelerator docstring (#2443)
* DOC Fixes to Accelerator docstring

- Add more links to accelerator classes where applicable
- Fix a typo: KwargHandler => KwargsHandler

* Fix syntax issues

Not sure how to add a link when the type is `list[SomeType]`, so just
removed it for now.

* Fixing link for KwargsHandler

* Add KwargsHandler to API docs

* Also add doc entry to kwargs.md
2024-02-26 14:11:36 -05:00
164193fa7e [Big deprecation] Introduces a DataLoaderConfig (#2441)
* Deprecate and introduce dataloader_config

* Update docs

* Doc nits

* More tests, adjust based on PR review

* Fixup tests

* Nits

* Update docs/source/quicktour.md

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Clean

* Actually create one

* Forgot to change one

* Use pytest

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2024-02-14 13:26:02 -05:00
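The deprecation gathers the loose dataloader arguments of `Accelerator` into one dataclass; a sketch of the replacement style (field names mirror the old keyword arguments):

```python
from accelerate import Accelerator
from accelerate.utils import DataLoaderConfiguration

# Before (now deprecated): Accelerator(split_batches=True, even_batches=True, ...)
# After: one config object carrying the same knobs.
dataloader_config = DataLoaderConfiguration(split_batches=True, even_batches=True)
accelerator = Accelerator(dataloader_config=dataloader_config)
```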
b443be70fb Make torch xla available on GPU (#2176)
* Make torch xla available on GPU

* format code

* fix documentation build error

* update according to the comments

* Replace DistributedType.TPU with DistributedType.XLA

* make all ut pass

* format code

* update comments

* skip test

* format code

* skip FSDPPluginIntegration for torchxla

* bring back custom_sampler_check

* fix ut

* format code

* format code

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2024-02-14 10:19:25 -05:00
0867c09318 torch-native pipeline parallelism for big models (#2345)
* Broken version

* Timing I would expect

* Working version!

* Use MethodType

* working test

* Tests

* Use no split module classes explicitly

* Put split_points in pipeline

* Store split points in hf_split_points

* fix case num_process=1

* Allow for dynamic batch padding (#2352)

* Allow for dynamic batch padding

* Fix test

* Update src/accelerate/inference.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Break early after the first valid bs is found

* Less slicy-dicy

* Test cv model

* Start, need to test

* Use dataloader-like logic

* Refactor to utils

* With tests

* Update the source

* Clean

* bs=1 case

* Add test

* add some failing test

* Almost working version

* Much cleaner implementation

* Use pad_input_tensor

* All tests passing!

* Do it at tracing too

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
Co-authored-by: Marc Sun <marc@huggingface.co>

* Rm literal

* Allow users to pass in max_memory

* Note about recursion

* Document, document, document

* Right import check

* Fix bug, add tests to multigpu runners

* Change default to None

* Start of docs

* Try again?

* Try again x2

* Trailing comma

* Move import

* Clean

* typehint

* typo

* From code review

* Use num_chunks

* Update tests/test_utils.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Bad copy/paste

* hf_split_points

---------

Co-authored-by: Marc Sun <marc@huggingface.co>
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2024-02-06 13:00:40 -05:00
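The entry point here is `prepare_pippy`, which splits a model into pipeline stages from an example input and (per nested PR #2352) pads ragged batches dynamically; a sketch with illustrative shapes, assuming the torchpippy package and an `accelerate launch` run with one process per stage:

```python
import torch
from accelerate.inference import prepare_pippy

model = torch.nn.Sequential(
    torch.nn.Linear(8, 8), torch.nn.ReLU(), torch.nn.Linear(8, 2)
)
example = torch.randn(4, 8)

# Sketch: trace and split using the example input; split points default to "auto".
model = prepare_pippy(model, split_points="auto", example_args=(example,))
output = model(example)
```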
fce61a99ec Fixed typos in readme files of docs folder. (#2329) 2024-01-12 05:44:28 -05:00
5cac878984 Add more missing items (#2309)
* Add more missing items

* Update docs/source/package_reference/utilities.md

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2024-01-08 14:58:23 -05:00
b0528392c8 Integrate MS-AMP Support for FP8 as a separate backend (#2232)
* Redo with new version

* Store

* Working version

* Separate for now

* Min diff

* check if available

* Better docstring

* Check for multiple models and optimizers

* Check for TE and MSAMP args separately

* String clarity

* Better docstring and types

* Quality

* Simplify a bunch for fp8

* Convert literals to type alias

* Better err

* Docs

* toc typo

* Apply suggestions from code review

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: Maria Khalusova <kafooster@gmail.com>

* Address doc nits

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
Co-authored-by: Maria Khalusova <kafooster@gmail.com>
2023-12-15 13:07:55 -05:00
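MS-AMP is routed through the same FP8 kwargs handler as TransformerEngine, differing mainly in its optimization level; a sketch (`opt_level` follows MS-AMP's O1/O2 scheme, msamp assumed installed):

```python
from accelerate import Accelerator
from accelerate.utils import FP8RecipeKwargs

# Sketch: select the MS-AMP backend instead of TransformerEngine.
kwargs = FP8RecipeKwargs(backend="msamp", opt_level="O2")
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=[kwargs])
```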
54d670be41 [Docs] Add doc for cpu/disk offload (#2231)
* Add doc offload

* fix

* Update docs/source/concept_guides/big_model_inference.md

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-12-07 12:02:06 -05:00
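The big-model-inference pattern behind this doc addition; a sketch (checkpoint path illustrative):

```python
import torch
from accelerate import init_empty_weights, load_checkpoint_and_dispatch

with init_empty_weights():
    model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.Linear(8, 8))

# Sketch: weights that fit on accelerators stay there; the remainder spills
# to CPU RAM and then to disk.
model = load_checkpoint_and_dispatch(
    model,
    checkpoint="checkpoint_dir",  # illustrative path
    device_map="auto",
    offload_folder="offload",     # disk offload directory
)
```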
2b53a9089c [docs] troubleshooting guide (#2133)
* first take at troubleshooting guide

* logging moved to the troubleshooting guide

* TOC updates and guide edits

* minor edits

* moved to tutorials

* feedback addressed

* batch size clarifications

* typo

* kernel, early stopping hanging, feedback
2023-11-13 17:58:56 -05:00
183c9dd3ce Allow for ACCELERATE_SEED env var (#2126)
* Manual seeds

* None

* Add to docs

* Document

* Use torch seed for simplicity

* Rm from doc

* Better version
2023-11-07 12:05:42 -05:00
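Per the title, the seed can come from the environment; a sketch pairing it with the long-standing programmatic route (the env var spelling is taken from the PR title, and the "Rm from doc" commit leaves its final status unclear):

```python
import os

os.environ["ACCELERATE_SEED"] = "42"  # spelling assumed from the PR title

from accelerate.utils import set_seed

set_seed(42)  # the established programmatic equivalent
```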
217faafe08 Fix flag typo (#2090) 2023-10-27 08:46:13 -04:00
e1fab05ce7 Add ClearML tracker (#2034)
* add clearml tracker

* fix style in tracking.py

* run ruff --fix

* run ruff fix on src/accelerate/utils/__init__.py as well

* properly run make style

* add tests

* modify code based on code review

* changes based on code review

* quote data_frame

* fix docs

* remove pandas req in log_table

* style changes

* add tracker to docs
2023-10-26 12:13:28 -04:00
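Scalar logging works as with any backend, and tracker-specific extras such as `log_table` (pandas-optional, per the commit above) are reachable through `Accelerator.get_tracker`; a sketch, with the `log_table` signature assumed to mirror the other trackers:

```python
from accelerate import Accelerator

accelerator = Accelerator(log_with="clearml")
accelerator.init_trackers("accelerate-demo")  # illustrative project name
accelerator.log({"loss": 0.5}, step=1)

clearml = accelerator.get_tracker("clearml")
clearml.log_table("eval", columns=["metric", "value"], data=[["acc", 0.9]], step=1)
accelerator.end_training()
```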
409a9df0a4 Introduce model memory estimator (#1876)
* Estimator

* Right err

* Fixup tests

* trust remote code

* Print output for debugging purposes

* trust_remote_code

* Address some comments

* change doc to req arg

* Properly check for _no_split_modules in transformer models

* Note on transformer models

* Check/handle petabytes

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* Tests are passing locally again, better handle for no_split

* Adjust setup?

* Let's see if the cleaner version works

* Refactor and clean up for testing

* Specify in comments

* Better error handling

* A million tests later

* More tests + err handling

* Require hub

* More with remote code

* Clean up

* Add a test for no_split

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Docstring

* Address some comments

* rm einops

* Let it err out

* Adjust errs

* Tests

* Reduce test repeats

* Clean up borders

* Tip on 20%

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-08-24 12:12:01 -04:00
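The estimator ships as a CLI entry point, along the lines of `accelerate estimate-memory bert-base-cased --library_name transformers` (model name illustrative); it prints per-dtype memory requirements for loading and training, with the caveat from the "Tip on 20%" commit that real usage tends to run about 20% higher.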
0768905f77 remove casting to FP32 when saving state dict (#1868)
* remove casting to FP32 when saving state dict

* update docs.
2023-08-21 19:08:29 +05:30
a6291e43b0 Expose autocast kwargs and simplify autocast wrapper (#1740)
* kwarg handler

* Proper default

* Enabled

* Rework

* Clean

* Ref autocast properly
2023-07-20 12:49:30 -04:00
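The exposed knobs ride the existing kwargs-handler mechanism; a sketch:

```python
from accelerate import Accelerator
from accelerate.utils import AutocastKwargs

# Sketch: tune autocast behavior instead of accepting the defaults.
autocast_kwargs = AutocastKwargs(enabled=True, cache_enabled=True)
accelerator = Accelerator(mixed_precision="fp16", kwargs_handlers=[autocast_kwargs])
```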
243288627d fix KwargsHandler.to_kwargs not working with os.environ initialization in __post_init__ (#1738)
* fix KwargsHandler.to_kwargs not working with os.environ initialization in __post_init__

* fix test_torch_dynamo_plugin such that it wouldn't change os.environ permanently

* move clear_os_environ func to utils/other and rename it

* reformat code in order to pass ci quality check

* modify the comment of utils.other.clear_environment
2023-07-19 12:00:53 -04:00
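The fix makes `to_kwargs()` compute its diff against defaults without permanently touching `os.environ`, using the environment-clearing helper this PR moves into `accelerate.utils`; a sketch of both pieces:

```python
import os

from accelerate.utils import AutocastKwargs, clear_environment

# to_kwargs() returns only the fields that differ from the dataclass defaults.
print(AutocastKwargs(cache_enabled=True).to_kwargs())  # {'cache_enabled': True}

# The relocated helper: os.environ is emptied inside the block, restored after.
with clear_environment():
    assert "PATH" not in os.environ
```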
daa1952f47 Update docs (#1736)
* Still in the works

* Utils to check

* More references

* Fin

* add utils

* toctree
2023-07-18 07:28:01 -04:00
dfbfbdfea8 Add docs for saving Transformers models (#1671)
* add section to package_reference/accelerator.md explaining saving for Transformers models

* rename `model` to `unwrapped_model`

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-07-03 10:34:30 -04:00
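The pattern that section documents; a sketch (the Transformers `model` is assumed to come from earlier in the script):

```python
from accelerate import Accelerator

accelerator = Accelerator()
# `model` is assumed: a Transformers model already passed through accelerator.prepare().
unwrapped_model = accelerator.unwrap_model(model)
unwrapped_model.save_pretrained(
    "output_dir",
    is_main_process=accelerator.is_main_process,
    save_function=accelerator.save,
)
```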
40f822a1e3 replace save funct in doc (#1672) 2023-06-30 17:03:19 -04:00
a0bfe2140c Bnb quantization (#1626)
* Add get_quantized_model func

* Add tests for 4bit and 8bit quantization

* Add tests

* Fix style

* Add offload tests

* Fix style

* Fix

* Fix conflict

* fix generate quality test

* fix style

* add check for bnb layers and fix .to(cpu)

* Fix 8bit serialization and memory issue

* add import

* Change quantize_model to load_and_quantize_model

* Add tests for saving 8bit model

* Fix bnb dataclass

* fix style

* fix tests

* fix style

* remove dependency on tie_weights

* remove dependency on base_model_prefix

* remove dependency on device

* fix style

* Add doc about quantization

* fix import

* Fix text

* fix func name

* fix arg in dataclass

* Update docs/source/usage_guides/quantization.md

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* fix function name

* Add real model

* Fix doc

* put bash tag

* Update src/accelerate/utils/bnb.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

---------

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-06-30 10:59:04 -04:00
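The renamed entry point (`quantize_model` -> `load_and_quantize_model`, per the commits above) pairs with a config dataclass; a sketch with an illustrative weights location:

```python
import torch
from accelerate import init_empty_weights
from accelerate.utils import BnbQuantizationConfig, load_and_quantize_model

with init_empty_weights():
    model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.Linear(8, 8))

# Sketch: quantize to 8-bit while loading the checkpoint.
bnb_config = BnbQuantizationConfig(load_in_8bit=True)
model = load_and_quantize_model(
    model,
    bnb_quantization_config=bnb_config,
    weights_location="checkpoint_dir",  # illustrative path
    device_map="auto",
)
```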
bc49d0f9b3 Doc save model (#1650)
* add doc for save_model func

* fix doc

* fix path issue

* add load_checkpoint_in_model doc in utilities

* oops

* Update docs/source/package_reference/utilities.md

Co-authored-by: Zach Mueller <muellerzr@gmail.com>

---------

Co-authored-by: Zach Mueller <muellerzr@gmail.com>
2023-06-27 16:08:56 -04:00
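The two utilities this doc covers, sketched together:

```python
import torch
from accelerate import Accelerator, load_checkpoint_in_model

accelerator = Accelerator()
model = torch.nn.Linear(4, 4)

# Saves unwrapped weights (safetensors by default) into the directory.
accelerator.save_model(model, "saved_model")

# The loading counterpart documented alongside it.
load_checkpoint_in_model(model, "saved_model/model.safetensors")
```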
7b4d12623a Doc to md (#1618)
* Convert doc files to MD

* Convert doc files to Markdown
2023-06-20 18:12:19 -04:00
109f3272f5 Swap env vars for XPU and IPEX + CLI (#1513)
* Swap env vars

* Clean up CLI

* use_xpu

* Add CLI docs

* Ipex only

* Nit

* Check

* Capitalize

* Make changes from review
2023-06-02 13:30:16 -04:00
af12e7b023 Add rdzv-backend (#1490)
* Add rdzv

* rm print

* Doc

* Better help
2023-05-31 08:06:55 -04:00
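The new flag slots into `accelerate launch`; a multi-node invocation would look something like `accelerate launch --rdzv_backend c10d --num_machines 2 --machine_rank 0 --main_process_ip 10.0.0.1 --main_process_port 29500 train.py` (addresses illustrative), with the default remaining the static backend.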
419c9ce22a Update gradient accumulation docs, and remove redundant example (#1461) 2023-05-24 10:43:42 -04:00
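The pattern the updated guide converges on; a self-contained sketch:

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator(gradient_accumulation_steps=4)
model = torch.nn.Linear(8, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
dataset = torch.utils.data.TensorDataset(torch.randn(64, 8), torch.randn(64, 1))
dataloader = torch.utils.data.DataLoader(dataset, batch_size=8)
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for inputs, targets in dataloader:
    # accumulate() skips gradient sync on non-update steps; backward() scales the
    # loss by the accumulation factor, and optimizer.step() only takes effect on
    # every fourth micro-batch.
    with accelerator.accumulate(model):
        loss = torch.nn.functional.mse_loss(model(inputs), targets)
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
```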