`Sparsity` as a term doesn't reflect the tools developed under AO. The `torch/ao/sparsity` package also contains utilities for structured pruning, which internally we have always referred to as just "pruning". To avoid confusion, we renamed `Sparsity` to `Prune`. We will not be introducing backwards compatibility, as this toolset has so far been kept under silent development.
This change will be reflected in the documentation as well.
**TODO:**
- [ ] Update the tutorials
- [ ] Confirm no bc-breakages
- [ ] Reflect the changes in the trackers and RFC docs
Fixes #ISSUE_NUMBER
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84867
Approved by: https://github.com/supriyar
Some of the subpackages were not included in `torch.nn.quantized`.
That would cause some specific cases to fail.
For example, `from torch.nn.quantized import dynamic` would work,
but `import torch; torch.nn.quantized.dynamic` would fail.
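A minimal sketch of the kind of fix, assuming the standard pattern of binding each subpackage in the parent's `__init__.py` (the actual file contents may differ):
```python
# torch/nn/quantized/__init__.py (illustrative sketch).
# Importing the subpackage here binds it as an attribute of the parent
# package, so plain attribute access works as well:
from torch.nn.quantized import dynamic  # noqa: F401

# After the line above runs as part of the package import:
#   import torch
#   torch.nn.quantized.dynamic  # no longer raises AttributeError
```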
Fixes #ISSUE_NUMBER
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84141
Approved by: https://github.com/andrewor14
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69415
Adding imports inside `torch/ao/__init__.py` has a high chance of causing circular dependencies, especially if sparsity and quantization use each other's resources.
To avoid these dependency issues, we can just keep the `__init__.py` empty.
Notes:
- This means that the user will have to explicitly import `torch.ao.quantization` or `torch.ao.sparsity` instead of `from torch import ao; ao.quantization.???` (see the sketch after these notes).
- The issue of circular dependencies caused by imports that bind submodules is [fixed in Python 3.7](https://docs.python.org/3/whatsnew/3.7.html#other-language-changes), which means this solution will become obsolete at [3.6's EoL](https://www.python.org/dev/peps/pep-0494/#and-beyond-schedule), which comes on [12/23/2022](https://devguide.python.org/#status-of-python-branches).
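A short sketch of the resulting import behavior with an empty `torch/ao/__init__.py`:
```python
from torch import ao  # imports the (empty) torch.ao package

# Attribute access fails until the submodule is imported somewhere, because
# the empty __init__.py never binds it as an attribute of the package:
# ao.quantization             # would raise AttributeError here

import torch.ao.quantization  # explicit import binds the attribute
ao.quantization               # now resolves fine
```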
Future options to resolve the circular dependencies (subject to discussion):
1. Use interface modules for binding submodules. For example, have a `torch/ao/_nn` with all the source code, and an interface `torch/ao/nn` with only the `__init__.py` file. The `__init__.py` files inside `torch/ao/_nn` will be empty (a sketch follows this list).
2. Completely isolate the common code into a separate submodule, e.g. `torch/ao/common`. The other submodules will not reference each other.
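A minimal sketch of option 1, using the hypothetical `torch/ao/_nn` / `torch/ao/nn` layout named above (a design sketch, not a committed implementation):
```python
# torch/ao/nn/__init__.py -- interface-only package (hypothetical layout).
# It re-exports the implementation that lives under torch/ao/_nn; since
# every __init__.py inside torch/ao/_nn stays empty, importing this
# interface never causes the implementation packages to import each other.
from torch.ao._nn import quantized  # noqa: F401
```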
Test Plan: Imported from OSS
Reviewed By: vkuzo
Differential Revision: D32860168
Pulled By: z-a-f
fbshipit-source-id: e3fe77e285992d34c87d8742e1a5e449ce417c36
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66058
After the initial migration from `torch.quantization` to `torch.ao.quantization`, some of the files did not change.
This happened because the migration was done in parallel, and some files landed while others were still in their original location.
This is the last fix in AO migration phase 1, which fully enables the `torch.ao.quantization` namespace.
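A sketch of the pattern that keeps the old namespace working during such a migration, assuming thin re-export shims are left at the old locations (the actual shim files may differ):
```python
# torch/quantization/quantize.py (illustrative shim).
# The implementation now lives under torch.ao.quantization; the old module
# re-exports it so existing `torch.quantization` imports keep working:
from torch.ao.quantization.quantize import *  # noqa: F401,F403
```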
Test Plan: `python test/test_quantization.py`
Reviewed By: vkuzo
Differential Revision: D31366066
Pulled By: z-a-f
fbshipit-source-id: bf4a74885be89d098df2d87e685795a2a64026c5
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58707
A minor refactor that changes the format of the imports.
This is done to avoid accidental circular dependencies.
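A generic sketch of the two import styles involved, using stdlib modules so the snippet runs standalone (the actual PR touches `torch/ao/sparsity` files):
```python
# A "from" import resolves the attribute the moment this line executes,
# which can raise ImportError if the target module is mid-initialization
# inside a circular import:
from os import path

# A plain import only binds the module object; attributes are looked up
# lazily at use time, after all modules have finished loading:
import os.path

def join_parts(a: str, b: str) -> str:
    return os.path.join(a, b)  # attribute resolved at call time
```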
Test Plan:
```
python test/test_ao_sparsity.py
```
Imported from OSS
Differential Revision: D28970961
Reviewed By: raghuramank100
Pulled By: z-a-f
fbshipit-source-id: c312742f5e218c435a1a643532f5842116bfcfff
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56617
This migrates the sparsity toolset to open source.
Test Plan: `buck test mode/opt //caffe2/test:ao`
Reviewed By: raghuramank100
Differential Revision: D27812207
fbshipit-source-id: cc87d9d2b486269901a4ad9b483615741a1cd712