12 Commits

SHA1 Message Date
bee84e88f8 [BE][Easy] improve submodule discovery for torch.ao type annotations (#144680)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144680
Approved by: https://github.com/Skylion007
2025-01-13 17:16:19 +00:00
b5042cfa58 Revert "remove allow-untyped-defs from torch/ao/__init__.py (#143604)"
This reverts commit 1598d458797e69376a9a148bd37fb6e8167d22e3.

Reverted https://github.com/pytorch/pytorch/pull/143604 on behalf of https://github.com/wdvr due to failing typing checks in torchao ([comment](https://github.com/pytorch/pytorch/pull/143604#issuecomment-2564043233))
2024-12-27 21:30:02 +00:00
1598d45879 remove allow-untyped-defs from torch/ao/__init__.py (#143604)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143604
Approved by: https://github.com/aorenste
2024-12-26 23:27:16 +00:00
c04f70bb30 [BE] enable UFMT for torch/ao/ (#128864)
Part of #123062

Pull Request resolved: https://github.com/pytorch/pytorch/pull/128864
Approved by: https://github.com/ezyang
2024-07-25 11:30:14 +00:00
afe15d2d2f Flip default value for mypy disallow_untyped_defs [3/11] (#127840)
See #127836 for details.
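
For reference, this default interacts with the per-file escape hatch that a later commit in this log removes from torch/ao/__init__.py; a hedged sketch with an illustrative function (not actual PyTorch code):

```
# mypy: allow-untyped-defs
# With the repo-wide default flipped to disallow_untyped_defs = True,
# a file keeps permitting unannotated functions only while it carries
# the comment above; it is removed file by file as modules get typed.

def legacy_helper(x):  # unannotated: an error without the escape hatch
    return x
```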

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127840
Approved by: https://github.com/oulgen
2024-06-08 18:28:01 +00:00
0e30da3f2f [refactor] Renaming ao.sparsity to ao.pruning (#84867)
`Sparsity` as a term doesn't reflect the tools developed under AO. The `torch/ao/sparsity` package also contains utilities for structured pruning, which internally we have always referred to as simply "pruning". To avoid confusion, we are renaming `ao.sparsity` to `ao.pruning`. We will not be introducing backwards-compatibility shims, as this toolset has so far been kept under silent development.

This change will be reflected in the documentation as well.
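
The practical effect on user code, sketched below (hedged: `WeightNormSparsifier` is an assumed example of a public name in this toolset):

```
# New import path after the rename; since no BC shim is provided,
# the old torch.ao.sparsity path stops working:
from torch.ao.pruning import WeightNormSparsifier

# from torch.ao.sparsity import WeightNormSparsifier  # now an ImportError
```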

**TODO:**
- [ ] Change the tutorials
- [ ] Confirm no bc-breakages
- [ ] Reflect the changes in the trackers and RFC docs

Pull Request resolved: https://github.com/pytorch/pytorch/pull/84867
Approved by: https://github.com/supriyar
2022-10-07 00:58:41 +00:00
521d1071f8 [quant] Subpackage import in nn.quantized (#84141)
Some of the subpackages were not imported in `torch.nn.quantized`'s `__init__`.
That caused certain access patterns to fail.
For example, `from torch.nn.quantized import dynamic` would work,
but `import torch; torch.nn.quantized.dynamic` would fail.
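
A minimal generic sketch of the underlying Python behavior, using a hypothetical package `pkg` (not the actual torch sources):

```
# Layout:
#   pkg/__init__.py   <- originally empty
#   pkg/dynamic.py
#
# With an empty __init__.py:
#   from pkg import dynamic   # works: this form imports the submodule itself
#   import pkg
#   pkg.dynamic               # AttributeError: 'pkg' has no attribute 'dynamic'
#
# The fix is the same idea as this commit: bind the submodule in __init__.py.
# pkg/__init__.py:
from . import dynamic  # noqa: F401
```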

Pull Request resolved: https://github.com/pytorch/pytorch/pull/84141
Approved by: https://github.com/andrewor14
2022-09-01 11:35:03 +00:00
4f5806dee7 [AO] Clear the contents of the torch/ao/__init__.py (#69415)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69415

Adding imports inside torch/ao/__init__.py has a high chance of causing circular dependencies, especially if sparsity and quantization use each other's resources.
To avoid these dependency issues, we just keep the __init__ empty.

Notes:
- This means that the user will have to explicitly import `torch.ao.quantization` or `torch.ao.sparsity`, instead of `from torch import ao; ao.quantization.???` (see the sketch after these notes).
- The circular dependencies caused by imports that bind submodules are [fixed in Python 3.7](https://docs.python.org/3/whatsnew/3.7.html#other-language-changes), which means this workaround becomes obsolete at [3.6's EoL](https://www.python.org/dev/peps/pep-0494/#and-beyond-schedule), which falls on [12/23/2021](https://devguide.python.org/#status-of-python-branches).
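
A short sketch of the resulting user-facing behavior (hedged: it reflects the tree as of this commit, not necessarily later ones):

```
import torch.ao.quantization  # explicit import binds the submodule
import torch.ao.sparsity

torch.ao.quantization  # now resolvable as attributes
torch.ao.sparsity

# Relying on the empty package alone fails:
#   from torch import ao
#   ao.quantization  # AttributeError: ao/__init__.py imports nothing
```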

Future options to resolve the circular dependencies (subject to discussion):
1. Use interface packages for binding submodules. For example, keep all the source code in torch/ao/_nn and expose an interface package torch/ao/nn containing only an __init__.py file; the __init__ files inside torch/ao/_nn stay empty (see the sketch after this list).
2. Completely isolate the common code into a separate submodule, e.g. torch/ao/common, so that the other submodules never reference each other.
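
A sketch of option 1 with generic, hypothetical names (a package `pkg` standing in for the torch tree):

```
# Hypothetical layout for the interface pattern:
#   pkg/_nn/__init__.py   <- empty, so importing pkg._nn binds nothing eagerly
#   pkg/_nn/linear.py     <- all the real source code lives here
#   pkg/nn/__init__.py    <- the interface package's only file
#
# pkg/nn/__init__.py then consists solely of re-exports:
from pkg._nn.linear import Linear  # noqa: F401
```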

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D32860168

Pulled By: z-a-f

fbshipit-source-id: e3fe77e285992d34c87d8742e1a5e449ce417c36
2021-12-09 01:21:30 -08:00
0d020effab [quant] Fix the parts that were missing after initial migration (#66058)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66058

After the initial migration from `torch.quantization` to `torch.ao.quantization`, some of the files were left unchanged.
This happened because the migration was done in parallel: some of the files landed while others were still in their original location.
This is the last fix in AO migration phase 1, and it completely enables the `ao.quantization` namespace.
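
With the namespace fully enabled, user code can target the new location directly; a brief usage sketch (hedged: `get_default_qconfig` is assumed as a representative API of the migrated namespace):

```
import torch.ao.quantization as aoq

# the migrated namespace mirrors the legacy torch.quantization surface
qconfig = aoq.get_default_qconfig("fbgemm")
```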

Test Plan: `python test/test_quantization.py`

Reviewed By: vkuzo

Differential Revision: D31366066

Pulled By: z-a-f

fbshipit-source-id: bf4a74885be89d098df2d87e685795a2a64026c5
2021-10-05 11:45:37 -07:00
7ab2729481 [sparsity][refactor] Import factoring out (#58707)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58707

Minor refactor that changes the style of the imports.
This is done to avoid accidental circular dependencies.
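
The message doesn't show the diff, but a common shape for this kind of change is to import the module rather than names from it; a hedged sketch with illustrative names (`sparsifier` and `BaseSparsifier` are assumptions, not the actual diff):

```
# Before: eager name binding, which can trip a circular import while
# torch.ao.sparsity is still initializing:
#   from torch.ao.sparsity.sparsifier import BaseSparsifier

# After: import the module; attribute lookup is deferred to use time.
from torch.ao.sparsity import sparsifier

def make_default_sparsifier():
    # resolved only when called, after package init has completed
    return sparsifier.BaseSparsifier()
```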

Test Plan:
```
python test/test_ao_sparsity.py
```
Imported from OSS

Differential Revision: D28970961

Reviewed By: raghuramank100

Pulled By: z-a-f

fbshipit-source-id: c312742f5e218c435a1a643532f5842116bfcfff
2021-07-02 16:32:39 -07:00
b0fd3ca542 [sparse] Add the AO namespace to torch (#58703)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58703

Test Plan: Imported from OSS

Reviewed By: jbschlosser

Differential Revision: D28970962

Pulled By: z-a-f

fbshipit-source-id: 0d0f62111a0883af4143a933292dfaaf8fae220d
2021-06-09 19:47:21 -07:00
375687839e [sparsity] Moving the sparsity python files to OSS (#56617)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56617

This migrates the sparsity code to open source.

Test Plan: `buck test mode/opt //caffe2/test:ao`

Reviewed By: raghuramank100

Differential Revision: D27812207

fbshipit-source-id: cc87d9d2b486269901a4ad9b483615741a1cd712
2021-04-22 14:07:31 -07:00