Commit Graph

221 Commits

Author SHA1 Message Date
c914ca7577 [quant][be] Add TestPT2ERepresentation test case (#108923)
Summary:
att

Test Plan:
python test/test_quantization.py TestPT2ERepresentation
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108923
Approved by: https://github.com/andrewor14
2023-09-14 02:01:38 +00:00
8b34fa5e9b add basic cuda support for float8 dtypes (#105807)
Summary:

Ensures that creating tensors, copying, filling with zeros, and checking for NaN work on CUDA for the `float8` dtypes. This should be enough for float8 emulation on CUDA.

Note that I skipped the mul test - it's less trivial to add (we need a new C++ macro), and there is no use case for it yet. We can follow up on that in the future.

Test Plan:

```
python test/test_quantization.py TestFloat8Dtype
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105807
Approved by: https://github.com/ezyang, https://github.com/jerryzh168, https://github.com/albanD
2023-07-25 03:43:36 +00:00
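The float8 emulation above can be illustrated outside PyTorch: the `e5m2` format is essentially a float16 with the low 8 mantissa bits dropped. A minimal sketch (simple truncation rather than the round-to-nearest a real kernel would use; the helper names are invented):

```python
import struct

def to_float8_e5m2_bits(x: float) -> int:
    # hypothetical helper: pack to IEEE half (1 sign, 5 exp, 10 mantissa bits),
    # then truncate to the top byte (1 sign, 5 exp, 2 mantissa bits)
    (half_bits,) = struct.unpack("<H", struct.pack("<e", x))
    return half_bits >> 8

def from_float8_e5m2_bits(b: int) -> float:
    # widen back to float16 by zero-filling the dropped mantissa bits
    (x,) = struct.unpack("<e", struct.pack("<H", b << 8))
    return x

print(from_float8_e5m2_bits(to_float8_e5m2_bits(1.5)))  # 1.5 survives the round trip
```

Values representable in 2 mantissa bits (like 1.5) round-trip exactly; everything else loses precision, which is the point of testing create/copy/fill/NaN first.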
fa6be2fa6f [Quant][PT2E] Remove x86 inductor pt2e backend config (#105039)
**Summary**
For the Quantization PT2E path, we recommend using `X86InductorQuantizer` instead of the `x86_inductor_pt2e_backend_config` backend config. This PR removes `x86_inductor_pt2e_backend_config` and the relevant tests.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105039
Approved by: https://github.com/jgong5, https://github.com/jerryzh168
2023-07-19 23:18:29 +00:00
73e1455327 [BE] Enable ruff's UP rules and autoformat test/ (#105434)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105434
Approved by: https://github.com/albanD
2023-07-19 20:36:06 +00:00
dbc8eb2a8f [Quant][PT2E]Enable x86 inductor quantizer (#98730)
**Summary**

- Enable `X86InductorQuantizer` basics.
- Recipe to annotate conv2d is added.

**Test Plan**
```
python -u -m pytest -s -v test_x86inductor_quantizer.py -k TestQuantizePT2EX86Inductor
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98730
Approved by: https://github.com/jgong5, https://github.com/jerryzh168
2023-06-17 06:10:23 +00:00
0cd155b042 [reland][quant][pt2e] Annotate GRU module (#103358) (#103526)
Summary:

We use the module partition API to identify the GRU submodule and annotate all necessary patterns.

Test Plan: buck2 test mode/opt caffe2/test:quantization_pt2e -- 'caffe2/test:quantization_pt2e'

Differential Revision: D46689428

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103526
Approved by: https://github.com/andrewor14
2023-06-13 23:43:10 +00:00
13777e3391 Revert "[quant][pt2e] Annotate GRU module (#103358)"
This reverts commit 23892d8ee44c33abafe9b96ccb788033ffbc63ad.

Reverted https://github.com/pytorch/pytorch/pull/103358 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/103358#issuecomment-1588729657))
2023-06-13 07:45:40 +00:00
23892d8ee4 [quant][pt2e] Annotate GRU module (#103358)
Summary: we use the module partition API to identify the GRU submodule and annotate all necessary patterns.

Test Plan: buck2 test mode/opt caffe2/test:quantization_pt2e -- 'caffe2/test:quantization_pt2e'

Reviewed By: kimishpatel

Differential Revision: D46384329

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103358
Approved by: https://github.com/HDCharles
2023-06-13 04:10:13 +00:00
a1142053f0 [reland][quant][test] Fix broken PT2 import, add warnings (#102819)
Summary:
We are currently silently skipping all PT2 quantization
tests due to a recent typo. This commit fixes this and also adds
warnings so it'll be easier to debug similar issues in the future.

Test Plan: python test/test_quantization.py

Differential Revision: D46383546

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102819
Approved by: https://github.com/jerryzh168
2023-06-02 22:35:30 +00:00
8b03a59e4d Revert "[quant][test] Fix broken PT2 import, add warnings (#102644)"
This reverts commit f18b9f86ba1343270d790d2b66e1903af1a7df5c.

Reverted https://github.com/pytorch/pytorch/pull/102644 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/102644#issuecomment-1572818537))
2023-06-01 21:36:27 +00:00
f18b9f86ba [quant][test] Fix broken PT2 import, add warnings (#102644)
Summary:
We are currently silently skipping all PT2 quantization
tests due to a recent typo. This commit fixes this and also adds
warnings so it'll be easier to debug similar issues in the future.

Test Plan: python test/test_quantization.py

Differential Revision: D46329480

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102644
Approved by: https://github.com/jerryzh168
2023-06-01 19:02:36 +00:00
4cb6add471 [PT2][Quant] Use module partition for fused patterns (#102394)
This diff introduces utility `find_sequential_partitions`.
This utility allows one to specify a sequential pattern of
nn.Module/nn.functional ops and returns a list. Each item in the list
contains a List[SourcePartition] that represents sequentially connected
partitions matching the requested pattern.
For example `find_sequential_partitions(model, [nn.Conv2d, nn.ReLU])` will find
all nn.Conv2d and nn.ReLU partitions that are sequentially connected.

Furthermore, move to using `find_sequential_partitions` for conv_bn/conv_bn_relu
for QAT.

Differential Revision: [D45948057](https://our.internmc.facebook.com/intern/diff/D45948057/)

**NOTE FOR REVIEWERS**: This PR has internal Meta-specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D45948057/)!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102394
Approved by: https://github.com/jerryzh168
2023-05-28 05:29:16 +00:00
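As an illustration of the idea only (the real API matches `SourcePartition`s in an FX graph, not strings), the sequential-pattern search can be sketched over a flat list of op names:

```python
def find_sequential_matches(ops, pattern):
    # toy stand-in: return index tuples where `pattern` occurs as a
    # contiguous run of ops, analogous to sequentially connected partitions
    matches = []
    for i in range(len(ops) - len(pattern) + 1):
        if ops[i:i + len(pattern)] == pattern:
            matches.append(tuple(range(i, i + len(pattern))))
    return matches

graph_ops = ["conv2d", "relu", "linear", "conv2d", "relu"]
print(find_sequential_matches(graph_ops, ["conv2d", "relu"]))  # [(0, 1), (3, 4)]
```

The real utility must additionally check dataflow edges between partitions, since a graph is not a flat sequence.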
c97dd8e134 Fix the pt2e UT path after refactor (#99402)
**Summary**
After https://github.com/pytorch/pytorch/pull/99064 and https://github.com/pytorch/pytorch/pull/99065 merged, the pt2e UT path changed, so we also need to change the module path in `test/test_quantization.py`. Then we can run these tests from the top-level test directory.

**Test Plan**
```
cd test && python -u -m pytest test_quantization.py -k TestQuantizePT2E
cd test && python -u -m pytest test_quantization.py -k TestQuantizePT2EModels
cd test && python -u -m pytest test_quantization.py -k TestQuantizePT2EFX
cd test && python -u -m pytest test_quantization.py -k TestQuantizePT2EFXX86Inductor
cd test && python -u -m pytest test_quantization.py -k TestQuantizePT2EFXModels
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99402
Approved by: https://github.com/jerryzh168
2023-04-18 10:48:52 +00:00
a6d8c70933 Init quantization backend config for inductor (#96476)
**Summary**
Initialize the backend config file with quantization recipes for the quantization 2.0 inductor path. In this PR, we only add the recipes for `convolution` and `convolution_relu`.

**Test Plan**
```
clear && python -m pytest test_quantization.py -k test_inductor_backend_config_conv
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96476
Approved by: https://github.com/jgong5, https://github.com/EikanWang, https://github.com/jerryzh168
2023-03-22 07:56:56 +00:00
dc70e8175f Add various uninterpreted bit tensor data types (try 2) (#95860)
Summary:

This is a retry of https://github.com/pytorch/pytorch/pull/94992, which was reverted due to CI issues.

This PR adds a set of uninterpreted data types to PyTorch which can be used to implement experimental functionality out of core (think fp8, int4, int16 quant, etc.).

@bypass-github-export-checks

Test Plan:

```
python test/test_quantization.py -k TestBits
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95860
Approved by: https://github.com/atalman
2023-03-04 03:35:59 +00:00
3bafecf719 Revert "Add various uninterpreted bit tensor data types (#94992)"
This reverts commit 9dbfca7840680ccd8d43f3e12594420ab9cd82e4.

Reverted https://github.com/pytorch/pytorch/pull/94992 on behalf of https://github.com/atalman due to breaks libtorch windows nightly builds see: https://github.com/pytorch/pytorch/pull/95406
2023-02-23 23:54:23 +00:00
9dbfca7840 Add various uninterpreted bit tensor data types (#94992)
Summary:

This PR adds a set of uninterpreted data types to PyTorch which can be used to implement experimental functionality out of core (think fp8, int4, int16 quant, etc.).

Note: this is a copy-paste of https://github.com/pytorch/pytorch/pull/89990 with a bug fix for clang9; it was easier to just put up another PR since I'm not sure how commandeering works with Meta-only changes.

@bypass-github-export-checks

Test Plan:

```
python test/test_quantization.py -k TestBits
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94992
Approved by: https://github.com/angelayi
2023-02-18 00:04:30 +00:00
8fa66a6337 [quant][pt2e] Add a test to confirm we can set qconfig according to module_name (#91977)
Summary:
att

Test Plan:
python test/test_quantization.py TestQuantizePT2E.test_qconfig_none

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91977
Approved by: https://github.com/jcaip
2023-01-12 21:59:02 +00:00
f7b384cc46 [reland][quant][pt2e] Add early prototype top level quantize_pt2e APIs (#91035)
Summary:
This PR introduces the top level APIs for quantization support in PyTorch 2.0 Export stack
* torch.ao.quantization.quantize_pt2e.prepare_pt2e
Takes a model that is captured by the PyTorch 2.0 export (torchdynamo full graph mode) and prepares the model for calibration
for post training quantization

* torch.ao.quantization.quantize_pt2e.convert_pt2e
Takes a calibrated model and converts that to a reference quantized model that can be lowered later to quantized operator libraries or delegation modules

Also added a backend config for the qnnpack_pt2e backend:
* torch.ao.quantization.backend_config.get_qnnpack_pt2e_backend_config

Note: everything related to quantize_pt2e is experimental (prototype), and we don't have any backward-compatibility (BC) guarantees.

Test Plan:
python test/test_quantization.py TestQuantizePT2EModels

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91035
Approved by: https://github.com/HDCharles
2022-12-17 02:15:53 +00:00
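The prepare/calibrate/convert flow described above boils down to observers recording value ranges during calibration and then turning them into quantization parameters. A toy min/max observer (illustrative only, not the torch observer API) shows the arithmetic:

```python
class ToyMinMaxObserver:
    # illustrative stand-in for the observers prepare_pt2e inserts
    def __init__(self):
        self.min_val = float("inf")
        self.max_val = float("-inf")

    def observe(self, values):
        # calibration: track the running range of values seen
        self.min_val = min(self.min_val, min(values))
        self.max_val = max(self.max_val, max(values))

    def calculate_qparams(self, qmin=0, qmax=255):
        # affine quantization: real = scale * (quant - zero_point)
        scale = (self.max_val - self.min_val) / (qmax - qmin)
        zero_point = round(qmin - self.min_val / scale)
        return scale, zero_point

obs = ToyMinMaxObserver()
obs.observe([-1.0, 0.25, 1.0])   # one "calibration" pass
scale, zp = obs.calculate_qparams()
```

`convert_pt2e` then bakes such parameters into a reference quantized graph; the class above is just the core arithmetic, not the API.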
ad1b04c4a9 Revert "[reland][quant][pt2e] Add early prototype top level quantize_pt2e APIs (#90971)"
This reverts commit 7dd5e554971411cbb50fc2eb157057c1e8a0de63.

Reverted https://github.com/pytorch/pytorch/pull/90971 on behalf of https://github.com/ezyang due to still broke tons of master jobs sorry
2022-12-16 09:29:39 +00:00
7dd5e55497 [reland][quant][pt2e] Add early prototype top level quantize_pt2e APIs (#90971)
Summary:
This PR introduces the top level APIs for quantization support in PyTorch 2.0 Export stack
* torch.ao.quantization.quantize_pt2e.prepare_pt2e
Takes a model that is captured by the PyTorch 2.0 export (torchdynamo full graph mode) and prepares the model for calibration
for post training quantization

* torch.ao.quantization.quantize_pt2e.convert_pt2e
Takes a calibrated model and converts that to a reference quantized model that can be lowered later to quantized operator libraries or delegation modules

Also added a backend config for the qnnpack_pt2e backend:
* torch.ao.quantization.backend_config.get_qnnpack_pt2e_backend_config

Note: everything related to quantize_pt2e is experimental (prototype), and we don't have any backward-compatibility (BC) guarantees.

Test Plan:
python test/test_quantization.py TestQuantizePT2EModels

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90971
Approved by: https://github.com/HDCharles
2022-12-16 06:24:28 +00:00
9c912c7dd0 Revert "[quant][pt2e] Add early prototype top level quantize_pt2e APIs (#90802)"
This reverts commit a66af1feba90cc64381bec45b0aa20ec778c92c5.

Reverted https://github.com/pytorch/pytorch/pull/90802 on behalf of https://github.com/malfet due to somehow broke test_resnet18 (quantization.fx.test_quantize_pt2e.TestQuantizePT2EModels), see a66af1feba
2022-12-15 23:28:21 +00:00
a66af1feba [quant][pt2e] Add early prototype top level quantize_pt2e APIs (#90802)
Summary:
This PR introduces the top level APIs for quantization support in PyTorch 2.0 Export stack
* torch.ao.quantization.quantize_pt2e.prepare_pt2e
Takes a model that is captured by the PyTorch 2.0 export (torchdynamo full graph mode) and prepares the model for calibration
for post training quantization

* torch.ao.quantization.quantize_pt2e.convert_pt2e
Takes a calibrated model and converts that to a reference quantized model that can be lowered later to quantized operator libraries or delegation modules

Also added a backend config for the qnnpack_pt2e backend:
* torch.ao.quantization.backend_config.get_qnnpack_pt2e_backend_config

Note: everything related to quantize_pt2e is experimental (prototype), and we don't have any backward-compatibility (BC) guarantees.

Test Plan:
python test/test_quantization.py TestQuantizePT2EModels

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90802
Approved by: https://github.com/qihqi
2022-12-15 21:50:29 +00:00
bdb14238ec [Reland][ONNX] Move all torch.onnx.export related tests to test/onnx (#87292)
Moving torch.onnx.export-related tests to test/onnx consolidates ONNX tests onto the same CI machine, so the testing environment can be better managed.

Fixes https://github.com/pytorch/pytorch/issues/87320
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87292
Approved by: https://github.com/thiagocrepaldi, https://github.com/BowenBao, https://github.com/kit1980, https://github.com/malfet
2022-11-01 14:22:46 +00:00
237316aa1d PNP: early FX numeric suite tool to quantize each layer N times (#80521)
Summary:

This PR is an early prototype of a tool to quantize each layer of a model
N times, with N qconfigs each. We follow the design agreed upon in
https://fburl.com/gdoc/e1gaq3ih.

Current API:

```
m = M().eval()
example_input = (torch.randn(2, 2),)
qconfig_mappings = [
    QConfigMapping().set_global(torch.quantization.default_qconfig),
    QConfigMapping().set_global(torch.quantization.default_dynamic_qconfig),
]
backend_config = get_native_backend_config()

msp = prepare_n_shadows_model(
    m, example_input, qconfig_mappings, backend_config)

for _ in range(2):
    msp(*example_input)

msq = convert_n_shadows_model(msp)
msq(*example_input)

results = extract_results_n_shadows_model(msq)
print_comparisons_n_shadows_model(results)

# example output

subgraph_idx    ref_node_name      best_idx        1        2
--------------  ---------------  ----------  -------  -------
subgraph_0      fc1                       2  42.0834  42.6279
subgraph_1      fc2                       2  43.7259  50.0593
```

Test plan:

```
python test/test_quantization.py -k test_n_shadows
```

Differential Revision: [D37650332](https://our.internmc.facebook.com/intern/diff/D37650332)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80521
Approved by: https://github.com/jerryzh168, https://github.com/andrewor14
2022-10-06 02:30:45 +00:00
zaf
d542aab5c1 [quant][ao_migration] nn.intrinsic migration to ao (#84842)
All quantization-related modules are being migrated to `torch.ao`. This migrates the `nn.intrinsic.modules`. Please, see the [tracker](https://github.com/pytorch/pytorch/issues/81667) for the timeline.

Differential Revision: [D39419733](https://our.internmc.facebook.com/intern/diff/D39419733/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D39419733/)!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84842
Approved by: https://github.com/jerryzh168
2022-09-28 23:54:29 +00:00
58170fb8aa Remove DBR quantization from the codebase (#83642)
Summary:

DBR quantization is a no-go for now because it does not align well with
PyTorch 2.0 plans and we do not want to build yet another tracing system.

Deleting it from the codebase for now since there are no plans to develop
this in the near future. We can bring it back at a later time if necessary.

Test plan:

CI

Differential Revision: [D38839556](https://our.internmc.facebook.com/intern/diff/D38839556)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83642
Approved by: https://github.com/andrewor14, https://github.com/jerryzh168
2022-08-23 15:18:40 +00:00
zaf
78c8a0d752 [quant][ao_migration] torch.nn.quantized.functional → torch.ao.nn.quantized.functional (#78712)
Context: to avoid cluttering the `torch.nn` namespace,
the quantized modules namespace is moved to `torch.ao.nn`.

The list of the `nn.quantized` files that are being migrated:

- [ ] `torch.nn.quantized` → `torch.ao.nn.quantized`
  - [X] [Current PR] `torch.nn.quantized.functional` → `torch.ao.nn.quantized.functional`
  - [ ] `torch.nn.quantized.modules` → `torch.ao.nn.quantized.modules`
  - [ ] `torch.nn.quantized.dynamic` → `torch.ao.nn.quantized.dynamic`
  - [ ] `torch.nn.quantized._reference` → `torch.ao.nn.quantized._reference`
- [ ] `torch.nn.quantizable` → `torch.ao.nn.quantizable`
- [ ] `torch.nn.qat` → `torch.ao.nn.qat`
  - [ ] `torch.nn.qat.modules` → `torch.ao.nn.qat.modules`
  - [ ] `torch.nn.qat.dynamic` → `torch.ao.nn.qat.dynamic`
- [ ] `torch.nn.intrinsic` → `torch.ao.nn.intrinsic`
  - [ ] `torch.nn.intrinsic.modules` → `torch.ao.nn.intrinsic.modules`
  - [ ] `torch.nn.intrinsic.qat` → `torch.ao.nn.intrinsic.qat`
  - [ ] `torch.nn.intrinsic.quantized` → `torch.ao.nn.intrinsic.quantized`
    - [ ] `torch.nn.intrinsic.quantized.modules` → `torch.ao.nn.intrinsic.quantized.modules`
    - [ ] `torch.nn.intrinsic.quantized.dynamic` → `torch.ao.nn.intrinsic.quantized.dynamic`

Majority of the files are just moved to the new location.
However, specific files need to be double checked:

- [Documentation](docs/source/quantization-support.rst) @vkuzo
- [Public API test list](test/allowlist_for_publicAPI.json) @peterbell10

Differential Revision: [D36792967](https://our.internmc.facebook.com/intern/diff/D36792967/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D36792967/)!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78712
Approved by: https://github.com/jerryzh168
2022-08-18 17:51:54 +00:00
3f612b58be fix quantization/core/test_docs for Buck2 (#83341)
Summary:
We extract the test into its own target, fixing the relative path to the
quantization docs. This allows us to find the docs with a simpler
implementation.

Test Plan: Tested locally with buck1 and buck2.

Differential Revision: D38662169

Pull Request resolved: https://github.com/pytorch/pytorch/pull/83341
Approved by: https://github.com/huydhn, https://github.com/seemethere, https://github.com/ZainRizvi
2022-08-18 13:03:00 +00:00
194255bb56 [Quant][fx] Implement BackendConfig (part 1) (#81469)
Summary: Following https://github.com/pytorch/pytorch/pull/78452
and https://github.com/pytorch/pytorch/pull/79066, this commit
is part 1 of the broader effort to replace `backend_config_dict`
with a python config object, a more formal and robust API that
leads to better user experience. Note that there is no change in
behavior in this commit by itself. A future commit (part 2) will
replace all existing usages of `backend_config_dict` with the
`BackendConfig` object added in this commit.

Test Plan:
python test/test_quantization.py TestBackendConfig

Reviewers: jerryzh168

Subscribers: jerryzh168
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81469
Approved by: https://github.com/jerryzh168
2022-07-24 00:34:48 +00:00
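The motivation for a config object over `backend_config_dict` can be sketched with invented field names (the real `BackendConfig` API in torch.ao.quantization differs):

```python
from dataclasses import dataclass, field

@dataclass
class ToyBackendPatternConfig:
    # hypothetical fields, for illustration only
    pattern: str
    dtype: str = "qint8"

@dataclass
class ToyBackendConfig:
    name: str
    pattern_configs: list = field(default_factory=list)

    def set_backend_pattern_config(self, cfg):
        # chainable setter: a misspelled attribute now fails loudly,
        # unlike a misspelled key in a raw dict
        self.pattern_configs.append(cfg)
        return self

cfg = ToyBackendConfig("toy").set_backend_pattern_config(
    ToyBackendPatternConfig(pattern="conv2d")
)
```

Typed fields with defaults and chainable setters are what "more formal and robust API" means in practice here.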
d0ce1fbbe2 [ao] Created Skeleton for ModelReportVisualizer class (#81523)
Summary: This introduces the skeleton for the ModelReportVisualizer
class. This class helps visualize the information generated by the
ModelReport class's `generate_report()` output. It aims to provide
visualizations in table, plot (line graph), and histogram views.

This also introduces an empty test class for testing visualizations. As
implementations start occurring for this class, tests will also be
appropriately added.

This includes the high level descriptions for each of the methods as
well. Expected use cases will be added to the class description in a
future commit as that gets finalized.

Test Plan: python test/test_quantization.py TestFxModelReportVisualizer

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81523
Approved by: https://github.com/andrewor14
2022-07-20 02:39:14 +00:00
e5162dcfa7 [ao] Added framework for ModelReport Outlier Detector (#80743)
Summary: This adds the class framework for the ModelReport
OutlierDetector. This detector will be in charge of looking at
activation data and figuring out whether there are significant outliers
present in them. It will average this data across batches to make a
recommendation / warning if significant outliers are found.

This commit contains just the class framework and a base test class.
Implementations will follow in following commits.

Test Plan: python test/test_quantization.py TestFxDetectOutliers

Pull Request resolved: https://github.com/pytorch/pytorch/pull/80743
Approved by: https://github.com/HDCharles
2022-07-01 01:03:31 +00:00
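One way such an outlier check could work is a simple max-to-median magnitude ratio: a single huge activation stretches the quantization range and wastes bins for everything else. This is a toy heuristic with an invented threshold, not the detector's actual statistic:

```python
def has_significant_outliers(values, ratio_threshold=100.0):
    # toy heuristic: flag a batch whose extreme value dwarfs the
    # typical magnitude, since a huge range wastes quantization bins
    magnitudes = sorted(abs(v) for v in values)
    median = magnitudes[len(magnitudes) // 2]
    return median > 0 and magnitudes[-1] / median > ratio_threshold

print(has_significant_outliers([0.1, -0.2, 0.15, 500.0]))  # True
print(has_significant_outliers([0.1, -0.2, 0.15, 0.3]))    # False
```

Averaging the statistic across calibration batches, as the commit describes, would smooth out one-off spikes before warning.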
845021db2c [ao] Adds framework for InputWeightEqualization Detector (#79916)
Summary: This adds the framework (method signatures and descriptors) for
the InputWeightEqualization Detector. There is no code implementation
yet, so the test suite for this is a simple pass. This Detector will be used
to determine whether input weight equalization should be recommended.

Test Plan: python test/test_quantization.py TestFxDetectInputWeightEqualization

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79916
Approved by: https://github.com/HDCharles
2022-06-24 14:51:15 +00:00
ffdc5eebc7 [ao][docs] tests for quantization docs (#79923)
Summary: per https://github.com/pytorch/pytorch/issues/79135, the code
snippets in the docs don't run. This is a recurring problem since
previously there was no unit test to check that these code snippets
actually ran. This PR adds support for such a test, importing each
snippet as a string and evaluating it to make sure that it actually
runs. If a code snippet has user-defined code, you can pass in dummy
versions using global_inputs. Sometimes the imports of the code
snippets behave oddly, but you can pass those in too, as in
test_quantization_doc_custom, where nnq is passed in.

Test Plan: python test/test_quantization.py TestQuantizationDocs
also see https://github.com/pytorch/pytorch/pull/79994 to see what shows up in CI when the docs get broken

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79923
Approved by: https://github.com/z-a-f, https://github.com/vspenubarthi
2022-06-23 20:50:31 +00:00
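The snippet-exec idea can be sketched in a few lines; this is a simplified stand-in for the actual harness, and the regex and `global_inputs` handling here are assumptions:

```python
import re
import textwrap

def run_doc_snippets(doc, global_inputs=None):
    # toy version: pull each fenced code block out of a docs string
    # and exec it, seeding the namespace with caller-provided dummies
    env = dict(global_inputs or {})
    fence = "`" * 3  # built programmatically to avoid a literal fence here
    pattern = fence + r"(?:python)?\n(.*?)" + fence
    for snippet in re.findall(pattern, doc, re.S):
        exec(textwrap.dedent(snippet), env)
    return env

fence = "`" * 3
doc = "Some docs.\n" + fence + "python\ny = x + 1\n" + fence + "\n"
env = run_doc_snippets(doc, global_inputs={"x": 41})
print(env["y"])  # 42
```

A broken snippet raises inside `exec`, which is exactly how the unit test surfaces docs rot in CI.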
01720ae3b6 [ao] Added ModelReport class outline for Fx Graph Modules
Summary: The ModelReport class in model_report.py combines the
functionality of the detectors and the ModelReportObserver. It creates
an end-to-end system where a user can pass in a prepared Graph Model to
insert the ModelReportObservers; then, after the user calibrates their
model, the calibrated model can be used by the ModelReport class
to generate reports based on what the user wished to gather information
on.

This contains the init method and the signatures and docs for each
of the proposed helper functions.

This also addresses and fixes a revert issue.

Test Plan: python test/test_quantization.py TestFxModelReportClass

Pull Request resolved: https://github.com/pytorch/pytorch/pull/80052

Approved by: https://github.com/HDCharles
2022-06-22 21:12:58 +00:00
ea6fa8dc95 Revert "[ao] Added ModelReport class outline for Fx Graph Modules"
This reverts commit 0f95e1846c1e8da2e0524243bbd6761434e43b5a.

Reverted https://github.com/pytorch/pytorch/pull/79595 on behalf of https://github.com/malfet due to Broke tests on MacOS, see 0f95e1846c
2022-06-22 12:43:07 +00:00
0f95e1846c [ao] Added ModelReport class outline for Fx Graph Modules
Summary: The ModelReport class in model_report.py combines the
functionality of the detectors and the ModelReportObserver. It creates
an end-to-end system where a user can pass in a prepared Graph Model to
insert the ModelReportObservers; then, after the user calibrates their
model, the calibrated model can be used by the ModelReport class
to generate reports based on what the user wished to gather information
on.

This contains the init method and the signatures and docs for each
of the proposed helper functions.

Test Plan: python test/test_quantization.py TestFxModelReportClass

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79595

Approved by: https://github.com/andrewor14
2022-06-22 02:47:24 +00:00
38952d9350 [ao] Added function to inform dynamic vs static appropriate
Summary: The _detect_dynamic_vs_static function takes in a prepared fx
graph model that already has ModelReportObservers built into it and
uses the collected information to determine whether the input and
output are stationary or non-stationary, providing feedback on whether
to make linear modules static or dynamic based on this information.

This PR will be followed up soon with another PR that will more
rigorously test the whole end-to-end performance of this system, which
is primarily how the function in this PR will be tested for
functionality; that is why this one only has 1 test.

Test Plan: python test/quantization/fx/test_model_report_fx.py TestModelReportDetectDynamicStatic

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79326

Approved by: https://github.com/HDCharles
2022-06-15 02:51:27 +00:00
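A stationarity heuristic of this kind could look like the following toy sketch; the drift statistic and threshold are invented for illustration, not what the PR implements:

```python
def recommend_linear_quantization(batch_ranges, drift_threshold=0.5):
    # toy heuristic: if the activation range drifts a lot across batches
    # (non-stationary), one static scale fits poorly, so prefer dynamic
    # quantization; otherwise static quantization is fine
    spans = [hi - lo for lo, hi in batch_ranges]
    drift = (max(spans) - min(spans)) / max(spans)
    return "dynamic" if drift > drift_threshold else "static"

print(recommend_linear_quantization([(-1.0, 1.0), (-1.1, 1.0), (-0.9, 1.0)]))  # static
print(recommend_linear_quantization([(-1.0, 1.0), (-6.0, 6.0)]))               # dynamic
```

Dynamic quantization recomputes scales per batch at runtime, which is why it tolerates the non-stationary case.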
8e05513152 [ao] Added ModelReportObserver to inform on dynamic vs static
Summary: The purpose of this is to add to the model report
functionality by creating an observer that will take a prepared fx
module and suggest whether static or dynamic quantization is more
appropriate. The tests for this have been written and included in the
location indicated by the Test Plan.

Test Plan: python test/quantization/fx/test_model_report_fx.py TestModelReportObserver

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79243

Approved by: https://github.com/jerryzh168, https://github.com/andrewor14
2022-06-14 19:08:40 +00:00
28c541776c [ao] Added fx model report per_channel detector
Summary: This code is meant to be a tool to help people get the most out
of their backend by hinting that they use per_channel quantization if it's
supported, which can help increase accuracy significantly. The code is
completed and ready to be reviewed.

Test Plan: test/quantization/fx/test_model_report_fx.py

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79104

Approved by: https://github.com/HDCharles
2022-06-10 08:09:59 +00:00
7ea5fa3dd4 [reland][quant] Add utility function get_fqn_to_example_inputs
Summary:
After https://github.com/pytorch/pytorch/pull/77608, `example_inputs` is a required input for `prepare_fx` and `prepare_qat_fx`.
This makes quantizing submodules harder, so we added this utility function to get a dictionary from fqn to submodule example_inputs.

Example Call:

```
example_inputs = (tensor0,)
get_fqn_to_example_inputs(m, example_inputs)
```

Example output:
```
{
   "linear1": (tensor1,),
   "linear2": (tensor2,),
   "sub": (tensor3,),
   "sub.linear1": (tensor4,),
   ...
}
```

Test Plan:
python test/test_quantization.py TestUtils

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78286

Approved by: https://github.com/dzdang
2022-05-25 23:31:51 +00:00
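For a purely sequential model, the utility's idea reduces to running the example input once and recording what each stage receives. A toy sketch (the real utility traverses nested submodules by fully qualified name, hence keys like "sub.linear1"):

```python
def fqn_to_example_inputs(pipeline, example_input):
    # toy stand-in: run the stages once, recording the args each one sees
    recorded = {}
    x = example_input
    for fqn, fn in pipeline:
        recorded[fqn] = (x,)
        x = fn(x)
    return recorded

stages = [("linear1", lambda x: x + 1), ("linear2", lambda x: x * 2)]
print(fqn_to_example_inputs(stages, 3))  # {'linear1': (3,), 'linear2': (4,)}
```

With each submodule's example inputs in hand, `prepare_fx` can then be called on any submodule independently.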
87148f2b59 Revert "[quant] Add utility function get_fqn_to_example_inputs"
This reverts commit 50a44fe461d5026e0aa69b95d7dc6e87d07cf3c7.

Reverted https://github.com/pytorch/pytorch/pull/78146 on behalf of https://github.com/suo due to as it broke master
2022-05-25 06:37:32 +00:00
50a44fe461 [quant] Add utility function get_fqn_to_example_inputs
Summary:
After https://github.com/pytorch/pytorch/pull/77608, `example_inputs` is a required input for `prepare_fx` and `prepare_qat_fx`.
This makes quantizing submodules harder, so we added this utility function to get a dictionary from fqn to submodule example_inputs.

Example Call:

```
example_inputs = (tensor0,)
get_fqn_to_example_inputs(m, example_inputs)
```

Example output:
```
{
   "linear1": (tensor1,),
   "linear2": (tensor2,),
   "sub": (tensor3,),
   "sub.linear1": (tensor4,),
   ...
}
```

Test Plan:
python test/test_quantization.py TestUtils

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78146

Approved by: https://github.com/vkuzo
2022-05-25 03:07:16 +00:00
81437e66c1 [quant][fx] Add RNN reference module (#73386)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73386

This PR adds support for RNN reference module, following https://github.com/pytorch/rfcs/blob/master/RFC-0019-Extending-PyTorch-Quantization-to-Custom-Backends.md
This includes: RNNCell, LSTMCell, GRUCell, LSTM

Test Plan:
will be tested in the lowering flow in a separate PR

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D34469445

fbshipit-source-id: 71a13d7d056f7aaccdd98fb477c8a3a38aecc249
(cherry picked from commit 0b10f0d127515556b677eae3150f026ac8cd9acd)
2022-03-02 10:30:37 +00:00
4e90fa6a8c dbr quant: break up test class into multiple classes (#70246)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70246

Breaks up the large `TestQuantizeDBR` test case into
1. `TestQuantizeDBRIndividualOps` for testing functionality of ops
2. `TestQuantizeDBRMultipleOps` for testing non-fusion interactions between ops
3. `TestQuantizeDBR` for everything else

We may need to refactor this more in the future, but this should
unblock things for the near future.

Test Plan:
```
python test/test_quantization.py TestQuantizeDBR
python test/test_quantization.py TestQuantizeDBRIndividualOps
python test/test_quantization.py TestQuantizeDBRMultipleOps
```

Reviewed By: jerryzh168

Differential Revision: D33255925

Pulled By: vkuzo

fbshipit-source-id: 82db1a644867e9303453cfedffed2d81d083c9cd
2022-01-05 06:36:41 -08:00
ef6f776e82 [quant][be] Cleanup test cases for eager mode workflow (#69880)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69880

This makes the test cases more standardized; in general, we would like to have
```
TestQuantizeEager,
TestQuantizeEagerOps,
TestQuantizeEagerModels,
```

but currently, since we have separate PTQ static, PTQ dynamic, and QAT static APIs, we have only partially
cleaned up the test cases; we can merge all of them later when we merge all the APIs.

Test Plan:
python test/test_quantization.py

Imported from OSS

Reviewed By: supriyar

Differential Revision: D33081418

fbshipit-source-id: fcb96559b76bbc51eb1b0625e0d4b193dbb37532
2021-12-16 17:47:30 -08:00
1940cc028e [quant][graphmode][fx] Fork subgraph_rewriter from torch.fx to quantization (#68228)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68228

Forking this for now so that we can make changes as we need; the changes can be merged back into torch.fx
later.

Test Plan:
```
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps
```

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D32537713

fbshipit-source-id: 326598d13645fcc28ef2c66baaac6a077b80fd0c
2021-11-24 10:49:05 -08:00
a6d862c50a [quant][graphmode][fx] Add support for weight and bias dtype in backend_config_dict (#68602)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68602

This PR adds support for configuring weight/bias dtype in backend_config_dict
and refactors the current code that checks when to insert observers.

Test Plan:
```
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps
```

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D32537712

fbshipit-source-id: 28eb7c61a8dcad8c1f3f6622d490a34cff0c59e2
2021-11-19 13:01:50 -08:00
7ee84ad321 Refactoring quantized op tests to combine test classes (#68282)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68282

Combined 3 Dynamic quantized op test classes into 1

Test Plan:
python test/test_quantization.py TestDynamicQuantizedOps

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D32402163

fbshipit-source-id: 696b7ef5d823632941dc7afc95161501445d0e18
2021-11-15 20:47:02 -08:00
09615cd0b0 Adding Dynamic Conv and ConvT ops/modules (#68176)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68176

It should be noted that for the modules, reduce_range is set to
true by default, in a similar fashion to linear_dynamic.

Test Plan:
python test/test_quantization.py TestDynamicQuantizedModule
python test/test_quantization.py TestDynamicQuantizedConv
python test/test_quantization.py TestQuantizedConv

Imported from OSS

Reviewed By: kimishpatel

Differential Revision: D32374003

fbshipit-source-id: 011562bd0f4d817387d53bb113df2600aa60a7a3
2021-11-15 16:42:25 -08:00