Commit Graph

32 Commits

3bf922a6ce Apply UFMT to low traffic torch modules (#106249)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106249
Approved by: https://github.com/Skylion007
2023-07-29 23:37:30 +00:00
1577c106dc torch.ao migration: numeric suite, eager and fx (#64817)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64817

This migrates `torch.quantization._numeric_suite` to `torch.ao.ns._numeric_suite`, and `torch.quantization._numeric_suite_fx` to `torch.ao.ns._numeric_suite_fx`.

1. move the files
```
# move eager mode
hg mv caffe2/torch/quantization/_numeric_suite.py caffe2/torch/ao/ns/
# move fx
hg mv caffe2/torch/quantization/_numeric_suite_fx.py caffe2/torch/ao/ns/
hg mv caffe2/torch/quantization/ns/* caffe2/torch/ao/ns/fx/
```

2. create new versions of `_numeric_suite.py` and `_numeric_suite_fx.py` with
imports

3. update all FB callsites
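
Example (an illustrative, torch-free sketch of the re-export pattern in step 2; the module and function names here are stand-ins, not the real paths):

```python
import sys
import types
from importlib import import_module

# Simulate the new home of the code (torch.ao.ns._numeric_suite in the
# real migration; a stand-in module name is used here).
new_mod = types.ModuleType("ao_ns_numeric_suite")
new_mod.compare_weights = lambda a, b: {"layer0": (a, b)}
sys.modules["ao_ns_numeric_suite"] = new_mod

# The old module becomes a thin shim that re-exports from the new home,
# so existing callsites keep working unchanged.
old_mod = types.ModuleType("quantization_numeric_suite")
old_mod.compare_weights = import_module("ao_ns_numeric_suite").compare_weights
sys.modules["quantization_numeric_suite"] = old_mod

# The old import path resolves to the very same function object.
import quantization_numeric_suite
assert quantization_numeric_suite.compare_weights is new_mod.compare_weights
```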

Test Plan: buck test mode/dev //caffe2/test:quantization

Reviewed By: z-a-f

Differential Revision: D30867538

fbshipit-source-id: 120ee830434ca490c1183a187a518eebcbbaf22c
2021-09-12 12:00:45 -07:00
b524a1101a ns for fx: add ref_node_target_type (#62685)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62685

Adds a `ref_node_target_type` field to hold the string type
of the base node. This is needed because in some cases
the previous node does not match ref_node (if we have observers,
or if we are logging inputs), and it is useful to know the type
of ref_node.
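
Example (a hypothetical logged entry; the layout loosely follows the NS result entries shown in later commits in this log, and only `ref_node_target_type` is the new field):

```python
# Hypothetical logged entry; values are made up for illustration.
entry = {
    "type": "node_output",
    "prev_node_name": "linear1_activation_post_process",  # an observer, not ref_node
    "ref_node_name": "linear1",
    "ref_node_target_type": "<class 'torch.nn.modules.linear.Linear'>",
}
# Even though an observer sits between ref_node and the logger, the
# type of the node being measured is now recoverable from the entry.
assert entry["prev_node_name"] != entry["ref_node_name"]
print(entry["ref_node_target_type"])
```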

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs
```

Imported from OSS

Reviewed By: hx89

Differential Revision: D30082947

fbshipit-source-id: 98ded7b25a5d8d5ea820e0ef62c3799b65c3fc77
2021-08-05 09:26:10 -07:00
72c943a2ac ns for fx: fix bug for user function in weight extraction (#62333)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62333

We incorrectly ignored any custom relationships the user specified
in the `extract_weights` API.  Fixing this and adding a test case.

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_user_defined_function
```

Imported from OSS

Reviewed By: hx89

Differential Revision: D29963502

fbshipit-source-id: 33ce3d4df1acb6298b6c7dcb6674015c8d14bdf4
2021-07-28 16:05:51 -07:00
04c95a0638 ns for fx: expose hook to define custom weight extraction functions (#62047)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62047

Adds a hook for users to define a weight extraction function for a
custom type.

Example usage:
```
op_to_type_to_weight_extraction_fn = \
    get_op_to_type_to_weight_extraction_fn()
op_to_type_to_weight_extraction_fn['call_function'][_wrapped_linear] = \
    torch.quantization.ns.weight_utils.get_linear_fun_weight

results = extract_weights_impl(
    'a', m1, 'b', m2,
    op_to_type_to_weight_extraction_fn=op_to_type_to_weight_extraction_fn)
```

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_user_defined_function
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D29853625

fbshipit-source-id: 183916ef54ba303bc818e0eba00b52e33c4633ad
2021-07-23 09:31:37 -07:00
eaba16d665 ns for fx: change weight extraction to direct mapping (#62038)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62038

Updates the logic to extract weights from nodes to use a
direct mapping from type to weight extraction function.

This is needed for a future PR which will allow users to
specify custom weight extraction functions for user defined
types.
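
Example (a minimal sketch of the direct-mapping idea in plain Python; the names and the dict-based stand-in for modules are illustrative, not the actual PyTorch internals):

```python
# Map node type -> weight extraction function. Users could later
# register entries for custom types (the goal of the follow-up PR).
def get_conv_weight(mod):
    return mod["weight"]

def get_linear_weight(mod):
    return mod["weight"]

extraction_fns = {
    "conv2d": get_conv_weight,
    "linear": get_linear_weight,
}

def extract_weight(node_type, mod):
    # Direct dict dispatch replaces a chain of if/elif type checks.
    fn = extraction_fns.get(node_type)
    return fn(mod) if fn is not None else None

assert extract_weight("linear", {"weight": [1.0, 2.0]}) == [1.0, 2.0]
assert extract_weight("unknown_op", {}) is None
```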

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs
python test/test_quantization.py TestFXNumericSuiteCoreAPIsModels
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D29853627

fbshipit-source-id: 3ef90ef4bd7b28f6316c0af215a2bd3ff8a2aeca
2021-07-23 09:30:08 -07:00
2a2bc1fc8a ns for fx: add fqn to results, when present (#61377)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61377

Both the quantization tracer and the NS tracer record
`_node_name_to_scope`, which contains the mapping from
node name to FQN.

This PR adds the FQN information to the NS results, so that it is
more convenient for users to attribute a NS result to the corresponding
module in their model.
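
Example (a toy stand-in for `_node_name_to_scope`; the entry layout loosely follows the NS results shown elsewhere in this log, and the values are made up):

```python
# node name -> (fully qualified module name, module type), as recorded
# by the tracer; values here are invented for illustration.
node_name_to_scope = {"linear1": ("features.fc1", "torch.nn.Linear")}

entry = {"ref_node_name": "linear1", "values": [0.12]}
scope = node_name_to_scope.get(entry["ref_node_name"])
if scope is not None:
    # The "fqn" key lets users map a result back to their model.
    entry["fqn"] = scope[0]

assert entry["fqn"] == "features.fc1"
```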

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_extract_weights_fqn
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_match_activations_fqn
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_shadow_activations_fqn
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D29600349

fbshipit-source-id: df489e03daff97dd380f59c83ffdc2b0012a0a53
2021-07-17 20:53:41 -07:00
4acd14da02 ns for fx: preserve observers and fake_quants through passes (#61323)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61323

Before this PR, all observers and fake quants were silently removed
when adding loggers with NS. This is problematic for QAT models because
we need the fake quants to run in order to properly capture intermediate
outputs.

This PR fixes the issue by preserving the observers throughout
the passes which add loggers.  In detail:
* for each quantization module or fusion, add additional patterns with that fusion and an observer/fake_quant at the end
* remove the places in the logger model creation code which removed observers
* add unit testing that QAT numerics do not change after adding loggers

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_loggers_preserve_qat_numerics
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_shadow_loggers_preserve_qat_numerics
```

Imported from OSS

Reviewed By: hx89

Differential Revision: D29600351

fbshipit-source-id: 5f25118b79eb47860c49bca882de6a8eae7a4456
2021-07-17 20:53:33 -07:00
4ddb2b43b7 ns for fx: expose function to add comparisons between logged values (#60311)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60311

Adds a user-facing utility function to the FX Numeric Suite Core APIs
for comparing the values extracted by the loggers to each other.
This is needed for any kind of analysis, so it is useful to
provide an example implementation.

Example:

```
# code

m = nn.Sequential(nn.Conv2d(1, 1, 1), nn.Conv2d(1, 1, 1)).eval()
qconfig_dict = {'': torch.quantization.default_qconfig}
mp = torch.quantization.quantize_fx.prepare_fx(m, qconfig_dict)
mq = torch.quantization.quantize_fx.convert_fx(copy.deepcopy(mp))
results = extract_weights('fp32', mp, 'int8', mq)
extend_logger_results_with_comparison(
    results, 'fp32', 'int8', compute_sqnr, 'sqnr_int8_vs_fp32')

print(results)

# results

{
  '_1': {'weight': {
    'fp32': [
      {'type': 'weight', 'values': [tensor([[[[-0.3284]]]])], 'prev_node_name': '_1', 'prev_node_target_type': "<class 'torch.nn.modules.conv.Conv2d'>", 'ref_node_name': '_1', 'index_within_arg': 0, 'index_of_arg': 0}
    ],
    'int8': [
      {'type': 'weight', 'values': [tensor([[[[-0.3297]]]], size=(1, 1, 1, 1), dtype=torch.qint8,
       quantization_scheme=torch.per_tensor_affine, scale=0.002575645223259926,
       zero_point=0)], 'prev_node_name': '_1', 'prev_node_target_type': "<class 'torch.nn.quantized.modules.conv.Conv2d'>", 'ref_node_name': '_1', 'index_within_arg': 0, 'index_of_arg': 0, 'sqnr_int8_vs_fp32': [tensor(48.1308)]}
    ]
  }},
  '_0': {'weight': {
    'fp32': [{'type': 'weight', 'values': [tensor([[[[0.5205]]]])], 'prev_node_name': '_0', 'prev_node_target_type': "<class 'torch.nn.modules.conv.Conv2d'>", 'ref_node_name': '_0', 'index_within_arg': 0, 'index_of_arg': 0}],
    'int8': [{'type': 'weight', 'values': [tensor([[[[0.5184]]]], size=(1, 1, 1, 1), dtype=torch.qint8,
       quantization_scheme=torch.per_tensor_affine, scale=0.004082232713699341,
       zero_point=0)], 'prev_node_name': '_0', 'prev_node_target_type': "<class 'torch.nn.quantized.modules.conv.Conv2d'>", 'ref_node_name': '_0', 'index_within_arg': 0, 'index_of_arg': 0, 'sqnr_int8_vs_fp32': [tensor(48.1309)]}]
  }}
}

```

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_extend_logger_results_with_comparison
```

Imported from OSS

Reviewed By: hx89

Differential Revision: D29244715

fbshipit-source-id: a5547b449ea54e046c752119559be49bd738beea
2021-06-24 13:42:16 -07:00
31fe1c1323 ns for fx: rekey results by model node names (#60305)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60305

Adjusts the NS for FX weight and activation extraction APIs
to require a model name, and rekeys the results of these APIs
to use the node names of the specified model as layer keys.

For example, before

```
# API call
results = ns.extract_logger_info(
  model_a, model_b, ns.OutputLogger)

# results
{'base_op_1_0': {'node_output':
  {'model_a': [{'ref_node_name': 'linear1', ...}]}}}
```

and after

```
# API call
results = ns.extract_logger_info(
  model_a, model_b, ns.OutputLogger, 'model_b_name')

# results
# note: instead of `base_op_1_0`, the layer is named `linear1`
{'linear1': {'node_output':
  {'model_a': [{'ref_node_name': 'linear1', ...}]}}}
```

Note: we cannot use these names while collecting data because
node names are not guaranteed to be consistent across graphs.
This is why we only rekey as the very last step.
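
Example (an illustrative sketch of the rekeying step; the helper name is invented, and the result layout mirrors the example above):

```python
def rekey_by_node_name(results, model_name):
    # Re-key results from internal match names to the ref_node_name
    # found in the specified model's entries.
    out = {}
    for match_name, by_type in results.items():
        # take the layer name from the named model's first entry
        entries = next(iter(by_type.values()))[model_name]
        out[entries[0]["ref_node_name"]] = by_type
    return out

results = {"base_op_1_0": {"node_output": {"model_b": [{"ref_node_name": "linear1"}]}}}
rekeyed = rekey_by_node_name(results, "model_b")
assert "linear1" in rekeyed
```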

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_layer_names
```

Imported from OSS

Reviewed By: hx89

Differential Revision: D29243045

fbshipit-source-id: d39ecdfdd18b07291e3ecefed2ede287b100b7d0
2021-06-24 13:41:01 -07:00
5a45103139 ns for fx: add API usage logging (#60103)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60103

Adds internal logging for NS for FX API usage.

Test Plan: CI

Reviewed By: jerryzh168

Differential Revision: D29166710

fbshipit-source-id: 2a1bf2f6038b0c6c5945b57b2db2de25c585a04a
2021-06-18 10:25:59 -07:00
a9dc9535f6 ns for fx: move relatedness mapping to mappings file (#57171)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57171

No logic change, just moving the mapping to a file where
the other mappings are.

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D28077978

fbshipit-source-id: 4049d6a498156a5dffe3a03d2f4abc79da7bf907
2021-05-05 06:29:11 -07:00
a359cfac22 ns for fx: add option to skip matching classes and functions (#57026)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57026

Adds a config option to skip matching classes by class type
and functions by function type.

This is useful when users make custom modules which return
types other than tensors. With the current implementation of
Logger, these are not scriptable.
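
Example (a hedged sketch of how such a skip-matching option could work; the config keys and helper are invented and may differ from the real implementation):

```python
class FakeCustomModule:  # returns a non-tensor, so a Logger on it would not be scriptable
    pass

# Invented config keys: sets of module classes and functions that the
# graph matcher should not attempt to pair up.
config = {
    "non_matchable_module_classes": {FakeCustomModule},
    "non_matchable_functions": {len},
}

def is_matchable(node_target, config):
    if isinstance(node_target, type) and node_target in config["non_matchable_module_classes"]:
        return False
    if node_target in config["non_matchable_functions"]:
        return False
    return True

assert not is_matchable(FakeCustomModule, config)
assert is_matchable(sum, config)
```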

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_user_module_scriptable
```

Reviewed By: jerryzh168

Differential Revision: D28030093

Pulled By: vkuzo

fbshipit-source-id: 71dc54dd935d2071c4b017260ea2a1e5c2298bfe
2021-04-27 16:29:00 -07:00
e8a5490c0a ns for fx: support binary ops when adding unshadowed loggers for inputs (#57025)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57025

Adds the ability to log unshadowed inputs of binary ops such as `add`
and `mul`, when indices 0, 1, or 0 and 1 are tensors.

Note: making shadowing support this is saved for a future PR.
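
Example (a torch-free sketch of the index handling described above; graph-node arguments are represented by strings, and the helper name is invented):

```python
def tensor_arg_indices(args, is_tensor):
    # For a binary op like add/mul, only arguments that are tensors
    # (graph nodes) get an input logger; scalars are skipped.
    return [i for i, a in enumerate(args[:2]) if is_tensor(a)]

# e.g. torch.add(x, 1.0): only index 0 is a tensor
assert tensor_arg_indices(("x_node", 1.0), lambda a: isinstance(a, str)) == [0]
# e.g. torch.mul(x, y): indices 0 and 1 are both tensors
assert tensor_arg_indices(("x_node", "y_node"), lambda a: isinstance(a, str)) == [0, 1]
```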

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_add_mul_inputs_activations
```

Reviewed By: jerryzh168

Differential Revision: D28030098

Pulled By: vkuzo

fbshipit-source-id: fd46760faac153975cd7688e70c44991ec1d5dff
2021-04-27 16:28:58 -07:00
782a0a1469 ns for fx: allow user functions in shadowing (#57022)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57022

Allows usage of user functions in NS shadow APIs. We expose the
i/o mapping to the user APIs and thread it throughout the code.

Note: the format of the mapping is currently not ideal; improving
it is saved for a future PR.

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_user_defined_function
```

Reviewed By: jerryzh168

Differential Revision: D28030095

Pulled By: vkuzo

fbshipit-source-id: 2863312362223ad276437e2aeeec4a3f71b691c7
2021-04-27 16:28:53 -07:00
45e96b5410 Revert D27833189: ns for fx: allow user functions in shadowing
Test Plan: revert-hammer

Differential Revision:
D27833189 (1917350977)

Original commit changeset: dac418e294d1

fbshipit-source-id: c6f58dac1a35806ea7d1dfb993d67e698196dee1
2021-04-27 01:01:06 -07:00
abb8b6c1c1 Revert D27864296: ns for fx: support binary ops when adding unshadowed loggers for inputs
Test Plan: revert-hammer

Differential Revision:
D27864296 (c004346c88)

Original commit changeset: 3cbeb728297a

fbshipit-source-id: bc87cb707b14a0965452e9a1aa0d4e37ffbe5bf1
2021-04-27 01:01:01 -07:00
cc8c5c1447 Revert D27886107: ns for fx: add option to skip matching classes and functions
Test Plan: revert-hammer

Differential Revision:
D27886107 (92c7aec5f5)

Original commit changeset: ec92c4f7ab71

fbshipit-source-id: 87d3b91c3d601f1706b61a2b2ce287a7b44f3d81
2021-04-27 01:00:59 -07:00
92c7aec5f5 ns for fx: add option to skip matching classes and functions (#56493)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56493

Adds a config option to skip matching classes by class type
and functions by function type.

This is useful when users make custom modules which return
types other than tensors. With the current implementation of
Logger, these are not scriptable.

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_user_module_scriptable
```

needs more testing before landing

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D27886107

fbshipit-source-id: ec92c4f7ab7141021bc022f07b3b558b42bbb986
2021-04-26 17:03:28 -07:00
c004346c88 ns for fx: support binary ops when adding unshadowed loggers for inputs (#56408)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56408

Adds the ability to log unshadowed inputs of binary ops such as `add`
and `mul`, when indices 0, 1, or 0 and 1 are tensors.

Note: making shadowing support this is saved for a future PR.

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_add_mul_inputs_activations
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D27864296

fbshipit-source-id: 3cbeb728297aa192d1ea17e815299709fd9db056
2021-04-26 17:03:26 -07:00
1917350977 ns for fx: allow user functions in shadowing (#56301)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56301

Allows usage of user functions in NS shadow APIs. We expose the
i/o mapping to the user APIs and thread it throughout the code.

Note: the format of the mapping is currently not ideal; improving
it is saved for a future PR.

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_user_defined_function
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D27833189

fbshipit-source-id: dac418e294d1c9b204efbf4071d5cc12a9e784c0
2021-04-26 17:03:21 -07:00
8dbf6ae8fa ns for fx: handling for user functions in weight and unshadowed act APIs (#56292)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56292

Adds hooks for specifying user defined functions to NS weight and
unshadowed activation APIs.

Adding it to shadowed activation APIs will be a bit more work, upcoming
in a separate PR.

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_user_defined_function
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D27830409

fbshipit-source-id: 6bbddc3062c0b3e412a3147244795319c0785a92
2021-04-26 17:03:18 -07:00
75024e228c Add lint for unqualified type: ignore (#56290)
Summary:
The other half of https://github.com/pytorch/pytorch/issues/56272.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56290

Test Plan:
CI should pass on the tip of this PR, and we know that the lint works because the following CI runs (before this PR was finished) failed:

- https://github.com/pytorch/pytorch/runs/2384511062
- https://github.com/pytorch/pytorch/actions/runs/765036024

Reviewed By: seemethere

Differential Revision: D27867219

Pulled By: samestep

fbshipit-source-id: e648f07b6822867e70833e23ddafe7fb7eaca235
2021-04-21 08:07:23 -07:00
3786c2719d ns for fx: make NSTracer inherit from QuantizationTracer (#55505)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55505

This is necessary to add support in NS for QAT modules, to avoid
duplicating logic between NSTracer and QuantizationTracer.

The engineering work to expose the custom module and class names to
the user will be in a future PR.

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs
python test/test_quantization.py TestFXNumericSuiteCoreAPIsModels
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D27650407

fbshipit-source-id: 431f47c5353b41c11371c5efa79657bfd085459a
2021-04-15 16:11:14 -07:00
457fac0a33 ns for fx: move more weight matching logic to weight_utils.py (#55288)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55288

No logic change, just moving util-like code to the utils file.

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs
```

Imported from OSS

Reviewed By: hx89

Differential Revision: D27575423

fbshipit-source-id: cd5188a0940bb664be7d0275faa7df8ea18401a8
2021-04-14 09:04:26 -07:00
13d7b40ea0 ns for fx: add F.conv2d and F.conv3d weight extraction (#55287)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55287

Adds support for extracting weights from F.conv2d and F.conv3d.
F.conv1d and the fused variants are saved for future PRs.
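
Example (a hedged sketch of functional-conv weight extraction: `F.conv2d(input, weight, bias, ...)` takes the weight as its second argument, so in an FX graph it sits at `node.args[1]`; the `Node` class here is a minimal stand-in, not `torch.fx.Node`):

```python
class Node:
    # Minimal stand-in for an FX graph node: a call target plus its args.
    def __init__(self, target, args):
        self.target, self.args = target, args

def get_conv_fun_weight(node):
    # F.conv2d(input, weight, bias, ...) -> the weight is args[1]
    return node.args[1]

n = Node("conv2d", ("x", "w_tensor", None))
assert get_conv_fun_weight(n) == "w_tensor"
```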

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_extract_weights_conv_fun
```

Imported from OSS

Reviewed By: hx89

Differential Revision: D27575424

fbshipit-source-id: e945912d7d0ab320f47cab30d00d60ddb7497158
2021-04-14 09:04:24 -07:00
8062545c63 ns for fx: weight extraction for conv1d and conv3d (#55079)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55079

Extends weight extraction to conv1d and conv3d.

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D27474696

fbshipit-source-id: 9d5f892160b1b003aa557cfd099c6834e3f70ded
2021-04-02 09:35:34 -07:00
f6b25e758d ns for fx: move it to top level file (#55060)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55060

Removes the previous iteration of Numeric Suite for FX graph mode
quantization, and moves the current iteration into the top level
file.

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs
python test/test_quantization.py TestFXGraphMatcher
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D27467725

fbshipit-source-id: 4c22b5a3221857231f9f59cf6d2908820e6a7f12
2021-04-02 09:35:27 -07:00
74ec9e7ccf compare_model_outputs_fx API implementation (#49266)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49266

compare_model_outputs_fx API implementation
ghstack-source-id: 120828880

Test Plan:
buck test mode/dev caffe2/test:quantization_fx -- 'test_compare_model_outputs_linear_static_fx'
buck test mode/dev caffe2/test:quantization_fx -- 'test_compare_model_outputs_conv_static_fx'
buck test mode/dev caffe2/test:quantization_fx -- 'test_compare_model_stub_linear_static_fx'
buck test mode/dev caffe2/test:quantization_fx -- 'test_compare_model_stub_conv_static_fx'
buck test mode/dev caffe2/test:quantization -- 'test_compare_model_outputs_linear_static'
buck test mode/dev caffe2/test:quantization -- 'test_compare_model_outputs_conv_static'

Reviewed By: vkuzo

Differential Revision: D25507933

fbshipit-source-id: 1b502b5eadb0fafbe9e8c2e843410bca03c63fd6
2021-02-02 10:43:25 -08:00
c354888e5d compare_model_stub_fx API implementation (#48951)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48951

compare_model_stub_fx API implementation
ghstack-source-id: 120817825

Test Plan:
buck test mode/dev caffe2/test:quantization_fx -- 'test_compare_model_stub_conv_static_fx'
buck test mode/dev caffe2/test:quantization_fx -- 'test_compare_model_stub_linear_static_fx'

Reviewed By: vkuzo

Differential Revision: D25379000

fbshipit-source-id: f1321d37b60b56b202e7d227e370ce13addb10cc
2021-02-01 22:16:14 -08:00
14edc726d9 Clean up some type annotations in caffe2/torch/quantization (#49942)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49942

Upgrades type annotations from Python2 to Python3

Test Plan: Sandcastle tests

Reviewed By: vkuzo

Differential Revision: D25717551

fbshipit-source-id: 1b63dc485ecf6641641b05f7ce095ae1d2d87346
2020-12-29 15:43:50 -08:00
f8722825b5 Compare Weights FX Implementation (#48056)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48056

PyTorch FX Quantization API:  Compare weights
ghstack-source-id: 117255311

Test Plan:
buck test mode/dev caffe2/test:quantization -- 'test_remove_qconfig_observer_fx'
buck test mode/dev caffe2/test:quantization -- 'test_compare_weights_linear_dynamic_fx'
buck test mode/dev caffe2/test:quantization -- 'test_compare_weights_linear_static_fx'
buck test mode/dev caffe2/test:quantization -- 'test_compare_weights_conv_static_fx'

Reviewed By: hx89

Differential Revision: D24940516

fbshipit-source-id: 301c1958c0e64ead9072e0fd002e4b21e8cb5b79
2020-11-20 17:17:19 -08:00