842 Commits

92d24e3060 Revert D27855386: [pytorch][PR] Support factory kwargs in torch.nn modules
Test Plan: revert-hammer

Differential Revision: D27855386 (40483acc51)

Original commit changeset: dabd505d2a04

fbshipit-source-id: f5bf3120d87861b30a8e1bf11977ad7d27cd8500
2021-04-19 20:07:20 -07:00
e3900d2ba5 Add lint for unqualified noqa (#56272)
Summary:
As this diff shows, currently there are a couple hundred instances of raw `noqa` in the codebase, which just ignore all errors on a given line. That isn't great, so this PR changes all existing instances of that antipattern to qualify the `noqa` with respect to a specific error code, and adds a lint to prevent more of this from happening in the future.

Interestingly, some of the examples the `noqa` lint catches are genuine attempts to qualify the `noqa` with a specific error code, such as these two:
```
test/jit/test_misc.py:27:            print(f"{hello + ' ' + test}, I'm a {test}") # noqa E999
test/jit/test_misc.py:28:            print(f"format blank") # noqa F541
```
However, those are still wrong because they are [missing a colon](https://flake8.pycqa.org/en/3.9.1/user/violations.html#in-line-ignoring-errors), which actually causes the error code to be completely ignored:

- If you change the unqualified error codes to anything else, the warnings are still suppressed.
- If you add the necessary colons, it is revealed that `E261` was also being suppressed, unintentionally:
  ```
  test/jit/test_misc.py:27:57: E261 at least two spaces before inline comment
  test/jit/test_misc.py:28:35: E261 at least two spaces before inline comment
  ```
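
For illustration (a minimal example of my own, not from the PR), the colon is what makes flake8 honor the code:

```
# Without the colon, flake8 treats this as a bare noqa and suppresses ALL errors:
x=1  # noqa E225
# With the colon, only E225 (missing whitespace around operator) is suppressed:
y=1  # noqa: E225
```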

I did try using [flake8-noqa](https://pypi.org/project/flake8-noqa/) instead of a custom `git grep` lint, but it didn't seem to work. This PR is definitely missing some of the functionality that flake8-noqa is supposed to provide, though, so if someone can figure out how to use it, we should do that instead.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56272

Test Plan:
CI should pass on the tip of this PR, and we know that the lint works because the following CI run (before this PR was finished) failed:

- https://github.com/pytorch/pytorch/runs/2365189927

Reviewed By: janeyx99

Differential Revision: D27830127

Pulled By: samestep

fbshipit-source-id: d6dcf4f945ebd18cd76c46a07f3b408296864fcb
2021-04-19 13:16:18 -07:00
40483acc51 Support factory kwargs in torch.nn modules (#54508)
Summary:
Continuation of https://github.com/pytorch/pytorch/pull/53144

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54508

Reviewed By: bdhirsh

Differential Revision: D27855386

Pulled By: jbschlosser

fbshipit-source-id: dabd505d2a04208e74b158570fb2859c736eea2c
2021-04-19 12:24:58 -07:00
d05e7c163f Revert D27600457: [pytorch][PR] Support factory kwargs in torch.nn modules
Test Plan: revert-hammer

Differential Revision: D27600457 (1077f87269)

Original commit changeset: b58bfee61c39

fbshipit-source-id: 19d5bfc5133a3880383731d0332503ca1f3bce0c
2021-04-19 07:47:24 -07:00
1077f87269 Support factory kwargs in torch.nn modules (#54508)
Summary:
Continuation of https://github.com/pytorch/pytorch/pull/53144

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54508

Reviewed By: mrshenli

Differential Revision: D27600457

Pulled By: jbschlosser

fbshipit-source-id: b58bfee61c3917524b4622f63ef216c27a588eb1
2021-04-19 06:58:40 -07:00
48e675ac75 fx quant: fix subtle bug in BinaryOpQuantizeHandler logic in matching (#56294)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56294

When matching a pattern to `BinaryOpQuantizeHandler`, we need to make
sure we check for dtype support on the base node, instead of the current
node.  This is important in cases such as `add-relu` and `mul-relu`,
when the current node is `relu`, but the base node is `add|mul`.
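
A minimal sketch of such a pattern (my own illustration, not code from this PR):

```
import torch

class AddRelu(torch.nn.Module):
    def forward(self, x, y):
        z = x + y             # base node of the matched add-relu pattern
        return torch.relu(z)  # current node; dtype support is checked on the add
```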

Test Plan:
```
python test/test_quantization.py TestQuantizeFx
```

There is no good test case to check this in the current logic.  I created an
add-relu model manually and verified with pdb that the add node was
being used to match against dtypes.

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D27831070

fbshipit-source-id: 3697f1328dff9fec3eb910bae49a73793ef36d63
2021-04-16 18:19:22 -07:00
98933866a9 [quant][graphmode][fx] Optimize cat (#54813)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54813

Previously we had a cat that takes a list of Tensors with different qparams, dequantizes
them, concatenates them, and requantizes with the output qparams. This adds unnecessary
overhead from dequantizing and quantizing Tensors.

This PR adds an optimization for the cat operator: we make sure the inputs and output of cat
use the same observer/fake_quant and produce a cat that does not do rescaling.
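
For illustration (a hypothetical sketch, not code from this PR), with matching qparams the quantized cat needs no rescaling:

```
import torch

a = torch.quantize_per_tensor(torch.randn(2, 2), scale=0.1, zero_point=0, dtype=torch.qint8)
b = torch.quantize_per_tensor(torch.randn(2, 2), scale=0.1, zero_point=0, dtype=torch.qint8)
# Inputs and output share qparams, so no dequantize/requantize round trip is needed:
out = torch.cat([a, b], dim=0)
```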

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D27408377

fbshipit-source-id: 6a4bdcfd15e57ea1fe0f7e72d1e1288eb3ece4db
2021-04-16 16:00:43 -07:00
9f216b9499 ns for fx: enable shadowing int8 to int8 (#56205)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56205

Allows int8 modules to shadow int8 modules. This is useful when
comparing quantized models with different qconfigs.
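
For example (an illustrative sketch; the NS API calls themselves are omitted), two int8 qconfigs one might want to compare:

```
import torch
from torch.quantization import get_default_qconfig

# Two different int8 qconfigs; with int8-to-int8 shadowing, NS can compare
# a model quantized with one against the same model quantized with the other.
qconfig_fbgemm = get_default_qconfig("fbgemm")
qconfig_qnnpack = get_default_qconfig("qnnpack")
```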

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_int8_shadows_int8
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D27807405

fbshipit-source-id: 10c3bc7ab9bb1e6808aa1af23a34c7cf380465fd
2021-04-16 10:34:47 -07:00
ae0af8bb51 ns for fx: move unmatchable mod/fun/meth mapping to mappings file (#56197)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56197

No logic change, just moving code around.

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_op_io_dtype_coverage
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D27805332

fbshipit-source-id: 0a63cf6ef7e5c4f655cdd5a18d54cc988424ac80
2021-04-16 10:34:46 -07:00
6de5d13e0f ns for fx: make call_method nodes work in NS APIs (#56196)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56196

Enables `call_method` nodes to work in NS APIs for unshadowed
and shadowed activations.

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_op_io_dtype_coverage
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_match_activations_meth_ptq
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_add_shadow_loggers_meth_ptq
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D27805335

fbshipit-source-id: 39b9c02c5c5faf098f2dd4f36d1ea8296d51a63c
2021-04-16 10:34:44 -07:00
07f3eaa716 ns for fx: remove deprecated code (#56195)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56195

This is outdated; removing it (forgot to clean up in a previous PR).

Test Plan:
```
python test/test_quantization.py TestFXGraphMatcher
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D27805334

fbshipit-source-id: 3b035945b4928a3c727e96e0f7fe0efe201f42c0
2021-04-16 10:34:42 -07:00
0fbc2be234 ns for fx: enable call_method nodes in graph matching (#56194)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56194

Enables the NS graph matcher to also match `call_method` nodes.
These are useful for ops called as Tensor methods, such as `x.sigmoid()`.
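
A minimal example of a `call_method` node (my own illustration):

```
import torch
import torch.fx

class M(torch.nn.Module):
    def forward(self, x):
        return x.sigmoid()  # traces to a call_method node with target "sigmoid"

gm = torch.fx.symbolic_trace(M())
for node in gm.graph.nodes:
    print(node.op, node.target)  # placeholder x, call_method sigmoid, output output
```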

Test Plan:
```
python test/test_quantization.py TestFXGraphMatcher.test_methods
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D27805333

fbshipit-source-id: 509ae283db6b245671f11e3eb6b7fcb3a5735ef5
2021-04-16 10:34:41 -07:00
2380cc7d65 ns for fx: fill out coverage for node I/O types (#55918)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55918

Adds coverage for determining I/O dtype for various ops. This will
enable shadowing of these ops.

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_op_io_dtype_coverage
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D27740661

fbshipit-source-id: c5ce873ec56bffa50ca46d2fe134c70ed677e37e
2021-04-16 10:34:39 -07:00
430fc03e3f ns for fx: add category for ops which accept fp32 or int8 input (#55859)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55859

Adds mappings for ops which can accept either fp32 or int8 input,
such as `F.relu`.  A future PR will fill out the op coverage.

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_op_with_either_fp32_or_int8_input
```

Imported from OSS

Reviewed By: raghuramank100

Differential Revision: D27740659

fbshipit-source-id: cfc3dd58319b7161ca7f1fe05cd22d9a3ff11141
2021-04-16 10:34:37 -07:00
5ec6434945 ns for fx: move op dtype category mapping to separate file (#55858)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55858

Moves the mappings of input and output dtypes of various ops
into their own file, and makes the variable names clearer. No logic
change.

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs
```

Imported from OSS

Reviewed By: raghuramank100

Differential Revision: D27740662

fbshipit-source-id: d384e7e542d9cc868d9cee9c53c2ac2f74a15a48
2021-04-16 10:33:05 -07:00
f59244ec16 ns for fx: add test for op relationship coverage (#55837)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55837

Adds a test that checks that all of the relevant op pairs defined in
`quantization_mappings.py` are also defined as related by Numeric
Suite.

Note: this does not cover all the ops, just the ones in
`quantization_mappings.py`.  A future PR will fill out the remainder.

Test Plan:
```
python test/test_quantization.py TestFXGraphMatcher.test_op_relationship_mapping
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D27719979

fbshipit-source-id: 9e852ef94da5f7a653ea15ba52c68a89c8e30208
2021-04-15 16:11:26 -07:00
c8209a7336 ns for fx: move pattern utils to separate file (#55805)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55805

No logic change, just moving util functions to separate file.

Test Plan:
```
python test/test_quantization.py TestFXGraphMatcher
python test/test_quantization.py TestFXNumericSuiteCoreAPIs
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D27719982

fbshipit-source-id: c80d5397c1efeb9fc83eacaa532ecbde557cca3f
2021-04-15 16:11:24 -07:00
b461104554 ns for fx: make get_reversed_fusions reuse quantization fusions (#55803)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55803

Makes the NS `graph_matcher.get_reversed_fusions` use the fusions
defined in the FX quantization code instead of duplicating them.

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs
python test/test_quantization.py TestFXNumericSuiteCoreAPIsModels
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D27719980

fbshipit-source-id: 12e3183405181bb9001f10e765cfb4d2ffdfdd88
2021-04-15 16:11:23 -07:00
1cbc4023e9 ns for fx: add qat handling for weight extraction (#55506)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55506

Makes the NS weight extraction tests also test QAT, and fixes
the mappings where necessary to cover all the fusions and make
the tests pass.

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_extract_weights_mod_ptq
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_extract_weights_mod_qat
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D27650409

fbshipit-source-id: c5bd9268d1bc559afc27d4c5109effd77bf1538a
2021-04-15 16:11:16 -07:00
3786c2719d ns for fx: make NSTracer inherit from QuantizationTracer (#55505)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55505

This is necessary to add support in NS for QAT modules, to avoid
duplicating logic between NSTracer and QuantizationTracer.

The eng work to expose the custom module and class names to
the user will be in a future PR.

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs
python test/test_quantization.py TestFXNumericSuiteCoreAPIsModels
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D27650407

fbshipit-source-id: 431f47c5353b41c11371c5efa79657bfd085459a
2021-04-15 16:11:14 -07:00
5ad3bc715c ns for fx: change node I/O determination to strict allowlist (#55434)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55434

Before this PR, there was some hacky logic which determined
the input and output types of nodes based on heuristics such
as inspecting `__module__`, or assuming that an op has an
I/O dtype of `torch.float` when the heuristics did not find
any matches.  This is problematic because the heuristics were not exact,
and this could result in nonsensical shadow graphs when the heuristics
would return an incorrect dtype.

This PR switches the dtype determination to an allowlist system,
where we specify exactly what the dtypes are for the nodes or modules
which are in an allowlist, and we add an `UNKNOWN` type for everything
else.  The shadow logic is changed to skip inserting shadows on any
function or module where the I/O dtype is unknown.

The current allowlist only contains functions necessary for the
currently existing tests.  Filling out the allowlist with all necessary
torch functions is left for a future PR.
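
A minimal sketch of the allowlist idea (names and structure are my own, not the actual NS internals):

```
import torch
import torch.nn.functional as F

# Functions with known fp32 I/O vs. known int8 I/O; anything not listed
# is UNKNOWN, and shadow insertion is skipped for it.
FUNS_IO_TYPE_FP32 = {F.linear, F.conv2d}
FUNS_IO_TYPE_INT8 = {torch.ops.quantized.linear, torch.ops.quantized.conv2d}

def get_node_io_dtype(node):
    if node.op == "call_function" and node.target in FUNS_IO_TYPE_FP32:
        return torch.float
    if node.op == "call_function" and node.target in FUNS_IO_TYPE_INT8:
        return torch.qint8
    return "UNKNOWN"
```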

As a result of this, we can do the following (also implemented in this PR):
1. enable graph matching on nodes with equal types (for example,
F.linear and F.linear). The restriction against matching nodes with equal
types was in the code as a placeholder; it's better to allow comparisons
of nodes of equal types. One case where this is useful is unshadowed
activations.
2. enable models with user defined modules to be passed to Numeric Suite
APIs without errors.

Test Plan:
```
python test/test_quantization.py TestFXGraphMatcher
python test/test_quantization.py TestFXGraphMatcherModels
python test/test_quantization.py TestFXNumericSuiteCoreAPIs
python test/test_quantization.py TestFXNumericSuiteCoreAPIsModels
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D27622418

fbshipit-source-id: 40dcba0222c01154c141467640c1eb89725f33a7
2021-04-15 16:09:51 -07:00
8188d18f8d ns for fx: add functional conv-relu fusion support (#55433)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55433

Makes `F.conv{n}d -> F.relu` patterns work for NS weight
extraction.

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_extract_weights_conv_fun
```

Imported from OSS

Reviewed By: hx89

Differential Revision: D27622417

fbshipit-source-id: d3ee08bd19865874cff3776c3b69e232fdfc5912
2021-04-14 09:04:37 -07:00
784ae23d43 ns for fx: fix bug in weight extraction testing (#55431)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55431

Fixes a bug in the test cases: returning early resulted
in some tests not being run.  Adds logic for `nni.LinearReLU`,
which was unmasked by making the tests run.

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_extract_weights_mod
```

Imported from OSS

Reviewed By: hx89

Differential Revision: D27622415

fbshipit-source-id: 79d9e3125e5d881d9d13645abbe4bd007a5e1d44
2021-04-14 09:04:32 -07:00
8b992ab0e4 ns for fx: add conv1d weight extraction (#55327)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55327

Adds NS functionality for extracting weights from `F.conv1d` nodes.

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_extract_weights_conv_fun
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D27575425

fbshipit-source-id: 65fa194802ac7a9fb75b7616d962c5c2e71321ff
2021-04-14 09:04:30 -07:00
8fc1ca0d22 fx quant: fix prepacking for F.conv1d (#55311)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55311

Before this PR, `F.conv1d` was matched by FX graph mode quant patterns
but the prepacking was happening inline.  There was also a bug with an
argument type mismatch.

This PR fixes both issues and adds a test. Thanks jerryzh168 for the
code tip.
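
A toy module that exercises this path (my own repro sketch, assuming the standard prepare_fx/convert_fx flow):

```
import torch
import torch.nn.functional as F

class FunctionalConv1d(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(4, 2, 3))
        self.bias = torch.nn.Parameter(torch.randn(4))

    def forward(self, x):
        # matched by FX graph mode quant; after convert, the weight should be
        # prepacked once instead of inline on every call
        return F.conv1d(x, self.weight, self.bias)
```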

Test Plan:
```
python test/test_quantization.py TestQuantizeFx.test_functional_not_reference
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D27575422

fbshipit-source-id: 42301e23cb101a9e64e46800813bc771317e233e
2021-04-14 09:04:28 -07:00
457fac0a33 ns for fx: move more weight matching logic to weight_utils.py (#55288)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55288

No logic change, just moving util-like code to the utils file.

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs
```

Imported from OSS

Reviewed By: hx89

Differential Revision: D27575423

fbshipit-source-id: cd5188a0940bb664be7d0275faa7df8ea18401a8
2021-04-14 09:04:26 -07:00
13d7b40ea0 ns for fx: add F.conv2d and F.conv3d weight extraction (#55287)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55287

Adds support for extracting weights from F.conv2d and F.conv3d.
F.conv1d and the fused variants are saved for future PRs.

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_extract_weights_conv_fun
```

Imported from OSS

Reviewed By: hx89

Differential Revision: D27575424

fbshipit-source-id: e945912d7d0ab320f47cab30d00d60ddb7497158
2021-04-14 09:04:24 -07:00
1fb2abc7ad ns for fx: rename SugraphTypeRelationship to SubgraphTypeRelationship (#55155)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55155

Fixes typo in enum name, no logic change

Test Plan:
CI

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D27504625

fbshipit-source-id: 21605dadb48225987f1da5ad5f6c30b0183278f2
2021-04-14 09:04:22 -07:00
37a404610f ns for fx: add allowlist for ops with same signature across dtypes (#55154)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55154

Adds functionality to NS to allow matching nodes which have the
same signature across dtypes.  For now, only the skeleton is added;
we can fill out the rest of the ops later.  This is to unblock
the work to change `cat` to have the same signature for fp32 and int8,
and to keep the testing we have for `cat` in NS.

For context, the main reason we are not matching nodes with equal types,
for now, is user-defined types for which we do not know the signature.
For now, the design is a strict allowlist of everything. In the future,
we may adjust the design to safely match user-defined types.

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_ops_with_same_fp32_and_int8_signature
python test/test_quantization.py TestFXGraphMatcher.test_nodes_with_equal_types_do_not_get_matched
```

Imported from OSS

Reviewed By: raghuramank100

Differential Revision: D27504624

fbshipit-source-id: 4f8eb4f3258caf6f99aa373ca7ba516ebbcf4779
2021-04-14 09:04:20 -07:00
444b318a90 ns for fx: add linear-relu mod weight extraction (#55080)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55080

Adds support for extracting weights of the linear-relu module pattern.

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs
```

Imported from OSS

Reviewed By: raghuramank100

Differential Revision: D27474701

fbshipit-source-id: 69ceaadc28d7fdcebd16d519367274d348b0dd29
2021-04-14 09:02:51 -07:00
c96b5b2a20 [quant][graphmode][fx][fix] Fix fp16 reference patterns for linear (#55727)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55727

The number of dequantize ops for the fp16 reference pattern was incorrect before; this
PR fixes the problem.

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D27713390

fbshipit-source-id: 72b8d4cda0bdcea74abe27a76f918d1b47819b01
2021-04-13 23:19:45 -07:00
4753100a3b Un-ignore F403 in .flake8 (#55838)
Summary:
Generally wildcard imports are bad for the reasons described here: https://www.flake8rules.com/rules/F403.html

This PR replaces wildcard imports with an explicit list of imported items where possible, and adds a `# noqa: F403` comment in the other cases (mostly re-exports in `__init__.py` files).
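
An illustrative before/after (my own example, not taken from the diff):

```
# Before: wildcard import triggers F403 and hides where names come from.
from torch.nn.modules import *  # noqa: F403  (acceptable only for re-exports)

# After: explicit imports where possible.
from torch.nn.modules import Conv2d, Linear
```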

This is a prerequisite for https://github.com/pytorch/pytorch/issues/55816, because currently [`tools/codegen/dest/register_dispatch_key.py` simply fails if you sort its imports](https://github.com/pytorch/pytorch/actions/runs/742505908).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55838

Test Plan: CI. You can also run `flake8` locally.

Reviewed By: jbschlosser

Differential Revision: D27724232

Pulled By: samestep

fbshipit-source-id: 269fb09cb4168f8a51fd65bfaacc6cda7fb87c34
2021-04-13 09:24:07 -07:00
ec9b20ddc0 fx quant: fix edge case with copynode after user function (#55710)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55710

In the current code, there is an edge case which leads to an error
after the prepare step:

1. have a pattern like this:

```
user_func_unmatched_to_qhandler -> node_matched_to_copy_node_qhandler
```

2. the user function returns a type which is not observable (i.e. not a
Tensor)

3. if this is run through `prepare_fx`, calibrating it with data leads
to a runtime error, because observers cannot observe non-tensor types.

This PR fixes the issue.  If a node matched to `CopyNodeQuantizeHandler`
is after an unmatched node, we delete the observer.
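
A hedged repro sketch of the edge case (the exact op matched to `CopyNodeQuantizeHandler` may differ):

```
import torch
import torch.fx

def batch_dim(x):
    # user-defined function returning a non-Tensor; it cannot be observed
    return x.shape[0]

torch.fx.wrap("batch_dim")  # keep this as a single call_function node when tracing

class M(torch.nn.Module):
    def forward(self, x):
        n = batch_dim(x)         # unmatched node with a non-Tensor output
        return x.reshape(n, -1)  # copy-style node following the unmatched node
```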

Test Plan:
```
python test/test_quantization.py TestQuantizeFx.test_no_obs_between_unmatched_node_and_copy_node
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D27686811

fbshipit-source-id: 320be41b1f383c6352ff89fb39a9f480822a3bb2
2021-04-12 08:47:44 -07:00
3e8ebb17aa [reland][quant][graphmode][fx][refactor] Factor out insert_observers_for_model to a separate function (#54733) (#55307)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/55307

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D27567475

fbshipit-source-id: 74b7db63f7e1e795e7ac7ed6027cf786d922e7bf
2021-04-09 17:56:55 -07:00
bbd2b1bd3c [quant][graphmode][fx] Add shape to nontensor op list (#55529)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55529

`x.shape` outputs a non-Tensor; add this to the `all_node_args_have_no_tensors` function
to avoid inserting an observer for the getattr "shape" node.
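
For example (an illustrative module, not from this PR):

```
import torch

class M(torch.nn.Module):
    def forward(self, x):
        s = x.shape  # getattr node producing a torch.Size, not a Tensor
        return x.reshape(s[0], -1)
```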

Test Plan: Imported from OSS

Reviewed By: wat3rBro

Differential Revision: D27628145

fbshipit-source-id: 4729294ab80c0a1e72440396d31e7e82257b1092
2021-04-08 23:27:05 -07:00
4d449f915f [quant][graphmode][fx] Separate handling Copy operator to a helper function (#54644) (#55429)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55429

Previously we special-cased the copy operator in the normal insert-observer code; this PR splits the
special-case logic into a separate function and keeps the rest of the code clean.

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D27609972

fbshipit-source-id: 378f6aa70f18c0b477b62b6efe236648748aae7e
2021-04-08 22:12:24 -07:00
add49e7e4e Enforce PEP263 for PyTorch python codebase (#55346)
Summary:
All Python files containing non-ASCII characters should be correctly annotated with a `# -*- coding: utf-8 -*-` comment.

Delete a number of superfluous UTF-8 characters, most commonly the UTF-8 closing quotation mark U+2019 (’) used instead of the ASCII apostrophe ', for example `Module’s`->`Module's`
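
For reference, the PEP 263 annotation must be on the first or second line of the file:

```
# -*- coding: utf-8 -*-
greeting = "naïve café"  # non-ASCII literals are now explicitly declared as UTF-8
```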

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55346

Reviewed By: samestep

Differential Revision: D27582044

Pulled By: malfet

fbshipit-source-id: c1cd89655915858ff3a41f675cdfffff795a8e44
2021-04-06 18:31:38 -07:00
8eaa4a97b7 Back out "[quant][graphmode][fx] Separate handling Copy operator to a helper function" (#55388)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55388

Temporarily revert D27314678 (c57541ce06); it appears to cause a perf regression that makes quantization of some models take too long to complete tests.

Reviewed By: houseroad

Differential Revision: D27583809

fbshipit-source-id: e9c088ccbfd3bfb3a1d4c7eafee3eca29ee7717b
2021-04-06 14:20:36 -07:00
8062545c63 ns for fx: weight extraction for conv1d and conv3d (#55079)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55079

Extends weight extraction to conv1d and conv3d.

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D27474696

fbshipit-source-id: 9d5f892160b1b003aa557cfd099c6834e3f70ded
2021-04-02 09:35:34 -07:00
80b1b7e4b1 ns for fx: ensure kwargs are handled when graph matching (#55078)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55078

Fixes a TODO: make sure we iterate through kwargs as well as args
when navigating graphs.  We can use the `node.all_input_nodes` convenience
property to accomplish this.
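
A small demonstration (my own example) of why kwargs matter here:

```
import torch
import torch.fx

class M(torch.nn.Module):
    def forward(self, x, y):
        return torch.add(x, other=y)  # `y` arrives via node.kwargs, not node.args

gm = torch.fx.symbolic_trace(M())
for node in gm.graph.nodes:
    if node.op == "call_function":
        print(node.all_input_nodes)  # covers both args and kwargs: [x, y]
```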

Test Plan:
```
python test/test_quantization.py TestFXGraphMatcher
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D27474699

fbshipit-source-id: 8a6e3db5a73328c4f296ac5fce951e81213b6f58
2021-04-02 09:35:32 -07:00
a590fa7af4 ns for fx: clean up debug print statements (#55077)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55077

Deletes debugging prints from the code, no logic change.

Test Plan:
CI

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D27474700

fbshipit-source-id: 3d9d73da6615ddffdfdb0df270bcdfd2c4b50be3
2021-04-02 09:35:30 -07:00
f6b25e758d ns for fx: move it to top level file (#55060)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55060

Removes the previous iteration of Numeric Suite for FX graph mode
quantization, and moves the current iteration into the top level
file.

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs
python test/test_quantization.py TestFXGraphMatcher
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D27467725

fbshipit-source-id: 4c22b5a3221857231f9f59cf6d2908820e6a7f12
2021-04-02 09:35:27 -07:00
c6cb99a6c7 ns for fx: weight extraction for nni.ConvReLU2d (#54335)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54335

Simple fix to enable weight extraction for nni.ConvReLU2d.

Note: this module only appears if the internal GraphModule APIs are
called, so we add testing for this path.

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_extract_weights_mod
```

Imported from OSS

Reviewed By: hx89

Differential Revision: D27192844

fbshipit-source-id: 923cf63e29e4638fd77ca42e69aedb15fb20a330
2021-04-02 09:35:25 -07:00
5319d17be4 ns for fx: make input logging work for multi node subgraphs (#54327)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54327

Makes input logging work properly for multi-node subgraphs.

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_linear_fp16_shadow_activations
```

Imported from OSS

Reviewed By: hx89

Differential Revision: D27190137

fbshipit-source-id: 3f39bfd5112d5ee92c1e66c133e970c28db40d46
2021-04-02 09:35:22 -07:00
b8019cee0e ns for fx: make input logging work for multi-node subgraphs (#54326)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54326

Fixes unshadowed activation input logging for subgraphs where start_node does
not equal end_node. In detail:
* instead of passing around a single list of nodes, pass around a list
of nodes to instrument inputs, and a list of nodes to instrument
outputs. This way we can handle multi-node subgraphs properly, and we
also keep the subgraph instance definition out of the public APIs.
* add a test case

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_linear_fp16_activations
```

Imported from OSS

Reviewed By: hx89

Differential Revision: D27190138

fbshipit-source-id: 58e2377c1c128baaf3b760c1ad29098fb21f53d3
2021-04-02 09:35:20 -07:00
757e3cbf82 ns for fx: add support for shadowing linear fp16 patterns (#54275)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54275

Adds support for the NS shadow activations path for fp16 emulation
patterns such as

```
... -> dequantize -> linear -> relu -> to(torch.float16) -> ...
```

There are a couple of changes necessary here:

1. removing the restriction on the shadowing graph pass that the B
subgraph is a single node (since this subgraph is four nodes), and
modifying the code to correctly add the relevant inputs versus output
loggers (input loggers and subgraph copy if we are at start_node,
and output logger if we are at end_node)

2. modifying the logic for calculating node input and output type
to work correctly for the `to` and `dequantize` nodes:
2a. make the function return the first input and output, instead of just
the first input
2b. make the function handle `dequantize` correctly by recursively
using the output of its input
2c. make the function handle `to` correctly by recursively using the
output of its input and the target dtype

3. a bug fix to handle observers in kwargs, while copying subgraphs

Note: input logging for these patterns is not tested yet;
this will be in the next PR.
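
A hypothetical module whose converted graph contains the four-node pattern above (a sketch of my own, not from this PR):

```
import torch
import torch.nn.functional as F

class LinearReluFp16(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(4, 4))
        self.bias = torch.nn.Parameter(torch.randn(4))

    def forward(self, x):
        x = x.dequantize()                               # dequantize
        x = F.relu(F.linear(x, self.weight, self.bias))  # linear -> relu
        return x.to(torch.float16)                       # to(torch.float16)
```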

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_linear_fp16
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D27172655

fbshipit-source-id: 3bdc86618b2a5782627fcf303d58af7f47fbc30d
2021-04-02 09:33:36 -07:00
15f04e3466 Revert D27408378: [quant][graphmode][fx][refactor] Factor out insert_observers_for_model to a separate function
Test Plan: revert-hammer

Differential Revision: D27408378 (c445f4ee93)

Original commit changeset: 9143f0a6f939

fbshipit-source-id: ae65ea798a6d72f2ec724c4c1b492937edddf721
2021-03-31 20:51:42 -07:00
c445f4ee93 [quant][graphmode][fx][refactor] Factor out insert_observers_for_model to a separate function (#54733)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/54733

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D27408378

fbshipit-source-id: 9143f0a6f939fa80f1d1d6bae4b2d37aa21cb9b9
2021-03-31 18:50:47 -07:00
c57541ce06 [quant][graphmode][fx] Separate handling Copy operator to a helper function (#54644)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54644

Previously we special-cased the copy operator in the normal insert-observer code; this PR splits the
special-case logic into a separate function and keeps the rest of the code clean.

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D27314678

fbshipit-source-id: d36870ceb3717bc01eaeaa6f3f1532ad562cbaf1
2021-03-31 17:50:32 -07:00
c0d6dbdce4 [quant][fx][graphmode][refactor] Change activation_post_process_map to track the observer name instead (#54643)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54643

A refactor needed for future changes.

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D27314677

fbshipit-source-id: 972fbfb506f86da13f8817b3eaa5e6d0ad16ffe1
2021-03-31 17:50:30 -07:00