Commit Graph

48 Commits

62ecfa8b79 Fix typo under torch/csrc/jit/passes directory (#97222)
This PR fixes typos in comments under the `torch/csrc/jit/passes` directory.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97222
Approved by: https://github.com/davidberard98, https://github.com/kit1980
2023-03-23 04:08:42 +00:00
e57a694d77 Add some missing moves to torch jit passes (#92317)
Add some missing `std::move` calls in torch/csrc/jit/passes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92317
Approved by: https://github.com/ezyang
2023-01-22 16:33:08 +00:00
5c7e801c50 [pytorch][on device quant] Finalize method for ondevice quant (#83571)
Summary:
After inserting quant/dequant nodes in the graph, we need to:
1. Insert packed param creation and the quantized op.
2. Create a packed_params attribute in the top module. For this we need a
graph that is inlined except for calculate_qparams method calls, but those
can be inlined too, so perhaps we need to make sure no other CallMethods
exist.
3. Insert SetAttr for the packed param.
4. Insert GetAttr for the packed param.
5. Use the GetAttr output for the quantized op where applicable, e.g.
linear_dynamic.

The above is added to the quantize_<method-name> method created in the
previous step. Once these steps are done, clone the method into
quantized_<method-name>.

Modify quantize_<method-name>:
1. Remove all outputs from the method.
2. Run DCE.
3. Remove all inputs from the method except self.

Modify quantized_<method-name>:
1. Remove all packed_param SetAttr nodes.
2. Run DCE.

This should result in removal of all nodes that generate the packed param
(a toy sketch of the resulting method pair follows below).
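
A minimal eager-mode Python analogue of the intended end state, assuming a toy dynamic-linear module and a quantization-capable CPU build; the class, attribute names, and qparams are illustrative stand-ins, not the actual generated TorchScript:
```python
import torch

class ToyOnDeviceQuantLinear(torch.nn.Module):
    """Hypothetical stand-in for a module after the on-device quant passes."""

    def __init__(self):
        super().__init__()
        self.weight = torch.randn(4, 8)
        self.bias = torch.zeros(4)
        self._packed_params = None  # created by quantize_forward, read by quantized_forward

    def quantize_forward(self):
        # quantize_<method-name>: only packed-param creation survives once
        # outputs are removed, DCE is run, and non-self inputs are dropped.
        scale, zero_point = 0.1, 0  # stand-ins for calculate_qparams results
        qweight = torch.quantize_per_tensor(self.weight, scale, zero_point, torch.qint8)
        self._packed_params = torch.ops.quantized.linear_prepack(qweight, self.bias)

    def quantized_forward(self, x):
        # quantized_<method-name>: packed-param SetAttr nodes removed, so only
        # the GetAttr plus the quantized op (here linear_dynamic) remain.
        return torch.ops.quantized.linear_dynamic(x, self._packed_params)

m = ToyOnDeviceQuantLinear()
m.quantize_forward()                                  # builds and stores the packed params
print(m.quantized_forward(torch.randn(2, 8)).shape)   # torch.Size([2, 4])
```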

Test Plan: To be written

Differential Revision: [D38771416](https://our.internmc.facebook.com/intern/diff/D38771416)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83571
Approved by: https://github.com/jerryzh168
2022-08-29 17:53:11 +00:00
30214aef2d [BE] irangefy (#62928)
Summary:
Replace plain `for` loops with `irange`-based loops. Also fix some unused-variable warnings in range-loop cases.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62928

Reviewed By: driazati

Differential Revision: D30171904

Pulled By: malfet

fbshipit-source-id: 1b437a0f7e3515f4a2e324f3450e93312f1933ae
2021-08-07 13:34:13 -07:00
b39b28ced3 irange-ify 10 (#62122)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/62122

Test Plan: Sandcastle

Reviewed By: malfet

Differential Revision: D29879694

fbshipit-source-id: 87cd8ab17061129c835d9f961b67587c84d181d1
2021-07-28 13:35:23 -07:00
b162d95e46 Fix a number of lint perf and safety issues in torch (#59897)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/59897

Test Plan: Sandcastle

Reviewed By: ngimel

Differential Revision: D29037012

fbshipit-source-id: 7c16286d5fc2b67964fb65f8374dfff4d1a7aefb
2021-06-15 13:14:51 -07:00
4cb534f92e Make PyTorch code-base clang-tidy compliant (#56892)
Summary:
This is an automatic change generated by the following script:
```
#!/usr/bin/env python3
from subprocess import check_output, check_call
import os

def get_compiled_files_list():
    import json
    with open("build/compile_commands.json") as f:
        data = json.load(f)
    files = [os.path.relpath(node['file']) for node in data]
    for idx, fname in enumerate(files):
        if fname.startswith('build/') and fname.endswith('.DEFAULT.cpp'):
            files[idx] = fname[len('build/'):-len('.DEFAULT.cpp')]
    return files

def run_clang_tidy(fname):
    # Run clang-tidy on one file and commit the NOLINT stubs it adds, if any.
    check_call(["python3", "tools/clang_tidy.py", "-c", "build", "-x", fname, "-s"])
    changes = check_output(["git", "ls-files", "-m"])
    if len(changes) == 0:
        return
    check_call(["git", "commit", "--all", "-m", f"NOLINT stubs for {fname}"])

def main():
    git_files = check_output(["git", "ls-files"]).decode("ascii").split("\n")
    compiled_files = get_compiled_files_list()
    for idx, fname in enumerate(git_files):
        if fname not in compiled_files:
            continue
        if fname.startswith("caffe2/contrib/aten/"):
            continue
        print(f"[{idx}/{len(git_files)}] Processing {fname}")
        run_clang_tidy(fname)

if __name__ == "__main__":
    main()
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56892

Reviewed By: H-Huang

Differential Revision: D27991944

Pulled By: malfet

fbshipit-source-id: 5415e1eb2c1b34319a4f03024bfaa087007d7179
2021-04-28 14:10:25 -07:00
26419815af Modernize for-loops (#52330)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/52330

Test Plan: Sandcastle

Reviewed By: mruberry

Differential Revision: D26001961

fbshipit-source-id: e75cc8f1a8d30917b4d55df9e1a3c7836c271820
2021-02-23 17:32:33 -08:00
958c208666 [quant] conv_transpose graph patterns (#45078)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45078

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D23821580

Pulled By: z-a-f

fbshipit-source-id: 813a4ef1bbc429720765d61791fe754b6678a334
2020-09-25 18:14:29 -07:00
ed8b08a3ba Update quantize_jit to handle new upsample overloads (#43407)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43407

ghstack-source-id: 110404846

Test Plan:
test_general_value_ops passes with D21209991 applied.
(Without this diff D21209991 breaks that test.)

Reviewed By: jerryzh168

Differential Revision: D23256503

fbshipit-source-id: 0f75e50a9f7fccb5b4325604319a5f76b42dfe5e
2020-08-24 13:33:47 -07:00
6bd46b583e [quant][graph] Add support for FP16 dynamic quant (#42222)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42222

This change adds the necessary passes to perform FP16 dynamic quantization.
Based on the dtype (torch.float16), we skip inserting observers for activations and only insert the Fp16Observer for weights.
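
A rough usage sketch of the graph-mode flow this enables, assuming the JIT quantization entry points of this era (`quantize_dynamic_jit` and `float16_dynamic_qconfig`); the toy module is illustrative:
```python
import torch
from torch.quantization import quantize_dynamic_jit, float16_dynamic_qconfig

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(16, 8)

    def forward(self, x):
        return self.fc(x)

scripted = torch.jit.script(M().eval())
# '' applies the qconfig to the whole module: activations stay fp32 (no
# observers inserted for them), while weights go through the fp16 observer path.
quantized = quantize_dynamic_jit(scripted, {'': float16_dynamic_qconfig})
print(quantized.graph)
```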

Test Plan:
python test/test_quantization.py TestQuantizeJitOps

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D22849220

fbshipit-source-id: 2c53594ecd2485e9e3dd0b380eceaf7c5ab5fc50
2020-07-31 12:33:53 -07:00
dde18041a6 [quant][graphmode] Refactor quantization patterns (#40894)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40894

Test Plan:
python test/test_quantization.py

Imported from OSS

Differential Revision: D22403901

fbshipit-source-id: e0bcf8a628c6a1acfe6fa10a52912360a619bc62
2020-07-08 10:36:25 -07:00
26543e6caf [quant][graphmode] FP16 quant support - Operator Fusion (#40710)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40710

Test Plan:
python test/test_quantization.py

Imported from OSS

Differential Revision: D22335975

fbshipit-source-id: 5c176bb6b9c300e1beb83df972149dd5a400b854
2020-07-01 14:15:53 -07:00
8f5b28674c [JIT] Remove dead store in quantization_patterns.h (#40724)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40724

Test Plan: Continuous integration.

Differential Revision: D22294600

Pulled By: SplitInfinity

fbshipit-source-id: 04546579273d8864d91c3c74a654aa75ba34ee45
2020-06-29 16:55:15 -07:00
0a19534dd2 [JIT] Remove dead store in quantization_patterns.h (#40623)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40623

Test Plan: Continuous integration.

Reviewed By: jerryzh168

Differential Revision: D22259209

fbshipit-source-id: 90c9e79e039100f2961195504bb81230bba5c5fe
2020-06-26 19:43:43 -07:00
e3a97688cc [quant][graphmode][fix] dequantize propagation for {add/mul}_scalar (#40596)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40596

Previously the fusion patterns for {add/mul}_scalar were inconsistent, since the op pattern
produces a non-quantized tensor while the op replacement graph produces a quantized tensor.

Test Plan: Imported from OSS

Differential Revision: D22251072

fbshipit-source-id: e16eb92cf6611578cca1ed8ebde961f8d0610137
2020-06-25 22:17:08 -07:00
ab8a99bd36 graph mode: add hardswish inplace handling (#40284)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40284

Adds graph mode handling for inplace hardswish, and test coverage for functional hardswish.

Test Plan:
```
python test/test_quantization.py TestQuantizeScriptPTSQOps.test_hardswish
```

Imported from OSS

Differential Revision: D22140628

fbshipit-source-id: 55a514f7dc1130d510f69ee4e611d7cb5e08d02e
2020-06-21 09:40:50 -07:00
c6dbfcaf9e quantized elu: graph mode handling (#40111)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40111

Adds graph mode handling for quantized elu.

Test Plan:
```
python test/test_quantization.py TestQuantizeScriptPTSQOps.test_elu
```

Imported from OSS

Differential Revision: D22075080

fbshipit-source-id: 37fb1b9e390f2a33d47cbd025157532379b6aa64
2020-06-21 09:40:48 -07:00
13d54c6471 quantized elu: require observation (#40100)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40100

ELU has a range of (-1, inf). In the original PR, which added
the quantized operator, we decided to pass the quantization params
from the input. However, it makes more sense to require observation
for this op.

This PR changes the API to require observation. The next PRs in this stack
will add the eager and graph mode handling.
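
A small numeric illustration of why observation helps here; the scale/zero_point values are made up for the example, and `MinMaxObserver` stands in for whatever observer the qconfig supplies:
```python
import torch

x = torch.linspace(-6, 6, steps=100)
y = torch.nn.functional.elu(x)          # output lands roughly in [-1, 6]

# Old behavior: reuse the *input* qparams (here chosen to cover [-6, 6]).
in_scale, in_zp = 12.0 / 255, 128
q_reused = torch.quantize_per_tensor(y, in_scale, in_zp, torch.quint8)

# New behavior: observe the *output* and compute qparams from it.
obs = torch.quantization.MinMaxObserver(dtype=torch.quint8)
obs(y)
out_scale, out_zp = obs.calculate_qparams()
q_observed = torch.quantize_per_tensor(y, float(out_scale), int(out_zp), torch.quint8)

print((q_reused.dequantize() - y).abs().max())    # larger error: range is wasted
print((q_observed.dequantize() - y).abs().max())  # smaller error with observation
```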

Test Plan:
```
python test/test_quantization.py TestQuantizedOps.test_qelu
```

Imported from OSS

Differential Revision: D22075083

fbshipit-source-id: 0ea0fd05a00cc7a5f122a2b1de09144bbd586f32
2020-06-21 09:38:28 -07:00
9da277c635 [quant][graphmodel] linear_relu (#40021)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40021

This replaces #36889 due to significant merge conflicts

Test Plan: Imported from OSS

Differential Revision: D22087061

Pulled By: z-a-f

fbshipit-source-id: 6a65cdd3c0c0c957968a9d017902fb6d03b58150
2020-06-19 23:32:54 -07:00
e04a611b91 [quant][graphmode] clang format changes (#40329)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40329

Test Plan: Imported from OSS

Differential Revision: D22149706

fbshipit-source-id: 3c07cb0c09a53a01fc69185943ddc409264a6ff5
2020-06-19 23:22:43 -07:00
37362fff66 graph mode: util for fusion of functions which require observation (#39413)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39413

This implements the request from
https://github.com/pytorch/pytorch/pull/39095.

This is WIP so we can align on the API; once it looks good,
the PR will be amended to apply to all relevant functions.

Test Plan:
```
python test/test_quantization.py TestQuantizeScriptPTSQOps.test_hardswish
```

Imported from OSS

Differential Revision: D21885263

fbshipit-source-id: 029339a99f8c50e45dd1dfb7fd89c20e3188720d
2020-06-18 10:21:20 -07:00
f42c948df5 [quant][graphmode] Support another use pattern of mean (#40038)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40038

Test Plan: Imported from OSS

Differential Revision: D22055696

fbshipit-source-id: 776196ce3d743deb8335d237bf5ef0fa67f7f26d
2020-06-16 18:37:21 -07:00
144e8dc5a3 [quant][graphmode] Use quantizedbatch_norm in graph mode (#39911)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39911

Test Plan: Imported from OSS

Differential Revision: D22012282

fbshipit-source-id: 98af55172cbeaa7080865d6533df21647a7cedfa
2020-06-16 00:58:11 -07:00
99084104b6 [quant][graphmode][refactor] isScalar check (#39892)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39892

Test Plan: Imported from OSS

Differential Revision: D22009856

fbshipit-source-id: fbc407499bcff0f25e44eedba3d6cd1225325c24
2020-06-12 10:53:35 -07:00
004aa089a6 [jit][subgraph_rewriter] Support list of filters (#39867)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39867

Support a list of filters in the subgraph rewriter; the rewrite will execute only
when the match passes all filter checks. This is useful for letting different matches
share the same filter.

Test Plan: Imported from OSS

Differential Revision: D22009855

fbshipit-source-id: 67aab8d6326b2011a9061397699dc62ee9ad4e2d
2020-06-12 08:24:49 -07:00
246d7bb41d [quant][graphmode] Quantizing traced modules (#39826)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39826

Expanding operator test coverage to traced modules

Test Plan: Imported from OSS

Differential Revision: D21991266

fbshipit-source-id: 73b1d94caa6ad41bb0d6cbde7ba0de343da3e7ff
2020-06-12 00:55:11 -07:00
9551fb22d6 [quant][graphmode] Preserve numerics in debug option for clamp ops (#39219)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39219

We don't model clamp ops correctly right now; this PR fixes that.

The reason is that the quantized clamp op quantizes the scalar arguments in the op implementation: https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp#L614-L617

So we'll need to model this explicitly in the IR.
When we see an `aten::dequantize - aten::clamp(%x, %min, %max)` pattern,
we first make a scalar tensor with `aten::scalar_tensor(%scalar, ...)`, then we quantize the tensor with the same quantization parameters as the input tensor of the `aten::clamp`, dequantize the tensor, and convert the dequantized tensor back to a scalar using `aten::item`.
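
A hedged Python rendering of that scalar handling; the qparams are arbitrary, and the helper just mirrors the `scalar_tensor -> quantize_per_tensor -> dequantize -> item` node sequence described above:
```python
import torch

# Quantization parameters carried by the input of aten::clamp (arbitrary here).
scale, zero_point = 0.05, 128

def requantize_scalar(s):
    # Mirrors the inserted IR: aten::scalar_tensor -> quantize with the input's
    # qparams -> aten::dequantize -> aten::item.
    t = torch.scalar_tensor(s)
    qt = torch.quantize_per_tensor(t, scale, zero_point, torch.quint8)
    return qt.dequantize().item()

# The bounds the quantized kernel effectively uses are the requantized scalars,
# not the raw Python floats: -1.0 and 1.0 round-trip exactly, 0.52 snaps to 0.5.
print(requantize_scalar(-1.0), requantize_scalar(1.0), requantize_scalar(0.52))
```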

Test Plan: Imported from OSS

Differential Revision: D21831350

fbshipit-source-id: d60731459a0465d64946aabc62065d25d92faefc
2020-06-08 17:15:39 -07:00
ebdff07d49 instancenorm: static quant graph mode support (#39096)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39096

Hooks up instancenorm for graph mode static quant

Test Plan:
```
python test/test_quantization.py TestQuantizeScriptPTSQOps.test_instance_norm
```

Imported from OSS

Differential Revision: D21885258

fbshipit-source-id: 650cc5b162dda044866176fea6c345082d9788ed
2020-06-07 13:38:28 -07:00
b443ca26c5 groupnorm: graph mode static quant support (#39095)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39095

Hooks up groupnorm to graph mode static quant

Test Plan:
```
python test/test_quantization.py TestQuantizeScriptPTSQOps.test_group_norm
```

Imported from OSS

Differential Revision: D21885257

fbshipit-source-id: 3415c4de76181b026d2f5bfebab130fea29e1d1e
2020-06-07 13:38:22 -07:00
e4627e5dba [quant][graphmode] Fix add_relu patterns for scripting and tracing (#39455)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39455

1. enable filters in PatternInfo
2. add aten_add_alpha_is_one filter
3. add is_functional_relu filter
4. add is_relu_module filter
5. fix the relu module method call matching in traced modules with regex
6. add aten::add - aten::relu patterns for traced modules

Test Plan: Imported from OSS

Differential Revision: D21917118

fbshipit-source-id: e67b55cd1c070fd4238f563d933a6f10a3582ae3
2020-06-06 23:51:34 -07:00
625f4e39a7 [quant] Fix fusion pattern for add_relu (#39367)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39367

We shouldn't match the `%alpha` argument, since it could be used by multiple functions.

Test Plan: Imported from OSS

Differential Revision: D21829295

fbshipit-source-id: 6daa320a4b56df4e142b8e02e04a3ecb36284d1b
2020-06-01 20:15:13 -07:00
9cacbe29e5 [quant] Add reduce_range argument for qlinear_dynamic (#39041)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39041

The reduce_range option restricts the activation tensor to 7 bits instead of 8.
This is necessary to enable per-channel quant for RNNs and LSTMs.
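
A small sketch of the effect, using the existing `reduce_range` flag on `MinMaxObserver` with an illustrative tensor:
```python
import torch
from torch.quantization import MinMaxObserver

x = torch.randn(1000) * 4

obs_full = MinMaxObserver(dtype=torch.quint8, reduce_range=False)
obs_reduced = MinMaxObserver(dtype=torch.quint8, reduce_range=True)
obs_full(x)
obs_reduced(x)

# With reduce_range=True the activation is mapped onto 7 bits (0..127 instead of
# 0..255), roughly doubling the scale; the spare headroom leaves room for the
# intermediate accumulation in the quantized kernels, which matters once weights
# are per-channel quantized.
print(obs_full.calculate_qparams())
print(obs_reduced.calculate_qparams())
```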

Test Plan:
python test/test_quantization.py TestDynamicQuantizedLinear

Imported from OSS

Reviewed By: akinh

Differential Revision: D21769691

fbshipit-source-id: ef0e9873367f3c1b34091b0b3af788233ef60c6c
2020-05-29 18:19:36 -07:00
a8d8fc5532 [quant][graphmode] Different rule for add/add_/mul/mul_ (#38667)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38667

Test Plan: Imported from OSS

Differential Revision: D21633555

fbshipit-source-id: 03b0298e83bf4dbda41b048c0edc7bb92cd4e1df
2020-05-20 19:43:46 -07:00
ec9b2f9a9d [quant][graphmode][refactor] Factor out getFixedQParamOpFusionInfo (#38359)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38359

Test Plan: Imported from OSS

Differential Revision: D21559807

fbshipit-source-id: 13a67049a189ca43dcdae4b42bab0847821b3cd5
2020-05-14 21:37:59 -07:00
ee52501976 [quant][graphmode][refactor] Factor out getInputTensorQParamOpFusionInfo (#38358)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38358

Test Plan: Imported from OSS

Differential Revision: D21559806

fbshipit-source-id: b243b811c5c5917f50a11ef5b26174baf46e683f
2020-05-14 19:59:09 -07:00
504637a171 [quant][graphmode] Support ops with fixed quantization parameters (#38278)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38278

Support ops like aten::hardsigmoid that have fixed quantization parameters (a quick numeric check follows the list below):
```
  constexpr float o_scale = 1.0f / 256.0f;
  constexpr int32_t o_zero_point = 0;
```

Ops supported:
- hardsigmoid
- sigmoid
- tanh
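
For instance, sigmoid's output always lies in [0, 1], so quint8 with scale 1/256 and zero_point 0 covers it without any observation; a rough check on made-up input:
```python
import torch

x = torch.randn(1000)
y = torch.sigmoid(x)  # always in [0, 1], regardless of the input distribution

# Fixed qparams: scale = 1/256, zero_point = 0 for quint8.
qy = torch.quantize_per_tensor(y, 1.0 / 256.0, 0, torch.quint8)
print((qy.dequantize() - y).abs().max())  # small quantization error, on the order of the scale
```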

Test Plan: Imported from OSS

Differential Revision: D21559811

fbshipit-source-id: 26f3c9c3389dea4f07b350172e2974fac8c5c470
2020-05-14 16:36:06 -07:00
8e732514cd [quant][graphmode] Add support for quantized conv1d + relu fusion (#38441)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38441

Test Plan:
python test/test_quantization.py test_quantized_conv1d_relu

Imported from OSS

Differential Revision: D21575919

fbshipit-source-id: d43e33052ce1be5e38acef8fac16f22cb11c0695
2020-05-14 16:09:46 -07:00
7ce733d218 [quant][graphmode] Move leaky_relu to general value op map (#38166)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38166

Test Plan: Imported from OSS

Differential Revision: D21559813

fbshipit-source-id: 8521f7ad2b0fcd6f87090fb40517d5d92c37ba54
2020-05-13 17:51:14 -07:00
16696186e1 [quant][graphmode] Move elu to general value ops map (#38165)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38165

Test Plan: Imported from OSS

Differential Revision: D21559812

fbshipit-source-id: 55bc28d71d0b8a1c33e05bce20a802db1015ea0b
2020-05-13 17:51:09 -07:00
98d78a7f20 [quant][graphmode] Move hardtanh to general value ops map (#38164)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38164

Test Plan: Imported from OSS

Differential Revision: D21559808

fbshipit-source-id: 7b00e40cfa58806ce8675a61073778c4d77f8a8b
2020-05-13 17:51:03 -07:00
1fde373f2f [quant][graphmode] Move clamp to general value ops map (#38163)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38163

Test Plan: Imported from OSS

Differential Revision: D21559805

fbshipit-source-id: db02bd17fbc6d1335fe021265955d02d52d139e6
2020-05-13 17:50:57 -07:00
e988b4fbb1 [quant][graphmode] Move interpolate to general value ops (#38162)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38162

Test Plan: Imported from OSS

Differential Revision: D21559810

fbshipit-source-id: 2d975fc71f73c18f594108172850dfcfdb0cb9a0
2020-05-13 17:49:08 -07:00
7d7d73655d [quant][graphmode] Add quantizedconv1d to graphmode (#38341)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38341

Test Plan:
python test/test_quantization.py TestQuantizeScriptPTSQOps.test_quantized_conv1d

Imported from OSS

Differential Revision: D21554256

fbshipit-source-id: baf78c7788a38acd9362204990f0b22c21263dfb
2020-05-13 16:59:24 -07:00
b668bbc404 [quant][graphmode][refactor] Factor out common parts of general value ops (#38161)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38161

Test Plan: Imported from OSS

Differential Revision: D21512972

fbshipit-source-id: 61425f7c51fe5972527432b74407486aa479d999
2020-05-13 14:17:45 -07:00
d403b85c00 [quant][graphmode] Move aten::mean to general value ops (#38160)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38160

Test Plan: Imported from OSS

Differential Revision: D21512971

fbshipit-source-id: 98cb1cc0eec5e7b140dcdf4e756bdbcd724b98f3
2020-05-13 11:39:22 -07:00
f2c6346ebe [quant][graphmode] Move avg_pool/adaptive_avg_pool to general value ops (#38330)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38330

Test Plan:
python test/test_quantization.py TestQuantizeScriptPTSQOps.test_quantize_general_value_ops

Imported from OSS

Differential Revision: D21533452

fbshipit-source-id: 56928d93624f7c3d5c61f2627a19c5d3bb595202
2020-05-13 09:22:24 -07:00
0ed7fc581c [quant][graphmode][refactor] Split quantization.cpp (#37975)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37975

Test Plan:
.

Imported from OSS

Differential Revision: D21468497

fbshipit-source-id: 35cbf98a344ca6e4094d616a4040eacf017fd2de
2020-05-08 12:24:50 -07:00