Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50748
Adds support for Linear + BatchNorm1d fusion to quantization.
This is a redo of dreiss's https://github.com/pytorch/pytorch/pull/37467; it was faster
to copy-paste it than to rebase and deal with conflicts.
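As a quick illustration (a minimal sketch with a toy two-layer model; nothing here is taken from the PR itself), the new pattern goes through the existing fuse_modules entry point:
```
import torch
import torch.nn as nn

# Toy eval-mode model; '0' and '1' are the names nn.Sequential assigns.
model = nn.Sequential(nn.Linear(16, 32), nn.BatchNorm1d(32))
model.eval()  # Linear + BatchNorm1d fusion is an eval-only fold
fused = torch.quantization.fuse_modules(model, [['0', '1']])
print(type(fused[0]))  # Linear with the BN stats folded into weight/bias
print(type(fused[1]))  # nn.Identity
```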
Test Plan:
```
python test/test_quantization.py TestFusion.test_fusion_linear_bn_eval
```
Imported from OSS
Reviewed By: supriyar
Differential Revision: D25957432
fbshipit-source-id: 24e5b760f70186aa953ef65ab0182770e89495e4
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43286
We need to use this in FX graph mode quantization.
Test Plan: Imported from OSS
Reviewed By: vkuzo
Differential Revision: D23221734
fbshipit-source-id: 7c3c3840ce5bdc185b962e081aff1618f4c58e85
Summary:
1. When doing convert(), preserve the module's **pre and post forward** hooks
2. When doing fusion, preserve only the module's **pre forward** hooks (because after fusion the output is no longer the same)
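A minimal sketch of the expected fusion behavior (the toy model and hook below are illustrative, not taken from the PR):
```
import torch
import torch.nn as nn

def pre_hook(module, inputs):
    # Pre-forward hooks are kept through fusion, so this should still fire.
    print('pre-forward hook on', type(module).__name__)

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU())
model.eval()
model[0].register_forward_pre_hook(pre_hook)
torch.quantization.fuse_modules(model, [['0', '1', '2']], inplace=True)
model(torch.randn(1, 3, 8, 8))  # hook now fires on the fused module
```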
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37233
Differential Revision: D22425141
Pulled By: jerryzh168
fbshipit-source-id: e69b81821d507dcd110d2ff3594ba94b9593c8da
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36173
Previously we were ignoring the conv bias during training if it existed.
This PR adds the bias from the conv op during the conv+bn fusion process.
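For context, the conv bias enters the standard Conv+BN folding math like this (a standalone sketch; the function and argument names are illustrative, not the internal helper):
```
import torch

def fold_conv_bn(conv_w, conv_b, bn_mean, bn_var, bn_eps, bn_w, bn_b):
    # BN(conv(x)) = scale * (W x + b - mean) + beta, with scale = gamma / std
    scale = bn_w / torch.sqrt(bn_var + bn_eps)
    fused_w = conv_w * scale.reshape(-1, 1, 1, 1)
    # The existing conv bias is folded in here instead of being dropped.
    fused_b = (conv_b - bn_mean) * scale + bn_b
    return fused_w, fused_b
```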
Test Plan:
python test/quantization/test_quantization.py
Imported from OSS
Differential Revision: D20921613
fbshipit-source-id: eacb2ccf9107f413ac4ef23163ba914af9b90924
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26457
Enhancement to module fusion to support Sequential containers; the fuse list can now use dotted names, just like the state dict keys.
Also adds support for Conv-ReLU and Linear-ReLU fusion.
Also supports in-place and out-of-place fusion of models.
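A minimal sketch of the new interface (the nested model below is made up for illustration):
```
import torch.nn as nn
from torch.quantization import fuse_modules

model = nn.Sequential(
    nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU()),
    nn.Sequential(nn.Linear(8, 4), nn.ReLU()),
)
model.eval()
# Fuse-list entries use dotted, state_dict-style names; inplace=False
# returns a fused copy and leaves `model` untouched.
fused = fuse_modules(model, [['0.0', '0.1', '0.2'], ['1.0', '1.1']], inplace=False)
```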
ghstack-source-id: 91076386
Test Plan:
buck test caffe2/test:quantization -- 'test_fusion_sequential_model_train \(test_quantization\.FusionTest\)' --print-passing-details
buck test caffe2/test:quantization -- 'test_fusion_sequential_model_eval \(test_quantization\.FusionTest\)' --print-passing-details
Differential Revision: D17466382
fbshipit-source-id: 0a548f8f4c366f3ecc59db693bac725ccd62328e
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23003
Adds torch.quantization.fuse_module and the torch.nn._intrinsic ConvRelu and LinearRelu modules.
Fusion function to combine specific modules: (conv,bn) and (conv,bn,relu).
In all cases, replace modules in place. The first module is replaced with the _intrinsic fused module and the remaining modules are replaced by nn.Identity.
Supports both training and eval. For training, the modules are "fused" with a sequential container, to allow further module swaps for quantization-aware training.
Also adds torch.nn._intrinsic modules for ConvRelu and LinearRelu.
TODO: Add tests for _intrinsic modules.
Conv-BN fusion code is based on DsKhudia's implementation.
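A minimal sketch of the replacement behavior in eval mode (toy model, not taken from the diff):
```
import torch.nn as nn
from torch.quantization import fuse_modules

m = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU())
m.eval()
fuse_modules(m, [['0', '1', '2']], inplace=True)
print(type(m[0]))              # fused _intrinsic ConvReLU-style module
print(type(m[1]), type(m[2]))  # both replaced by nn.Identity
```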
Differential Revision: D16199720
fbshipit-source-id: 95fb9ffe72b361d280313b2ec57de2acd4f9dda2