fx quant: enable linear-bn1d fusion for PTQ (#66484)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66484

https://github.com/pytorch/pytorch/pull/50748 added linear-bn1d fusion in Eager mode, for PTQ only. This PR enables the same fusion in FX graph mode. We reuse the existing conv-bn-relu fusion handler, renaming `conv` to `conv_or_linear` for readability. The QAT version is left for a future PR, for both Eager and FX graph mode.

Test Plan:
```
python test/test_quantization.py TestFuseFx.test_fuse_linear_bn_eval
```

Imported from OSS

Reviewed By: bdhirsh

Differential Revision: D31575392

fbshipit-source-id: f69d80ef37c98cbc070099170e335e250bcdf913
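For context, a minimal sketch of the behavior this commit enables: fusing an `nn.Linear` followed by `nn.BatchNorm1d` during FX graph mode PTQ. The `LinearBn1d` module below is a hypothetical example model (not from the PR), and `fuse_fx` is assumed here as the FX graph mode fusion entry point exercised by the test plan above; exact import paths may differ across PyTorch versions.

```python
# Hypothetical example (not from the PR) of linear + bn1d fusion in FX graph mode PTQ.
import torch
import torch.nn as nn
from torch.quantization.quantize_fx import fuse_fx

class LinearBn1d(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 4)
        self.bn = nn.BatchNorm1d(4)

    def forward(self, x):
        return self.bn(self.linear(x))

m = LinearBn1d().eval()   # PTQ only: the model must be in eval mode for this fusion
fused = fuse_fx(m)        # with this change, Linear + BatchNorm1d fold into a single Linear
print(fused.code)         # the traced forward should now call only self.linear
```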
Committed by: Facebook GitHub Bot
Parent: 9d287d0b63
Commit: d549c8de78
```diff
@@ -8,6 +8,6 @@ here.
 """
 from torch.ao.quantization.fx.fusion_patterns import (
     FuseHandler,
-    ConvBNReLUFusion,
+    ConvOrLinearBNReLUFusion,
     ModuleReLUFusion
 )
```