[quant][pt2e] Enable constant folding for quantize ops (#109343)

Summary:
This PR adds constant folding for quantize ops so that, instead of storing the fp32 weight in the
quantized model, we store the int8/int16 etc. weight.
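
As a rough illustration of what folding buys us, here is a minimal sketch using core eager-mode quantize APIs (the pt2e flow actually emits decomposed quantize ops; the `scale`/`zero_point` values below are made up):

```python
import torch

# Illustrative constant weight and quantization parameters.
fp32_weight = torch.randn(8, 8)
scale, zero_point = 0.02, 0

# Without folding: the model stores fp32_weight and the quantize op
# re-runs on it at every inference.
q = torch.quantize_per_tensor(fp32_weight, scale, zero_point, torch.qint8)

# With folding: the quantize op is evaluated once ahead of time, so the
# model stores only the integer values (plus scale/zero_point metadata).
int8_weight = q.int_repr()
assert int8_weight.dtype == torch.int8
```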

Test Plan:
python test/test_quantization.py TestQuantizePT2E.test_fold_quantize

We will also verify this in ExecuTorch later.
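
For reference, a hedged sketch of the kind of check the test performs (API names are from the pt2e flow of this era; the exact test body may differ):

```python
import torch
from torch._export import capture_pre_autograd_graph
from torch.ao.quantization.quantize_pt2e import prepare_pt2e, convert_pt2e
from torch.ao.quantization.quantizer.xnnpack_quantizer import (
    XNNPACKQuantizer,
    get_symmetric_quantization_config,
)

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 4)

    def forward(self, x):
        return self.linear(x)

example_inputs = (torch.randn(1, 4),)
m = capture_pre_autograd_graph(M(), example_inputs)
quantizer = XNNPACKQuantizer().set_global(get_symmetric_quantization_config())
m = prepare_pt2e(m, quantizer)
m(*example_inputs)  # calibrate
m = convert_pt2e(m)

# With folding enabled, the linear weight should be stored as an int8
# tensor rather than fp32.
assert any(
    isinstance(t, torch.Tensor) and t.dtype == torch.int8
    for t in m.state_dict().values()
)
```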

Differential Revision: [D49399210](https://our.internmc.facebook.com/intern/diff/D49399210)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109343
Approved by: https://github.com/kimishpatel, https://github.com/jgong5
Author: Jerry Zhang
Date: 2023-09-26 19:09:30 -07:00
Committed by: PyTorch MergeBot
Parent: 6138750ab1
Commit: 1b51d29b66
11 changed files with 359 additions and 148 deletions

test/test_quantization.py

@@ -83,6 +83,8 @@ except ImportError as e:
 try:
     # To be moved to compiler side later
     from quantization.pt2e.test_graph_utils import TestGraphUtils  # noqa: F401
+    from quantization.pt2e.test_duplicate_dq import TestDuplicateDQPass  # noqa: F401
+    from quantization.pt2e.test_metadata_porting import TestMetaDataPorting  # noqa: F401
     from quantization.pt2e.test_quantize_pt2e import TestQuantizePT2E  # noqa: F401
     from quantization.pt2e.test_quantize_pt2e import TestQuantizePT2EOps  # noqa: F401
     from quantization.pt2e.test_quantize_pt2e import TestQuantizePT2EModels  # noqa: F401