Files
pytorch/caffe2/python/helpers/quantization.py
Frank Seide 29f0e1e2ce Fused8BitRowwiseQuantizedToFloat operator support (#48407)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48407

T79817692: Fused8BitRowwiseQuantizedToFloat operator support for c2_pt_converter.

Also refactored some repeated code out of the existing test functions; the initial commit contains only that refactoring.
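
For context, a fused 8-bit rowwise quantized blob stores, for each row, the uint8 codes followed by a per-row float32 scale and bias, and dequantization recovers roughly code * scale + bias. The NumPy sketch below is illustrative only; the scale-then-bias layout in the trailing 8 bytes is an assumption of this note, not something stated in this diff, and it is not the operator's actual implementation.

import numpy as np

def dequantize_fused_8bit_rowwise(fused):
    """Illustrative reconstruction of what Fused8BitRowwiseQuantizedToFloat computes.

    Assumes `fused` is a 2-D uint8 array whose last 8 bytes per row hold the
    float32 scale and bias (in that order); the remaining bytes are the codes.
    """
    codes = fused[:, :-8].astype(np.float32)
    scale_bias = fused[:, -8:].copy().view(np.float32)  # per-row [scale, bias]
    scale = scale_bias[:, 0:1]
    bias = scale_bias[:, 1:2]
    return codes * scale + bias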

Test Plan: buck test //caffe2/torch/fb/model_transform/c2_convert:c2_pt_converter_test

Reviewed By: bugra

Differential Revision: D25069936

fbshipit-source-id: 72f6a845a1b4639b9542c6b230c8cd74b06bc5a0
2020-11-30 17:11:39 -08:00


# @package quantization
# Module caffe2.python.helpers.quantization
def fused_8bit_rowwise_quantized_to_float(model, blob_in, blob_out):
    """Add a Fused8BitRowwiseQuantizedToFloat op that converts the fused
    8-bit rowwise quantized blob_in back to a float blob_out."""
    return model.net.Fused8BitRowwiseQuantizedToFloat(blob_in, blob_out)
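
A minimal usage sketch for the helper above, assuming a working Caffe2 install; the blob names and the FloatToFused8BitRowwiseQuantized round-trip setup are illustrative and not part of this change.

import numpy as np
from caffe2.python import model_helper, workspace
from caffe2.python.helpers.quantization import fused_8bit_rowwise_quantized_to_float

model = model_helper.ModelHelper(name="dequant_example")

# Quantize a small float matrix into the fused 8-bit rowwise format.
workspace.FeedBlob("embedding", np.random.rand(4, 8).astype(np.float32))
model.net.FloatToFused8BitRowwiseQuantized(["embedding"], ["embedding_q"])

# Dequantize back to float via the helper defined in this file.
fused_8bit_rowwise_quantized_to_float(model, "embedding_q", "embedding_deq")

workspace.RunNetOnce(model.net)
print(workspace.FetchBlob("embedding_deq").shape)  # expected: (4, 8)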