pytorch/torch/quantization/fake_quantize.py
Vasiliy Kuznetsov 6101cbcedb torch.ao migration: fake_quantize.py, phase 1 (#64814)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64814

1. Move the file:
```
hg mv caffe2/torch/quantization/fake_quantize.py caffe2/torch/ao/quantization/
```

2. Create a new file in the old location that re-imports everything from the new one (the resulting compatibility shim is the file shown below).
3. Fix all call sites inside `torch` to import from the new location (a hypothetical before/after example follows this list).
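
For illustration, a call-site fix then looks like the following before/after pair (a hypothetical call site, not one of the actual edits in the PR):

```
# Before: imports from the old, deprecated location.
from torch.quantization.fake_quantize import FakeQuantize

# After: imports from the new torch.ao namespace.
from torch.ao.quantization.fake_quantize import FakeQuantize
```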

Test Plan:
```
buck test mode/dev //caffe2/test:quantization
```

Reviewed By: z-a-f

Differential Revision: D30866792

fbshipit-source-id: 7a221cb46c0ab01f1c5de9be061f09ecc83ce23e
2021-09-13 15:22:28 -07:00

# flake8: noqa: F401
# (F401 "imported but unused" is suppressed: every import below is a deliberate re-export.)
r"""
This file is in the process of migration to `torch/ao/quantization`, and
is kept here for compatibility while the migration process is ongoing.
If you are adding a new entry/functionality, please add it to
`torch/ao/quantization/fake_quantize.py`, and add a corresponding import
statement here.
"""
from torch.ao.quantization.fake_quantize import (
    # Internal helpers for checking qscheme properties
    _is_per_channel,
    _is_per_tensor,
    _is_symmetric_quant,
    # Fake-quantize module classes
    FakeQuantizeBase,
    FakeQuantize,
    FixedQParamsFakeQuantize,
    FusedMovingAvgObsFakeQuantize,
    # Default fake-quantize constructors
    default_fake_quant,
    default_weight_fake_quant,
    default_symmetric_fixed_qparams_fake_quant,
    default_affine_fixed_qparams_fake_quant,
    default_per_channel_weight_fake_quant,
    default_histogram_fake_quant,
    default_fused_act_fake_quant,
    default_fused_wt_fake_quant,
    default_fused_per_channel_wt_fake_quant,
    # Utilities for toggling fake-quant and observer state on modules
    _is_fake_quant_script_module,
    disable_fake_quant,
    enable_fake_quant,
    disable_observer,
    enable_observer,
)
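
Since the shim above simply re-imports every public name from `torch.ao.quantization.fake_quantize`, the old and new import paths resolve to the same objects during the migration window. A minimal sanity check (an illustrative sketch, not part of the PR):

```
import torch.quantization.fake_quantize as old_fq
import torch.ao.quantization.fake_quantize as new_fq

# The shim re-exports the same objects, so identity (not just equality) holds.
assert old_fq.FakeQuantize is new_fq.FakeQuantize
assert old_fq.default_fake_quant is new_fq.default_fake_quant
```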