pytorch/torch/quantization/observer.py
Charles David Hernandez f309f8fbd4 [quant] ao migration of observer and qconfig (#64982)
Summary:
(Had to recreate this diff so it wasn't dependent on the stack)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/64982

Migration of qconfig.py and observer.py to torch/ao/quantization, using the new test format.
ghstack-source-id: 138215256

Test Plan:
buck test mode/opt //caffe2/test:quantization

https://www.internalfb.com/intern/testinfra/testconsole/testrun/8444249354294701/

buck test mode/dev //caffe2/test:quantization -- TestAOMigrationQuantization

https://www.internalfb.com/intern/testinfra/testrun/3940649742829796

Reviewed By: z-a-f

Differential Revision: D30982534

fbshipit-source-id: 48d08969b1984311ceb036eac0877c811cd6add9
2021-09-16 10:33:16 -07:00


# flake8: noqa: F401
r"""
This file is in the process of migration to `torch/ao/quantization`, and
is kept here for compatibility while the migration process is ongoing.
If you are adding a new entry or functionality, please add it to
`torch/ao/quantization/observer.py` and add an import statement for it
here.
"""
from torch.ao.quantization.observer import (
    _PartialWrapper,
    _with_args,
    _with_callable_args,
    ABC,
    ObserverBase,
    _ObserverBase,
    MinMaxObserver,
    MovingAverageMinMaxObserver,
    PerChannelMinMaxObserver,
    MovingAveragePerChannelMinMaxObserver,
    HistogramObserver,
    PlaceholderObserver,
    RecordingObserver,
    NoopObserver,
    _is_activation_post_process,
    _is_per_channel_script_obs_instance,
    get_observer_state_dict,
    load_observer_state_dict,
    default_observer,
    default_placeholder_observer,
    default_debug_observer,
    default_weight_observer,
    default_histogram_observer,
    default_per_channel_weight_observer,
    default_dynamic_quant_observer,
    default_float_qparams_observer,
)
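The module above is a compatibility shim: the legacy path simply re-imports every public name from the new `torch/ao/quantization` location, so old import paths keep resolving to the very same objects. A minimal sketch of that pattern, using hypothetical module names (`ao_quantization_observer`, `legacy_observer`) and a stand-in class instead of the real torch modules so it runs without torch installed:

```python
import sys
import types

# Stand-in for the *new* module location (hypothetical name, used only
# to illustrate the shim pattern without requiring torch).
new_mod = types.ModuleType("ao_quantization_observer")

class MinMaxObserver:  # stand-in for the real observer class
    pass

new_mod.MinMaxObserver = MinMaxObserver
sys.modules["ao_quantization_observer"] = new_mod

# The *legacy* module re-exports the name from the new module, exactly
# as the file above does with `from torch.ao.quantization.observer import ...`.
legacy_mod = types.ModuleType("legacy_observer")
legacy_mod.MinMaxObserver = sys.modules["ao_quantization_observer"].MinMaxObserver
sys.modules["legacy_observer"] = legacy_mod

# Old code importing from the legacy path and new code importing from
# the new path both get the same class object.
from legacy_observer import MinMaxObserver as LegacyMinMax
from ao_quantization_observer import MinMaxObserver as NewMinMax

print(LegacyMinMax is NewMinMax)  # → True
```

Because the re-export binds the same object rather than copying it, `isinstance` checks and monkey-patches against either path stay consistent; the `# flake8: noqa: F401` at the top of the real file suppresses the "imported but unused" warnings these re-exports would otherwise trigger.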