Fixes #112632
Before: 171
```
torch/backends/_nnapi/prepare.py:24 in public method `__init__`:
D107: Missing docstring in __init__
torch/backends/_nnapi/prepare.py:46 in public method `init`:
D102: Missing docstring in public method
torch/backends/_nnapi/prepare.py:60 in public method `forward`:
D102: Missing docstring in public method
torch/backends/_nnapi/prepare.py:94 in public function `convert_model_to_nnapi`:
D103: Missing docstring in public function
torch/backends/_nnapi/prepare.py:153 in public function `process_for_nnapi`:
D103: Missing docstring in public function
torch/backends/_nnapi/prepare.py:177 in private nested class `ShapeComputeModule`:
D400: First line should end with a period (not 'n')
torch/backends/_nnapi/serializer.py:19 in public class `NNAPI_OperandCode`:
D101: Missing docstring in public class
torch/backends/_nnapi/serializer.py:35 in public class `NNAPI_OperationCode`:
D101: Missing docstring in public class
torch/backends/_nnapi/serializer.py:133 in public class `NNAPI_FuseCode`:
D101: Missing docstring in public class
torch/backends/_nnapi/serializer.py:140 in public class `OperandValueSourceType`:
D101: Missing docstring in public class
torch/backends/_nnapi/serializer.py:150 in public class `TorchScalarTypes`:
D101: Missing docstring in public class
torch/backends/_nnapi/serializer.py:154 in public function `approx_equal`:
D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:158 in public function `tensor_size`:
D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:172 in public function `change_element`:
D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:194 in public class `DimOrder`:
D101: Missing docstring in public class
torch/backends/_nnapi/serializer.py:225 in public method `use_nchw`:
D102: Missing docstring in public method
torch/backends/_nnapi/serializer.py:233 in public function `broadcast_shapes`:
D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:260 in public function `get_conv_pool_shape`:
D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:284 in public function `fix_shape`:
D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:301 in public function `reverse_map_dim`:
D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:312 in public function `flex_name`:
D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:1337 in private method `_do_add_binary`:
D400: First line should end with a period (not 's')
torch/backends/_nnapi/serializer.py:1337 in private method `_do_add_binary`:
D401: First line should be in imperative mood; try rephrasing (found 'Helper')
torch/backends/_nnapi/serializer.py:2180 in public function `serialize_model`:
D202: No blank lines allowed after function docstring (found 1)
torch/backends/_nnapi/serializer.py:2180 in public function `serialize_model`:
D205: 1 blank line required between summary line and description (found 0)
torch/backends/_nnapi/serializer.py:2180 in public function `serialize_model`:
D400: First line should end with a period (not ':')
torch/backends/cuda/__init__.py:1 at module level:
D104: Missing docstring in public package
torch/backends/cuda/__init__.py:30 in public function `is_built`:
D205: 1 blank line required between summary line and description (found 0)
torch/backends/cuda/__init__.py:30 in public function `is_built`:
D209: Multi-line docstring closing quotes should be on a separate line
torch/backends/cuda/__init__.py:30 in public function `is_built`:
D400: First line should end with a period (not 's')
torch/backends/cuda/__init__.py:30 in public function `is_built`:
D401: First line should be in imperative mood (perhaps 'Return', not 'Returns')
torch/backends/cuda/__init__.py:37 in public class `cuFFTPlanCacheAttrContextProp`:
D101: Missing docstring in public class
torch/backends/cuda/__init__.py:40 in public method `__init__`:
D107: Missing docstring in __init__
torch/backends/cuda/__init__.py:44 in public method `__get__`:
D105: Missing docstring in magic method
torch/backends/cuda/__init__.py:47 in public method `__set__`:
D105: Missing docstring in magic method
torch/backends/cuda/__init__.py:54 in public class `cuFFTPlanCache`:
D205: 1 blank line required between summary line and description (found 0)
torch/backends/cuda/__init__.py:54 in public class `cuFFTPlanCache`:
D400: First line should end with a period (not 'e')
torch/backends/cuda/__init__.py:60 in public method `__init__`:
D107: Missing docstring in __init__
torch/backends/cuda/__init__.py:73 in public method `clear`:
D102: Missing docstring in public method
torch/backends/cuda/__init__.py:78 in public class `cuFFTPlanCacheManager`:
D205: 1 blank line required between summary line and description (found 0)
torch/backends/cuda/__init__.py:78 in public class `cuFFTPlanCacheManager`:
D400: First line should end with a period (not ',')
torch/backends/cuda/__init__.py:89 in public method `__init__`:
D107: Missing docstring in __init__
torch/backends/cuda/__init__.py:93 in public method `__getitem__`:
D105: Missing docstring in magic method
torch/backends/cuda/__init__.py:106 in public method `__getattr__`:
D105: Missing docstring in magic method
torch/backends/cuda/__init__.py:109 in public method `__setattr__`:
D105: Missing docstring in magic method
torch/backends/cuda/__init__.py:116 in public class `cuBLASModule`:
D101: Missing docstring in public class
torch/backends/cuda/__init__.py:117 in public method `__getattr__`:
D105: Missing docstring in magic method
torch/backends/cuda/__init__.py:126 in public method `__setattr__`:
D105: Missing docstring in magic method
torch/backends/cuda/__init__.py:147 in public function `preferred_linalg_library`:
D202: No blank lines allowed after function docstring (found 1)
torch/backends/cuda/__init__.py:204 in public class `SDPBackend`:
D204: 1 blank line required after class docstring (found 0)
torch/backends/cudnn/__init__.py:1 at module level:
D104: Missing docstring in public package
torch/backends/cudnn/__init__.py:81 in public function `version`:
D400: First line should end with a period (not 'N')
torch/backends/cudnn/__init__.py:81 in public function `version`:
D401: First line should be in imperative mood (perhaps 'Return', not 'Returns')
torch/backends/cudnn/__init__.py:95 in public function `is_available`:
D401: First line should be in imperative mood (perhaps 'Return', not 'Returns')
torch/backends/cudnn/__init__.py:99 in public function `is_acceptable`:
D103: Missing docstring in public function
torch/backends/cudnn/__init__.py:122 in public function `set_flags`:
D103: Missing docstring in public function
torch/backends/cudnn/__init__.py:150 in public function `flags`:
D103: Missing docstring in public function
torch/backends/cudnn/__init__.py:174 in public class `CudnnModule`:
D101: Missing docstring in public class
torch/backends/cudnn/__init__.py:175 in public method `__init__`:
D107: Missing docstring in __init__
torch/backends/mkl/__init__.py:1 at module level:
D104: Missing docstring in public package
torch/backends/mkl/__init__.py:5 in public function `is_available`:
D401: First line should be in imperative mood (perhaps 'Return', not 'Returns')
torch/backends/mkl/__init__.py:14 in public class `verbose`:
D205: 1 blank line required between summary line and description (found 0)
torch/backends/mkl/__init__.py:14 in public class `verbose`:
D400: First line should end with a period (not 'y')
torch/backends/mkl/__init__.py:41 in public method `__init__`:
D107: Missing docstring in __init__
torch/backends/mkl/__init__.py:44 in public method `__enter__`:
D105: Missing docstring in magic method
torch/backends/mkl/__init__.py:53 in public method `__exit__`:
D105: Missing docstring in magic method
torch/backends/mkldnn/__init__.py:1 at module level:
D104: Missing docstring in public package
torch/backends/mkldnn/__init__.py:9 in public function `is_available`:
D401: First line should be in imperative mood (perhaps 'Return', not 'Returns')
torch/backends/mkldnn/__init__.py:19 in public class `verbose`:
D205: 1 blank line required between summary line and description (found 0)
torch/backends/mkldnn/__init__.py:19 in public class `verbose`:
D400: First line should end with a period (not 'y')
torch/backends/mkldnn/__init__.py:47 in public method `__init__`:
D107: Missing docstring in __init__
torch/backends/mkldnn/__init__.py:50 in public method `__enter__`:
D105: Missing docstring in magic method
torch/backends/mkldnn/__init__.py:59 in public method `__exit__`:
D105: Missing docstring in magic method
torch/backends/mkldnn/__init__.py:64 in public function `set_flags`:
D103: Missing docstring in public function
torch/backends/mkldnn/__init__.py:71 in public function `flags`:
D103: Missing docstring in public function
torch/backends/mkldnn/__init__.py:81 in public class `MkldnnModule`:
D101: Missing docstring in public class
torch/backends/mkldnn/__init__.py:82 in public method `__init__`:
D107: Missing docstring in __init__
torch/backends/openmp/__init__.py:1 at module level:
D104: Missing docstring in public package
torch/backends/openmp/__init__.py:5 in public function `is_available`:
D401: First line should be in imperative mood (perhaps 'Return', not 'Returns')
torch/nn/intrinsic/qat/modules/conv_fused.py:2 at module level:
D400: First line should end with a period (not 's')
torch/nn/intrinsic/qat/modules/linear_fused.py:2 at module level:
D400: First line should end with a period (not 's')
torch/nn/intrinsic/qat/modules/linear_relu.py:2 at module level:
D400: First line should end with a period (not 's')
torch/nn/qat/__init__.py:2 at module level:
D400: First line should end with a period (not 's')
torch/nn/qat/dynamic/__init__.py:2 at module level:
D400: First line should end with a period (not 's')
torch/nn/qat/dynamic/modules/linear.py:2 at module level:
D400: First line should end with a period (not 's')
torch/nn/qat/modules/__init__.py:2 at module level:
D400: First line should end with a period (not 's')
torch/nn/qat/modules/conv.py:2 at module level:
D400: First line should end with a period (not 's')
torch/nn/qat/modules/embedding_ops.py:2 at module level:
D400: First line should end with a period (not 's')
torch/nn/qat/modules/linear.py:2 at module level:
D400: First line should end with a period (not 's')
torch/nn/quantizable/modules/activation.py:2 at module level:
D400: First line should end with a period (not 's')
torch/nn/quantizable/modules/rnn.py:2 at module level:
D400: First line should end with a period (not 's')
torch/nn/quantized/_reference/modules/__init__.py:2 at module level:
D400: First line should end with a period (not 's')
torch/nn/quantized/_reference/modules/conv.py:2 at module level:
D400: First line should end with a period (not 's')
torch/nn/quantized/_reference/modules/linear.py:2 at module level:
D400: First line should end with a period (not 's')
torch/nn/quantized/_reference/modules/rnn.py:2 at module level:
D400: First line should end with a period (not 's')
torch/nn/quantized/_reference/modules/sparse.py:2 at module level:
D400: First line should end with a period (not 's')
torch/nn/quantized/_reference/modules/utils.py:2 at module level:
D400: First line should end with a period (not 's')
torch/nn/quantized/dynamic/modules/__init__.py:2 at module level:
D400: First line should end with a period (not 's')
torch/nn/quantized/dynamic/modules/conv.py:2 at module level:
D400: First line should end with a period (not 's')
torch/nn/quantized/dynamic/modules/linear.py:2 at module level:
D400: First line should end with a period (not 's')
torch/nn/quantized/dynamic/modules/rnn.py:2 at module level:
D400: First line should end with a period (not 's')
torch/nn/quantized/functional.py:1 at module level:
D400: First line should end with a period (not 'l')
torch/nn/quantized/modules/__init__.py:1 at module level:
D400: First line should end with a period (not 's')
torch/nn/quantized/modules/activation.py:2 at module level:
D400: First line should end with a period (not 's')
torch/nn/quantized/modules/batchnorm.py:2 at module level:
D400: First line should end with a period (not 's')
torch/nn/quantized/modules/conv.py:2 at module level:
D400: First line should end with a period (not 's')
torch/nn/quantized/modules/dropout.py:2 at module level:
D400: First line should end with a period (not 's')
torch/nn/quantized/modules/embedding_ops.py:2 at module level:
D400: First line should end with a period (not 's')
torch/nn/quantized/modules/functional_modules.py:2 at module level:
D400: First line should end with a period (not 's')
torch/nn/quantized/modules/linear.py:2 at module level:
D400: First line should end with a period (not 's')
torch/nn/quantized/modules/normalization.py:2 at module level:
D400: First line should end with a period (not 's')
torch/nn/quantized/modules/rnn.py:2 at module level:
D400: First line should end with a period (not 's')
torch/nn/quantized/modules/utils.py:2 at module level:
D400: First line should end with a period (not 's')
torch/nn/utils/_expanded_weights/conv_utils.py:13 in public function `conv_picker`:
D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:23 in public function `conv_args_and_kwargs`:
D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:31 in public function `conv_normalizer`:
D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:35 in public function `conv_input_for_string_padding`:
D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:43 in public function `int_padding_for_string_padding`:
D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:59 in public function `conv_padding_for_same`:
D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:66 in public function `conv_backward`:
D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:131 in public function `conv_unfold_weight_grad_sample`:
D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:166 in public function `conv_group_weight_grad_sample`:
D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:189 in public function `unfold3d`:
D202: No blank lines allowed after function docstring (found 1)
torch/nn/utils/_expanded_weights/conv_utils.py:189 in public function `unfold3d`:
D205: 1 blank line required between summary line and description (found 0)
torch/nn/utils/_expanded_weights/conv_utils.py:189 in public function `unfold3d`:
D401: First line should be in imperative mood (perhaps 'Extract', not 'Extracts')
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:6 in public function `is_batch_first`:
D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:19 in public function `standard_kwargs`:
D205: 1 blank line required between summary line and description (found 0)
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:19 in public function `standard_kwargs`:
D300: Use """triple double quotes""" (found '''-quotes)
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:19 in public function `standard_kwargs`:
D400: First line should end with a period (not 'e')
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:28 in public function `forward_helper`:
D205: 1 blank line required between summary line and description (found 0)
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:28 in public function `forward_helper`:
D300: Use """triple double quotes""" (found '''-quotes)
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:28 in public function `forward_helper`:
D400: First line should end with a period (not ')')
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:84 in public function `maybe_scale_by_batch_size`:
D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:90 in public function `set_grad_sample_if_exists`:
D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:108 in public function `unpack_expanded_weight_or_tensor`:
D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:123 in public function `sum_over_all_but_batch_and_last_n`:
D205: 1 blank line required between summary line and description (found 0)
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:123 in public function `sum_over_all_but_batch_and_last_n`:
D400: First line should end with a period (not 't')
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:123 in public function `sum_over_all_but_batch_and_last_n`:
D401: First line should be in imperative mood (perhaps 'Calculate', not 'Calculates')
torch/nn/utils/convert_parameters.py:1 at module level:
D100: Missing docstring in public module
torch/nn/utils/convert_parameters.py:57 in private function `_check_param_device`:
D202: No blank lines allowed after function docstring (found 1)
torch/nn/utils/convert_parameters.py:57 in private function `_check_param_device`:
D205: 1 blank line required between summary line and description (found 0)
torch/nn/utils/convert_parameters.py:57 in private function `_check_param_device`:
D400: First line should end with a period (not 'd')
torch/nn/utils/convert_parameters.py:57 in private function `_check_param_device`:
D401: First line should be in imperative mood; try rephrasing (found 'This')
torch/nn/utils/rnn.py:1 at module level:
D100: Missing docstring in public module
torch/nn/utils/rnn.py:28 in public class `PackedSequence`:
D204: 1 blank line required after class docstring (found 0)
torch/nn/utils/rnn.py:63 in public method `__new__`:
D102: Missing docstring in public method
torch/nn/utils/rnn.py:73 in public method `pin_memory`:
D102: Missing docstring in public method
torch/nn/utils/rnn.py:80 in public method `cuda`:
D102: Missing docstring in public method
torch/nn/utils/rnn.py:87 in public method `cpu`:
D102: Missing docstring in public method
torch/nn/utils/rnn.py:94 in public method `double`:
D102: Missing docstring in public method
torch/nn/utils/rnn.py:97 in public method `float`:
D102: Missing docstring in public method
torch/nn/utils/rnn.py:100 in public method `half`:
D102: Missing docstring in public method
torch/nn/utils/rnn.py:103 in public method `long`:
D102: Missing docstring in public method
torch/nn/utils/rnn.py:106 in public method `int`:
D102: Missing docstring in public method
torch/nn/utils/rnn.py:109 in public method `short`:
D102: Missing docstring in public method
torch/nn/utils/rnn.py:112 in public method `char`:
D102: Missing docstring in public method
torch/nn/utils/rnn.py:115 in public method `byte`:
D102: Missing docstring in public method
torch/nn/utils/rnn.py:119 in public method `to`:
D202: No blank lines allowed after function docstring (found 1)
torch/nn/utils/rnn.py:119 in public method `to`:
D401: First line should be in imperative mood (perhaps 'Perform', not 'Performs')
torch/nn/utils/rnn.py:146 in public method `is_cuda`:
D400: First line should end with a period (not 'u')
torch/nn/utils/rnn.py:150 in public method `is_pinned`:
D400: First line should end with a period (not 'y')
torch/nn/utils/rnn.py:150 in public method `is_pinned`:
D401: First line should be in imperative mood (perhaps 'Return', not 'Returns')
torch/nn/utils/rnn.py:198 in public function `invert_permutation`:
D103: Missing docstring in public function
torch/nn/utils/rnn.py:274 in public function `pad_packed_sequence`:
D401: First line should be in imperative mood (perhaps 'Pad', not 'Pads')
torch/nn/utils/rnn.py:347 in public function `pad_sequence`:
D202: No blank lines allowed after function docstring (found 1)
torch/nn/utils/rnn.py:347 in public function `pad_sequence`:
D400: First line should end with a period (not '`')
torch/nn/utils/rnn.py:408 in public function `unpad_sequence`:
D202: No blank lines allowed after function docstring (found 1)
torch/nn/utils/rnn.py:408 in public function `unpad_sequence`:
D400: First line should end with a period (not 's')
torch/nn/utils/rnn.py:454 in public function `pack_sequence`:
D400: First line should end with a period (not 's')
torch/nn/utils/rnn.py:490 in public function `unpack_sequence`:
D202: No blank lines allowed after function docstring (found 1)
torch/nn/utils/rnn.py:490 in public function `unpack_sequence`:
D400: First line should end with a period (not 's')
171
```
After: 81
```
torch/backends/_nnapi/prepare.py:24 in public method `__init__`:
D107: Missing docstring in __init__
torch/backends/_nnapi/prepare.py:46 in public method `init`:
D102: Missing docstring in public method
torch/backends/_nnapi/prepare.py:60 in public method `forward`:
D102: Missing docstring in public method
torch/backends/_nnapi/prepare.py:94 in public function `convert_model_to_nnapi`:
D103: Missing docstring in public function
torch/backends/_nnapi/prepare.py:153 in public function `process_for_nnapi`:
D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:19 in public class `NNAPI_OperandCode`:
D101: Missing docstring in public class
torch/backends/_nnapi/serializer.py:35 in public class `NNAPI_OperationCode`:
D101: Missing docstring in public class
torch/backends/_nnapi/serializer.py:133 in public class `NNAPI_FuseCode`:
D101: Missing docstring in public class
torch/backends/_nnapi/serializer.py:140 in public class `OperandValueSourceType`:
D101: Missing docstring in public class
torch/backends/_nnapi/serializer.py:150 in public class `TorchScalarTypes`:
D101: Missing docstring in public class
torch/backends/_nnapi/serializer.py:154 in public function `approx_equal`:
D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:158 in public function `tensor_size`:
D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:172 in public function `change_element`:
D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:194 in public class `DimOrder`:
D101: Missing docstring in public class
torch/backends/_nnapi/serializer.py:225 in public method `use_nchw`:
D102: Missing docstring in public method
torch/backends/_nnapi/serializer.py:233 in public function `broadcast_shapes`:
D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:260 in public function `get_conv_pool_shape`:
D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:284 in public function `fix_shape`:
D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:301 in public function `reverse_map_dim`:
D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:312 in public function `flex_name`:
D103: Missing docstring in public function
torch/backends/cuda/__init__.py:1 at module level:
D104: Missing docstring in public package
torch/backends/cuda/__init__.py:39 in public class `cuFFTPlanCacheAttrContextProp`:
D101: Missing docstring in public class
torch/backends/cuda/__init__.py:42 in public method `__init__`:
D107: Missing docstring in __init__
torch/backends/cuda/__init__.py:46 in public method `__get__`:
D105: Missing docstring in magic method
torch/backends/cuda/__init__.py:49 in public method `__set__`:
D105: Missing docstring in magic method
torch/backends/cuda/__init__.py:63 in public method `__init__`:
D107: Missing docstring in __init__
torch/backends/cuda/__init__.py:76 in public method `clear`:
D102: Missing docstring in public method
torch/backends/cuda/__init__.py:91 in public method `__init__`:
D107: Missing docstring in __init__
torch/backends/cuda/__init__.py:95 in public method `__getitem__`:
D105: Missing docstring in magic method
torch/backends/cuda/__init__.py:108 in public method `__getattr__`:
D105: Missing docstring in magic method
torch/backends/cuda/__init__.py:111 in public method `__setattr__`:
D105: Missing docstring in magic method
torch/backends/cuda/__init__.py:118 in public class `cuBLASModule`:
D101: Missing docstring in public class
torch/backends/cuda/__init__.py:119 in public method `__getattr__`:
D105: Missing docstring in magic method
torch/backends/cuda/__init__.py:128 in public method `__setattr__`:
D105: Missing docstring in magic method
torch/backends/cudnn/__init__.py:1 at module level:
D104: Missing docstring in public package
torch/backends/cudnn/__init__.py:99 in public function `is_acceptable`:
D103: Missing docstring in public function
torch/backends/cudnn/__init__.py:122 in public function `set_flags`:
D103: Missing docstring in public function
torch/backends/cudnn/__init__.py:150 in public function `flags`:
D103: Missing docstring in public function
torch/backends/cudnn/__init__.py:174 in public class `CudnnModule`:
D101: Missing docstring in public class
torch/backends/cudnn/__init__.py:175 in public method `__init__`:
D107: Missing docstring in __init__
torch/backends/mkl/__init__.py:1 at module level:
D104: Missing docstring in public package
torch/backends/mkl/__init__.py:42 in public method `__init__`:
D107: Missing docstring in __init__
torch/backends/mkl/__init__.py:45 in public method `__enter__`:
D105: Missing docstring in magic method
torch/backends/mkl/__init__.py:54 in public method `__exit__`:
D105: Missing docstring in magic method
torch/backends/mkldnn/__init__.py:1 at module level:
D104: Missing docstring in public package
torch/backends/mkldnn/__init__.py:48 in public method `__init__`:
D107: Missing docstring in __init__
torch/backends/mkldnn/__init__.py:51 in public method `__enter__`:
D105: Missing docstring in magic method
torch/backends/mkldnn/__init__.py:60 in public method `__exit__`:
D105: Missing docstring in magic method
torch/backends/mkldnn/__init__.py:65 in public function `set_flags`:
D103: Missing docstring in public function
torch/backends/mkldnn/__init__.py:72 in public function `flags`:
D103: Missing docstring in public function
torch/backends/mkldnn/__init__.py:82 in public class `MkldnnModule`:
D101: Missing docstring in public class
torch/backends/mkldnn/__init__.py:83 in public method `__init__`:
D107: Missing docstring in __init__
torch/backends/openmp/__init__.py:1 at module level:
D104: Missing docstring in public package
torch/nn/utils/_expanded_weights/conv_utils.py:13 in public function `conv_picker`:
D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:23 in public function `conv_args_and_kwargs`:
D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:31 in public function `conv_normalizer`:
D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:35 in public function `conv_input_for_string_padding`:
D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:43 in public function `int_padding_for_string_padding`:
D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:59 in public function `conv_padding_for_same`:
D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:66 in public function `conv_backward`:
D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:131 in public function `conv_unfold_weight_grad_sample`:
D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:166 in public function `conv_group_weight_grad_sample`:
D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:6 in public function `is_batch_first`:
D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:87 in public function `maybe_scale_by_batch_size`:
D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:93 in public function `set_grad_sample_if_exists`:
D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:111 in public function `unpack_expanded_weight_or_tensor`:
D103: Missing docstring in public function
torch/nn/utils/convert_parameters.py:1 at module level:
D100: Missing docstring in public module
torch/nn/utils/rnn.py:1 at module level:
D100: Missing docstring in public module
torch/nn/utils/rnn.py:64 in public method `__new__`:
D102: Missing docstring in public method
torch/nn/utils/rnn.py:74 in public method `pin_memory`:
D102: Missing docstring in public method
torch/nn/utils/rnn.py:81 in public method `cuda`:
D102: Missing docstring in public method
torch/nn/utils/rnn.py:88 in public method `cpu`:
D102: Missing docstring in public method
torch/nn/utils/rnn.py:95 in public method `double`:
D102: Missing docstring in public method
torch/nn/utils/rnn.py:98 in public method `float`:
D102: Missing docstring in public method
torch/nn/utils/rnn.py:101 in public method `half`:
D102: Missing docstring in public method
torch/nn/utils/rnn.py:104 in public method `long`:
D102: Missing docstring in public method
torch/nn/utils/rnn.py:107 in public method `int`:
D102: Missing docstring in public method
torch/nn/utils/rnn.py:110 in public method `short`:
D102: Missing docstring in public method
torch/nn/utils/rnn.py:113 in public method `char`:
D102: Missing docstring in public method
torch/nn/utils/rnn.py:116 in public method `byte`:
D102: Missing docstring in public method
torch/nn/utils/rnn.py:198 in public function `invert_permutation`:
D103: Missing docstring in public function
81
```
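For illustration, a hypothetical before/after mirroring the D400/D401 messages above (the real edits span the files listed):
```
# Before (triggers D400: no period, and D401: not imperative mood):
def is_built():
    """Returns whether PyTorch is built with CUDA support"""

# After:
def is_built():
    """Return whether PyTorch is built with CUDA support."""
```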
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112695
Approved by: https://github.com/mikaylagawarecki
Some of the subpackages were not included in `torch.nn.quantized`.
That would cause some specific cases to fail.
For example, `from torch.nn.quantized import dynamic` would work,
but `import torch; torch.nn.quantized.dynamic` would fail.
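A minimal sketch of the likely fix, assuming the usual pattern of re-exporting subpackages from the package `__init__.py`:
```
# torch/nn/quantized/__init__.py (sketch; the actual file contains more)
from . import dynamic  # noqa: F401
# After this import, `import torch` followed by `torch.nn.quantized.dynamic`
# resolves, because the subpackage is bound as an attribute of the package.
```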
Fixes #ISSUE_NUMBER
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84141
Approved by: https://github.com/andrewor14
Context: In order to avoid cluttering the `torch.nn` namespace,
the quantized modules namespace is moved to `torch.ao.nn`.
The list of the `nn.quantized` files that are being migrated:
- [ ] `torch.nn.quantized` → `torch.ao.nn.quantized`
- [X] `torch.nn.quantized.functional` → `torch.ao.nn.quantized.functional`
- [X] `torch.nn.quantized.modules` → `torch.ao.nn.quantized.modules`
- [X] `torch.nn.quantized.dynamic` → `torch.ao.nn.quantized.dynamic`
- [X] [Current PR] `torch.nn.quantized._reference` → `torch.ao.nn.quantized._reference`
- [ ] `torch.nn.quantizable` → `torch.ao.nn.quantizable`
- [ ] `torch.nn.qat` → `torch.ao.nn.qat`
- [ ] `torch.nn.qat.modules` → `torch.ao.nn.qat.modules`
- [ ] `torch.nn.qat.dynamic` → `torch.ao.nn.qat.dynamic`
- [ ] `torch.nn.intrinsic` → `torch.ao.nn.intrinsic`
- [ ] `torch.nn.intrinsic.modules` → `torch.ao.nn.intrinsic.modules`
- [ ] `torch.nn.intrinsic.qat` → `torch.ao.nn.intrinsic.qat`
- [ ] `torch.nn.intrinsic.quantized` → `torch.ao.nn.intrinsic.quantized`
- [ ] `torch.nn.intrinsic.quantized.modules` → `torch.ao.nn.intrinsic.quantized.modules`
- [ ] `torch.nn.intrinsic.quantized.dynamic` → `torch.ao.nn.intrinsic.quantized.dynamic`
The majority of the files are just moved to the new location.
However, specific files need to be double-checked:
- None
Differential Revision: [D36860927](https://our.internmc.facebook.com/intern/diff/D36860927/)
**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D36860927/)!
Differential Revision: [D36860927](https://our.internmc.facebook.com/intern/diff/D36860927)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78715
Approved by: https://github.com/jerryzh168
Context: In order to avoid cluttering the `torch.nn` namespace,
the quantized modules namespace is moved to `torch.ao.nn`.
The list of the `nn.quantized` files that are being migrated:
- [ ] `torch.nn.quantized` → `torch.ao.nn.quantized`
- [X] `torch.nn.quantized.functional` → `torch.ao.nn.quantized.functional`
- [X] `torch.nn.quantized.modules` → `torch.ao.nn.quantized.modules`
- [X] [Current PR] `torch.nn.quantized.dynamic` → `torch.ao.nn.quantized.dynamic`
- [ ] `torch.nn.quantized._reference` → `torch.ao.nn.quantized._reference`
- [ ] `torch.nn.quantizable` → `torch.ao.nn.quantizable`
- [ ] `torch.nn.qat` → `torch.ao.nn.qat`
- [ ] `torch.nn.qat.modules` → `torch.ao.nn.qat.modules`
- [ ] `torch.nn.qat.dynamic` → `torch.ao.nn.qat.dynamic`
- [ ] `torch.nn.intrinsic` → `torch.ao.nn.intrinsic`
- [ ] `torch.nn.intrinsic.modules` → `torch.ao.nn.intrinsic.modules`
- [ ] `torch.nn.intrinsic.qat` → `torch.ao.nn.intrinsic.qat`
- [ ] `torch.nn.intrinsic.quantized` → `torch.ao.nn.intrinsic.quantized`
- [ ] `torch.nn.intrinsic.quantized.modules` → `torch.ao.nn.intrinsic.quantized.modules`
- [ ] `torch.nn.intrinsic.quantized.dynamic` → `torch.ao.nn.intrinsic.quantized.dynamic`
The majority of the files are just moved to the new location.
However, specific files need to be double-checked:
- [Documentation](docs/source/quantization-support.rst) @vkuzo
- [Public API test list](test/allowlist_for_publicAPI.json) @peterbell10
- [BC test](test/quantization/bc/test_backward_compatibility.py) @vkuzo
- [IR emitter](torch/csrc/jit/frontend/ir_emitter.cpp) @jamesr66a
- [JIT serialization](torch/csrc/jit/serialization/import_source.cpp) @IvanKobzarev @jamesr66a
Differential Revision: [D36860660](https://our.internmc.facebook.com/intern/diff/D36860660/)
**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D36860660/)!
Differential Revision: [D36860660](https://our.internmc.facebook.com/intern/diff/D36860660)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78714
Approved by: https://github.com/jerryzh168
Fix use-dict-literal pylint suggestions by changing `dict()` to `{}`. This PR makes the change in every Python file except test/jit/test_list_dict.py, where I think the intent is to test the constructor.
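An illustration of the change (hypothetical config values):
```
# Before: pylint's use-dict-literal flags the dict() constructor call.
qconfig = dict(dtype="qint8", reduce_range=False)

# After: the equivalent dict literal, which also avoids a global name
# lookup and a function call.
qconfig = {"dtype": "qint8", "reduce_range": False}
```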
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83718
Approved by: https://github.com/albanD
Context: In order to avoid cluttering the `torch.nn` namespace,
the quantized modules namespace is moved to `torch.ao.nn`.
The list of the `nn.quantized` files that are being migrated:
- [ ] `torch.nn.quantized` → `torch.ao.nn.quantized`
- [X] [Current PR] `torch.nn.quantized.functional` → `torch.ao.nn.quantized.functional`
- [ ] `torch.nn.quantized.modules` → `torch.ao.nn.quantized.modules`
- [ ] `torch.nn.quantized.dynamic` → `torch.ao.nn.quantized.dynamic`
- [ ] `torch.nn.quantized._reference` → `torch.ao.nn.quantized._reference`
- [ ] `torch.nn.quantizable` → `torch.ao.nn.quantizable`
- [ ] `torch.nn.qat` → `torch.ao.nn.qat`
- [ ] `torch.nn.qat.modules` → `torch.ao.nn.qat.modules`
- [ ] `torch.nn.qat.dynamic` → `torch.ao.nn.qat.dynamic`
- [ ] `torch.nn.intrinsic` → `torch.ao.nn.intrinsic`
- [ ] `torch.nn.intrinsic.modules` → `torch.ao.nn.intrinsic.modules`
- [ ] `torch.nn.intrinsic.qat` → `torch.ao.nn.intrinsic.qat`
- [ ] `torch.nn.intrinsic.quantized` → `torch.ao.nn.intrinsic.quantized`
- [ ] `torch.nn.intrinsic.quantized.modules` → `torch.ao.nn.intrinsic.quantized.modules`
- [ ] `torch.nn.intrinsic.quantized.dynamic` → `torch.ao.nn.intrinsic.quantized.dynamic`
The majority of the files are just moved to the new location.
However, specific files need to be double-checked:
- [Documentation](docs/source/quantization-support.rst) @vkuzo
- [Public API test list](test/allowlist_for_publicAPI.json) @peterbell10
Differential Revision: [D36792967](https://our.internmc.facebook.com/intern/diff/D36792967/)
**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D36792967/)!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78712
Approved by: https://github.com/jerryzh168
Summary: Until we add quant_{min, max} args to `torch.quantize_per_{channel, tensor}`, this patch makes sure we honor the observer's restrictions on quantized values.
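A hedged illustration of the idea, with assumed qparams and using the private `torch._make_per_tensor_quantized_tensor` helper:
```
import torch

scale, zero_point = 0.1, 0
quant_min, quant_max = 0, 127  # e.g. a reduced-range quint8 observer
q = torch.quantize_per_tensor(torch.randn(4), scale, zero_point, torch.quint8)
# Clamp the integer representation to the observer's quant_min/quant_max so
# the quantized values stay within the observer's allowed range.
int_repr = q.int_repr().clamp(quant_min, quant_max)
q_restricted = torch._make_per_tensor_quantized_tensor(int_repr, scale, zero_point)
```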
Test Plan: Added new tests, run with - `buck run caffe2/test:quantization -- quantization.core.test_utils`
Differential Revision: D38624119
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83438
Approved by: https://github.com/andrewor14
This is a new version of #15648 based on the latest master branch.
Unlike the previous PR where I fixed a lot of the doctests in addition to integrating xdoctest, I'm going to reduce the scope here. I'm simply going to integrate xdoctest, and then I'm going to mark all of the failing tests as "SKIP". This will let xdoctest run on the dashboards, provide some value, and still let the dashboards pass. I'll leave fixing the doctests themselves to another PR.
In my initial commit, I do the bare minimum to get something running with failing dashboards. The few tests that I marked as skip are causing segfaults. Running xdoctest results in 293 failed, 201 passed tests. The next commits will be to disable those tests. (unfortunately I don't have a tool that will insert the `#xdoctest: +SKIP` directive over every failing test, so I'm going to do this mostly manually.)
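For reference, this is roughly how the skip directive is placed inside a docstring (hypothetical function):
```
def to_cuda(x):
    """Move a tensor to CUDA.

    Example:
        >>> # xdoctest: +SKIP("requires CUDA")
        >>> import torch
        >>> to_cuda(torch.ones(2))
    """
    return x.cuda()
```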
Fixes https://github.com/pytorch/pytorch/issues/71105
@ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82797
Approved by: https://github.com/ezyang
Add prelu op and module for quantized CPU backend.
The PR includes:
- Quantized version of prelu op
- Native prelu kernel for quantized CPU
- Prelu modules in `nn` and `nn.quantized`
- FX support for prelu
- Unit tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73491
Approved by: https://github.com/jerryzh168
Summary:
Fixes: https://github.com/pytorch/pytorch/issues/78117
Fixes: https://github.com/pytorch/pytorch/issues/73463
This PR adds a normalization pass that normalizes all the args to keyword args in positional order, and fixes lowering code that previously
used only node.args to use both args and kwargs instead.
I also tried to add a test for F.conv2d, but since conv2d matches multiple schemas we do an extra schema match, and because we use symbolic values
in `transform`, we don't have a schema match, so F.conv2d still fails with runtime errors. We can resolve this issue later when there is a need.
Another thing I'm considering is doing the normalization with real inputs instead of symbolic inputs, relying not on operator_schemas (which is based on TorchScript)
but on inspect.signature. I tried this briefly but didn't get too far; it looks like we cannot get the Python signature for `torch._C._nn.linear`. It might be possible to fix that as well, but it will need follow-up discussions.
The goal for this PR is just to introduce normalization in our codebase so that we can adapt some downstream code to this, and also fix the F.linear issue.
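A conceptual sketch of arg-to-kwarg normalization on a plain Python function (not the PR's implementation, which goes through operator_schemas):
```
import inspect

def normalize_to_kwargs(fn, args, kwargs):
    # Bind positional args to parameter names so downstream code can rely
    # on kwargs alone, in positional order.
    bound = inspect.signature(fn).bind(*args, **kwargs)
    bound.apply_defaults()
    return dict(bound.arguments)

def fake_linear(input, weight, bias=None):
    return None

print(normalize_to_kwargs(fake_linear, ("x", "w"), {}))
# {'input': 'x', 'weight': 'w', 'bias': None}
```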
Test Plan:
python test/test_quantization.py TestQuantizeFx.test_normalize_args
Differential Revision: [D37163228](https://our.internmc.facebook.com/intern/diff/D37163228)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79095
Approved by: https://github.com/andrewor14
Summary:
Some of the util functions in FX graph mode quantization throw warnings
such as:
```
/Users/vasiliy/pytorch/torch/ao/quantization/fx/utils.py:410: UserWarning: To copy construct from
a tensor, it is recommended to use sourceTensor.clone().detach() or
sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
```
This PR fixes the warnings by moving the code to the recommended syntax if the
value is a tensor.
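The pattern the warning asks for, shown on a plain tensor:
```
import torch

t = torch.randn(3)
w = torch.tensor(t)     # warns: copy-constructing from a tensor
w = t.clone().detach()  # recommended equivalent, no warning
```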
Test plan:
```
python test/test_quantization.py -k test_conv_linear_reference
// warning appeared before this PR and disappeared after this PR
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80883
Approved by: https://github.com/jerryzh168
The nn.MultiheadAttention is quantized through the custom module mechanism, which uses the nn.quantizable.MultiheadAttention for both the observed and quantized paths. This is potentially a source of confusion. This PR creates a `quantized.MultiheadAttention` class, which completely takes the quantized path. Note that after this, the old usage will throw an error.
New way of using it:
```
>>> custom_module_config = {
... 'float_to_observed_custom_module_class': {
... nn.MultiheadAttention: nn.quantizable.MultiheadAttention,
... },
... 'observed_to_quantized_custom_module_class': {
... nn.quantizable.MultiheadAttention: nn.quantized.MultiheadAttention,
... }
... }
>>> tq.prepare(model, prepare_custom_module_class=custom_module_config)
>>> tq.convert(model, convert_custom_module_class=custom_module_config)
```
Due to weird CI issues with the previous PR, the old discussion can be found at: https://github.com/pytorch/pytorch/pull/71190
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79956
Approved by: https://github.com/z-a-f
The nn.LSTM is quantized through the custom module mechanism, which uses the nn.quantizable.LSTM for both the observed and quantized paths. This is potentially a source of confusion. This PR creates a `quantized.LSTM` class, which completely takes the quantized path. Note that after this, the old usage will throw an error.
New way of using it:
```
>>> custom_module_config = {
... 'float_to_observed_custom_module_class': {
... nn.LSTM: nn.quantizable.LSTM,
... },
... 'observed_to_quantized_custom_module_class': {
... nn.quantizable.LSTM: nn.quantized.LSTM,
... }
... }
>>> tq.prepare(model, prepare_custom_module_class=custom_module_config)
>>> tq.convert(model, convert_custom_module_class=custom_module_config)
```
Due to weird CI issues with the previous PR, the old discussion can be found at: https://github.com/pytorch/pytorch/pull/71189
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79959
Approved by: https://github.com/z-a-f
In general, if we expect users to use a base class
such as `_ConvNd`, we should rename it to something like
`BaseConv`. However, because this base class is only used inside the
AO packages, there is no need to expose it to users.
Test Plan:
```
python test/test_quantization.py
python test/test_module_init.py
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77344
Approved by: https://github.com/jerryzh168
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74846
This PR primarily allows the PTQ convert function to work with
parametrized modules. Given that the parametrized weight is what is used
by default in convert, as long as sparsifier.step() has already been
called, the converted model will use the sparsified weights. There is
currently no way to handle things if sparsifier.step() has not been
called. Lastly, this adds the is_leaf_or_only_parametrized function, because
parametrized modules no longer look like leaves due to the
parametrizations module attached to them.
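A minimal sketch of why the leaf check breaks, using the standard parametrize API (the real is_leaf_or_only_parametrized lives in the AO code):
```
import torch.nn as nn
from torch.nn.utils import parametrize

class Noop(nn.Module):
    def forward(self, x):
        return x

lin = nn.Linear(4, 4)
parametrize.register_parametrization(lin, "weight", Noop())
# lin now has a child module named "parametrizations", so a naive
# "no children == leaf" test no longer treats it as a leaf.
print(dict(lin.named_children()).keys())  # dict_keys(['parametrizations'])
```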
Test Plan:
python test/test_ao_sparsity.py TestComposability
Imported from OSS
Reviewed By: vkuzo
Differential Revision: D35240275
fbshipit-source-id: 48529f2a83edfe6d8a2d2dff8ca3d08a3fb0d553
(cherry picked from commit 9d6361482e2885db964e02b0222cd23c9f4d469e)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73863
This PR fully aligns the convert function with the design: https://github.com/pytorch/rfcs/blob/master/RFC-0019-Extending-PyTorch-Quantization-to-Custom-Backends.md
and simplifies the implementation of the convert function by always producing a reference quantized model (with reference patterns) first,
and then lowering the model to a quantized model that is runnable with the PyTorch native backend (fbgemm/qnnpack).
This PR makes convert.py much easier to understand than the previous implementation, and we are able to remove the majority of the code
in quantization_patterns.py as well (in follow-up PRs).
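A conceptual sketch of the reference pattern that convert now emits (names are illustrative, not the actual implementation):
```
import torch
import torch.nn.functional as F

def reference_linear(x_q, w_q, bias, out_scale, out_zero_point):
    # Reference pattern: dequantize -> fp32 op -> quantize. A backend-specific
    # lowering pass pattern-matches this and replaces it with a fused
    # quantized kernel (fbgemm/qnnpack).
    y = F.linear(x_q.dequantize(), w_q.dequantize(), bias)
    return torch.quantize_per_tensor(y, out_scale, out_zero_point, torch.quint8)
```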
Test Plan:
```
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps
python test/test_quantization.py TestFXNumericSuiteCoreAPIs
python test/test_quantization.py TestFXNumericSuiteCoreAPIsModels
```
and other internal/oss regression tests
Imported from OSS
Reviewed By: andrewor14
Differential Revision: D34778506
fbshipit-source-id: 0678b66addf736039a8749b352f6f569caca962b
(cherry picked from commit 33ec9caf23f3ab373d827117efbd9db0668b2437)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73493
This PR enables basic support for reference modules in DBR quant.
For now, the support is limited to:
1. modules that have reference versions defined only (no functions)
2. torch.qint32 dtype only
Currently, the reference module logic is enabled whenever the dtype is
torch.qint32. This is done because this is what is needed earliest for
the first use case. A future PR will support more dtypes and also
add the `is_reference` flag to the API.
Test Plan:
```
python test/test_quantization.py TestQuantizeDBR.test_conv_int32_reference_model
```
Reviewed By: jerryzh168
Differential Revision: D34520759
Pulled By: vkuzo
fbshipit-source-id: 363db715315c5c7c20962a1818330ce288948778
(cherry picked from commit 6ccdfe2889c252211f191edc49f4147f66e803a4)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73436
This PR adds support reference module support for Embedding and EmbeddingBag, following https://github.com/pytorch/rfcs/blob/master/RFC-0019-Extending-PyTorch-Quantization-to-Custom-Backends.md
* the reference module inherits from the corresponding float module (e.g. nn.Embedding) and from ReferenceQuantizedModule (which defines some utility functions to store qparams for a single weight)
* in forward, we first quantize and then dequantize the weight (to generate the pattern) and then feed the weight to the original fp32 op
We'll connect this with FX graph mode quantization later, in the final PR that deprecates the current convert implementation. Since the current convert doesn't
support emitting quantize_per_tensor_dynamic ops, we don't want to implement it and immediately throw away the code, so it might be better to just implement this
in the final flow.
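A hedged sketch of the forward described above (the scale/zero_point values are assumed placeholders):
```
import torch
import torch.nn as nn
import torch.nn.functional as F

class RefEmbeddingSketch(nn.Embedding):
    # Illustrative only: quantize then immediately dequantize the weight so
    # the quantize/dequantize pattern shows up in the traced graph, then run
    # the ordinary fp32 op.
    def forward(self, indices):
        w = torch.quantize_per_tensor(
            self.weight.detach(), scale=0.1, zero_point=0, dtype=torch.quint8
        ).dequantize()
        return F.embedding(indices, w)
```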
Test Plan:
Will be tested later, in the final PR that deprecates the current convert implementation
Imported from OSS
Reviewed By: vkuzo
Differential Revision: D34480325
fbshipit-source-id: bc353f3be035a364e013fa9132d0422f19120ac3
(cherry picked from commit 1722ec2f8d82e9763ef252fed5796fd09d120e34)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72717
This will be renamed to WeightedQuantizedModule to
minimize confusion with reference modules.
Test Plan:
python test/test_quantization.py TestQuantizeFx
Imported from OSS
Reviewed By: jerryzh168
Differential Revision: D34172554
fbshipit-source-id: 4cd77d6048fde4875218386f7e55f864a73d5bd3
(cherry picked from commit b7af4cedb4275b6f9c06c0773f2997bc4e61578a)