70925bdf82
[1/N] Use "is" in python type comparison ( #165037 )
...
It is generally recommended to use `is`/`is not` to compare types. This series of changes applies that suggestion across the code base, with the aim of eventually enabling the related linter checks.
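For illustration, a minimal sketch of the rewrite this series applies (hypothetical class, not taken from the PR; rules such as pycodestyle's E721 flag the `==` form):

```python
class Payload:
    pass

obj = Payload()

# Before: equality comparison; it can be intercepted by a custom metaclass
# __eq__ and also passes for distinct-but-equal type objects.
if type(obj) == Payload:
    print("matched via ==")

# After: identity comparison; type objects are singletons, so `is` is the
# exact, idiomatic check and slightly faster.
if type(obj) is Payload:
    print("matched via is")
```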
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165037
Approved by: https://github.com/mlazos
2025-10-10 12:36:50 +00:00
b13cd141b3
Add pyrefly suppressions (#164748)
Adds suppressions so that pyrefly will typecheck clean: https://github.com/pytorch/pytorch/issues/163283
Test plan:
dmypy restart && python3 scripts/lintrunner.py -a
pyrefly check
step 1: delete lines from the `project-excludes` field in the pyrefly.toml file
step 2: run `pyrefly check`
step 3: add suppressions, then clean up unused suppressions
before: https://gist.github.com/maggiemoss/4b3bf2037014e116bc00706a16aef199
after:
0 errors (4,263 ignored)
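For illustration, a sketch of what one of these suppressions looks like inline, assuming pyrefly's `# pyrefly: ignore` comment syntax (the snippet itself is hypothetical):

```python
# Deliberately ill-typed assignment; without the trailing comment,
# `pyrefly check` would report it. The suppression silences only this line,
# which is how the 4,263 ignores above keep the check at 0 errors.
retries: int = "three"  # pyrefly: ignore
```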
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164748
Approved by: https://github.com/oulgen
2025-10-07 17:31:18 +00:00
69cc99525c
[nn]: updated type alias for padding_mode in modules/conv.py (#158843)
Fixes #152280
Changed type of `padding_mode` from `str` to `Literal["zeros", "reflect", "replicate", "circular"]`
**cc** @Skylion007
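A minimal sketch of what the stricter annotation buys (the `_PaddingMode` alias name is hypothetical; the commit may spell the `Literal` inline):

```python
from typing import Literal

_PaddingMode = Literal["zeros", "reflect", "replicate", "circular"]

def check_mode(padding_mode: _PaddingMode = "zeros") -> None:
    # With the plain `str` annotation, a typo like "replicat" only failed at
    # runtime; with Literal, type checkers reject it at analysis time.
    ...
```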
Pull Request resolved: https://github.com/pytorch/pytorch/pull/158843
Approved by: https://github.com/mikaylagawarecki
2025-07-25 23:05:02 +00:00
e95e8eed0a
mypy 1.16.0 (#155821)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/155821
Approved by: https://github.com/ezyang, https://github.com/zou3519
2025-06-14 18:18:43 +00:00
279cae52e7
[BE][PYFMT] migrate PYFMT for torch/ao/ to ruff format (#148185)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/148185
Approved by: https://github.com/ezyang
2025-06-14 16:47:04 +00:00
bd97ce0b45
PEP585 update - torch/ao (#145199)
See #145101 for details.
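For context, PEP 585 lets the builtin containers be used as generic types directly, replacing their `typing` counterparts; a minimal before/after sketch:

```python
# Before: generics imported from typing (deprecated since Python 3.9)
from typing import Dict, List

def bucket_old(xs: List[int]) -> Dict[int, List[int]]: ...

# After: builtin container types are subscriptable themselves
def bucket_new(xs: list[int]) -> dict[int, list[int]]: ...
```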
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145199
Approved by: https://github.com/bobrenjc93
2025-01-20 22:32:35 +00:00
612122af8f
Fix type-safety of torch.nn.Module instances (#141240)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/141240
Approved by: https://github.com/Skylion007, https://github.com/malfet
2024-11-22 00:05:05 +00:00
12e95aa4ee
[BE]: Apply PERF401 autofixes from ruff (#140980)
* Automatically applies ruff rule PERF401, turning loops into equivalent list comprehensions, which are faster and do not leak loop variables into the enclosing scope (a before/after sketch follows this list).
* List comprehensions not only often type-check better, but are 50+% faster than for loops in per-iteration overhead. They also preserve length information and are easier for the interpreter to optimize.
* Manually went back and made mypy happy after the change.
* Also fixed style lints in files covered by flake8 but not by pyfmt.
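A minimal before/after sketch of the PERF401 rewrite (hypothetical code, not taken from the diff):

```python
items = range(10)

# Before: building the list with repeated .append() calls; `item` also
# remains bound in the enclosing scope after the loop finishes.
squares = []
for item in items:
    squares.append(item * item)

# After: the equivalent list comprehension; no per-iteration method-call
# overhead, and the loop variable stays local to the comprehension.
squares = [item * item for item in items]
```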
Pull Request resolved: https://github.com/pytorch/pytorch/pull/140980
Approved by: https://github.com/justinchuby, https://github.com/malfet
2024-11-20 17:52:07 +00:00
31715be72a
[BE]: Update mypy to 1.11.2 (#133816)
Updates mypy to 1.11.2 to improve type inference
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133816
Approved by: https://github.com/ezyang
2024-09-16 19:44:11 +00:00
3117f2cf67
Revert "[BE]: Update mypy to 1.11.2 ( #133816 )"
...
This reverts commit 55299cfc223fa838aadd8d6d6fa3ed541fa5acd1.
Reverted https://github.com/pytorch/pytorch/pull/133816 on behalf of https://github.com/jeanschmidt due to seems to have broken https://github.com/pytorch/pytorch/actions/runs/10865710499/job/30155699792 on main ([comment](https://github.com/pytorch/pytorch/pull/133816#issuecomment-2352377684))
2024-09-16 09:11:16 +00:00
55299cfc22
[BE]: Update mypy to 1.11.2 (#133816)
Updates mypy to 1.11.2 to improve type inference
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133816
Approved by: https://github.com/ezyang
2024-09-14 21:40:36 +00:00
973a1362b9
[BE] enable UFMT for torch/ao/nn/ (#128861)
Part of #123062
Pull Request resolved: https://github.com/pytorch/pytorch/pull/128861
Approved by: https://github.com/ezyang
2024-07-25 02:49:19 +00:00
afe15d2d2f
Flip default value for mypy disallow_untyped_defs [3/11] (#127840)
See #127836 for details.
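For illustration, what flipping `disallow_untyped_defs` to true means at a definition site (minimal sketch; mypy's error code for this is `no-untyped-def`):

```python
# Rejected once disallow_untyped_defs is on:
# error: Function is missing a type annotation  [no-untyped-def]
def add(a, b):
    return a + b

# Accepted: fully annotated definition
def add_typed(a: int, b: int) -> int:
    return a + b
```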
Pull Request resolved: https://github.com/pytorch/pytorch/pull/127840
Approved by: https://github.com/oulgen
2024-06-08 18:28:01 +00:00
c404b2968c
Support min/max carry over for eager mode from_float method (#127309)
Summary:
After QAT completes, or when a pre-tuned weight observer is provided by a tunable PTQ algorithm, `from_float` should not overwrite the observer's statistics by re-observing the given weight; static QAT in particular must never do so, and dynamic QAT likewise does not need to re-run the weight observer by design.
This change fixes that.
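A hedged sketch of the idea, not the PR's actual code (the helper name and the carry-over condition are illustrative; observers such as `MinMaxObserver` do expose `calculate_qparams()`):

```python
import torch
from torch.ao.quantization.observer import MinMaxObserver

def qparams_from_float_sketch(weight: torch.Tensor, tuned_observer=None):
    """Reuse a pre-tuned observer's min/max instead of re-observing the weight."""
    if tuned_observer is not None:
        # Carry the calibrated range from QAT / tunable PTQ over untouched.
        observer = tuned_observer
    else:
        observer = MinMaxObserver()
        observer(weight)  # only observe when no tuned stats exist
    return observer.calculate_qparams()
```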
Test Plan: Signals
Differential Revision: D57747749
Pull Request resolved: https://github.com/pytorch/pytorch/pull/127309
Approved by: https://github.com/jerryzh168
2024-05-29 19:33:26 +00:00
46712b019d
Enable local_partial_types (#118467)
When using dmypy, this setting is enabled and cannot be turned off. Force it for regular mypy too.
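For illustration, the pattern this setting affects (a minimal sketch based on mypy's documented behavior):

```python
class Job:
    # With local_partial_types enabled, mypy no longer infers this
    # attribute's type from the later assignment in a different scope and
    # reports: Need type annotation for "timeout".
    timeout = None  # fix: annotate, e.g. `timeout: int | None = None`

    def start(self) -> None:
        self.timeout = 30
```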
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118467
Approved by: https://github.com/Skylion007
ghstack dependencies: #118414, #118418, #118432
2024-01-28 13:38:22 +00:00
c0d8a4af0a
[BE] Enable ruff's UP rules and autoformat ao/ (#105430)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105430
Approved by: https://github.com/albanD, https://github.com/malfet
2023-07-19 13:44:37 +00:00
482f87a7bc
[quantized] Fix return values of _get_name() in quantized ConvTranspose (#97678)
This PR fixes incorrect return values of `_get_name()` in quantized `ConvTranspose?d`.
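A hedged sketch of the shape of the fix (an illustrative stand-in class, not the real module hierarchy):

```python
class QuantizedConvTransposeSketch:
    """Stand-in for a quantized ConvTranspose{1,2,3}d module."""

    def _get_name(self) -> str:
        # _get_name() feeds repr() and error messages; the bug was that some
        # ConvTranspose variants returned a name not matching the class.
        return "QuantizedConvTranspose2d"
```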
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97678
Approved by: https://github.com/vkuzo, https://github.com/kit1980
2023-04-07 01:14:42 +00:00
5b1cedacde
[BE] [2/3] Rewrite super() calls in functorch and torch (#94588)
Rewrite Python built-in class `super()` calls. Only non-semantic changes should be applied.
- #94587
- #94588
- #94592
Also, methods with only a `super()` call are removed:
```diff
 class MyModule(nn.Module):
-    def __init__(self):
-        super().__init__()
-
     def forward(self, ...):
         ...
```
Cases where the rewrite would change semantics are left unchanged, e.g.:
f152a79be9/caffe2/python/net_printer.py (L184-L190)
f152a79be9/test/test_jit_fuser_te.py (L2628-L2635)
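For illustration, the core zero-argument rewrite itself (hypothetical classes):

```python
class Base:
    def greet(self) -> str:
        return "hi"

class Child(Base):
    def greet(self) -> str:
        # Old, Python 2 style spelling (still valid, just verbose):
        #     return super(Child, self).greet()
        # Rewritten zero-argument form; equivalent inside a method body:
        return super().greet()
```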
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94588
Approved by: https://github.com/ezyang, https://github.com/albanD
2023-02-10 21:16:33 +00:00
f15ab8a7f2
AO migration: replace torch internal callsites (#94170)
Summary:
Do the following renames:
`torch.quantization` -> `torch.ao.quantization`
`torch.nn.quantized` -> `torch.ao.nn.quantized`
`torch.nn.quantizable` -> `torch.ao.nn.quantizable`
`torch.nn.qat` -> `torch.ao.nn.qat`
`torch.nn.intrinsic` -> `torch.ao.nn.intrinsic`
And then, do
`torch.ao.nn.quantized._reference` -> `torch.ao.nn.quantized.reference` to clean up the aftermath of https://github.com/pytorch/pytorch/pull/84974
Then, manually update `test/test_module_init.py` to fix hanging whitespace due to the replace.
Run this script to do the replacements: https://gist.github.com/vkuzo/7f7afebf8c31b9ba48306223e68a1c82
This is for https://github.com/pytorch/pytorch/issues/81667
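For illustration, what a typical callsite rename looks like (minimal sketch):

```python
# Before the migration:
#     from torch.quantization import get_default_qconfig
# After:
from torch.ao.quantization import get_default_qconfig

qconfig = get_default_qconfig("fbgemm")  # behavior unchanged; only the path moved
```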
Test plan: CI
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94170
Approved by: https://github.com/jerryzh168
2023-02-07 02:32:23 +00:00
0f802eedc2
[Quant][FX] Lower QConvAddReLU2d for onednn backend (#91155)
**Summary**
Add quantization mappings for QConvAddReLU2d for int8 inference with the onednn backend. Fusion and lowering are supported only in FX mode.
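For illustration, a sketch of the float pattern this mapping targets (hypothetical shapes):

```python
import torch
import torch.nn.functional as F

conv = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)
x = torch.randn(1, 3, 8, 8)
residual = torch.randn(1, 3, 8, 8)

# conv -> add (residual) -> relu: after FX-mode quantization for onednn,
# this is the pattern that lowers to a single fused QConvAddReLU2d kernel
# instead of three separate int8 ops.
out = F.relu(conv(x) + residual)
```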
**Test plan**
```
python -m pytest test_quantization.py -k test_fuse_conv_bn_add_relu_onednn
python -m pytest test_quantization.py -k test_fuse_conv_bn_add_relu_by_default
python -m pytest test_quantization.py -k test_fuse_conv_bn_add_relu_lowering
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91155
Approved by: https://github.com/jgong5, https://github.com/jerryzh168
2023-02-01 01:18:52 +00:00
ef4118e435
[Quant][FX] Lower QConvAdd2d for onednn backend (#91153)
**Summary**
Add quantization mappings for QConvAdd2d for int8 inference with the onednn backend. Fusion and lowering are supported only in FX mode.
**Test plan**
```
python -m pytest test_quantization.py -k test_fuse_conv_bn_add_relu_onednn
python -m pytest test_quantization.py -k test_fuse_conv_bn_add_relu_by_default
python -m pytest test_quantization.py -k test_fuse_conv_bn_add_relu_lowering
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91153
Approved by: https://github.com/jgong5, https://github.com/jerryzh168
2023-02-01 01:14:12 +00:00
ad782ff7df
Enable xdoctest runner in CI for real this time (#83816)
Builds on #83317 and enables running the doctests. Just need to figure out what is causing the failures.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83816
Approved by: https://github.com/ezyang, https://github.com/malfet
2022-12-29 05:32:42 +00:00
f51f6aa387
Fix non-existing parameters in docstrings (#90505)
Continuation after https://github.com/pytorch/pytorch/pull/90163.
Here is a script I used to find all the non-existing arguments in the docstrings (the script can give false positives in the presence of *args/**kwargs or decorators):
_Edit:_
I've realized that the indentation is wrong for the last `break` in the script, so the script only gives output for a function if the first docstring argument is wrong. I'll create a separate PR if I find more issues with the corrected script.
```python
import ast
import os

import docstring_parser

for root, dirs, files in os.walk('.'):
    for name in files:
        if root.startswith("./.git/") or root.startswith("./third_party/"):
            continue
        if name.endswith(".py"):
            full_name = os.path.join(root, name)
            with open(full_name, "r") as source:
                tree = ast.parse(source.read())
            for node in ast.walk(tree):
                if isinstance(node, ast.FunctionDef):
                    all_node_args = node.args.args
                    if node.args.vararg is not None:
                        all_node_args.append(node.args.vararg)
                    if node.args.kwarg is not None:
                        all_node_args.append(node.args.kwarg)
                    if node.args.posonlyargs is not None:
                        all_node_args.extend(node.args.posonlyargs)
                    if node.args.kwonlyargs is not None:
                        all_node_args.extend(node.args.kwonlyargs)
                    args = [a.arg for a in all_node_args]
                    docstring = docstring_parser.parse(ast.get_docstring(node))
                    doc_args = [a.arg_name for a in docstring.params]
                    clean_doc_args = []
                    for a in doc_args:
                        clean_a = ""
                        for c in a.split()[0]:
                            if c.isalnum() or c == '_':
                                clean_a += c
                        if clean_a:
                            clean_doc_args.append(clean_a)
                    doc_args = clean_doc_args
                    for a in doc_args:
                        if a not in args:
                            print(full_name, node.lineno, args, doc_args)
                        break  # NB: mis-indented, as noted above; stops after the first docstring argument
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90505
Approved by: https://github.com/malfet, https://github.com/ZainRizvi
2022-12-09 21:43:09 +00:00
3a02873183
[quant][ao_migration] nn.intrinsic.quantized migration to ao (#86172)
All quantization-related modules are being migrated to `torch.ao`. This migrates `nn.intrinsic.quantized`. Please see the [tracker](https://github.com/pytorch/pytorch/issues/81667) for the timeline.
```
python test/test_quantization.py -- TestAOMigrationNNIntrinsic
```
Internal:
```
buck2 test @mode/dev-nosan //caffe2/test:quantization -- TestAOMigrationNNIntrinsic
```
Differential Revision: [D39425515](https://our.internmc.facebook.com/intern/diff/D39425515)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86172
Approved by: https://github.com/jerryzh168
2022-10-08 00:01:38 +00:00
efccb6401c
[quant][ao_migration] nn.intrinsic.qat migration to ao (#86171)
All quantization-related modules are being migrated to `torch.ao`. This migrates `nn.intrinsic.qat`. Please see the [tracker](https://github.com/pytorch/pytorch/issues/81667) for the timeline.
```
python test/test_quantization.py TestAOMigrationNNIntrinsic
```
Differential Revision: [D39419993](https://our.internmc.facebook.com/intern/diff/D39419993/)
**NOTE FOR REVIEWERS**: This PR has internal Meta-specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D39419993/)!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86171
Approved by: https://github.com/jerryzh168
2022-10-07 17:29:42 +00:00
c92e5ac95b
[quant][ao_migration] torch.nn.quantized.modules → torch.ao.nn.quantized.modules (#78713)
Context: In order to avoid cluttering the `torch.nn` namespace, the quantized modules namespace is moved to `torch.ao.nn`.
The list of the `nn.quantized` files that are being migrated:
- [ ] `torch.nn.quantized` → `torch.ao.nn.quantized`
- [X] `torch.nn.quantized.functional` → `torch.ao.nn.quantized.functional`
- [X] [Current PR] `torch.nn.quantized.modules` → `torch.ao.nn.quantized.modules`
- [ ] `torch.nn.quantized.dynamic` → `torch.ao.nn.quantized.dynamic`
- [ ] `torch.nn.quantized._reference` → `torch.ao.nn.quantized._reference`
- [ ] `torch.nn.quantizable` → `torch.ao.nn.quantizable`
- [ ] `torch.nn.qat` → `torch.ao.nn.qat`
- [ ] `torch.nn.qat.modules` → `torch.ao.nn.qat.modules`
- [ ] `torch.nn.qat.dynamic` → `torch.ao.nn.qat.dynamic`
- [ ] `torch.nn.intrinsic` → `torch.ao.nn.intrinsic`
- [ ] `torch.nn.intrinsic.modules` → `torch.ao.nn.intrinsic.modules`
- [ ] `torch.nn.intrinsic.qat` → `torch.ao.nn.intrinsic.qat`
- [ ] `torch.nn.intrinsic.quantized` → `torch.ao.nn.intrinsic.quantized`
- [ ] `torch.nn.intrinsic.quantized.modules` → `torch.ao.nn.intrinsic.quantized.modules`
- [ ] `torch.nn.intrinsic.quantized.dynamic` → `torch.ao.nn.intrinsic.quantized.dynamic`
The majority of the files are just moved to the new location. However, specific files need to be double-checked:
- Documentation @vkuzo
- docs/source/conf.py
- docs/source/quantization.rst
- [quantize_fx](torch/ao/quantization/quantize_fx.py) @jerryzh168
- [common test routine](test/quantization/ao_migration/common.py) @HDCharles
- JIT stuff @jamesr66a
- torch/csrc/jit/passes/hoist_conv_packed_params.cpp
- torch/csrc/jit/passes/quantization/helper.h
- torch/csrc/jit/serialization/import_source.cpp
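For illustration, a hedged sketch of what the move means at import time, assuming the old path is kept as a thin alias during the deprecation window:

```python
# New canonical location after the migration:
from torch.ao.nn.quantized.modules import Conv2d

# Old location; assumed to keep working and to resolve to the same class
# object while the backward-compatibility shim is in place.
from torch.nn.quantized.modules import Conv2d as LegacyConv2d

assert Conv2d is LegacyConv2d  # holds if the old module re-exports the new one
```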
Differential Revision: [D38926012](https://our.internmc.facebook.com/intern/diff/D38926012)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78713
Approved by: https://github.com/jerryzh168
2022-08-25 16:50:33 +00:00
6a9c02339d
Revert "[quant][ao_migration] torch.nn.quantized.modules
→ torch.ao.nn.quantized.modules
( #78713 )"
...
This reverts commit 432f037498e3f470f1f6d2a5cc7c6ae8eb4fc870.
Reverted https://github.com/pytorch/pytorch/pull/78713 on behalf of https://github.com/janeyx99 due to Reverting for breaking (trunk-only) ios build
2022-08-22 07:32:37 +00:00
432f037498
[quant][ao_migration] torch.nn.quantized.modules → torch.ao.nn.quantized.modules (#78713)
Context: In order to avoid cluttering the `torch.nn` namespace, the quantized modules namespace is moved to `torch.ao.nn`.
The list of the `nn.quantized` files that are being migrated:
- [ ] `torch.nn.quantized` → `torch.ao.nn.quantized`
- [X] `torch.nn.quantized.functional` → `torch.ao.nn.quantized.functional`
- [X] [Current PR] `torch.nn.quantized.modules` → `torch.ao.nn.quantized.modules`
- [ ] `torch.nn.quantized.dynamic` → `torch.ao.nn.quantized.dynamic`
- [ ] `torch.nn.quantized._reference` → `torch.ao.nn.quantized._reference`
- [ ] `torch.nn.quantizable` → `torch.ao.nn.quantizable`
- [ ] `torch.nn.qat` → `torch.ao.nn.qat`
- [ ] `torch.nn.qat.modules` → `torch.ao.nn.qat.modules`
- [ ] `torch.nn.qat.dynamic` → `torch.ao.nn.qat.dynamic`
- [ ] `torch.nn.intrinsic` → `torch.ao.nn.intrinsic`
- [ ] `torch.nn.intrinsic.modules` → `torch.ao.nn.intrinsic.modules`
- [ ] `torch.nn.intrinsic.qat` → `torch.ao.nn.intrinsic.qat`
- [ ] `torch.nn.intrinsic.quantized` → `torch.ao.nn.intrinsic.quantized`
- [ ] `torch.nn.intrinsic.quantized.modules` → `torch.ao.nn.intrinsic.quantized.modules`
- [ ] `torch.nn.intrinsic.quantized.dynamic` → `torch.ao.nn.intrinsic.quantized.dynamic`
The majority of the files are just moved to the new location. However, specific files need to be double-checked:
- Documentation @vkuzo
- docs/source/conf.py
- docs/source/quantization.rst
- [quantize_fx](torch/ao/quantization/quantize_fx.py) @jerryzh168
- [common test routine](test/quantization/ao_migration/common.py) @HDCharles
- JIT stuff @jamesr66a
- torch/csrc/jit/passes/hoist_conv_packed_params.cpp
- torch/csrc/jit/passes/quantization/helper.h
- torch/csrc/jit/serialization/import_source.cpp
Differential Revision: [D36860145](https://our.internmc.facebook.com/intern/diff/D36860145/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78713
Approved by: https://github.com/jerryzh168
2022-08-22 01:38:55 +00:00