d8c8ba2440
Fix unused Python variables in test/[e-z]* (#136964)
...
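A minimal sketch of the kind of cleanup the title describes (the test and model below are hypothetical, not taken from the PR):
```python
import torch

class _DemoModel(torch.nn.Module):  # hypothetical stand-in for a real test model
    def forward(self, x):
        return x * 2

def test_forward_before():
    model = _DemoModel()
    out = model(torch.randn(2, 3))  # flagged: `out` is assigned but never used

def test_forward_after():
    model = _DemoModel()
    model(torch.randn(2, 3))  # call kept for its side effects; the unused binding is removed
```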
Pull Request resolved: https://github.com/pytorch/pytorch/pull/136964
Approved by: https://github.com/justinchuby, https://github.com/albanD
2024-12-18 23:02:30 +00:00
221350e3a4
Add None return type to __init__ -- tests (#132352)
...
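A minimal sketch of the kind of annotation the title describes (the class below is hypothetical, not from the PR):
```python
class Example:  # hypothetical class
    # Before this PR the signature was `def __init__(self):`;
    # the change adds an explicit `-> None` return annotation.
    def __init__(self) -> None:
        self.value = 0
```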
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132352
Approved by: https://github.com/ezyang
ghstack dependencies: #132335, #132351
2024-08-01 15:44:51 +00:00
6980c5048d
UFMT formatting on test/mobile (#123521)
...
Partially addresses https://github.com/pytorch/pytorch/issues/123062
Ran lintrunner on:
test/mobile
Detail:
```Shell
$ lintrunner -a --take UFMT --all-files
ok No lint issues.
Successfully applied all patches.
```
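For context, UFMT combines import sorting (usort) with black formatting; a minimal before/after sketch of the kind of change it applies (the snippet is hypothetical, not from the PR):
```python
# Before formatting (illustrative):
#     import  torch
#     def f( x ):
#         return torch.relu( x )

# After `lintrunner -a --take UFMT`: imports sorted by usort, layout normalized by black.
import torch


def f(x):
    return torch.relu(x)
```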
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123521
Approved by: https://github.com/shink, https://github.com/ezyang
2024-04-09 14:06:22 +00:00
bd10fea79a
[BE]: Enable F821 and fix bugs (#116579)
...
Fixes #112371
I tried to fix as many of the bugs as I could; for a few I could not work out the proper fix, so I left those with `noqa` suppressions.
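For context, F821 is the flake8 "undefined name" check; a minimal sketch of the kind of bug it flags and of a suppressed case (names are hypothetical):
```python
# `factor` is not defined anywhere, so flake8 reports F821 on this line.
# Where the intended fix was unclear, the PR keeps the line and suppresses the warning.
def scale(x):
    return x * factor  # noqa: F821

# The usual fix: make the missing name explicit, e.g. as a parameter.
def scale_fixed(x, factor=2.0):
    return x * factor
```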
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116579
Approved by: https://github.com/ezyang
2024-01-01 08:40:46 +00:00
046e88a291
[BE] [3/3] Rewrite `super()` calls in test (#94592)
...
Rewrite calls to the Python built-in `super()`. Only non-semantic changes are applied.
- #94587
- #94588
- #94592
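A minimal sketch of the kind of rewrite applied, assuming a typical `nn.Module` subclass (the class is hypothetical): the two-argument `super(MyModule, self)` form becomes the zero-argument form.
```python
import torch.nn as nn

class MyModule(nn.Module):  # hypothetical module
    def __init__(self):
        # Before the rewrite this line read: super(MyModule, self).__init__()
        super().__init__()
        self.linear = nn.Linear(4, 4)

    def forward(self, x):
        return self.linear(x)
```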
Also, methods with only a `super()` call are removed:
```diff
 class MyModule(nn.Module):
-    def __init__(self):
-        super().__init__()
-
     def forward(self, ...):
         ...
```
Cases where the rewrite would change the semantics are left unchanged, e.g.:
f152a79be9/caffe2/python/net_printer.py (L184-L190)
f152a79be9/test/test_jit_fuser_te.py (L2628-L2635)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94592
Approved by: https://github.com/ezyang, https://github.com/seemethere
2023-02-12 22:20:53 +00:00
19cacecf34
Fix and Re-enable test_quantize_fx_lite_script_module.py (#88897)
...
Summary: After D35984526 (416899d1a9), `torch.ao.quantization.quantize_fx.prepare_fx` requires passing in `example_args`. This diff fixes the calls to `prepare_fx` in this test by adding `example_args` where necessary.
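A minimal sketch of the kind of call-site change, assuming the `prepare_fx(model, qconfig_mapping, example_inputs)` signature; the model and qconfig mapping below are placeholders, not the test's actual ones:
```python
import torch
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import convert_fx, prepare_fx

model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3)).eval()  # placeholder model
example_inputs = (torch.randn(1, 3, 16, 16),)

# Before the fix the tests called prepare_fx without example inputs;
# after D35984526 they must be supplied so the graph can be traced and observed.
prepared = prepare_fx(model, get_default_qconfig_mapping(), example_inputs)
quantized = convert_fx(prepared)
```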
Test Plan:
```
buck test caffe2/test:fx_quantization_lite
```
```
✓ ListingSuccess: caffe2/test:fx_quantization_lite : 3 tests discovered (39.689)
✓ Pass: caffe2/test:fx_quantization_lite - test_conv2d (mobile.test_quantize_fx_lite_script_module.TestLiteFuseFx) (44.451)
✓ Pass: caffe2/test:fx_quantization_lite - test_embedding (mobile.test_quantize_fx_lite_script_module.TestLiteFuseFx) (45.462)
✓ Pass: caffe2/test:fx_quantization_lite - test_submodule (mobile.test_quantize_fx_lite_script_module.TestLiteFuseFx) (45.933)
Summary
Pass: 3
ListingSuccess: 1
Finished test run: https://www.internalfb.com/intern/testinfra/testrun/3096224827259146
```
Differential Revision: D41227335
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88897
Approved by: https://github.com/dagitses
2022-11-16 00:56:12 +00:00
c92e5ac95b
[quant][ao_migration] `torch.nn.quantized.modules` → `torch.ao.nn.quantized.modules` (#78713)
...
Context: To avoid cluttering the `torch.nn` namespace, the quantized modules are moved to `torch.ao.nn`.
The list of the `nn.quantized` files that are being migrated:
- [ ] `torch.nn.quantized` → `torch.ao.nn.quantized`
- [X] `torch.nn.quantized.functional` → `torch.ao.nn.quantized.functional`
- [X] [Current PR] `torch.nn.quantized.modules` → `torch.ao.nn.quantized.modules`
- [ ] `torch.nn.quantized.dynamic` → `torch.ao.nn.quantized.dynamic`
- [ ] `torch.nn.quantized._reference` → `torch.ao.nn.quantized._reference`
- [ ] `torch.nn.quantizable` → `torch.ao.nn.quantizable`
- [ ] `torch.nn.qat` → `torch.ao.nn.qat`
- [ ] `torch.nn.qat.modules` → `torch.ao.nn.qat.modules`
- [ ] `torch.nn.qat.dynamic` → `torch.ao.nn.qat.dynamic`
- [ ] `torch.nn.intrinsic` → `torch.ao.nn.intrinsic`
- [ ] `torch.nn.intrinsic.modules` → `torch.ao.nn.intrinsic.modules`
- [ ] `torch.nn.intrinsic.qat` → `torch.ao.nn.intrinsic.qat`
- [ ] `torch.nn.intrinsic.quantized` → `torch.ao.nn.intrinsic.quantized`
- [ ] `torch.nn.intrinsic.quantized.modules` → `torch.ao.nn.intrinsic.quantized.modules`
- [ ] `torch.nn.intrinsic.quantized.dynamic` → `torch.ao.nn.intrinsic.quantized.dynamic`
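For user code, the move amounts to an import-path update; a minimal sketch (the `Linear` usage below is illustrative only):
```python
# Before the migration:
# from torch.nn.quantized.modules import Linear

# After this PR, the canonical location is under torch.ao.nn:
from torch.ao.nn.quantized.modules import Linear

q_linear = Linear(4, 4)  # same class as before, new namespace
```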
The majority of the files are simply moved to the new location.
However, a few files need to be double-checked:
- Documentation @vkuzo
- docs/source/conf.py
- docs/source/quantization.rst
- [quantize_fx](torch/ao/quantization/quantize_fx.py) @jerryzh168
- [common test routine](test/quantization/ao_migration/common.py) @HDCharles
- JIT stuff @jamesr66a
- torch/csrc/jit/passes/hoist_conv_packed_params.cpp
- torch/csrc/jit/passes/quantization/helper.h
- torch/csrc/jit/serialization/import_source.cpp
Differential Revision: [D38926012](https://our.internmc.facebook.com/intern/diff/D38926012)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78713
Approved by: https://github.com/jerryzh168
2022-08-25 16:50:33 +00:00
6a9c02339d
Revert "[quant][ao_migration] `torch.nn.quantized.modules` → `torch.ao.nn.quantized.modules` (#78713)"
...
This reverts commit 432f037498e3f470f1f6d2a5cc7c6ae8eb4fc870.
Reverted https://github.com/pytorch/pytorch/pull/78713 on behalf of https://github.com/janeyx99 due to breaking the (trunk-only) iOS build
2022-08-22 07:32:37 +00:00
432f037498
[quant][ao_migration] `torch.nn.quantized.modules` → `torch.ao.nn.quantized.modules` (#78713)
...
Context: To avoid cluttering the `torch.nn` namespace, the quantized modules are moved to `torch.ao.nn`.
The list of the `nn.quantized` files that are being migrated:
- [ ] `torch.nn.quantized` → `torch.ao.nn.quantized`
- [X] `torch.nn.quantized.functional` → `torch.ao.nn.quantized.functional`
- [X] [Current PR] `torch.nn.quantized.modules` → `torch.ao.nn.quantized.modules`
- [ ] `torch.nn.quantized.dynamic` → `torch.ao.nn.quantized.dynamic`
- [ ] `torch.nn.quantized._reference` → `torch.ao.nn.quantized._reference`
- [ ] `torch.nn.quantizable` → `torch.ao.nn.quantizable`
- [ ] `torch.nn.qat` → `torch.ao.nn.qat`
- [ ] `torch.nn.qat.modules` → `torch.ao.nn.qat.modules`
- [ ] `torch.nn.qat.dynamic` → `torch.ao.nn.qat.dynamic`
- [ ] `torch.nn.intrinsic` → `torch.ao.nn.intrinsic`
- [ ] `torch.nn.intrinsic.modules` → `torch.ao.nn.intrinsic.modules`
- [ ] `torch.nn.intrinsic.qat` → `torch.ao.nn.intrinsic.qat`
- [ ] `torch.nn.intrinsic.quantized` → `torch.ao.nn.intrinsic.quantized`
- [ ] `torch.nn.intrinsic.quantized.modules` → `torch.ao.nn.intrinsic.quantized.modules`
- [ ] `torch.nn.intrinsic.quantized.dynamic` → `torch.ao.nn.intrinsic.quantized.dynamic`
The majority of the files are simply moved to the new location.
However, a few files need to be double-checked:
- Documentation @vkuzo
- docs/source/conf.py
- docs/source/quantization.rst
- [quantize_fx](torch/ao/quantization/quantize_fx.py) @jerryzh168
- [common test routine](test/quantization/ao_migration/common.py) @HDCharles
- JIT stuff @jamesr66a
- torch/csrc/jit/passes/hoist_conv_packed_params.cpp
- torch/csrc/jit/passes/quantization/helper.h
- torch/csrc/jit/serialization/import_source.cpp
Differential Revision: [D36860145](https://our.internmc.facebook.com/intern/diff/D36860145)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78713
Approved by: https://github.com/jerryzh168
2022-08-22 01:38:55 +00:00
d47a9004c8
[skip ci] Set test owner for mobile tests (#66829)
...
Summary:
Action following https://github.com/pytorch/pytorch/issues/66232
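For context, test ownership in PyTorch is typically declared with an `# Owner(s): [...]` comment at the top of each test file; a sketch assuming that convention (the exact label and test below are assumptions, not the PR's actual contents):
```python
# Owner(s): ["oncall: mobile"]

import unittest


class ExampleMobileTest(unittest.TestCase):  # hypothetical test, shown only for the header comment
    def test_placeholder(self):
        self.assertTrue(True)
```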
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66829
Reviewed By: albanD
Differential Revision: D31928812
Pulled By: janeyx99
fbshipit-source-id: 8116b7f3728df8632278b013007c06ecce583862
2021-10-26 10:20:01 -07:00
227e37dd39
pytorch quantization ao migration phase 2: caffe2/test (#65832)
...
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65832
Renames `torch.quantization` to `torch.ao.quantization` in the `caffe2/test` folder.
```
find caffe2/test/ -type f -name "*.py" -print0 | xargs -0 sed -i "s/torch\.quantization/torch.ao.quantization/g"
HG: manually revert the files testing this migration
hg revert caffe2/test/quantization/ao_migration/common.py
hg revert caffe2/test/quantization/ao_migration/test_ao_migration.py
```
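The effect on an individual test is an import-path change; a minimal sketch (the call site below is hypothetical):
```python
# Before phase 2 of the migration:
# from torch.quantization import get_default_qconfig

# After running the sed command above on caffe2/test:
from torch.ao.quantization import get_default_qconfig

qconfig = get_default_qconfig("fbgemm")
```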
Test Plan: CI
Reviewed By: z-a-f
Differential Revision: D31275754
fbshipit-source-id: 4ed54a74525634feb0f47a26d071102e19c30049
2021-10-01 06:26:30 -07:00
9ef1c64907
[PyTorch][Edge] Tests for QuantizationFx API on lite modules (#60476)
...
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60476
# Context
Add tests for Lite modules that are quantized using the FX API.
Read these posts for details about why we need a test bench for quantized lite modules:
https://fb.workplace.com/groups/2322282031156145/permalink/4289792691071726/
https://github.com/pytorch/pytorch/pull/60226#discussion_r654615851
Moved common code to `caffe2/torch/testing/_internal/common_quantization.py`.
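A rough sketch of the test pattern being added, not the literal test code: quantize with the FX API, script, round-trip through the lite interpreter, and compare outputs. The lite-interpreter helpers used here (`_save_to_buffer_for_lite_interpreter`, `_load_for_lite_interpreter`) are private APIs and, like the placeholder model, should be treated as assumptions:
```python
import io

import torch
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import convert_fx, prepare_fx
from torch.jit.mobile import _load_for_lite_interpreter

model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3)).eval()  # placeholder model
example_inputs = (torch.randn(1, 3, 16, 16),)

# 1. Quantize with the FX graph-mode API.
prepared = prepare_fx(model, get_default_qconfig_mapping(), example_inputs)
converted = convert_fx(prepared)

# 2. Script the quantized module and round-trip it through the lite interpreter.
scripted = torch.jit.script(converted)
buffer = io.BytesIO(scripted._save_to_buffer_for_lite_interpreter())
lite_module = _load_for_lite_interpreter(buffer)

# 3. Compare the FX-quantized output with the lite module's output.
torch.testing.assert_close(converted(*example_inputs), lite_module(*example_inputs))
```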
ghstack-source-id: 133144292
Test Plan:
```
[~/fbsource/fbcode] buck test caffe2/test:fx_quantization_lite
Downloaded 0/2 artifacts, 0.00 bytes, 100.0% cache miss
Building: finished in 8.3 sec (100%) 11892/11892 jobs, 2 updated
Total time: 8.6 sec
More details at https://www.internalfb.com/intern/buck/build/ffb7d517-d85e-4c8f-9531-5e5d9ca1d34c
Tpx test run coordinator for Facebook. See https://fburl.com/tpx for details.
Running with tpx session id: d79a5713-bd29-4bbf-ae76-33a413869a09
Trace available for this run at /tmp/tpx-20210630-105547.675980/trace.log
Started reporting to test run: https://www.internalfb.com/intern/testinfra/testrun/3096224749578707
✓ ListingSuccess: caffe2/test:fx_quantization_lite - main (9.423)
✓ Pass: caffe2/test:fx_quantization_lite - test_embedding (mobile.test_quantize_fx_lite_script_module.TestFuseFx) (10.630)
✓ Pass: caffe2/test:fx_quantization_lite - test_submodule (mobile.test_quantize_fx_lite_script_module.TestFuseFx) (12.464)
✓ Pass: caffe2/test:fx_quantization_lite - test_conv2d (mobile.test_quantize_fx_lite_script_module.TestFuseFx) (12.728)
Summary
Pass: 3
ListingSuccess: 1
If you need help understanding your runs, please follow the wiki: https://fburl.com/posting_in_tpx_users
Finished test run: https://www.internalfb.com/intern/testinfra/testrun/3096224749578707
```
Reviewed By: iseeyuan
Differential Revision: D29306402
fbshipit-source-id: aa481e0f696b7e9b04b9dcc6516e8a390f7dc1be
2021-07-08 10:40:08 -07:00