4 Commits

706f7d1f22 Enable UFMT on test/jit_hooks, test/lazy and some files (#123807)
Part of: #123062

Ran lintrunner on:

- `test/jit_hooks`
- `test/lazy`
- `test/linear.py`
- `test/load_torchscript_model.py`
- `test/mkl_verbose.py`
- `test/mkldnn_verbose.py`

with command:

```bash
lintrunner -a --take UFMT --all-files
```

Co-authored-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123807
Approved by: https://github.com/ezyang
2024-04-12 03:39:38 +00:00
1022443168 Revert D30279364: [codemod][lint][fbcode/c*] Enable BLACK by default
Test Plan: revert-hammer

Differential Revision: D30279364 (b004307252)

Original commit changeset: c1ed77dfe43a

fbshipit-source-id: eab50857675c51e0088391af06ec0ecb14e2347e
2021-08-12 11:45:01 -07:00
b004307252 [codemod][lint][fbcode/c*] Enable BLACK by default
Test Plan: manual inspection & sandcastle

Reviewed By: zertosh

Differential Revision: D30279364

fbshipit-source-id: c1ed77dfe43a3bde358f92737cd5535ae5d13c9a
2021-08-12 10:58:35 -07:00
bfae3789ba Move conv to mkldnn (#51483)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51483

This PR moves the conv weights of a frozen model to MKLDNN and reorders them ahead of time (AOT). Once the weights are already in MKLDNN, even a single conv sees large speedups from just converting its input and output from/to MKLDNN. I benchmarked the top 200 shapes in predictor [here](https://www.internalfb.com/phabricator/paste/view/P171537938) and verified that it speeds up popular models in torchvision.
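
The PR itself works on frozen TorchScript graphs, but the underlying idea can be illustrated with a minimal eager-mode sketch (an assumption-laden example, not the PR's implementation: it uses `torch.utils.mkldnn.to_mkldnn` as a stand-in for the AOT weight reordering and requires a PyTorch build with MKL-DNN/oneDNN support):

```python
# Minimal sketch, not the PR's code: pre-convert the conv weights to MKLDNN
# once (ahead of time), then each call only pays for converting the
# activations from/to MKLDNN at the boundary.
import torch
import torch.nn as nn
from torch.utils import mkldnn as mkldnn_utils

assert torch.backends.mkldnn.is_available(), "this sketch needs an MKL-DNN build"

conv = nn.Conv2d(64, 64, kernel_size=3, padding=1).eval()
x = torch.randn(1, 64, 56, 56)

with torch.no_grad():
    y_dense = conv(x)                                  # baseline: dense conv
    mkldnn_conv = mkldnn_utils.to_mkldnn(conv)         # weights reordered to MKLDNN ahead of time
    y_mkldnn = mkldnn_conv(x.to_mkldnn()).to_dense()   # convert input/output at the boundary only

# Results should agree up to floating-point reordering.
torch.testing.assert_close(y_dense, y_mkldnn, rtol=1e-4, atol=1e-5)
```

In the frozen-model setting the conversion happens once at freeze time, so repeated inference calls reuse the already-reordered weights instead of reconverting them on every forward pass.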

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D26696703

Pulled By: eellison

fbshipit-source-id: 0b4441bee4f6e0890a4540fbca3bb5e58b8c5adf
2021-03-01 21:19:27 -08:00