102 Commits

Author SHA1 Message Date
f41c80c267 Don't error on 0-dim in convolution (#51922)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51922

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D26696701

Pulled By: eellison

fbshipit-source-id: f8b2c19e134931971fac00246920c1584dd43581
2021-03-01 21:22:30 -08:00
42bfda36e1 Add 0-dim support for binary mkldnn ops (#51921)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51921

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D26696696

Pulled By: eellison

fbshipit-source-id: 96ca79c0d6b5ed7c32c14dc4e7c383f2522a85cb
2021-03-01 21:22:26 -08:00
420fc42eab add OneDNN pooling backward (#49454)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49454

Test Plan: Imported from OSS

Reviewed By: ejguan

Differential Revision: D26006888

Pulled By: VitalyFedyunin

fbshipit-source-id: 6a4930982db784819fea70ffc9029441d673d90e
2021-02-23 14:45:55 -08:00
8f3ed60d3e enable mkldnn conv2d backward to support mkldnn tensor input (#48994)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/48994

Test Plan: Imported from OSS

Reviewed By: ejguan

Differential Revision: D25537189

Pulled By: VitalyFedyunin

fbshipit-source-id: d81d247798fad3815b735468d66ef9d62c07ef77
2021-02-18 10:23:10 -08:00
324c6aada1 BFloat16: enable prepacked weights' inference (#48922)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/48922

Test Plan: Imported from OSS

Reviewed By: ejguan

Differential Revision: D25537188

Pulled By: VitalyFedyunin

fbshipit-source-id: ab6eb1ba8cffb5ba9d00d05db8ef616628f8c932
2021-02-17 11:20:00 -08:00
bc1b1e8253 fixing mkldnn_linear & backward with silent error (#51713)
Summary:
mkldnn_linear & mkldnn_linear_backward_input give wrong results when the weight is non-contiguous.

Issue exposed in PR https://github.com/pytorch/pytorch/issues/51613
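
A minimal repro sketch (shapes are illustrative; it assumes `F.linear` routes MKLDNN inputs to `mkldnn_linear` and accepts a dense weight, which is the functional path this PR touches):

```python
import torch
import torch.nn.functional as F

x = torch.randn(4, 8)
w = torch.randn(8, 16).t()       # shape (16, 8); w.is_contiguous() is False
b = torch.randn(16)

ref = F.linear(x, w, b)                            # dense CPU reference
# Assumption: with an MKLDNN input this dispatches to mkldnn_linear, which
# before this fix could silently mis-read the strided (non-contiguous) weight.
out = F.linear(x.to_mkldnn(), w, b).to_dense()

print(torch.allclose(ref, out, atol=1e-5))         # expected True with the fix
```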

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51713

Reviewed By: zhangguanheng66

Differential Revision: D26282319

Pulled By: ngimel

fbshipit-source-id: 96516e10c9dc72c30dac278fce09b746aa5f51b2
2021-02-05 18:36:30 -08:00
ec378055c3 add OneDNN linear backward (#49453)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49453

Test Plan: Imported from OSS

Reviewed By: ejguan

Differential Revision: D26006889

Pulled By: VitalyFedyunin

fbshipit-source-id: 06e2a02b6e01d847395521a31fe84d844f2ee9ae
2021-02-02 12:18:59 -08:00
c0966914bc Internal gradcheck wrapper in testing._internal that sets certain flags to True (#51133)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/49409

There are many call sites where gradcheck/gradgradcheck is now implicitly invoked with `check_batched_grad` as True, whereas it was previously False. These cases fall into two basic categories:
1) the call site was previously using `torch.autograd.gradcheck` but is now changed to use the globally imported function instead
2) the call site was already using the globally imported function, but does not explicitly pass the `check_batched_grad` flag

Only in the _assertGradAndGradgradChecks cases, which are infrequent, did I assume that the author is aware that omitting the flag means not applying check_batched_grad=True (but maybe that is not the case?).

Overall, this PR in its current state assumes that unless the author explicitly specified `check_batched_grad=False`, they were probably just not aware of this flag and did not mean for it to be False.
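
A minimal sketch, with hypothetical names, of the kind of wrapper this PR adds under `torch.testing._internal`:

```python
import torch
from torch.autograd import gradcheck as _autograd_gradcheck

# Hypothetical wrapper: flip check_batched_grad on by default while still
# letting a call site opt out explicitly.
def gradcheck(fn, inputs, *, check_batched_grad=True, **kwargs):
    return _autograd_gradcheck(fn, inputs,
                               check_batched_grad=check_batched_grad, **kwargs)

x = torch.randn(3, dtype=torch.double, requires_grad=True)
assert gradcheck(torch.sin, (x,))                            # batched grad checked
assert gradcheck(torch.sin, (x,), check_batched_grad=False)  # explicit opt-out
```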

So far exceptions to the above (as discovered by CI) include:
 - Mkldnn (opaque tensors do not have strides) https://app.circleci.com/pipelines/github/pytorch/pytorch/264416/workflows/e4d87886-6247-4305-8526-2696130aa9a4/jobs/10401882/tests
 - all cases in test_sparse (https://app.circleci.com/pipelines/github/pytorch/pytorch/264553/workflows/3c1cbe30-830d-4acd-b240-38d833dccd9b/jobs/10407103)
 - all cases in test_overrides (https://app.circleci.com/pipelines/github/pytorch/pytorch/264553/workflows/3c1cbe30-830d-4acd-b240-38d833dccd9b/jobs/10407236)
 - test_autograd (test_LSTM_grad_and_gradgrad) - (https://app.circleci.com/pipelines/github/pytorch/pytorch/264553/workflows/3c1cbe30-830d-4acd-b240-38d833dccd9b/jobs/10407235)
 - test_data_parallel (test_data_parallel_buffers_requiring_grad) - *SIGSEGV* (https://app.circleci.com/pipelines/github/pytorch/pytorch/264820/workflows/14d89503-040d-4e3d-9f7b-0bc04833589b/jobs/10422697)
 - test_nn (https://app.circleci.com/pipelines/github/pytorch/pytorch/264919/workflows/df79e3ed-8a31-4a8e-b584-858ee99686ff/jobs/10427315)

Possible TODO is to prevent new tests from invoking external gradcheck.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51133

Reviewed By: ezyang

Differential Revision: D26147919

Pulled By: soulitzer

fbshipit-source-id: dff883b50f337510a89f391ea2fd87de2d531432
2021-01-29 09:13:37 -08:00
f66147ebca BFloat16: add explicit dtype support for to_mkldnn and to_dense (#48881)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/48881

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D25537190

Pulled By: VitalyFedyunin

fbshipit-source-id: a61a433c638e2e95576f88f081b64ff171b2316e
2020-12-16 16:09:42 -08:00
20ac736200 Remove py2 compatible future imports (#44735)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44735

Reviewed By: mruberry

Differential Revision: D23731306

Pulled By: ezyang

fbshipit-source-id: 0ba009a99e475ddbe22981be8ac636f8a1c8b02f
2020-09-16 12:55:57 -07:00
b72da0cf28 OneDNN: report error for dilation max_pooling and replace AT_ERROR with TORCH_CHECK in oneDNN code (#43538)
Summary:
Fix https://github.com/pytorch/pytorch/issues/43514.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43538

Reviewed By: agolynski

Differential Revision: D23364302

Pulled By: ngimel

fbshipit-source-id: 8d17752cf33dcacd34504e32b5e523e607cfb497
2020-08-28 10:57:19 -07:00
2b14f2d368 [reland][DNNL]:enable max_pool3d and avg_pool3d (#40996)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40996

Test Plan: Imported from OSS

Differential Revision: D22440766

Pulled By: VitalyFedyunin

fbshipit-source-id: 242711612920081eb4a7e5a7e80bc8b2d4c9f978
2020-07-16 10:26:45 -07:00
2b8db35c7e [reland][DNNL]:enable batchnorm3d (#40995)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40995

Test Plan: Imported from OSS

Differential Revision: D22440765

Pulled By: VitalyFedyunin

fbshipit-source-id: b4bf427bbb7010ee234a54e81ade371627f9e82c
2020-07-15 13:56:47 -07:00
b48ee175e6 [reland][DNNL]:enable conv3d (#40691)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40691

Test Plan: Imported from OSS

Differential Revision: D22296548

Pulled By: VitalyFedyunin

fbshipit-source-id: 8e2a7cf14e8bdfa2f29b735a89e8c83f6119e68d
2020-07-15 13:54:41 -07:00
fc4824aa4a enable mkldnn dilation conv (#40483)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40483

Reviewed By: ezyang

Differential Revision: D22213696

Pulled By: ngimel

fbshipit-source-id: 0321eee8fcaf144b20a5182aa76f98d505c65400
2020-06-24 13:28:05 -07:00
016cf7d66e Revert D22102408: DNNL: enable conv3d
Test Plan: revert-hammer

Differential Revision: D22102408

Original commit changeset: 1e95cede429f

fbshipit-source-id: a20b725164177e8571320007548a58cc4779d669
2020-06-22 15:41:51 -07:00
17fe0e2b8a Revert D22102407: DNNL: enable batchnorm3d
Test Plan: revert-hammer

Differential Revision: D22102407

Original commit changeset: c9dbb61d0538

fbshipit-source-id: d40976aa8120d2d0839624bf02c082d7d1eb610d
2020-06-22 15:39:37 -07:00
13a8ec3cc5 Revert D22102406: DNNL: enable max_pool3d and avg_pool3d
Test Plan: revert-hammer

Differential Revision: D22102406

Original commit changeset: 296a87188b79

fbshipit-source-id: ff023be5e8dd4bfcd68770cab305da6ba2e03893
2020-06-22 15:23:01 -07:00
9498e24ca8 Revert D22138737: DNNL: enable dilation conv
Test Plan: revert-hammer

Differential Revision: D22138737

Original commit changeset: 4225bc7d2624

fbshipit-source-id: 7bbafbe9f412a8f9167e3ae4425dbc933ec67c6b
2020-06-22 15:20:55 -07:00
dbcc5b7533 DNNL: enable dilation conv (#40220)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40220

Test Plan: Imported from OSS

Differential Revision: D22138737

Pulled By: VitalyFedyunin

fbshipit-source-id: 4225bc7d26241b443d18ef9d56326e5a9e6bbeda
2020-06-22 13:14:09 -07:00
c873895722 DNNL: enable max_pool3d and avg_pool3d (#35664)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35664

Test Plan: Imported from OSS

Differential Revision: D22102406

Pulled By: VitalyFedyunin

fbshipit-source-id: 296a87188b79545741f6b7e136a58e4380564f25
2020-06-22 11:57:12 -07:00
8df35fd755 DNNL: enable batchnorm3d (#35663)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35663

Test Plan: Imported from OSS

Differential Revision: D22102407

Pulled By: VitalyFedyunin

fbshipit-source-id: c9dbb61d0538ab9e1e76b2815564030b5f89d33e
2020-06-22 11:57:09 -07:00
6ba807cb43 DNNL: enable conv3d (#35662)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35662

Test Plan: Imported from OSS

Differential Revision: D22102408

Pulled By: VitalyFedyunin

fbshipit-source-id: 1e95cede429f1a950f26bc7052ab33d198857df3
2020-06-22 11:55:04 -07:00
5d4a662846 DNNL: fix F.max_pool2d and F.avg_pool2d issue when stride=None (#39221)
Summary:
For F.max_pool2d and F.avg_pool2d, a **RuntimeError** is raised when stride is **None**; this PR solves it.
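
A hedged sketch of the previously failing calls (sizes are illustrative):

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)

# stride=None defaults to kernel_size; the MKLDNN path used to raise here.
y_ref    = F.max_pool2d(x, kernel_size=2)
y_mkldnn = F.max_pool2d(x.to_mkldnn(), kernel_size=2).to_dense()
assert torch.allclose(y_ref, y_mkldnn)

z_ref    = F.avg_pool2d(x, kernel_size=2)
z_mkldnn = F.avg_pool2d(x.to_mkldnn(), kernel_size=2).to_dense()
assert torch.allclose(z_ref, z_mkldnn)
```
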
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39221

Differential Revision: D22059565

Pulled By: ngimel

fbshipit-source-id: 2080e1e010815aedd904c58552e92be9f7443d38
2020-06-15 21:00:12 -07:00
9ad14f6b43 cover nn.Conv1d in mkldnn model conversion logic (#38528)
Summary:
The current `to_mkldnn` model conversion logic under `torch.utils.mkldnn` does not cover `nn.Conv1d`. This patch fills the gap, using logic similar to `nn.Conv2d`'s. The model conversion removes unnecessary memory-format reorders of input/output tensors and thus speeds up the model.
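
A hedged usage sketch (module and shapes are illustrative) of the conversion this patch extends to `nn.Conv1d`:

```python
import copy
import torch
from torch.utils import mkldnn as mkldnn_utils

model = torch.nn.Sequential(
    torch.nn.Conv1d(16, 32, kernel_size=3, padding=1),
    torch.nn.ReLU(),
).eval()

x = torch.randn(8, 16, 100)
with torch.no_grad():
    ref = model(x)                       # dense reference, computed first

# After this patch the Conv1d child is also replaced by a prepacked MKLDNN
# wrapper, so no input/output reorder is needed between the two layers.
mkldnn_model = mkldnn_utils.to_mkldnn(copy.deepcopy(model))

with torch.no_grad():
    out = mkldnn_model(x.to_mkldnn()).to_dense()
assert torch.allclose(ref, out, atol=1e-5)
```
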
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38528

Differential Revision: D21640325

Pulled By: albanD

fbshipit-source-id: c3340153b5c524e020c097eb4b9e2ffcbde8896d
2020-05-19 13:04:18 -07:00
3526627f46 Use unittest assertWarns instead (#36411)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36411

This PR removes the PyTorch-specific assertWarns helper and uses the unittest one instead; it also formats some tests.
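
A short sketch of the standard-library pattern that replaces the custom helper:

```python
import unittest
import warnings

class ExampleTest(unittest.TestCase):
    def test_warns(self):
        # The context manager fails the test if no matching warning is emitted.
        with self.assertWarns(UserWarning):
            warnings.warn("something is deprecated", UserWarning)

        with self.assertWarnsRegex(UserWarning, "deprecated"):
            warnings.warn("something is deprecated", UserWarning)
```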

Test Plan: Imported from OSS

Differential Revision: D20998159

Pulled By: wanchaol

fbshipit-source-id: 1280ecff2dd293b95a639d13cc7417fc819c2201
2020-04-13 15:56:42 -07:00
bd604cb5b7 Upgrade MKL-DNN to DNNL v1.2 (#32422)
Summary:
## Motivation

This PR upgrades MKL-DNN from v0.20 to DNNL v1.2 and resolves https://github.com/pytorch/pytorch/issues/30300.

DNNL (Deep Neural Network Library) is the new brand of MKL-DNN, which improves performance, quality, and usability over the old version.

This PR focuses on the migration of all existing functionality, including minor fixes, performance improvements and code clean-up. It serves as the cornerstone of our future efforts to accommodate new features like OpenCL support, BF16 training, INT8 inference, etc., and to let the PyTorch community derive more benefits from the Intel Architecture.

<br>

## What's included?

Even though DNNL makes many breaking changes to the API, we managed to absorb most of them in ideep. This PR contains minimal changes to the integration code in PyTorch. Below is a summary of the changes:

<br>

**General:**

1. Replace op-level allocator with global-registered allocator

```
// before
ideep::sum::compute<AllocForMKLDNN>(scales, {x, y}, z);

// after
ideep::sum::compute(scales, {x, y}, z);
```

The allocator is now registered at `aten/src/ATen/native/mkldnn/IDeepRegistration.cpp`. Thereafter, all tensors derived from the `cpu_engine` (by default) will use the c10 allocator.

```
RegisterEngineAllocator cpu_alloc(
  ideep::engine::cpu_engine(),
  [](size_t size) {
    return c10::GetAllocator(c10::DeviceType::CPU)->raw_allocate(size);
  },
  [](void* p) {
    c10::GetAllocator(c10::DeviceType::CPU)->raw_deallocate(p);
  }
);
```
------

2. Simplify group convolution

We had a scenario in convolution where the ideep tensor shape mismatched the aten tensor's: when `groups > 1`, DNNL expects weight tensors to be 5-d with an extra group dimension, e.g. `goihw` instead of `oihw` in the 2d conv case.

As shown below, a lot of extra checks previously came with this difference in shape. Now we've completely hidden this difference in ideep, and all tensors align with PyTorch's definition, so we could safely remove these checks from both the aten and c2 integration code.

```
// aten/src/ATen/native/mkldnn/Conv.cpp

if (w.ndims() == x.ndims() + 1) {
  AT_ASSERTM(
      groups > 1,
      "Only group _mkldnn_conv2d weights could have been reordered to 5d");
  kernel_size[0] = w.get_dim(0) * w.get_dim(1);
  std::copy_n(
      w.get_dims().cbegin() + 2, x.ndims() - 1, kernel_size.begin() + 1);
} else {
  std::copy_n(w.get_dims().cbegin(), x.ndims(), kernel_size.begin());
}
```

------

3. Enable DNNL built-in cache

Previously, we stored DNNL jitted kernels along with intermediate buffers inside ideep using an LRU cache. Now we are switching to the newly added DNNL built-in cache, and **no longer** caching buffers in order to reduce memory footprint.

This change will mainly be reflected in lower memory usage in memory-profiling results. On the code side, we removed a couple of lines of `op_key_` that previously depended on the ideep cache.

------

4. Use 64-bit integer to denote dimensions

We changed the type of `ideep::dims` from `vector<int32_t>` to `vector<int64_t>`. This renders ideep dims no longer compatible with the 32-bit dims used by caffe2, so we use something like `{stride_.begin(), stride_.end()}` to cast the parameter `stride_` into an int64 vector.

<br>

**Misc changes in each commit:**

**Commit:** change build options

Some build options were slightly changed, mainly to avoid name collisions with other projects that include DNNL as a subproject. In addition, the DNNL built-in cache is enabled by the option `DNNL_ENABLE_PRIMITIVE_CACHE`.

Old | New
-- | --
WITH_EXAMPLE | MKLDNN_BUILD_EXAMPLES
WITH_TEST | MKLDNN_BUILD_TESTS
MKLDNN_THREADING | MKLDNN_CPU_RUNTIME
MKLDNN_USE_MKL | N/A (not use MKL anymore)

------

**Commit:** aten reintegration

- aten/src/ATen/native/mkldnn/BinaryOps.cpp

    Implement binary ops using new operation `binary` provided by DNNL

- aten/src/ATen/native/mkldnn/Conv.cpp

    Clean up group convolution checks
    Simplify conv backward integration

- aten/src/ATen/native/mkldnn/MKLDNNConversions.cpp

    Simplify prepacking convolution weights

- test/test_mkldnn.py

    Fixed an issue in the conv2d unit test: it didn't actually compare conv results between the mkldnn and aten implementations before. Instead, it compared mkldnn against mkldnn, since the default CPU path also goes through mkldnn. Now we use `torch.backends.mkldnn.flags` to fix this issue (a hedged sketch of this comparison pattern follows this list).

- torch/utils/mkldnn.py

    Prepack the weight tensor in the module's `__init__` to significantly improve performance
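
The comparison pattern referred to above for test/test_mkldnn.py, as a hedged sketch with an illustrative module and shapes:

```python
import torch

conv = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1)
x = torch.randn(2, 3, 16, 16)

with torch.no_grad():
    with torch.backends.mkldnn.flags(enabled=False):
        ref = conv(x)        # plain ATen CPU implementation
    out = conv(x)            # default path, free to use MKLDNN internally

assert torch.allclose(ref, out, atol=1e-4)
```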

------

**Commit:** caffe2 reintegration

- caffe2/ideep/ideep_utils.h

    Clean up unused type definitions

- caffe2/ideep/operators/adam_op.cc & caffe2/ideep/operators/momentum_sgd_op.cc

   Unify tensor initialization with `ideep::tensor::init`. Obsolete `ideep::tensor::reinit`

- caffe2/ideep/operators/conv_op.cc & caffe2/ideep/operators/quantization/int8_conv_op.cc

    Clean up group convolution checks
    Revamp convolution API

- caffe2/ideep/operators/conv_transpose_op.cc

    Clean up group convolution checks
    Clean up deconv workaround code

------

**Commit:** custom allocator

- Register c10 allocator as mentioned above

<br><br>

## Performance

We tested inference on some common models based on user scenarios, and most performance numbers are either better than or on par with MKL-DNN v0.20.

ratio: new / old | Latency (batch=1 4T) | Throughput (batch=64 56T)
-- | -- | --
pytorch resnet18 | 121.4% | 99.7%
pytorch resnet50 | 123.1% | 106.9%
pytorch resnext101_32x8d | 116.3% | 100.1%
pytorch resnext50_32x4d | 141.9% | 104.4%
pytorch mobilenet_v2 | 163.0% | 105.8%
caffe2 alexnet | 303.0% | 99.2%
caffe2 googlenet-v3 | 101.1% | 99.2%
caffe2 inception-v1 | 102.2% | 101.7%
caffe2 mobilenet-v1 | 356.1% | 253.7%
caffe2 resnet101 | 100.4% | 99.8%
caffe2 resnet152 | 99.8% | 99.8%
caffe2 shufflenet | 141.1% | 69.0% †
caffe2 squeezenet | 98.5% | 99.2%
caffe2 vgg16 | 136.8% | 100.6%
caffe2 googlenet-v3 int8 | 100.0% | 100.7%
caffe2 mobilenet-v1 int8 | 779.2% | 943.0%
caffe2 resnet50 int8 | 99.5% | 95.5%

_Configuration:
Platform: Skylake 8180
Latency Test: 4 threads, warmup 30, iteration 500, batch size 1
Throughput Test: 56 threads, warmup 30, iteration 200, batch size 64_

† Shufflenet is one of the few models that require temp buffers during inference. The performance degradation is an expected issue, since we no longer cache any buffers in ideep. As a solution, we suggest users opt for a caching allocator like **jemalloc** as a drop-in replacement for the system allocator in such heavy workloads.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32422

Test Plan:
Perf results: https://our.intern.facebook.com/intern/fblearner/details/177790608?tab=Experiment%20Results

10% improvement for ResNext with avx512, neutral on avx2

More results: https://fb.quip.com/ob10AL0bCDXW#NNNACAUoHJP

Reviewed By: yinghai

Differential Revision: D20381325

Pulled By: dzhulgakov

fbshipit-source-id: 803b906fd89ed8b723c5fcab55039efe3e4bcb77
2020-03-26 22:07:59 -07:00
67608cc018 Fix MKLDNN conv2d 5d weight handling (#34115)
Summary:
Effectively backporting c5c00c119f before that PR lands

The bug didn't manifest itself earlier because the MkldnnConv2d constructor didn't reorder the weights, so the issue arose only on the second serialization/deserialization. This also fixes the constructor to deliver better perf right away.

Note that I still serialize the 5d tensor: it was the previous behavior, we have to handle it anyway, and with https://github.com/pytorch/pytorch/issues/32422 the output of `mkldnn_reorder_conv2d_weight` will always be 4d.
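
A hedged sketch of the round trip where the bug used to appear, assuming the MKLDNN wrapper modules are ScriptModules saved with `torch.jit.save` (module and shapes are illustrative):

```python
import copy
import io
import torch
from torch.utils import mkldnn as mkldnn_utils

# groups > 1 makes the reordered MKLDNN weight 5-d, which is the case this fix
# is about.
conv = torch.nn.Conv2d(4, 8, kernel_size=3, groups=2).eval()
x = torch.randn(1, 4, 16, 16)
with torch.no_grad():
    ref = conv(x)

m = mkldnn_utils.to_mkldnn(copy.deepcopy(conv))
for _ in range(2):                       # the second round trip used to break
    buf = io.BytesIO()
    torch.jit.save(m, buf)
    buf.seek(0)
    m = torch.jit.load(buf)

assert torch.allclose(ref, m(x.to_mkldnn()).to_dense(), atol=1e-5)
```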

cc pinzhenx
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34115

Reviewed By: wanchaol

Differential Revision: D20224685

Pulled By: dzhulgakov

fbshipit-source-id: 24ca9227c4eb4c139096a64ae348808d7478d7dc
2020-03-04 11:26:38 -08:00
f050b16dd9 Move pytorch distributed tests to separate folder for contbuild. (#30445)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30445

Create distributed and rpc directories under caffe/test for better management
of unit tests.

Differential Revision: D18702786

fbshipit-source-id: e9daeed0cfb846ef68806f6decfcb57c0e0e3606
2020-01-22 21:16:59 -08:00
29f345831e Error out if legacy Tensor.new is called on alternate layouts / dtypes (#31485)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31485

Fixes: https://github.com/pytorch/pytorch/issues/22158

Test Plan: Imported from OSS

Differential Revision: D19196499

Pulled By: gchanan

fbshipit-source-id: a01ea7641b5fcd00a9d267243539ff64a5492e5f
2019-12-26 07:27:24 -08:00
3b1c3996e1 remove RTTI check for TensorImpl shadow copy (#22773)
Summary:
We introduced RTTI in a recent change: https://github.com/pytorch/pytorch/pull/21613

For the internal mobile build we don't enable '-frtti' yet. This diff replaces RTTI with an alternative approach.

According to dzhulgakov, we can compare two tensors' type_id directly in most cases. This is stricter than comparing the TensorImpl subclass type, since the TensorImpl -> type_id mapping is 1-to-n, but it's more appropriate for this use case.

The only two cases where we can relax the direct type comparison (for legacy reasons) are:
1. CPUTensor <-> CUDATensor;
2. SparseCPUTensor <-> SparseCUDATensor;
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22773

Differential Revision: D16277696

Pulled By: ljk53

fbshipit-source-id: 043e264fbacc37b7a11af2046983c70ddb62a599
2019-07-15 23:21:57 -07:00
d632b1ff3c Expose is_mkldnn to python and register it as torchscript prim op
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22386

Differential Revision: D16074722

Pulled By: bddppq

fbshipit-source-id: b9b2a05a894847640084f063fba68d9db4e6aec1
2019-07-01 12:31:59 -07:00
7d81e62562 Add mkldnn tests for running end to end resnet models
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22041

Differential Revision: D15928786

Pulled By: bddppq

fbshipit-source-id: 4b12e5bda2da13aba2d63d357a0a854d59317362
2019-06-20 22:42:49 -07:00
b6f542f8a1 Add aten mkldnn transpose (#21943)
Summary:
This PR is about:

1.  Make mkldnn reshape share the same memory for plain-format tensors.

2.  Add mkldnn transpose operator.
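
A hedged sketch of the two additions (shapes are illustrative):

```python
import torch

x = torch.randn(4, 6).to_mkldnn()

y = x.reshape(2, 12)    # plain-format MKLDNN tensor; may now alias x's memory
z = x.transpose(0, 1)   # the new transpose operator

print(y.to_dense().shape, z.to_dense().shape)   # torch.Size([2, 12]) torch.Size([6, 4])
```
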
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21943

Differential Revision: D15916063

Pulled By: bddppq

fbshipit-source-id: d1971c67341f277c1e80c1fa34e213b6c27f4062
2019-06-19 22:20:46 -07:00
c06ccbe663 Add aten mkldnn zero_ operator (#20573)
Summary:
### mkldnn backward ops list:
 - [ ] \(https://github.com/pytorch/pytorch/pull/20567) Add aten mkldnn conv2d backward operator 💛
 - [ ] \(https://github.com/pytorch/pytorch/pull/20570) Add aten mkldnn backward ops: relu, linear and reshape 💛
 - [ ] \(https://github.com/pytorch/pytorch/pull/20571) Add aten mkldnn backward ops: max_pool2d, avg_pool2d and adaptive_avg_pool2d 💛
 - [ ] \(https://github.com/pytorch/pytorch/pull/20572) Add aten mkldnn batchnorm backward operator 💛
 - [ ] \(https://github.com/pytorch/pytorch/pull/20573) Add aten mkldnn zero_ operator💛
 - [ ] \(https://github.com/pytorch/pytorch/pull/20575) Add mkldnn mul operator 💚
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20573

Differential Revision: D15820477

Pulled By: bddppq

fbshipit-source-id: 35d95f5b4e013c8db1911f52148550a2e40a2e68
2019-06-14 09:48:49 -07:00
b599bb3836 Add mkldnn mul operator (#20575)
Summary:
### mkldnn backward ops list:
 - [ ] \(https://github.com/pytorch/pytorch/pull/20567) Add aten mkldnn conv2d backward operator 💛
 - [ ] \(https://github.com/pytorch/pytorch/pull/20570) Add aten mkldnn backward ops: relu, linear and reshape 💛
 - [ ] \(https://github.com/pytorch/pytorch/pull/20571) Add aten mkldnn backward ops: max_pool2d, avg_pool2d and adaptive_avg_pool2d 💛
 - [ ] \(https://github.com/pytorch/pytorch/pull/20572) Add aten mkldnn batchnorm backward operator 💛
 - [ ] \(https://github.com/pytorch/pytorch/pull/20573) Add aten mkldnn zero_ operator💛
 - [ ] \(https://github.com/pytorch/pytorch/pull/20575) Add mkldnn mul operator 💛
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20575

Differential Revision: D15799529

Pulled By: bddppq

fbshipit-source-id: 4887d8ef1a0e316ad9db199b657d9481fc13e486
2019-06-12 22:41:51 -07:00
5744fb3007 Add mkldnn softmax operator
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21516

Differential Revision: D15712759

Pulled By: bddppq

fbshipit-source-id: bf515135263156bea1a2b3e53a47edf697b8b1e2
2019-06-07 15:22:18 -07:00
b161832f10 support ceil mode by padding changes (#21310)
Summary:
Modify the MKLDNN pooling operation to support ceil mode by adjusting the right/bottom padding accordingly. This is done similarly to Caffe (see discussion https://github.com/pytorch/pytorch/pull/19205#discussion_r276903751).

To make this possible, I split the padding into left and right (top/bottom). This naming is confusing but actually follows mkldnn's own naming for pooling::compute(). We increase the right paddings so that the output matches the expected ceil-mode output size.
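
A hedged sketch of the padding adjustment for a single spatial dimension (the real change lives in the MKLDNN integration code; this is only the arithmetic):

```python
import math

def ceil_mode_extra_padding(in_size, kernel, stride, pad):
    # Grow the right/bottom padding until the ordinary (floor-mode) output
    # formula yields the ceil-mode output size.
    out_ceil = math.ceil((in_size + 2 * pad - kernel) / stride) + 1
    return (out_ceil - 1) * stride + kernel - (in_size + 2 * pad)

# Width 7, kernel 2, stride 2, pad 0: floor mode gives 3 outputs, ceil mode
# expects 4, so one extra cell of right padding is needed.
print(ceil_mode_extra_padding(7, 2, 2, 0))   # 1
```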

Strengthened the test case.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21310

Reviewed By: bddppq

Differential Revision: D15611664

Pulled By: akyrola

fbshipit-source-id: 46b40015dafef69a8fd5e7b2c261d8dbf448cd20
2019-06-06 14:47:35 -07:00
57f932a638 Enable 'empty' function for mkldnn
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21184

Differential Revision: D15625296

Pulled By: bddppq

fbshipit-source-id: 47d26798bcf48e227ffd813f299959a7b8993641
2019-06-04 14:16:13 -07:00
ebc8d7170e fix the bug for mkldnn clone (#20943)
Summary:
This PR solves a bug when cloning an MKLDNN tensor; please see issue https://github.com/pytorch/pytorch/issues/20895.
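
A minimal sketch of the operation being fixed (shape is illustrative):

```python
import torch

x = torch.randn(4, 4).to_mkldnn()
y = x.clone()                       # the call that used to hit the bug

# The clone must be an independent copy with identical values.
print(torch.equal(x.to_dense(), y.to_dense()))   # True
```
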
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20943

Differential Revision: D15511516

Pulled By: mrshenli

fbshipit-source-id: 05b41d6c7eaf8703521f4c768b8f26ec8501dc5e
2019-05-27 12:09:52 -07:00
8cde4c4d22 Remove Variable::Impl and DifferentiableViewImpl (#17072)
Summary:
As part of the Variable/Tensor merge work: https://github.com/pytorch/pytorch/issues/13638, we make the following changes in this PR:
1. Remove the `Variable::Impl` class and the `DifferentiableViewImpl` class
2. Change all `Variable.data()` call sites to either use `Variable` directly, or use `Variable.tensor_data()`
3. Remove `Variable.data()` API
4. Add `Variable.variable_data()` that matches `tensor.data` in Python API, which creates a new `Variable` that shares the same storage and tensor metadata with the original `Variable`, but with a completely new autograd history.

After this PR, Variable doesn't wrap a Tensor internally anymore, and both Variable and Tensor use the same TensorImpl class as their `impl_`. The only difference is that Variable always has AutogradMeta in its TensorImpl, but Tensor doesn't.

**Note that this PR is BC-breaking in the following use cases:**

**Use Case 1:**
Previously, `x.data = y` worked even if `x` and `y` were of different TensorImpl types (e.g. `x` is a CPU dense tensor whose impl is of type TensorImpl, while `y` is a CPU sparse tensor whose impl is of type SparseTensorImpl). However, after this PR, `x.data = y` no longer works if `x` and `y` are of different TensorImpl types, because the underlying implementation `variable.set_data(tensor)` no longer works if `variable` and `tensor` have different TensorImpl types.
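
A hedged illustration of Use Case 1, mirroring the sparse/dense pair described above:

```python
import torch

x = torch.zeros(2, 2)                                    # dense CPU tensor
y = torch.sparse_coo_tensor(torch.tensor([[0], [0]]),
                            torch.tensor([1.]), (2, 2))  # sparse CPU tensor

x.data = y  # worked before this PR; raises after it, since the two tensors
            # have different TensorImpl types
```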

**Use Case 2:**
If a tensor `x`'s `grad` is sparse, accumulating dense gradients to `x` will change the tensor that `x.grad` is pointing to. This is better illustrated with the following example:
```python
params = torch.tensor([1.5, 1.5]).requires_grad_()
with torch.no_grad():
    # Change gradient to a sparse tensor
    params.grad = torch.sparse_coo_tensor(torch.tensor([[1, 1]]).long(), torch.tensor([1., 1.]))

grad_saved = params.grad
params.backward(torch.tensor([1.5, 1.5]))
assert id(grad_saved) == id(params.grad)  # This will fail after this PR
```
The assertion in the last line will fail after this PR, because adding dense gradients to sparse gradients will change the `params.grad` tensor reference.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17072

Differential Revision: D14075257

Pulled By: yf225

fbshipit-source-id: 0e681df641270dea586042dd26db59f2e76b5957
2019-05-23 21:09:04 -07:00
70caa2efe2 Add mkldnn sigmoid operator
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20820

Reviewed By: dzhulgakov

Differential Revision: D15455866

fbshipit-source-id: 712b06dfbd441051dc284a1acdf94926df09bc1d
2019-05-23 12:51:57 -07:00
8dedb04c26 Enable torch.jit.trace for mkldnn modules
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20800

Differential Revision: D15447892

fbshipit-source-id: 78e76523c5412c020a2bc22d6998ff7b36356720
2019-05-23 12:51:54 -07:00
63585c3b81 Add support for save and load mkldnn modules
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20799

Reviewed By: wanchaol

Differential Revision: D15447891

fbshipit-source-id: e34de946c79282fb934a5c52ff1def41c7993c75
2019-05-23 12:51:50 -07:00
cb8ff2a2b4 Add mkldnn support for adaptive_avg_pool2d (#19818)
Summary:
AdaptiveAvgPool2d is used in torchvision resnet models https://github.com/pytorch/vision/blob/9a481d0/torchvision/models/resnet.py#L145

Fixes #19797
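
A hedged usage sketch (shapes are illustrative) of the call this enables on MKLDNN tensors:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 64, 14, 14)

ref = F.adaptive_avg_pool2d(x, (1, 1))                        # ResNet-style global pool
out = F.adaptive_avg_pool2d(x.to_mkldnn(), (1, 1)).to_dense()
assert torch.allclose(ref, out, atol=1e-5)
```
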
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19818

Differential Revision: D15112777

Pulled By: bddppq

fbshipit-source-id: 6c9b29c805d28356cda49c10c2cd3ce9d7a8b3f5
2019-04-30 15:00:34 -07:00
c9f380df02 Add aten mkldnn linear operator
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19210

Reviewed By: dzhulgakov

Differential Revision: D14901641

fbshipit-source-id: 8fa68b9941fd93cea0f313a828cba34c5c81ae11
2019-04-26 13:41:57 -07:00
48b81da4cb Add aten mkldnn view operator
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19209

Reviewed By: dzhulgakov

Differential Revision: D14894545

fbshipit-source-id: 69455184811de1d1444b5d494e4a9d8c83301431
2019-04-26 13:41:54 -07:00
61d5a8dded Add aten mkldnn add operator
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19207

Reviewed By: dzhulgakov

Differential Revision: D14889477

fbshipit-source-id: 2c5e5ea5dfc26a9c9a172c5fa2c6d7584b167e16
2019-04-26 13:41:51 -07:00
fb53c189b3 Add aten mkldnn batch_norm operator
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19206

Reviewed By: dzhulgakov

Differential Revision: D14887205

fbshipit-source-id: ea00c9e3205c449d08ab29535309164f951aab95
2019-04-26 13:41:48 -07:00
4864000e55 Add aten mkldnn ops: relu, max_pool2d and avg_pool2d
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19205

Reviewed By: dzhulgakov

Differential Revision: D14850598

fbshipit-source-id: 5bbd5909c06df9c980de680ffb81bf772766c0ba
2019-04-26 13:41:44 -07:00