Commit Graph

183 Commits

Author SHA1 Message Date
96eec95ece torch.from_numpy for complex dtypes (#35531)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35531
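
A minimal usage sketch of the capability this commit adds (values are illustrative; `torch.from_numpy` shares memory with the source array):

```python
import numpy as np
import torch

a = np.array([1 + 2j, 3 - 4j], dtype=np.complex64)
t = torch.from_numpy(a)  # zero-copy: t shares memory with a
assert t.dtype == torch.complex64
t[0] = 5 + 6j  # writes through to the numpy array as well
```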

Differential Revision: D20693581

Pulled By: anjali411

fbshipit-source-id: d53e26b4175452fa00b287efbfceea18104c1364
2020-03-27 14:40:28 -07:00
de3b2f98db [Shape Inference] Add ssaRewrite pybind func (#35410)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35410

Reviewed By: yinghai

Differential Revision: D20653042

fbshipit-source-id: 3845413d4e80b9be4fb97dc1eb8e824a55fb7576
2020-03-26 00:46:28 -07:00
c235be42dd [jit] kill script namespace (#34515)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34515

Once upon a time we thought this was necessary. In reality it is not, so
removing it.

For backcompat, our public interface (defined in `api/`) still has
typedefs to the old `script::` names.

There was only one collision: `Pass` as a `Stmt` and `Pass` as a graph
transform. I renamed one of them.

Test Plan: Imported from OSS

Differential Revision: D20353503

Pulled By: suo

fbshipit-source-id: 48bb911ce75120a8c9e0c6fb65262ef775dfba93
2020-03-11 23:32:48 -07:00
dbe850af5b [jit] do the code reorg (#33851)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33851

Rationale and context described in #33828.

Script to reproduce the move:
https://gist.github.com/suo/16cbefaaeb67ca5a7c6caffd49b7f6e9
ghstack-source-id: 99079645

Test Plan: Make sure CI passes

Reviewed By: jamesr66a

Differential Revision: D20133869

fbshipit-source-id: 390e9241a9c85366d9005c492ac31f10aa96488e
2020-02-27 13:02:51 -08:00
c47c78d0bf Revert D19597036: More code fakefp16 mapping unification
Test Plan: revert-hammer

Differential Revision: D19597036

Original commit changeset: deed61945884

fbshipit-source-id: c057e57810a99464aefb00b645613ecd6a7c5533
2020-01-29 13:32:42 -08:00
642c9ef922 More code fakefp16 mapping unification
Summary: ATT

Reviewed By: amylittleyang

Differential Revision: D19597036

fbshipit-source-id: deed61945884fb4b01d058f3c72c75f5a937a41c
2020-01-29 11:01:24 -08:00
d2fdf140af Combine all the user inputs together and convert them to fp16 (#31898)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31898

Att

Reviewed By: tracelogfb

Differential Revision: D19291357

fbshipit-source-id: 747ed5234ca042ceeaff2d094701ead7597ac3ee
2020-01-08 14:36:42 -08:00
42324cb6e8 Change interface from map of TensorShape to shapeInfoMap (#30802)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30802

Change shape_hints from map<string, TensorShape> to ShapeInfoMap to capture dimType info from the model file.

Reviewed By: ipiszy

Differential Revision: D18821486

fbshipit-source-id: c5d9ed72e158d3698aba38900aeda00f776745b4
2019-12-10 00:35:11 -08:00
7a6c3b36a1 Switch ScriptModuleOp to use a unique_ptr
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29856

Test Plan: waitforsadcastle

Reviewed By: dzhulgakov

Differential Revision: D18516553

fbshipit-source-id: d1e2d49ec613d07b21cd30bd777fbd300032cba1
2019-11-14 19:36:00 -08:00
f0dd7517f2 Add option to clean up allocated activations between c2 runs (#29619)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29619

att

Reviewed By: houseroad

Differential Revision: D18415190

fbshipit-source-id: 739aaf436578fac635df10de42b35e2b4368df37
2019-11-13 10:30:10 -08:00
9588cd921e weight_names bug fix (#23848)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23848

Problem:
In an experiment running feed model 127607201 (/mnt/public/tracelog/feed_repro2/127607201_0.predictor), we encountered a blob dimensionality mismatch error when running the onnxified net. This is due to the model initializing input blobs in the current workspace with blob size 0, and onnxifi() falsely identifying those input blobs as weight blobs and assigning them the wrong dimensions.

Solution:
Add an option to pass the correct weight blob names to onnxifi() instead of using all blobs in the current workspace.

Reviewed By: yinghai

Differential Revision: D16661396

fbshipit-source-id: cabe44db6b64e6538bef4b65e380312214b3ba9f
2019-08-06 10:58:43 -07:00
3a12520844 Pass Variable into Caffe2 ops, by requiring that the Variable doesn't require grad (#22473)
Summary:
As part of the Variable/Tensor merge, we want to be able to pass Variables into Caffe2 without doing an extra shallow copy, to improve performance and also allow in-place mutations in Caffe2 ops. There are a few approaches outlined in https://github.com/pytorch/pytorch/pull/22418, and this PR is the chosen approach.

Specifically, we can rely on the assumption that we won't be connecting autograd to C2 gradients at any point (as it's too tricky and not that useful). Therefore, we can pass Variables into Caffe2 ops by requiring that all Variables in Caffe2 don't require grad. For code paths in Caffe2 that might potentially track gradients (e.g. `ScriptModuleOp` and `call_caffe2_op_from_c10`), we use `torch::NoGradGuard` to make sure gradients are not tracked.
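
A minimal sketch of this invariant from the Python side (assuming the `caffe2.python` workspace API; the exact enforcement point is an assumption):

```python
import torch
from caffe2.python import workspace

# A Variable that doesn't require grad can be handed to Caffe2 directly.
t = torch.ones(2, 3)  # requires_grad=False by default
workspace.FeedBlob("x", t)

# Per this PR's invariant, a grad-requiring Variable must never reach a
# Caffe2 op; paths that could track gradients (ScriptModuleOp,
# call_caffe2_op_from_c10) guard themselves with torch::NoGradGuard in C++.
t_grad = torch.ones(2, 3, requires_grad=True)
assert t_grad.requires_grad  # feeding this would violate the invariant
```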

This supersedes https://github.com/pytorch/pytorch/pull/22418.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22473

Differential Revision: D16099042

Pulled By: yf225

fbshipit-source-id: 57efc3c7cfb3048d9abe90e63759acc14ebd2972
2019-07-08 11:31:10 -07:00
5b87049c66 remove uses of std::shared_ptr<Module> (#21934)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21934
ghimport-source-id: e64ab9096f43749ead3ac5567675b815da295664

Test Plan: Imported from OSS

Differential Revision: D15892401

Pulled By: zdevito

fbshipit-source-id: 6424139206593ff944556c69d8a54723884eacaf
2019-06-25 13:24:38 -07:00
3bdde56907 Fix incorrect usage of __HIP_PLATFORM_HCC__ (#21757)
Summary:
This avoids using `__HIP_PLATFORM_HCC__` directly, in case it changes in the future.

Following up https://github.com/pytorch/pytorch/issues/21718
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21757

Reviewed By: xw285cornell

Differential Revision: D15891867

Pulled By: bddppq

fbshipit-source-id: 5de55687ab1c86eddf6b4d8d25fee48d96ec72ad
2019-06-18 18:56:32 -07:00
5a7e2ccc0b Add use_rocm flag to detect AMD build in the runtime (#21718)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21718

Adds a method to detect at runtime whether the package is built for AMD.

Reviewed By: bddppq

Differential Revision: D15795893

fbshipit-source-id: 91a21ee76b2273b1032507bdebe57e016717181d
2019-06-13 09:30:49 -07:00
4bdbd30b96 Add python binding to deserialize blob (#21532)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21532

Add python binding to deserialize blob

Reviewed By: yinghai

Differential Revision: D15706816

fbshipit-source-id: f498c7e0f7392f055b13810bbf81cba59f25e1d2
2019-06-10 10:49:21 -07:00
c25e33789e Lightweight at-most-once logging for API usage (#20745)
Summary:
Resubmit #20698 which got messed up.

The idea is that when PyTorch is used in a custom build environment (e.g. Facebook), it's useful to track usage of various APIs centrally. This PR introduces a simple, very lightweight mechanism to do so - only the first invocation of a trigger point is logged. This is significantly more lightweight than #18235, so we can afford to put logging in e.g. TensorImpl.

Also adds an initial list of trigger points. Trigger points are added in such a way that no static initialization triggers them, i.e. just linking with libtorch.so will not cause any logging. Further suggestions of what to log are welcomed.
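
A minimal Python sketch of the at-most-once semantics described above (illustrative only; the real mechanism is implemented in C++):

```python
_seen_events = set()

def log_api_usage_once(event: str) -> None:
    # Only the first invocation of each trigger point is recorded, so the
    # cost on hot paths after the first call is just a set lookup.
    if event in _seen_events:
        return
    _seen_events.add(event)
    print(f"API usage: {event}")  # a custom build could route this centrally

log_api_usage_once("tensor.create")  # logged
log_api_usage_once("tensor.create")  # ignored
```
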
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20745

Differential Revision: D15429196

Pulled By: dzhulgakov

fbshipit-source-id: a5e41a709a65b7ebccc6b95f93854e583cf20aca
2019-05-23 23:17:59 -07:00
8cde4c4d22 Remove Variable::Impl and DifferentiableViewImpl (#17072)
Summary:
As part of the Variable/Tensor merge work: https://github.com/pytorch/pytorch/issues/13638, we make the following changes in this PR:
1. Remove the `Variable::Impl` class and the `DifferentiableViewImpl` class
2. Change all `Variable.data()` call sites to either use `Variable` directly, or use `Variable.tensor_data()`
3. Remove `Variable.data()` API
4. Add `Variable.variable_data()` that matches `tensor.data` in Python API, which creates a new `Variable` that shares the same storage and tensor metadata with the original `Variable`, but with a completely new autograd history.

After this PR, Variable doesn't wrap a Tensor internally anymore, and both Variable and Tensor use the same TensorImpl class as their `impl_`. The only difference is that Variable always has AutogradMeta in its TensorImpl, but Tensor doesn't.

**Note that this PR is BC-breaking in the following use cases:**

**Use Case 1:**
Previously, `x.data = y` worked even if `x` and `y` were of different TensorImpl types (e.g. `x` is a CPU dense tensor whose impl is of type TensorImpl, while `y` is a CPU sparse tensor whose impl is of type SparseTensorImpl). However, after this PR, `x.data = y` no longer works if `x` and `y` are of different TensorImpl types, because the underlying implementation `variable.set_data(tensor)` no longer works if `variable` and `tensor` have different TensorImpl types.
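
A minimal sketch of Use Case 1 (the sparse tensor construction is illustrative):

```python
import torch

x = torch.zeros(2)  # dense CPU tensor; impl is TensorImpl
y = torch.sparse_coo_tensor(torch.tensor([[0]]), torch.tensor([1.0]), (2,))
x.data = y  # worked before this PR; fails afterwards, as the impl types differ
```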

**Use Case 2:**
If a tensor `x`'s `grad` is sparse, accumulating dense gradients to `x` will change the tensor that `x.grad` is pointing to. This is better illustrated with the following example:
```python
params = torch.tensor([1.5, 1.5]).requires_grad_()
with torch.no_grad():
    # Change gradient to a sparse tensor
    params.grad = torch.sparse_coo_tensor(torch.tensor([[1, 1]]).long(), torch.tensor([1., 1.]))

grad_saved = params.grad
params.backward(torch.tensor([1.5, 1.5]))
assert id(grad_saved) == id(params.grad)  # This will fail after this PR
```
The assertion in the last line will fail after this PR, because adding dense gradients to sparse gradients will change the `params.grad` tensor reference.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17072

Differential Revision: D14075257

Pulled By: yf225

fbshipit-source-id: 0e681df641270dea586042dd26db59f2e76b5957
2019-05-23 21:09:04 -07:00
9b1dbffba5 Re-sync with internal repository (#20702) 2019-05-20 09:22:57 -04:00
d3059b9c49 Lightweight logging for once-only API usage 2019-05-19 23:04:40 -07:00
73a97387c1 Replace AT_CHECK with TORCH_CHECK [shard 9/10]
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20435

Reviewed By: jerryzh168

Differential Revision: D15318877

fbshipit-source-id: 4d83571187ea14a604fef83ac355d328b46d93e1
2019-05-15 08:05:59 -07:00
a9aaf698a4 add c2 benchmark runs in cpp (#20108)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20108

Adds cpp runs for c2, hooked up via pybind. Prints output to the terminal. This is not hooked up with the PEP output yet because I'd like to verify the numbers first.

Note that this isn't quite the same mechanism as the pytorch cpp hookup, which uses cpp_python_extensions. If I can use the same mechanism to pull all the inputs for c2 through cpp and do FeedBlobs in cpp, then I'll switch to that.

Reviewed By: zheng-xq

Differential Revision: D15155976

fbshipit-source-id: 708079dacd3e19aacfe43d70c5e5bc54da2cf9e3
2019-05-13 17:01:08 -07:00
87a6974193 Make it possible for self.forward to return a ScriptMethod (#19217)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19217
ghimport-source-id: 6fdd7f5ac041dae950b47ca316f30682ede0b083

Reviewed By: suo

Differential Revision: D14922120

Pulled By: zdevito

fbshipit-source-id: 5e82e5d7ee72df6f401146d2519c80ea336ff40e
2019-04-24 11:14:34 -07:00
767d184b77 Add back option to not adjust output batch size (#19442)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19442

For cases like CV, some ops like transpose and tile will mangle the batch size, so we don't know how to adjust the output batch size. In this case, the current solution is to just fix the input batch size statically and not adjust the output batch size.

Reviewed By: zrphercule

Differential Revision: D15007237

fbshipit-source-id: a21b943a52ee5462d9d7804dfae44360f579f8cf
2019-04-22 12:29:24 -07:00
e7b2669151 caffe2 - Expose tensor filler util to Python (#18886)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18886

Expose tensor filler util to Python and add a unit test (both C++/Python)

Reviewed By: salexspb

Differential Revision: D14784470

fbshipit-source-id: bb8e013d1755c27c166e87d5a8491a97c65d3d8d
2019-04-08 11:54:10 -07:00
c34e5ff952 ScriptModuleOp in caffe2 (#18716)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18716

Might be useful as an intermediate stage for some systems that currently use Caffe2 nets as an execution mechanism.

Not sure it's a good idea altogether; please comment.

Limitations:
- only Tensor types as inputs/outputs
- the entire module is serialized as a zip archive inside a proto in the Caffe2 db; it'd be subject to the 4GB limit and is likely very slow. For small models it'd work, though.
- no autograd, though it can be attached in principle
- no way to retrieve parameters inside the script module from C2 runtime perspective (though they potentially can be alias-fetched and stored as individual blobs)
- after deserialization, the Python wrappers returned don't have the correct type (as we don't do the module_lookup trick)

Build-wise, I had to add a dependency from pybind_state to libtorch.so. I don't think we build the Caffe2 Python frontend independently anymore, so it should be fine.

Reviewed By: amirshim, houseroad

Differential Revision: D14339599

fbshipit-source-id: 88a37a8abd1f1c4703e5ef937031f222535d4080
2019-04-05 01:07:43 -07:00
5f5a2aaab9 Operator-level performance microbenchmarks (#18740)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18740

Test utilities for writing Caffe2/PyTorch performance microbenchmarks. Brief description of the file structure:

* benchmark_core.py : core utilities for running microbenchmark tests
* benchmark_caffe2.py : Caffe2-specific benchmark utilities
* benchmark_pytorch.py: PyTorch-specific benchmark utilities
* benchmark_runner.py : Main function. Currently it can run the microbenchmark tests in stand-alone mode. The next step is to integrate this with AI-PEP.

The utilities are located at https://github.com/pytorch/pytorch/tree/master/test to have access to both the Caffe2 and PyTorch Python frontends.

Includes two operator microbenchmarks, supporting both Caffe2 and PyTorch:
* MatMul
* Add

Reference: PyTorch benchmarks: https://github.com/pytorch/benchmark/tree/master/timing/python. In this work, we start with two example binary operators, MatMul and Add, but eventually we should also cover unary operators as in the PyTorch benchmark repo.

Reviewed By: zheng-xq

Differential Revision: D13887111

fbshipit-source-id: b7a56b95448c9ec3e674b0de0ffb96af4439bfce
2019-04-02 17:06:19 -07:00
e13101e069 support pre-convert filter format for mkldnn training mode and change 'OptimizeForIdeep' to 'OptimizeForMkldnn' (#15171)
Summary:
For MKL-DNN, the filter data will be reordered to the primitive format, which takes a lot of time. So this patch provides a method to convert the filter format before training.
"OptimizeForIdeep" is also renamed to "OptimizeForMkldnn" in this patch.
This patch depends on https://github.com/pytorch/pytorch/pull/12866
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15171

Differential Revision: D14590741

Pulled By: yinghai

fbshipit-source-id: 07971c9977edac3c8eec08ca2c39cda639683492
2019-03-29 19:00:48 -07:00
66556f48e3 Remove sinkMaxPool transformation (#17694)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17694

Remove sinkMaxPool transformation as it's unused

Differential Revision: D14328624

fbshipit-source-id: bd245403b756157120faa61a0f9253c15120e7a8
2019-03-12 20:10:46 -07:00
dec116e96f PyTorch/Caffe2 tensor interop in Python (#17190)
Summary:
Because of two separate Python extensions with different pybind instances, I have to go through a void* conversion. Since it's hidden from the user, it's fine.

New APIs added on C2 side:
- workspace.FetchTorch('blob')
- workspace.Workspace.current.blobs['blob'].to_torch()
- workspace.FeedBlob('blob', pytorch_tensor)

Works on CPU and GPU.

The only glitches are with resizing, because of the variable/tensor split.
But data sharing works properly.
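
A short usage sketch of the new APIs listed above (blob name and values are illustrative):

```python
import torch
from caffe2.python import workspace

t = torch.arange(4.0)
workspace.FeedBlob("blob", t)  # feed a PyTorch tensor into the C2 workspace

t2 = workspace.FetchTorch("blob")  # fetch it back as a torch.Tensor
t3 = workspace.Workspace.current.blobs["blob"].to_torch()  # equivalent access

t2[0] = 42.0  # data is shared, so the change is visible through the workspace
```
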
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17190

Reviewed By: ezyang

Differential Revision: D14163882

Pulled By: dzhulgakov

fbshipit-source-id: d18e5b8fcae026f393c842a1149e972515732de2
2019-03-04 11:34:01 -08:00
5b835682e3 Remove GPU dependency from ProfileObserver (#17592)
Summary:
Remove GPU dependency and register ProfileObserver.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17592

Reviewed By: ezyang

Differential Revision: D14265801

Pulled By: mdschatz

fbshipit-source-id: f98c0c32653c64a8b087c58ece4f864dfbe1d4b8
2019-03-04 10:00:46 -08:00
70ee257ad4 Fix batch insert (#17158)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17158

Because of the Reshape op, the batch size can change. This diff addresses the first-order issue raised by the multiple-batch-size system. We need to export a different real_batch_size for each max_batch_size input and attach it to the right output.

It also fixes a false exception.

Reviewed By: ipiszy

Differential Revision: D14099541

fbshipit-source-id: 0fa9e86826f417a11d2b5dd2ee60dff64a7ce8c4
2019-02-15 12:28:23 -08:00
58648a19df Create BackendTransformerBase to host common functions used for backend lowering (#17074)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17074

There are some common functionalities in backend lowering. This diff creates a base class that hosts this common code.

Reviewed By: ipiszy

Differential Revision: D14073192

fbshipit-source-id: 9617603d0e73db6f7fcc5572756b9dbab506dae5
2019-02-14 17:57:03 -08:00
b515ebc6f1 Remove fake inference for shape info in ONNXIFI transform (#17046)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17046

As we are moving to use bound shape inference, we can remove the awkward fake inference run path and make the code cleaner.

Reviewed By: ipiszy

Differential Revision: D14061501

fbshipit-source-id: b3ace98b3dabef3c3359086a0bb1410518cefa26
2019-02-14 15:12:20 -08:00
4292d13240 Keep weights name unchanged during SsaRewrite (#16932)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16932

During the onnxifi transformation, the net's SSA is rewritten. At the last step, the weight names are changed back to what they were before. This diff keeps the weight names unchanged throughout the process.

Reviewed By: yinghai

Differential Revision: D13972597

fbshipit-source-id: 7c29857f788a674edf625c073b345f2b44267b33
2019-02-11 14:55:31 -08:00
1b919ca93e Use bound shape inference in onnxifi transform (#16598)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16598

ATT.

Reviewed By: bertmaher, rdzhabarov

Differential Revision: D13893698

fbshipit-source-id: 8d2ad9814fe76924a46b450eb7ebd3601fbdbbc7
2019-02-06 16:34:37 -08:00
c3a0000864 Support communicating with C2 protobuf in Onnxifi flow (#15472)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15472

Create a path to pass a serialized C2 protobuf instead of ONNX during the ONNXIFI flow

Reviewed By: houseroad

Differential Revision: D13536603

fbshipit-source-id: 7d016474f4beedbda480ed2e2c0004af7868aafe
2019-01-07 22:12:29 -08:00
cb79e1b3a5 Clean up onnxifi transformation code (#15453)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15453

Just move things around to facilitate further development. No logic change.

Reviewed By: rdzhabarov

Differential Revision: D13533959

fbshipit-source-id: eebab1306939e802aacffb24a711d372fd67916c
2018-12-20 22:06:47 -08:00
a51fe386c8 caffe2/caffe2/contrib/script (#15007)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15007

Pull Request resolved: https://github.com/pytorch/pytorch/pull/14979

att

Reviewed By: dzhulgakov

Differential Revision: D13286191

fbshipit-source-id: b8a6bc7aea44487aea4dcf7f44c858fd30c6293c
2018-12-10 14:23:31 -08:00
a597c0ca05 Add inplace FeedTensor for python frontend (#14512)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14512

att

Reviewed By: dzhulgakov

Differential Revision: D13243278

fbshipit-source-id: 78af417d0fcd9b9791ee839d62095903e49205cb
2018-12-04 12:45:11 -08:00
735cd06536 FeedTensor returns a Tensor (#14196)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14196

Pull Request resolved: https://github.com/pytorch/pytorch/pull/13641

FeedTensor function used to take a pointer to Tensor and feed the content using Resize
and mutable_data, but since Tensor is a pointer now, we can just return a Tensor instead.

Reviewed By: dzhulgakov

Differential Revision: D13091163

fbshipit-source-id: 9abf2fd320baca76e050530c500dd29f8e2d0211
2018-11-26 13:05:44 -08:00
3fbb753512 Revert D12873145: [pt1][tensor][refactor] FeedTensor returns a Tensor
Differential Revision: D12873145

Original commit changeset: 653735c20d61

fbshipit-source-id: aa6e40a6a24c6f90acbe87b32b3be0020e2584f8
2018-11-15 14:52:46 -08:00
266bb8bf30 FeedTensor returns a Tensor (#13641)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13641

The FeedTensor function used to take a pointer to a Tensor and feed the content using Resize and mutable_data, but since Tensor is now a pointer type, we can just return a Tensor instead.

Reviewed By: ezyang

Differential Revision: D12873145

fbshipit-source-id: 653735c20d611ff6ac9e380d8b3c721cb396a28f
2018-11-13 10:50:32 -08:00
1600649792 Fix for nightly builds (#13779)
Summary:
Being tested on nightlies manually.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13779

Reviewed By: yinghai

Differential Revision: D13001930

Pulled By: pjh5

fbshipit-source-id: 954eaabe052914b7b23c74e922666bf9dbfb630a
2018-11-12 16:38:14 -08:00
8581d3ec67 Allow blacklist ops in onnxifi transform
Differential Revision: D12945523

fbshipit-source-id: cf5055652591bd1dd8d4be92b7fd6a40a0764536
2018-11-08 09:59:03 -08:00
13b9fd3e05 Renaming meta() to dtype() - 2/2 (#13334)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13334

Codemod generated with clangr shard mode, 50 files per diff,
clangr code(meta->dtype): diffusion/FBS/browse/master/fbcode/caffe2/caffe2/fb/codemods/TensorMethodRename.cpp

i-am-not-moving-c2-to-c10

Reviewed By: ezyang

Differential Revision: D12845197

fbshipit-source-id: f87eb575d3c31593ca76b70780cc4fca888e706b
2018-10-30 18:24:30 -07:00
ce469e6c71 dims() to sizes() remaining part
Summary: Made the clangr rule more robust and it discovered more callsites.

Reviewed By: smessmer

Differential Revision: D12825017

fbshipit-source-id: 3be1eeb7ea697b36ef89e78ba64c0ee1259439c4
2018-10-30 14:56:21 -07:00
dbab9b73b6 separate mkl, mklml, and mkldnn (#12170)
Summary:
1. Remove avx2 support in mkldnn
2. Separate mkl, mklml, and mkldnn
3. Fix convfusion test case
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12170

Reviewed By: yinghai

Differential Revision: D10207126

Pulled By: orionr

fbshipit-source-id: 1e62eb47943f426a89d57e2d2606439f2b04fd51
2018-10-29 10:52:55 -07:00
314d95a5f2 Renaming dims() to sizes() (caffe2/caffe2) - 3/4 (#13096)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13096

Codemod generated with clangr shard mode, 25 files per diff, for renaming dims() to sizes()

Reviewed By: ezyang

Differential Revision: D10842875

fbshipit-source-id: 1784859735ed4d1bd5ccd7ca56e289498374a68f
2018-10-25 12:14:21 -07:00
a6949abb15 Guard all Caffe2 protobuf string serializations with CAFFE_ENFORCE (fixed reverted bug) (#12848)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12848

Updated all non-test uses of protobuf::MessageLite::SerializeAsString to call SerializeAsString_EnforceCheck so that the return value is checked and an exception can be thrown on failure.

Most of the affected code was called from classes derived from BlobSerializeBase.
Didn't touch most tests and ENFORCE calls because they usually do checks
anyway.

Original commit changeset: c0760e73ecc7

Reviewed By: dzhulgakov

Differential Revision: D10453456

fbshipit-source-id: d2f2b7b4578e721924354149f08f627c7e3bf070
2018-10-23 16:21:26 -07:00