Commit Graph

5041 Commits

c3d05e86cc Resend "Split ATen/Parallel into interface and backend" (#20825)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20825
ghimport-source-id: 0371fbd37cb37635647d473d5ac9f2859e787061

Differential Revision: D15458073

Pulled By: ilia-cher

fbshipit-source-id: cd27d0da1691f6be1183cd152348ac0d93a53996
2019-05-24 02:03:06 -07:00
c25e33789e Lightweight at-most-once logging for API usage (#20745)
Summary:
Resubmit #20698 which got messed up.

Idea is that when PyTorch is used in a custom build environment (e.g. Facebook), it's useful to track usage of various APIs centrally. This PR introduces a simple, very lightweight mechanism to do so: only the first invocation of a trigger point is logged. This is significantly more lightweight than #18235, so we can afford to put logging in e.g. TensorImpl.

Also adds an initial list of trigger points. Trigger points are added in such a way that no static initialization triggers them, i.e. just linking with libtorch.so will not cause any logging. Further suggestions of what to log are welcome.
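
The real trigger points live in C++; a conceptual Python sketch of the at-most-once pattern (the function name and logging backend are hypothetical, and a real implementation would also need to be thread-safe):

```python
_logged_once = set()

def log_api_usage_once(event: str) -> None:
    # Only the first call for a given event reaches the logging backend;
    # later calls cost one set lookup, so triggers can sit on hot paths.
    if event in _logged_once:
        return
    _logged_once.add(event)
    print(f"API usage: {event}")  # stand-in for the central logger

log_api_usage_once("tensor.create")  # logged
log_api_usage_once("tensor.create")  # no-op
```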
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20745

Differential Revision: D15429196

Pulled By: dzhulgakov

fbshipit-source-id: a5e41a709a65b7ebccc6b95f93854e583cf20aca
2019-05-23 23:17:59 -07:00
8cde4c4d22 Remove Variable::Impl and DifferentiableViewImpl (#17072)
Summary:
As part of the Variable/Tensor merge work: https://github.com/pytorch/pytorch/issues/13638, we make the following changes in this PR:
1. Remove the `Variable::Impl` class and the `DifferentiableViewImpl` class
2. Change all `Variable.data()` call sites to either use `Variable` directly, or use `Variable.tensor_data()`
3. Remove `Variable.data()` API
4. Add `Variable.variable_data()` that matches `tensor.data` in Python API (see the sketch below), which creates a new `Variable` that shares the same storage and tensor metadata with the original `Variable`, but with a completely new autograd history.
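
A minimal sketch of the `tensor.data` semantics that `variable_data()` mirrors, assuming present-day PyTorch behavior:

```python
import torch

x = torch.ones(3, requires_grad=True)
y = x.data                    # shares storage, but has a fresh autograd history

y.add_(1)                     # the in-place change is visible through x
assert torch.equal(x, torch.full((3,), 2.0))
assert not y.requires_grad    # y is detached from x's autograd graph
```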

After this PR, Variable doesn't wrap a Tensor internally anymore, and both Variable and Tensor use the same TensorImpl class as their `impl_`. The only difference is that Variable always has AutogradMeta in its TensorImpl, but Tensor doesn't.

**Note that this PR is BC-breaking in the following use cases:**

**Use Case 1:**
Previously, `x.data = y` worked even if `x` and `y` were of different TensorImpl types (e.g. `x` is a CPU dense tensor whose impl is of type TensorImpl, while `y` is a CPU sparse tensor whose impl is of type SparseTensorImpl). However, after this PR, `x.data = y` no longer works if `x` and `y` are of different TensorImpl types, because the underlying implementation `variable.set_data(tensor)` no longer works if `variable` and `tensor` have different TensorImpl types.
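
A minimal sketch of Use Case 1 (the dense/sparse pairing is one example of mismatched TensorImpl types):

```python
import torch

x = torch.randn(2, 3)                # dense CPU tensor, impl type TensorImpl
y = torch.sparse_coo_tensor(
    torch.tensor([[0, 1], [0, 1]]),  # indices
    torch.tensor([1.0, 2.0]),        # values
    (2, 3),
)                                    # sparse CPU tensor, impl type SparseTensorImpl

x.data = y  # worked before this PR; raises a RuntimeError after it
```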

**Use Case 2:**
If a tensor `x`'s `grad` is sparse, accumulating dense gradients to `x` will change the tensor that `x.grad` is pointing to. This is better illustrated with the following example:
```python
params = torch.tensor([1.5, 1.5]).requires_grad_()
with torch.no_grad():
    # Change gradient to a sparse tensor
    params.grad = torch.sparse_coo_tensor(torch.tensor([[1, 1]]).long(), torch.tensor([1., 1.]))

grad_saved = params.grad
params.backward(torch.tensor([1.5, 1.5]))
assert id(grad_saved) == id(params.grad)  # This will fail after this PR
```
The assertion in the last line will fail after this PR, because adding dense gradients to sparse gradients will change the `params.grad` tensor reference.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17072

Differential Revision: D14075257

Pulled By: yf225

fbshipit-source-id: 0e681df641270dea586042dd26db59f2e76b5957
2019-05-23 21:09:04 -07:00
f93e0619f3 Adding ShufflenetV2 to caffe2's benchmark suite. (#20180)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20180

Adding ShufflenetV2 (Ma et al., 2018) to caffe2's benchmark suite.

To run, use: `buck run mode/opt caffe2/caffe2/python/examples:imagenet_trainer -- --train_data null --batch_size 128 --epoch_size 3200 --num_epochs 2 --num_gpus 2 --model shufflenet`

Reviewed By: bddppq, xw285cornell

Differential Revision: D15094282

fbshipit-source-id: 0e1ce9c5975868e917b0f179e2c5b15647a76b4e
2019-05-23 20:40:17 -07:00
48bf7b9be8 Fix oscillation in coalesceInsertedDataDependencies (#20833)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20833

Att. The algorithm is still "horrendously inefficient". But since we are sunsetting Nomnigraph, I just did the minimal fix here.

Reviewed By: tracelogfb

Differential Revision: D15463880

fbshipit-source-id: 413a1280a92c1923ba49031177816a2d5f888575
2019-05-23 14:04:20 -07:00
fd2aa93b37 Exposing LengthsSum/Mean/Max in pytorch (#20802)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20802

Need this for sequence model
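
For reference, a plain-Python sketch of the `LengthsSum` semantics (assuming the standard Caffe2 lengths-ops contract; Mean/Max swap in a different reduction):

```python
import torch

def lengths_sum(values, lengths):
    # Segment-reduce `values` along dim 0; segment i covers lengths[i] rows.
    out, offset = [], 0
    for n in lengths:
        out.append(values[offset:offset + n].sum(dim=0))
        offset += n
    return torch.stack(out)

v = torch.arange(6.0).reshape(6, 1)
print(lengths_sum(v, [2, 1, 3]))  # tensor([[ 1.], [ 2.], [12.]])
```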

Reviewed By: dzhulgakov

Differential Revision: D15448529

fbshipit-source-id: cd5abe3b689fc0e02feff10faf8cd61c99369f4f
2019-05-22 13:55:19 -07:00
8d7a025703 ONNX Export Scatter
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18543

Differential Revision: D14658639

Pulled By: houseroad

fbshipit-source-id: 5d7821b54d2fc93f71120155adf328897d13aff6
2019-05-22 13:31:54 -07:00
fea4a56af3 Add ability to filter metric schema in LayerModelHelper (#20786)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20786

Add a method to LayerModelHelper to filter metrics_schema. A general model builder may add metric schema that is not needed in some situations. This change adds the ability to skip the unneeded ones.
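
The LayerModelHelper method itself isn't shown here; conceptually the filtering amounts to something like this sketch (names are hypothetical):

```python
def filter_metrics_schema(metrics_schema, keep):
    # keep: predicate on the metric field name; fields it rejects are skipped
    return {name: field for name, field in metrics_schema.items() if keep(name)}

schema = {"loss": float, "calibration/bucket_0": float, "auc": float}
trimmed = filter_metrics_schema(schema, lambda n: not n.startswith("calibration/"))
print(trimmed)  # {'loss': <class 'float'>, 'auc': <class 'float'>}
```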

Reviewed By: alex1o1o7cloud

Differential Revision: D15418140

fbshipit-source-id: 520f5dffd9938cf206cb1352e2953a4d4d2b6ab1
2019-05-22 12:26:20 -07:00
fd95947e68 Revert D15248618: Split ATen/Parallel into interface and backend
Differential Revision:
D15248618

Original commit changeset: 060879266bc8

fbshipit-source-id: fc5cbb030b87613c9e15100118c3d4a064097c20
2019-05-22 09:55:51 -07:00
c4a3b4d528 Split ATen/Parallel into interface and backend (#20057)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20057
ghimport-source-id: c583f61bf661c994eb4d0625748a299e892a7246

Differential Revision: D15248618

Pulled By: ilia-cher

fbshipit-source-id: 060879266bc8616916fe220adef6ae6c0b076fbd
2019-05-21 19:15:47 -07:00
371cf109a3 Increase static tolerance for negative feature ids
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20671

Reviewed By: Wakeupbuddy

Differential Revision: D15401078

fbshipit-source-id: a946b1df6fae2851d60fadf32e57feb44ba95f38
2019-05-20 19:09:22 -07:00
0beecbdaad fix soft_nms_cpu call in BoxWithNMSLimit (#20738)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20738

D15348610 introduced a bug of misaligned arguments.

Reviewed By: isameer

Differential Revision: D15425627

fbshipit-source-id: b6345863847426ae04eb31245d13f7fcb69d0355
2019-05-20 18:49:41 -07:00
fbdafdffa1 Move bucketize_op to open source
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19952

Reviewed By: houseroad

Differential Revision: D15145552

fbshipit-source-id: e0074c878a5c164324a9cc477783285dedffd188
2019-05-20 18:03:27 -07:00
cf7ef5e631 Add onnxifi support for Int8FCDNNLowPPackedWeightBlob (#20564)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20564

Reviewed By: bddppq

Differential Revision: D15106712

fbshipit-source-id: 428db9c23cfd36ddedc8d79121fbbb3bb484c993
2019-05-20 16:57:11 -07:00
0bfc0eeef7 restore hidden visibility by default for Linux builds (#20461)
Summary:
Symbols are given hidden visibility by default on Linux to emulate the behavior on Windows.  This helps developers catch visibility issues in their streamlined Linux dev environment before being surprised, late in the process, by Windows errors.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20461

Reviewed By: kostmo

Differential Revision: D15410410

Pulled By: dzhulgakov

fbshipit-source-id: 1d684b5a9a80b692966a775c3f1c56b7c72ffc95
2019-05-20 16:49:37 -07:00
cf548ba683 De-deprecate old list and dict APIs (#20709)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20709

- Remove ArrayRef based API. This is neither the old nor the planned new API.
- De-deprecate kernels based on std::vector and std::unordered_map. We don't have the Dict/List based API figured out entirely yet, so we shouldn't push people towards using them.
  std::vector and std::unordered_map will get deprecated again once we figure out List/Dict.

Reviewed By: dzhulgakov

Differential Revision: D15417025

fbshipit-source-id: bfbb33c762e43487bb499bc8cc36d515e678f8fc
2019-05-20 13:53:00 -07:00
c062175803 Remove unused var (ws_) and use vars in undefined case for compile (#20667)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20667

Compilation errors:
```
xplat/caffe2/caffe2/utils/signal_handler.h:31:10: error: private field 'SIGINT_action_' is not used [-Werror,-Wunused-private-field]
  Action SIGINT_action_;
         ^
xplat/caffe2/caffe2/utils/signal_handler.h:32:10: error: private field 'SIGHUP_action_' is not used [-Werror,-Wunused-private-field]
  Action SIGHUP_action_;
         ^
xplat/caffe2/caffe2/utils/signal_handler.h:33:17: error: private field 'my_sigint_count_' is not used [-Werror,-Wunused-private-field]
  unsigned long my_sigint_count_;
                ^
xplat/caffe2/caffe2/utils/signal_handler.h:34:17: error: private field 'my_sighup_count_' is not used [-Werror,-Wunused-private-field]
  unsigned long my_sighup_count_;
                ^
4 errors generated.

xplat/caffe2/caffe2/share/fb/stylizer/median_blur_ops.cc:593:14: error: private field 'ws_' is not used [-Werror,-Wunused-private-field]
  Workspace* ws_;
             ^
1 error generated.
```

Reviewed By: bwasti

Differential Revision: D15402928

fbshipit-source-id: 5b98499850aa659fd37ab8e7f2e75166787b8129
2019-05-20 13:52:57 -07:00
af6eea9391 Add the support of feature store example in pytorch model in fblearner (#20040)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20040

Add end-to-end support for the feature store example in the fblearner pytorch predictor

Reviewed By: dzhulgakov

Differential Revision: D15177897

fbshipit-source-id: 0f6df8b064eb9844fc9ddae61e978d6574c22916
2019-05-20 12:58:27 -07:00
f215db9b92 InsertGuards pass
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20438

Differential Revision: D15342655

Pulled By: Krovatkin

fbshipit-source-id: a193e582d621b99f848573fb4478e7b62265dc9f
2019-05-20 10:49:19 -07:00
9b1dbffba5 Re-sync with internal repository (#20702) 2019-05-20 09:22:57 -04:00
d3059b9c49 Lightweight logging for once-only API usage 2019-05-19 23:04:40 -07:00
7b9ee598d6 separate option for FE_OVERFLOW (#20476)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20476

There are overflow exceptions raised by legitimate computations: for very negative x, sigmoid(x) = 1 / (1 + exp(-x)) = 1 / (1 + inf) = 0, where the intermediate exp(-x) overflows even though the final result is well defined.
This diff separates out the option for FE_OVERFLOW to make the caffe2_operator_throw_if_fp_exceptions=1 option less noisy.
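
In C/C++ the overflowed `exp(-x)` saturates to `inf` with the FE_OVERFLOW flag raised, and the final quotient is still a legitimate 0; a plain-Python sketch of the same situation (Python raises `OverflowError` instead of returning `inf`):

```python
import math

def sigmoid_naive(x):
    # For very negative x, exp(-x) overflows: in C it saturates to inf
    # (and the quotient is a correct 0.0) while setting FE_OVERFLOW;
    # in pure Python it raises OverflowError instead.
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_stable(x):
    # Evaluate exp only on non-positive arguments so it can never overflow.
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)
    return z / (1.0 + z)

print(sigmoid_stable(-1000.0))  # 0.0, no overflow
# sigmoid_naive(-1000.0) raises OverflowError here
```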

Reviewed By: hx89

Differential Revision: D15332947

fbshipit-source-id: 9148233f5b84551a0900f0557ba22f2b1508ae0c
2019-05-19 16:05:27 -07:00
96a1f7695f Support plot norm of specific embeddings of a LUT in diagnose_options (#19809)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19809

as title

Reviewed By: chocjy

Differential Revision: D15100505

fbshipit-source-id: cba290fd4317b260e2bf1689b9ca215d3d19a9e2
2019-05-18 01:08:45 -07:00
cb6be42403 Options based registration API (#20514)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20514

Change API from

    static auto registry = c10::RegisterOperators()
      .op("my::op",
        c10::kernel(...),
        c10::dispatchKey(...)
      );

to

    static auto registry = c10::RegisterOperators()
      .op("my::op", c10::RegisterOperators::options()
        .kernel(...)
        .dispatchKey(...)
      );

because this allows better discoverability. People looking for which options are available will find them more easily, and IDE autocompletion will work better.

Reviewed By: zdevito

Differential Revision: D15346348

fbshipit-source-id: 4b74a33b75c2b9cda4a903639fb7abd2c7cff167
2019-05-17 20:54:42 -07:00
cd28ff5395 Add support for __getstate__/__setstate__ on module (#20242)
Summary:
Adds support for `__getstate__` and `__setstate__` on modules; they are called as part of export (`torch.save()`) and import (`torch.jit.load`).
* `__getstate__` and `__setstate__` must be TorchScript functions with the signatures `() -> T` and `(T) -> None` respectively (see the sketch below)
* The results of `__getstate__` are stored using the pickler in `states.pkl` with one for each module in definition order (`__getstate__` returns `None` by default if an implementation is not provided)
    * This prevents sharing between `__getstate__` and attributes, but this should be fine since their use is mostly unrelated (attributes are for storing values to be used in script methods, `__getstate__` for running arbitrary computations during import)
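
This PR predates scripting plain `nn.Module`s; a minimal sketch of the contract in the present-day `torch.jit` idiom (module name and save path are illustrative):

```python
from typing import Tuple

import torch

class Pair(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.a = 1
        self.b = 2

    @torch.jit.export
    def __getstate__(self):
        # () -> T: runs during export
        return (self.a, self.b)

    @torch.jit.export
    def __setstate__(self, state: Tuple[int, int]):
        # (T) -> None: runs during import
        self.a, self.b = state

m = torch.jit.script(Pair())
torch.jit.save(m, "pair.pt")
restored = torch.jit.load("pair.pt")
assert (restored.a, restored.b) == (1, 2)
```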

Follow up
* Somehow replacing `__getstate__`/`__setstate__` with a `ScriptMethodStub` makes `MyScriptModule().__getstate__()` call `ScriptModule.__getstate__()` when used in Python. This should be fixed so semantics in Python are preserved, but it doesn't affect the typical usage.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20242

Pulled By: driazati

Differential Revision: D15287161

fbshipit-source-id: b3f5f33ab74a21a89e6d15460af63aff75cab2d8
2019-05-17 14:43:14 -07:00
36d3398aa5 Clang-format ImageInputOp (#20441)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20441

This op is fairly complex and the fact that it isn't formatted
correctly makes things that much harder to reason about. Clean it up.

Reviewed By: dreiss

Differential Revision: D15220006

fbshipit-source-id: 30632d8bdbf15f96e73d8b6c96c5f29c052e6e7c
2019-05-16 23:00:09 -07:00
ea9c6e7581 eliminate FE_INVALID in unit test (#20502)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20502

Following D15307410 removing more floating point exceptions in unit tests

Reviewed By: hx89

Differential Revision: D15340930

fbshipit-source-id: 269fc75e0800bc9d39126767a0f3ca15cd8b0cad
2019-05-16 21:55:28 -07:00
3c86d597c4 update legacy plus one for mpscnn
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20554

Reviewed By: jerryzh168

Differential Revision: D15362378

fbshipit-source-id: 070cd8314257386036dca89167c738c6602b3f33
2019-05-16 18:17:18 -07:00
8bdbd59d0c handle box plus one for gpu generate_proposals
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20553

Reviewed By: newstzpz

Differential Revision: D15362108

fbshipit-source-id: 53b1ef132288855f8977748442bfe5e5806c6c6e
2019-05-16 18:17:15 -07:00
373e6a78bf make box plus one a legacy argument in detection ops
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20550

Reviewed By: newstzpz

Differential Revision: D15348610

fbshipit-source-id: 12b1e119e9bc9191ba9f2aa6d695ef215780c349
2019-05-16 18:17:12 -07:00
61012080c8 split and register CollectAndDistributeFpnRpnProposals with C10
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20509

Reviewed By: newstzpz

Differential Revision: D15302181

fbshipit-source-id: 7d3b29b667cd900f2976101f35200e1ee20b0f64
2019-05-16 13:40:46 -07:00
5821a76b8e Forcing gcc ABI and safer bash scripts, v2 (#20540)
Summary:
The first time this was merged it broke master and was reverted. This time I do not add ```set -u``` to the .circleci/scripts/setup* scripts. There's still a chance that ```set -u``` breaks the binary builds on master, but at least those can be fixed in parallel and don't completely eliminate signal from all merges.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20540

Differential Revision: D15373444

Pulled By: pjh5

fbshipit-source-id: 0203c20865827366ecd8fa07b2db74d255549ed1
2019-05-16 09:40:01 -07:00
5f8e849d84 eliminate FE_INVALID in optimizer related operators and tests (#20501)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20501

Fixing FE_INVALID floating point exceptions in optimizer-related operators and their unit tests

Reviewed By: hx89

Differential Revision: D15307410

fbshipit-source-id: e5400c26e08f26191ee542fe6b02e0a69bc4e1ae
2019-05-16 08:23:46 -07:00
5b78a5eadb Memory format support for contiguous and is_contiguous (#20455)
Summary:
#19975 was split into 2 PRs.

This one:

Introduce the MemoryFormat argument to the `x.is_contiguous(memory_format=torch.channels_last)` and `y = x.contiguous(memory_format=torch.channels_last)` functions.

At this moment both functions operate only on strides and don't store any tensor state.

(Original RFC #19092)

-----

Expands the functionality of two tensor functions, `.is_contiguous` and `.contiguous` (both Python and C++ APIs).

Note: We had several complaints about `.to(memory_format)` function, and decided not to support it.

1.  `.contiguous` now supports an optional keyword-only argument, `memory_format`, which can be either `torch.contiguous_format` or `torch.channels_last`.

    - Using `torch.contiguous_format` will preserve existing `.contiguous()` behavior.

    - Calling `x.contiguous(memory_format=torch.channels_last)` returns a new tensor which maintains the same semantic layout (NCHW) but has a different memory allocation pattern.

        `x.contiguous(memory_format=torch.channels_last)` expects the input tensor to be 3d, 4d, or 5d, and fails otherwise.

2. `.is_contiguous` now supports an optional keyword-only argument, `memory_format`, which can be either `torch.contiguous_format` or `torch.channels_last`.

    - `x.is_contiguous(memory_format=torch.contiguous_format)` preserves the same functionality as `x.is_contiguous()` and remains unchanged.

    - `x.is_contiguous(memory_format=torch.channels_last)` returns true if A) the input tensor is contiguous in memory AND B) it is allocated in NHWC (or the analogous 3d/5d) format.

Note: Through the end of phase one, `x.is_contiguous(memory_format=torch.channels_last)` will recalculate the state of the tensor on every call. This functionality is going to be updated later.
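
A short sketch of the resulting API, assuming present-day PyTorch behavior (shapes are illustrative):

```python
import torch

x = torch.randn(2, 3, 4, 5)  # 4d NCHW tensor

# Default behavior is unchanged.
assert x.is_contiguous()
assert x.is_contiguous(memory_format=torch.contiguous_format)

# Reorder the underlying memory to channels-last while keeping NCHW semantics.
y = x.contiguous(memory_format=torch.channels_last)
assert y.is_contiguous(memory_format=torch.channels_last)
assert y.shape == x.shape        # semantic layout (NCHW) is preserved
assert y.stride() != x.stride()  # only the memory allocation pattern differs
```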
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20455

Differential Revision: D15341577

Pulled By: VitalyFedyunin

fbshipit-source-id: bbb6b4159a8a49149110ad321109a3742383185d
2019-05-16 07:18:24 -07:00
09f22d10a6 Infer schema for experimental ops (#20513)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20513

They've been using an old API; switch them to the new one instead.

Reviewed By: li-roy

Differential Revision: D15346349

fbshipit-source-id: 538eb460897ec6addebeebf88b316eb0d6b1dd6f
2019-05-16 01:29:35 -07:00
1891614aa5 Add GivenTensorInt16Fill (#20515)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20515

Needed by the upcoming quantized version of GenerateProposals

Reviewed By: dzhulgakov

Differential Revision: D14430952

fbshipit-source-id: ea852f04cc4b070f8fbe7a1e6535bba4d5b230fd
2019-05-15 19:45:15 -07:00
c129ab06e9 Change onnxifi workflow to support multi-group quantized & Add multi quantization info to caffe2.proto (#20439)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20439

This is the QTensorProto workflow for multi-group quantization on the C2 side.
Nothing DNNLOWP-tensor-related is included in this PR, so once we finish the Glow side, we should be able to test this PR using resnet50.

Reviewed By: yinghai

Differential Revision: D15096919

fbshipit-source-id: 741eecd59eb79d24d9fe2b035f6246d42422d25c
2019-05-15 19:24:08 -07:00
1a0f753e6e Fixing typos in schema description for BatchMatMul (#20512)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20512

Fixing typos in the schema description of one of the inputs of the BatchMatMul operator.

Reviewed By: jianyuh, BIT-silence

Differential Revision: D15343879

fbshipit-source-id: 06354e8e6b0d79fea937ed2703bb457b2d04f859
2019-05-15 18:06:30 -07:00
b3e510518b Tensor codemod for instance_norm (#20517)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20517

fixing a bug in instance_norm

Reviewed By: ezyang

Differential Revision: D15349006

fbshipit-source-id: 2496f7f372118d2713c12a6e9b3357bf6c640b71
2019-05-15 17:51:37 -07:00
161566187c enable CopyVector for type of int on CUDA (#20520)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20520

as title

Reviewed By: xianjiec

Differential Revision: D15351010

fbshipit-source-id: 99466de9da0abdffe26d6919768dcb4e52cb2ff1
2019-05-15 16:53:51 -07:00
08bdd694f9 Extract feature length information from SigridTransforms op (#20384)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20384

Pull Request resolved: https://github.com/pytorch/pytorch/pull/20171

Extract feature length information from SigridTransforms op

Reviewed By: ipiszy

Differential Revision: D15219408

fbshipit-source-id: 307d2b65b208d3af6977d90246d0372795c45815
2019-05-15 16:21:57 -07:00
73a97387c1 Replace AT_CHECK with TORCH_CHECK [shard 9/10]
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20435

Reviewed By: jerryzh168

Differential Revision: D15318877

fbshipit-source-id: 4d83571187ea14a604fef83ac355d328b46d93e1
2019-05-15 08:05:59 -07:00
8e26759f14 Back out "[pytorch][PR] Manually set _GLIBCXX_USE_CXX11_ABI in devtoolset7 binary builds"
Summary: Original commit changeset: 571bba8a93ea

Reviewed By: pjh5

Differential Revision: D15349783

fbshipit-source-id: 75c3e2b9b97e0ac0e8bcdef93e53b0d475c6fa38
2019-05-15 00:02:55 -07:00
fd18b89c98 shape inference for learning rate op (#20020)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20020

Add shape inference for the LearningRate op. The output (lr) should have the same shape as the input (iteration), but not the same type (float vs. int).

Reviewed By: un-disclosed

Differential Revision: D15112300

fbshipit-source-id: 09969aefa15172a6f3c70cd9b2548e3020da5d7a
2019-05-14 23:34:32 -07:00
33f421027c Allow recency weight pooling for fp16 (#20506)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20506

as titled

Reviewed By: alex1o1o7cloud

Differential Revision: D15342758

fbshipit-source-id: 89e7cb6d7b9511ef6c70611359736328571d7fc0
2019-05-14 20:13:38 -07:00
254de9e8ec Removing cyclic dependency (#20511)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20511

Removed the cyclic dependency between caffe2/core/net.h and workspace.h

Differential Revision: D15303412

fbshipit-source-id: 6e772e372cd0cf2af05d7815f1df8ae20bc2a65e
2019-05-14 18:55:19 -07:00
ea38fbfc5c Manually set _GLIBCXX_USE_CXX11_ABI in devtoolset7 binary builds (#20243)
Summary:
Fix for https://github.com/pytorch/pytorch/issues/17492
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20243

Differential Revision: D15348101

Pulled By: pjh5

fbshipit-source-id: 571bba8a93eaa9806db3f3d38697c26b5285da7a
2019-05-14 18:02:42 -07:00
9e7f22b223 Remove dependencies from Caffe2Go on PyTorch JIT (#20463)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20463

Source file changes mostly involve ifdef'ing out references to JIT code
from files that are part of Caffe2Go. Update internal build scripts to
remove those files from our globs.

After this, changes to most of the JIT files should not trigger mobile CI.

Reviewed By: dzhulgakov

Differential Revision: D15329407

fbshipit-source-id: 48f614c6b028eef0a03ce5161d083a3e078b0412
2019-05-14 14:36:08 -07:00
7ffc37e022 Add ShapeInference for AtomicIter Op (#20021)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20021

Add shape inference for the AtomicIter operator. The operator takes two blobs, iteration and iter_mutex, as input and outputs iteration, which should have the same type and shape as the input.

Reviewed By: un-disclosed

Differential Revision: D15111643

fbshipit-source-id: 0d06413305cc4c6257c0cfabf62fb874970803bc
2019-05-14 11:43:21 -07:00
101176870e eliminate FE_INVALID exceptions related to fp16 conversion (#20390)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20390

duc0 Ngo implemented observing floating point exceptions, but there were a couple of places where we had "benign" floating point exceptions leading to false positives. This diff eliminates one source of such false positives, namely using _mm256_cvtph_ps and _mm256_cvtps_ph on a partially uninitialized array in the remainder loop.

Reviewed By: hx89

Differential Revision: D15307358

fbshipit-source-id: 38f57dfdd90c70bc693292d2f9c33c7ba558e2c9
2019-05-13 23:42:01 -07:00