Commit Graph

18149 Commits

08bdd694f9 Extract feature length information from SigridTransforms op (#20384)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20384

Pull Request resolved: https://github.com/pytorch/pytorch/pull/20171

Extract feature length information from SigridTransforms op

Reviewed By: ipiszy

Differential Revision: D15219408

fbshipit-source-id: 307d2b65b208d3af6977d90246d0372795c45815
2019-05-15 16:21:57 -07:00
428104c60a Automatic update of fbcode/onnx to ead449a30d026a7a0a59e2ba0a42ca8e52ec2359 (#20542)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20542

Previous import was e08efaa35ed54362dfa283240506c003175889b7

Included changes:
- **[ead449a3](https://github.com/onnx/onnx/commit/ead449a3)**: fix array range bug (#2015) <one-hello>
- **[0442d426](https://github.com/onnx/onnx/commit/0442d426)**: Relax constraint on subgraph input/output type and shape (#2009) <Bowen Bao>

Reviewed By: zrphercule

Differential Revision: D15350320

fbshipit-source-id: 2cc5db926785cda0b79efb6747da3900361dba76
2019-05-15 15:12:59 -07:00
8226330af3 Extend testAvailableArgTypes (#20374)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20374

This test case now also tests that the argument type works correctly in kernels that
- don't return outputs
- return multiple outputs
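
A hedged sketch of kernels exercising both cases (function names and bodies are illustrative assumptions, not taken from the PR):
```cpp
#include <tuple>
#include <ATen/ATen.h>

// Kernel with no outputs: exercises the "don't return outputs" case.
void log_stats_kernel(const at::Tensor& t) {
  (void)t;  // side effects only (e.g. logging); nothing is returned
}

// Kernel with multiple outputs: exercises the "return multiple outputs" case.
std::tuple<at::Tensor, at::Tensor> minmax_kernel(const at::Tensor& t) {
  return std::make_tuple(t.min(), t.max());
}
```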

Reviewed By: li-roy

Differential Revision: D15298233

fbshipit-source-id: 82ab9d81b55b4f9fb34d66a155cc426af8592e25
2019-05-15 14:57:40 -07:00
f89ab7b623 Allow Dict type in c10 operators (#20373)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20373

- Add support for Dict<Key, Value> arguments and returns to c10 operators
- Add support for std::unordered_map<Key, Value> to the legacy API (but not to c10 kernels)
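
A hedged sketch of a c10 kernel using the new Dict support (kernel name and header path are assumptions):
```cpp
#include <ATen/ATen.h>
#include <ATen/core/Dict.h>
#include <string>

// Hypothetical kernel: doubles every tensor in the dict, returns a new dict.
c10::Dict<std::string, at::Tensor> double_all(
    const c10::Dict<std::string, at::Tensor>& input) {
  c10::Dict<std::string, at::Tensor> out;
  for (const auto& entry : input) {
    out.insert(entry.key(), entry.value() * 2);
  }
  return out;
}
```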

Reviewed By: li-roy

Differential Revision: D15298235

fbshipit-source-id: 6d9793db1f12bea377f508a9b33a495ebe0bec18
2019-05-15 14:57:37 -07:00
a821e11127 Speed up RecordFunction with sampled callbacks (#20307)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20307
ghimport-source-id: 94adc3c3102bb6b9cb60cf6c6112e350aa954aaf

Differential Revision: D15276308

Pulled By: ilia-cher

fbshipit-source-id: c536063669d8414b4ce0b09fd5dc0d76f1e94bb5
2019-05-15 14:48:49 -07:00
b55d2dcc84 Publish c10::RegisterOperators as torch::RegisterOperators (#20334)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20334

-
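
A hedged illustration of the now-public spelling (op name and kernel are assumptions, patterned on the custom-op examples of this era):
```cpp
#include <torch/script.h>

// Toy kernel to register.
at::Tensor my_relu(const at::Tensor& input) {
  return input.clamp_min(0);
}

// Registration now spelled with the torch:: alias instead of c10::.
static auto registry = torch::RegisterOperators("myops::relu", &my_relu);
```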

Reviewed By: li-roy

Differential Revision: D15284557

fbshipit-source-id: fdd1d9f2910dbd05a869eef13ccdc68c80e6bd81
2019-05-15 13:45:07 -07:00
852f8526c5 Replace AT_CHECK with TORCH_CHECK [shard 5/10]
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20431

Reviewed By: jerryzh168

Differential Revision: D15318266

fbshipit-source-id: 500719451202458fae312aa196c0c60098d6a541
2019-05-15 12:54:08 -07:00
5243fe0350 Allow static inheritance for ScriptModules (#20503)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20503
ghimport-source-id: 35684175f0806485b074ea136548823ad1bc1c30

Differential Revision: D15341555

Pulled By: zdevito

fbshipit-source-id: ad19da3306914196dcbbcee829dcb0a9f22e3722
2019-05-15 12:41:55 -07:00
da3e74b21c define use_cuda in dropout backward to allow peephole optimization to work (#20289)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20289

Differential Revision: D15350262

Pulled By: wanchaol

fbshipit-source-id: b457304688524822c1e6f23049e05472130c1ff4
2019-05-15 11:36:06 -07:00
bd047d812e Recursively check out submodules for PyTorch
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20537

Differential Revision: D15354830

Pulled By: pjh5

fbshipit-source-id: 40902c450756dc127d34c9ec64e78d33edb6c5c9
2019-05-15 10:48:27 -07:00
72bb84c518 Provide a few default args for numpy translation (#20451)
Summary:
Add automatic translations for a few argument names that commonly differ between PyTorch and NumPy.

For now, they are as follows:

* `keepdim` -> `keepdims`
* `dim` -> `axis`
* `input` -> (any of `a`, `x`, `x1`)
* `other` -> `x2`

Basic examples:
```python
>>> t = torch.randn(10, 10)
>>> torch.sum(x=t, axis=1)
tensor([ 0.5199, -0.3768,  4.3619, -0.9105,  1.1804,  1.0837, -0.9036,  0.2365,
         1.1171, -0.0999])
```
```python
>>> torch.add(x1=5, x2=6)
tensor(11)
```

The additional overhead is zero when using traditional PyTorch argument names, and a few (usually 1) extra PyDict lookups when using NumPy argument names.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20451

Differential Revision: D15337521

Pulled By: umanwizard

fbshipit-source-id: 7a7d389786f4ccf5c86a14ecb2002c61730c51b5
2019-05-15 10:13:17 -07:00
83649ef081 Replace AT_CHECK with TORCH_CHECK [shard 1/10]
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20426

Reviewed By: jerryzh168

Differential Revision: D15318160

fbshipit-source-id: 4d1fb341ab47147d760d527b901de6ce54182753
2019-05-15 08:44:54 -07:00
2827f3ded6 Portable way of writing the warning clause
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20484

Differential Revision: D15353119

Pulled By: ezyang

fbshipit-source-id: a708548554728ec34c51a8032ceb2b12f16a8d5c
2019-05-15 08:14:01 -07:00
15c0091d8a Fix GetLastError in THAllocator for Windows
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20526

Differential Revision: D15352969

Pulled By: ezyang

fbshipit-source-id: 50a9ee10c1c80cfe737c96dd8af63a2b034686ae
2019-05-15 08:13:57 -07:00
73a97387c1 Replace AT_CHECK with TORCH_CHECK [shard 9/10]
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20435

Reviewed By: jerryzh168

Differential Revision: D15318877

fbshipit-source-id: 4d83571187ea14a604fef83ac355d328b46d93e1
2019-05-15 08:05:59 -07:00
365fc26571 Replace AT_CHECK with TORCH_CHECK [shard 8/10]
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20434

Reviewed By: jerryzh168

Differential Revision: D15318396

fbshipit-source-id: dcd0f51be2d64b9440bb95ce8f40acb12545c2f4
2019-05-15 08:05:56 -07:00
d1623f4cc9 Replace AT_CHECK with TORCH_CHECK [shard 3/10]
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20428

Reviewed By: jerryzh168

Differential Revision: D15318209

fbshipit-source-id: e492aaa79146cfce9489bdb354cc539d7c4220a7
2019-05-15 07:40:50 -07:00
9d09f5df6c Replace AT_CHECK with TORCH_CHECK [shard 7/10]
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20432

Reviewed By: jerryzh168

Differential Revision: D15318289

fbshipit-source-id: 6c443ac848fe28a1e3e8d7f33a12cd50f80b3e40
2019-05-15 07:40:47 -07:00
101067703e Fix strtod for MSVC (#20490)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/20408. Tested locally by Jonas1312.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20490

Differential Revision: D15353137

Pulled By: ezyang

fbshipit-source-id: 0c0aefe54b11d50f703171700838af51f7666418
2019-05-15 07:40:44 -07:00
97e1f07ffc Replace AT_CHECK with TORCH_CHECK [shard 10/10]
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20436

Reviewed By: jerryzh168

Differential Revision: D15318926

fbshipit-source-id: 71a43070cc50cc174f703ebc595f1d87c6fc1e91
2019-05-15 07:35:37 -07:00
8e26759f14 Back out "[pytorch][PR] Manually set _GLIBCXX_USE_CXX11_ABI in devtoolset7 binary builds"
Summary: Original commit changeset: 571bba8a93ea

Reviewed By: pjh5

Differential Revision: D15349783

fbshipit-source-id: 75c3e2b9b97e0ac0e8bcdef93e53b0d475c6fa38
2019-05-15 00:02:55 -07:00
fd18b89c98 shape inference for learning rate op (#20020)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20020

Add shape inference for the LearningRate op. The output (lr) should have a similar shape to the input (iteration), but not the same type (float vs. int).
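
A minimal Caffe2-style sketch of such an inference function, assuming the usual `TensorInferenceFunction` hook on the operator schema (the exact wiring in the diff may differ):
```cpp
#include "caffe2/core/operator_schema.h"

namespace caffe2 {

// Hedged sketch: output keeps the input's dims but is forced to float.
OPERATOR_SCHEMA(LearningRate)
    .NumInputs(1)
    .NumOutputs(1)
    .TensorInferenceFunction(
        [](const OperatorDef& /*def*/, const std::vector<TensorShape>& in) {
          std::vector<TensorShape> out(1);
          out[0] = in[0];                                    // same shape as iteration
          out[0].set_data_type(TensorProto_DataType_FLOAT);  // lr is float, not int
          return out;
        });

}  // namespace caffe2
```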

Reviewed By: un-disclosed

Differential Revision: D15112300

fbshipit-source-id: 09969aefa15172a6f3c70cd9b2548e3020da5d7a
2019-05-14 23:34:32 -07:00
33f421027c Allow recency weight pooling for fp16 (#20506)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20506

as titled

Reviewed By: alex1o1o7cloud

Differential Revision: D15342758

fbshipit-source-id: 89e7cb6d7b9511ef6c70611359736328571d7fc0
2019-05-14 20:13:38 -07:00
ea13b53856 Updating submodules
Reviewed By: cdelahousse

fbshipit-source-id: 63e9b4a8cf5b15a6ba20d1946aac36c1604d8079
2019-05-14 19:02:43 -07:00
254de9e8ec Removing cyclic dependency (#20511)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20511

Removed the cyclic dependency between caffe2/core/net.h and workspace.h.

Differential Revision: D15303412

fbshipit-source-id: 6e772e372cd0cf2af05d7815f1df8ae20bc2a65e
2019-05-14 18:55:19 -07:00
ace506fb38 Dict (#20372)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20372

Implement a Dict type that allows us to abstract away from the concrete implementation used.
The API is similar to std::unordered_map, but behind the scenes we can switch to any map implementation we like: ska::flat_hash_map, Google dense map, or any future map implementation with better performance.
Changing that implementation choice does not have to break backwards compatibility of kernel code using the Dict type.
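
A short usage sketch, assuming the type lives at `ATen/core/Dict.h` (header path is an assumption):
```cpp
#include <ATen/core/Dict.h>
#include <cassert>
#include <string>

int main() {
  c10::Dict<std::string, int64_t> counts;
  counts.insert("cats", 2);
  counts.insert("dogs", 3);

  assert(counts.contains("cats"));
  assert(counts.at("dogs") == 3);  // lookup, as with std::unordered_map::at

  // Iteration yields entries with key()/value() rather than first/second.
  for (const auto& entry : counts) {
    (void)entry.key();
    (void)entry.value();
  }
  return 0;
}
```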

Reviewed By: zdevito

Differential Revision: D15298234

fbshipit-source-id: b5ad368a9e9516030805cd8f5f1b02e3986933c0
2019-05-14 18:37:02 -07:00
56fb5e03b5 refactor registerStoragePyTypeObject (#20467)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20467

In preparation for upcoming changes to Storage for QInt8.

Reviewed By: ezyang

Differential Revision: D15330865

fbshipit-source-id: 2840e59c0bf088983f792fd724de41b3bb3dec55
2019-05-14 18:22:33 -07:00
ea38fbfc5c Manually set _GLIBCXX_USE_CXX11_ABI in devtoolset7 binary builds (#20243)
Summary:
Fix for https://github.com/pytorch/pytorch/issues/17492
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20243

Differential Revision: D15348101

Pulled By: pjh5

fbshipit-source-id: 571bba8a93eaa9806db3f3d38697c26b5285da7a
2019-05-14 18:02:42 -07:00
358fb51e77 Replace AT_CHECK with TORCH_CHECK [shard 6/10]
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20430

Reviewed By: jerryzh168

Differential Revision: D15318250

fbshipit-source-id: eaee93447d757124a0c9fb5dcde503ae6a065912
2019-05-14 16:00:59 -07:00
5b45355431 Replace AT_CHECK with TORCH_CHECK [shard 2/10]
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20427

Reviewed By: jerryzh168

Differential Revision: D15318190

fbshipit-source-id: 15518a683d7b662ef00f255134aaf9dbd183f099
2019-05-14 16:00:56 -07:00
71af7c46bb Replace AT_CHECK with TORCH_CHECK [shard 4/10]
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20429

Reviewed By: jerryzh168

Differential Revision: D15318222

fbshipit-source-id: daf693c34b4ee92e302eee679ed76a862715d1bb
2019-05-14 15:50:16 -07:00
9610f150d7 stop build spew on development (#20508)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20508
ghimport-source-id: 26a16e2918fb93058c7740afb85070e0d29b4d1b

Differential Revision: D15343207

Pulled By: zdevito

fbshipit-source-id: b6d8858024cc440d59cf88d69e0fbc0e67dc85ce
2019-05-14 15:30:52 -07:00
24cd0e08cf identify important circleci builds (#20498)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20498
ghimport-source-id: b62b5bcf73ce87b1054cad053fd1cc118a586cf6

Differential Revision: D15342506

Pulled By: suo

fbshipit-source-id: 9889103d23affe0d7eea0abfd801bae46d5238a2
2019-05-14 15:16:06 -07:00
9e7f22b223 Remove dependencies from Caffe2Go on PyTorch JIT (#20463)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20463

Source file changes mostly involve ifdef'ing-out references to JIT code
from files that are part of Caffe2Go. Update internal build scripts to
remove those files from our globs.

After this, changes to most of the JIT files should not trigger mobile CI.
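
The pattern, sketched under the assumption that the mobile guard is something like `C10_MOBILE` (a real macro in the codebase, though whether this diff uses it is an assumption):
```cpp
// Hedged sketch of the pattern described above; the specific guard macro
// used by this diff is an assumption.
#ifndef C10_MOBILE
#include <torch/csrc/jit/ir.h>
#endif

void debug_dump() {
#ifndef C10_MOBILE
  // JIT-dependent path, compiled out of Caffe2Go builds
#endif
  // mobile-safe code continues here
}
```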

Reviewed By: dzhulgakov

Differential Revision: D15329407

fbshipit-source-id: 48f614c6b028eef0a03ce5161d083a3e078b0412
2019-05-14 14:36:08 -07:00
3479777519 UpSample GPU Porting (#19630)
Summary:
resolves #16158
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19630

Differential Revision: D15335765

Pulled By: ezyang

fbshipit-source-id: 03dd590c715a65c20ac99674a5d77179cd4a50fc
2019-05-14 11:58:21 -07:00
7ffc37e022 Add ShapeInference for AtomicIter Op (#20021)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20021

Add shape inference for the AtomicIter operator. The operator takes two blobs, iteration and iter_mutex, as input and outputs iteration, which should have the same type and shape as the input.

Reviewed By: un-disclosed

Differential Revision: D15111643

fbshipit-source-id: 0d06413305cc4c6257c0cfabf62fb874970803bc
2019-05-14 11:43:21 -07:00
6e82b1c77d Split nn.MultiHeadAttention into Module + functional (#20415)
Summary:
Moving functions from torch/nn/modules/activation.py to torch/nn/functional.py. For functions not implemented (_get_input_buffer and _set_input_buffer), a TODO is added.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20415

Differential Revision: D15318078

Pulled By: jamarshon

fbshipit-source-id: 5ca698e2913821442cf8609cc61ac8190496a3c6
2019-05-14 08:41:28 -07:00
b46a630836 Update Sleef to include fix for FMA4 detection (#20450)
Summary:
FMA4 support is in bit 16 of register ECX, not EDX of the "extended
processor info" (0x80000001).

Once we verify that this change fixes https://github.com/pytorch/pytorch/issues/12112, I'll make a PR for upstream Sleef.

The mapping of registers to reg is:

```
  reg[0] = eax
  reg[1] = ebx
  reg[2] = ecx <---
  reg[3] = edx
```

Bit 16 of EDX is PAT (Page Attribute Table) on AMD CPUs, which is widely
supported; Intel CPUs do not set this bit. Checking EDX instead of ECX
therefore causes "Illegal instruction" errors on AMD CPUs that do not
support FMA4.

See https://github.com/pytorch/pytorch/issues/12112
See https://github.com/shibatch/sleef/issues/261

http://developer.amd.com/wordpress/media/2012/10/254811.pdf (Page 20)
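
A hedged stand-alone check mirroring that description (using GCC/Clang's `cpuid.h` rather than Sleef's own detection code):
```cpp
#include <cpuid.h>
#include <cstdio>

int main() {
  unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;
  if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx)) {
    bool fma4  = (ecx >> 16) & 1;  // correct: FMA4 is bit 16 of ECX
    bool wrong = (edx >> 16) & 1;  // old check: bit 16 of EDX (PAT on AMD)
    std::printf("FMA4=%d (EDX-based check would report %d)\n",
                (int)fma4, (int)wrong);
  }
  return 0;
}
```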
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20450

Differential Revision: D15324405

Pulled By: colesbury

fbshipit-source-id: 96fb344c646998ff5da19e4cdbf493f5a4e9892a
2019-05-14 08:33:18 -07:00
101176870e eliminate FE_INVALID exceptions related to fp16 conversion (#20390)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20390

duc0 Ngo implemented observing floating-point exceptions, but there were a couple of places where we had "benign" floating-point exceptions leading to false positives. This diff eliminates one source of such false positives, namely using _mm256_cvtph_ps and _mm256_cvtps_ph on a partially uninitialized array in the remainder loop.
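
A minimal sketch of the kind of fix described, assuming the fp16 data is stored as uint16_t (function name and layout are illustrative, not from the diff):
```cpp
#include <immintrin.h>  // F16C intrinsics; compile with -mf16c
#include <cstdint>
#include <cstring>

// Hedged sketch: convert the last n (< 8) fp16 values via a zero-filled
// staging buffer so _mm256_cvtph_ps never reads uninitialized lanes.
void cvt_fp16_remainder(const uint16_t* src, float* dst, int n) {
  alignas(16) uint16_t tmp[8] = {0};  // pad lanes hold a valid value (0.0h)
  std::memcpy(tmp, src, n * sizeof(uint16_t));
  __m256 v = _mm256_cvtph_ps(
      _mm_load_si128(reinterpret_cast<const __m128i*>(tmp)));
  alignas(32) float out[8];
  _mm256_store_ps(out, v);
  std::memcpy(dst, out, n * sizeof(float));
}
```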

Reviewed By: hx89

Differential Revision: D15307358

fbshipit-source-id: 38f57dfdd90c70bc693292d2f9c33c7ba558e2c9
2019-05-13 23:42:01 -07:00
8e9692df27 codemod change missing [from D13586737]
Summary: as titled

Reviewed By: jerryzh168

Differential Revision: D15327669

fbshipit-source-id: e262dacb097e91475b1925ec40b375ec6722ad5a
2019-05-13 20:44:04 -07:00
e8fb5f35f0 Bump torch proto version (#20444)
Summary:
Tagging along with the changes in #20191, which added more support for types in the pickler.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20444

Pulled By: driazati

Differential Revision: D15321463

fbshipit-source-id: 985061bf5070a7d7bad58ea8db11d531f3d13e74
2019-05-13 18:32:16 -07:00
a9aaf698a4 add c2 benchmark runs in cpp (#20108)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20108

Add cpp runs for c2, hooked up via pybinds. Print output to terminal. This is not hooked up with the pep output yet because I'd like to verify the numbers first.

Note that this isn't quite the same mechanism as the pytorch cpp hookup, which uses cpp_python_extensions. If I can use the same mechanism to pull all the inputs for c2 through cpp and do FeedBlobs in cpp, then I'll switch to that.

Reviewed By: zheng-xq

Differential Revision: D15155976

fbshipit-source-id: 708079dacd3e19aacfe43d70c5e5bc54da2cf9e3
2019-05-13 17:01:08 -07:00
d2da3ee601 temporarily disable layernorm AD (#20442)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20442
ghimport-source-id: c246ade4ee9ee31b2e3413efff3ea6a246e1837e

Differential Revision: D15321524

Pulled By: wanchaol

fbshipit-source-id: 22c77d08c91af2d83dfd2c4a84cafc56e9240033
2019-05-13 16:35:50 -07:00
f0829f37c8 Rename AT_ASSERT to TORCH_INTERNAL_ASSERT; other macro updates (#20321)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20321

First part of https://github.com/pytorch/pytorch/issues/20287

- Rename `AT_ASSERT` to `TORCH_INTERNAL_ASSERT`
- Make `TORCH_INTERNAL_ASSERT` work with variadic inputs
- Deprecated `AT_ASSERT` and `AT_ASSERTM`
- Rename `AT_CHECK` to `TORCH_CHECK`
- Make `TORCH_CHECK` give a better error message when no arguments are
  provided
- Deprecate `AT_ERROR` in favor of `TORCH_CHECK(false, ...)`
- Deprecate `AT_INDEX_ERROR` in favor of `TORCH_CHECK_INDEX(false, ...)`
- Rename `AT_WARN` to `TORCH_WARN`

No use sites are changed; I'll work on that in follow up patches
(or disable the deprecation, if necessary.)
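
Post-rename usage, as a hedged sketch (the variadic message relies on the behavior described above; the function is illustrative):
```cpp
#include <c10/util/Exception.h>

void set_chunk_size(int64_t size) {
  // User-facing check: throws c10::Error; message args are variadic.
  TORCH_CHECK(size >= 0, "chunk size must be non-negative, got ", size);
  // Internal invariant: a failure here indicates a bug in PyTorch itself.
  TORCH_INTERNAL_ASSERT(size < (int64_t(1) << 40));
}
```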

Differential Revision: D15278439

fbshipit-source-id: 7e0ed489d4e89e5f56b8ad7eafa72cb9a06065ee
2019-05-13 16:16:42 -07:00
1364104054 Fix version counter sharing in set_data() (#20391)
Summary:
In https://github.com/pytorch/pytorch/pull/18223/files#diff-77a6f3462f2233b921d3042412fed6d3R178, we used `auto saved_version_ = data_.unsafeGetTensorImpl()->version_counter().current_version()` and then `new_data_impl_copy->set_version_counter(saved_version_)`, which doesn't preserve the original semantics that `var.set_data(tensor)` should keep `var`'s version counter object intact. This PR fixes the bug and adds a test to make sure it doesn't happen again.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20391

Differential Revision: D15323430

Pulled By: yf225

fbshipit-source-id: e3ba49b51ec8ccecd51c80cb182387f74cfd2b2b
2019-05-13 16:03:42 -07:00
3a0b27b73d Move at::NonVariableTypeMode to TensorImpl, and check it in is_variable() (#20392)
Summary:
As part of the Variable/Tensor merge, we allow passing Tensor with AutogradMeta into ATen ops, but we want to make sure they are not treated as Variables (i.e. their `is_variable()` is false). This PR makes the necessary change to make this work.
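
A hedged sketch of how the mode is typically toggled, assuming the RAII guard `at::AutoNonVariableTypeMode` from this era of the codebase (guard name and header path are assumptions, not taken from the PR):
```cpp
// Hedged sketch: guard name and header location are assumptions.
#include <ATen/ATen.h>
#include <ATen/core/LegacyTypeDispatch.h>

at::Tensor run_as_plain_tensor(const at::Tensor& t) {
  // While the guard is alive, tensors carrying AutogradMeta are still
  // dispatched as plain tensors, i.e. is_variable() reports false.
  at::AutoNonVariableTypeMode guard(true);
  return t * 2;
}
```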
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20392

Differential Revision: D15321899

Pulled By: yf225

fbshipit-source-id: c2ab09db73c63bd71ba2d8391095f4d6b4240a9a
2019-05-13 15:49:23 -07:00
2dc9152dbe Automatic update of fbcode/onnx to e08efaa35ed54362dfa283240506c003175889b7 (#20443)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20443

Previous import was 5bde6371620b76302864bce90f521d72eda95d0e

Included changes:
- **[e08efaa3](https://github.com/onnx/onnx/commit/e08efaa3)**: Fix shape inference logic for TopK operator (#2005) <Hariharan Seshadri>
- **[d80ea947](https://github.com/onnx/onnx/commit/d80ea947)**: Nullary variadic (#1889) <G. Ramalingam>
- **[50dc186b](https://github.com/onnx/onnx/commit/50dc186b)**: Removed setting MD/MDd flags manually through cmake. The MTd/MT part is still necessary. Looks like CI fails without it. (#1995) <Alexander Yermolovich>
- **[e7f81c5e](https://github.com/onnx/onnx/commit/e7f81c5e)**: Move NonMaxSupression to object_detection folder (#2001) <Hector Li>
- **[86ab4517](https://github.com/onnx/onnx/commit/86ab4517)**: Prevent using invalid iterator, fix arithmetics. (#2004) <Dmitri Smirnov>

Reviewed By: zrphercule

Differential Revision: D15302141

fbshipit-source-id: 146c346c188934e5125371b261ecfde93b4aa166
2019-05-13 14:47:11 -07:00
824d4f9957 Needed fixes for binaries
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20385

Differential Revision: D15321396

Pulled By: pjh5

fbshipit-source-id: de7ca1ac928bdea3bcf6c78e84c7e9b786bcff52
2019-05-13 11:58:50 -07:00
6c3b8a24ff Make sure reducer=None is not used when fp16 embedding is enabled
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20349

Reviewed By: hyuen

Differential Revision: D15291545

fbshipit-source-id: fa5fd0b97aeca6e5f45866908f3f205b701c931b
2019-05-13 11:53:14 -07:00
63c05bffcb Fix lint
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20440

Pulled By: driazati

Differential Revision: D15320614

fbshipit-source-id: dc650c478e39d0c3e6b660c2d9ef93b3479df1ac
2019-05-13 11:37:27 -07:00