Commit Graph

35922 Commits

a90a3acbee Use JIT Plug-in for coverage to cover JIT'd functions and methods (#56310)
Summary:
This PR is step 2 (after https://github.com/pytorch/pytorch/issues/56708) to having JIT coverage--it actually uses the plug-in in CI!

Disclaimer: note that this will mark the entire JIT'd function/method as covered without seeking proof that the
compiled code has been executed. This means that even if the code chunk is merely compiled and not run, it will get
marked as covered.
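
As an illustration of that caveat, here is a minimal pure-Python sketch (the `fake_compile` decorator is hypothetical, not part of the actual plug-in) of how marking at compile time overcounts:

```python
# Hypothetical sketch: a "coverage plug-in" that marks a function as covered
# the moment it is compiled, mirroring the disclaimer above.
covered = set()

def fake_compile(fn):
    covered.add(fn.__name__)  # marked covered at compile/decoration time
    return fn

@fake_compile
def never_called():
    return 1 + 1  # counts as covered even though it never executes

print("never_called" in covered)  # True
```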

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56310

Test Plan:
We should see coverage improvements in CI afterwards. A file to look out for is `torch/jit/quantized.py`, which should have (and now does have) more coverage after this PR:
d3283ccd8c/torch/jit/quantized.py vs https://codecov.io/gh/pytorch/pytorch/src/master/torch/jit/quantized.py

More generally, the whole jit folder saw a ~3% increase in coverage, I believe.

Reviewed By: walterddr

Differential Revision: D28000672

Pulled By: janeyx99

fbshipit-source-id: 6712979d63a5e1224a92ee9bd9679ec62cf1cbba
2021-04-26 09:19:32 -07:00
1e51c05b71 Name .coverage.jit with timestamp to prevent loss of stats (#56829)
Summary:
The reason we were not seeing so many wins was that .coverage.jit would overwrite itself on every coverage run. (What a noob mistake; who wrote that code?!?!)

This should fix that.
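
A minimal sketch of the fix, assuming the data-file name is built in Python (the helper name below is hypothetical): appending a timestamp and PID keeps each run's stats file distinct.

```python
import os
import time

def coverage_data_file(prefix=".coverage.jit"):
    # A timestamp plus the PID makes the name unique per run, so successive
    # or parallel coverage runs no longer clobber each other's stats.
    return f"{prefix}.{time.strftime('%Y%m%d%H%M%S')}.{os.getpid()}"

print(coverage_data_file())
```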

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56829

Test Plan:
Coverage in CI should visibly increase. It does, somewhat:
Check out f8a475b056! New covered files include:
Classes in torch/distributed/optim
torch/utils/mkldnn.py

Reviewed By: walterddr

Differential Revision: D27984427

Pulled By: janeyx99

fbshipit-source-id: e82d074c2b4a60a5204a73efc2823824384c8bf5
2021-04-26 08:43:17 -07:00
689d3a70aa Fix broken link to fx graph quant guide in quantization.rst (#56776)
Summary:
No outstanding issue; I can create one if needed.

I was looking for that resource, and it had been moved without the documentation being updated.

Cheers

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56776

Reviewed By: heitorschueroff

Differential Revision: D27967020

Pulled By: ezyang

fbshipit-source-id: a5cd7d554da43a9c9e44966ccd0b0ad9eef2948c
2021-04-26 08:22:28 -07:00
ed9c7e187b Added OpInfo for addmm (#55920)
Summary:
Added an OpInfo for `addmm` & ported its `method_tests`

Skipping `test_variant_consistency_eager` on CPU, as it's blocked by https://github.com/pytorch/pytorch/issues/56233

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55920

Reviewed By: agolynski

Differential Revision: D27800325

Pulled By: heitorschueroff

fbshipit-source-id: 311cd26c6b491b486f652cf64275c6901fea03c5
2021-04-26 06:20:00 -07:00
b3f56ec0e0 Automated submodule update: tensorpipe (#56495)
Summary:
This is an automated pull request to update the first-party submodule for [pytorch/tensorpipe](https://github.com/pytorch/tensorpipe).

New submodule commit: 87f7681286

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56495

Test Plan: Ensure that CI jobs succeed on GitHub before landing.

Reviewed By: beauby

Differential Revision: D27886370

fbshipit-source-id: 2b6e2b38412694633517df2b0501e5da9e81656c
2021-04-26 04:53:41 -07:00
f27513e951 Fix bug in torch.sparse.addmm on CUDA when beta != 0 or 1 (#56160)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/55917, which caused `torch.sparse.addmm` to fail on CUDA whenever `beta` was different from 0 or 1
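
For reference, `addmm` computes `beta * input + alpha * (mat1 @ mat2)`, and the bug only surfaced for general `beta`. A dense pure-Python sketch of the intended semantics (an illustration, not the CUDA kernel):

```python
def addmm(inp, mat1, mat2, beta=1.0, alpha=1.0):
    # out[i][j] = beta * inp[i][j] + alpha * sum_k mat1[i][k] * mat2[k][j]
    rows, inner, cols = len(mat1), len(mat2), len(mat2[0])
    return [
        [
            beta * inp[i][j]
            + alpha * sum(mat1[i][k] * mat2[k][j] for k in range(inner))
            for j in range(cols)
        ]
        for i in range(rows)
    ]

# beta = 0.5 is exactly the kind of value the CUDA path used to mishandle.
print(addmm([[1.0, 1.0]], [[2.0]], [[3.0, 4.0]], beta=0.5))  # [[6.5, 8.5]]
```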

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56160

Reviewed By: ejguan

Differential Revision: D27825108

Pulled By: ngimel

fbshipit-source-id: 2ade5ea38c5322768dc4dffb40c65fcbb17ec201
2021-04-26 02:57:41 -07:00
f3743f097f [TensorExpr] Nuke tensorexpr::ScalarType and instead use c10::ScalarType directly. (#56825)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/56825

Test Plan: Imported from OSS

Reviewed By: bertmaher

Differential Revision: D27977461

Pulled By: ZolotukhinM

fbshipit-source-id: f8a72938ba395e426e2d9449627113abb1c9c34f
2021-04-26 01:51:21 -07:00
441c835733 [TensorExpr] Remove unused field from TensorExprKernel. (#56761)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/56761

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D27960594

Pulled By: ZolotukhinM

fbshipit-source-id: 8f2bf1d688422363b97f48045ff96601665301f5
2021-04-26 01:51:19 -07:00
1faf1f96aa [TensorExpr] Fuser: don't lift tensor constants from fusion groups. (#56756)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56756

With #56319 the TE kernel can handle tensor constants, so there is no longer
any need to lift them out and pass them as inputs.

Test Plan: Imported from OSS

Reviewed By: bertmaher

Differential Revision: D27959258

Pulled By: ZolotukhinM

fbshipit-source-id: 00269cf1c4747c10dfc40cb4e330991d0bf1e2ee
2021-04-26 01:49:26 -07:00
7b31ba4708 Fix cudnn ctc loss backward (#56639)
Summary:
Fix cudnn ctc loss backward

Fix https://github.com/pytorch/pytorch/issues/49046, which was working in pytorch 1.1

Originally modified in this PR in Oct 2019, https://github.com/pytorch/pytorch/pull/27039/files#diff-25ec2c1108ee03e2167622588ec31d167897ef1cccb12a4cfe77eb98777316daR2383-R2392

According to the original code

90ffab6e37/tools/autograd/derivatives.yaml (L1387-L1388)

and the code after PR

f461184505/tools/autograd/templates/Functions.cpp (L2456-L2465)

This `at::zeros({0}, raw_grad.options())` on line 2460 seems suspicious and is causing an `infer_size` runtime error:

```
RuntimeError: The size of tensor a (0) must match the size of tensor b (177) at non-singleton dimension 2
Exception raised from infer_size at ..\aten\src\ATen\ExpandUtils.cpp:24 (most recent call first):
```

I've modified that to `at::zeros_like(raw_grad)`, which looks more accurate.
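
The error comes from broadcasting: a gradient of shape `(0,)` cannot broadcast against the real gradient's shape, whereas `zeros_like` matches it exactly. A pure-Python sketch of the broadcastability rule, for illustration:

```python
def broadcastable(a, b):
    # NumPy/PyTorch rule: trailing dimensions must be equal, or one of the
    # pair must be 1; missing leading dimensions are treated as 1.
    for x, y in zip(reversed(a), reversed(b)):
        if x != y and x != 1 and y != 1:
            return False
    return True

print(broadcastable((0,), (4, 5, 177)))        # False: 0 vs 177 cannot broadcast
print(broadcastable((4, 5, 177), (4, 5, 177)))  # True: zeros_like matches exactly
```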

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56639

Reviewed By: mruberry

Differential Revision: D27987860

Pulled By: ngimel

fbshipit-source-id: 5ad65e78d017c26894fb26318a5992b0878d04d5
2021-04-25 22:51:19 -07:00
9eee14704a OpInfo: roll and rot90 (#56770)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56770

Reviewed By: ngimel

Differential Revision: D27987820

Pulled By: mruberry

fbshipit-source-id: c6b86cdc1b89d91eeda2215020137582e7c20c65
2021-04-25 22:12:38 -07:00
9e027d7ea3 [OpInfo] Add opinfo for transpose and its aliases (#56122)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56122

Reviewed By: ezyang

Differential Revision: D27962878

Pulled By: mruberry

fbshipit-source-id: cfd84bb0dcedeb98233a10e2c9754281f7cb76af
2021-04-25 21:58:16 -07:00
298db67220 [OpInfo] Add Function Variant and Opinfo for permute (#56125)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56125

Reviewed By: ezyang

Differential Revision: D27960312

Pulled By: mruberry

fbshipit-source-id: b9dd89f7e69d7dff29f3b53828656c13df898fa5
2021-04-25 21:26:44 -07:00
267b554b6f fx: Fix type_matches for Optional[List[int]] arguments (#56790)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56790

If the argument doesn't match `List[int]`, this code falls through to
`issubclass(argument_type, List[int])` which is invalid and raises a
`TypeError`. If this happens during the processing of a `Union` (e.g.
`Optional`), the other union types aren't given the chance to match against the
signature.

This also stops `normalize_function` from indiscriminately swallowing exceptions,
which had let this bug go unnoticed.
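
The failure mode and the fix can be sketched in plain Python (the helper below is an illustrative stand-in, not the actual `torch.fx` code): `issubclass` rejects subscripted generics with a `TypeError`, so each `Union` member has to be tried defensively.

```python
from typing import List, Optional, Union, get_args, get_origin

try:
    issubclass(type(None), List[int])  # subscripted generics are rejected
except TypeError as e:
    print("raises:", type(e).__name__)

def type_matches(signature_type, argument_type):
    # Try every Union member; a TypeError from one member (e.g. List[int])
    # must not abort matching the remaining members (e.g. NoneType).
    if get_origin(signature_type) is Union:
        return any(type_matches(t, argument_type) for t in get_args(signature_type))
    try:
        return issubclass(argument_type, signature_type)
    except TypeError:
        return False

print(type_matches(Optional[List[int]], type(None)))  # True
```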

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D27987746

Pulled By: mruberry

fbshipit-source-id: c5aa5f61a215f0f39925e7053f33bff4b5d5acc2
2021-04-25 20:28:37 -07:00
dde2bc4818 Add OPENSSL_ROOT_DIR to cmake.py (#56846)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/56846

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D27992923

Pulled By: pbelevich

fbshipit-source-id: dc2d26d4bc9d17a5da441ae4db8241609ca97c6e
2021-04-25 20:14:56 -07:00
7b74c3c70a Enable tests for dist profiling with torch.profiler (#56216)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56216

Verifies that the newly added distributed profiling works as expected for torch.profiler.

Example trace from `test_ddp_profiling`:

Note that tests are disabled internally due to an unrelated hang issue but run in OSS.
ghstack-source-id: 127357993

Reviewed By: mrshenli

Differential Revision: D27645105

fbshipit-source-id: 7ddba271acd8f7fbce1f9c5370830d5310314736
2021-04-25 19:41:27 -07:00
2d2370bb61 [Dist profiling] Fix ProcessGroupNCCL collective profiling (#55204)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55204

Implements a fix discussed offline with pritamdamia87 to run end callbacks after `CUDAFuture`'s wrapCallback has ensured appropriate synchronization. Also enables the relevant distributed profiling tests that were previously disabled for ProcessGroupNCCL.

Note that the profiling infrastructure has moved to primarily encourage the use of torch.profiler and CUPTI to trace CUDA kernels, support for distributed collectives for that will require further discussion with ilia-cher. However, this PR improves the usability of torch.autograd.profiler with respect to distributed collectives.

ghstack-source-id: 127357995

Test Plan: CI

Reviewed By: mrshenli

Differential Revision: D27491711

fbshipit-source-id: cec7703a4c5d59b5023b0aa8fef4c2e3fb8d37d0
2021-04-25 19:40:19 -07:00
70d9be0f42 Replace duplicative s with alpha (#56804)
Summary:
It is always easier to read a document when different objects/concepts are denoted by different variables/representations.
In this PR we make sure that, in the [complex autograd](https://pytorch.org/docs/master/notes/autograd.html#autograd-for-complex-numbers) documentation, the output variable and the step size are denoted by different symbols.

Fixes https://github.com/pytorch/pytorch/issues/53633

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56804

Reviewed By: anjali411

Differential Revision: D27989959

Pulled By: iramazanli

fbshipit-source-id: c271590ee744c8aeeff62bfaa2295429765ef64e
2021-04-25 16:27:09 -07:00
d4707e260b Infer types (#56832)
Summary:
Addresses:  Infer argument types for functions in JIT

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56832

Reviewed By: pbelevich

Differential Revision: D27979495

Pulled By: nikithamalgifb

fbshipit-source-id: 82156a516c7f96cdd3f7a067d41cb210a6d13a51
2021-04-25 13:01:55 -07:00
e97c17afa0 Update internal code for torch.geqrf (#56250)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56250

Moved `apply_geqrf` to `BatchLinearAlgebraKernel.cpp`. Added
`geqrf_stub` dispatch.

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D27907362

Pulled By: mruberry

fbshipit-source-id: 6719464aef29dcf3bbbde060edf79f1e32fc8ad6
2021-04-25 03:46:59 -07:00
d5ff432615 Add torch.linalg.svdvals (#56684)
Summary:
This PR adds `torch.linalg.svdvals(input, out=None)` that computes only the singular values of `input`.

Resolves https://github.com/pytorch/pytorch/issues/54155.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56684

Reviewed By: albanD

Differential Revision: D27938229

Pulled By: mruberry

fbshipit-source-id: 5ea79ad9cccf818df0fbda1f431299ebf8de3798
2021-04-25 03:42:24 -07:00
58fcf77712 Port CPU torch.geqrf to ATen (#56249)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56249

This PR ports `torch.geqrf` from TH to ATen. The CUDA path will be
implemented in a follow-up PR.
With the ATen port, support for complex and batched inputs is added.
There were no correctness tests; they are added in this PR, along with an
OpInfo for this operation.

We can implement the QR decomposition as a composition of geqrf and
orgqr (torch.linalg.householder_product).
We can also implement the least-squares solver with geqrf + ormqr +
trtrs. So it's useful to have this function modernized, at least for the
internal code.
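
To make that composition concrete, here is a pure-Python Householder QR sketch: the reflector loop plays the role of geqrf, and accumulating the reflectors into Q plays the role of orgqr / `householder_product`. This illustrates the math only, not PyTorch's implementation.

```python
import math

def householder_qr(A):
    # Factor A = Q @ R. The loop computes Householder reflectors column by
    # column (the geqrf step) and accumulates their product (the orgqr step).
    m, n = len(A), len(A[0])
    R = [row[:] for row in A]
    QT = [[float(i == j) for j in range(m)] for i in range(m)]
    for k in range(n):
        x = [R[i][k] for i in range(k, m)]
        norm = math.sqrt(sum(t * t for t in x))
        if norm == 0.0:
            continue
        v = x[:]
        v[0] += math.copysign(norm, x[0])
        vnorm2 = sum(t * t for t in v)
        # Apply H = I - 2 v v^T / (v^T v) on the left of R and of QT.
        for M in (R, QT):
            for j in range(len(M[0])):
                dot = sum(v[i] * M[k + i][j] for i in range(len(v)))
                for i in range(len(v)):
                    M[k + i][j] -= 2.0 * dot * v[i] / vnorm2
    Q = [[QT[j][i] for j in range(m)] for i in range(m)]  # Q = QT^T
    return Q, R

A = [[1.0, 2.0], [3.0, 4.0]]
Q, R = householder_qr(A)  # R is upper triangular, and Q @ R reconstructs A
```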

Resolves https://github.com/pytorch/pytorch/issues/24705

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D27907357

Pulled By: mruberry

fbshipit-source-id: 94e1806078977417e7903db76eab9d578305f585
2021-04-25 01:17:00 -07:00
805129f957 enable support for custom error messages in torch.testing (#55890)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55890

Proof-of-concept for https://github.com/pytorch/pytorch/pull/55145#issuecomment-817297273

With this the user is able to pass a custom error message to `assert_(equal|close)` which will be used in case the values mismatch. Optionally, a callable can be passed which will be called with mismatch diagnostics and should return an error message:

```python
def make_msg(a, b, info):
    return (
        f"Argh, we found {info.total_mismatches} mismatches! "
        f"That is {info.mismatch_ratio:.1%}!"
    )

torch.testing.assert_equal(torch.tensor(1), torch.tensor(2), msg=make_msg)
```

If you imagine `a` and `b` as the outputs of binary ufuncs, the error message could look like this:

```python
def make_msg(input, torch_output, numpy_output, info):
    return (
        f"For input {input} torch.binary_op() and np.binary_op() do not match: "
        f"{torch_output} != {numpy_output}"
    )

torch.testing.assert_equal(
    torch.binary_op(input),
    numpy.binary_op(input),
    msg=lambda a, b, info: make_msg(input, a, b, info),
)
```

This should make it much easier for developers to find out what is actually going wrong.

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D27903842

Pulled By: mruberry

fbshipit-source-id: 4c82e3d969e9a621789018018bec6399724cf388
2021-04-24 23:37:44 -07:00
edfbc989d1 add support for equal_nan in torch.testing.assert_close (#55788)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/55788

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D27903821

Pulled By: mruberry

fbshipit-source-id: c10254b2cdc7c1ae5a31b22913136013f0472b26
2021-04-24 23:37:43 -07:00
27148db5df Add support for scalars and numpy in torch.testing (#55786)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55786

Add support for comparing scalars as well as `np.ndarray`s with torch.testing. We are reusing the matching functionality that is already in place for tensors by casting the inputs. The approach can easily be extended if we want to support other input types, as long as they can be cast to a tensor.

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D27903814

Pulled By: mruberry

fbshipit-source-id: fe3d063d0c9513cbd8b3408a2023e94c490c817e
2021-04-24 23:37:41 -07:00
dbf3451c6e Add support for checking tensor containers in torch.testing (#55385)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55385

This renames `assert_tensors_(equal|close)` to `_check_tensors_(equal|close)` and exposes two new functions: `assert_(equal|close)`. In addition to tensor pairs, the newly added functions also support the comparison of tensors in sequences or mappings. Otherwise their signature stays the same.

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D27903805

Pulled By: mruberry

fbshipit-source-id: 719d19a1d26de8d14cb25846e3d22a6ac828c80a
2021-04-24 23:36:36 -07:00
bcef7ebd60 [NNC] Added matmul for NNC lowering/unified dtypes (#56456)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/56456

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D27977532

Pulled By: Chillee

fbshipit-source-id: c04372d988c8ef795f27037348a155894c2eddad
2021-04-24 19:15:16 -07:00
710288e413 torch.fft: Document out argument (#56732)
Summary:
Due to an oversight from https://github.com/pytorch/pytorch/issues/49335, the documentation was never updated to include the `out` arguments.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56732

Reviewed By: ezyang

Differential Revision: D27960478

Pulled By: mruberry

fbshipit-source-id: a342a4f590369d6d2e17bed014fa64e49ee72936
2021-04-24 17:14:00 -07:00
6e5ce569bd DOC: add note for torch.clamp() special case min > max See #45664 (#56367)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/45664

This PR adds a note to the documentation for `torch.clamp()` to alert users to a special case: If `min` is greater than `max`, all values are set to the `max` value.

Also, an example illustrating this case was added after the first code example and is referenced in the note.
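
The behavior in question can be illustrated with a scalar sketch of clamp's semantics (assumed from the documented behavior, not taken from the ATen kernel): the lower bound is applied first, then the upper bound, so `min > max` collapses everything to `max`.

```python
def clamp(x, lo, hi):
    # Apply the lower bound first, then the upper bound; when lo > hi the
    # second step forces every value down to hi.
    return min(max(x, lo), hi)

print([clamp(v, 3, 1) for v in (0, 2, 5)])  # [1, 1, 1]: min=3 > max=1
print([clamp(v, 1, 3) for v in (0, 2, 5)])  # [1, 2, 3]: the normal case
```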

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56367

Reviewed By: ezyang

Differential Revision: D27960553

Pulled By: mruberry

fbshipit-source-id: 9dc6016ccacebe87c809a0dd9f557b4aea0ae6f5
2021-04-24 17:09:22 -07:00
45692fbef0 [fx splitter][fx net_min] Move Splitter, Minimizer and necessary deps to OSS (#56201)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56201

Refactor Splitter and Minimizer to superclass `_SplitterBase` and `_MinimizerBase` and move them to OSS. This is needed to create an OSS example of GPU lowering with those tools.

Test Plan: CI

Reviewed By: jackm321

Differential Revision: D27629598

fbshipit-source-id: 0d4da02105ca509b31f1a6c4a39b1122c2bc7bf0
2021-04-24 15:19:12 -07:00
51bca2ca4d [caffe2] fix -Wrange-loop-construct in onnx_exporter.cc (#56759)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56759

```
 caffe2/caffe2/onnx/onnx_exporter.cc:415:21: error: loop variable 'it' creates a copy from type 'const std::pair<const std::basic_string<char>, int>' [-Werror,-Wrange-loop-construct]
    for (const auto it : blob_versions) {
                    ^
caffe2/caffe2/onnx/onnx_exporter.cc:415:10: note: use reference type 'const std::pair<const std::basic_string<char>, int> &' to prevent copying
    for (const auto it : blob_versions) {
         ^~~~~~~~~~~~~~~
                    &
```

Reviewed By: yfeldblum

Differential Revision: D27960126

fbshipit-source-id: fd46f37cf1aca9441209de8eb06add204046db95
2021-04-24 13:13:51 -07:00
4ef8205104 [fx][normalize] Allow for args to be left as args (#55995)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55995

Normalization is currently somewhat broken, but making default arguments visible still appears to work and is useful functionality to rely on. This adds an option to `NormalizeArgs`'s `__init__` called `normalize_to_only_use_kwargs`, which defaults to `True`; if set to `False`, the signature is kept as provided, but default keyword arguments are additionally filled into kwargs.

Test Plan: Added test to `test_fx_experimental`.

Reviewed By: 842974287

Differential Revision: D27759448

fbshipit-source-id: 620061fcf46d8549ac70b62aede8b6740aee3778
2021-04-24 08:15:17 -07:00
3fbc15410a Revert D27967517: [pytorch][PR] Use JIT Plug-in for coverage to cover JIT'd functions and methods
Test Plan: revert-hammer

Differential Revision:
D27967517 (88bd0510ef)

Original commit changeset: 53fd8431d772

fbshipit-source-id: 491841dcde629f1e9f8ee38be7366955c03b6e27
2021-04-24 07:53:49 -07:00
c416167fb7 Add tests for CUDAFuture (#56518)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56518

I don't think we have any tests for CUDAFuture (I couldn't find any, and I didn't write any in the past). I think especially for the two latest features added by this stack we should have a test to ensure they properly work and to catch regressions. (These tests also add indirect coverage for the more "basic" features of CUDAFuture).

I didn't know how/where to add tests for C++ ATen stuff, so instead I added these tests to the Python RPC suite, using the torch.futures.Future wrapper. (It made sense in my mind because RPC is the main user of CUDAFuture). I'll gladly accept pointers to better ways of doing this.
ghstack-source-id: 127295022

Test Plan: The tests themselves.

Reviewed By: mrshenli

Differential Revision: D27887191

fbshipit-source-id: 4ad6d81e676fe486aa8d329591ee1a3818fea059
2021-04-24 07:07:31 -07:00
a688b29750 Support custom Python classes in CUDAFuture (#56516)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56516

One problem with CUDAFuture's extraction of DataPtrs from IValues is that it only supported Python objects that could be converted to "regular" IValues (e.g., lists/dicts/tuples of ints/strings/tensors/...). One notable exception is custom Python classes, which are in fact a very common data type transferred over RPC. The only solution we found for those is to use the Python pickler to extract the tensors contained in them.

We can't insert a Python dependency directly into CUDAFuture, so instead I'm proposing to use the same indirection technique used to support `getSubValues` on Python objects: define some methods on the abstract class `PyObjectHolder` (which can be used by CUDAFuture) but only implement them in the concrete subclass `ConcretePyObjectHolder` (which is only built when Python support is enabled).

I am a bit worried about the performance toll of this (pickling isn't exactly known to be cheap) but I think we should start by providing a functionally complete API. We already have ideas on how to make this faster if needed, for example by having users provide a custom DataPtr extractor tailored to their class via a decorator. (Or just use TorchScript).
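
The pickler-based extraction idea can be sketched in pure Python (with a stand-in `Tensor` class; this is not the actual CUDAFuture code): running the pickler over an object and intercepting every tensor it visits via `persistent_id` finds tensors buried inside arbitrary custom classes.

```python
import io
import pickle

class Tensor:  # stand-in for torch.Tensor in this sketch
    pass

def extract_tensors(obj):
    # Run the pickler over an arbitrary Python object, recording every Tensor
    # it encounters and discarding the serialized bytes.
    found = []

    class Recorder(pickle.Pickler):
        def persistent_id(self, o):
            if isinstance(o, Tensor):
                found.append(o)
                return len(found) - 1  # replace the tensor with a placeholder id
            return None  # pickle everything else normally

    Recorder(io.BytesIO()).dump(obj)
    return found

class Message:  # a custom user class, as commonly sent over RPC
    def __init__(self):
        self.payload = {"weights": [Tensor(), Tensor()]}

print(len(extract_tensors(Message())))  # 2
```
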
ghstack-source-id: 127295014

Test Plan: Added a test later in the stack

Reviewed By: mrshenli

Differential Revision: D27887189

fbshipit-source-id: 9d27e4e62390b836e5bb4f06f401cc002f0cf95b
2021-04-24 07:06:28 -07:00
e4efc0c948 [Static Runtime] Enable check_for_memory_leak in StaticRuntime::benchmark (#56839)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56839

Enable check_for_memory_leak at the end of StaticRuntime::benchmark so this code is exercised more often.

Test Plan: Checked with adindexer merge net model

Reviewed By: edvgha

Differential Revision: D27417911

fbshipit-source-id: 5248942dc439fcc7301ffb0005da76374939fa96
2021-04-23 19:54:58 -07:00
34eb6c8589 [Caffe2] ScriptModuleOp support pass_inputs_as_tensor_list (#56813)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56813

When the arg `pass_inputs_as_tensor_list` is True, the input tensors are wrapped into a TensorList and passed in as a single param.

Test Plan: buck test //caffe2/caffe2/python:workspace_test -- TestScriptModule

Reviewed By: dzhulgakov

Differential Revision: D27972928

fbshipit-source-id: 5a199649445b0306f3134086c85bd55da45e1a0b
2021-04-23 18:49:57 -07:00
b2b9efb33a .github: Add initial Linux CI for CUDA (#56494)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56494

Signed-off-by: Eli Uriegas <eliuriegas@fb.com>

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D27953781

Pulled By: seemethere

fbshipit-source-id: bce9298dc40d035bfbb5057e48b99d15c13733bc
2021-04-23 18:09:08 -07:00
060e4c96ee Torchelastic: forbid mp tests running with *san (#56827)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56827

The diff makes sure that mp tests are not executed in modes that enable *san, since Python multiprocessing does not behave well with tsan and asan.

Test Plan: buck test mode/opt-tsan //caffe2/test/distributed/launcher/... -- --run-disabled

Reviewed By: cbalioglu

Differential Revision: D27976626

fbshipit-source-id: 7747d67687fa0fd095f799b3708038f672119e73
2021-04-23 17:55:26 -07:00
bd3dda95fd Make old_gpu warning dynamic (#56621)
Summary:
Compute the minimum supported CUDA architecture as the oldest GPU arch in
the arch list supported by the current build

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56621

Reviewed By: soumith

Differential Revision: D27920141

Pulled By: malfet

fbshipit-source-id: 71a42dd60c38a658ebad4544bcfb3d2d20e471b5
2021-04-23 17:52:07 -07:00
5d940e2fbc [TSAN] Fix PythonEngine data-race-on-vptr. (#56808)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56808

For information about data-race-on-vptr in general, see https://www.internalfb.com/intern/wiki/TSAN/Common_Concurrency_Mistakes/Stopping_a_Thread_in_Destructor/

Engine::~Engine() was previously tasked with stopping the threads. This causes a data race on the object's vptr when PythonEngine is being destructed. This fixes the data race by making ~PythonEngine trigger the thread stopping before going down to the base class's destructor.

Test Plan:
Many tests are affected, but here's one example:

buck test mode/dev-tsan -c fbcode.tsan_strict_mode=true //oculus/research/orcoptics/deep_learning/srg_nn/tests:test_grating_net -- 'test_train (oculus.research.orcoptics.deep_learning.srg_nn.tests.test_grating_net.TestGratingNet)' --run-disabled

Reviewed By: walterddr, albanD

Differential Revision: D27972384

fbshipit-source-id: 8b70fec8d9326497c591a2777b355ea590a85082
2021-04-23 17:39:27 -07:00
2041cd6707 Enable forward/backward compatibility in TS mobile (#56079)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/56079

Test Plan: Imported from OSS

Reviewed By: iseeyuan

Differential Revision: D27828149

Pulled By: tugsbayasgalan

fbshipit-source-id: 9291ddbf01853354fca0fa0a58b8115d5d2294da
2021-04-23 16:55:18 -07:00
be7a943bb8 s/AutoDispatchBelowAutograd/AutoDispatchBelowInplaceOrView. (#56657)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/56657

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D27931526

Pulled By: ailzhang

fbshipit-source-id: 3af718df3435e2b0b30bc62070dbdc5aeeecdfb4
2021-04-23 15:50:00 -07:00
375ebd634a [PyTorch] Break up generated tag in source (#56503)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56503

The presence of `generated` causes Phabricator and hg to think the file is generated (e.g., hg won't prompt to resolve merge conflicts with an editor). Breaking up the tag is the traditional way to solve this.
ghstack-source-id: 126965382

Test Plan: Review, builds

Reviewed By: ailzhang

Differential Revision: D27887691

fbshipit-source-id: 394a38d50289d64f8801a13f9a28f6f0f37ca59d
2021-04-23 15:46:24 -07:00
5288d05cfd Revert D27958477: [PyTorch][Edge] Add v4 and v5 models and remove unused model
Test Plan: revert-hammer

Differential Revision:
D27958477 (2e4c68a727)

Original commit changeset: 2e6f985a988d

fbshipit-source-id: 520cb8a353d91cd26cb27880a0a8e27dbfcd2d99
2021-04-23 14:42:01 -07:00
c37095760d [torch distributed] Implementing all_gather_base (#56315)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56315

This diff implements the all_gather_base in pytorch distributed.

Test Plan: dist.all_gather_base(output, input)...

Reviewed By: agolynski, amylittleyang

Differential Revision: D27488999

fbshipit-source-id: 937ec8bddf9527fa4d114f984d1d0f6a5b8c3936
2021-04-23 14:16:47 -07:00
5b7317b562 [NNC] API for Buffer Compression (#55853)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/54338

This PR adds the following API in NNC to implement "buffer compression".

```
static void compressBuffer(Buf* buf, Stmt* stmt);
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55853

Reviewed By: ezyang

Differential Revision: D27960986

Pulled By: navahgar

fbshipit-source-id: a69988e607196f3e2db0212313ea5deefb9859ac
2021-04-23 14:12:03 -07:00
e098515b89 Fix cdist backward for empty inputs (#56606)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/56606

Test Plan: Imported from OSS

Reviewed By: ailzhang

Differential Revision: D27939201

Pulled By: albanD

fbshipit-source-id: 7ac2b579577cc5b58e714935d791be26478eb83c
2021-04-23 14:08:20 -07:00
0d7e780eff Fix broadcasting of cdist backward (#56605)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56605

Fix https://github.com/pytorch/pytorch/issues/55370

Test Plan: Imported from OSS

Reviewed By: ailzhang

Differential Revision: D27939202

Pulled By: albanD

fbshipit-source-id: a4ac50a7b504c24f47f5343414fb57523546a0c7
2021-04-23 14:08:18 -07:00
3ddcc8d833 Add more test cases for cdist OpInfo and TODOs (#56604)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/56604

Test Plan: Imported from OSS

Reviewed By: ailzhang

Differential Revision: D27939203

Pulled By: albanD

fbshipit-source-id: 197de148ba00d217eb0bfc5b5724d23cf6de0910
2021-04-23 14:08:17 -07:00