Commit Graph

136 Commits

73e1455327 [BE] Enable ruff's UP rules and autoformat test/ (#105434)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105434
Approved by: https://github.com/albanD
2023-07-19 20:36:06 +00:00
fea683491e Make torch._dynamo lazy-importable (#104368)
Use [PEP-562](https://peps.python.org/pep-0562) to import `_dynamo` and `_inductor` only when needed.

- Remove redundant imports from tests
- Add `test_lazy_imports_are_lazy` to make sure they will not get imported by accident
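
A minimal sketch of the PEP-562 mechanism this relies on (module and attribute names here are illustrative, not the actual `torch/__init__.py` code):

```python
# package/__init__.py (PEP 562): a module-level __getattr__ defers the import
import importlib

_lazy_submodules = {"_dynamo", "_inductor"}

def __getattr__(name):
    if name in _lazy_submodules:
        # imported only on first attribute access, not at `import package` time
        return importlib.import_module(f".{name}", __name__)
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```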

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104368
Approved by: https://github.com/msaroufim, https://github.com/albanD
2023-06-29 00:51:59 +00:00
ed2eb13d76 [inductor] Create triton_helpers module for helper functions (#99880)
This changes codegen of `torch.prod` from:
```python
   tl.reduce(tmp2, 1, _prod_accumulate)[:, None]
```
where `_prod_accumulate` is defined elsewhere, to

```python
   triton_helpers.prod(tmp2, 1)[:, None]
```

A quirk I uncovered though is that `TritonCodeCache` breaks if you
define any new symbol beginning with `triton_`, since it assumes that
must be the kernel name. Instead, I've made the kernel name an
explicit argument to `async_compile.triton` so it doesn't have to guess.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99880
Approved by: https://github.com/ngimel
2023-04-27 15:10:50 +00:00
ecf08a0f8b [ROCm] Enable test_filtering_env_var (#84100)
The test "test_filtering_env_var" requires CPU device_type along with GPU.
Hence enable both device_types for ROCm, since the PYTORCH_TESTING_DEVICE_ONLY_FOR env var will have the same effect as the code being removed, making the latter redundant anyway:
9e81c0c3f4/.jenkins/pytorch/test.sh (L54-L59)
9e81c0c3f4/torch/testing/_internal/common_device_type.py (L626-L634)

Signed-off-by: Jagadish Krishnamoorthy <jagdish.krishna@gmail.com>

Enables the test disabled by #56178

Pull Request resolved: https://github.com/pytorch/pytorch/pull/84100
Approved by: https://github.com/jeffdaily, https://github.com/malfet
2023-04-04 21:49:53 +00:00
129e03905d disallow invalid value ranges in torch.testing.make_tensor (#96334)
Fixes #96179.
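A usage sketch (assuming the post-change semantics, where an inverted range raises a `ValueError`):

```python
import torch
from torch.testing import make_tensor

# a valid range: low < high
t = make_tensor((2, 3), dtype=torch.float32, device="cpu", low=0.0, high=1.0)

# an inverted range is now rejected instead of silently accepted
try:
    make_tensor((2, 3), dtype=torch.float32, device="cpu", low=1.0, high=0.0)
except ValueError as e:
    print(e)
```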
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96334
Approved by: https://github.com/mruberry
2023-03-24 23:55:17 +00:00
47bfb192a7 deprecate low==high in torch.testing.make_tensor (#96333)
Addresses #96179.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96333
Approved by: https://github.com/mruberry
2023-03-24 23:55:17 +00:00
76fb9a1c7f fix low and high in torch.testing.make_tensor for integral inputs (#96124)
Fixes #96178.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96124
Approved by: https://github.com/mruberry
2023-03-24 23:55:17 +00:00
9029361f24 honor low and high for torch.bool in torch.testing.make_tensor (#96332)
Fixes #96101.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96332
Approved by: https://github.com/mruberry
2023-03-24 23:55:17 +00:00
303eb37e38 QoL improvements for torch.testing.make_tensor (#96125)
Per title. The major ones:

- Enforce keyword-only parameters for `_modify_low_high`, which takes 7 parameters (see the sketch below).
  28aa2efd14/torch/testing/_creation.py (L147)
  is just impossible to comprehend without multiple trips back and forth.
- Improve the error messages by including the offending values in the message

I'll highlight the smaller ones inline.
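
A hypothetical sketch of that keyword-only enforcement (parameter names here are illustrative; the real `_modify_low_high` signature may differ):

```python
# the bare `*` forces every argument after it to be passed by name
def _modify_low_high(*, low, high, lowest_inclusive, highest_exclusive,
                     default_low, default_high, dtype):
    ...

# call sites now document themselves, e.g.:
# _modify_low_high(low=low, high=high, lowest_inclusive=0, highest_exclusive=10,
#                  default_low=-9, default_high=9, dtype=dtype)
```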
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96125
Approved by: https://github.com/mruberry
2023-03-24 23:55:17 +00:00
090af4aa71 add proper tests for torch.testing.make_tensor (#96331)
We had some minimal tests for `torch.testing.make_tensor` before, but nothing exhaustive. This led to quite a few edge cases going undetected. This PR adds comprehensive tests and leaves a few FIXMEs in there for behavior that needs to be fixed in `make_tensor`. This will happen in later commits of this stack. Meaning, at the end of this stack, there shouldn't be any FIXME left in the tests added here.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96331
Approved by: https://github.com/mruberry
2023-03-24 23:55:17 +00:00
8aa34602f7 Jetson Update for CI Redo (#94549)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94549
Approved by: https://github.com/ezyang, https://github.com/malfet
2023-02-21 17:13:38 +00:00
889a4640a0 [ONNX] Skip import test for experimental files (#94552)
`torch.onnx._internal.fx` is experimental and is not imported when `import torch`/`import torch.onnx`.
Need to skip it in this test as it depends on `onnx-script`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94552
Approved by: https://github.com/kit1980
2023-02-10 15:58:49 +00:00
748bac8757 [BE]: Apply pyupgrade yield from and unit test alias upgrades (#94309)
Applies some more harmless pyupgrades. This one gets rid of deprecated aliases in unit tests and upgrades `yield`-in-for loops into `yield from` generators, which perform better and propagate more information and exceptions from the original generator. This is the modern recommended way of forwarding generators.
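
For reference, the before/after shape of the `yield from` upgrade (a generic sketch, not code from this PR):

```python
# before: a for loop that re-yields each item
def forward_old(gen):
    for item in gen:
        yield item

# after: `yield from` also forwards send(), throw(), and the return value
def forward_new(gen):
    yield from gen
```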
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94309
Approved by: https://github.com/albanD
2023-02-07 20:08:58 +00:00
8612ec5b90 Implement hybrid sparse to/from dense conversions. (#90177)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90177
Approved by: https://github.com/cpuhrsch, https://github.com/pearu
2023-01-12 03:31:30 +00:00
1effabe257 Support per-parameter test decoration (#91658)
Continuation of #79979.

Fixes #79161

This PR does the following:
* Expands the `parametrize_fn()` signature from returning a 3-tuple of `(test, test_name, param_kwargs)` to returning a 4-tuple of `(test, test_name, param_kwargs, decorator_fn)`. The expected signature for the addition is `decorator_fn(param_kwargs) -> List[decorator]`, i.e. given the full set of test params, return a list of decorators to apply (a sketch follows after this list).
    * `modules`, `ops`, and `parametrize` now fit the new signature, returning `decorator_fn`s instead of applying decorators themselves.
    * `instantiate_parametrized_tests()` and `instantiate_device_type_tests()` now call the returned `decorator_fn`, passing in the full set of `param_kwargs` (after composition + `device` / `dtype` additions) and applying the returned decorators.
    * Composing multiple `parametrize_fn`s also composes the corresponding `decorator_fn`s; the composed `decorator_fn` simply concatenates the decorator lists returned by the constituents.
* Expands `DecorateInfo.is_active` to support callables:
```python
DecorateInfo(
    unittest.expectedFailure, "TestOps", "test_python_ref_executor",
    device_type='cuda', active_if=lambda params: params['executor'] == 'nvfuser'
),
```
* Adds several tests to `test/test_testing.py` ensuring proper decoration using `@parametrize`, `@modules`, and `@ops`.
* (minor) Fixes a couple `ModuleInfo` naming oddities uncovered during testing.
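
A hypothetical `decorator_fn` following the new signature (the condition and decorator here are illustrative):

```python
import unittest

# given the full set of test params, return a list of decorators to apply
def decorator_fn(param_kwargs):
    decorators = []
    if param_kwargs.get("executor") == "nvfuser":
        decorators.append(unittest.expectedFailure)
    return decorators
```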
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91658
Approved by: https://github.com/malfet
2023-01-04 21:08:32 +00:00
b589e726d9 Refactor how AOTAutograd backends are defined (#89736)
There was a lot of strangeness in how AOTAutograd backends were previously defined. This refactor replaces the strangeness with something simple and straightforward. The improvements:

- There is no longer a footgun aot_autograd "backend" which doesn't actually work. No more mistyping `torch._dynamo.optimize("aot_autograd")` when you meant "aot_eager"
- Deleted aot_print because it's annoying and anyway there are no uses of it
- Instead of having BOTH the backend Subgraph and AotAutogradStrategy, there is now only an aot_autograd function which takes the kwargs to configure AOTAutograd, and then gives you a compiler function that does AOTAutograd given those kwargs (sketched below). Easy.
- The primary downside is that we are now eagerly populating all of the kwargs, and that can get us into import cycle shenanigans. Some cycles I resolved directly (e.g., we now no longer manually disable the forward function before passing it to aot_autograd; aot_autograd does it for us), but for getting inductor decompositions I had to make it take a lambda so I could lazily populate the decomps later.

New code is 130 lines shorter!
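
Roughly, the new shape is a factory that closes over its kwargs (a sketch under that description, not the exact code):

```python
def aot_autograd(**kwargs):
    # return a compiler function configured by the captured kwargs
    def compiler_fn(gm, example_inputs):
        ...  # run AOTAutograd on the graph module using `kwargs`
    return compiler_fn

# e.g., an "aot_eager"-style backend is then just one configuration:
# aot_eager = aot_autograd(fw_compiler=eager_compiler)
```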

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89736
Approved by: https://github.com/anjali411, https://github.com/albanD
2022-11-28 18:39:12 +00:00
c2cf0bde1f Move the OpInfo same-storage error to the autograd test (#88306)
This check was previously located in the `non_contiguous` test (quite
an odd location). Even more, at https://github.com/pytorch/pytorch/pull/86378#discussion_r993658395, Kshiteej found that this assert was not doing anything really.

We move it to the autograd test and make it a proper `self.assert`. We also disallow returning 1-tuples from sample_input functions, as they were breaking this assert.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88306
Approved by: https://github.com/mruberry
2022-11-21 13:59:03 +00:00
ee05f47bdd Rebase and re-land thread PG (#88795)
The previous PR (https://github.com/pytorch/pytorch/pull/88627) has been reverted due to a failed check. After rebasing and rerunning, all checks passed.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88795
Approved by: https://github.com/huydhn, https://github.com/wanchaol
2022-11-15 21:58:58 +00:00
f98edfcc48 Make TorchElastic timer importable on Windows (#88522)
Also, add `torch.distributed` to test imports, so that we would not
regress in the future

Fixes https://github.com/pytorch/pytorch/issues/85427
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88522
Approved by: https://github.com/d4l3k
2022-11-10 17:42:20 +00:00
e6561291b8 add hack to allow hybrid compressed sparse comparison in assertEqual (#88749)
Hybrid sparse CSR tensors currently cannot be compared to strided ones since `.to_dense` does not work:

```py
import torch
from torch.testing._internal.common_utils import TestCase

assertEqual = TestCase().assertEqual

actual = torch.sparse_csr_tensor([0, 2, 4], [0, 1, 0, 1], [[1, 11], [2, 12] ,[3, 13] ,[4, 14]])
expected = torch.stack([actual[0].to_dense(), actual[1].to_dense()])
assertEqual(actual, expected)
```

```
main.py:4: UserWarning: Sparse CSR tensor support is in beta state. If you miss a functionality in the sparse tensor support, please submit a feature request to https://github.com/pytorch/pytorch/issues. (Triggered internally at ../aten/src/ATen/SparseCsrTensorImpl.cpp:54.)
  actual = torch.sparse_csr_tensor([0, 2, 4], [0, 1, 0, 1], [[1, 11], [2, 12] ,[3, 13] ,[4, 14]])
Traceback (most recent call last):
  File "/home/philip/git/pytorch/torch/torch/testing/_comparison.py", line 1098, in assert_equal
    pair.compare()
  File "/home/philip/git/pytorch/torch/torch/testing/_comparison.py", line 619, in compare
    actual, expected = self._equalize_attributes(actual, expected)
  File "/home/philip/git/pytorch/torch/torch/testing/_comparison.py", line 706, in _equalize_attributes
    actual = actual.to_dense() if actual.layout != torch.strided else actual
RuntimeError: sparse_compressed_to_dense: Hybrid tensors are not supported

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "main.py", line 10, in <module>
    assertEqual(actual, expected)
  File "/home/philip/git/pytorch/torch/torch/testing/_internal/common_utils.py", line 2503, in assertEqual
    msg=(lambda generated_msg: f"{generated_msg}\n{msg}") if isinstance(msg, str) and self.longMessage else msg,
  File "/home/philip/git/pytorch/torch/torch/testing/_comparison.py", line 1112, in assert_equal
    ) from error

RuntimeError: Comparing

TensorOrArrayPair(
    id=(),
    actual=tensor(crow_indices=tensor([0, 2, 4]),
       col_indices=tensor([0, 1, 0, 1]),
       values=tensor([[ 1, 11],
                      [ 2, 12],
                      [ 3, 13],
                      [ 4, 14]]), size=(2, 2, 2), nnz=4,
       layout=torch.sparse_csr),
    expected=tensor([[[ 1, 11],
         [ 2, 12]],

        [[ 3, 13],
         [ 4, 14]]]),
    rtol=0.0,
    atol=0.0,
    equal_nan=True,
    check_device=False,
    check_dtype=True,
    check_layout=False,
    check_stride=False,
    check_is_coalesced=False,
)

resulted in the unexpected exception above. If you are a user and see this message during normal operation please file an issue at https://github.com/pytorch/pytorch/issues. If you are a developer and working on the comparison functions, please except the previous error and raise an expressive `ErrorMeta` instead.
```

This adds a temporary hack to `TestCase.assertEqual` to enable this. Basically, we go through the individual CSR subtensors, call `.to_dense()` on them, and stack everything back together. I opted to not do this in the common machinery, since that way users are not affected by this (undocumented) hack.

I also added an xfailed test that will trigger as soon as the behavior is supported natively so we don't forget to remove the hack when it is no longer needed.
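
A minimal sketch of the workaround, generalizing the `torch.stack` pattern from the snippet above (the actual helper inside `assertEqual` may differ):

```python
import torch

# densify each CSR subtensor separately, then reassemble the batch
def hybrid_csr_to_dense(t):
    return torch.stack([t[i].to_dense() for i in range(t.size(0))])
```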

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88749
Approved by: https://github.com/mruberry, https://github.com/pearu
2022-11-10 13:44:45 +00:00
642b63e1e7 Add test that import torch doesn't modify global logging state (#87629)
Fixes https://github.com/pytorch/pytorch/issues/87626

Also adds the same test for `import functorch`. Users have complained when we
modify the global logging state, which has happened in the past.

Test Plan:
- tested locally; I added `logging.basicConfig` to `torch/__init__.py`
and checked that the test got triggered
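
The check can be as simple as comparing the root logger's handlers around the import (a sketch, not the exact test):

```python
import logging

handlers_before = logging.getLogger().handlers[:]
import torch  # noqa: F401
assert logging.getLogger().handlers == handlers_before, \
    "import torch modified the root logger's handlers"
```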
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87629
Approved by: https://github.com/albanD
2022-10-26 15:53:28 +00:00
5e4bcb049e Improve readability of the extra message errors in assertEqual (#87202)
Goes from (note the `linspace.default` is very difficult to find)
```
Mismatched elements: 15 / 50 (30.0%)
Greatest absolute difference: 1 at index (17,)
Greatest relative difference: 1.0 at index (17,) : linspace.default
args = (0, -3, 50)
kwargs = {'dtype': torch.int16, 'device': device(type='cpu'),
'pin_memory': False}
```
to
```
Mismatched elements: 15 / 50 (30.0%)
Greatest absolute difference: 1 at index (17,)
Greatest relative difference: 1.0 at index (17,)
linspace.default
args = (0, -3, 50)
kwargs = {'dtype': torch.int16, 'device': device(type='cpu'),
'pin_memory': False}
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87202
Approved by: https://github.com/ezyang
2022-10-24 06:11:50 +00:00
1285542f9b OpInfo: Add test that sample_inputs_func returns a generator (#84567)
This also includes a small exception for single-element lists, since none of the memory usage or performance implications of lists apply there.
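
A generator-style `sample_inputs_func` looks roughly like this (`foo` and the shapes are illustrative; import paths may vary by version):

```python
import functools

from torch.testing import make_tensor
from torch.testing._internal.common_methods_invocations import SampleInput

def sample_inputs_foo(op_info, device, dtype, requires_grad, **kwargs):
    make_arg = functools.partial(
        make_tensor, device=device, dtype=dtype, requires_grad=requires_grad)
    # yielding lazily avoids materializing every sample up front
    yield SampleInput(make_arg((2, 3)))
    yield SampleInput(make_arg((5,)))
```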
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84567
Approved by: https://github.com/lezcano, https://github.com/mruberry
2022-10-21 15:28:47 +00:00
8f2dda5bf2 [CI] Build MacOS M1 binaries without distributed support (#86451)
Partial fix for #86448, which causes the broken code to be exercised in CI. If this demonstrates the break, I'm not sure whether there should be a fix forward of https://github.com/pytorch/pytorch/pull/85781 or a revert.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86451
Approved by: https://github.com/malfet
2022-10-10 17:42:13 +00:00
0e30da3f2f [refactor] Renaming ao.sparsity to ao.pruning (#84867)
`Sparsity` as a term doesn't reflect the tools that are developed by the AO. The `torch/ao/sparsity` folder also has utilities for structured pruning, which internally we always referred to as just "pruning". To avoid any confusion, we renamed `Sparsity` to `Prune`. We will not be introducing backwards compatibility, as so far this toolset was kept under silent development.

This change will reflect the changes in the documentation as well.

**TODO:**
- [ ] Change the tutorials
- [ ] Confirm no bc-breakages
- [ ] Reflect the changes in the trackers and RFC docs

Pull Request resolved: https://github.com/pytorch/pytorch/pull/84867
Approved by: https://github.com/supriyar
2022-10-07 00:58:41 +00:00
007e12a3e9 OpInfo: Extend natural syntax to allow adding metadata (#85890)
Splitting into a separate PR in case of bike shedding. We can't use
the normal fluent syntax `SampleInput(x).name("foo")` because `.name`
is already how the metadata is accessed. So instead, this adds a
single function where you pass keyword arguments to fill in the
metadata, e.g.
```python
SampleInput(x).with_metadata(
    name="foo", output_process_fn_grad=out_fn)
```

An alternative closer to the normal fluent style would be to add a
prefix to the property's name, e.g.
```python
(SampleInput(x)
    .with_name("foo")
    .with_output_process_fn_grad(out_fn))
```

However, I have a slight preference for the `with_metadata` style
because you don't need to add extra parentheses to break lines.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85890
Approved by: https://github.com/mruberry
2022-10-02 19:56:40 +00:00
ed5f95048e OpInfo: Add natural syntax for SampleInput creation (#85723)
Most SampleInput objects currently have no additional metadata,
meaning they have a 1:1 mapping with a normal function call. This adds
var arg forms of the `SampleInput` constructor such that you can just
call the `SampleInput` constructor as you would call the operator.

So, for example
```python
SampleInput(make_arg(shape), args=(2, 3), kwargs=dict(alpha=4))
```
becomes
```python
SampleInput(make_arg(shape), 2, 3, alpha=4)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85723
Approved by: https://github.com/mruberry
2022-10-02 19:56:40 +00:00
4d405517e4 Move OpInfo class into new opinfo folder (#82540)
Ref #82518

Starting small to minimize merge conflicts, this moves the top-level
class definitions and some helper functions into the `opinfo` folder.
It also brings `common_methods_invocations.py` to just below 1MB.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82540
Approved by: https://github.com/albanD
2022-08-05 15:10:17 +00:00
3c2c2cc947 cudagraphs dynamo backend (#80566)
This backend handles cases where the preexisting cuda graphs
implementation from dynamo is unsound/has errors.

Requires this functorch bug fix: https://github.com/pytorch/functorch/pull/935

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80566
Approved by: https://github.com/ngimel, https://github.com/wconstab
2022-07-22 14:06:07 +00:00
eecf34fbe7 [ao][sparsity] Post training data sparsifier callback for lightning (#80370)
Lightning callback that enables post-training sparsity.

This callback aims to sparsify the model inside the lightning module after training.
**Note that the model is copied and then sparsified, so the existing model is not modified.**

The sparsified model can be used for comparison and can be accessed via `<callback_obj>.sparsified`.

Test Plan:
```
python torch/ao/sparsity/_experimental/data_sparsifier/lightning/tests/test_callbacks.py TestPostTrainingCallback
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80370
Approved by: https://github.com/z-a-f
2022-07-21 16:39:13 +00:00
cde365a7cd Validate Sparse Compressed tensor inputs (#79385)
The validation includes regular tensor inputs, batched tensor inputs, as well as hybrid tensor inputs.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79385
Approved by: https://github.com/nikitaved, https://github.com/cpuhrsch
2022-06-27 17:19:54 +00:00
94dda03c78 [fx2trt] move common_fx2trt.py into fx folder (#79924)
Summary:
As titled.
Then update the library import path.

Test Plan: internal CI

Reviewed By: yinghai

Differential Revision: D37287068

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79924
Approved by: https://github.com/yinghai
2022-06-22 17:54:10 +00:00
c978b609f7 [ci] remove IN_CI env var
The conventional env var to set is `CI`. Both Circle and GHA set it, so
`IN_CI` is unnecessary.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79229

Approved by: https://github.com/janeyx99
2022-06-11 17:16:30 +00:00
97594a24b4 Print output during MPS test import tests (#79163)
Simplify `test_no_warnings_on_input` to capture any output.
Copy its implementation to `test_testing.py`, as this is not specific to MPS.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79163
Approved by: https://github.com/janeyx99, https://github.com/kulinseth
2022-06-09 13:07:05 +00:00
45eab670ac Add test_imports (#77728)
That validates that every PyTorch submodule can be imported

Prevents regressions like the one described in https://github.com/pytorch/pytorch/issues/77441 from happening in the future

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77728
Approved by: https://github.com/seemethere, https://github.com/janeyx99
2022-05-21 02:11:34 +00:00
dd313d7338 support TestCase.longMessage in TestCase.assertEqual
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77602

Approved by: https://github.com/mruberry
2022-05-20 11:09:28 +00:00
63e9fdd92f re-add dynamic error messages to assert_close
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77601

Approved by: https://github.com/mruberry
2022-05-20 11:09:28 +00:00
dc882ed33d Add Sparse Compressed tensor support to torch.clone
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77512

Approved by: https://github.com/cpuhrsch
2022-05-17 16:29:41 +00:00
0d1329c4ea Revert "Add Sparse Compressed tensor support to torch.clone"
This reverts commit 942f04172a69ce741592a8832d17e68b15c5cadd.

Reverted https://github.com/pytorch/pytorch/pull/77512 on behalf of https://github.com/atalman
2022-05-17 14:26:52 +00:00
942f04172a Add Sparse Compressed tensor support to torch.clone
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77512

Approved by: https://github.com/cpuhrsch
2022-05-17 07:32:46 +00:00
f1c8e8fa4e Revert "Add Sparse Compressed tensor support to torch.clone"
This reverts commit 20ba6e693563c5a0cf67b0fb9413f68a8d91fd25.

Reverted https://github.com/pytorch/pytorch/pull/77512 on behalf of https://github.com/malfet
2022-05-17 00:31:49 +00:00
20ba6e6935 Add Sparse Compressed tensor support to torch.clone
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77512

Approved by: https://github.com/cpuhrsch
2022-05-16 22:21:49 +00:00
9a608af828 Support comparing Sparse Compressed tensors
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77525

Approved by: https://github.com/pmeier, https://github.com/cpuhrsch
2022-05-16 22:13:53 +00:00
e846ef8818 add rocm ciflow/slow workflow
Enables additional tests that historically have been missed for ROCm CI.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72686
Approved by: https://github.com/seemethere
2022-04-22 17:41:28 +00:00
de949a0e59 Various OpInfo architecture improvements
This PR makes the following improvements:

- moves the custom skip list for test_normalize_operator_exhaustive in test_fx_experimental to use the typical OpInfo skip architecture. The skips were updated to xfails, and that identified some operators which were no longer failing the test
- redundant tests with OpInfo-based testing in test_jit.py were removed
- test_dtypes was improved so its error messages are clear and it makes test_nondifferentiable redundant; the latter test has been removed
- OpInfo.supports_complex_autograd() is removed in favor of a more accurate and general test for whether the particular dtype is in the backward dtypes of the operator
- gradchecks have been improved to verify that an operator doesn't support grad if it claims not to
- gradchecks have been improved to test the gradient of all input tensors that require gradient
- the concept of "default test dtypes" has been removed
- excessive and mostly redundant out testing for elementwise unary operators has been removed
- metadata for whether an op supports nuanced "safe casting" to out behavior has been removed from OpInfos
- numerous skips have been converted to xfails
- numerous OpInfos have had their metadata fixed based on the new checks
- jit-specific utilities in common_methods_invocations.py have been moved to jit_programming_utils.py
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75951
Approved by: https://github.com/ngimel
2022-04-18 21:55:32 +00:00
bfac65dfe5 [testing] Update dispatch macros (#74977)
This PR is a reland of #74289.
Co-authored-by: Khushi Agrawal <khushiagrawal411@gmail.com>
2022-03-30 14:13:21 -07:00
2e4152b118 Revert "[testing] Update dispatch macros"
This reverts commit eed19a0f38a81015ca50dd25e997b1c6e223d46b.

Reverted https://github.com/pytorch/pytorch/pull/74289 on behalf of https://github.com/malfet
2022-03-30 19:52:37 +00:00
eed19a0f38 [testing] Update dispatch macros
Hi,
This PR is the follow-up to #71561. (The previous PR had a couple of merge conflicts and was reverted; this PR resolves that.)
Please take a look. Thanks!

cc: @pmeier @mruberry @kshitij12345
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74289
Approved by: https://github.com/pmeier, https://github.com/mruberry
2022-03-30 16:10:16 +00:00
3269729c68 [complex32] make_tensor
Update `make_tensor` so that it can generate `complex32` tensor.

**Note**: This doesn't enable `complex32` tests in the OpInfo test suite but only updates `make_tensor` to generate it. Enabling `complex32` in the test suite will be done in later PRs.
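
A usage sketch, assuming this change is in place:

```python
import torch
from torch.testing import make_tensor

# make_tensor can now produce chalf tensors like other complex dtypes
t = make_tensor((2, 3), dtype=torch.complex32, device="cpu")
print(t.dtype)  # torch.complex32
```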
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74854
Approved by: https://github.com/pmeier, https://github.com/anjali411
2022-03-30 01:05:34 +00:00
0aa3c39e5f Extends OpInfo architecture with reference inputs, adds them for elementwise binary operators
This PR extends our OpInfo test architecture with "reference inputs," an optional expansion of typical sample inputs that allows for more thorough testing. Currently only the elementwise binary operations implement an extended set of reference inputs. This PR also cleans up some smaller OpInfo-related issues, including several bugs, and it identified https://github.com/pytorch/pytorch/issues/74279.

A reference inputs function can be specified for an OpInfo by filling in its "reference_inputs_func" metadata. If this is done it's recommended that the reference inputs function first call the sample inputs function, then produce additional sample inputs. See `reference_inputs_elementwise_binary` for an example of this pattern.

In addition to implementing reference inputs for the elementwise binary operations, this PR improves their consistency and simplifies how their metadata is represented. The great majority now use a generic sample input function, and those that want extensions start by calling the generic sample input function and then adding additional samples. This removes many older sample input functions. The BinaryUfuncInfo subclass also now allows specifying scalar support more precisely, and reference inputs and error inputs are generated based on this metadata to ensure it's correct.
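
Following that recommended pattern, a hypothetical reference inputs function might look like this (`foo`, the shapes, and the import paths are illustrative, not code from this PR):

```python
import functools

from torch.testing import make_tensor
from torch.testing._internal.common_methods_invocations import SampleInput

def sample_inputs_foo(op_info, device, dtype, requires_grad, **kwargs):
    make_arg = functools.partial(
        make_tensor, device=device, dtype=dtype, requires_grad=requires_grad)
    yield SampleInput(make_arg((2, 3)))

def reference_inputs_foo(op_info, device, dtype, requires_grad, **kwargs):
    # start from the regular samples, then add broader coverage
    yield from sample_inputs_foo(op_info, device, dtype, requires_grad, **kwargs)
    make_arg = functools.partial(
        make_tensor, device=device, dtype=dtype, requires_grad=requires_grad)
    yield SampleInput(make_arg(()))       # scalar tensor
    yield SampleInput(make_arg((0,)))     # empty tensor
    yield SampleInput(make_arg((1024,)))  # a larger input
```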

cc @kshitij12345 @pmeier @zou3519 @Chillee

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74280
Approved by: https://github.com/ngimel
2022-03-21 03:24:16 +00:00