Commit Graph

56 Commits

408c921d96 Make hashing a SymInt raise an error again (#130548)
See https://github.com/pytorch/pytorch/issues/130547

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/130548
Approved by: https://github.com/Skylion007, https://github.com/albanD, https://github.com/lezcano
2024-07-16 18:30:30 +00:00
2b1df24877 Revert "Make hashing a SymInt raise an error again (#130548)"
This reverts commit 3100455b8eeebdfbc3428ff9454579ac50666faf.

Reverted https://github.com/pytorch/pytorch/pull/130548 on behalf of https://github.com/clee2000 due to broke inductor/test_triton_kernels.py https://github.com/pytorch/pytorch/actions/runs/9908970127/job/27377960411 3100455b8e. Not run on PR due to bad TD ([comment](https://github.com/pytorch/pytorch/pull/130548#issuecomment-2225912018))
2024-07-12 16:20:12 +00:00
3100455b8e Make hashing a SymInt raise an error again (#130548)
See https://github.com/pytorch/pytorch/issues/130547

Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/130548
Approved by: https://github.com/Skylion007, https://github.com/albanD
2024-07-12 13:49:56 +00:00
973037be6a [BE][Easy] apply autofix for ruff rules unnecessary-collection-call (C408): list() / tuple() / dict() (#130199)
This PR changes the empty collection factory call to Python literals:

- `list()` -> `[]`
- `tuple()` -> `()`
- `dict()` -> `{}`

The Python literals are more performant and safer. For example, the bytecode for building an empty dictionary:

```bash
$ python3 -m dis - <<EOS
import collections

d1 = {}
d2 = dict()

dict = collections.OrderedDict
d3 = dict()
EOS
```

```text
  0           0 RESUME                   0

  1           2 LOAD_CONST               0 (0)
              4 LOAD_CONST               1 (None)
              6 IMPORT_NAME              0 (collections)
              8 STORE_NAME               0 (collections)

  3          10 BUILD_MAP                0
             12 STORE_NAME               1 (d1)

  4          14 PUSH_NULL
             16 LOAD_NAME                2 (dict)
             18 CALL                     0
             26 STORE_NAME               3 (d2)

  6          28 LOAD_NAME                0 (collections)
             30 LOAD_ATTR                8 (OrderedDict)
             50 STORE_NAME               2 (dict)

  7          52 PUSH_NULL
             54 LOAD_NAME                2 (dict)
             56 CALL                     0
             64 STORE_NAME               5 (d3)
             66 RETURN_CONST             1 (None)
```

The dict literal `{}` compiles to a single bytecode instruction, `BUILD_MAP`, while the factory call `dict()` needs three: `PUSH_NULL`, `LOAD_NAME`, and `CALL`. The factory call is also not safe if users rebind the `dict` name in `locals` or `globals` (see the `OrderedDict` replacement in the example above).
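To make the safety point concrete, here is a minimal standalone demonstration (an illustration, not code from this PR) of how rebinding `dict` changes what `dict()` builds while `{}` is unaffected:

```python
from collections import OrderedDict

dict = OrderedDict  # shadow the builtin, as in the dis example above

d_call = dict()     # silently an OrderedDict now
d_literal = {}      # always a builtin dict

print(type(d_call))     # <class 'collections.OrderedDict'>
print(type(d_literal))  # <class 'dict'>
```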

Pull Request resolved: https://github.com/pytorch/pytorch/pull/130199
Approved by: https://github.com/malfet
2024-07-11 17:30:28 +00:00
6c2a8b6b38 [Ez][BE]: Enable new stable ruff rules (#129825)
Applies a number of new ruff lint rules that are now stable. Some of these improve efficiency or readability. Since I already did passes on the codebase for these rules while they were in preview, there should be relatively few changes; this is mainly future hardening of the codebase.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/129825
Approved by: https://github.com/XuehaiPan, https://github.com/jansel, https://github.com/malfet
2024-07-02 14:47:10 +00:00
7837a12474 [BE] enforce style for empty lines in import segments (#129751)
This PR follows https://github.com/pytorch/pytorch/pull/129374#pullrequestreview-2136555775 cc @malfet:

> Lots of formatting changes unrelated to PR goal, please keep them as part of separate PR (and please add lint rule if you want to enforce those, or at least cite one)

`usort` allows empty lines within import segments. For example, `usort` does not change any of the following snippets:

```python
import torch.aaa
import torch.bbb
import torch.ccc

x = ...  # some code
```

```python
import torch.aaa

import torch.bbb
import torch.ccc

x = ...  # some code
```

```python
import torch.aaa

import torch.bbb

import torch.ccc

x = ...  # some code
```

This PR first sorts imports via `isort`, then re-sorts the file using `ufmt` (`usort` + `black`). This enforces the following import style:

1. no empty lines within segments.
2. a single empty line between segments.
3. two empty lines after the import statements.

All the code snippets above will be formatted to:

```python
import torch.aaa
import torch.bbb
import torch.ccc

x = ...  # some code
```

which produces a consistent code style.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/129751
Approved by: https://github.com/malfet
2024-06-29 14:15:24 +00:00
00d7bba2fa Revert "[BE] enforce style for empty lines in import segments (#129751)"
This reverts commit f5ff1a3ab9ef279655308266029faf6543a8a1ca.

Reverted https://github.com/pytorch/pytorch/pull/129751 on behalf of https://github.com/huydhn due to Sorry for reverting your change but I need to revert to cleanly revert https://github.com/pytorch/pytorch/pull/129374, please do a rebase and reland this ([comment](https://github.com/pytorch/pytorch/pull/129751#issuecomment-2197799814))
2024-06-29 00:41:41 +00:00
f5ff1a3ab9 [BE] enforce style for empty lines in import segments (#129751)
This PR follows https://github.com/pytorch/pytorch/pull/129374#pullrequestreview-2136555775 cc @malfet:

> Lots of formatting changes unrelated to PR goal, please keep them as part of separate PR (and please add lint rule if you want to enforce those, or at least cite one)

`usort` allows empty lines within import segments. For example, `usort` does not change any of the following snippets:

```python
import torch.aaa
import torch.bbb
import torch.ccc

x = ...  # some code
```

```python
import torch.aaa

import torch.bbb
import torch.ccc

x = ...  # some code
```

```python
import torch.aaa

import torch.bbb

import torch.ccc

x = ...  # some code
```

This PR first sorts imports via `isort`, then re-sorts the file using `ufmt` (`usort` + `black`). This enforces the following import style:

1. no empty lines within segments.
2. a single empty line between segments.
3. two empty lines after the import statements.

All the code snippets above will be formatted to:

```python
import torch.aaa
import torch.bbb
import torch.ccc

x = ...  # some code
```

which produces a consistent code style.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/129751
Approved by: https://github.com/malfet
2024-06-28 21:02:59 +00:00
80d34217c6 Typo fixes: et al. (#127811)
"et al." is short for _et alia_ and should be abbreviated with a period on the second word. Noticed this typo when reading through the SGD docs.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/127811
Approved by: https://github.com/janeyx99
2024-06-06 01:03:25 +00:00
879d01afcb [dynamo][numpy] Add unsigned integer dtypes (#125717)
We should support these to whatever extent we can. The corresponding
`torch.uint<w>` types are defined, so I don't see an issue with
generating the various casting rules and allowing them to trace.
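As a rough sketch of what this enables (assumption: the eager backend and dynamo's numpy tracing; this is not a test from the PR), casting to an unsigned dtype should now trace instead of erroring:

```python
import numpy as np
import torch

@torch.compile(backend="eager")
def fn(x):
    return x.astype(np.uint16)  # previously there was no casting rule to trace

fn(np.arange(4))
```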

Pull Request resolved: https://github.com/pytorch/pytorch/pull/125717
Approved by: https://github.com/lezcano
2024-06-05 14:33:47 +00:00
81277baa0c Remove removed ruff rule TRY200 (#126256)
My TOML linter is complaining that "TRY200" is not acceptable for the `tool.ruff.lint` schema.

From the ruff docs: https://docs.astral.sh/ruff/rules/reraise-no-cause/

> This rule has been removed and its documentation is only available for historical reasons.
>
> This rule is identical to [B904](https://docs.astral.sh/ruff/rules/raise-without-from-inside-except/) which should be used instead.

and we are currently explicitly ignoring B904.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/126256
Approved by: https://github.com/Skylion007
2024-05-17 16:31:05 +00:00
34910f87f0 [BE]: Update ruff to v0.4.4 (#125031)
Update ruff version to 0.4.4. This version mostly has bug fixes for the new parser and also updates the f-string rule to be able to apply more fixes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/125031
Approved by: https://github.com/albanD, https://github.com/malfet
2024-05-12 20:02:37 +00:00
1dd42e42c4 [BE]: Try TCH autofixes on torch/ (#125536)
Tries TCH autofixes to see what breaks.
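For reference, the shape of a typical TCH autofix, sketched on hypothetical code (not a diff from this PR): imports used only in type annotations move under `TYPE_CHECKING` so they carry no runtime cost.

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # only needed by the type checker, never imported at runtime
    from collections.abc import Sequence

def total(xs: "Sequence[int]") -> int:
    return sum(xs)

print(total([1, 2, 3]))
```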

Pull Request resolved: https://github.com/pytorch/pytorch/pull/125536
Approved by: https://github.com/ezyang
2024-05-05 23:13:59 +00:00
a8574a9719 Fix global flake8 issues (#124771)
Prior to this `lintrunner --all-files --take FLAKE8` failed.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124771
Approved by: https://github.com/Skylion007
ghstack dependencies: #124428
2024-04-26 15:35:53 +00:00
1ac60484c1 Revert "Fix global flake8 issues (#124771)"
This reverts commit f01275934bfa1ff358b1c01d3754f2807cd04ee2.

Reverted https://github.com/pytorch/pytorch/pull/124771 on behalf of https://github.com/jeanschmidt due to Unfortunately, I needed to revert #123735 and this one depends on it. So please check if there are no merge conflicts or breakages and feel free to merge this PR again ([comment](https://github.com/pytorch/pytorch/pull/124428#issuecomment-2078699836))
2024-04-26 06:15:17 +00:00
f01275934b Fix global flake8 issues (#124771)
Prior to this `lintrunner --all-files --take FLAKE8` failed.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124771
Approved by: https://github.com/Skylion007
ghstack dependencies: #124428
2024-04-25 14:25:00 +00:00
93e249969b [BE] enable ruff rule RSE and remove useless parentheses in raise statements (#124261)
Remove useless parentheses in `raise` statements if the exception type is raised with no argument.
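The fix in miniature (an illustrative sketch, not code from the PR):

```python
def check(ready: bool) -> None:
    if not ready:
        raise ValueError  # was: raise ValueError() -- the parentheses add nothing
```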

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124261
Approved by: https://github.com/albanD
2024-04-17 19:29:34 +00:00
4dc53f777b Fix dynamo failure w/ astype (#117952)
The torch "fake" ndarray had some mismatches vs. numpy.ndarray, which caused test_sparse_to_sparse_compressed to fail under dynamo.

This also fixes (because the test now hits it) a problem where unpacking a sequence with the incorrect number of args would assert in dynamo instead of graph breaking (because it would throw an exception). Added a unit test for this condition.

Fixed:
- torch._numpy._ndarray.astype() (actually used by the test)
- torch._numpy._ndarray.put() (drive-by discovery)
- torch._numpy._ndarray.view() (drive-by discovery)

(burndown item 7)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117952
Approved by: https://github.com/yanboliang
ghstack dependencies: #117951
2024-02-03 08:10:15 +00:00
41dfdde9f5 Handle some numpy functions with out arguments correctly in dynamo (#118248)
Dynamo creates Tensors when tracing through numpy ufuncs like `np.sin` or `np.minimum`, so numpy functions generally return Tensors when run under `torch.compile`. However, when normalizing `out` arguments we currently require the input to be an ndarray, which raises assertion errors when running `torch.compile` on any numpy function with an `out` argument:
```python
    def test_numpy_ufunc_out(self):
        @torch.compile(backend="eager")
        def foo():
            x = np.arange(5)
            out = np.empty((x.shape[0], x.shape[0]))
            res_out = np.sin(x, out=out)
            assert res_out is out
        foo()
```
Failure with stack trace: https://gist.github.com/jamesjwu/68e217638d735678b3de968584dba23f

Instead, we can wrap tensors in an ndarray in normalize_outarray to handle the case correctly. Fixing this resolves ~220 tests under dynamo_test_failures, but also exposes a followup bug.

In the presence of a graph break, ndarrays don't preserve their id, which can affect assertions and `is` checks between numpy arrays:
```python
     def test_x_and_out_broadcast(self, ufunc):
        x = self.get_x(ufunc)
        out = np.empty((x.shape[0], x.shape[0]))

        x_b = np.broadcast_to(x, out.shape)
        # ufunc is just np.sin here
        res_out = ufunc(x, out=out)
        res_bcast = ufunc(x_b)
        # passes
        assert res_out is out
        graph_break()
        # fails
        assert res_out is out
```
Regular tensors preserve their id because Dynamo caches their example tensor values across a graph break. However, with ndarrays, we only store their converted tensor values, and construct new ndarrays around those values:
eebe7e1d37/torch/_dynamo/variables/builder.py (L1083)
Added a test marked as expected failure to showcase this; we can then fix that issue separately.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118248
Approved by: https://github.com/lezcano
2024-01-29 09:09:21 +00:00
9bce208dfb Replace follow_imports = silent with normal (#118414)
This is a lot of files changed! Don't panic! Here's how it works:

* Previously, we set `follow_imports = silent` for our mypy.ini configuration. Per https://mypy.readthedocs.io/en/stable/running_mypy.html#follow-imports, what this does is whenever we have an import to a module which is not listed as a file to be typechecked in mypy, we typecheck it as normal but suppress all errors that occurred in that file.
* When mypy is run inside lintrunner, the list of files is precisely the files covered by the glob in lintrunner.toml, but with files in excludes excluded.
* The top-level directive `# mypy: ignore-errors` instructs mypy to typecheck the file as normal, but ignore all errors.
* Therefore, it should be equivalent to set `follow_imports = normal`, if we put `# mypy: ignore-errors` on all files that were previously excluded from the file list.
* Having done this, we can remove the exclude list from .lintrunner.toml, since excluding a file from typechecking is baked into the files themselves.
* torch/_dynamo and torch/_inductor were previously in the exclude list, because they were covered by MYPYINDUCTOR. It is not OK to mark these as `# mypy: ignore-errors` as this will impede typechecking on the alternate configuration. So they are temporarily being checked twice, but I am suppressing the errors in these files as the configurations are not quite the same. I plan to unify the configurations so this is only a temporary state.
* There were some straggler type errors after these changes somehow, so I fixed them as needed. There weren't that many.

In the future, to start type checking a file, just remove the ignore-errors directive from the top of the file.

The codemod was done with this script authored by GPT-4:

```python
import glob

exclude_patterns = [
    ...
]

for pattern in exclude_patterns:
    for filepath in glob.glob(pattern, recursive=True):
        if filepath.endswith('.py'):
            with open(filepath, 'r+') as f:
                content = f.read()
                f.seek(0, 0)  # rewind so the directive is prepended
                f.write('# mypy: ignore-errors\n\n' + content)
```

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118414
Approved by: https://github.com/thiagocrepaldi, https://github.com/albanD
2024-01-27 02:44:11 +00:00
7c8f38700a [dynamo] Fix np.issubdtype (#116459)
Fixes the issue described at https://github.com/pytorch/pytorch/issues/93697#issuecomment-1828346590

This doesn't fix the full issue yet, now we hit
```python
  File "/home/lezcano/git/pytorch/pytorch/torch/_dynamo/symbolic_convert.py", line 744, in step
    getattr(self, inst.opname)(inst)
  File "/home/lezcano/git/pytorch/pytorch/torch/_dynamo/symbolic_convert.py", line 1366, in BUILD_MAP
    assert (
AssertionError
```
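For context, the kind of call this lets dynamo trace, as a hedged repro-style sketch (the BUILD_MAP assertion above is a separate, still-open failure):

```python
import numpy as np
import torch

@torch.compile(backend="eager")
def fn(x):
    if np.issubdtype(x.dtype, np.floating):
        return x + 1.0
    return x

fn(np.ones(3))
```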

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116459
Approved by: https://github.com/peterbell10
2024-01-05 01:48:07 +00:00
75dae4f691 Revert "[dynamo] Fix np.issubdtype (#116459)"
This reverts commit b5c33ccdb3198a48a354e21a4fdace0ec6d04146.

Reverted https://github.com/pytorch/pytorch/pull/116459 on behalf of https://github.com/zou3519 due to Broke CI, seems to be a landrace ([comment](https://github.com/pytorch/pytorch/pull/116459#issuecomment-1877135999))
2024-01-04 14:00:11 +00:00
b5c33ccdb3 [dynamo] Fix np.issubdtype (#116459)
Fixes the issue described at https://github.com/pytorch/pytorch/issues/93697#issuecomment-1828346590

This doesn't fix the full issue yet, now we hit
```python
  File "/home/lezcano/git/pytorch/pytorch/torch/_dynamo/symbolic_convert.py", line 744, in step
    getattr(self, inst.opname)(inst)
  File "/home/lezcano/git/pytorch/pytorch/torch/_dynamo/symbolic_convert.py", line 1366, in BUILD_MAP
    assert (
AssertionError
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116459
Approved by: https://github.com/peterbell10
2024-01-04 03:55:50 +00:00
bbe3261dd3 [BE]: Use itertools.chain.from_iterable where possible (#116376)
This is more readable and more efficient when dealing with lots of sequences to chain together.
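A before/after sketch of the pattern (illustrative data, not from the PR):

```python
import itertools

lists = [[1, 2], [3], [4, 5]]
before = list(itertools.chain(*lists))              # unpacks every inner sequence eagerly
after = list(itertools.chain.from_iterable(lists))  # iterates the outer sequence lazily
assert before == after == [1, 2, 3, 4, 5]
```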

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116376
Approved by: https://github.com/albanD
2023-12-27 19:20:07 +00:00
d7b303dcf8 [BE]: Enable a PLC0131, PLC0132, PLC0205. Fix PLC0132 bug. (#115015)
Enable pylint rules `PLC0131` and `PLC0132`. There was a violation of `PLC0132`, so this commit also fixes it and enables the rules so the violation does not recur. `PLC0205` checks for accidentally setting `__slots__` to a string, which is almost always a bug.
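What PLC0205 flags, sketched with illustrative classes (not from the PR):

```python
class Flagged:
    __slots__ = "name"     # a bare string where an iterable of names was intended

class Fixed:
    __slots__ = ("name",)  # an explicit one-element tuple
```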
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115015
Approved by: https://github.com/jansel, https://github.com/malfet
2023-12-02 20:35:10 +00:00
dc3d0caab3 BUG: fix np.ndarray.resize under dynamo (#113931)
Make sure ndarray.resize actually works in-place, so that dynamo does the right thing tracking the result.

Fixes #113539
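A hedged sketch of the in-place behaviour being fixed (eager backend assumed; not the PR's test):

```python
import numpy as np
import torch

@torch.compile(backend="eager")
def fn():
    x = np.arange(6)
    x.resize((2, 3))  # must mutate x in place for dynamo to track the result
    return x

print(fn())
```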

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113931
Approved by: https://github.com/lezcano
2023-11-17 18:12:17 +00:00
237cbd5be6 BUG: trace frames with numpy scalar -> ndarray functions (#112959)
Fixes #112951

Make dynamo detect that `np.arange(3)` returns a FakeTensor, so the frame needs to be traced.
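A hedged sketch of the kind of frame this makes dynamo trace (no tensor inputs; the array is created inside the frame):

```python
import numpy as np
import torch

@torch.compile(backend="eager")
def fn(n):
    return np.arange(n).sum()  # np.arange yields a FakeTensor while tracing

print(fn(3))
```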

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112959
Approved by: https://github.com/lezcano
2023-11-17 03:00:24 +00:00
6ce5de5275 Avoid calling as_tensor twice (#112866)
Calling it twice may copy the data, which is wasteful. We avoid that by
setting global flags.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112866
Approved by: https://github.com/kit1980, https://github.com/ev-br
2023-11-07 16:10:59 +00:00
6a3922d523 BUG: compile np.array(list_of_arrays) (#112711)
Add a shortcut for a sequence of arrays only. This removes a graph break on a common pattern such as
`np.array([np.cos(theta), np.sin(theta)])` and its ilk.

This PR is a simplified alternative to https://github.com/pytorch/pytorch/pull/112521: it still breaks on mixing arrays with scalars or array_likes (e.g. `np.array([[1, 2], np.array([3, 4])])`) and instead adds a simple shortcut.
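The pattern from the description, as a runnable sketch (eager backend assumed):

```python
import numpy as np
import torch

@torch.compile(backend="eager")
def fn(theta):
    return np.array([np.cos(theta), np.sin(theta)])  # no longer graph-breaks

print(fn(np.linspace(0.0, 1.0, 4)))
```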

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112711
Approved by: https://github.com/lezcano
2023-11-02 20:18:16 +00:00
7e654c8f88 Revert "WIP / TST: allow testing torch._numpy under Dynamo (#110401)"
This reverts commit 5ed4a423ded14138f1a724eff15ccd14648f6c49.

Reverted https://github.com/pytorch/pytorch/pull/110401 on behalf of https://github.com/huydhn due to Sorry for reverting your change, but it is failing dynamo job in trunk 5ed4a423de ([comment](https://github.com/pytorch/pytorch/pull/110401#issuecomment-1779811943))
2023-10-25 18:21:16 +00:00
5ed4a423de WIP / TST: allow testing torch._numpy under Dynamo (#110401)
Use conditional imports: when running under dynamo, import the original NumPy rather than torch._numpy, since the former is what we want to trace, not our implementation.

With this, the test suite passes with and without `PYTORCH_TEST_WITH_DYNAMO=1` (modulo a couple of test modules which are not meant to be compiled, e.g. `test_nep50_examples`). There are two new decorators, `x{fail,pass}ifTorchDynamo`, the `xpass` in most cases indicates a graph break and a fallback to eager for things we do not implement.
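A sketch of the conditional-import pattern described above (assuming the `TEST_WITH_TORCHDYNAMO` flag from `torch.testing._internal.common_utils`):

```python
from torch.testing._internal.common_utils import TEST_WITH_TORCHDYNAMO

if TEST_WITH_TORCHDYNAMO:
    import numpy as np         # under dynamo, trace the real NumPy
else:
    import torch._numpy as np  # otherwise, exercise our implementation directly
```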

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110401
Approved by: https://github.com/lezcano
2023-10-25 16:02:16 +00:00
cb856b08b2 [BE]: Attach cause to some exceptions and enable RUFF TRY200 (#111496)
Did some easy fixes from enabling TRY200. Most of these look like oversights rather than intentional omissions. The proper way to silence intentional cases is `raise ... from None`, noting that you considered whether the exception should carry its cause and decided against it.
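The two compliant spellings, as a minimal sketch (names are illustrative):

```python
def to_int(value: str) -> int:
    try:
        return int(value)
    except ValueError as err:
        # attach the cause explicitly so the traceback keeps its context...
        raise RuntimeError(f"could not parse {value!r}") from err
        # ...or write `raise RuntimeError(...) from None` to deliberately drop it
```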

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111496
Approved by: https://github.com/malfet
2023-10-19 21:56:36 +00:00
48989bc820 trace frames with np.ndarray (#110512)
Fixes #109604

Resubmit gh-109715 + several skips and small fixes to make tests pass.

The main fix here is by @ysiraichi: previously, dynamo did not resume tracing numpy ndarrays after a graph break.
While at it, fix several small issues that Yukio's fix uncovered:

- graph break gracefully on numpy dtypes which do not map to torch dtypes (uint16 etc.)
- recognize array scalars in dynamo and treat them as 0D ndarrays
- make sure that iterating over a `torch._numpy.ndarray` generates arrays, not bare tensors (sketched below)
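A hedged sketch of the iteration fix from the last bullet (eager backend assumed):

```python
import numpy as np
import torch

@torch.compile(backend="eager")
def fn(x):
    return [row.mean() for row in x]  # each row should come back as an ndarray

fn(np.ones((2, 3)))
```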

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110512
Approved by: https://github.com/lezcano
2023-10-15 00:56:10 +00:00
19ce68a45c Fix typo under torch/_numpy directory (#110782)
This PR fixes typo of comments in files under torch/_numpy directory.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110782
Approved by: https://github.com/Skylion007
2023-10-07 17:42:35 +00:00
3603f646eb BUG: fix torch._numpy.arange(5, dtype="float32") (#110005)
Make `np.arange` respect an explicitly provided dtype.
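The behaviour under test, sketched directly against `torch._numpy`:

```python
import torch._numpy as tnp

a = tnp.arange(5, dtype="float32")
print(a.dtype)  # float32, as explicitly requested (previously ignored)
```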

Also remove duplicated tests: `torch_np/test_function_base.py::TestArange` is a dupe of `torch_np/numpy_tests/core/test_multiarray.py::TestArange`.

Fixes #109975

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110005
Approved by: https://github.com/lezcano
2023-09-28 18:21:18 +00:00
ee8983da70 109605 dynamo scalar ndarray pow gen (#109953)
Fixes #109605

Generated code before:
```python
def call(args):
    arg0_1, = args
    args.clear()
    assert_size_stride(arg0_1, (8, ), (1, ))
    buf0 = empty_strided((), (), device='cpu', dtype=torch.int64)
    cpp_fused_lift_fresh_0(c_void_p(buf0.data_ptr()))
    # Source Nodes: [wrapped_pow], Original ATen: [aten.lift_fresh, aten.pow]
    buf1 = aten.pow(arg0_1, reinterpret_tensor(buf0, (8, ), (0, ), 0))
    del arg0_1
    del buf0
    buf2 = buf1
    assert_size_stride(buf2, (8, ), (1, ))
    del buf1
    return (buf2, )
```

Generated code now:
```python
def call(args):
    arg0_1, = args
    args.clear()
    assert_size_stride(arg0_1, (8, ), (1, ))
    buf0 = empty_strided((8, ), (1, ), device='cpu', dtype=torch.int64)
    cpp_fused_pow_0(c_void_p(arg0_1.data_ptr()), c_void_p(buf0.data_ptr()))
    del arg0_1
    return (buf0, )
```
@lezcano What would be a good way to add a test for this?

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109953
Approved by: https://github.com/lezcano
2023-09-28 13:11:06 +00:00
d046376c4f Dispatch numpy.take_along_axis to torch.take_along_dim (#108880)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108880
Approved by: https://github.com/lezcano
ghstack dependencies: #108879
2023-09-13 23:13:09 +00:00
3ac2396e00 Fix torch._numpy.random (#108944)
Fix several issues with `torch._numpy.random` functions on eager

1. actually return scalars when `size is None`
2. fix dispatch with USE_NUMPY_STREAM
3. make tnp.random functions composable: make numpy functions receive numpy arguments, not `tnp.ndarray`s
4. fix random.shuffle for e.g. lists

The main need for these gymnastics is that `np.random` functions return an ndarray or a Python scalar depending on the `size` argument. We decided a while ago to replicate this behavior in `tnp.random`, and nowhere else: everywhere else we always return 0D arrays.
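The size-dependent convention described above, sketched (the choice of `uniform` is illustrative):

```python
import torch._numpy as tnp

s = tnp.random.uniform(0.0, 1.0)          # size=None: a Python scalar
a = tnp.random.uniform(0.0, 1.0, size=3)  # otherwise: a tnp.ndarray
print(type(s), type(a))
```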

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108944
Approved by: https://github.com/lezcano
2023-09-13 05:08:19 +00:00
cd46b5db76 make sure all torch._numpy tests run on CI (#108762)
- Add `if __name__ == "__main__": run_tests()` stanzas to test files in the `torch_np` folder so that these tests run on CI (the stanza is shown below)
- Skip / xfail things smoked out by this change
- remove a stray Python file which should not have been added to the tests in the first place
- fix einsum if opt_einsum is present
- add skips for older NumPy versions
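For reference, the stanza from the first bullet (`run_tests` lives in `torch.testing._internal.common_utils`):

```python
from torch.testing._internal.common_utils import run_tests

if __name__ == "__main__":
    run_tests()
```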

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108762
Approved by: https://github.com/lezcano
2023-09-12 17:12:21 +00:00
090fe45e1c Revert "make sure all torch._numpy tests run on CI (#108762)"
This reverts commit 7abeb92796635bd3ee216a0872bddd0395e97d10.

Reverted https://github.com/pytorch/pytorch/pull/108762 on behalf of https://github.com/clee2000 due to sorry but I think the asan test_scalarmath failure is real 7abeb92796 https://github.com/pytorch/pytorch/actions/runs/6132913963/job/16645381921 ([comment](https://github.com/pytorch/pytorch/pull/108762#issuecomment-1714214523))
2023-09-11 16:29:20 +00:00
7abeb92796 make sure all torch._numpy tests run on CI (#108762)
- Add `if __name__ == "__main__": run_tests()` stanzas to test files in the `torch_np` folder so that these tests run on CI
- Skip / xfail things smoked out by this change
- remove a stray Python file which should not have been added to the tests in the first place
- fix einsum if opt_einsum is present
- add skips for older NumPy versions

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108762
Approved by: https://github.com/lezcano
2023-09-09 20:05:27 +00:00
324b23f337 MAINT: torch/_numpy: remove stubs raising NIError (#108902)
Remove the remaining stubs. There is no point raising NotImplementedError now that a missing function triggers a graph break simply by being absent from the `torch._numpy` namespace.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108902
Approved by: https://github.com/lezcano
2023-09-09 00:11:14 +00:00
1f20531939 fall back to eager on NotImplementedError (#107863)
Follow-up to https://github.com/pytorch/pytorch/pull/107710:

Help dynamo fall back to eager when compiling unimplemented numpy constructs:

- arrays of strings
- (arg){min, max} for complex types
- various arguments typed as NotImplemented (`np.ones(4, order="F")` etc)
- numpy functions which torch._numpy does not implement

To test, run the snippet below (we do not implement arrays of strings)

```python
import torch
import numpy as np

@torch.compile(fullgraph=False)
def fn():
    return np.asarray(["L", "U"])
```

and observe that it compiles with fullgraph=False and fails with fullgraph=True.

Fixes https://github.com/pytorch/pytorch/issues/107970

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107863
Approved by: https://github.com/ezyang, https://github.com/lezcano
2023-09-07 21:22:20 +00:00
2a6ef9b04d [dynamo] Avoid recompilation when the PyTorch function accepts scalars (#108162)
Before, it would create a 0D tensor from the input, which would incur
a guard and specialisation.

It's not clear whether the guard and specialisation are the right behaviour
when we create 0D tensors, but that's a story for another day.
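A hedged sketch of the recompilation this avoids (eager backend, illustrative values):

```python
import numpy as np
import torch

@torch.compile(backend="eager")
def fn(x, alpha):
    return np.multiply(x, alpha)  # alpha stays a scalar instead of becoming a 0D tensor

fn(np.ones(3), 0.5)
fn(np.ones(3), 0.25)  # previously this could specialise and recompile
```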

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108162
Approved by: https://github.com/ev-br, https://github.com/peterbell10
2023-09-01 14:35:42 +00:00
9178deedff removing some redundant str splits (#106089)
Drop some redundant string splits; no functional changes, just cleaning up the codebase.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106089
Approved by: https://github.com/albanD, https://github.com/malfet
2023-09-01 00:22:58 +00:00
01dfa7620d MAINT: np.unique works with f16 directly (#108228)
(follow-up to gh-107768)

Remove an f16->f32 workaround from np.unique, since torch.unique and np.unique seem to just work with float16 tensors.
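A quick eager-mode check of the claim, as a sketch:

```python
import torch

t = torch.tensor([1.0, 1.0, 2.0], dtype=torch.float16)
print(torch.unique(t))  # works on float16 directly, no f32 round-trip needed
```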

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108228
Approved by: https://github.com/lezcano
2023-08-31 16:21:13 +00:00
55d6b80188 torch._numpy: keep f16 CUDA tensors in f16 where possible (#107768)
Confine workarounds for `RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'` to CPU tensors; do computations on CUDA tensors in f16.

Fixes https://github.com/Quansight-Labs/numpy_pytorch_interop/issues/170

We do not really systematically test CUDA tensors in torch._numpy, so I only spot-checked locally that the affected functions work with `tensor.to("cuda")`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107768
Approved by: https://github.com/lezcano
2023-08-23 18:35:47 +00:00
b5c90ba7e7 [dynamo] Fix ndarray.__pow__ (#107746)
As per title. Tests are in the next PR.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107746
Approved by: https://github.com/ezyang
ghstack dependencies: #107687, #107688, #107710, #107711
2023-08-23 13:55:36 +00:00
fada0527fa Dispatch take_along_axis to gather (#107711)
Gather does the same thing, but it is much better supported in the
`torch.compile` stack.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107711
Approved by: https://github.com/ezyang
ghstack dependencies: #107687, #107688, #107710
2023-08-23 01:21:23 +00:00
62113a2361 [dynamo] np.sort(complex) is not implemented (#107710)
This issue was discovered once we were able to trace without breaking
in https://github.com/pytorch/pytorch/pull/107689. Same for the next
one.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107710
Approved by: https://github.com/ezyang
ghstack dependencies: #107687, #107688
2023-08-23 01:21:23 +00:00