Commit Graph

15 Commits

1d0a8eed5d [generate_opcheck_tests] Enable using same failures_dict for multiple testclasses (#110164)
This PR allows us to use the same failures_dict for multiple test
classes. This is helpful if you have a bunch of small TestCases and want
to centralize all the failure dicts into one big one.
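
A minimal usage sketch, assuming the current argument order (test class, namespaces, failures-dict path); the `mylib` namespace, ops, and test bodies are hypothetical:

```python
import torch
from torch.testing._internal.common_utils import TestCase
from torch.testing._internal.optests import generate_opcheck_tests  # assumption: internal location

class TestFoo(TestCase):
    def test_foo(self):
        torch.ops.mylib.foo(torch.randn(3))  # hypothetical custom op

class TestBar(TestCase):
    def test_bar(self):
        torch.ops.mylib.bar(torch.randn(3))  # hypothetical custom op

# One failures dict shared by both classes; the test-class prefix on each
# test name keeps the entries unambiguous.
for cls in (TestFoo, TestBar):
    generate_opcheck_tests(cls, ["mylib"], "mylib_failures_dict.json")
```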

Test Plan:
- existing tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110164
Approved by: https://github.com/williamwen42
2023-09-28 17:56:45 +00:00
bb9779ecd2 Revert D49640259: Revert D49615962: [optests] Test names in failure dicts should be prefixed with test class (#110094)
Summary: Revert D49640259: Revert D49615962: [optests] Test names in failure dicts should be prefixed with test class

Test Plan: revert-hammer

Differential Revision: D49645397

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110094
Approved by: https://github.com/izaitsevfb
2023-09-26 21:16:36 +00:00
2393864070 Revert "[optests] Test names in failure dicts should be prefixed with test class (#110045)"
This reverts commit 76fcec74c413af22186f0782f02aca49ab61dc20.

Reverted https://github.com/pytorch/pytorch/pull/110045 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/110045#issuecomment-1735711094))
2023-09-26 14:56:08 +00:00
76fcec74c4 [optests] Test names in failure dicts should be prefixed with test class (#110045)
We want to use the same failures dict for multiple TestCases. This is
common in e.g. fbgemm. To move towards that, we need to prefix each test
name with its test class to avoid ambiguity.
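
Concretely, entries go from being keyed by the bare test name to `TestClass.test_name`. A hedged sketch with hypothetical operator and test names:

```python
# Statuses ("xfail", "skip") come from the existing failures-dict format.
failures_dict = {
    "fbgemm::jagged_op": {  # hypothetical operator
        # before this change the key was just "test_schema__test_jagged"
        "TestJaggedA.test_schema__test_jagged": {"status": "xfail", "comment": ""},
        "TestJaggedB.test_schema__test_jagged": {"status": "skip", "comment": "flaky"},
    },
}
```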

Differential Revision: [D49615962](https://our.internmc.facebook.com/intern/diff/D49615962/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110045
Approved by: https://github.com/williamwen42
2023-09-26 03:21:12 +00:00
122264a0c0 [generate_opcheck_tests] tests should ignore meta/FakeTensors (#109641)
These tests generally don't work on meta tensors because they need to
compare the data of the Tensors. For example, SchemaCheckMode errors out
if any inputs are meta or fake, because it needs to check their storages
to see if any mutation occurred, and those tensors do not have storages.
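
The guard might look something like this sketch (the helper name is hypothetical; `is_meta` and `FakeTensor` are real PyTorch attributes):

```python
import torch
from torch.utils._pytree import tree_flatten
from torch._subclasses.fake_tensor import FakeTensor

def has_meta_or_fake_inputs(args, kwargs):
    # Skip the crossref checks when any input is a meta or fake tensor:
    # SchemaCheckMode needs real storages to detect mutation, and
    # meta/fake tensors have none.
    flat, _ = tree_flatten((args, kwargs))
    return any(
        isinstance(t, torch.Tensor) and (t.is_meta or isinstance(t, FakeTensor))
        for t in flat
    )
```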

Test Plan:
- new tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109641
Approved by: https://github.com/bdhirsh, https://github.com/soulitzer
ghstack dependencies: #109637, #109638, #109639, #109640
2023-09-20 06:33:37 +00:00
d3d71367b9 [generate_opcheck_tests] Always print a repro (#109640)
On failure of a test, we will always print a "repro". This repro isn't
really runnable but gives the user a sense of how to actually reproduce
the test without the test suite, because using the test suite is a bit
convoluted.

If the user passes PYTORCH_OPCHECK_PRINT_BETTER_REPRO, we will print a
fuller repro that saves the exact problematic test inputs to disk and
reads them back out.
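
The fuller repro is roughly this shape (a sketch; the path, op name, and import location are assumptions):

```python
import torch
from torch.testing._internal.optests import opcheck  # assumption: internal location

# The harness saved the exact failing inputs to disk; load them back and
# re-run the failing check directly, outside the test suite.
args, kwargs = torch.load("/tmp/repro_my_op.pt")  # hypothetical path
opcheck(torch.ops.mylib.my_op.default, args, kwargs)  # hypothetical op
```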

Test Plan:
- expecttests on the generate_repro helper function
- tried this out locally.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109640
Approved by: https://github.com/bdhirsh, https://github.com/soulitzer
ghstack dependencies: #109637, #109638, #109639
2023-09-20 06:33:37 +00:00
af900fe228 [generate_opcheck_tests] flip unified_diff order (#109639)
The arguments to unified_diff were reversed. As written this is a bit
difficult to test.
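
For reference, `difflib.unified_diff` treats its first argument as the "from" side and its second as the "to" side, so swapping them flips every `+`/`-` in the output:

```python
import difflib

expected = ["status: xsuccess\n"]
actual = ["status: xfail\n"]
# Correct order: expected first ("from"), actual second ("to").
print("".join(difflib.unified_diff(expected, actual, "expected", "actual")))
```
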
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109639
Approved by: https://github.com/bdhirsh, https://github.com/soulitzer
ghstack dependencies: #109637, #109638
2023-09-20 06:33:37 +00:00
7564f04389 [generate_opcheck_tests] add type checking (#109638)
Test Plan:
- lintrunner
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109638
Approved by: https://github.com/bdhirsh, https://github.com/soulitzer
ghstack dependencies: #109637
2023-09-20 06:33:37 +00:00
10d575911e [generate_opcheck_tests] rename "success" to "xsuccess" (#109637)
Not BC-breaking because no existing failures dicts have "success" in
them.
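
After the rename, the recognized statuses are roughly (a sketch; the constant name is hypothetical):

```python
# "xsuccess" replaces the old "success"; "xfail" and "skip" are unchanged.
VALID_STATUSES = {"xsuccess", "xfail", "skip"}
```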

Test Plan:
- new tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109637
Approved by: https://github.com/bdhirsh, https://github.com/soulitzer
2023-09-20 06:33:37 +00:00
7f1f5afc91 Run only one pytest parametrization when generating optest (#108936)
Richard, I'm curious to see what you think of this. I'm trying to use optest on the torchvision test suite, and after hacking up pytest support in https://github.com/pytorch/pytorch/pull/108929 I noticed that this was 5x'ing the test time... for no good reason.

* torchvision nms tests before optests: 60 passed, 4 skipped, 1206 deselected in 11.47s
* after optests: 300 passed, 20 skipped, 1206 deselected in 49.85s

There's no good reason for this: torchvision parametrizes the tests to get a spread of random inputs, but for checking schemas or fake tensors we don't actually need to test different values.

This PR hacks up the codegen to replace pytest parametrize markers so that, instead of sampling many values, we sample only one value if you mark it with `opcheck_only_one`. There's a carveout for device parametrization, where we always run all those variants.
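
On the torchvision side, usage looks roughly like this sketch (the test body and parameters are hypothetical; register the custom mark in pytest config to avoid unknown-mark warnings):

```python
import pytest
import torch

@pytest.mark.parametrize("seed", range(10))
@pytest.mark.opcheck_only_one()  # generated optests sample just one seed
def test_nms_random(seed):
    torch.manual_seed(seed)
    boxes = torch.rand(8, 4)  # hypothetical inputs to the op under test
    scores = torch.rand(8)
    # ... call the custom op here
```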

With this PR:

* reduced optests: 88 passed, 4 skipped, 1206 deselected in 13.89s

Companion torchvision PR which uses this at https://github.com/pytorch/vision/pull/7961

Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108936
Approved by: https://github.com/zou3519
2023-09-14 14:54:57 +00:00
55f956f1d2 optests improvements based on torchvision usage on nms (#108929)
- Update cross-ref FakeMode test to use ShapeEnv.  Dynamic ops can now
  return an unbacked SymInt.  We always accept this as equal to whatever
  the real value was.
- Relax test so it works on all classes, not just unittest.TestCase
- Properly wrap the original method, so things like
  pytest.mark.parametrize are carried over
- Support dynamic shapes by default for make_fx `tracing_mode="fake"` without symbolifying everything else
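
A minimal sketch of the last point:

```python
import torch
from torch.fx.experimental.proxy_tensor import make_fx

def f(x):
    return x.sin().sum()

# With tracing_mode="fake", inputs are converted to fake tensors backed by
# a ShapeEnv, so dynamic shapes work without symbolifying everything else.
gm = make_fx(f, tracing_mode="fake")(torch.randn(4))
print(gm.graph)
```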

Fixes https://github.com/pytorch/pytorch/issues/108927

Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108929
Approved by: https://github.com/zou3519
2023-09-13 13:26:15 +00:00
bfa8429c6a [optests] Changed failures_dict format to json; automatic update of failures_dict (#109110)
We changed the failures_dict format from .py to json and added a way to
automatically update the failures dict (the user can set
PYTORCH_OPCHECK_ACCEPT=1 to do so), assuming the tests don't crash in the
process.

Some details:
- We introduced a FailuresDict class that handles save/load and from which one
can query a test status ("xfail", "skip", etc).
- PYTORCH_OPCHECK_ACCEPT=1 does not override everything. In particular: it
doesn't try to update the failures dict for a test marked as "skip", but it
will update it for tests marked as "xfail" or "success".
- PYTORCH_OPCHECK_ACCEPT=1 also does not override the "comment" field, unless
it is flipping an "xfail" into "success".
- I'll update the gdoc linked in the comments with how to actually use
PYTORCH_OPCHECK_ACCEPT=1 internally (it's not trivial).

Note that this isn't multithreading-safe; the current recommendation is to run
the tests sequentially if the user wants to use PYTORCH_OPCHECK_ACCEPT=1.
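
A hedged usage sketch that respects that recommendation (the test file path is hypothetical):

```python
import os
import subprocess

# Re-run the generated optests with automatic failures-dict updating on;
# use a single sequential process since ACCEPT mode is not thread-safe.
env = dict(os.environ, PYTORCH_OPCHECK_ACCEPT="1")
subprocess.run(["python", "test/my_optests.py"], env=env, check=True)
```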

Differential Revision: D49167181

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109110
Approved by: https://github.com/ezyang
2023-09-13 13:24:15 +00:00
df42f15e28 Improve generate_opcheck_tests, add opcheck utility (#107597)
Summary:
This PR improves `generate_opcheck_tests`:
- We no longer run automated testing on operators called inside
  torch.jit.trace / torch.jit.script
- I improved the error message and added a guide on what to do if one of the
  tests fail.
- While dogfooding this, I realized I wanted a way to reproduce the failure
  without using the test suite. If you pass `PYTORCH_OPCHECK_PRINT_REPRO`, it
  will now print a minimal repro on failure. This involves serializing some
  tensors to disk.
- The minimal repro includes a call to a new API called `opcheck`.

The opcheck utility runs the same checks as the tests generated
by `generate_opcheck_tests`. For simplicity, it doesn't have a lot of knobs.
The general workflow is: if an autogenerated test fails, the user may find it
easier to reproduce the failure without the test suite by using opcheck.
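
Direct usage is roughly this shape (a sketch; the import location and op name are assumptions):

```python
import torch
from torch.testing._internal.optests import opcheck  # assumption: internal location

x = torch.randn(3, requires_grad=True)
# Runs the same checks as the generated tests (schema, autograd
# registration, fake tensor, aot_autograd) against this one call.
opcheck(torch.ops.mylib.my_op.default, (x,), {})  # hypothetical custom op
```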

Test Plan: - new tests

Differential Revision: D48485013

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107597
Approved by: https://github.com/ezyang
2023-08-22 15:16:04 +00:00
c69514ccb2 Update generate_opcheck_tests, also use it to test some internal tests (#107328)
Summary:
We change `generate_opcheck_tests` to be a bit more user-friendly. Note that
there are some internal-only changes; go review them there.

Test Plan: - tests

Differential Revision: D47965247

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107328
Approved by: https://github.com/ezyang
2023-08-17 21:18:14 +00:00
e4e9aa28a7 Add generate_opcheck_tests, a PT2 crossref testing mechanism (#106903)
This PR adds `generate_opcheck_tests`. This is a utility that adds
crossref tests to an existing TestCase whose tests invoke operators. The
main use case is if you have a large test suite that already exercises
operators and want to add automated testing that the operators are
correct, without actually refactoring your code into something like
OpInfos.

Given a `test_` method of a TestCase, we will generate one additional
test for each of {schema correctness, autograd registration,
faketensor rule, aot_autograd static shapes, aot_autograd dynamic
shapes}. Each newly generated test runs the original test method under a
special torch_function mode (OpCheckMode) that intercepts
`op(*args, **kwargs)` calls and additionally passes (op, args, kwargs) to
a separate checker (e.g. SchemaCheck).
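
In sketch form, with the argument order and failures-dict path as assumptions:

```python
import torch
from torch.testing._internal.common_utils import TestCase, run_tests
from torch.testing._internal.optests import generate_opcheck_tests  # assumption

class MiniOpTest(TestCase):
    def test_cumsum(self):
        x = torch.randn(3)
        torch.ops.aten.cumsum.default(x, 0)  # call the op through torch.ops

# Generates test_schema__test_cumsum, test_faketensor__test_cumsum, etc.,
# each re-running test_cumsum under OpCheckMode.
generate_opcheck_tests(MiniOpTest, ["aten"], "failures_dict.json")

if __name__ == "__main__":
    run_tests()
```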

Nitty-gritty details:
- If a test is named test_cumsum, we end up generating new tests
(`test_schema__test_cumsum`, `test_<something>__test_cumsum`)
- Users can provide a dictionary of expected failures / skips that is indexed on
operators. This gives us a sense of which operators support PT2 and which
require fixing before they support PT2.

Due to some co-dev limitations, I'm planning on landing this PR first
and then using it to add crossref testing for internal tests and
fbgemms. I could squash this PR with the internal changes if we want to
see how that works out; just let me know.

Test Plan:
- We create a mini op test suite called MiniOpTests.
- Then, we use `generate_opcheck_tests` to generate tests onto it.
- We have our own test xfail list to check that the things that should
fail do fail.
- Finally, there is a separate TestGenerateOpcheckTests that checks that
the correct number of tests were generated and also tests some helper
functions.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106903
Approved by: https://github.com/ezyang, https://github.com/bdhirsh
2023-08-15 02:16:07 +00:00