243 Commits

96f46316c9 Preserve PyTest Cache across job runs (#100522)
Preserves the PyTest cache from one job run to the next. In a later PR, this will be used to change the order in which we actually run those tests.

The process is:
1. Before running tests, check S3 for uploaded caches from any shard of the current job
2. If any exist, download them all and merge their contents, putting the merged cache in the default .pytest_cache folder
3. After running the tests, merge the now-current .pytest_cache folder with the cache previously downloaded for the current shard, so the merged cache contains every test that has ever failed for the given PR in the current shard
4. Upload the resulting cache file back to S3

The S3 folder has a retention policy of 30 days, after which uploaded cache files are auto-deleted.
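
A minimal sketch of the merge step (step 3), assuming pytest's standard `v/cache/lastfailed` layout (a JSON object keyed by test id); the paths and function name here are illustrative, not the actual CI code:
```python
import json
from pathlib import Path

def merge_pytest_caches(downloaded: Path, current: Path) -> None:
    # Union the lastfailed sets so the merged cache remembers every test
    # that has ever failed for this shard.
    rel = Path("v") / "cache" / "lastfailed"
    merged = {}
    for cache_dir in (downloaded, current):
        f = cache_dir / rel
        if f.exists():
            merged.update(json.loads(f.read_text()))
    out = current / rel
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(json.dumps(merged, indent=2))
```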
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100522
Approved by: https://github.com/huydhn
2023-05-10 18:37:28 +00:00
3beafc91d1 USE_FAST_NVCC Windows (#95206)
USE_FAST_NVCC now works on Windows.

Fixes #67100

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95206
Approved by: https://github.com/ezyang
2023-03-06 15:04:24 +00:00
b8151d2ba9 Utility for running delta comparisons between two flag configs (#95411)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95411
Approved by: https://github.com/Chillee
2023-02-25 02:30:22 +00:00
a8fdfb4ba8 [inductor] Persistent reductions (#92267)
This one may need to wait for the new MLIR Triton to land, as it triggers some Triton crashes.

Before:
```
$ pytest test/inductor/test_torchinductor.py -vsk test_softmax_one_kernel_loop_cuda
...
@reduction(
    size_hints=[16, 32],
    reduction_hint=ReductionHint.INNER,
    filename=__file__,
    meta={'signature': {0: '*fp32', 1: '*fp32', 2: 'i32', 3: 'i32'}, 'device': 0, 'constants': {}, 'mutated_arg_names': [], 'configs': [instance_descriptor(divisible_by_16=(0, 1, 2, 3), equal_to_1=())]}
)
@triton.jit
def triton_(in_ptr0, out_ptr2, xnumel, rnumel, XBLOCK : tl.constexpr, RBLOCK : tl.constexpr):
    xnumel = 16
    rnumel = 32
    xoffset = tl.program_id(0) * XBLOCK
    xindex = xoffset + tl.arange(0, XBLOCK)[:, None]
    xmask = xindex < xnumel
    rbase = tl.arange(0, RBLOCK)[None, :]
    x0 = xindex
    _tmp1 = tl.zeros([XBLOCK, RBLOCK], tl.float32) + float("-inf")
    for roffset in range(0, rnumel, RBLOCK):
        rindex = roffset + rbase
        rmask = rindex < rnumel
        r1 = rindex
        tmp0 = tl.load(in_ptr0 + (r1 + (32*x0)), rmask & xmask, eviction_policy='evict_last')
        _tmp1 = tl.where(xmask & rmask & (_tmp1 < tmp0), tmp0, _tmp1)
    tmp1 = tl.max(_tmp1, 1)[:, None]
    _tmp5 = tl.zeros([XBLOCK, RBLOCK], tl.float32) + 0
    for roffset in range(0, rnumel, RBLOCK):
        rindex = roffset + rbase
        rmask = rindex < rnumel
        r1 = rindex
        tmp2 = tl.load(in_ptr0 + (r1 + (32*x0)), rmask & xmask, eviction_policy='evict_last')
        tmp3 = tmp2 - tmp1
        tmp4 = tl.exp(tmp3)
        _tmp5 = tl.where(xmask & rmask, _tmp5 + tmp4, _tmp5)
    tmp5 = tl.sum(_tmp5, 1)[:, None]
    for roffset in range(0, rnumel, RBLOCK):
        rindex = roffset + rbase
        rmask = rindex < rnumel
        r1 = rindex
        tmp6 = tl.load(in_ptr0 + (r1 + (32*x0)), rmask & xmask, eviction_policy='evict_last')
        tmp7 = tmp6 - tmp1
        tmp8 = tl.exp(tmp7)
        tmp9 = tmp8 / tmp5
        tl.store(out_ptr2 + (r1 + (32*x0) + tl.zeros([XBLOCK, RBLOCK], tl.int32)), tmp9, rmask & xmask)
```

After:
```
$ pytest test/inductor/test_torchinductor.py -vsk test_softmax_one_kernel_persist_cuda
...
@persistent_reduction(
    size_hints=[16, 32],
    reduction_hint=ReductionHint.INNER,
    filename=__file__,
    meta={'signature': {0: '*fp32', 1: '*fp32', 2: 'i32', 3: 'i32'}, 'device': 0, 'constants': {}, 'mutated_arg_names': [], 'configs': [instance_descriptor(divisible_by_16=(0, 1, 2, 3), equal_to_1=())]}
)
@triton.jit
def triton_(in_ptr0, out_ptr2, xnumel, rnumel, XBLOCK : tl.constexpr, RBLOCK : tl.constexpr):
    xnumel = 16
    rnumel = 32
    xoffset = tl.program_id(0) * XBLOCK
    xindex = xoffset + tl.arange(0, XBLOCK)[:, None]
    xmask = xindex < xnumel
    rindex = tl.arange(0, RBLOCK)[None, :]
    rmask = rindex < rnumel
    r1 = rindex
    x0 = xindex
    tmp0 = tl.load(in_ptr0 + (r1 + (32*x0)), rmask & xmask)
    tmp2 = tl.where(xmask & rmask, tmp0, float("-inf"))
    tmp3 = tl.max(tmp2, 1)[:, None]
    tmp4 = tmp0 - tmp3
    tmp5 = tl.exp(tmp4)
    tmp7 = tl.where(xmask & rmask, tmp5, 0)
    tmp8 = tl.sum(tmp7, 1)[:, None]
    tmp9 = tmp5 / tmp8
    tl.store(out_ptr2 + (r1 + (32*x0) + tl.zeros([XBLOCK, RBLOCK], tl.int32)), tmp9, rmask & xmask)
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92267
Approved by: https://github.com/Chillee
2023-02-12 17:39:25 +00:00
25c0737adc dont graph break on list[SymInt] comparisons (#94054)
Reland of https://github.com/pytorch/pytorch/pull/92617

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94054
Approved by: https://github.com/jansel
2023-02-05 04:47:12 +00:00
68b06ee4d4 Add torch_compile_debug/ to .gitignore (#93889)
# Summary
I have nearly checked this directory in multiple times. Adding it to .gitignore.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93889
Approved by: https://github.com/malfet
2023-02-02 03:31:55 +00:00
7078ad5b8c Reland "AOT Autograd refactor + cleanup, handle intermediate views of bases, use view replay, fix non-tensor input handling" (#92076)
Original PR: https://github.com/pytorch/pytorch/pull/89532

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92076
Approved by: https://github.com/janeyx99, https://github.com/albanD
2023-01-12 21:32:05 +00:00
d0a4e2e782 Don't remove files across the whole OS on clean (#91503)
setup.py clean no longer removes paths matching .gitignore patterns across the entire OS; only files inside the repository are removed.

`/build_*` had to be removed from .gitignore because, with the wildcard fixed, the build_variables.bzl file was deleted on cleanup.
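
A rough sketch of the fixed behavior (illustrative, not the actual setup.py code): expand .gitignore wildcards relative to the repository root rather than the filesystem root.
```python
import glob
import os
import shutil

REPO_ROOT = os.path.dirname(os.path.abspath(__file__))

def clean_ignored(pattern: str) -> None:
    # A leading '/' in .gitignore anchors a pattern to the repo root,
    # so strip it and glob from REPO_ROOT instead of from '/'.
    for path in glob.glob(os.path.join(REPO_ROOT, pattern.lstrip("/"))):
        if os.path.isdir(path):
            shutil.rmtree(path, ignore_errors=True)
        else:
            os.remove(path)
```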
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91503
Approved by: https://github.com/soumith
2023-01-06 05:13:51 +00:00
8695f0cced Rectify native_batch_norm schema by splitting it into two legit schemas (#88697)
Using the same repro from the issue (but with BatchNorm2D)

Rectifies the native_batch_norm schema by splitting it into two:
1. one with NON-optional, alias-able running_mean and running_var inputs
2. the other without those parameters at all (the no_stats variation)
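
For context on why the first variant needs alias annotations, here is a small demonstration (plain PyTorch, not code from this PR) of batch norm mutating its running stats in place during training:
```python
import torch

bn = torch.nn.BatchNorm2d(3)           # modules start in training mode
before = bn.running_mean.clone()
bn(torch.randn(2, 3, 4, 4))            # forward pass updates running stats in place
assert not torch.equal(before, bn.running_mean)
```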

**Calling for name suggestions!**

## test plan
I've added tests in test_functionalization.py as well as an entry in common_method_invocations.py for `native_batch_norm_legit`.
CI should pass.

## next steps
For BC/FC reasons, we reroute native_batch_norm to call our new schemas ONLY through the Python dispatcher, but in two weeks or so we should make `native_batch_norm_legit` the official batch_norm.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88697
Approved by: https://github.com/albanD
2022-11-23 23:23:17 +00:00
f717986f93 .gitignore log files (#88085)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88085
Approved by: https://github.com/albanD
2022-10-31 13:40:30 +00:00
09d720919e Add venv to gitignore (#86702)
`venv` is the conventional directory name for virtual environments. Adding it to .gitignore supports development that does not use Anaconda to manage environments.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/86702
Approved by: https://github.com/kit1980
2022-10-17 21:50:03 +00:00
f1fdb6efbd Manual changes for moving dynamo to core (#86621)
This is the subset of the changes in #86461 not auto-generated by `copy_to_core.sh`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86621
Approved by: https://github.com/albanD
2022-10-11 23:01:21 +00:00
936e93058b Delete torch::deploy from pytorch core (#85953)
As we have migrated torch::deploy over to https://github.com/pytorch/multipy, we can now delete it from pytorch core as ongoing development will happen there.

This PR was created due to syncing issues with https://github.com/pytorch/pytorch/pull/85443 which is where the review history can be found.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85953
Approved by: https://github.com/seemethere, https://github.com/malfet
2022-10-06 07:20:16 +00:00
d3d163af80 Add xla/ folder to gitignore (#84632)
As per title
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84632
Approved by: https://github.com/ezyang
2022-09-07 15:32:05 +00:00
6f2a88dd50 script to monitor memory + cpu utilization (#82006)
Adds a Python script that runs in the background during test jobs, logging the CPU and GPU memory usage and CPU utilization of Python test processes (really, any Python process) to a file that is then uploaded as an artifact.
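
A bare-bones sketch of this kind of background logger (psutil for CPU/memory, nvidia-smi for GPU memory; the names and output format are illustrative, not the actual script):
```python
import json
import subprocess
import time

import psutil

def log_usage(path: str = "usage_log.txt", interval: float = 1.0) -> None:
    with open(path, "a") as f:
        while True:
            entry = {
                "time": time.time(),
                "cpu_percent": psutil.cpu_percent(),
                "mem_percent": psutil.virtual_memory().percent,
            }
            try:
                # One line per GPU; used memory in MiB.
                out = subprocess.check_output(
                    ["nvidia-smi", "--query-gpu=memory.used",
                     "--format=csv,noheader,nounits"], text=True)
                entry["gpu_mem_mib"] = [int(x) for x in out.split()]
            except (FileNotFoundError, subprocess.CalledProcessError):
                pass  # no GPU on this machine
            f.write(json.dumps(entry) + "\n")
            f.flush()
            time.sleep(interval)
```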

I plan on using the GPU memory usage stats to better understand how to parallelize the tests, but it is easy to add other stats if people want them.

In the future, we want to add the ability to track network usage to see if we can decrease it.  GPU utilization will also likely need to be improved.

Click the HUD link to see the uploaded usage log artifacts.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/82006
Approved by: https://github.com/huydhn
2022-07-25 16:53:31 +00:00
3771331b2a [ci] move sccache stats off RDS
We want to decommission RDS, so upload sccache stats to S3/GHA as
appropriate.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79366

Approved by: https://github.com/janeyx99
2022-06-13 16:09:53 +00:00
9da5defff6 Package config/template files with torchgen (#78942)

This PR packages native_functions.yaml, tags.yaml and ATen/templates
with torchgen.

This PR:
- adds a step to setup.py to copy the relevant files over into torchgen
- adds a docstring for torchgen (so `import torchgen; help(torchgen)`
says something)
- adds a helper function in torchgen so you can get the torchgen root
directory and figure out where the packaged files are (a sketch follows
this list)
- changes some scripts to explicitly pass the location of torchgen,
which will be helpful for the first item in the Future section.
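
Such a helper might look like this (the actual name in torchgen may differ; this is an illustrative assumption):
```python
import os
import torchgen

def torchgen_root() -> str:  # hypothetical name
    # Directory of the installed torchgen package; the packaged files
    # (native_functions.yaml, tags.yaml, ATen/templates) live under
    # <root>/packaged/.
    return os.path.dirname(os.path.abspath(torchgen.__file__))
```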

Future
======

- torchgen, when invoked from the command line, should use sources
in torchgen/packaged instead of aten/src. I'm unable to do this because
callers (i.e., PyTorch CI) invoke `python -m torchgen.gen` without
installing torchgen.
- the source of truth for all of these files should be in torchgen.
This is a bit annoying to execute on due to potential merge conflicts
and dealing with merge systems.
- CI and testing. The way things are set up right now is really fragile;
we should have a CI job for torchgen.

Test Plan
=========
I ran the following locally:

```
python -m torchgen.gen -s torchgen/packaged
```
and verified that it produced output files.

Furthermore, I did a setup.py install and checked that the files are
actually being packaged with torchgen.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78942
Approved by: https://github.com/ezyang
2022-06-07 13:33:55 +00:00
1f8049566f Re-land BUCK build for pytorch mobile (#77612)
see https://github.com/pytorch/pytorch/pull/76480
fixed most lint errors
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77612
Approved by: https://github.com/kit1980
2022-05-17 00:30:13 +00:00
530481ed69 Revert "[mobile] add buck build for mobile targets (#76480)"
This reverts commit 168dc70faf9764417a7e41a14bf2f4e15a7f3e4a.

Reverted https://github.com/pytorch/pytorch/pull/76480 on behalf of https://github.com/atalman
2022-05-16 16:14:17 +00:00
168dc70faf [mobile] add buck build for mobile targets (#76480)
Create buck targets to replicate internal BUCK build, including
- XNNPACK
- QNNPACK
- C10
- aten_cpu
- torch_mobile_core
- torch_mobile_all_ops
- ptmobile_benchmark

The build is able to run MobileNet v2 using ptmobile_benchmark (with all ops).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76480
Approved by: https://github.com/seemethere, https://github.com/dreiss
2022-05-15 18:42:41 +00:00
6e959dec69 add buck generated files to ignore list

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75744
Approved by: https://github.com/kit1980
2022-04-27 01:40:17 +00:00
51666ff123 Do not ignore lazy/generated/README.me
Fixes https://github.com/pytorch/pytorch/issues/75624

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75626
Approved by: https://github.com/osalpekar
2022-04-13 12:50:55 +00:00
caa28ff495 refining regex of .gitignore core.*
- the core.* regex is changed to a more refined pattern

Fixes #74890

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75550
Approved by: https://github.com/seemethere
2022-04-11 01:41:07 +00:00
89e79f844d Add list of supported ATen ops by ONNX converter into torch.onnx page
This PR introduces a new documentation page listing the ATen operators supported by the ONNX converter.

When `make html` (or similar) is called, a Python script will generate a temporary CSV file inside the doc build folder with a list of operators/opsets currently supported by the PyTorch ONNX exporter. That CSV is used by Sphinx to build an HTML table using the same theme as the rest of the documentation.
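
The CSV-writing step might look roughly like this (a sketch under assumed names; the real script derives the data by cross-referencing the exporter's registrations):
```python
import csv

def write_supported_ops_csv(path, supported):
    # `supported` maps an ATen operator to the opsets that can export it,
    # e.g. {"aten::add": [9, 10, 11]} (hypothetical data).
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["ATen operator", "Supported opsets"])
        for op, opsets in sorted(supported.items()):
            writer.writerow([op, ", ".join(str(v) for v in opsets)])
```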

That page is linked to the existing `onnx.rst`, including its table of contents.

@BowenBao @shubhambhokare1 Feel free to add more details on how the script cross-references ONNX symbolics and the ATen operator list from the torch JIT API.

Below is the workflow for the changed pages:

The initial torch.onnx page was modified to add a link to the list of supported aten operators
![image](https://user-images.githubusercontent.com/5469809/159046387-c459bffc-c9b2-4fcb-8468-8181fdddf911.png)

The screen below highlights the text structure changes to the `ATen operators` section
![image](https://user-images.githubusercontent.com/5469809/159046730-ccd1e594-c8e6-4b8d-a9ec-8bf6ad58a435.png)

Finally the new page with the list of supported operators is shown below
![image](https://user-images.githubusercontent.com/5469809/159046872-0d99b769-8b95-4c2b-99a9-a8cfdd0b6ecf.png)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74397
Approved by: https://github.com/garymm, https://github.com/malfet
2022-04-07 00:05:44 +00:00
74b23b2066 quantization: autogenerate quantization backend configs for documentation (#75126)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75126

Quantization has a high volume of configurations for how to quantize an
op into a reference model representation, which is useful for a backend's
lowering step. An example of this is

```
{'dtype_configs': [{'input_dtype': torch.quint8,
                    'output_dtype': torch.quint8}],
 'observation_type': <ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT: 0>,
 'pattern': <class 'torch.nn.modules.conv.ConvTranspose1d'>},
```

These configs are checked into master, and they are created with Python functions.
Therefore, there is no easy way for the user to see what the configs actually
are without running some Python code.

This PR is one approach to documenting these configs. Here is what it does:
1. during the documentation build, write a text file of the configs (sketched below)
2. render that text file on a quantization page, with some additional context
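
A sketch of step 1 (names are illustrative, not the actual doc-build code): pretty-print each backend config entry into a text file that the docs page then includes.
```python
import pprint

def write_backend_configs(path, configs):
    # `configs` is a list of per-op dicts like the example above.
    with open(path, "w") as f:
        for cfg in configs:
            f.write(pprint.pformat(cfg) + "\n\n")
```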

In the future, this could be extended to autogenerate better-looking tables,
such as op support per backend and dtype, op support per valid quantization
settings per backend, etc.

Test Plan:
```
cd docs
make html
cd html
python -m http.server 8000
// render http://[::]:8000/quantization-backend-configuration.html
// it renders correctly
```

Reviewed By: ejguan

Differential Revision: D35365461

Pulled By: vkuzo

fbshipit-source-id: d60f776ccb57da9db3d09550e4b27bd5e725635a
(cherry picked from commit 14865c0e23bc080120342c8f9278f0fae8eb8fbd)
2022-04-04 22:22:30 +00:00
71003c74f8 Add typing for torch.return_type
Currently, `NamedTuple` return types are created in `torch/_VF.pyi` instead of
typing being added for the symbols in `torch/return_types.py`. This also
fixes the type names to match the actual names in the Python code.
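
For reference, these are the named return types in question; for example:
```python
import torch

x = torch.tensor([[1.0, 3.0], [2.0, 0.0]])
result = torch.max(x, dim=0)
print(type(result))       # <class 'torch.return_types.max'>
print(result.values)      # named-field access works...
values, indices = result  # ...and so does plain tuple unpacking
```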

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74199

Approved by: https://github.com/ezyang
2022-03-29 02:17:21 +00:00
72b1194464 Run lazy tensor codegen in generate_code.py (#73996)
Summary:
Hooks into the existing autograd codegen script (generate_code.py) to take advantage of its integrations into buck/cmake/bazel.

Adds a new option (--gen_lazy_ts_backend) to generate_code.py, calling this from the CMake OSS build and the fbcode build, but not from other internal xplat/ovrsource builds (these could be opted in later).

Bazel support is added in a later diff.

Includes one generated file (torch/csrc/lazy/generated/LazyIr.h) in a unit test (test/cpp/lazy/test_ir.cpp) to partially verify the generator is working, but does not yet compile the remaining output sources from the generator, as they depend on other files not yet landed from the lazy_tensor_staging branch.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73996

Test Plan: OSS/internal CI - verify all builds are working and test_ir.cpp compiles LazyIr.h

Reviewed By: ezyang

Differential Revision: D34408536

fbshipit-source-id: 8af0aea3b95d81eccafc17d64390d70ddd176515
(cherry picked from commit f930612f2bad61c76eb02d85cfbec9f33a1459dc)
2022-03-17 15:31:26 +00:00
ff3688f07a [BE Hackathon][DataPipe] Automatically generate datapipe.pyi via CMake (#73991)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73991

Automatically generates `datapipe.pyi` via CMake and removes the generated .pyi file from Git. Users should have the .pyi file locally after building for the first time.

I will also be adding an internal equivalent diff for buck.

Test Plan: Imported from OSS

Reviewed By: ejguan

Differential Revision: D34868001

Pulled By: NivekT

fbshipit-source-id: 448c92da659d6b4c5f686407d3723933c266c74f
(cherry picked from commit 306dbc5f469e63bc141dac57ef310e6f0e16d9cd)
2022-03-15 14:46:34 +00:00
fc832d476d gitignore tools/bazel executable (#72878)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72878

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Reviewed By: anjali411

Differential Revision: D34252470

Pulled By: ezyang

fbshipit-source-id: 5b4d6738c2fed7c1acc860fd9addaca8a24fa937
(cherry picked from commit 5aa28474a262859a0b543e14f53691650c5752ed)
2022-02-17 16:17:23 +00:00
8bdbe94344 Add forward compatibility tests in CI (#64139)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/64139

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D30626912

Pulled By: tugsbayasgalan

fbshipit-source-id: 781a88386701b42e2e86daaca0a779d1fc1c4df3
2022-01-05 23:40:06 -08:00
17f3179d60 Back out "[pytorch][PR] Add ability for a mobile::Module to save as flatbuffer" (#69796)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69796

(Note: this ignores all push blocking failures!)

Test Plan: External CI + Sandcastle

Reviewed By: zhxchen17

Differential Revision: D33032671

fbshipit-source-id: dbf6690e960e25d6a5f19043cbe792add2acd7ef
2021-12-10 21:29:53 -08:00
d3649309e6 [pytorch][PR] Add ability for a mobile::Module to save as flatbuffer (#69306)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69306

Included functions:

save_mobile_module -> saves a mobile::Module to flatbuffer
load_mobile_module_from_file -> loads a flatbuffer into mobile::Module
parse_mobile_module -> parses from bytes or a deserialized flatbuffer Module object

Test Plan: unittests

Reviewed By: gmagogsfm

Differential Revision: D32806835

fbshipit-source-id: 71913c6650e225634f878946bd16960d377a7f57
2021-12-09 14:53:31 -08:00
00ebbd5ef6 Revert D32010095: [pytorch][PR] Add ability for a mobile::Module to save as flatbuffer
Test Plan: revert-hammer

Differential Revision:
D32010095 (41d35dc201)

Original commit changeset: d763b0557780

fbshipit-source-id: bf746a0389135c9f5f67f00f449435ce08fb5f6d
2021-12-02 06:41:40 -08:00
41d35dc201 Add ability for a mobile::Module to save as flatbuffer (#67351)
Summary:
Included functions:

* save_mobile_module -> saves a mobile::Module to flatbuffer
* load_mobile_module_from_file -> loads a flatbuffer into mobile::Module
* parse_mobile_module -> parses from bytes or a deserialized flatbuffer Module object

Fixes #{issue number}

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67351

Reviewed By: iseeyuan

Differential Revision: D32010095

Pulled By: qihqi

fbshipit-source-id: d763b0557780f7c2661b6485105b045e41a5e8f1
2021-12-01 23:58:15 -08:00
478069d6f2 Remove duplicate .DS_Store in gitignore (#68981)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/68981

Reviewed By: samdow

Differential Revision: D32707039

Pulled By: soulitzer

fbshipit-source-id: 346f0f3de583d995be34c252db4f9f26cd574ba8
2021-12-01 07:28:33 -08:00
24b60b2cbf [lint] lintrunner fixes/improvements (#68292)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68292

- noqa was typo-d to be the same as type: ignore
- generalize clang-tidy initialization and use it for clang_format as well
- Add a script that lets you update the binaries in s3 relatively easily

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D32403934

Pulled By: suo

fbshipit-source-id: 4e21b22605216f013d87d636a205707ca8e0af36
2021-11-15 11:08:26 -08:00
a5a10fe353 Move all downloading logic out of common_utils.py (#61479)
Summary:
and into tools/ folder

Currently run_test.py invokes tools/test_selections.py to:
1. download and analyze which test files to run
2. download and parse S3 stats and pass the info to local files
3. common_utils.py then uses the downloaded S3 stats to determine which test cases to run.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61479

Reviewed By: janeyx99

Differential Revision: D29661986

Pulled By: walterddr

fbshipit-source-id: bebd8c474bcc2444e135bfd2fa4bdd1eefafe595
2021-07-12 11:23:22 -07:00
f86460a352 Add coverage files to .gitignore (#61144)
Summary:
Fixes failures when coverage is turned on: https://github.com/pytorch/pytorch/runs/2966295169 https://github.com/pytorch/pytorch/runs/2964409741

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61144

Test Plan:
```bash
$ echo hi > test/.coverage.jit.1625168654.4504092
$ git status
$
```

Reviewed By: zhouzhuojie

Differential Revision: D29530709

Pulled By: driazati

fbshipit-source-id: 0e6a1cb217c4d48f14c0c58a546f98393d2b0392
2021-07-07 15:28:35 -07:00
a1ad28da10 Refactor clang_tidy.py (#61119)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61119

This change splits the clang-tidy CI job into smaller steps and uses a
refactored version of the clang_tidy.py script.

The new folder structure is as follows:
```
tools/linter/clang_tidy
|_ __main__.py
|_ requirements.txt
|_ run.py
|_ setup.sh
```

`__main__.py`

This script will run `tools/linter/clang_tidy/setup.sh` if a `build`
directory doesn't exist, mimicking what used to be done as a separate
step in the CI job.

After that, it will invoke `clang-tidy` with default arguments being
declared in the script itself (as opposed to declaring them in
lint.yml).

The reasoning behind this approach is two-fold:

- Make it easier to run `clang-tidy` locally using this script
- De-duplicate the option passing

`requirements.txt`

Contains a list of additional python dependencies needed by the
`clang-tidy` script.

`setup.sh`

If a build directory doesn't exist, this command will run the necessary
codegen and build commands for running `clang-tidy`

Example usage:
```
python3 tools/linter/clang_tidy --parallel
```
Notice that we don't have to put the `.py` at the end of `clang_tidy`.

Test Plan:
Run the following command:
```
python3 tools/linter/clang_tidy --paths torch/csrc/fx --parallel
```

Reviewed By: walterddr, janeyx99

Differential Revision: D29568582

Pulled By: 1ntEgr8

fbshipit-source-id: cd6d11c5cb8ba9f1344a87c35647a1cd8dd45b04
2021-07-06 16:02:11 -07:00
c63a0d0cfe Adding windows CUDA smoke tests on PRs (#59686)
Summary:
Adding windows CUDA smoke tests on PRs (master should run the full suite).

Next step:
- Automate data update so we get a new smoke test list without manual effort

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59686

Test Plan: https://github.com/pytorch/pytorch/actions/runs/958296267 The sharded smoke tests still take long because of dependency installation.

Reviewed By: walterddr

Differential Revision: D29243533

Pulled By: janeyx99

fbshipit-source-id: dde7ba127fa15c95bda0e833cc5311598fb85e2b
2021-06-23 10:13:50 -07:00
97dfc7e300 [Reland] Adding run specified tests option to run_test.py (#59649)
Summary:
Reland of https://github.com/pytorch/pytorch/issues/59487

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59649

Reviewed By: samestep

Differential Revision: D28970751

Pulled By: janeyx99

fbshipit-source-id: 6e28d4dcfdab8a49da4b6a02c57516b08bacd7b5
2021-06-08 16:04:46 -07:00
5d6a10a765 Revert D28913223: [pytorch][PR] Adding run-specified-test-cases option in run_test.py
Test Plan: revert-hammer

Differential Revision:
D28913223 (24432eaa29)

Original commit changeset: 0d1f99109734

fbshipit-source-id: 47c073720cff23a5d4cb64556381c46025e90937
2021-06-08 02:18:16 -07:00
24432eaa29 Adding run-specified-test-cases option in run_test.py (#59487)
Summary:
The run-specified-test-cases option would allow us to specify a list of test cases to run via a CSV with at minimum two columns: test_filename and test_case_name.

This PR also adds a .json extension to some files we use, for better clarity.

Usage:
`python test/run_test.py --run-specified-test-cases <csv_file>` where the csv file can look like:
```
test_filename,test_case_name,test_total_time,windows_only_failure_sha_count,total_sha_count,windows_failure_count,linux_failure_count,windows_total_count,linux_total_count
test_cuda,test_cudnn_multiple_threads_same_device,8068.8409659525,46,3768,53,0,2181,6750
test_utils,test_load_standalone,8308.8062920459,14,4630,65,0,2718,8729
test_ops,test_forward_mode_AD_acosh_cuda_complex128,91.652619369806,11,1971,26,1,1197,3825
test_ops,test_forward_mode_AD_acos_cuda_complex128,91.825633094915,11,1971,26,1,1197,3825
test_profiler,test_source,60.93786725749,9,4656,21,3,2742,8805
test_profiler,test_profiler_tracing,203.09352795241,9,4662,21,3,2737,8807
```
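
Consuming such a CSV might look like this minimal sketch (column names as in the example above; the actual run_test.py logic may differ):
```python
import csv
from collections import defaultdict

def load_specified_test_cases(path):
    tests = defaultdict(list)
    with open(path) as f:
        for row in csv.DictReader(f):
            tests[row["test_filename"]].append(row["test_case_name"])
    # e.g. {"test_cuda": ["test_cudnn_multiple_threads_same_device"], ...}
    return tests
```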

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59487

Test Plan:
Without specifying the option, everything behaves as it did before.

Running `python test/run_test.py --run-specified-test-cases windows_smoke_tests.csv` resulted in this paste P420276949 (you can see internally). A snippet looks like:
```
(pytorch) janeyx@janeyx-mbp pytorch % python test/run_test.py --run-specified-test-cases windows_smoke_tests.csv
Loading specified test cases to run from windows_smoke_tests.csv.
Processed 28 test cases.
Running test_cpp_extensions_jit ... [2021-06-04 17:24:41.213644]
Executing ['/Users/janeyx/miniconda3/envs/pytorch/bin/python', 'test_cpp_extensions_jit.py', '-k', 'test_jit_cuda_archflags'] ... [2021-06-04 17:24:41.213781]
s
----------------------------------------------------------------------
Ran 1 test in 0.000s

OK (skipped=1)
...
```
With pytest, an example invocation would be:
`Running test_dataloader ... [2021-06-04 17:37:57.643039]
Executing ['/Users/janeyx/miniconda3/envs/pytorch/bin/python', '-m', 'pytest', 'test_dataloader.py', '-v', '-k', 'test_segfault or test_timeout'] ... [2021-06-04 17:37:57.643327]`

Reviewed By: samestep

Differential Revision: D28913223

Pulled By: janeyx99

fbshipit-source-id: 0d1f9910973426b8756815c697b483160517b127
2021-06-07 16:27:43 -07:00
e5179e960e Share VS Code settings/extensions nicely (#57671)
Summary:
This is a second attempt at https://github.com/pytorch/pytorch/issues/51214. It should achieve the same goals with (as far as I can tell) no disadvantages, but the advantages are a bit less pronounced than in the more dictatorial approach that https://github.com/pytorch/pytorch/issues/51214 took:

- Unfortunately, I was unable to figure out how to include [the `mypy` configuration given in the docstring of `tools.mypy_wrapper.main`](https://github.com/pytorch/pytorch/blob/7115a4b870/tools/mypy_wrapper.py#L81-L89), because as walterddr pointed out, `"${env:HOME}/miniconda3/envs/pytorch/bin/python"` is not guaranteed to be correct on everyone's machine:
  ```json
  {
    "python.linting.enabled": true,
    "python.linting.mypyEnabled": true,
    "python.linting.mypyPath": "${env:HOME}/miniconda3/envs/pytorch/bin/python",
    "python.linting.mypyArgs": [
      "${workspaceFolder}/tools/mypy_wrapper.py"
    ]
  }
  ```

  Importantly, this does not work:
  ```json
  "python.linting.mypyPath": "${workspaceFolder}/tools/mypy_wrapper.py"
  ```
  This is because VS Code does not run the given `mypy` command inside of the user's specified virtual environment, so for instance, on my system, setting the `mypy` command to directly call `tools/mypy_wrapper.py` results in using `mypy 0.782` instead of the correct `mypy 0.812`.

  Sadly, [this](https://code.visualstudio.com/docs/editor/variables-reference#_configuration-variables) does not work either, although I'm not sure why:
  ```json
  {
    "python.linting.mypyPath": "${config:python.pythonPath}",
    "python.linting.mypyArgs": [
      "${workspaceFolder}/tools/mypy_wrapper.py"
    ]
  }
  ```

- As a result, `git clean -fdx; tools/vscode_settings.py` still results in some loss of useful configuration.

One other thing to note: as `.vscode/settings_recommended.json` shows, there are some configuration sections that only take effect within the context of a `"[language]"`, so currently, if a dev already has one of those settings, it would be entirely overwritten by `tools/vscode_settings.py` rather than gracefully merged. This could probably be fixed by using a deep merge instead of the current shallow merge strategy.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57671

Test Plan:
If you want, you can typecheck the small script added by this PR (no output is expected):
```sh
tools/mypy_wrapper.py $PWD/tools/vscode_settings.py
```
You can also try running it to update your own VS Code workspace settings:
```sh
tools/vscode_settings.py
```
This should have minimal impact on your existing `.vscode/settings.json` file other than enabling the few explicitly recommended settings (e.g., it should not reorder or remove any of your existing settings).

Reviewed By: malfet

Differential Revision: D28230390

Pulled By: samestep

fbshipit-source-id: 53a7907229e5807c77531cae4f9ab9d469fd7684
2021-05-05 15:19:59 -07:00
5b01b3e8e8 Introducing JitPlugin (#56708)
Summary:
This PR is step 1 toward covering JIT'd methods and functions. Step 2 (using it in CI) is here: https://github.com/pytorch/pytorch/issues/56310.

1. This PR introduces a package `coverage_plugins` that hosts JITPlugin.
2. We also bring in a `.coveragerc` file that is used in CI to omit the files we don't want to report on (e.g., temporary directories, tests, or utils).

**Disclaimer: This PR does NOT use the plug-in. Nothing should change as a result.**

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56708

Test Plan:
CI. Coverage should not go down.

If you're interested in testing this plug-in locally, you should:
`pip install -e tools/coverage_plugins_package` from the root directory.
Add the following lines to `.coveragerc` under `[run]`
```
plugins =
    coverage_plugins.jit_plugin
```
And then try:
`coverage run test/test_jit.py TestAsync.test_async_script_no_script_mod`

You should see `.coverage.jit` show up at the end. You can then run `coverage combine --append` and `coverage debug data` to see that some files in `torch/jit` are covered.

Reviewed By: samestep

Differential Revision: D27945570

Pulled By: janeyx99

fbshipit-source-id: 78732940fcb498d5ec37d4075c4e7e08e96a8d55
2021-04-22 13:41:49 -07:00
31677c5fcb [reland] .github: Add initial linux CI workflow (#56280)
Summary:
This reverts commit 6b5ed5ec454ecd8597ff0465305915dd1e09a805.

There'll also probably be fixes here, see diff from original PR: https://github.com/pytorch/pytorch/compare/f2abce0...ci-all/add-initial-linux-ci-gha

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56280

Reviewed By: walterddr

Differential Revision: D27826012

Pulled By: seemethere

fbshipit-source-id: 71cad1d7f840ede5025b1bb4a33d628aa74686d1
2021-04-19 17:36:09 -07:00
c5e80d30bf Harden "Add annotations" workflow (#56071)
Summary:
Resolves https://github.com/pytorch/pytorch/issues/55810 by closing some possible security holes due to using [GitHub Actions `${{ <expressions> }}`](https://docs.github.com/en/actions/reference/context-and-expression-syntax-for-github-actions#about-contexts-and-expressions) in `.github/workflows/add_annotations.yml`, and also patching a few other scenarios in which a PR passing a malformed artifact could cause the workflow to fail.

- [x] flag and remove GitHub Actions expressions in JS scripts
- [x] don't fail the workflow if the artifact doesn't look as expected
- [x] write unit tests for `tools/extract_scripts.py`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56071

Test Plan:
I tested the end-to-end "Lint" and "Add annotations" system in a separate sandbox repo, including the following cases:

- well-formed artifact
- missing artifact
- artifact containing a file named `linter-output.zip` (name clash)
- artifact whose `commit-sha.txt` doesn't contain a 40-digit hex string
- artifact whose `commit-sha.txt` contains a 40-digit hex string that isn't a valid Git hash for the current repo
  - in this last case, the workflow does fail, but handling that is the responsibility of [pytorch/add-annotations-github-action](https://github.com/pytorch/add-annotations-github-action), not pytorch/pytorch

To run the new unit tests added in this PR:
```
python tools/test/test_extract_scripts.py
```

Reviewed By: seemethere

Differential Revision: D27807074

Pulled By: samestep

fbshipit-source-id: e2d3cc5437fe80ff03d46237ebba289901bc567c
2021-04-16 07:46:20 -07:00
e387bd780e Ignore envrc files (#56199)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/56199

Reviewed By: ejguan

Differential Revision: D27821439

Pulled By: agolynski

fbshipit-source-id: 4be7158d723c58f82b6ec56b3817932899e1b196
2021-04-16 07:36:51 -07:00
6b5ed5ec45 Revert D27803529: [pytorch][PR] .github: Add initial linux CI workflow
Test Plan: revert-hammer

Differential Revision:
D27803529 (7d410bc3c8)

Original commit changeset: 52a65ec8f7a8

fbshipit-source-id: ce968654f2aecd8b36b5f86e0fe5ed6056f0fb8a
2021-04-16 02:53:31 -07:00
7d410bc3c8 .github: Add initial linux CI workflow (#55176)
Summary:
This is a commandeer of https://github.com/pytorch/pytorch/issues/54091.

TODO:

- [x] understand why the build is [failing](https://github.com/pytorch/pytorch/pull/55176/checks?check_run_id=2254742265) here when it was [succeeding](https://github.com/pytorch/pytorch/pull/54091/checks?check_run_id=2177844748) on https://github.com/pytorch/pytorch/issues/54091
- [x] fix the build failure
- [x] fix the test failure(s)
- [x] add CI check to generate YAML workflows from templates, similar to https://github.com/pytorch/pytorch/issues/55171
- [ ] uncomment the rest of the matrix

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55176

Reviewed By: walterddr

Differential Revision: D27803529

Pulled By: seemethere

fbshipit-source-id: 52a65ec8f7a83b929fed47f0bbdca544210ec9c2
2021-04-15 16:54:04 -07:00