Summary:
This PR introduces a script to spit out a list of slow tests into a file `.pytorch-slow-tests`. The format is currently JSON: a dictionary whose entries look like `("test_case_name (__main__.test_suite)" -> average time in seconds)`. This is one step toward maintaining a list of slow tests so that we can retire the manual slowTest labeling process.
The script reads the previous day's viable/strict data (to ensure we have fully uploaded data) and aggregates the test times for **passed** test cases. It then filters the individual test cases to exclude those faster than 60 seconds.
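For illustration, a minimal sketch of the aggregation and filtering step, assuming the per-test timing records have already been fetched (the names below are illustrative, not the actual script internals):
```
import json
from collections import defaultdict

SLOW_TEST_THRESHOLD_SEC = 60.0  # anything faster than this is excluded

def export_slow_tests(records, out_file=".pytorch-slow-tests"):
    # records: iterable of (case_name, suite_name, seconds) for *passed* tests
    samples = defaultdict(list)
    for case, suite, seconds in records:
        samples[f"{case} (__main__.{suite})"].append(seconds)
    slow = {
        name: sum(times) / len(times)  # average over all recorded runs
        for name, times in samples.items()
        if sum(times) / len(times) > SLOW_TEST_THRESHOLD_SEC
    }
    with open(out_file, "w") as f:
        json.dump(slow, f, indent=4)
```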
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54907
Test Plan:
`python tools/export_slow_test.py`
Check that `.pytorch-slow-tests` contains data. Mine looks like:
```
{
"test_matmul_4d_4d_complex_cpu (__main__.TestAutogradDeviceTypeCPU)": 91.22675,
"test_unary_ops (__main__.TestTEFuser)": 68.6,
"test_fn_gradgrad_unfold_cpu_complex128 (__main__.TestGradientsCPU)": 82.49153333333334,
"test_conv1d_basic (__main__.TestXNNPACKConv1dTransformPass)": 94.0914375,
"test_ddp_uneven_inputs (__main__.TestDistBackendWithFork)": 134.4995,
"test_pdist_norm_large_cuda (__main__.TestTorchDeviceTypeCUDA)": 60.2634,
"test_cusparse_multiple_threads_same_device (__main__.TestCuda)": 97.9022,
"test_fn_gradgrad_unfold_cuda_complex128 (__main__.TestGradientsCUDA)": 130.7222,
"test_ddp_uneven_inputs (__main__.TestDistBackendWithSpawn)": 136.08133333333333,
"test_jit_cuda_archflags (__main__.TestCppExtensionJIT)": 112.80733333333333,
"test_lobpcg_ortho_cuda_float64 (__main__.TestLinalgCUDA)": 63.8312,
"test_matmul_4d_4d_complex_cuda (__main__.TestAutogradDeviceTypeCUDA)": 62.1062,
"test_inverse_many_batches_cuda_complex128 (__main__.TestLinalgCUDA)": 1434.505,
"test_inverse_many_batches_cuda_complex64 (__main__.TestLinalgCUDA)": 1403.846,
"test_inverse_many_batches_cuda_float32 (__main__.TestLinalgCUDA)": 2081.614,
"test_inverse_many_batches_cuda_float64 (__main__.TestLinalgCUDA)": 1410.788,
"test_matrix_exp_analytic_cuda_complex128 (__main__.TestLinalgCUDA)": 172.167,
"test_matrix_exp_analytic_cuda_complex64 (__main__.TestLinalgCUDA)": 172.57,
"test_matrix_exp_analytic_cuda_float32 (__main__.TestLinalgCUDA)": 258.61,
"test_matrix_exp_analytic_cuda_float64 (__main__.TestLinalgCUDA)": 174.793,
"test_inverse_many_batches_cpu_complex128 (__main__.TestLinalgCPU)": 666.464,
"test_inverse_many_batches_cpu_complex64 (__main__.TestLinalgCPU)": 667.26,
"test_inverse_many_batches_cpu_float32 (__main__.TestLinalgCPU)": 1100.719,
"test_inverse_many_batches_cpu_float64 (__main__.TestLinalgCPU)": 651.037,
"test_matrix_exp_analytic_cpu_complex128 (__main__.TestLinalgCPU)": 72.965,
"test_matrix_exp_analytic_cpu_complex64 (__main__.TestLinalgCPU)": 74.184,
"test_matrix_exp_analytic_cpu_float32 (__main__.TestLinalgCPU)": 128.768,
"test_matrix_exp_analytic_cpu_float64 (__main__.TestLinalgCPU)": 72.138,
"test_conv1d_with_relu_fc (__main__.TestXNNPACKConv1dTransformPass)": 123.728,
"test_fn_gradgrad_linalg_householder_product_cuda_complex128 (__main__.TestGradientsCUDA)": 60.708,
"test_lobpcg (__main__.TestAutograd)": 120.408,
"test_collect_callgrind (__main__.TestBenchmarkUtils)": 206.896,
"test_collect_cpp_callgrind (__main__.TestBenchmarkUtils)": 122.507,
"test_proper_exit (__main__.TestDataLoader)": 172.356,
"test_proper_exit (__main__.TestDataLoaderPersistentWorkers)": 172.02,
"testNBit (__main__.operator_test.fused_nbit_rowwise_conversion_ops_test.TestNBitGreedyFused)": 96.9435,
"IntegerDivider (__main__.TestCUDAIntegerDivider)": 156.73700000000002
}
```
Reviewed By: walterddr, malfet
Differential Revision: D27412861
Pulled By: janeyx99
fbshipit-source-id: ec3d327e0dc6c93093e8b1c8454e3166b0649909
Summary:
Step 2 toward fixing https://github.com/pytorch/pytorch/issues/53882 :)
This changes the TARGET_DET_LIST and sharding automation to first check whether there is already cached data for the current commit in `.pytorch-test-times`. If not, it pulls the data from S3 and updates the file with the stats, so the S3 pull does not need to happen more than once for the same commit.
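A rough sketch of that caching logic, assuming a helper that downloads the per-commit stats from S3 (the helper and file layout mirror the description above but are otherwise illustrative):
```
import json
import os
import subprocess

CACHE_FILE = ".pytorch-test-times"

def get_test_times(fetch_from_s3):
    # fetch_from_s3(commit) -> {test_file: seconds}; assumed to be provided
    commit = subprocess.check_output(["git", "rev-parse", "HEAD"], text=True).strip()
    if os.path.exists(CACHE_FILE):
        with open(CACHE_FILE) as f:
            cached = json.load(f)
        if cached.get("commit") == commit:
            return cached["job_times"]  # cache hit: no S3 pull needed
    job_times = fetch_from_s3(commit)
    with open(CACHE_FILE, "w") as f:
        json.dump({"commit": commit, "job_times": job_times}, f)
    return job_times
```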
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54210
Test Plan:
The following methods should run the same set of tests.
First `export CIRCLE_JOB=pytorch_linux_xenial_cuda10_2_cudnn7_py3_gcc7_test2` or your favorite CIRCLE_JOB.
1. Pull data first and use it:
Download the data from S3 and write it to the cache file with `python test/run_test.py --export-historic-test-times .pytorch-test-times`
Now run `python test/run_test.py --shard 1 10`
2. Make the sharding job pull data:
Delete the file you just created: `rm .pytorch-test-times`
Now run `python test/run_test.py --shard 1 10`
Reviewed By: walterddr
Differential Revision: D27136849
Pulled By: janeyx99
fbshipit-source-id: 51a42c4e2fa3f8cf15e682679dd3eb6130aad927
Summary:
This will allow for future work to use the test times file (which will save computation time and also allow for more consistency). (Step one to fixing https://github.com/pytorch/pytorch/issues/53882)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54083
Test Plan:
`export CIRCLE_JOB=your-favorite-circleci-job`, e.g., `pytorch_linux_xenial_cuda10_2_cudnn7_py3_gcc7_test2`
`python test/run_test.py --export-historic-test-times` OR
`python test/run_test.py --export-historic-test-times .your-favorite-file`
When opening either .pytorch-test-times or .your-favorite-file, you should see something like:
```
{"commit": "2d559a09392aabb84dfb4a498010b2f01d99818c", "job_times": {"distributed/test_distributed_spawn": 583.5889999999973, "distributed/test_data_parallel": 4.866999999999997, "test_binary_ufuncs": 171.1569999999998, "test_numpy_interop": 2.5649999999999995, "test_public_bindings": 0.011,...}}
```
Note that no tests will be run when this option is specified.
Reviewed By: walterddr
Differential Revision: D27091351
Pulled By: janeyx99
fbshipit-source-id: e191d739268d86de0a0ba0eea0006969859d1940
Summary:
Do not build PyTorch if `setup.py` is called with the 'sdist' option.
Regenerate the bundled license file while the sdist package is being built.
Refactor `check_submodules` out of `build_deps` and check that submodule projects are present during the source package build stage.
Test that the sdist package is configurable during the `asan-build` step.
Fixes https://github.com/pytorch/pytorch/issues/52843
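A minimal, hypothetical sketch of the sdist gate described above (not the exact diff): detect the `sdist` command up front and skip the native build while still validating submodules.
```
import os
import sys

RUN_BUILD_DEPS = not any(arg in ("clean", "sdist") for arg in sys.argv[1:])

def check_submodules():
    # Illustrative check that submodule checkouts are present before packaging;
    # the real list of submodules lives in pytorch's setup.py.
    for path in ("third_party/pybind11", "third_party/fbgemm"):
        if not os.path.isdir(path) or not os.listdir(path):
            raise RuntimeError(f"submodule {path} is not checked out")

def build_deps():
    check_submodules()
    # ... configure and compile native dependencies here ...

if RUN_BUILD_DEPS:
    build_deps()
else:
    check_submodules()  # sdist still verifies submodules but skips the build
```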
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52908
Reviewed By: walterddr
Differential Revision: D26685176
Pulled By: malfet
fbshipit-source-id: 972a40ae36e194c0b4e0fc31c5e1af1e7a815185
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51754
This API allows you to manage multiple python interpreters in a single
process to deploy PyTorch models packaged with torch.package.
torch/csrc/deploy/deploy.h contains the API definition
torch/csrc/deploy/test_deploy.cpp has some examples.
Notes:
* A mutex is added to PyTorchStreamReader to make it safe to use from multiple threads at once.
* USE_DEPLOY is only true for the special libtorch_deployinterpreter.so library; when enabled,
we use a hash table to maintain the PyObject <> at::Tensor mapping rather than the internal pointer
in Tensor, since more than one interpreter may have a reference to the tensor.
* serialization.py has some additional functions for creating pickle objects
while keeping storages in memory, for use in transferring tensors between interpreters.
Test Plan: Imported from OSS
Reviewed By: wconstab
Differential Revision: D26329468
Pulled By: zdevito
fbshipit-source-id: d75f4ebb9a27f1d911179d9996041bcb3ca04a07
Summary:
Usage explanation will be in the release note runbook.
This allows generating diffs like:
```
Processing torch.nn
Things that were added:
{'quantizable', 'ChannelShuffle', 'LazyConvTranspose2d', 'LazyConv2d', 'LazyConvTranspose3d', 'LazyConv1d', 'GaussianNLLLoss', 'LazyConv3d', 'PixelUnshuffle', 'UninitializedParameter', 'LazyLinear', 'LazyConvTranspose1d'}
Things that were removed:
set()
```
This can then be shared with module owners, along with the commits, to help them validate that the namespace changes for their submodule are as expected.
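A minimal sketch of how such a diff can be computed, assuming the previous release's public names were saved to a JSON file beforehand (the file name and helper are illustrative):
```
import importlib
import json

def public_names(module_name):
    mod = importlib.import_module(module_name)
    return {n for n in dir(mod) if not n.startswith("_")}

def diff_namespace(module_name, old_names_file):
    with open(old_names_file) as f:
        old = set(json.load(f)[module_name])
    new = public_names(module_name)
    print(f"Processing {module_name}")
    print("Things that were added:")
    print(new - old)
    print("Things that were removed:")
    print(old - new)

# e.g. diff_namespace("torch.nn", "previous_release_api.json")
```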
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51685
Reviewed By: zhangguanheng66
Differential Revision: D26260258
Pulled By: albanD
fbshipit-source-id: 40e40f86314e17246899d01ffa4b2631e93b52f7
Summary:
Uses cmake's `configure_file()` macro to generate a new `torch/csrc/api/include/torch/version.h` header with `TORCH_VERSION_{MAJOR,MINOR,PATCH}` #defines from an input file `torch/csrc/api/include/torch/version.h.in`.
For Bazel builds, this is accomplished with `header_template_rule()`.
For Buck builds, this is accomplished with `fb_native.genrule()`.
Fixes https://github.com/pytorch/pytorch/issues/44365
<img width="1229" alt="Screen Shot 2021-01-05 at 3 19 24 PM" src="https://user-images.githubusercontent.com/75754324/103809279-3fd80380-5027-11eb-9039-fd23922cebd5.png">
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50073
Reviewed By: glaringlee
Differential Revision: D25855877
Pulled By: jbschlosser
fbshipit-source-id: 6bb792718c97e2c2dbaa74b7b7b831a4f6938e49
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51124
Original commit changeset: 1c7133627da2
Test Plan: Test locally with interpreter_test and on CI
Reviewed By: suo
Differential Revision: D26077905
fbshipit-source-id: fae83bf9822d79e9a9b5641bc5191a7f3fdea78d
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50458
libinterpreter.so contains a frozen python distribution including
torch-python bindings.
Freezing refers to serializing bytecode of python standard library modules as
well as the torch python library and embedding them in the library code. This
library can then be dlopened multiple times in one process context, each
interpreter having its own python state and GIL. In addition, each python
environment is sealed off from the filesystem and can only import the frozen
modules included in the distribution.
This change relies on newly added frozenpython, a cpython 3.8.6 fork built for this purpose. Frozenpython provides libpython3.8-frozen.a which
contains frozen bytecode and object code for the python standard library.
Building on top of frozen python, the frozen torch-python bindings are added in
this diff, providing each embedded interpreter with a copy of the torch
bindings. Each interpreter is intended to share one instance of libtorch and
the underlying tensor libraries.
Known issues:
- Autograd is not expected to work with the embedded interpreter currently, as it manages
its own python interactions and needs to coordinate with the duplicated python
states in each of the interpreters.
- Distributed and CUDA functionality are disabled in the libinterpreter.so build and need to be revisited.
- __file__ is not supported in the context of embedded python, since there are no
files for the underlying library modules.
- __version__ is not properly supported in the embedded torch-python; there is just a
workaround for now.
Test Plan: tested locally and on CI with cmake and buck builds running torch::deploy interpreter_test
Reviewed By: ailzhang
Differential Revision: D25850783
fbshipit-source-id: a4656377caff25b73913daae7ae2f88bcab8fd88
Summary:
Can't think of a reason not to .gitignore the test-reports folder. This can be helpful because:
1. running `python test/test*.py` from the github root directory creates the folder at the root.
2. the CI test report path generated by `torch/testing/_internal/common_utils.py` creates the folder in the same path where the test python file is located.
Creating a PR to make sure CI is happy. This is also needed by https://github.com/pytorch/pytorch/issues/50923
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50952
Reviewed By: samestep
Differential Revision: D26022436
Pulled By: walterddr
fbshipit-source-id: 83e6296de802bd1754b802b8c70502c317f078c9
Summary:
Draft-enable fast_nvcc.
* cleaned up some non-standard usages
* added a fallback to wrap_nvcc (see the sketch below for the general idea)
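Conceptually, fast_nvcc splits a multi-architecture nvcc invocation into one compilation per gencode and runs them concurrently, falling back to plain wrap_nvcc on failure. A rough, hypothetical Python sketch of that idea (not the actual `tools/fast_nvcc` implementation):
```
import subprocess
from concurrent.futures import ThreadPoolExecutor

ARCHS = ["60", "61", "62", "70", "75"]  # matches the TORCH_CUDA_ARCH_LIST below

def compile_one_arch(src, arch):
    # Compile device code for a single architecture; objects are merged later.
    cmd = ["nvcc", "-c", src, "-gencode",
           f"arch=compute_{arch},code=sm_{arch}", "-o", f"{src}.{arch}.o"]
    return subprocess.run(cmd, capture_output=True, text=True)

def fast_compile(src):
    with ThreadPoolExecutor(max_workers=len(ARCHS)) as pool:
        results = list(pool.map(lambda arch: compile_one_arch(src, arch), ARCHS))
    if any(r.returncode != 0 for r in results):
        raise RuntimeError("per-arch build failed; fall back to wrap_nvcc")
    # ... merge the per-arch objects into the final object here ...
```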
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49773
Test Plan:
Configuration to enable fast nvcc:
- install and enable `ccache` but delete `.ccache/` folder before each build.
- `TORCH_CUDA_ARCH_LIST=6.0;6.1;6.2;7.0;7.5`
- Toggle the `USE_FAST_NVCC=ON/OFF` cmake config and run `cmake --build` to compare the build times.
Initial statistics for a full compilation:
* `cmake --build . -- -j $(nproc)`:
- fast NVCC
```
real 48m55.706s
user 1559m14.218s
sys 318m41.138s
```
- normal NVCC:
```
real 43m38.723s
user 1470m28.131s
sys 90m46.879s
```
* `cmake --build . -- -j $(nproc/4)`:
- fast NVCC:
```
real 53m44.173s
user 1130m18.323s
sys 71m32.385s
```
- normal NVCC:
```
real 81m53.768s
user 858m45.402s
sys 61m15.539s
```
* Conclusion: fast NVCC doesn't provide much gain when the compiler is set to use full CPU utilization; in fact it is **even worse** because of the thread switching.
Initial statistics for a partial recompile (editing .cu files):
* `cmake --build . -- -j $(nproc)`
- fast NVCC:
```
[2021-01-13 18:10:24] [ 86%] Building NVCC (Device) object caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/torch_cuda_generated_BinaryMiscOpsKernels.cu.o
[2021-01-13 18:11:08] [ 86%] Linking CXX shared library ../lib/libtorch_cuda.so
```
- normal NVCC:
```
[2021-01-13 17:35:40] [ 86%] Building NVCC (Device) object caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/torch_cuda_generated_BinaryMiscOpsKernels.cu.o
[2021-01-13 17:38:08] [ 86%] Linking CXX shared library ../lib/libtorch_cuda.so
```
* Conclusion: Effective compilation time for a single .cu file modification is reduced from 2min30sec to only 40sec when compiling for multiple architectures. This is roughly a **4X** speedup from fast NVCC, approaching the theoretical limit of 5X when compiling 5 gencode architectures at the same time.
Follow-up PRs:
- add a better fallback mechanism to detect whether a build is supported by fast_nvcc, instead of dry-running and then failing over to the fallback.
- add performance measurement instrumentation to compare the total compile time with the critical-path time of the parallel tasks.
- figure out why `-j $(nproc)` gives significant sys overhead (`sys 318m41.138s` vs `sys 90m46.879s`) over normal nvcc; the guess is context switching, but this is not confirmed.
Reviewed By: malfet
Differential Revision: D25692758
Pulled By: walterddr
fbshipit-source-id: c244d07b9b71f146e972b6b3682ca792b38c4457
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50132
When running mypy via `dmypy run`, it creates a status file.
This PR adds that file to the ignore list.
Test Plan: Imported from OSS
Reviewed By: samestep
Differential Revision: D25834504
Pulled By: z-a-f
fbshipit-source-id: 6c5a8edd6d8eaf61983e3ca80e798e02d78e38ce
Summary:
These files are generated by MSVC when building with debug symbols `REL_WITH_DEB_INFO=1`:
```
PS C:\Users\Xiang Gao\source\repos\pytorch> git status
On branch master
Your branch is up to date with 'origin/master'.
Untracked files:
(use "git add <file>..." to include in what will be committed)
torch/lib/asmjit.pdb
torch/lib/c10.pdb
torch/lib/c10_cuda.pdb
torch/lib/caffe2_detectron_ops_gpu.pdb
torch/lib/caffe2_module_test_dynamic.pdb
torch/lib/caffe2_observers.pdb
torch/lib/fbgemm.pdb
torch/lib/shm.pdb
torch/lib/torch_cpu.pdb
torch/lib/torch_cuda.pdb
nothing added to commit but untracked files present (use "git add" to track)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47963
Reviewed By: heitorschueroff
Differential Revision: D25311564
Pulled By: malfet
fbshipit-source-id: 1a7125f3c6ff296b4bb0975ee97b59c23586b1cb
Summary:
Convert NVFUSER's runtime CUDA sources (under `.../jit/codegen/cuda/runtime`) to string literals, then include the headers containing the generated literals.
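A minimal sketch of the stringification idea, assuming a small generator that wraps each runtime `.cu` file in a C++ raw string literal header (names and namespace are illustrative, not the actual build rule):
```
import os

TEMPLATE = """\
namespace nvfuser_resources {{
// Generated from {basename}; a real generator would use a custom raw-string
// delimiter in case the source itself contains )".
constexpr const char* {name} = R"(
{source}
)";
}} // namespace nvfuser_resources
"""

def stringify(cu_path, out_dir):
    name = os.path.splitext(os.path.basename(cu_path))[0]
    with open(cu_path) as f:
        source = f.read()
    header = os.path.join(out_dir, name + ".h")
    with open(header, "w") as f:
        f.write(TEMPLATE.format(name=name, basename=os.path.basename(cu_path),
                                source=source))
    return header
```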
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48283
Reviewed By: mrshenli
Differential Revision: D25163362
Pulled By: ngimel
fbshipit-source-id: 4e6c181688ddea78ce6f3c754fee62fa6df16641
Summary:
This PR tries to make building the docs less confusing for new contributors:
- `npm` is discouraged on devservers for Facebook employees, so I added another way to install `katex`
- the path to `check-doxygen.sh` was wrong, so I fixed it
- while generating the CPP docs, it created two new folders that weren't ignored by Git, so I added those to `.gitignore`
- I wasn't able to get the SSH tunnel to work, so I added instructions to use `scp` as an alternative
I'm not entirely sure how the `docs/cpp/source/{html,latex}/` directories were created since I haven't been able to reproduce them.
I also think that it would be better to use the SSH tunnel since `scp` is so much slower, but I just wasn't able to figure it out; I followed the instructions from `CONTRIBUTING.md` and then ran a [Python `http.server`](https://docs.python.org/3/library/http.server.html) on my devserver:
```bash
python -m http.server 8000 --bind 127.0.0.1 --directory build/html
```
but my browser failed to connect and my (local) terminal printed error messages (presumably from the SSH command).
If anyone knows how to properly set up the SSH tunnel and HTTP server, I can add those more detailed instructions to `CONTRIBUTING.md` and remove the `scp` instructions from this PR.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47539
Reviewed By: malfet
Differential Revision: D24806833
Pulled By: samestep
fbshipit-source-id: 456691018a76efadde28fa5eb783b0895582e72d
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45017
This is the default indexing folder for clangd 11.
Test Plan: Imported from OSS
Reviewed By: jamesr66a
Differential Revision: D23817619
Pulled By: suo
fbshipit-source-id: 6a60136e591b2fec3d432ac5343cb76ac0934502
Summary:
I usually get this extra "legacy_conv2d.pt" file in my git "changed files". I found that this is from tests with `download_file`
42c895de4d/test/test_nn.py (L410-L426)
and its definition (see `data_dir` for download output location)
f17d7a5556/torch/testing/_internal/common_utils.py (L1338-L1357)
I assume a file "generated" by a test should not be tracked in VCS? Also, if the file is updated on the server, users may still use an old version of it if they have already downloaded it before.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43941
Reviewed By: anjali411
Differential Revision: D23451264
Pulled By: ezyang
fbshipit-source-id: 7fcdfb24685a7e483914cc46b3b024df798bf7f7
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42629
How to approach reviewing this diff:
- The new codegen itself lives in `tools/codegen`. Start with `gen.py`, then read `model.py` and then the `api/` folder. The comments at the top of the files describe what is going on. The CLI interface of the new codegen is similar to the old one, but (1) it is no longer necessary to explicitly specify cwrap inputs (and now we will error if you do so) and (2) the default settings for source and install dir are much better; to the extent that if you run the codegen from the root source directory as just `python -m tools.codegen.gen`, something reasonable will happen.
- The old codegen is (nearly) entirely deleted; every Python file in `aten/src/ATen` was deleted except for `common_with_cwrap.py`, which now permanently finds its home in `tools/shared/cwrap_common.py` (previously cmake copied the file there), and `code_template.py`, which now lives in `tools/codegen/code_template.py`. We remove the copying logic for `common_with_cwrap.py`.
- All of the inputs to the old codegen are deleted.
- Build rules now have to be adjusted to not refer to files that no longer exist, and to abide by the (slightly modified) CLI.
- LegacyTHFunctions files have been generated and checked in. We expect these to be deleted as these final functions get ported to ATen. The deletion process is straightforward; just delete the functions of the ones you are porting. There are 39 more functions left to port.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: bhosmer
Differential Revision: D23183978
Pulled By: ezyang
fbshipit-source-id: 6073ba432ad182c7284a97147b05f0574a02f763
Summary:
Replace the `test` stage with a `coverage_test` stage for the `pytorch-linux-bionic-py3.8-gcc9` configuration.
Add `coverage.xml` to the list of ignored files.
Add `codecov.yml` that maps installed pytorch folders back to their original locations.
Clean up coverage option handling in `run_test.py` and adapt it toward combining coverage reports across runs.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43600
Reviewed By: seemethere
Differential Revision: D23351877
Pulled By: malfet
fbshipit-source-id: acf78ae4c8f3e23920a76cce1d50f2821b83eb06
Summary:
Run the fastrnns benchmark using the pytest-benchmark infra, then parse its JSON output and upload it to Scribe.
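A minimal sketch of the parsing half, assuming the benchmark was run with `--benchmark-json=output.json` (the Scribe upload itself is omitted; field names follow the pytest-benchmark JSON layout):
```
import json

def parse_benchmark_json(path="output.json"):
    with open(path) as f:
        data = json.load(f)
    rows = []
    for bench in data["benchmarks"]:
        rows.append({
            "name": bench["name"],
            "mean_sec": bench["stats"]["mean"],
            "stddev_sec": bench["stats"]["stddev"],
        })
    return rows  # each row would then be posted to Scribe

# for row in parse_benchmark_json():
#     print(row)
```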
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42030
Reviewed By: malfet
Differential Revision: D22970270
Pulled By: wconstab
fbshipit-source-id: 87da9b7ddf741da14b80d20779771d19123be3c5
Summary:
Instead of exporting schemas using the current binary under test, install the nightly build and export its schemas, then use them in a back-compat test run by the current binary under test.
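A hedged sketch of the export half, assuming `torch._C._jit_get_all_schemas()` as the schema source; the nightly install and the comparison run are driven by CI scripts and omitted here:
```
import torch

def dump_schemas(out_file="nightly_schemas.txt"):
    # Enumerate every registered operator schema in the installed torch
    # (here: the nightly build) and write one schema per line.
    with open(out_file, "w") as f:
        for schema in torch._C._jit_get_all_schemas():
            f.write(str(schema) + "\n")

# The back-compat run then loads this file with the binary under test and
# checks that each old schema is still satisfied.
```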
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41949
Reviewed By: houseroad
Differential Revision: D22731054
Pulled By: bradleyhd
fbshipit-source-id: 68a7e7637b9be2604c0ffcde2a40dd208057ba72
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41575
Fixes https://github.com/pytorch/pytorch/issues/34294
This updates the C++ argument parser to correctly handle `TensorList` operands. I've also included a number of updates to the testing infrastructure, this is because we're now doing a much more careful job of testing the signatures of aten kernels, using the type information about the arguments as read in from `Declarations.yaml`. The changes to the tests are required because we're now only checking for `__torch_function__` attributes on `Tensor`, `Optional[Tensor]` and elements of `TensorList` operands, whereas before we were checking for `__torch_function__` on all operands, so the relatively simplistic approach the tests were using before -- assuming all positional arguments might be tensors -- doesn't work anymore. I now think that checking for `__torch_function__` on all operands was a mistake in the original design.
The updates to the signatures of the `lambda` functions are to handle this new, more stringent checking of signatures.
I also added override support for `torch.nn.functional.threshold` and `torch.nn.functional.layer_norm`, which did not yet have python-level support.
Benchmarks are still WIP.
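For context, a small self-contained example of the protocol in question, in its current documented form: an object that defines `__torch_function__` and is passed inside a `TensorList` operand (here `torch.cat`), which is the kind of dispatch this change enables.
```
import torch

class Wrapped:
    # Minimal participant in torch function dispatch (current protocol form).
    def __init__(self, t):
        self.t = t

    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}

        def unwrap(x):
            if isinstance(x, Wrapped):
                return x.t
            if isinstance(x, (list, tuple)):
                return type(x)(unwrap(v) for v in x)
            return x

        print(f"intercepted {func.__name__}")
        return func(*unwrap(args), **kwargs)

# torch.cat takes a TensorList; the Wrapped element inside the list is enough
# to route the call through Wrapped.__torch_function__.
out = torch.cat([Wrapped(torch.randn(2, 3)), torch.randn(2, 3)])
```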
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34725
Reviewed By: mruberry
Differential Revision: D22357738
Pulled By: ezyang
fbshipit-source-id: 0e7f4a58517867b2e3f193a0a8390e2ed294e1f3
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41312
I was hoping that exhale had gotten incremental recompilation
in its latest version, but experimentally this does not seem
to have been the case. Still, I had gotten the whole shebang
to be working on the latest version of these packages, so might
as well land the upgrade. There was one bug in Optional.h that
I had to fix; see the cited bug report.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: zou3519
Differential Revision: D22526349
Pulled By: ezyang
fbshipit-source-id: d4169c2f48ebd8dfd8a593cc8cd232224d008ae9
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38157
This removes the error prone process of assembling `torch/__init__.pyi`
(and frequently forgetting to expose things), since now we can simply
rely on the true source file to get things done. Most of the old
codegen in gen_pyi.py is now rerouted to various files:
- `torch/_C/__init__.pyi` (the dumping pile of all misc bindings)
- `torch/_C/_nn.pyi` (NN function bindings)
- `torch/_C/_VariableFunctions.pyi` (torch function bindings)
`torch.types` grew a bunch more definitions that previously were
defined in `torch/__init__.pyi`
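For orientation, a hand-written illustration (in `.pyi` stub style) of the kind of declarations these files carry; the signatures below are simplified, not the actual generated output:
```
from typing import Optional, Union
from torch import Tensor

def dequantize(tensor: Tensor) -> Tensor: ...
def add(input: Tensor, other: Tensor, *, alpha: Union[int, float] = 1,
        out: Optional[Tensor] = None) -> Tensor: ...
```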
Some miscellaneous changes
- Fixed a bug where we treated a single TensorList argument as implying
that varargs are accepted. This is actually only supported for IntList.
This means we can now correctly generate a stub for dequantize.
- Add missing manual stub for nonzero
- Switched torch/onnx/operators.py to directly refer to _C module,
since apparently mypy doesn't think that methods prefixed with
underscores get reexported. This may be a recurring theme; maybe
we need to find a better way to solve it.
Because I was really lazy, I dumped namedtuple definitions in both
`torch._C` and `torch._C._VariableFunctions`. This is definitely wrong.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Differential Revision: D21497400
Pulled By: ezyang
fbshipit-source-id: 07b126141c82efaca37be27c07255cb2b9b3f064
Summary:
xref gh-32838, gh-34032
This is a major refactor of parts of the documentation to split it up using sphinx's `autosummary` feature, which will build out `autofunction` and `autoclass` stub files and link to them. The end result is that the top-level module pages like torch.nn.rst and torch.rst are now more like tables of contents pointing to the actual single-class or single-function documentation pages.
Along the way, I modified many of the docstrings to eliminate sphinx warnings when building. I think the only thing I changed from a non-documentation perspective was to add names to `__all__` when adding them to `globals()` in `torch/__init__.py`
I do not know the CI system: are the documentation build artifacts available after the build, so reviewers can preview before merging?
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37419
Differential Revision: D21337640
Pulled By: ezyang
fbshipit-source-id: d4ad198780c3ae7a96a9f22651e00ff2d31a0c0f
Summary:
**Summary**
This commit adds `tools/clang_format_new.py`, which downloads a platform-appropriate
clang-format binary to a `.gitignore`d location, verifies the binary by comparing its
SHA1 hash to a reference hash (also included in this commit), and runs it on all files
matching a specific regex in a list of whitelisted subdirectories of pytorch.
This script will eventually replace `tools/clang_format.py`.
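A minimal sketch of the download-and-verify step; the URL, destination, and reference hash are placeholders supplied by the caller, not the values used by the script:
```
import hashlib
import os
import stat
import urllib.request

def download_and_verify(url, dest, expected_sha1):
    if not os.path.exists(dest):
        urllib.request.urlretrieve(url, dest)
    with open(dest, "rb") as f:
        digest = hashlib.sha1(f.read()).hexdigest()
    if digest != expected_sha1:
        raise RuntimeError(f"SHA1 mismatch for {dest}: got {digest}")
    os.chmod(dest, os.stat(dest).st_mode | stat.S_IXUSR)  # mark executable
    return dest
```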
**Testing**
Ran the script.
*No Args*
```
pytorch > ./tools/clang_format.py
Downloading clang-format to /Users/<user>/Desktop/pytorch/.clang-format-bin
0% |################################################################| 100%
Using clang-format located at /Users/<user>/Desktop/pytorch/.clang-format-bin/clang-format
> echo $?
0
> git status
<bunch of files>
```
`--diff` *mode*
```
> ./tools/clang_format.py --diff
Using clang-format located at /Users/<user>/Desktop/pytorch/.clang-format-bin/clang-format
Some files are not formatted correctly
> echo $?
1
<format files using the script>
> ./tools/clang_format.py --diff
Using clang-format located at /Users/<user>/Desktop/pytorch/.clang-format-bin/clang-format
All files are formatted correctly
> echo $?
0
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34566
Differential Revision: D20431290
Pulled By: SplitInfinity
fbshipit-source-id: 3966f769cfb923e58ead9376d85e97127415bdc6
Summary:
This PR moves glu to ATen (CPU).
Test script:
```
import torch
import torch.nn.functional as F
import time

torch.manual_seed(0)

def _time():
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return time.time()

device = "cpu"

# warm up
for n in [10, 100, 1000, 10000]:
    input = torch.randn(128, n, requires_grad=True, device=device)
    grad_output = torch.ones(128, n // 2, device=device)
    for i in range(1000):
        output = F.glu(input)
        output.backward(grad_output)

for n in [10, 100, 1000, 10000]:
    fwd_t = 0
    bwd_t = 0
    input = torch.randn(128, n, requires_grad=True, device=device)
    grad_output = torch.ones(128, n // 2, device=device)
    for i in range(10000):
        t1 = _time()
        output = F.glu(input)
        t2 = _time()
        output.backward(grad_output)
        t3 = _time()
        fwd_t = fwd_t + (t2 - t1)
        bwd_t = bwd_t + (t3 - t2)
    fwd_avg = fwd_t / 10000 * 1000
    bwd_avg = bwd_t / 10000 * 1000
    print("input size(128, %d) forward time is %.2f (ms); backwad avg time is %.2f (ms)."
          % (n, fwd_avg, bwd_avg))
```
Test device: **skx-8180.**
Before:
```
input size(128, 10) forward time is 0.04 (ms); backwad avg time is 0.08 (ms).
input size(128, 100) forward time is 0.06 (ms); backwad avg time is 0.14 (ms).
input size(128, 1000) forward time is 0.11 (ms); backwad avg time is 0.31 (ms).
input size(128, 10000) forward time is 1.52 (ms); backwad avg time is 2.04 (ms).
```
After:
```
input size(128, 10) forward time is 0.02 (ms); backwad avg time is 0.05 (ms).
input size(128, 100) forward time is 0.04 (ms); backwad avg time is 0.09 (ms).
input size(128, 1000) forward time is 0.07 (ms); backwad avg time is 0.17 (ms).
input size(128, 10000) forward time is 0.13 (ms); backwad avg time is 1.03 (ms).
```
Fixes https://github.com/pytorch/pytorch/issues/24707 and https://github.com/pytorch/pytorch/issues/24708.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33179
Differential Revision: D19839835
Pulled By: VitalyFedyunin
fbshipit-source-id: e4d3438556a1068da2c4a7e573d6bbf8d2a6e2b9
Summary:
Gradle tasks for publishing to bintray and jcenter/mavencentral; snapshot builds go to oss.sonatype.org.
These gradle changes add the tasks:
bintrayUpload - publishing on bintray, in the 'facebook' org
uploadArchives - uploading to maven repos
The gradle tasks are copied from Facebook open-sourced libraries like https://github.com/facebook/litho and https://github.com/facebookincubator/spectrum
To do the publishing we need to provide the following somehow (e.g. in ~/.gradle/gradle.properties):
```
signing.keyId=
signing.password=
signing.secretKeyRingFile=
bintrayUsername=
bintrayApiKey=
bintrayGpgPassword=
SONATYPE_NEXUS_USERNAME=
SONATYPE_NEXUS_PASSWORD=
```
android/libs/fbjni is a submodule; to be able to add publishing tasks to it (it needs to be published as a separate maven dependency), I created `android/libs/fbjni_local`, which has only a `build.gradle` with the release tasks.
The pytorch_android dependency for ':fbjni' changed from implementation to api, since implementation is treated as a 'private' dependency (translated to scope=runtime in the maven pom file), while api works like 'compile'.
Testing:
It's already published on bintray with version 0.0.4 and can be used in gradle files as:
```
repositories {
maven {
url "https://dl.bintray.com/facebook/maven"
}
}
dependencies {
implementation 'com.facebook:pytorch_android:0.0.4'
implementation 'com.facebook:pytorch_android_torchvision:0.0.4'
}
```
It was published in the com.facebook group.
I requested a sync to jcenter from bintray; that usually takes 2-3 days.
Versioning added version suffixes to the aar output files, so the CircleCI android jobs started failing because they expected just pytorch_android.aar and pytorch_android_torchvision.aar, without any version.
To avoid that, I changed the CircleCI android jobs to zip the *.aar files and publish them as a single artifact named artifacts.zip. I will add kostmo to check this part; if the CircleCI jobs finish ok, everything works :)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25351
Reviewed By: kostmo
Differential Revision: D17135886
Pulled By: IvanKobzarev
fbshipit-source-id: 64eebac670bbccaaafa1b04eeab15760dd5ecdf9