Commit Graph

261 Commits

Author SHA1 Message Date
8bff59e41d [ROCm] add hipblaslt support (#114329)
Disabled by default. Enable with env var DISABLE_ADDMM_HIP_LT=0. Tested on both ROCm 5.7 and 6.0.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/114329
Approved by: https://github.com/malfet
2023-12-20 19:09:25 +00:00
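A minimal sketch of how the toggle above might be exercised; the env var name comes from the commit message, but the exact point at which it is read is an assumption here:

```python
import os

# Assumption: the flag is consulted when the ROCm addmm path is selected,
# so set it before importing torch.
os.environ["DISABLE_ADDMM_HIP_LT"] = "0"  # 0 = enable the hipBLASLt path

import torch

a = torch.randn(64, 32, device="cuda")   # "cuda" maps to the ROCm device on ROCm builds
b = torch.randn(32, 16, device="cuda")
bias = torch.randn(16, device="cuda")
out = torch.addmm(bias, a, b)             # may be routed through hipBLASLt when enabled
```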
47908a608f Revert "[ROCm] add hipblaslt support (#114329)"
This reverts commit b062ea38039234c80404a8f5f4d5a93c4cb9832d.

Reverted https://github.com/pytorch/pytorch/pull/114329 on behalf of https://github.com/jeanschmidt due to Reverting due to inconsistencies on internal diff ([comment](https://github.com/pytorch/pytorch/pull/114329#issuecomment-1861933267))
2023-12-19 01:04:58 +00:00
b062ea3803 [ROCm] add hipblaslt support (#114329)
Disabled by default. Enable with env var DISABLE_ADDMM_HIP_LT=0. Tested on both ROCm 5.7 and 6.0.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/114329
Approved by: https://github.com/malfet
2023-12-15 15:36:46 +00:00
59f7355f86 Revert "[ROCm] add hipblaslt support (#114329)"
This reverts commit bb2bb8cca1c00e3f6e7025a62688d0cfcbfee144.

Reverted https://github.com/pytorch/pytorch/pull/114329 on behalf of https://github.com/atalman due to OSSCI oncall, trunk  tests are failing ([comment](https://github.com/pytorch/pytorch/pull/114329#issuecomment-1857003155))
2023-12-14 23:53:30 +00:00
bb2bb8cca1 [ROCm] add hipblaslt support (#114329)
Disabled by default. Enable with env var DISABLE_ADDMM_HIP_LT=0. Tested on both ROCm 5.7 and 6.0.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/114329
Approved by: https://github.com/malfet
2023-12-14 21:41:22 +00:00
a43c757275 Fixed error with cuda_ver in cpp_extension.py (#113555)
Reported in 71ca42787f (r132390833)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113555
Approved by: https://github.com/ezyang
2023-11-14 00:12:22 +00:00
5e10dd2c78 fix docstring issues in torch.utils (#113335)
Fixes #112634

Fixes all the listed issues except those in `torch/utils/_pytree.py`, as that file no longer exists.

### Error counts

|File | Count Before | Count now|
|---- | ---- | ---- |
|`torch/utils/collect_env.py` | 39 | 25|
|`torch/utils/cpp_extension.py` | 51 | 13|
|`torch/utils/flop_counter.py` | 25 | 8|
|`torch/utils/_foreach_utils.py` | 2 | 0|
|`torch/utils/_python_dispatch.py` | 26 | 25|
|`torch/utils/backend_registration.py` | 15 | 4|
|`torch/utils/checkpoint.py` | 29 | 21|

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113335
Approved by: https://github.com/ezyang
2023-11-13 19:37:25 +00:00
0a7eef9bcf [BE] Remove stale CUDA version check from cpp_extension.py (#113447)
At least CUDA 11.x is required to build PyTorch on the latest trunk, so the check is stale.
Still skip `--generate-dependencies-with-compile` when running on ROCm.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113447
Approved by: https://github.com/Skylion007, https://github.com/atalman, https://github.com/PaliC, https://github.com/huydhn
2023-11-11 00:20:08 +00:00
ae2c219de2 Revert "[BE] Remove stale CUDA version check from cpp_extension.py (#113447)"
This reverts commit 7ccca60927cdccde63d6a1d40480950f24e9877a.

Reverted https://github.com/pytorch/pytorch/pull/113447 on behalf of https://github.com/malfet due to Broke ROCM ([comment](https://github.com/pytorch/pytorch/pull/113447#issuecomment-1806407892))
2023-11-10 20:46:13 +00:00
7ccca60927 [BE] Remove stale CUDA version check from cpp_extension.py (#113447)
At least CUDA 11.x is required to build PyTorch on the latest trunk.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113447
Approved by: https://github.com/Skylion007, https://github.com/atalman, https://github.com/PaliC, https://github.com/huydhn
2023-11-10 18:54:19 +00:00
71ca42787f Replaced deprecated pkg_resources.packaging with packaging module (#113023)
Usage of `from pkg_resources import packaging` leads to a deprecation warning:
```
DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
```
and in strict tests where warnings are errors, this leads to CI breaks, e.g.: https://github.com/pytorch/vision/pull/8092

Replacing `pkg_resources.packaging` with `packaging`, as it is now a PyTorch dependency:
fa9045a872/requirements.txt (L19)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113023
Approved by: https://github.com/Skylion007, https://github.com/malfet
2023-11-10 15:06:03 +00:00
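A sketch of the kind of substitution involved; the exact call sites in `cpp_extension.py` are not shown here:

```python
# Before: triggers DeprecationWarning under newer setuptools
# from pkg_resources import packaging
# cuda_ver = packaging.version.parse("12.1")

# After: use the standalone packaging module, now a PyTorch dependency
from packaging import version

import torch

cuda_ver = version.parse("12.1")
torch_ver = version.parse(torch.__version__.split("+")[0])
print(torch_ver >= version.parse("2.1"))
```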
aef9e43fe6 Revert "Replaced deprecated pkg_resources.packaging with packaging module (#113023)"
This reverts commit 81ea7a489a85d6f6de2c3b63206ca090927e203a.

Reverted https://github.com/pytorch/pytorch/pull/113023 on behalf of https://github.com/atalman due to breaks nightlies ([comment](https://github.com/pytorch/pytorch/pull/113023#issuecomment-1802720774))
2023-11-08 21:39:59 +00:00
21b6030ac3 Don't set CUDA_HOME when not compiled with CUDA support (#106310)
It doesn't make sense to set this (on import!), as CUDA cannot be used with PyTorch in this case. It only leads to messages like
> No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'
when CUDA happens to be installed, which is at least confusing.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106310
Approved by: https://github.com/ezyang
2023-11-06 21:48:49 +00:00
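A rough sketch of the guard this change implies, assuming `torch.version.cuda` is the signal for a CUDA-enabled build; the real logic lives in `torch.utils.cpp_extension` and may differ:

```python
import os
import torch

def _find_cuda_home_if_applicable():
    # Only bother locating a CUDA toolkit when this build of PyTorch
    # was actually compiled with CUDA support.
    if torch.version.cuda is None:
        return None
    return (
        os.environ.get("CUDA_HOME")
        or os.environ.get("CUDA_PATH")
        or "/usr/local/cuda"
    )

print(_find_cuda_home_if_applicable())
```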
81ea7a489a Replaced deprecated pkg_resources.packaging with packaging module (#113023)
Usage of `from pkg_resources import packaging` leads to a deprecation warning:
```
DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
```
and in strict tests where warnings are errors, this leads to CI breaks, e.g.: https://github.com/pytorch/vision/pull/8092

Replacing `pkg_resources.packaging` with `packaging`, as it is now a PyTorch dependency:
fa9045a872/requirements.txt (L19)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113023
Approved by: https://github.com/Skylion007
2023-11-06 20:26:32 +00:00
0adb28b77d Show CUDAExtension example commands as code (#112764)
The default rendering of these code snippets shows the `TORCH_CUDA_ARCH_LIST` values with typographic quotes, which prevents the examples from being copied directly. Use code style for the two extension examples.

Fixes #112763
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112764
Approved by: https://github.com/malfet
2023-11-02 21:47:50 +00:00
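For reference, the kind of example those docs describe, written here as a hypothetical build script (the extension name and source file are placeholders, not the documentation's exact snippet):

```python
import os

# Hypothetical: restrict compilation to specific GPU architectures.
# Plain ASCII quotes matter when the shell form is copied:
#   TORCH_CUDA_ARCH_LIST="7.5 8.6" python setup.py install
os.environ.setdefault("TORCH_CUDA_ARCH_LIST", "7.5 8.6")

from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

setup(
    name="my_cuda_ext",  # placeholder extension name
    ext_modules=[CUDAExtension("my_cuda_ext", ["my_cuda_ext.cu"])],
    cmdclass={"build_ext": BuildExtension},
)
```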
28c0b07d19 [ROCm] remove HCC references (#111975)
- rename `__HIP_PLATFORM_HCC__` to `__HIP_PLATFORM_AMD__`
- rename `HIP_HCC_FLAGS` to `HIP_CLANG_FLAGS`
- rename `PYTORCH_HIP_HCC_LIBRARIES` to `PYTORCH_HIP_LIBRARIES`
- workaround in tools/amd_build/build_amd.py until submodules are updated

These symbols have had a long deprecation cycle and will finally be removed in ROCm 6.0.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111975
Approved by: https://github.com/ezyang, https://github.com/hongxiayang
2023-10-26 02:39:10 +00:00
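A small Python sketch of the rename table implied by the list above; an illustration only, the actual hipify/build_amd.py workaround is more involved and context-aware:

```python
# Deprecated symbols and their ROCm 6.0 replacements (from the commit message).
ROCM_RENAMES = {
    "__HIP_PLATFORM_HCC__": "__HIP_PLATFORM_AMD__",
    "HIP_HCC_FLAGS": "HIP_CLANG_FLAGS",
    "PYTORCH_HIP_HCC_LIBRARIES": "PYTORCH_HIP_LIBRARIES",
}

def apply_renames(source: str) -> str:
    # Naive textual substitution, just to show the mapping.
    for old, new in ROCM_RENAMES.items():
        source = source.replace(old, new)
    return source

print(apply_renames("#if defined(__HIP_PLATFORM_HCC__)"))
```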
ba04d84089 S390x inductor support (#111367)
Use arch compile flags. They are needed for vectorization support on s390x.
Implement new helper functions for inductor.

This change fixes multiple tests in test_cpu_repro.py

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111367
Approved by: https://github.com/ezyang
2023-10-20 19:38:46 +00:00
cb856b08b2 [BE]: Attach cause to some exceptions and enable RUFF TRY200 (#111496)
Did some easy fixes uncovered by enabling TRY200. Most of these look like oversights rather than intentional choices. The proper way to silence intentional ones is with `from None`, to note that you thought about whether the exception should carry its cause and decided against it.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111496
Approved by: https://github.com/malfet
2023-10-19 21:56:36 +00:00
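The pattern TRY200 enforces, for reference:

```python
import json

def load_config(path: str) -> dict:
    try:
        with open(path) as f:
            return json.load(f)
    except OSError as e:
        # Keep the original cause attached so tracebacks show both errors.
        raise RuntimeError(f"could not read config at {path}") from e
    except json.JSONDecodeError:
        # Deliberately suppress the cause; 'from None' documents that choice.
        raise ValueError(f"config at {path} is not valid JSON") from None
```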
bb89a9e48c Skipped CUDA Flags if C++ Extension Name includes "arch" Substring (#111211)
The CUDA architecture flags derived from TORCH_CUDA_ARCH_LIST were being skipped whenever TORCH_EXTENSION_NAME contained the substring "arch". A C++ extension should be allowed to have any name, so the TORCH_EXTENSION_NAME define is now skipped when checking whether one of the existing flags already specifies "arch". There is probably a better fix, but I'll leave that to experts.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111211
Approved by: https://github.com/ezyang
2023-10-14 00:10:01 +00:00
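A rough Python sketch of the idea, not the actual `cpp_extension` code: when scanning existing flags for an explicit arch setting, ignore the `-DTORCH_EXTENSION_NAME=...` define so an extension named, say, `my_arch_kernels` does not suppress the TORCH_CUDA_ARCH_LIST flags.

```python
def needs_arch_flags(cflags):
    # Hypothetical helper: only a real nvcc arch flag should suppress the
    # default arch flags, not an extension name that happens to contain "arch".
    for flag in cflags:
        if flag.startswith("-DTORCH_EXTENSION_NAME="):
            continue
        if "arch" in flag:
            return False
    return True

print(needs_arch_flags(["-DTORCH_EXTENSION_NAME=my_arch_kernels", "-O3"]))  # True
print(needs_arch_flags(["-gencode=arch=compute_80,code=sm_80"]))            # False
```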
a0cea517e7 Add 9.0a to cpp_extension supported compute archs (#110587)
There's an extended compute capability 9.0a for Hopper that was introduced in Cuda 12.0: https://docs.nvidia.com/cuda/archive/12.0.0/cuda-compiler-driver-nvcc/index.html#gpu-feature-list

E.g. Cutlass leverages it: 5f13dcad78/python/cutlass/emit/pytorch.py (L684)

This adds it to the list of permitted architectures to use in `cpp_extension` directly.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110587
Approved by: https://github.com/ezyang
2023-10-05 17:41:06 +00:00
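A sketch of what accepting `9.0a` amounts to in practice; the gencode mapping stated in the comment is an assumption based on the nvcc documentation linked above, and the kernel source is hypothetical:

```python
import os

# Assumption: "9.0a" in TORCH_CUDA_ARCH_LIST roughly corresponds to
# nvcc's -gencode=arch=compute_90a,code=sm_90a (CUDA 12.0+, Hopper).
os.environ["TORCH_CUDA_ARCH_LIST"] = "9.0a"

from torch.utils.cpp_extension import load

# Hypothetical extension; building it requires CUDA >= 12.0 and a Hopper GPU.
mod = load(name="hopper_ext", sources=["hopper_ext.cu"], verbose=True)
```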
20812d69e5 Fix extension rebuilding on Linux (#108613)
On Linux, CUDA header dependencies are not correctly tracked. After you modify a CUDA header, affected CUDA files won't be rebuilt. This PR will fix this problem.

```console
$ ninja -t deps
rep_penalty.o: #deps 2, deps mtime 1693956351892493247 (VALID)
    /home/qc/Workspace/NotMe/exllama/exllama_ext/cpu_func/rep_penalty.cpp
    /home/qc/Workspace/NotMe/exllama/exllama_ext/cpu_func/rep_penalty.h

rms_norm.cuda.o: #deps 0, deps mtime 1693961188871054130 (VALID)

rope.cuda.o: #deps 0, deps mtime 1693961188954388632 (VALID)

cuda_buffers.cuda.o: #deps 0, deps mtime 1693961188797719768 (VALID)

...
```

Historically, this line of code has been changed twice. It was first implemented in #49344 without an `if IS_WINDOWS` guard, just like now. Then in #56015 someone added `if IS_WINDOWS` for an unknown reason. That PR has no description, so I don't know what bug they encountered. I don't think there's any bug with these flags on Linux, at least for today; CMake generates exactly the same flags for CUDA.

```ninja
#############################################
# Rule for compiling CUDA files.

rule CUDA_COMPILER__cpp_cuda_unscanned_Debug
  depfile = $DEP_FILE
  deps = gcc
  command = ${LAUNCHER}${CODE_CHECK}/opt/cuda/bin/nvcc -forward-unknown-to-host-compiler $DEFINES $INCLUDES $FLAGS -MD -MT $out -MF $DEP_FILE -x cu -c $in -o $out
  description = Building CUDA object $out
```

where `-MD` is short for `--generate-dependencies-with-compile` and `-MF` is short for `--dependency-output`. My words can be verified by `nvcc --help`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108613
Approved by: https://github.com/ezyang
2023-09-06 17:58:21 +00:00
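A minimal sketch of the flag handling this fix restores on Linux, using a hypothetical helper rather than the actual ninja-writing code in `cpp_extension.py`:

```python
def nvcc_dep_flags(object_path, is_rocm):
    # hipcc does not understand these nvcc options, so skip them on ROCm.
    if is_rocm:
        return []
    # Long forms of nvcc's -MD / -MF; they let ninja track header dependencies.
    return [
        "--generate-dependencies-with-compile",
        "--dependency-output",
        f"{object_path}.d",
    ]

print(nvcc_dep_flags("rms_norm.cuda.o", is_rocm=False))
```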
660e8060ad [BE]: Update ruff to 0.285 (#107519)
This updates ruff to 0.285, which is faster, better, and fixes a bunch of false negatives with regard to f-strings.

I also enabled RUF017, which looks for accidental quadratic list summation. Luckily, there seem to be no instances of it in our codebase, so I'm enabling the rule so that it stays that way. :)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107519
Approved by: https://github.com/ezyang
2023-08-22 23:16:38 +00:00
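An illustration of the pattern RUF017 flags:

```python
import itertools

batches = [[1, 2], [3, 4], [5]]

# Quadratic: each += builds a brand-new intermediate list.
flat_slow = sum(batches, [])

# Linear alternative the rule points you toward.
flat_fast = list(itertools.chain.from_iterable(batches))
assert flat_slow == flat_fast == [1, 2, 3, 4, 5]
```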
d59a6864fb Revert "[BE]: Update ruff to 0.285 (#107519)"
This reverts commit 88ab3e43228b7440a33bf534cde493446a31538c.

Reverted https://github.com/pytorch/pytorch/pull/107519 on behalf of https://github.com/ZainRizvi due to Sorry, but this PR breaks internal tests. @ezyang, can you please help them get unblocked? It seems like one of the strings was probably accidentally modified ([comment](https://github.com/pytorch/pytorch/pull/107519#issuecomment-1688833480))
2023-08-22 19:53:32 +00:00
3f3479e85e reduce header file to boost cpp_wrapper build. (#107585)
1. Reduce unused header files included by cpp_wrapper.
2. Clean the PCH cache when use_pch is False.

The first change reduces the build time from 7.35s to 4.94s.

Before change:
![image](https://github.com/pytorch/pytorch/assets/8433590/fc5c1d37-ec40-44f3-8d4d-bf26bdc674bb)
After change:
![image](https://github.com/pytorch/pytorch/assets/8433590/c7ccadd2-bf3a-4d30-bf56-6e3b0230a194)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107585
Approved by: https://github.com/ezyang, https://github.com/jansel, https://github.com/jgong5
2023-08-22 11:58:47 +00:00
5ed60477a7 Optimize load inline via pch (#106696)
Add a precompiled header (PCH) to reduce load_inline build time.
PCH is a GCC built-in mechanism: https://gcc.gnu.org/onlinedocs/gcc-4.0.4/gcc/Precompiled-Headers.html

Add a PCH for '#include <torch/extension.h>'. This header is used in every load_inline module, so all load_inline modules can benefit from this PR.

Changes:
1. Add a PCH signature to guarantee the PCH (.gch) file takes effect.
2. Unify the get-C++-compiler functions.
3. Unify the get-build-flags functions.

Before this PR:
![image](https://github.com/pytorch/pytorch/assets/8433590/f190cdcb-236c-4312-b165-d419a7efafe3)

Added this PR:
![image](https://github.com/pytorch/pytorch/assets/8433590/b45c5ad3-e902-4fc8-b450-743cf73505a4)

Compiling time is reduced from 14.06s to 7.36s.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106696
Approved by: https://github.com/jgong5, https://github.com/jansel
2023-08-21 10:08:30 +00:00
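A rough sketch of the "PCH signature" idea from change 1, assuming the signature is a hash of the compiler and flags that produced the .gch; the real implementation may record different inputs:

```python
import hashlib

def pch_signature(compiler, flags):
    # A .gch is only valid for the exact compiler/flag combination that
    # built it, so key the cache on both.
    payload = compiler + "\0" + "\0".join(sorted(flags))
    return hashlib.sha256(payload.encode()).hexdigest()

def pch_is_reusable(stored_signature, compiler, flags):
    return stored_signature == pch_signature(compiler, flags)

sig = pch_signature("g++", ["-std=c++17", "-O2", "-fPIC"])
print(pch_is_reusable(sig, "g++", ["-O2", "-fPIC", "-std=c++17"]))  # True
```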
88ab3e4322 [BE]: Update ruff to 0.285 (#107519)
This updates ruff to 0.285, which is faster, better, and fixes a bunch of false negatives with regard to f-strings.

I also enabled RUF017, which looks for accidental quadratic list summation. Luckily, there seem to be no instances of it in our codebase, so I'm enabling the rule so that it stays that way. :)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107519
Approved by: https://github.com/ezyang
2023-08-20 01:36:18 +00:00
bcc0f4bcab Move ASAN to clang12 and Ubuntu-22.04 (Jammy) (#106355)
- Modify `install_conda` to remove libstdc++ from libstdcxx-ng to use one from OS
- Modify `install_torchvision` to workaround weird glibc bug, where malloc interposers (such as ASAN) are causing a hang in internationalization library, see https://sourceware.org/bugzilla/show_bug.cgi?id=27653 and https://gcc.gnu.org/bugzilla/show_bug.cgi?id=90589
- Modify `torch.utils.cpp_extension` to recognize Ubuntu's clang as supported compiler

Extracted from https://github.com/pytorch/pytorch/pull/105260
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106355
Approved by: https://github.com/huydhn
ghstack dependencies: #106354
2023-08-03 05:36:04 +00:00
4cc1745b13 [BE] f-stringify torch/ and scripts (#105538)
This PR is a follow up on the pyupgrade series to convert more strings to use f-strings using `flynt`.

- https://docs.python.org/3/reference/lexical_analysis.html#f-strings
- https://pypi.org/project/flynt/

Command used:

```
flynt torch/ -ll 120
flynt scripts/ -ll 120
flynt tools/ -ll 120
```

and excluded `collect_env.py`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105538
Approved by: https://github.com/ezyang, https://github.com/malfet
2023-07-21 19:35:24 +00:00
abc1cadddb [BE] Enable ruff's UP rules and autoformat utils/ (#105424)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105424
Approved by: https://github.com/ezyang, https://github.com/malfet
2023-07-18 20:17:25 +00:00
004ff536e8 [ROCm] Fix circular recursion issue in hipification (#104085)
This PR fixes a circular-inclusion issue during the hipification process by introducing a current_state flag to track whether a file has already been processed for hipification (iterative DFS).
The issue arises when two header files include each other, which leads to circular recursion and an infinite loop.

Fixes the related issues such as :
https://github.com/pytorch/pytorch/issues/93827
https://github.com/ROCmSoftwarePlatform/hipify_torch/issues/39

Error log:
```
  File "/opt/conda/lib/python3.8/posixpath.py", line 471, in relpath
    start_list = [x for x in abspath(start).split(sep) if x]
  File "/opt/conda/lib/python3.8/posixpath.py", line 375, in abspath
    if not isabs(path):
  File "/opt/conda/lib/python3.8/posixpath.py", line 63, in isabs
    sep = _get_sep(s)
  File "/opt/conda/lib/python3.8/posixpath.py", line 42, in _get_sep
    if isinstance(path, bytes):
RecursionError: maximum recursion depth exceeded while calling a Python object
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104085
Approved by: https://github.com/jithunnair-amd, https://github.com/malfet
2023-07-01 03:25:51 +00:00
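A simplified Python sketch of the iterative-DFS idea, assuming a hypothetical `get_includes(path)` helper; the real hipify logic walks actual source trees and rewrites files:

```python
from enum import Enum

class State(Enum):
    VISITING = 1   # on the stack, includes being expanded
    DONE = 2       # fully processed

def hipify_order(root, get_includes):
    # get_includes(path) -> list of headers that `path` includes (hypothetical).
    current_state = {}
    order = []
    stack = [root]
    while stack:
        path = stack.pop()
        state = current_state.get(path)
        if state is State.DONE:
            continue
        if state is State.VISITING:
            current_state[path] = State.DONE
            order.append(path)          # process after its includes
            continue
        current_state[path] = State.VISITING
        stack.append(path)              # revisit once includes are handled
        for inc in get_includes(path):
            if current_state.get(inc) is None:
                stack.append(inc)
    return order

# Two headers including each other no longer recurse forever:
graph = {"a.h": ["b.h"], "b.h": ["a.h"]}
print(hipify_order("a.h", lambda p: graph.get(p, [])))
```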
e140c9cc92 Fixes ROCM_HOME detection in case no hipcc is found in path (#95634)
If ROCM_HOME is not set as an environment variable, the code tries to find hipcc in the PATH, but instead of raising an exception it fails with an empty string, returning that empty string rather than falling back to the hardcoded '/opt/rocm' as the third case.

Fixes #95633

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95634
Approved by: https://github.com/jithunnair-amd, https://github.com/ezyang
2023-06-28 19:39:26 +00:00
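A sketch of the intended fallback order (env var, then hipcc on PATH, then the hardcoded default); the actual function in `cpp_extension.py` may differ in detail:

```python
import os
import shutil

def find_rocm_home():
    rocm_home = os.environ.get("ROCM_HOME") or os.environ.get("ROCM_PATH")
    if rocm_home:
        return rocm_home
    hipcc = shutil.which("hipcc")
    if hipcc:
        # e.g. /opt/rocm/bin/hipcc -> /opt/rocm
        return os.path.dirname(os.path.dirname(os.path.realpath(hipcc)))
    # Third case: conventional install prefix, if present.
    return "/opt/rocm" if os.path.isdir("/opt/rocm") else None

print(find_rocm_home())
```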
b81f1d1bee Speed up cpp extensions re-compilation (#104280)
Fixes https://github.com/pytorch/pytorch/issues/68066 to a large extend.

This is achieved by not touching files that don't need changing to make sure the ninja caching works as expected.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104280
Approved by: https://github.com/fmassa
2023-06-28 17:06:07 +00:00
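A sketch of the core trick, using a hypothetical helper name: leaving unchanged outputs untouched keeps their mtimes stable, so ninja sees nothing to rebuild.

```python
import os

def write_if_changed(path, content):
    # Skip the write (and the mtime bump) when the file already holds
    # exactly this content.
    if os.path.exists(path):
        with open(path) as f:
            if f.read() == content:
                return False
    with open(path, "w") as f:
        f.write(content)
    return True
```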
347463fddf [cpp-extensions] Add clang to the list of supported Linux compilers (#103349)
Not sure why it was excluded previously (an oversight, I guess).
Also, please note that `clang++` is already considered an acceptable compiler (as it ends with `g++` ;))

<!--
copilot:poem
-->
### <samp>🤖 Generated by Copilot at 55aa7db</samp>

> _`clang` or `gcc`, we don't care what you use_
> _We'll build our extensions with the tools we choose_
> _Don't try to stop us with your version string_
> _We'll update our logic and make our code sing_
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103349
Approved by: https://github.com/seemethere
2023-06-10 02:53:38 +00:00
3c0072e7c0 [MPS] Prerequisite for MPS C++ extension (#102483)
In order to add MPS kernels to the torchvision codebase, we need to expose the MPS headers and allow Objective-C++ files to be used in extensions.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102483
Approved by: https://github.com/malfet
2023-06-07 17:28:31 +00:00
29da75cc55 Enable mypy allow redefinition (#102046)
Related #101528

I tried to enable this in another PR but it uncovered a bunch of type errors: https://github.com/pytorch/pytorch/actions/runs/4999748262/jobs/8956555243?pr=101528#step:10:1305

The goal of this PR is to fix these errors.

---

This PR enables [allow_redefinition = True](https://mypy.readthedocs.io/en/stable/config_file.html#confval-allow_redefinition) in `mypy.ini`, which allows for a common pattern:

> Allows variables to be redefined with an arbitrary type, as long as the redefinition is in the same block and nesting level as the original definition.

`allow_redefinition` allows mypy to be more flexible by allowing reassignment to an existing variable with a different type... for instance (from the linked PR):

4a1e9230ba/torch/nn/parallel/data_parallel.py (L213)

A `Sequence[Union[int, torch.device]]` is narrowed to `Sequence[int]` thru reassignment to the same variable.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102046
Approved by: https://github.com/ezyang
2023-05-24 07:05:30 +00:00
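The relevant `mypy.ini` line is simply `allow_redefinition = True`; below is a small illustration of the reassignment pattern it permits, adapted from the mypy documentation's example rather than from the PyTorch code this PR touches:

```python
from typing import List

def process(items: List[str]) -> None:
    # 'items' starts as List[str]; after the reassignment mypy treats it as
    # List[List[str]], which the default settings reject as an incompatible
    # assignment. allow_redefinition accepts it because the redefinition is
    # in the same block and nesting level.
    items = [item.split() for item in items]
    for row in items:
        print(len(row))

process(["a b", "c d e"])
```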
59a3759d97 Update cpp_extension.py (#101285)
When we need to link extra libs, note that 64-bit CUDA may be installed in "lib", not in "lib64".

<!--
copilot:summary
-->
### <samp>🤖 Generated by Copilot at 05c1ca6</samp>

Improve CUDA compatibility in `torch.utils.cpp_extension` by checking for `lib64` or `lib` directory.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/101285
Approved by: https://github.com/ezyang, https://github.com/malfet
2023-05-15 22:47:41 +00:00
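A sketch of the directory check described above; the real code in `cpp_extension.py` may structure this differently:

```python
import os

def cuda_lib_dir(cuda_home):
    # Prefer lib64 when it exists, otherwise fall back to lib; some CUDA
    # installs only ship the latter.
    lib64 = os.path.join(cuda_home, "lib64")
    return lib64 if os.path.isdir(lib64) else os.path.join(cuda_home, "lib")

print(cuda_lib_dir("/usr/local/cuda"))
```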
5f92909faf Use correct standard when compiling NVCC on Windows (#100031)
Test Plan: Sandcastle

Differential Revision: D45129001

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100031
Approved by: https://github.com/ngimel
2023-05-01 16:28:23 +00:00
e2a3817dfd [BE] Enable C419 rule for any all shortcircuiting (#99890)
Apparently https://github.com/pytorch/pytorch/pull/78142 made torch.JIT allow simple generator expressions, which lets us enable rules that replace unnecessary list comprehensions with generators in any/all. This was originally part of #99280, but I split it off into this PR so that it can be easily reverted should anything break.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99890
Approved by: https://github.com/justinchuby, https://github.com/kit1980, https://github.com/malfet
2023-04-25 15:02:13 +00:00
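The rewrite C419 performs, for reference:

```python
nums = [1, -2, 3]

# Before: builds an intermediate list just to feed any().
has_negative = any([n < 0 for n in nums])

# After: the generator short-circuits on the first match and allocates nothing.
has_negative = any(n < 0 for n in nums)
print(has_negative)
```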
cfacb5eaaa Revert "Use correct standard when compiling NVCC on Windows (#99492)"
This reverts commit db6944562efad201c7c1dc2fc0539b1f34012666.

Reverted https://github.com/pytorch/pytorch/pull/99492 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally
2023-04-19 20:51:26 +00:00
db6944562e Use correct standard when compiling NVCC on Windows (#99492)
Test Plan: Sandcastle

Reviewed By: malfet

Differential Revision: D45108690

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99492
Approved by: https://github.com/ezyang
2023-04-19 20:36:05 +00:00
08f125bcac [ROCm] Remove usage of deprecated ROCm component header includes (#97620)
- clang parameter 'amdgpu-target' changed to 'offload-arch'
- HIP and MIOpen includes path updated for extensions

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97620
Approved by: https://github.com/ezyang, https://github.com/jithunnair-amd
2023-03-28 19:28:38 +00:00
8275e5d2a8 [cpp_extension.py] fix bogus _check_cuda_version (#97602)
Currently, if `setuptools<49.4.0` and there is a minor version mismatch, `_check_cuda_version` fails with a misleading, non-actionable error:
```
2023-03-24T20:21:35.0625644Z   RuntimeError:
2023-03-24T20:21:35.0628441Z   The detected CUDA version (11.2) mismatches the version that was used to compile
2023-03-24T20:21:35.0630681Z   PyTorch (11.3). Please make sure to use the same CUDA versions.
```
This condition shouldn't fail, since a minor-version match isn't required.

It fails because another requirement, on the `setuptools` version, isn't met. But that requirement is only written in a comment (!!!). So this PR changes the error message to actually tell the user how to fix the problem.

While at it, I lowered the required version, as `setuptools>=49.4.0` is sufficient for this to work.

Thanks.

p.s. this problem manifests on `nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04` docker image.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97602
Approved by: https://github.com/ezyang
2023-03-27 15:15:57 +00:00
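A rough sketch of the intended behaviour, as a hypothetical simplified helper (the real `_check_cuda_version` and its error wording differ): only the major versions must match, and a minor mismatch should point the user at the setuptools requirement rather than at CUDA.

```python
from packaging import version
import setuptools

def check_cuda_version(torch_cuda, system_cuda):
    t, s = version.parse(torch_cuda), version.parse(system_cuda)
    if t.major != s.major:
        raise RuntimeError(
            f"The detected CUDA version ({system_cuda}) has a different major "
            f"version than the one used to compile PyTorch ({torch_cuda})."
        )
    if t.minor != s.minor and version.parse(setuptools.__version__) < version.parse("49.4.0"):
        raise RuntimeError(
            "Minor CUDA version mismatch detected; please upgrade to "
            "setuptools>=49.4.0 so the mismatch can be tolerated."
        )

check_cuda_version("11.3", "11.2")  # passes with setuptools>=49.4.0
```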
461f088c96 add -std=c++17 to windows cuda compilations (#97515)
add -std=c++17 to windows cuda compilations

Summary:
We're using C++17 in headers that are compiled by C++
extensions. Support for this was not added when we upgraded to C++17.

Test Plan: Rely on CI.

---
Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/pytorch/pytorch/pull/97515).
* #97175
* __->__ #97515
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97515
Approved by: https://github.com/ezyang
2023-03-26 15:23:52 +00:00
622a11d512 Fix typos under torch/utils directory (#97516)
This PR fixes typos in comments and messages of `.py` files under `torch/utils` directory

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97516
Approved by: https://github.com/ezyang
2023-03-24 16:53:39 +00:00
bcff4773da add /std:c++17 to windows compilations when not using Ninja (#97445)
add /std:c++17 to windows compilations when not using Ninja

Summary:
This was overlooked when we upgraded to C++17.

Test Plan: Rely on CI.

Reviewers: ezyang

Subscribers:

Tasks:

Tags:

---
Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/pytorch/pytorch/pull/97445).
* #96603
* #97473
* #97175
* #97515
* __->__ #97445
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97445
Approved by: https://github.com/ezyang
2023-03-24 14:52:29 +00:00
bdaf402565 build C++ extensions on windows with /std:c++17 (#97413)
build C++ extensions on windows with /std:c++17

Summary:
We added -std=c++17 to Posix builds, but neglected to add this for
Windows. This just brings back parity.

Test Plan: Rely on CI.

Reviewers: ezyang

Subscribers:

Tasks:

Tags:

---
Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/pytorch/pytorch/pull/97413).
* #97175
* __->__ #97413
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97413
Approved by: https://github.com/ezyang
2023-03-23 13:31:29 +00:00
44d7bbfe22 [cpp extension] Allow setting PYTORCH_NVCC to a customized nvcc in torch cpp extension build (#96987)
per title

I can write a script named `nvcc` like this
```bash
#!/bin/bash
/opt/cache/bin/sccache /usr/local/cuda/bin/nvcc "$@"  # quote "$@" so arguments containing spaces are forwarded intact
```
and set its path to `PYTORCH_NVCC` (added in this PR), along with another `sccache-g++` script to env var `CXX`.
cfa6b52e02/torch/utils/cpp_extension.py (L2106-L2109)

With ninja, I can fully enable c-cached build on my cuda extensions.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96987
Approved by: https://github.com/ezyang
2023-03-17 17:05:17 +00:00
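A sketch of how the new variable might be consumed from a build script, assuming the wrapper scripts above exist at these (hypothetical) paths and are executable:

```python
import os
from torch.utils.cpp_extension import load

# Point cpp_extension at the caching wrappers (paths are hypothetical).
os.environ["PYTORCH_NVCC"] = "/opt/cache/bin/nvcc"   # sccache + nvcc wrapper
os.environ["CXX"] = "/opt/cache/bin/sccache-g++"     # sccache + g++ wrapper

# Hypothetical extension source; subsequent builds hit the sccache cache.
ext = load(name="my_cuda_ext", sources=["kernels.cu"], verbose=True)
```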
dd5e6e8553 [BE]: Merge startswith calls - rule PIE810 (#96754)
Merges startswith/endswith calls into a single call that takes a tuple. Not only are these calls more readable, they are also more efficient, since each string is iterated over only once.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96754
Approved by: https://github.com/ezyang
2023-03-14 22:05:20 +00:00
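The merge PIE810 suggests, for reference:

```python
name = "aten::add.out"

# Before: two scans of the string.
if name.startswith("aten::") or name.startswith("prim::"):
    print("builtin op")

# After: one call with a tuple of prefixes, and easier to extend.
if name.startswith(("aten::", "prim::")):
    print("builtin op")
```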
cyy
a32be76a53 Disable more warnings on Windows CI test (#95933)
These warnings are disabled to avoid overly long logs in Windows tests. They are also currently disabled in the CMake builds.
- '/wd4624': MSVC complains "destructor was implicitly defined as deleted" on c10::optional and other templates
- '/wd4076': "unexpected tokens following preprocessor directive - expected a newline" on some headers
- '/wd4068': "The compiler ignored an unrecognized [pragma]"

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95933
Approved by: https://github.com/ezyang
2023-03-03 07:11:13 +00:00
db8e91ef73 [CUDA] Split out compute capability 8.7 and 7.2 from others (#95803)
Follow up of #95008 to avoid building Jetson compute capabilities unnecessarily, also adds missing 7.2.

CC @ptrblck @malfet
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95803
Approved by: https://github.com/ezyang
2023-03-02 14:13:15 +00:00