Compare commits

58 Commits

Author SHA1 Message Date
0365423035 Pin triton-rocm to latest 2.3.1 commit (#126309)
* Revert "pin rocm"

This reverts commit 45ebb10bdd22e00c9035fbacf00634938941a192.

* Revert "lint"

This reverts commit 05860b9a444c96424cb0203fca90d7dd52fbc83c.

* rocm_pin
2024-05-15 16:14:39 -04:00
03baf94aae [EZ] Get rid of utf-8 quotes (#126301)
Replace `“important”` with `"important"` and `Taylor’s` with `Taylor's`

Fixes the obvious symptoms of https://github.com/pytorch/pytorch/issues/124897

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
2024-05-15 15:03:37 -04:00
7782f2866c [dynamo] fix 3.11+ refleak for 2.3.1 release (#126235) 2024-05-15 13:21:19 -04:00
be9a4076f0 Do not import transformers when import torch._dynamo (#124634) (#125755)
Fixes https://github.com/pytorch/pytorch/issues/123954

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124634
Approved by: https://github.com/thiagocrepaldi, https://github.com/Chillee
ghstack dependencies: #124343
2024-05-15 13:15:44 -04:00
d114e0488c [Release only] Advance triton version to 2.3.1 (#126204)
* Advance triton version to 2.3.1

* update_commit_pin

* pin

* pin rocm

* lint
2024-05-15 11:13:52 -04:00
194698a4ac Revert "[Release only] Release 2.3 start using triton package from pypi" (#126202)
Revert "[Release only] Release 2.3 start using triton package from pypi (#123…"

This reverts commit 97ff6cfd9c86c5c09d7ce775ab64ec5c99230f5d.
2024-05-14 17:01:07 -04:00
1d6a938090 [cherry-pick][device_mesh] add back the private init backend option (#124780) (#126147)
[device_mesh] add a private init backend option (#124780)

This PR adds a private init backend option to tackle issues with submesh
creation:

in device mesh slicing we don't want to create process groups again,
so being able to explicitly turn group creation off is useful.

Also, there may be more submesh creation functionality later, so
having this flag ensures that no new groups are created.

Differential Revision: [D56497780](https://our.internmc.facebook.com/intern/diff/D56497780)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124780
Approved by: https://github.com/awgu
2024-05-14 13:38:55 -04:00
4f0b3ad855 Fix ref leak in dtype.to_complex()/to_real() (#126101)
* Fix ref leak in `dtype.to_complex()`/`to_real()` (#125154)

By using `Py_NewRef`

Also, wrap `THPDtype_to_real`/`THPDtype_to_complex` calls with `HANDLE_TH_ERRORS`

Add a regression test for the above issues by calling `to_complex` on integral dtypes, which raises an exception, and by checking that the reference count stays constant across repeated `to_complex`/`to_real` calls to detect whether a leak is happening (see the sketch below).

Replace
```cpp
auto dtype = (PyObject*)torch::getTHPDtype(current_dtype);
Py_INCREF(dtype);
return dtype;
```
with a more compact/streamlined equivalent
```cpp
return Py_NewRef(torch::getTHPDtype(current_dtype));
```

Fixes https://github.com/pytorch/pytorch/issues/124868

Pull Request resolved: https://github.com/pytorch/pytorch/pull/125154
Approved by: https://github.com/Skylion007, https://github.com/albanD

(cherry picked from commit 744f341aa4eaa4f2e7068e5f83fa6fccb0a02ccc)
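
A minimal sketch of the kind of refcount regression test described above, using only public APIs; the exact exception type raised for integral dtypes is an assumption:

```python
import sys

import torch

# The returned dtype object is a singleton, so repeated calls must not
# change its reference count if no reference is leaked.
result = torch.float32.to_complex()  # torch.complex64
before = sys.getrefcount(result)
for _ in range(10):
    torch.float32.to_complex()
assert sys.getrefcount(result) == before

# Calling to_complex() on an integral dtype raises; with HANDLE_TH_ERRORS
# this surfaces as a Python exception instead of crashing the interpreter.
try:
    torch.int32.to_complex()
except Exception:  # assumed: RuntimeError in practice
    pass
```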

* Revert "Fix ref leak in `dtype.to_complex()`/`to_real()` (#125154)"

This reverts commit a1b04d8832c64d6112b432ce1e7725f02761cf21.

* Fix ref leak in `dtype.to_complex()`/`to_real()` (#125154)

By using `Py_NewRef`

Also, wrap `THPDtype_to_real`/`THPDtype_to_complex` calls with `HANDLE_TH_ERRORS`

Add a regression test for the above issues by calling `to_complex` on integral dtypes, which raises an exception, and by checking that the reference count stays constant across repeated `to_complex`/`to_real` calls to detect whether a leak is happening.

Replace
```cpp
auto dtype = (PyObject*)torch::getTHPDtype(current_dtype);
Py_INCREF(dtype);
return dtype;
```
with a more compact/streamlined equivalent
```cpp
return Py_NewRef(torch::getTHPDtype(current_dtype));
```

Fixes https://github.com/pytorch/pytorch/issues/124868

Pull Request resolved: https://github.com/pytorch/pytorch/pull/125154
Approved by: https://github.com/Skylion007, https://github.com/albanD

(cherry picked from commit 744f341aa4eaa4f2e7068e5f83fa6fccb0a02ccc)

* Revert "Fix ref leak in `dtype.to_complex()`/`to_real()` (#125154)"

This reverts commit 5a28bad418ad8187036c64b3fe487109dfc4ff5d.

* Refactor autocast C++ APIs to be device-agnostic (#124359)

# Motivation
This PR aims to refactor the autocast **C++** APIs to be device-agnostic and deprecates the device-specific autocast **C++** APIs.
On the C++ side,
- `is_enabled()` -> `is_enabled(device_type)`.
- `set_enabled(new_enabled)` -> `set_enabled(device_type, new_enabled)`.
- `get_autocast_dtype()` -> `get_autocast_dtype(device_type)`
- `set_autocast_dtype(dtype)` -> `set_autocast_dtype(device_type, dtype)`

The following C++ APIs are deprecated and should be removed in PyTorch 2.5:
- `is_cpu_enabled`
- `set_cpu_enabled`
- `get_autocast_cpu_dtype`
- `set_autocast_cpu_dtype`
- `is_xpu_enabled`
- `set_xpu_enabled`
- `get_autocast_xpu_dtype`
- `set_autocast_xpu_dtype`
- `is_ipu_enabled`
- `set_ipu_enabled`
- `get_autocast_ipu_dtype`
- `set_autocast_ipu_dtype`
- `is_hpu_enabled`
- `set_hpu_enabled`
- `get_autocast_hpu_dtype`
- `set_autocast_hpu_dtype`
- `is_xla_enabled`
- `set_xla_enabled`
- `get_autocast_xla_dtype`
- `set_autocast_xla_dtype`
- `is_privateuseone_enabled`
- `set_privateuseone_enabled`
- `get_autocast_privateuseone_dtype`
- `set_autocast_privateuseone_dtype`

On the Python side, provide 4 generic autocast APIs (sketched below):
- `torch.is_autocast_enabled(device_type)`
- `torch.set_autocast_enabled(device_type, new_enabled)`
- `torch.get_autocast_dtype(device_type)`
- `torch.set_autocast_dtype(device_type, dtype)`
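
A quick sketch of the generic Python APIs in use (note these signatures come from this commit, which was later reverted on this release branch, so they only apply where the refactor is present):

```python
import torch

# Device-agnostic autocast state management; "cpu" works without a GPU.
torch.set_autocast_enabled("cpu", True)
torch.set_autocast_dtype("cpu", torch.bfloat16)

assert torch.is_autocast_enabled("cpu")
assert torch.get_autocast_dtype("cpu") == torch.bfloat16

torch.set_autocast_enabled("cpu", False)
```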

# Additional Context
We will submit another PR to refactor autocast **Python** APIs based on this PR.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124359
Approved by: https://github.com/jgong5, https://github.com/albanD

* refactor autocast python APIs (#124479)

Refactor autocast usage in `torch/amp/autocast_mode.py` and `torch/utils/checkpoint.py` to fix the bug: a naming-convention conflict between `torch.xxx.get_autocast_xxx_dtype` defined in `autocast_mode.py` and `torch.xxx.get_autocast_dtype` defined in `checkpoint.py`.

Use device-agnostic APIs like `torch.get_autocast_dtype`, ..., instead.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124479
Approved by: https://github.com/jgong5, https://github.com/gujinghui, https://github.com/EikanWang, https://github.com/albanD
ghstack dependencies: #124359

* Fix ref leak in `dtype.to_complex()`/`to_real()` (#125154)

By using `Py_NewRef`

Also, wrap `THPDtype_to_real`/`THPDtype_to_complex` calls with `HANDLE_TH_ERRORS`

Add a regression test for the above issues by calling `to_complex` on integral dtypes, which raises an exception, and by checking that the reference count stays constant across repeated `to_complex`/`to_real` calls to detect whether a leak is happening.

Replace
```cpp
auto dtype = (PyObject*)torch::getTHPDtype(current_dtype);
Py_INCREF(dtype);
return dtype;
```
with a more compact/streamlined equivalent
```cpp
return Py_NewRef(torch::getTHPDtype(current_dtype));
```

Fixes https://github.com/pytorch/pytorch/issues/124868

Pull Request resolved: https://github.com/pytorch/pytorch/pull/125154
Approved by: https://github.com/Skylion007, https://github.com/albanD

* Revert "refactor autocast python APIs (#124479)"

This reverts commit 495b0c9aec07472d82b9fa5e5cdaab35ec16898d.

* Revert "Refactor autocast C++ APIs to be device-agnostic (#124359)"

This reverts commit 83106b7c4f7d6cc84ccc8beb0949eaa94c830d8d.

---------

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
Co-authored-by: Huy Do <huydhn@gmail.com>
Co-authored-by: Yu, Guangye <guangye.yu@intel.com>
2024-05-14 13:29:45 -04:00
bf1b3a056a Lint: Update older-python test to 3.6 (#126108)
Lint: Update older-python test to 3.6 (#125843)

As Python 3.5 can no longer connect to PyPI after today's cert update.
Fixes https://github.com/pytorch/pytorch/issues/125841

(cherry picked from commit 7e86a7c0155295539996e0cf422883571126073e)

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
2024-05-13 14:45:40 -07:00
c365674171 Fixes format utils executable (#123482)
Fixes format utils executable (#123407)

Fixes an issue with the format utils executable, which was causing it to run as a no-op. :(

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123407
Approved by: https://github.com/wz337, https://github.com/fegin

(cherry picked from commit 18c9d460682293bae23386e9bc40932eee1d8add)

Co-authored-by: Lucas Pasqualin <lpasqualin@meta.com>
2024-05-13 14:44:35 -07:00
768e4b9420 [MPS] Fix large copy (#126104)
[MPS] Fix large copy (#124635)

By slicing `copyFromBuffer:sourceOffset:toBuffer:destinationOffset:size:` into 2Gb chunks

Add a regression test, but limit it to machines with 12Gb of RAM or more and MacOS 14+, as on MacOS 13 an attempt to allocate a 4Gb tensor fails with:
```
/AppleInternal/Library/BuildRoots/c651a45f-806e-11ed-a221-7ef33c48bc85/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Types/MPSNDArray.mm:724: failed assertion `[MPSNDArray initWithDevice:descriptor:] Error: total bytes of NDArray > 2**32'
```

Fixes https://github.com/pytorch/pytorch/issues/124335
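
A plain-Python sketch of the chunking idea (the real fix lives in the Objective-C copy path; `copy_fn` and the helper below are illustrative):

```python
CHUNK = 2 * 1024**3  # 2 GiB per copyFromBuffer call, per the fix above


def chunked_copy(copy_fn, total_size, src_offset=0, dst_offset=0):
    """Split one large copy into <= 2 GiB calls to copy_fn (hypothetical)."""
    done = 0
    while done < total_size:
        n = min(CHUNK, total_size - done)
        copy_fn(src_offset + done, dst_offset + done, n)
        done += n
```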

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124635
Approved by: https://github.com/kulinseth

(cherry picked from commit abf3f90781263ee45e2e79cf7f80102ffa7f1b14)

Co-authored-by: Nikita Shulga <nshulga@meta.com>
2024-05-13 14:42:29 -07:00
e25474c05d [MPS] Native nonzero implementation (#126100)
[MPS] Native nonzero implementation (#125355)

Fixes https://github.com/pytorch/pytorch/issues/124850

Replace the previous MPSGraph nonzero construction with the native nonzero op. For older OSes, fall back to CPU (the previous implementation was not reliable and was comparable to CPU in speed).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/125355
Approved by: https://github.com/kulinseth

(cherry picked from commit a40d6df448de1acb263ed8f6ff9e7d26f5a1a161)

Co-authored-by: Denis Vieriu <dvieriu@apple.com>
2024-05-13 14:39:39 -07:00
d8b35dac22 Publish PyTorch docs to pytorch/cpp repo (#126102)
Publish PyTorch docs to pytorch/cpp repo (#122895)

Updating the documents push to go to the https://github.com/pytorch/docs repo instead of https://github.com/pytorch/pytorch.github.io as part of updating the PyTorch docs setup.
Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
Co-authored-by: Svetlana Karslioglu <svekars@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122895
Approved by: https://github.com/malfet

(cherry picked from commit 535a84c12553a6a423bb3f7f7b175ee7bf8d5464)

Co-authored-by: sekyondaMeta <127536312+sekyondaMeta@users.noreply.github.com>
2024-05-13 14:35:46 -07:00
bbb838654c Separate arm64 and amd64 docker builds (#126099)
Separate arm64 and amd64 docker builds (#125617)

Fixes https://github.com/pytorch/pytorch/issues/125094

Please note: the Docker CUDA 12.4 failure is an existing issue, related to the docker image not being available:
```
docker.io/nvidia/cuda:12.4.0-cudnn8-devel-ubuntu22.04: docker.io/nvidia/cuda:12.4.0-cudnn8-devel-ubuntu22.04: not found
```
 https://github.com/pytorch/pytorch/actions/runs/8974959068/job/24648540236?pr=125617

Here is the reference issue: https://gitlab.com/nvidia/container-images/cuda/-/issues/225

Tracked on our side: https://github.com/pytorch/builder/issues/1811
Pull Request resolved: https://github.com/pytorch/pytorch/pull/125617
Approved by: https://github.com/huydhn, https://github.com/malfet

(cherry picked from commit b29d77b54f5cee9e786f8a72329837043f36a349)

Co-authored-by: atalman <atalman@fb.com>
2024-05-13 14:33:28 -07:00
d983cb78e2 [MPS] Fix abs for complex types (#126096)
[MPS] Fix `abs` for complex types (#125662)

By calling `realPartOfTensor:` if the input type is complex on Sonoma, and falling back to the `at::view_as_real` trick on Ventura.

Split the `unary_op` template into `unary_op` and `unary_op_noresize`, the latter of which skips the resize and empty checks.

Marked the `abs`, `isclose` and `nn.functional.softsign` OpInfo tests as supported for complex types.

Fixes https://github.com/pytorch/pytorch/issues/125135

Pull Request resolved: https://github.com/pytorch/pytorch/pull/125662
Approved by: https://github.com/kulinseth

(cherry picked from commit 0fd1fc17c3a53cb4cede1992d431d5384b2813f3)

Co-authored-by: Nikita Shulga <nikita.shulga@gmail.com>
2024-05-13 14:32:05 -07:00
75e01e7df0 Add userbase library dir to windows dll search path (#126095)
Add userbase library dir to windows dll search path (#125684)

Fixes https://github.com/pytorch/pytorch/issues/125109, a regression introduced by https://github.com/pytorch/builder/pull/1467, which added a dynamic dependency on mkl; when mkl is installed in the user dir, its DLLs are placed into `sysconfig.get_config_var("userbase") / "Library" / "bin"`.

Fix this by adding the `userbase` folder to the DLL search path (see the sketch below).
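
A minimal sketch of the approach, assuming Windows and Python 3.8+ (`os.add_dll_directory`); the actual change lives in torch's DLL-loading logic:

```python
import os
import sysconfig

# Make DLLs installed under the per-user site (e.g. mkl's) discoverable
# before torch's own libraries are loaded.
userbase = sysconfig.get_config_var("userbase")
userbase_bin = os.path.join(userbase, "Library", "bin")
if os.path.exists(userbase_bin):
    os.add_dll_directory(userbase_bin)  # Windows-only API
```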

Testing before this fix:
```
Python 3.12.3 (tags/v3.12.3:f6650f9, Apr  9 2024, 14:05:25) [MSC v.1938 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\Administrator\AppData\Roaming\Python\Python312\site-packages\torch\__init__.py", line 141, in <module>
    raise err
OSError: [WinError 126] The specified module could not be found. Error loading "C:\Users\Administrator\AppData\Roaming\Python\Python312\site-packages\torch\lib\shm.dll" or one of its dependencies.
>>> exit()
```

After:
```
c:\Program Files\Python312>python
Python 3.12.3 (tags/v3.12.3:f6650f9, Apr  9 2024, 14:05:25) [MSC v.1938 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> exit()
```
Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/125684
Approved by: https://github.com/malfet

(cherry picked from commit fdfef759a676ee7a853872e347537bc1e4b51390)

Co-authored-by: atalman <atalman@fb.com>
2024-05-13 14:30:59 -07:00
a696b3b7f6 Restore DILL_AVAILABLE for backwards compat with torchdata (#126094)
Restore DILL_AVAILABLE for backwards compat with torchdata (#122616)

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/122616
Approved by: https://github.com/peterbell10

(cherry picked from commit f42818321b5694c5be76f16173c7ac9223ec8ae3)

Co-authored-by: Edward Z. Yang <ezyang@meta.com>
2024-05-13 14:30:14 -07:00
2e165ec9c2 Revert "Include torch warn in each error in cudnn/Conv_v8.cpp (#120719)" (#125790)
This reverts commit 5fd7f5c4e336c2c3041e10529990c620cc8cf9a5.

Reverted https://github.com/pytorch/pytorch/pull/120719 on behalf of https://github.com/janeyx99 due to sorry but am reverting as this prints unwanted warnings even when an exception is not thrown  ([comment](https://github.com/pytorch/pytorch/pull/120719#issuecomment-1994491826))

Co-authored-by: PyTorch MergeBot <pytorchmergebot@users.noreply.github.com>
2024-05-13 15:48:42 -04:00
1199df476e [Doc] Update docstrings for torch/random.py (#125443)
[Doc] Update docstrings for torch/random.py (#125265)

Updates the docstrings for torch/random.py to clarify what device / RNG each function operates on.

While trying to understand the difference between
```
state = torch.random.get_rng_state()
some_code
torch.random.set_rng_state(state)
```
and
```
with torch.random.fork_rng():
    some_code
```
I found out that there was a note about this in the docstring that wasn't being rendered on the website. I fixed that note and added additional clarifications to the other functions in this file.
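
A small sketch contrasting the two patterns; note that `get_rng_state`/`set_rng_state` only touch the CPU RNG, while `fork_rng` also forks device RNG states (see its `devices` argument):

```python
import torch

# Manual save/restore of the CPU RNG state only:
state = torch.random.get_rng_state()
torch.randn(3)  # advances the CPU RNG
torch.random.set_rng_state(state)

# fork_rng restores the state automatically on exit:
with torch.random.fork_rng():
    torch.randn(3)  # does not perturb the RNG outside the block
```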

Test Plan:
Built the docs and verified that everything renders correctly.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/125265
Approved by: https://github.com/Balandat, https://github.com/lezcano
2024-05-13 15:46:47 -04:00
97ff6cfd9c [Release only] Release 2.3 start using triton package from pypi (#123580) 2024-04-08 16:27:33 -04:00
fb38ab7881 Fix for MPS regression in #122016 and #123178 (#123385)
Fixes #122016 and #123178. This regression is related to an OS-side change that requires a slight adjustment from us on the PyTorch side to restore the previous behavior. Additionally, we cleared out pre-MacOS13 workarounds.

Before the fix on MacOS 14.4:

```
python -c "import torch;x=torch.zeros(3, device='mps');x[1] = 1; x[2] = 3; print(x)"
tensor([0., 3., 3.], device='mps:0')
```

After the fix:
```
python -c "import torch;x=torch.zeros(3, device='mps');x[1] = 1; x[2] = 3; print(x)"
tensor([0., 1., 3.], device='mps:0')
```

This also fixes complex number initialization and as such makes `nn.functional.rms_norm` pass on MacOS-14+

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123234
Approved by: https://github.com/malfet, https://github.com/kulinseth

(cherry picked from commit 05289a278c3eaca271061649982f38c435b50674)

Co-authored-by: Joona Havukainen <jhavukainen@apple.com>
2024-04-05 18:46:31 -04:00
23961cef85 [Release/2.3] Set py3.x build-environment name consistently (#123446)
https://github.com/pytorch/pytorch/pull/122157 checks for the Python version using `"$BUILD_ENVIRONMENT" != *py3.8*`, but some build environments use a different style with `py3_8` instead, causing numpy 2.x to be wrongly installed there, i.e. 03b987fe3f
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122247
Approved by: https://github.com/malfet

(cherry picked from commit 6fefc52a2b4f814c5bc85f4087a92ad7f6ee3abe)

Co-authored-by: Huy Do <huydhn@gmail.com>
2024-04-05 09:01:19 -07:00
634cf5069a [Wheel] Change libtorch_cpu OpenMP search path (#123417) (#123442)
To prevent delocate from double-packing it, which makes Torch wheels
unusable with torch.compile out of the box

Fixes https://github.com/pytorch/pytorch/issues/122705

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123417
Approved by: https://github.com/atalman

Co-authored-by: Nikita Shulga <nikita.shulga@gmail.com>
2024-04-05 10:22:39 -04:00
12d0e693d0 update submodule onnx==1.16.0 (#123387)
Fixes #121258

CC @malfet @atalman
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123125
Approved by: https://github.com/malfet

(cherry picked from commit 19c2ed15c099c7ed9f96074584af6ab9da206f92)

Co-authored-by: pbialecki <piotr.bialecki@hotmail.de>
2024-04-04 20:47:38 -04:00
38acd812ab [MPS] Fwd-fix for clamp regression (#122148) (#123383)
Forward fix for regressions introduced by https://github.com/pytorch/pytorch/pull/121381 as we failed to run MPS CI twice on it

- Do not call `minimumWithNaNPropagationWithPrimaryTensor` for integral tensors as it will crash with
  ```
    /AppleInternal/Library/BuildRoots/ce725a5f-c761-11ee-a4ec-b6ef2fd8d87b/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Utility/MPSKernelDAG.mm:805: failed assertion `Error getting visible function: (null) Function isNaN_i16_i8 was not found in the library'
   ```
- Change the order of the max and min calls, as it's apparently important for
  consistency: `min(max(a, b), c)` might not equal `max(min(a, c), b)` if `c` is not always less than or equal to `b` (see the worked example below)
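
A tiny worked example of why the order matters when the bounds cross (plain Python numbers, not the MPS code path):

```python
a, b, c = 10, 3, 1        # input, lower bound, upper bound (note c < b)
print(min(max(a, b), c))  # 1 -- this order resolves to the upper bound
print(max(min(a, c), b))  # 3 -- the other order resolves to the lower bound
```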

Pull Request resolved: https://github.com/pytorch/pytorch/pull/122148
Approved by: https://github.com/huydhn

Co-authored-by: Nikita Shulga <nikita.shulga@gmail.com>
2024-04-04 16:29:42 -07:00
b197f540bc Use numpy 2.0.0rc1 in CI (#123356)
Bump numpy version to 2.0.0rc1 in CI

Related to: https://github.com/pytorch/pytorch/issues/107302
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123286
Approved by: https://github.com/huydhn, https://github.com/kit1980, https://github.com/ZainRizvi

(cherry picked from commit 26b4ccf9d171a4abb3b25d9f88fc594ea5aca1ce)

Co-authored-by: atalman <atalman@fb.com>
2024-04-04 19:02:49 -04:00
dc81d19aac [CI] Test that NumPy-2.X builds are backward compatible with 1.X (#123354)
By compiling PyTorch against 2.x RC, but running all the tests with Numpy-1.X

This has no effect on binary builds
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122157
Approved by: https://github.com/atalman

(cherry picked from commit 03b987fe3fa93f398c0af5b40e512950c39a7cb6)

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
2024-04-04 19:00:35 -04:00
108305e47b Upgrade submodule pybind to 2.12.0 (#123355)
To fix https://github.com/pytorch/pytorch/issues/122056

Building with NP 2.0 allows me to run locally with both NP 2.0 and 1.26.
Any other test we should run, @rgommers?

FYI @Skylion007 @atalman
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122899
Approved by: https://github.com/Skylion007

(cherry picked from commit 6c2f36c9845f310db8ece23c0d2e4ad6f702bc57)

Co-authored-by: albanD <desmaison.alban@gmail.com>
2024-04-04 18:07:42 -04:00
a8b009185d Make PyTorch compilable against upcoming Numpy-2.0 (#121880) (#123380)
Test plan:
```
% python -c "import torch;import numpy;print(numpy.__version__, torch.tensor(numpy.arange(3, 10)))"
2.1.0.dev0+git20240312.9de8a80 tensor([3, 4, 5, 6, 7, 8, 9])
% python -c "import torch;print(torch.rand(3, 3).numpy())"
[[0.0931946  0.44874293 0.8480404 ]
 [0.93877375 0.10188377 0.67375803]
 [0.02520031 0.89019287 0.5691561 ]]

```
Fixes https://github.com/pytorch/pytorch/issues/121798

Pull Request resolved: https://github.com/pytorch/pytorch/pull/121880
Approved by: https://github.com/albanD

(cherry picked from commit 38d9bb5abcc31ba97927a5399b88afe2cf60bf64)

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
2024-04-04 14:22:26 -07:00
b67b277268 Fix torch.clamp in MPS to handle NaN correctly (#121381) (#122785)
Fixes #120899

So this is interesting. There are methods that specifically propagate NaN instead of clamping to real numbers.
https://developer.apple.com/documentation/metalperformanceshadersgraph/mpsgraph/3857573-maximumwithnanpropagationwithpri
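
The expected behavior after the fix, assuming an MPS device is available: NaN propagates through `clamp` instead of being clamped to a bound, matching CPU semantics:

```python
import torch

x = torch.tensor([1.5, float("nan"), -2.0], device="mps")
print(torch.clamp(x, min=-1.0, max=1.0))
# tensor([1., nan, -1.], device='mps:0')
```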

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/121381
Approved by: https://github.com/malfet

(cherry picked from commit 40acc84aafa82f00a5b3966302638f344bef07bd)

Co-authored-by: Roger Lam <mrlamroger@gmail.com>
2024-04-04 13:26:29 -07:00
a8f93a5c71 [ONNX] beartype to emit warning instead of error by default (#123363)
Making the exporter more "robust" to advances in the beartype tool.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123205
Approved by: https://github.com/justinchuby, https://github.com/thiagocrepaldi
2024-04-04 16:13:58 -04:00
fa07dc5132 [MPS] Fix naive matmul for BFloat16 (#123289)
Will only work on MacOS14 or newer, so compile the shader with `MTLLanguageVersion_3_1` when appropriate

Fixes https://github.com/pytorch/pytorch/issues/121583
Pull Request resolved: https://github.com/pytorch/pytorch/pull/121731
Approved by: https://github.com/albanD

(cherry picked from commit 5498804ec2ac9aa62ba3bbf20149118142567d9b)

Co-authored-by: Nikita Shulga <nikita.shulga@gmail.com>
2024-04-04 16:04:46 -04:00
2a82d31f78 fix breaking changes for ONNX Runtime Training (#123271)
Fixes breaking changes for ONNX Runtime Training.

PR https://github.com/pytorch/pytorch/pull/121102 introduced an incompatibility with ORT training because of a change in a parameter type. Creating a PR to add back the previous parameter types; verified that it works with ORT training.

Error with current scenario:

```
site-packages/onnxruntime/training/ortmodule/torch_cpp_extensions/cpu/aten_op_executor/aten_op_executor.cc:60:40: error: invalid conversion from ‘const DLManagedTensor*’ to ‘DLManagedTensor*’ [-fpermissive]
at::Tensor tensor = at::fromDLPack(dlpack);

site-packages/torch/include/ATen/DLConvertor.h:15:46: note:   initializing argument 1 of ‘at::Tensor at::fromDLPack(DLManagedTensor*)’
TORCH_API Tensor fromDLPack(DLManagedTensor* src);
```
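
For reference, a round-trip through the Python-side DLPack API that sits on top of `at::fromDLPack` (a sketch, independent of the ORT extension code above):

```python
import torch
from torch.utils.dlpack import from_dlpack, to_dlpack

t = torch.arange(6.0)
capsule = to_dlpack(t)     # wraps t as a DLManagedTensor capsule
t2 = from_dlpack(capsule)  # zero-copy view over the same memory
assert t2.data_ptr() == t.data_ptr()
```
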
Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122000
Approved by: https://github.com/malfet

(cherry picked from commit 765c3fc138fda4b49978403ee1394040221957cc)

Co-authored-by: Abhishek Jindal <abjindal@microsoft.com>
2024-04-03 18:52:06 -04:00
4bb5cb51e6 Fix swap_tensors path in _apply for modules that inherit from RNNBase (RNN, GRU, LSTM) (#122800) (#123116)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122800
Approved by: https://github.com/albanD

(cherry picked from commit cc12668053ad847ff4a430e99eeebf99c136f3cd)
2024-04-02 16:16:37 -07:00
ef38d0572e nn.Module: use swap_tensors for Tensor subclasses (#122755) (#123106)
This fixes a bug when casting a module that has DTensor parameters. The old behavior swapped the `.data` field of the Tensor subclass, which is incorrect when dealing with tensor subclasses that may have multiple child tensors.

This uses the `swap_tensors` method to swap all of the tensors, not just the `.data` field (a minimal sketch follows).
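
A minimal sketch of the `swap_tensors` semantics relied on here, using plain dense tensors for illustration:

```python
import torch

a = torch.ones(2)
b = torch.zeros(3)

# Unlike `a.data = b`, which only repoints the .data field (wrong for
# subclasses such as DTensor that hold multiple inner tensors),
# swap_tensors exchanges the full payload of the two Python objects.
torch.utils.swap_tensors(a, b)
print(a.shape, b.shape)  # torch.Size([3]) torch.Size([2])
```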

Test plan:

```
pytest test/distributed/_tensor/test_api.py -k 'test_distribute_module_casting'
python test/distributed/fsdp/test_wrap.py -k test_auto_wrap_smoke_test_cuda_init_mode1_cpu_offload0_use_device_id_True
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/122755
Approved by: https://github.com/wanchaol, https://github.com/mikaylagawarecki

(cherry picked from commit e6ee8322d767ab241ce1651e7c178f539e8e3199)

Co-authored-by: Tristan Rice <rice@fn.lc>
2024-04-02 16:16:16 -07:00
5a53185e65 Remove cuda dependencies when building AOTriton (#122982) (#123179)
Downloading CUDA sometimes fails and breaks the build process, but
AOTriton does not need these packages. This commit comments out the
related downloading scripts.
2024-04-02 19:08:22 -04:00
bc9e23abb5 Fix performance regression and memory storage handling of Flash Attention on ROCM (#122857) (#122967)
This PR fixes the two major issues that were discovered after the initial merge of PR #121561:
1. The Flash Attention support added by that PR has severe performance regressions on regular shapes (power-of-two head dimensions and sequence lengths) compared with PR #115981. Its performance is worse than the math backend and it only has numerical-stability advantages. This PR fixes this problem.
2. There is a flaw in the memory storage handling of PR #121561: it does not copy the gradients back to the designated output tensor. This PR removes the deprecated `TensorStorageSanitizer` class, which is unnecessary due to the more flexible backward kernel shipped by PR #121561.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/122857
Approved by: https://github.com/jeffdaily, https://github.com/drisspg
2024-04-02 18:53:19 -04:00
8194fae625 Pin protobuf to 3.20.2 on macOS (#123197)
The newer protobuf 5.26.0, released on March 13th, is causing failures in `test_hparams_*` from `test_tensorboard`, in which the stringified metadata is wrong when escaping double quotes. For example, 3bc2bb6781. This looks like an upstream issue from Tensorboard, which doesn't work with this brand-new protobuf version: https://github.com/tensorflow/tensorboard/blob/master/tensorboard/pip_package/requirements.txt#L29

The package has been pinned on Docker (https://github.com/pytorch/pytorch/blob/main/.ci/docker/requirements-ci.txt#L155), so it should be pinned on macOS too. We want to eventually have just one requirements.txt file.

Fixes https://github.com/pytorch/pytorch/issues/122008
Fixes https://github.com/pytorch/pytorch/issues/121927
Fixes https://github.com/pytorch/pytorch/issues/121946
Pull Request resolved: https://github.com/pytorch/pytorch/pull/121918
Approved by: https://github.com/kit1980
2024-04-02 15:08:09 -04:00
12acd4c9b3 [Cherrypick][DeviceMesh] Cache and reuse sliced result (#122975) (#123073)
Fixes #118849

Add a map for parent_to_child_mappings in _mesh_resources so we can cache and reuse submesh slicing results, avoiding recreating the submesh and the underlying sub-pg repeatedly, which could lead to funky behaviors.

We will follow up with reusing pg from the parent_mesh during submesh creation.
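
A sketch of the behavior this enables, assuming a 2x4 mesh launched across 8 ranks (names and sizes illustrative):

```python
from torch.distributed.device_mesh import init_device_mesh

mesh_2d = init_device_mesh("cuda", (2, 4), mesh_dim_names=("dp", "tp"))

# With parent_to_child_mappings in place, repeated slicing returns the
# cached submesh (and its process group) instead of recreating them.
tp_mesh_a = mesh_2d["tp"]
tp_mesh_b = mesh_2d["tp"]
assert tp_mesh_a is tp_mesh_b
```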

Pull Request resolved: https://github.com/pytorch/pytorch/pull/122975
Approved by: https://github.com/wanchaol
2024-04-02 15:05:07 -04:00
857797d148 [CherryPick] Inductor cpp wrapper: fix dtype of ShapeAsConstantBuffer (#122297) (#123064)
For `at::scalar_tensor` the default dtype will be `float` ([link to scalar_tensor](0d8e960f74/aten/src/ATen/native/TensorFactories.cpp (L856)), [link to default dtype](0d8e960f74/c10/core/TensorOptions.h (L551))) if we don't set the `dtype` value. However, the input scalar value is not necessarily a `float` value. With `torch::tensor(x)`, the dtype of the tensor will be decided according to the dtype of the scalar.
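
The Python-visible analogue of the dtype behavior described above: `torch.tensor` infers the dtype from the scalar rather than defaulting to float:

```python
import torch

print(torch.tensor(3).dtype)     # torch.int64
print(torch.tensor(3.0).dtype)   # torch.float32
print(torch.tensor(True).dtype)  # torch.bool
```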

Pull Request resolved: https://github.com/pytorch/pytorch/pull/122297
Approved by: https://github.com/jgong5, https://github.com/desertfire
2024-04-02 15:03:25 -04:00
233dfe4d6a Proper view support for jagged layout NestedTensor (#122854)
* Proper view support for jagged layout NestedTensor (#113279)

This PR:
* Introduces an ATen op for creating true jagged views from a dense values buffer
    * `_nested_view_from_jagged(values, offsets, lengths, ragged_idx, dummy)`
    * This op is implemented on the Python side using torch.library so we can return a subclass instance
    * `jagged_from_list()` now uses this instead of the old autograd.Function `NestedViewFromBuffer`
    * The latter op is used for non-contiguous JTs returned via `torch.nested.narrow()`
    * `dummy` is an awful hack to ensure that `NestedTensor.__torch_dispatch__()` is invoked for our view
* Introduces an ATen op for accessing the `values` component of an NT via a view
    * `_nested_get_values(nt)`
* **Removes** the autograd.Functions `ViewNestedFromBuffer` and `ViewBufferFromNested` in favor of `nested_from_values_offsets()` / `nested_from_values_offsets_lengths()` and `nt.values()`, respectively.
* Changes test code to prefer `as_nested_tensor()` over `jagged_from_list()` directly
    * Similarly, avoid `buffer_from_jagged()`, preferring `values()`
* Depends on general subclass view fake-ification on the PT2 side (handled solely in previous PRs in the stack)

With these changes, the semantics of jagged layout NTs are such that they are considered a true view of the underlying `values` buffer. This means views of jagged NTs are views of the underlying buffer as well, simplifying some handling.
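
A short sketch of the view semantics through the public API (the `_nested_view_from_jagged` op above is the internal building block):

```python
import torch

nt = torch.nested.nested_tensor(
    [torch.randn(2, 8), torch.randn(5, 8)], layout=torch.jagged
)
vals = nt.values()  # with this PR, a true view of the underlying buffer
assert vals.data_ptr() == nt.values().data_ptr()
```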

Differential Revision: [D54269922](https://our.internmc.facebook.com/intern/diff/D54269922)
Co-authored-by: voznesenskym <voznesenskym@gmail.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113279
Approved by: https://github.com/ezyang

(cherry picked from commit cd6bfc7965fc5ae20720bae0994e332e56f819c0)

* Update executorch.txt

* Update executorch.txt

* Fix linter error

---------

Co-authored-by: Joel Schlosser <jbschlosser@meta.com>
Co-authored-by: Guang Yang <42389959+guangy10@users.noreply.github.com>
2024-04-02 11:46:53 -07:00
e22b534b10 Upgrade submodule oneDNN to v3.3.6 for release/2.3 (#122164) (#122930)
As the title. Including issue fixes for aarch64:
- https://github.com/oneapi-src/oneDNN/pull/1831
- https://github.com/oneapi-src/oneDNN/pull/1834

---

## Validation results
(on Intel CPU + Linux)
**Static quantization with Inductor on CV models**

Quant method | Geomean throughput ratio (v3.3.6/baseline)
-- | --
ptq | 0.982937
ptq (cpp wrapper) | 0.978384
qat | 0.978828

**Torchbench cpu userbenchmark with Inductor**

Items | Perf Geomean Ratio (v3.3.6/baseline)
-- | --
eager_throughtput_bf16_infer | 1.00x
eager_throughtput_fp32_infer | 1.00x
jit_llga_throughtput_amp_bf16 | 1.01x
jit_llga_throughtput_fp32 | 1.00x
eager_throughtput_fx_int8 | 1.00x
eager_throughtput_bf16_train | 1.46x
eager_throughtput_fp32_train | 1.41x

**Dynamo benchmarks tests**
Precision | Shape | Wrapper | Thread | Eager old/new GEOMEAN | Inductor old/new GEOMEAN
-- | -- | -- | -- | -- | --
Float32 | Static | Default | Multiple | 1.003836812 | 1.003425
Float32 | Static | Default | Single | 1.000181451 | 0.999611
Float32 | Dynamic | Default | Multiple | 1.003980183 | 1.006563
Float32 | Dynamic | Default | Single | 1.000076939 | 0.999969
AMP | Static | Default | Multiple | 0.996824772 | 0.998715
AMP | Static | Default | Single | 0.996402574 | 1.001483
AMP | Dynamic | Default | Multiple | 0.994919866 | 1.000467
AMP | Dynamic | Default | Single | 0.9962054 | 1.000767

(on Aarch64)
https://github.com/pytorch/pytorch/pull/122164#issuecomment-2007912919

---

Pull Request resolved: https://github.com/pytorch/pytorch/pull/122164
Approved by: https://github.com/snadampal, https://github.com/malfet, https://github.com/atalman
2024-04-02 12:57:11 -04:00
8602990e3f [CherryPick] Back out "[DeviceMesh] Add support for nD slicing (#119752)" (#121763) (#122495)
Summary:
Original commit changeset: e52b8809c8d8

Original Phabricator Diff: D54778906

We have to back out this diff.
D54778906 seems to be causing test failures for APF, blocking trunk health and hence the release. Just starting to look at the issue. T182209248

Test Plan: Sandcastle

Reviewed By: satgera

Differential Revision: D54825114

Pull Request resolved: https://github.com/pytorch/pytorch/pull/121763
Approved by: https://github.com/osalpekar

(cherry picked from commit e99fa0042cd3dcd2eded24585d59c53f2da9d9f5)
2024-03-28 14:25:08 -07:00
685cc955df [ROCm] Update triton rocm branch to release/2.3.x (#122493)
* Update triton rocm branch to release/2.3.x

* Remove ROCM_TRITION_VERSION and update to 2.3.0

* Remove unnecessary ROCm conditionalisation

* Skip failing UT
2024-03-28 14:18:37 -07:00
b1c2430fbd remove torchao dependency (#122635)
* remove torchao dependency (#122524)

Test Plan:
CI

```
buck2 run mode/dev-nosan mode/inplace executorch/examples/models/llama2:export_llama -- -c ~/llama/ultra_new_checkpoint.pt -p ~/llama/params.json -kv -E 8,8 -d fp32 --pt2e_quantize "xnnpack_dynamic" -2
```

```
buck run //executorch/backends/xnnpack/test:test_xnnpack_ops -- executorch.backends.xnnpack.test.ops.linear.TestLinear.test_qd8_fp32_per_token_weight_per_channel_group_int4
```

Differential Revision: D55263008

Pull Request resolved: https://github.com/pytorch/pytorch/pull/122524
Approved by: https://github.com/jerryzh168

(cherry picked from commit c677221798d8ce87c97aac1bd9ae34af0767c383)

* Update executorch.txt

* Update _decomposed.py

* Update executorch.txt

* Update executorch.txt

* Update executorch.txt

* Update executorch.txt

* Update executorch.txt

---------

Co-authored-by: Guang Yang <guangyang@meta.com>
Co-authored-by: Guang Yang <42389959+guangy10@users.noreply.github.com>
2024-03-28 12:25:12 -07:00
3002eb2556 [export] hack skip index_put_ in dce (#122683) (#122721)
Summary: Ideally we should do what's in the TODO. Just doing this for now to unblock llama capture

Test Plan: capturing llama and using pt2e to quantize it

Differential Revision: D55354487

Pull Request resolved: https://github.com/pytorch/pytorch/pull/122683
Approved by: https://github.com/kimishpatel

(cherry picked from commit 41d24df08f72e059c4eebdde4315e63a9918406f)

Co-authored-by: Jacob Szwejbka <jakeszwe@meta.com>
2024-03-27 21:29:53 -07:00
e1a846d6b8 Fix auto_functionalize (#121990) (#122654)
Differential Revision: D54964130

When we re-export, the auto_functionalize HOP will be in the graph. Therefore, we need to implement a proper functionalization rule for it. Since the content inside auto_functionalize is guaranteed to be functional, it is OK to just fall through it.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/121990
Approved by: https://github.com/ydwu4, https://github.com/zou3519

(cherry picked from commit 0d845f7b0781f091452a5fd31de14e1c2117f3d4)

Co-authored-by: Tugsbayasgalan (Tugsuu) Manlaibaatar <tmanlaibaatar@meta.com>
2024-03-27 21:28:56 -07:00
4a9a8c606d [export] add pass to remove auto functionalized hop (#122246) (#122655)
Summary: Adds a pass that blindly removes the functionalize hop without considering whether it's safe. Useful for ExecuTorch today and other use cases that have additional logic that can reason about when this pass is safe to use

Test Plan: added unit test

Differential Revision: D55103867

Pull Request resolved: https://github.com/pytorch/pytorch/pull/122246
Approved by: https://github.com/angelayi

(cherry picked from commit c84f81b395fff969bbd2f784efad8ab1a8aa52de)

Co-authored-by: Jacob Szwejbka <jakeszwe@meta.com>
2024-03-27 21:05:15 -07:00
d3201f48b1 Revert "Revert "CI: Specify libc and libstdcxx versions in conda environments"" (#122523)
This reverts commit 74832f12fae2e1bc51bf1f9971dcd12c90a971f5.
2024-03-22 17:41:42 -04:00
74832f12fa Revert "CI: Specify libc and libstdcxx versions in conda environments" (#122497)
This reverts commit b4f90aae1b375bfe06d3c4a099240e06f93c81c4.
2024-03-22 11:27:50 -04:00
02cdb400d7 Use temporary name for triton package, fix lint (#122438)
* Use temporary name for triton package

* Fix lint
2024-03-21 17:30:38 -04:00
37257774c6 Triton wheel build using 2.3.x branch (#122403)
* Triton build 2.3.x

* Revert "[Release Only] Build triton using pinned version rather branch (#121765)"

This reverts commit d69c4219127e2cf5d9637b0daacc0a24e65f8133.

* Triton wheel change

* release
2024-03-21 12:52:21 -04:00
c4e5434423 necessary change to make torch2.3 work with triton2.2 (#122139) 2024-03-21 08:24:53 -04:00
b4f90aae1b CI: Specify libc and libstdcxx versions in conda environments (#121929)
Without this we get mismatches between the GLIBC and GLIBCXX ABI used
by conda packages vs pytorch.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/121556
Approved by: https://github.com/isuruf, https://github.com/malfet

(cherry picked from commit 7a53dedb07ed72b85d1e083ce38c43c7810fc5f1)

Co-authored-by: Peter Bell <peterbell10@live.co.uk>
2024-03-14 17:56:46 -04:00
94d6463255 [RELEASE ONLY CHANGES] Increase timeout for linux binary jobs, fix workflow lint (#121851)
* [release only] Increase timeout for linux binary build jobs by 30min

* fix lint
2024-03-13 19:50:57 -04:00
6a89a753b1 [RELEASE ONLY CHANGES] Apply release only changes Release 2.3 (#121813)
* [Release only changes] Release only changes #2

* common+lint
2024-03-13 11:03:48 -04:00
d69c421912 [Release Only] Build triton using pinned version rather branch (#121765) 2024-03-12 19:05:23 -04:00
6725db07ae [RELEASE ONLY CHANGES] Apply release only changes Release 2.3 (#121726)
* Apply release only changes

* temp changes

* tweak

* fix

* Revert "tweak"

This reverts commit 38edcac21448829ac114c73423c84614628e2598.
2024-03-12 18:14:35 -04:00
146 changed files with 2321 additions and 1309 deletions

View File

@@ -1 +1 @@
0a22a91d04c2b4a029a69a198eac390089c3e891
c8ad905211f45e162102823149f0d7f2cfaa4418

View File

@@ -1 +1 @@
a9bc1a36470eefafe0e2ab2503b8698f1e89e7e3
958fccea74da58e7e0595ab88ae6cd3f6795a173

View File

@@ -57,8 +57,21 @@ fi
# Uncomment the below when resolved to track the latest conda update
# as_jenkins conda update -y -n base conda
if [[ $(uname -m) == "aarch64" ]]; then
export SYSROOT_DEP="sysroot_linux-aarch64=2.17"
else
export SYSROOT_DEP="sysroot_linux-64=2.17"
fi
# Install correct Python version
as_jenkins conda create -n py_$ANACONDA_PYTHON_VERSION -y python="$ANACONDA_PYTHON_VERSION"
# Also ensure sysroot is using a modern GLIBC to match system compilers
as_jenkins conda create -n py_$ANACONDA_PYTHON_VERSION -y\
python="$ANACONDA_PYTHON_VERSION" \
${SYSROOT_DEP}
# libstdcxx from conda default channels are too old, we need GLIBCXX_3.4.30
# which is provided in libstdcxx 12 and up.
conda_install libstdcxx-ng=12.3.0 -c conda-forge
# Install PyTorch conda deps, as per https://github.com/pytorch/pytorch README
if [[ $(uname -m) == "aarch64" ]]; then
@@ -110,14 +123,5 @@ fi
pip_install -r /opt/conda/requirements-docs.txt
fi
# HACK HACK HACK
# gcc-9 for ubuntu-18.04 from http://ppa.launchpad.net/ubuntu-toolchain-r/test/ubuntu
# Pulls llibstdc++6 13.1.0-8ubuntu1~18.04 which is too new for conda
# So remove libstdc++6.so.3.29 installed by https://anaconda.org/anaconda/libstdcxx-ng/files?version=11.2.0
# Same is true for gcc-12 from Ubuntu-22.04
if grep -e [12][82].04.[623] /etc/issue >/dev/null; then
rm /opt/conda/envs/py_$ANACONDA_PYTHON_VERSION/lib/libstdc++.so.6
fi
popd
fi

View File

@@ -1 +1 @@
3.0.0
2.3.1

View File

@@ -255,6 +255,11 @@ else
# or building non-XLA tests.
if [[ "$BUILD_ENVIRONMENT" != *rocm* &&
"$BUILD_ENVIRONMENT" != *xla* ]]; then
if [[ "$BUILD_ENVIRONMENT" != *py3.8* ]]; then
# Install numpy-2.0 release candidate for builds
# Which should be backward compatible with Numpy-1.X
python -mpip install --pre numpy==2.0.0rc1
fi
WERROR=1 python setup.py bdist_wheel
else
python setup.py bdist_wheel

View File

@@ -178,7 +178,7 @@ function install_torchrec_and_fbgemm() {
function clone_pytorch_xla() {
if [[ ! -d ./xla ]]; then
git clone --recursive --quiet https://github.com/pytorch/xla.git
git clone --recursive -b r2.3 https://github.com/pytorch/xla.git
pushd xla
# pin the xla hash so that we don't get broken by changes to xla
git checkout "$(cat ../.github/ci_commit_pins/xla.txt)"

View File

@@ -26,8 +26,8 @@ echo "error: python_doc_push_script.sh: version (arg2) not specified"
fi
# Argument 1: Where to copy the built documentation to
# (pytorch.github.io/$install_path)
install_path="${1:-${DOCS_INSTALL_PATH:-docs/${DOCS_VERSION}}}"
# (pytorch_docs/$install_path)
install_path="${1:-${DOCS_INSTALL_PATH:-${DOCS_VERSION}}}"
if [ -z "$install_path" ]; then
echo "error: python_doc_push_script.sh: install_path (arg1) not specified"
exit 1
@@ -68,8 +68,8 @@ build_docs () {
}
git clone https://github.com/pytorch/pytorch.github.io -b "$branch" --depth 1
pushd pytorch.github.io
git clone https://github.com/pytorch/docs pytorch_docs -b "$branch" --depth 1
pushd pytorch_docs
export LC_ALL=C
export PATH=/opt/conda/bin:$PATH

View File

@@ -78,7 +78,7 @@ TRITON_VERSION=$(cat $PYTORCH_ROOT/.ci/docker/triton_version.txt)
if [[ "$PACKAGE_TYPE" =~ .*wheel.* && -n "${PYTORCH_EXTRA_INSTALL_REQUIREMENTS:-}" ]]; then
# Only linux Python < 3.12 are supported wheels for triton
TRITON_CONSTRAINT="platform_system == 'Linux' and platform_machine == 'x86_64' and python_version < '3.12'"
TRITON_REQUIREMENT="triton==${TRITON_VERSION}; ${TRITON_CONSTRAINT}"
TRITON_REQUIREMENT="pytorch-triton==${TRITON_VERSION}; ${TRITON_CONSTRAINT}"
if [[ -n "$PYTORCH_BUILD_VERSION" && "$PYTORCH_BUILD_VERSION" =~ .*dev.* ]]; then
TRITON_SHORTHASH=$(cut -c1-10 $PYTORCH_ROOT/.ci/docker/ci_commit_pins/triton.txt)
TRITON_REQUIREMENT="pytorch-triton==${TRITON_VERSION}+${TRITON_SHORTHASH}; ${TRITON_CONSTRAINT}"

View File

@@ -1 +1 @@
707a632930bfde19ffb361cdf5c31a7682af4e67
r2.3

View File

@@ -27,3 +27,6 @@ rockset==1.0.3
z3-solver==4.12.2.0
tensorboard==2.13.0
optree==0.9.1
# NB: test_hparams_* from test_tensorboard is failing with protobuf 5.26.0 in
# which the stringify metadata is wrong when escaping double quote
protobuf==3.20.2

View File

@@ -10,9 +10,6 @@ from typing import Optional
SCRIPT_DIR = Path(__file__).parent
REPO_DIR = SCRIPT_DIR.parent.parent
# TODO: Remove me once Triton version is again in sync for vanilla and ROCm
ROCM_TRITION_VERSION = "2.1.0"
def read_triton_pin(rocm_hash: bool = False) -> str:
triton_file = "triton.txt" if not rocm_hash else "triton-rocm.txt"
@@ -99,7 +96,14 @@ def build_triton(
triton_repo = "https://github.com/openai/triton"
triton_pkg_name = "pytorch-triton"
check_call(["git", "clone", triton_repo], cwd=tmpdir)
check_call(["git", "checkout", commit_hash], cwd=triton_basedir)
if release:
ver, rev, patch = version.split(".")
check_call(
["git", "checkout", f"release/{ver}.{rev}.x"], cwd=triton_basedir
)
else:
check_call(["git", "checkout", commit_hash], cwd=triton_basedir)
if build_conda:
with open(triton_basedir / "meta.yaml", "w") as meta:
print(
@@ -109,7 +113,7 @@ def build_triton(
print("source:\n path: .\n", file=meta)
print(
"build:\n string: py{{py}}\n number: 1\n script: cd python; "
"python setup.py install --record=record.txt\n",
"python setup.py install --single-version-externally-managed --record=record.txt\n",
" script_env:\n - MAX_JOBS\n",
file=meta,
)
@@ -155,7 +159,7 @@ def build_triton(
patch_init_py(
triton_pythondir / "triton" / "__init__.py",
version=f"{version}",
expected_version=ROCM_TRITION_VERSION if build_rocm else None,
expected_version=None,
)
if build_rocm:
@@ -164,7 +168,7 @@ def build_triton(
triton_pythondir / "setup.py",
name=triton_pkg_name,
version=f"{version}",
expected_version=ROCM_TRITION_VERSION,
expected_version=None,
)
check_call("scripts/amd/setup_rocm_libs.sh", cwd=triton_basedir, shell=True)
print("ROCm libraries setup for triton installation...")

View File

@@ -62,9 +62,9 @@ SUPPORTED_PERIODICAL_MODES: Dict[str, Callable[[Optional[str]], bool]] = {
}
# The link to the published list of disabled jobs
DISABLED_JOBS_URL = "https://ossci-metrics.s3.amazonaws.com/disabled-jobs.json"
DISABLED_JOBS_URL = "https://ossci-metrics.s3.amazonaws.com/disabled-jobs.json?versionId=qO7aEr.Og33PtLXfNq0j0yj.bbLC7SzR"
# and unstable jobs
UNSTABLE_JOBS_URL = "https://ossci-metrics.s3.amazonaws.com/unstable-jobs.json"
UNSTABLE_JOBS_URL = "https://ossci-metrics.s3.amazonaws.com/unstable-jobs.json?versionId=7NhgpqKTtGXVUnL1C79KboTW_5qQx8y5"
# Some constants used to handle disabled and unstable jobs
JOB_NAME_SEP = "/"

View File

@@ -21,6 +21,8 @@ DOCKER_IMAGE_TYPES = ["runtime", "devel"]
def generate_docker_matrix() -> Dict[str, List[Dict[str, str]]]:
ret: List[Dict[str, str]] = []
# CUDA amd64 Docker images are available as both runtime and devel while
# CPU arm64 image is only available as runtime.
for cuda, version in generate_binary_build_matrix.CUDA_ARCHES_FULL_VERSION.items():
for image in DOCKER_IMAGE_TYPES:
ret.append(
@@ -31,9 +33,19 @@ def generate_docker_matrix() -> Dict[str, List[Dict[str, str]]]:
cuda
],
"image_type": image,
"platform": "linux/arm64,linux/amd64",
"platform": "linux/amd64",
}
)
ret.append(
{
"cuda": "cpu",
"cuda_full_version": "",
"cudnn_version": "",
"image_type": "runtime",
"platform": "linux/arm64",
}
)
return {"include": ret}

View File

@@ -8,7 +8,7 @@
# NOTE: If testing pytorch/builder changes you can change this variable to change what pytorch/builder reference
# the binary builds will check out
{%- set builder_repo = "pytorch/builder" -%}
{%- set builder_branch = "main" -%}
{%- set builder_branch = "release/2.3" -%}
{%- macro concurrency(build_environment) -%}
concurrency:

View File

@@ -100,8 +100,8 @@ jobs:
with:
name: !{{ config["build_name"] }}
path: "${{ runner.temp }}/artifacts/"
!{{ common.checkout(deep_clone=False, directory="pytorch") }}
!{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch) }}
!{{ common.checkout(deep_clone=False, directory="pytorch", checkout_pr_head=False) }}
!{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch, checkout_pr_head=False) }}
- name: ROCm set GPU_FLAG
run: |
echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"

View File

@@ -81,8 +81,8 @@ jobs:
elif [ -d "/Applications/Xcode_13.3.1.app" ]; then
echo "DEVELOPER_DIR=/Applications/Xcode_13.3.1.app/Contents/Developer" >> "${GITHUB_ENV}"
fi
!{{ common.checkout(deep_clone=False, directory="pytorch") }}
!{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch) }}
!{{ common.checkout(deep_clone=False, directory="pytorch", checkout_pr_head=False) }}
!{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch, checkout_pr_head=False) }}
- name: Install sccache (only for non-forked PRs, and pushes to trunk)
uses: nick-fields/retry@v2.8.2
if: ${{ github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository }}

View File

@@ -65,8 +65,8 @@ jobs:
steps:
!{{ common.setup_ec2_windows() }}
!{{ set_runner_specific_vars() }}
!{{ common.checkout(deep_clone=False, directory="pytorch") }}
!{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch) }}
!{{ common.checkout(deep_clone=False, directory="pytorch", checkout_pr_head=False) }}
!{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch, checkout_pr_head=False) }}
- name: Populate binary env
shell: bash
run: |
@@ -105,8 +105,8 @@ jobs:
with:
name: !{{ config["build_name"] }}
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
!{{ common.checkout(deep_clone=False, directory="pytorch") }}
!{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch) }}
!{{ common.checkout(deep_clone=False, directory="pytorch", checkout_pr_head=False) }}
!{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch, checkout_pr_head=False) }}
- name: Populate binary env
shell: bash
run: |

View File

@@ -37,7 +37,7 @@ jobs:
keep-going: ${{ steps.filter.outputs.keep-going }}
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with:
fetch-depth: 1
submodules: false
@@ -59,25 +59,25 @@ jobs:
runs-on: ${{ matrix.runner }}
steps:
- name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.3
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
# [see note: pytorch repo ref]
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
- name: Setup Linux
uses: ./.github/actions/setup-linux
- name: Calculate docker image
id: calculate-docker-image
uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.3
with:
docker-image-name: ${{ inputs.docker-image-name }}
- name: Pull docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.3
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
@@ -141,5 +141,5 @@ jobs:
if: always()
- name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@main
uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.3
if: always()

View File

@@ -37,7 +37,7 @@ jobs:
keep-going: ${{ steps.filter.outputs.keep-going }}
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with:
fetch-depth: 1
submodules: false
@@ -59,25 +59,25 @@ jobs:
runs-on: ${{ matrix.runner }}
steps:
- name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.3
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
# [see note: pytorch repo ref]
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
- name: Setup Linux
uses: ./.github/actions/setup-linux
- name: Calculate docker image
id: calculate-docker-image
uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.3
with:
docker-image-name: ${{ inputs.docker-image-name }}
- name: Pull docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.3
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
@@ -186,5 +186,5 @@ jobs:
if: always()
- name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@main
uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.3
if: always()

View File

@@ -42,7 +42,7 @@ jobs:
reenabled-issues: ${{ steps.filter.outputs.reenabled-issues }}
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with:
fetch-depth: 1
submodules: false
@@ -64,30 +64,30 @@ jobs:
runs-on: ${{ matrix.runner }}
steps:
- name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.3
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
# [see note: pytorch repo ref]
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
- name: Setup Linux
uses: ./.github/actions/setup-linux
- name: Calculate docker image
id: calculate-docker-image
uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.3
with:
docker-image-name: ${{ inputs.docker-image-name }}
- name: Pull docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.3
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
- name: Install nvidia driver, nvidia-docker runtime, set GPU_FLAG
uses: pytorch/test-infra/.github/actions/setup-nvidia@main
uses: pytorch/test-infra/.github/actions/setup-nvidia@release/2.3
if: ${{ inputs.cuda-version != 'cpu' }}
- name: Output disk space left
@@ -196,5 +196,5 @@ jobs:
file-suffix: bazel-${{ github.job }}_${{ steps.get-job-id.outputs.job-id }}
- name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@main
uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.3
if: always()

View File

@@ -78,7 +78,7 @@ on:
jobs:
build:
runs-on: ${{ inputs.runs_on }}
timeout-minutes: 180
timeout-minutes: 210
env:
PYTORCH_ROOT: ${{ inputs.PYTORCH_ROOT }}
BUILDER_ROOT: ${{ inputs.BUILDER_ROOT }}
@@ -139,13 +139,13 @@ jobs:
run: env
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: pytorch/test-infra/.github/actions/setup-ssh@main
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.3
continue-on-error: true
with:
github-secret: ${{ secrets.github-token }}
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with:
no-sudo: ${{ inputs.build_environment == 'linux-aarch64-binary-manywheel' }}
@@ -173,7 +173,6 @@ jobs:
- name: Checkout PyTorch to pytorch dir
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -187,7 +186,7 @@ jobs:
- name: Checkout pytorch/builder to builder dir
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@@ -213,7 +212,7 @@ jobs:
- name: Pull Docker image
if: ${{ steps.filter.outputs.is-test-matrix-empty == 'False' }}
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.3
with:
docker-image: ${{ inputs.DOCKER_IMAGE }}
@@ -270,7 +269,7 @@ jobs:
- name: Teardown Linux
if: always()
uses: pytorch/test-infra/.github/actions/teardown-linux@main
uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.3
- name: Chown workspace
if: always()

View File

@@ -127,14 +127,14 @@ jobs:
} >> "${GITHUB_ENV} }}"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: pytorch/test-infra/.github/actions/setup-ssh@main
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.3
continue-on-error: true
with:
github-secret: ${{ secrets.github-token }}
# Setup the environment
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with:
no-sudo: ${{ inputs.build_environment == 'linux-aarch64-binary-manywheel' }}
@@ -155,7 +155,6 @@ jobs:
- name: Checkout PyTorch to pytorch dir
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
@ -168,7 +167,7 @@ jobs:
- name: Checkout pytorch/builder to builder dir
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -199,12 +198,12 @@ jobs:
path: "${{ runner.temp }}/artifacts/"
- name: Install nvidia driver, nvidia-docker runtime, set GPU_FLAG
uses: pytorch/test-infra/.github/actions/setup-nvidia@main
uses: pytorch/test-infra/.github/actions/setup-nvidia@release/2.3
if: ${{ inputs.GPU_ARCH_TYPE == 'cuda' && steps.filter.outputs.is-test-matrix-empty == 'False' }}
- name: Pull Docker image
if: ${{ steps.filter.outputs.is-test-matrix-empty == 'False' }}
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.3
with:
docker-image: ${{ inputs.DOCKER_IMAGE }}
@ -214,7 +213,7 @@ jobs:
- name: Teardown Linux
if: always()
uses: pytorch/test-infra/.github/actions/teardown-linux@main
uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.3
- name: Chown workspace
if: always()


@ -95,7 +95,7 @@ jobs:
SHA1: ${{ github.event.pull_request.head.sha || github.sha }}
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with:
no-sudo: true


@ -23,7 +23,7 @@ jobs:
keep-going: ${{ steps.filter.outputs.keep-going }}
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with:
fetch-depth: 1
submodules: false
@ -44,7 +44,7 @@ jobs:
runs-on: ${{ matrix.runner }}
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
- name: Set up JDK 8
uses: actions/setup-java@v3
@ -53,7 +53,7 @@ jobs:
distribution: 'temurin'
- name: Setup miniconda
uses: pytorch/test-infra/.github/actions/setup-miniconda@main
uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.3
with:
python-version: 3.8
environment-file: .github/requirements/conda-env-${{ runner.os }}-${{ runner.arch }}


@ -66,7 +66,7 @@ jobs:
name: build-docs-${{ matrix.docs_type }}-${{ inputs.push }}
steps:
- name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.3
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
instructions: |
@ -77,19 +77,19 @@ jobs:
# [see note: pytorch repo ref]
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
- name: Setup Linux
uses: ./.github/actions/setup-linux
- name: Calculate docker image
id: calculate-docker-image
uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.3
with:
docker-image-name: ${{ inputs.docker-image }}
- name: Pull docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.3
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
@ -163,7 +163,7 @@ jobs:
retention-days: 14
s3-bucket: doc-previews
if-no-files-found: error
path: pytorch.github.io/docs/main/
path: pytorch_docs/main/
s3-prefix: pytorch/pytorch/${{ github.event.pull_request.number }}
- name: Upload C++ Docs Preview
@ -187,5 +187,5 @@ jobs:
s3-prefix: pytorch/pytorch/${{ github.event.pull_request.number }}/functorchdocs
- name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@main
uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.3
if: always()


@ -46,7 +46,7 @@ jobs:
keep-going: ${{ steps.filter.outputs.keep-going }}
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with:
fetch-depth: 1
submodules: false
@ -80,7 +80,7 @@ jobs:
steps:
# [see note: pytorch repo ref]
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
- name: Populate CI build options
shell: bash
@ -102,7 +102,7 @@ jobs:
brew install libtool
- name: Setup miniconda for iOS
uses: pytorch/test-infra/.github/actions/setup-miniconda@main
uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.3
with:
python-version: "3.9"
environment-file: .github/requirements/conda-env-iOS.txt


@ -73,7 +73,7 @@ jobs:
test-matrix: ${{ steps.filter.outputs.test-matrix }}
steps:
- name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.3
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
@ -82,14 +82,14 @@ jobs:
# checkout because when we run this action we don't *have* a local
# checkout. In other cases you should prefer a local checkout.
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
- name: Setup Linux
uses: ./.github/actions/setup-linux
- name: Calculate docker image
id: calculate-docker-image
uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.3
with:
docker-image-name: ${{ inputs.docker-image-name }}
@ -103,7 +103,7 @@ jobs:
echo "docker pull ghcr.io/pytorch/ci-image:${tag/:/-}"
- name: Pull docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.3
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
@ -209,5 +209,5 @@ jobs:
path: sccache-stats-*.json
- name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@main
uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.3
if: always()


@ -57,7 +57,7 @@ jobs:
timeout-minutes: ${{ matrix.mem_leak_check == 'mem_leak_check' && 600 || inputs.timeout-minutes }}
steps:
- name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.3
if: ${{ !contains(matrix.runner, 'gcp.a100') }}
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
@ -66,14 +66,14 @@ jobs:
docker exec -it $(docker container ps --format '{{.ID}}') bash
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
- name: Setup Linux
uses: ./.github/actions/setup-linux
- name: Calculate docker image
id: calculate-docker-image
uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.3
with:
docker-image-name: ${{ inputs.docker-image }}
@ -87,13 +87,13 @@ jobs:
echo "docker pull ghcr.io/pytorch/ci-image:${tag/:/-}"
- name: Pull docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.3
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
- name: Install nvidia driver, nvidia-docker runtime, set GPU_FLAG
id: install-nvidia-driver
uses: pytorch/test-infra/.github/actions/setup-nvidia@main
uses: pytorch/test-infra/.github/actions/setup-nvidia@release/2.3
if: contains(inputs.build-environment, 'cuda') && !contains(matrix.config, 'nogpu')
- name: Lock NVIDIA A100 40GB Frequency
@ -307,7 +307,7 @@ jobs:
path: ./**/core.[1-9]*
- name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@main
uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.3
if: always()
# NB: We are currently having an intermittent GPU-related issue on G5 runners with


@ -71,11 +71,11 @@ jobs:
test-matrix: ${{ steps.filter.outputs.test-matrix }}
steps:
- name: Clean up disk space before running MacOS workflow
uses: pytorch/test-infra/.github/actions/check-disk-space@main
uses: pytorch/test-infra/.github/actions/check-disk-space@release/2.3
# [see note: pytorch repo ref]
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
- name: Set xcode version
env:
@ -87,7 +87,7 @@ jobs:
- name: Setup miniconda
if: inputs.environment-file == ''
uses: pytorch/test-infra/.github/actions/setup-miniconda@main
uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.3
with:
python-version: ${{ inputs.python-version }}
environment-file: .github/requirements/conda-env-${{ runner.os }}-${{ runner.arch }}
@ -97,7 +97,7 @@ jobs:
# environment even though the arch is x86-64
- name: Setup miniconda using the provided environment file
if: inputs.environment-file != ''
uses: pytorch/test-infra/.github/actions/setup-miniconda@main
uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.3
with:
python-version: ${{ inputs.python-version }}
environment-file: ${{ inputs.environment-file }}
@ -207,4 +207,4 @@ jobs:
- name: Clean up disk space
if: always()
continue-on-error: true
uses: pytorch/test-infra/.github/actions/check-disk-space@main
uses: pytorch/test-infra/.github/actions/check-disk-space@release/2.3


@ -40,7 +40,7 @@ jobs:
reenabled-issues: ${{ steps.filter.outputs.reenabled-issues }}
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with:
submodules: false
@ -81,7 +81,7 @@ jobs:
use-gha: true
- name: Setup miniconda
uses: pytorch/test-infra/.github/actions/setup-miniconda@main
uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.3
with:
python-version: ${{ inputs.python-version }}
environment-file: .github/requirements/conda-env-${{ runner.os }}-${{ runner.arch }}
@ -159,4 +159,4 @@ jobs:
- name: Clean up disk space
if: always()
continue-on-error: true
uses: pytorch/test-infra/.github/actions/check-disk-space@main
uses: pytorch/test-infra/.github/actions/check-disk-space@release/2.3


@ -79,11 +79,11 @@ jobs:
done
- name: Clean up disk space before running MacOS workflow
uses: pytorch/test-infra/.github/actions/check-disk-space@main
uses: pytorch/test-infra/.github/actions/check-disk-space@release/2.3
# [see note: pytorch repo ref]
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
- name: Download build artifacts
uses: ./.github/actions/download-build-artifacts
@ -98,7 +98,7 @@ jobs:
use-gha: true
- name: Setup miniconda
uses: pytorch/test-infra/.github/actions/setup-miniconda@main
uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.3
with:
python-version: ${{ inputs.python-version }}
environment-file: .github/requirements/conda-env-${{ runner.os }}-${{ runner.arch }}
@ -227,4 +227,4 @@ jobs:
- name: Clean up disk space
if: always()
continue-on-error: true
uses: pytorch/test-infra/.github/actions/check-disk-space@main
uses: pytorch/test-infra/.github/actions/check-disk-space@release/2.3


@ -58,7 +58,7 @@ jobs:
steps:
# [see note: pytorch repo ref]
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with:
no-sudo: true
@ -80,12 +80,12 @@ jobs:
- name: Calculate docker image
id: calculate-docker-image
uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.3
with:
docker-image-name: ${{ inputs.docker-image }}
- name: Pull docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.3
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}


@ -23,7 +23,7 @@ jobs:
keep-going: ${{ steps.filter.outputs.keep-going }}
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with:
fetch-depth: 1
submodules: false
@ -54,10 +54,10 @@ jobs:
SUPPORT_ABI: '${{ matrix.support_abi }}'
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
- name: Setup miniconda
uses: pytorch/test-infra/.github/actions/setup-miniconda@main
uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.3
with:
python-version: 3.8
environment-file: .github/requirements/conda-env-${{ runner.os }}-${{ runner.arch }}.txt


@ -60,10 +60,10 @@ jobs:
git config --global core.fsmonitor false
- name: Clean up leftover processes on non-ephemeral Windows runner
uses: pytorch/test-infra/.github/actions/cleanup-runner@main
uses: pytorch/test-infra/.github/actions/cleanup-runner@release/2.3
- name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.3
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
instructions: |
@ -78,7 +78,7 @@ jobs:
# [see note: pytorch repo ref]
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with:
no-sudo: true


@ -54,10 +54,10 @@ jobs:
git config --global core.fsmonitor false
- name: Clean up leftover processes on non-ephemeral Windows runner
uses: pytorch/test-infra/.github/actions/cleanup-runner@main
uses: pytorch/test-infra/.github/actions/cleanup-runner@release/2.3
- name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.3
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
instructions: |
@ -73,7 +73,7 @@ jobs:
# [see note: pytorch repo ref]
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with:
no-sudo: true


@ -54,7 +54,7 @@ jobs:
steps:
# [see note: pytorch repo ref]
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
- name: Setup XPU
uses: ./.github/actions/setup-xpu
@ -72,12 +72,12 @@ jobs:
- name: Calculate docker image
id: calculate-docker-image
uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.3
with:
docker-image-name: ${{ inputs.docker-image }}
- name: Pull docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.3
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}


@ -3,7 +3,7 @@ name: Build Triton wheels
on:
push:
branches:
- main
- release/2.3
tags:
# NOTE: Binary build pipelines should only get triggered on release candidate builds
# Release candidate tags look like: v1.11.0-rc1
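A minimal shell sketch of the naming scheme the note describes (hedged: v2.3.0-rc1 below is a hypothetical tag, and the workflow's actual tag filter sits outside this hunk's context):
git tag --list 'v[0-9]*-rc[0-9]*'  # lists release-candidate tags such as v2.3.0-rc1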
@ -47,12 +47,12 @@ jobs:
BUILD_DEVICE: ${{ matrix.device }}
steps:
- name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.3
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with:
submodules: false
@ -60,7 +60,7 @@ jobs:
uses: ./.github/actions/setup-linux
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.3
with:
docker-image: ${{ env.DOCKER_IMAGE }}
@ -125,7 +125,7 @@ jobs:
path: ${{ runner.temp }}/artifacts/*
- name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@main
uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.3
if: always()
upload-wheel:
@ -203,12 +203,12 @@ jobs:
PY_VERS: ${{ matrix.py_vers }}
steps:
- name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.3
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with:
submodules: false
@ -216,7 +216,7 @@ jobs:
uses: ./.github/actions/setup-linux
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.3
with:
docker-image: ${{ env.DOCKER_IMAGE }}
@ -252,7 +252,7 @@ jobs:
path: ${{ runner.temp }}/artifacts/*
- name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@main
uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.3
if: always()
upload-conda:


@ -31,7 +31,7 @@ jobs:
runs-on: linux.20_04.4x
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with:
submodules: false
fetch-depth: 1


@ -11,7 +11,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
- name: Run close_nonexistent_disable_issues.py
env:


@ -74,21 +74,21 @@ jobs:
# [see note: pytorch repo ref]
# deep clone (fetch-depth 0) required for git merge-base
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
- name: Setup Linux
uses: ./.github/actions/setup-linux
- name: Build docker image
id: build-docker-image
uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.3
with:
docker-image-name: ${{ matrix.docker-image-name }}
always-rebuild: true
push: true
- name: Pull docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.3
with:
docker-image: ${{ steps.build-docker-image.outputs.docker-image }}
@ -120,5 +120,5 @@ jobs:
if: always()
- name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@main
uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.3
if: always()


@ -7,6 +7,7 @@ on:
- Dockerfile
- docker.Makefile
- .github/workflows/docker-release.yml
- .github/scripts/generate_docker_release_matrix.py
push:
branches:
- nightly
@ -40,7 +41,7 @@ jobs:
matrix: ${{ steps.generate-matrix.outputs.matrix }}
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with:
fetch-depth: 1
submodules: true
@ -68,7 +69,7 @@ jobs:
CUDNN_VERSION: ${{ matrix.cudnn_version }}
steps:
- name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.3
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
# [see note: pytorch repo ref]
@ -129,17 +130,27 @@ jobs:
if: ${{ github.event.ref == 'refs/heads/nightly' && matrix.image_type == 'runtime' }}
run: |
PYTORCH_DOCKER_TAG="${PYTORCH_VERSION}-cuda${CUDA_VERSION_SHORT}-cudnn${CUDNN_VERSION}-runtime"
CUDA_SUFFIX="-cu${CUDA_VERSION}"
if [[ ${CUDA_VERSION_SHORT} == "cpu" ]]; then
PYTORCH_DOCKER_TAG="${PYTORCH_VERSION}-runtime"
CUDA_SUFFIX=""
fi
PYTORCH_NIGHTLY_COMMIT=$(docker run ghcr.io/pytorch/pytorch-nightly:"${PYTORCH_DOCKER_TAG}" \
python -c 'import torch; print(torch.version.git_version[:7],end="")')
docker tag ghcr.io/pytorch/pytorch-nightly:"${PYTORCH_DOCKER_TAG}" \
ghcr.io/pytorch/pytorch-nightly:"${PYTORCH_NIGHTLY_COMMIT}-cu${CUDA_VERSION}"
docker push ghcr.io/pytorch/pytorch-nightly:"${PYTORCH_NIGHTLY_COMMIT}-cu${CUDA_VERSION}"
ghcr.io/pytorch/pytorch-nightly:"${PYTORCH_NIGHTLY_COMMIT}${CUDA_SUFFIX}"
docker push ghcr.io/pytorch/pytorch-nightly:"${PYTORCH_NIGHTLY_COMMIT}${CUDA_SUFFIX}"
# Please note, here we need to pin a specific version of CUDA for the latest label
if [[ ${CUDA_VERSION_SHORT} == "12.1" ]]; then
docker tag ghcr.io/pytorch/pytorch-nightly:"${PYTORCH_NIGHTLY_COMMIT}${CUDA_SUFFIX}" \
ghcr.io/pytorch/pytorch-nightly:latest
docker push ghcr.io/pytorch/pytorch-nightly:latest
fi
docker tag ghcr.io/pytorch/pytorch-nightly:"${PYTORCH_NIGHTLY_COMMIT}-cu${CUDA_VERSION}" \
ghcr.io/pytorch/pytorch-nightly:latest
docker push ghcr.io/pytorch/pytorch-nightly:latest
- name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@main
uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.3
if: always()
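The tagging hunk above replaces the hard-coded -cu${CUDA_VERSION} suffix with a CUDA_SUFFIX that is empty for CPU images, and pins the latest tag to the CUDA 12.1 build instead of whichever image pushed last. A minimal shell sketch of how the new tags resolve (hedged: abc1234 and 12.1.1 are hypothetical values, not output from this workflow):

PYTORCH_NIGHTLY_COMMIT=abc1234            # short git hash of the nightly build
CUDA_VERSION_SHORT=12.1 CUDA_VERSION=12.1.1
CUDA_SUFFIX="-cu${CUDA_VERSION}"          # set to "" when CUDA_VERSION_SHORT == "cpu"
echo "ghcr.io/pytorch/pytorch-nightly:${PYTORCH_NIGHTLY_COMMIT}${CUDA_SUFFIX}"
# -> ghcr.io/pytorch/pytorch-nightly:abc1234-cu12.1.1
# the "latest" tag is pushed only when CUDA_VERSION_SHORT == "12.1"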


@ -48,7 +48,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.3
DESIRED_PYTHON: "3.8"
runs_on: linux.arm64.2xlarge
ALPINE_IMAGE: "arm64v8/alpine"
@ -69,7 +69,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.3
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cpu-aarch64
build_environment: linux-aarch64-binary-manywheel
@ -91,7 +91,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.3
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cpu-aarch64
secrets:
@ -111,7 +111,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.3
DESIRED_PYTHON: "3.9"
runs_on: linux.arm64.2xlarge
ALPINE_IMAGE: "arm64v8/alpine"
@ -132,7 +132,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.3
DESIRED_PYTHON: "3.9"
build_name: manywheel-py3_9-cpu-aarch64
build_environment: linux-aarch64-binary-manywheel
@ -154,7 +154,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.3
DESIRED_PYTHON: "3.9"
build_name: manywheel-py3_9-cpu-aarch64
secrets:
@ -174,7 +174,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.3
DESIRED_PYTHON: "3.10"
runs_on: linux.arm64.2xlarge
ALPINE_IMAGE: "arm64v8/alpine"
@ -195,7 +195,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.3
DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-cpu-aarch64
build_environment: linux-aarch64-binary-manywheel
@ -217,7 +217,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.3
DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-cpu-aarch64
secrets:
@ -237,7 +237,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.3
DESIRED_PYTHON: "3.11"
runs_on: linux.arm64.2xlarge
ALPINE_IMAGE: "arm64v8/alpine"
@ -258,7 +258,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.3
DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-cpu-aarch64
build_environment: linux-aarch64-binary-manywheel
@ -280,7 +280,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.3
DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-cpu-aarch64
secrets:
@ -300,7 +300,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.3
DESIRED_PYTHON: "3.12"
runs_on: linux.arm64.2xlarge
ALPINE_IMAGE: "arm64v8/alpine"
@ -321,7 +321,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.3
DESIRED_PYTHON: "3.12"
build_name: manywheel-py3_12-cpu-aarch64
build_environment: linux-aarch64-binary-manywheel
@ -343,7 +343,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.3
DESIRED_PYTHON: "3.12"
build_name: manywheel-py3_12-cpu-aarch64
secrets:


@ -48,7 +48,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-main
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.8"
build_name: conda-py3_8-cpu
build_environment: linux-binary-conda
@ -66,7 +66,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-main
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.8"
build_name: conda-py3_8-cpu
build_environment: linux-binary-conda
@ -87,7 +87,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-main
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.8"
build_name: conda-py3_8-cpu
secrets:
@ -108,7 +108,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.8"
runs_on: linux.24xlarge
build_name: conda-py3_8-cuda11_8
@ -128,7 +128,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.8"
build_name: conda-py3_8-cuda11_8
build_environment: linux-binary-conda
@ -150,7 +150,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.8"
build_name: conda-py3_8-cuda11_8
secrets:
@ -171,7 +171,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.8"
runs_on: linux.24xlarge
build_name: conda-py3_8-cuda12_1
@ -191,7 +191,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.8"
build_name: conda-py3_8-cuda12_1
build_environment: linux-binary-conda
@ -213,7 +213,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.8"
build_name: conda-py3_8-cuda12_1
secrets:
@ -233,7 +233,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-main
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.9"
build_name: conda-py3_9-cpu
build_environment: linux-binary-conda
@ -251,7 +251,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-main
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.9"
build_name: conda-py3_9-cpu
build_environment: linux-binary-conda
@ -272,7 +272,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-main
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.9"
build_name: conda-py3_9-cpu
secrets:
@ -293,7 +293,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.9"
runs_on: linux.24xlarge
build_name: conda-py3_9-cuda11_8
@ -313,7 +313,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.9"
build_name: conda-py3_9-cuda11_8
build_environment: linux-binary-conda
@ -335,7 +335,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.9"
build_name: conda-py3_9-cuda11_8
secrets:
@ -356,7 +356,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.9"
runs_on: linux.24xlarge
build_name: conda-py3_9-cuda12_1
@ -376,7 +376,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.9"
build_name: conda-py3_9-cuda12_1
build_environment: linux-binary-conda
@ -398,7 +398,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.9"
build_name: conda-py3_9-cuda12_1
secrets:
@ -418,7 +418,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-main
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.10"
build_name: conda-py3_10-cpu
build_environment: linux-binary-conda
@ -436,7 +436,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-main
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.10"
build_name: conda-py3_10-cpu
build_environment: linux-binary-conda
@ -457,7 +457,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-main
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.10"
build_name: conda-py3_10-cpu
secrets:
@ -478,7 +478,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.10"
runs_on: linux.24xlarge
build_name: conda-py3_10-cuda11_8
@ -498,7 +498,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.10"
build_name: conda-py3_10-cuda11_8
build_environment: linux-binary-conda
@ -520,7 +520,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.10"
build_name: conda-py3_10-cuda11_8
secrets:
@ -541,7 +541,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.10"
runs_on: linux.24xlarge
build_name: conda-py3_10-cuda12_1
@ -561,7 +561,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.10"
build_name: conda-py3_10-cuda12_1
build_environment: linux-binary-conda
@ -583,7 +583,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.10"
build_name: conda-py3_10-cuda12_1
secrets:
@ -603,7 +603,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-main
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.11"
build_name: conda-py3_11-cpu
build_environment: linux-binary-conda
@ -621,7 +621,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-main
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.11"
build_name: conda-py3_11-cpu
build_environment: linux-binary-conda
@ -642,7 +642,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-main
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.11"
build_name: conda-py3_11-cpu
secrets:
@ -663,7 +663,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.11"
runs_on: linux.24xlarge
build_name: conda-py3_11-cuda11_8
@ -683,7 +683,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.11"
build_name: conda-py3_11-cuda11_8
build_environment: linux-binary-conda
@ -705,7 +705,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.11"
build_name: conda-py3_11-cuda11_8
secrets:
@ -726,7 +726,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.11"
runs_on: linux.24xlarge
build_name: conda-py3_11-cuda12_1
@ -746,7 +746,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.11"
build_name: conda-py3_11-cuda12_1
build_environment: linux-binary-conda
@ -768,7 +768,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.11"
build_name: conda-py3_11-cuda12_1
secrets:
@ -788,7 +788,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-main
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.12"
build_name: conda-py3_12-cpu
build_environment: linux-binary-conda
@ -806,7 +806,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-main
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.12"
build_name: conda-py3_12-cpu
build_environment: linux-binary-conda
@ -827,7 +827,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-main
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.12"
build_name: conda-py3_12-cpu
secrets:
@ -848,7 +848,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.12"
runs_on: linux.24xlarge
build_name: conda-py3_12-cuda11_8
@ -868,7 +868,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.12"
build_name: conda-py3_12-cuda11_8
build_environment: linux-binary-conda
@ -890,7 +890,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.12"
build_name: conda-py3_12-cuda11_8
secrets:
@ -911,7 +911,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.12"
runs_on: linux.24xlarge
build_name: conda-py3_12-cuda12_1
@ -931,7 +931,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.12"
build_name: conda-py3_12-cuda12_1
build_environment: linux-binary-conda
@ -953,7 +953,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.12"
build_name: conda-py3_12-cuda12_1
secrets:


@ -43,7 +43,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-main
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cpu-shared-with-deps-cxx11-abi
@ -62,7 +62,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-main
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cpu-shared-with-deps-cxx11-abi


@ -48,7 +48,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-main
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cpu-shared-with-deps-cxx11-abi
@ -67,7 +67,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-main
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cpu-shared-with-deps-cxx11-abi
@ -89,7 +89,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-main
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cpu-shared-with-deps-cxx11-abi
@ -111,7 +111,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8-main
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda11_8-shared-with-deps-cxx11-abi
@ -131,7 +131,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8-main
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda11_8-shared-with-deps-cxx11-abi
@ -154,7 +154,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8-main
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda11_8-shared-with-deps-cxx11-abi
@ -176,7 +176,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1-main
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda12_1-shared-with-deps-cxx11-abi
@ -196,7 +196,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1-main
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda12_1-shared-with-deps-cxx11-abi
@ -219,7 +219,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1-main
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda12_1-shared-with-deps-cxx11-abi
@ -241,7 +241,7 @@ jobs:
DESIRED_CUDA: rocm5.7
GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm5.7-main
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm5.7-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-rocm5_7-shared-with-deps-cxx11-abi
@ -263,7 +263,7 @@ jobs:
GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm5.7-main
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm5.7-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
steps:
@ -277,7 +277,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -289,7 +288,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -305,7 +304,7 @@ jobs:
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
with:
docker-image: pytorch/libtorch-cxx11-builder:rocm5.7-main
docker-image: pytorch/libtorch-cxx11-builder:rocm5.7-2.3
- name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm
@ -325,7 +324,7 @@ jobs:
DESIRED_CUDA: rocm5.7
GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm5.7-main
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm5.7-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-rocm5_7-shared-with-deps-cxx11-abi
@ -347,7 +346,7 @@ jobs:
DESIRED_CUDA: rocm6.0
GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm6.0-main
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm6.0-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-rocm6_0-shared-with-deps-cxx11-abi
@ -369,7 +368,7 @@ jobs:
GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm6.0-main
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm6.0-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
steps:
@ -383,7 +382,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -395,7 +393,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -411,7 +409,7 @@ jobs:
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
with:
docker-image: pytorch/libtorch-cxx11-builder:rocm6.0-main
docker-image: pytorch/libtorch-cxx11-builder:rocm6.0-2.3
- name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm
@ -431,7 +429,7 @@ jobs:
DESIRED_CUDA: rocm6.0
GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm6.0-main
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm6.0-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-rocm6_0-shared-with-deps-cxx11-abi


@ -43,7 +43,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cpu-shared-with-deps-pre-cxx11
@ -62,7 +62,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cpu-shared-with-deps-pre-cxx11


@ -48,7 +48,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cpu-shared-with-deps-pre-cxx11
@ -67,7 +67,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cpu-shared-with-deps-pre-cxx11
@ -89,7 +89,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cpu-shared-with-deps-pre-cxx11
@ -111,7 +111,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda11_8-shared-with-deps-pre-cxx11
@ -131,7 +131,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda11_8-shared-with-deps-pre-cxx11
@ -154,7 +154,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda11_8-shared-with-deps-pre-cxx11
@ -176,7 +176,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda12_1-shared-with-deps-pre-cxx11
@ -196,7 +196,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda12_1-shared-with-deps-pre-cxx11
@ -219,7 +219,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda12_1-shared-with-deps-pre-cxx11
@ -241,7 +241,7 @@ jobs:
DESIRED_CUDA: rocm5.7
GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-main
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-rocm5_7-shared-with-deps-pre-cxx11
@ -263,7 +263,7 @@ jobs:
GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-main
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
steps:
@ -277,7 +277,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -289,7 +288,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -305,7 +304,7 @@ jobs:
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
with:
docker-image: pytorch/manylinux-builder:rocm5.7-main
docker-image: pytorch/manylinux-builder:rocm5.7-2.3
- name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm
@ -325,7 +324,7 @@ jobs:
DESIRED_CUDA: rocm5.7
GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-main
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-rocm5_7-shared-with-deps-pre-cxx11
@ -347,7 +346,7 @@ jobs:
DESIRED_CUDA: rocm6.0
GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-main
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-rocm6_0-shared-with-deps-pre-cxx11
@ -369,7 +368,7 @@ jobs:
GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-main
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
steps:
@ -383,7 +382,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -395,7 +393,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -411,7 +409,7 @@ jobs:
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
with:
docker-image: pytorch/manylinux-builder:rocm6.0-main
docker-image: pytorch/manylinux-builder:rocm6.0-2.3
- name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm
@ -431,7 +429,7 @@ jobs:
DESIRED_CUDA: rocm6.0
GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-main
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-rocm6_0-shared-with-deps-pre-cxx11


@ -44,7 +44,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cuda11_8
build_environment: linux-binary-manywheel
@ -64,7 +64,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cuda11_8
build_environment: linux-binary-manywheel
@ -84,7 +84,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cuda12_1
build_environment: linux-binary-manywheel
@ -104,7 +104,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cuda12_1
build_environment: linux-binary-manywheel


@ -48,7 +48,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cpu
build_environment: linux-binary-manywheel
@ -66,7 +66,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cpu
build_environment: linux-binary-manywheel
@ -87,7 +87,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cpu
secrets:
@ -107,7 +107,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu-cxx11-abi
GPU_ARCH_TYPE: cpu-cxx11-abi
DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-main
DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-2.3
DESIRED_DEVTOOLSET: cxx11-abi
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cpu-cxx11-abi
@ -126,7 +126,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu-cxx11-abi
GPU_ARCH_TYPE: cpu-cxx11-abi
DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-main
DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-2.3
DESIRED_DEVTOOLSET: cxx11-abi
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cpu-cxx11-abi
@ -148,7 +148,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu-cxx11-abi
GPU_ARCH_TYPE: cpu-cxx11-abi
DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-main
DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-2.3
DESIRED_DEVTOOLSET: cxx11-abi
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cpu-cxx11-abi
@ -170,7 +170,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cuda11_8
build_environment: linux-binary-manywheel
@ -190,7 +190,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cuda11_8
build_environment: linux-binary-manywheel
@ -212,7 +212,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cuda11_8
secrets:
@ -233,7 +233,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cuda12_1
build_environment: linux-binary-manywheel
@ -253,7 +253,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cuda12_1
build_environment: linux-binary-manywheel
@ -275,7 +275,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cuda12_1
secrets:
@ -296,7 +296,7 @@ jobs:
DESIRED_CUDA: rocm5.7
GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-main
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-2.3
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-rocm5_7
build_environment: linux-binary-manywheel
@ -317,7 +317,7 @@ jobs:
GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-main
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-2.3
DESIRED_PYTHON: "3.8"
steps:
- name: Setup ROCm
@ -330,7 +330,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -342,7 +341,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -358,7 +357,7 @@ jobs:
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
with:
docker-image: pytorch/manylinux-builder:rocm5.7-main
docker-image: pytorch/manylinux-builder:rocm5.7-2.3
- name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm
@ -378,7 +377,7 @@ jobs:
DESIRED_CUDA: rocm5.7
GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-main
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-2.3
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-rocm5_7
secrets:
@ -399,7 +398,7 @@ jobs:
DESIRED_CUDA: rocm6.0
GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-main
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-2.3
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-rocm6_0
build_environment: linux-binary-manywheel
@ -420,7 +419,7 @@ jobs:
GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-main
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-2.3
DESIRED_PYTHON: "3.8"
steps:
- name: Setup ROCm
@ -433,7 +432,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -445,7 +443,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -461,7 +459,7 @@ jobs:
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
with:
docker-image: pytorch/manylinux-builder:rocm6.0-main
docker-image: pytorch/manylinux-builder:rocm6.0-2.3
- name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm
@ -481,7 +479,7 @@ jobs:
DESIRED_CUDA: rocm6.0
GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-main
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-2.3
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-rocm6_0
secrets:
@ -501,7 +499,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.9"
build_name: manywheel-py3_9-cpu
build_environment: linux-binary-manywheel
@ -519,7 +517,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.9"
build_name: manywheel-py3_9-cpu
build_environment: linux-binary-manywheel
@ -540,7 +538,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.9"
build_name: manywheel-py3_9-cpu
secrets:
@ -560,7 +558,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu-cxx11-abi
GPU_ARCH_TYPE: cpu-cxx11-abi
DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-main
DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-2.3
DESIRED_DEVTOOLSET: cxx11-abi
DESIRED_PYTHON: "3.9"
build_name: manywheel-py3_9-cpu-cxx11-abi
@ -579,7 +577,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu-cxx11-abi
GPU_ARCH_TYPE: cpu-cxx11-abi
DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-main
DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-2.3
DESIRED_DEVTOOLSET: cxx11-abi
DESIRED_PYTHON: "3.9"
build_name: manywheel-py3_9-cpu-cxx11-abi
@ -601,7 +599,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu-cxx11-abi
GPU_ARCH_TYPE: cpu-cxx11-abi
DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-main
DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-2.3
DESIRED_DEVTOOLSET: cxx11-abi
DESIRED_PYTHON: "3.9"
build_name: manywheel-py3_9-cpu-cxx11-abi
@ -623,7 +621,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.9"
build_name: manywheel-py3_9-cuda11_8
build_environment: linux-binary-manywheel
@ -643,7 +641,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.9"
build_name: manywheel-py3_9-cuda11_8
build_environment: linux-binary-manywheel
@ -665,7 +663,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.9"
build_name: manywheel-py3_9-cuda11_8
secrets:
@ -686,7 +684,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.9"
build_name: manywheel-py3_9-cuda12_1
build_environment: linux-binary-manywheel
@ -706,7 +704,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.9"
build_name: manywheel-py3_9-cuda12_1
build_environment: linux-binary-manywheel
@ -728,7 +726,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.9"
build_name: manywheel-py3_9-cuda12_1
secrets:
@ -749,7 +747,7 @@ jobs:
DESIRED_CUDA: rocm5.7
GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-main
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-2.3
DESIRED_PYTHON: "3.9"
build_name: manywheel-py3_9-rocm5_7
build_environment: linux-binary-manywheel
@ -770,7 +768,7 @@ jobs:
GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-main
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-2.3
DESIRED_PYTHON: "3.9"
steps:
- name: Setup ROCm
@ -783,7 +781,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -795,7 +792,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -811,7 +808,7 @@ jobs:
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
with:
docker-image: pytorch/manylinux-builder:rocm5.7-main
docker-image: pytorch/manylinux-builder:rocm5.7-2.3
- name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm
@ -831,7 +828,7 @@ jobs:
DESIRED_CUDA: rocm5.7
GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-main
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-2.3
DESIRED_PYTHON: "3.9"
build_name: manywheel-py3_9-rocm5_7
secrets:
@ -852,7 +849,7 @@ jobs:
DESIRED_CUDA: rocm6.0
GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-main
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-2.3
DESIRED_PYTHON: "3.9"
build_name: manywheel-py3_9-rocm6_0
build_environment: linux-binary-manywheel
@ -873,7 +870,7 @@ jobs:
GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-main
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-2.3
DESIRED_PYTHON: "3.9"
steps:
- name: Setup ROCm
@ -886,7 +883,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -898,7 +894,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -914,7 +910,7 @@ jobs:
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
with:
docker-image: pytorch/manylinux-builder:rocm6.0-main
docker-image: pytorch/manylinux-builder:rocm6.0-2.3
- name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm
@ -934,7 +930,7 @@ jobs:
DESIRED_CUDA: rocm6.0
GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-main
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-2.3
DESIRED_PYTHON: "3.9"
build_name: manywheel-py3_9-rocm6_0
secrets:
@ -954,7 +950,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-cpu
build_environment: linux-binary-manywheel
@ -972,7 +968,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-cpu
build_environment: linux-binary-manywheel
@ -993,7 +989,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-cpu
secrets:
@ -1013,7 +1009,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu-cxx11-abi
GPU_ARCH_TYPE: cpu-cxx11-abi
DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-main
DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-2.3
DESIRED_DEVTOOLSET: cxx11-abi
DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-cpu-cxx11-abi
@ -1032,7 +1028,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu-cxx11-abi
GPU_ARCH_TYPE: cpu-cxx11-abi
DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-main
DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-2.3
DESIRED_DEVTOOLSET: cxx11-abi
DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-cpu-cxx11-abi
@ -1054,7 +1050,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu-cxx11-abi
GPU_ARCH_TYPE: cpu-cxx11-abi
DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-main
DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-2.3
DESIRED_DEVTOOLSET: cxx11-abi
DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-cpu-cxx11-abi
@ -1076,7 +1072,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-cuda11_8
build_environment: linux-binary-manywheel
@ -1096,7 +1092,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-cuda11_8
build_environment: linux-binary-manywheel
@ -1118,7 +1114,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-cuda11_8
secrets:
@ -1139,7 +1135,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-cuda12_1
build_environment: linux-binary-manywheel
@ -1159,7 +1155,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-cuda12_1
build_environment: linux-binary-manywheel
@ -1181,7 +1177,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-cuda12_1
secrets:
@ -1202,7 +1198,7 @@ jobs:
DESIRED_CUDA: rocm5.7
GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-main
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-2.3
DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-rocm5_7
build_environment: linux-binary-manywheel
@ -1223,7 +1219,7 @@ jobs:
GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-main
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-2.3
DESIRED_PYTHON: "3.10"
steps:
- name: Setup ROCm
@ -1236,7 +1232,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1248,7 +1243,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -1264,7 +1259,7 @@ jobs:
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
with:
docker-image: pytorch/manylinux-builder:rocm5.7-main
docker-image: pytorch/manylinux-builder:rocm5.7-2.3
- name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm
@ -1284,7 +1279,7 @@ jobs:
DESIRED_CUDA: rocm5.7
GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-main
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-2.3
DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-rocm5_7
secrets:
@ -1305,7 +1300,7 @@ jobs:
DESIRED_CUDA: rocm6.0
GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-main
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-2.3
DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-rocm6_0
build_environment: linux-binary-manywheel
@ -1326,7 +1321,7 @@ jobs:
GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-main
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-2.3
DESIRED_PYTHON: "3.10"
steps:
- name: Setup ROCm
@ -1339,7 +1334,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1351,7 +1345,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -1367,7 +1361,7 @@ jobs:
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
with:
docker-image: pytorch/manylinux-builder:rocm6.0-main
docker-image: pytorch/manylinux-builder:rocm6.0-2.3
- name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm
@ -1387,7 +1381,7 @@ jobs:
DESIRED_CUDA: rocm6.0
GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-main
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-2.3
DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-rocm6_0
secrets:
@ -1407,7 +1401,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-cpu
build_environment: linux-binary-manywheel
@ -1425,7 +1419,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-cpu
build_environment: linux-binary-manywheel
@ -1446,7 +1440,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-cpu
secrets:
@ -1466,7 +1460,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu-cxx11-abi
GPU_ARCH_TYPE: cpu-cxx11-abi
DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-main
DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-2.3
DESIRED_DEVTOOLSET: cxx11-abi
DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-cpu-cxx11-abi
@ -1485,7 +1479,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu-cxx11-abi
GPU_ARCH_TYPE: cpu-cxx11-abi
DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-main
DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-2.3
DESIRED_DEVTOOLSET: cxx11-abi
DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-cpu-cxx11-abi
@ -1507,7 +1501,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu-cxx11-abi
GPU_ARCH_TYPE: cpu-cxx11-abi
DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-main
DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-2.3
DESIRED_DEVTOOLSET: cxx11-abi
DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-cpu-cxx11-abi
@ -1529,7 +1523,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-cuda11_8
build_environment: linux-binary-manywheel
@ -1549,7 +1543,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-cuda11_8
build_environment: linux-binary-manywheel
@ -1571,7 +1565,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-cuda11_8
secrets:
@ -1592,7 +1586,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-cuda12_1
build_environment: linux-binary-manywheel
@ -1612,7 +1606,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-cuda12_1
build_environment: linux-binary-manywheel
@ -1634,7 +1628,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-cuda12_1
secrets:
@ -1655,7 +1649,7 @@ jobs:
DESIRED_CUDA: rocm5.7
GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-main
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-2.3
DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-rocm5_7
build_environment: linux-binary-manywheel
@ -1676,7 +1670,7 @@ jobs:
GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-main
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-2.3
DESIRED_PYTHON: "3.11"
steps:
- name: Setup ROCm
@ -1689,7 +1683,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1701,7 +1694,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -1717,7 +1710,7 @@ jobs:
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
with:
docker-image: pytorch/manylinux-builder:rocm5.7-main
docker-image: pytorch/manylinux-builder:rocm5.7-2.3
- name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm
@ -1737,7 +1730,7 @@ jobs:
DESIRED_CUDA: rocm5.7
GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-main
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-2.3
DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-rocm5_7
secrets:
@ -1758,7 +1751,7 @@ jobs:
DESIRED_CUDA: rocm6.0
GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-main
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-2.3
DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-rocm6_0
build_environment: linux-binary-manywheel
@ -1779,7 +1772,7 @@ jobs:
GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-main
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-2.3
DESIRED_PYTHON: "3.11"
steps:
- name: Setup ROCm
@ -1792,7 +1785,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1804,7 +1796,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -1820,7 +1812,7 @@ jobs:
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
with:
docker-image: pytorch/manylinux-builder:rocm6.0-main
docker-image: pytorch/manylinux-builder:rocm6.0-2.3
- name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm
@ -1840,7 +1832,7 @@ jobs:
DESIRED_CUDA: rocm6.0
GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-main
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-2.3
DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-rocm6_0
secrets:
@ -1860,7 +1852,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.12"
build_name: manywheel-py3_12-cpu
build_environment: linux-binary-manywheel
@ -1878,7 +1870,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.12"
build_name: manywheel-py3_12-cpu
build_environment: linux-binary-manywheel
@ -1899,7 +1891,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.12"
build_name: manywheel-py3_12-cpu
secrets:
@ -1919,7 +1911,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu-cxx11-abi
GPU_ARCH_TYPE: cpu-cxx11-abi
DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-main
DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-2.3
DESIRED_DEVTOOLSET: cxx11-abi
DESIRED_PYTHON: "3.12"
build_name: manywheel-py3_12-cpu-cxx11-abi
@ -1938,7 +1930,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu-cxx11-abi
GPU_ARCH_TYPE: cpu-cxx11-abi
DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-main
DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-2.3
DESIRED_DEVTOOLSET: cxx11-abi
DESIRED_PYTHON: "3.12"
build_name: manywheel-py3_12-cpu-cxx11-abi
@ -1960,7 +1952,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu-cxx11-abi
GPU_ARCH_TYPE: cpu-cxx11-abi
DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-main
DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-2.3
DESIRED_DEVTOOLSET: cxx11-abi
DESIRED_PYTHON: "3.12"
build_name: manywheel-py3_12-cpu-cxx11-abi
@ -1982,7 +1974,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.12"
build_name: manywheel-py3_12-cuda11_8
build_environment: linux-binary-manywheel
@ -2002,7 +1994,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.12"
build_name: manywheel-py3_12-cuda11_8
build_environment: linux-binary-manywheel
@ -2024,7 +2016,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.12"
build_name: manywheel-py3_12-cuda11_8
secrets:
@ -2045,7 +2037,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.12"
build_name: manywheel-py3_12-cuda12_1
build_environment: linux-binary-manywheel
@ -2065,7 +2057,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.12"
build_name: manywheel-py3_12-cuda12_1
build_environment: linux-binary-manywheel
@ -2087,7 +2079,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.12"
build_name: manywheel-py3_12-cuda12_1
secrets:
@ -2108,7 +2100,7 @@ jobs:
DESIRED_CUDA: rocm5.7
GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-main
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-2.3
DESIRED_PYTHON: "3.12"
build_name: manywheel-py3_12-rocm5_7
build_environment: linux-binary-manywheel
@ -2129,7 +2121,7 @@ jobs:
GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-main
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-2.3
DESIRED_PYTHON: "3.12"
steps:
- name: Setup ROCm
@ -2142,7 +2134,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2154,7 +2145,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -2170,7 +2161,7 @@ jobs:
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
with:
docker-image: pytorch/manylinux-builder:rocm5.7-main
docker-image: pytorch/manylinux-builder:rocm5.7-2.3
- name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm
@ -2190,7 +2181,7 @@ jobs:
DESIRED_CUDA: rocm5.7
GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-main
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-2.3
DESIRED_PYTHON: "3.12"
build_name: manywheel-py3_12-rocm5_7
secrets:
@ -2211,7 +2202,7 @@ jobs:
DESIRED_CUDA: rocm6.0
GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-main
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-2.3
DESIRED_PYTHON: "3.12"
build_name: manywheel-py3_12-rocm6_0
build_environment: linux-binary-manywheel
@ -2232,7 +2223,7 @@ jobs:
GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-main
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-2.3
DESIRED_PYTHON: "3.12"
steps:
- name: Setup ROCm
@ -2245,7 +2236,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2257,7 +2247,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -2273,7 +2263,7 @@ jobs:
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
with:
docker-image: pytorch/manylinux-builder:rocm6.0-main
docker-image: pytorch/manylinux-builder:rocm6.0-2.3
- name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm
@ -2293,7 +2283,7 @@ jobs:
DESIRED_CUDA: rocm6.0
GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-main
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-2.3
DESIRED_PYTHON: "3.12"
build_name: manywheel-py3_12-rocm6_0
secrets:
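
The hunks above pin the generated linux-binary-manywheel jobs to the 2.3 release: every floating `-main` Docker tag becomes a fixed `-2.3` tag, and the `pytorch/builder` checkouts move from `main` to `release/2.3`, so the release binaries keep building against the same images even after the `main` tags move on. A minimal sketch of one pinned build job, assuming the usual shape of these generated workflows (the `uses:` path is illustrative; the `with:` keys are taken from the diff):

```yaml
manywheel-py3_8-cuda11_8-build:
  # Assumed reusable-workflow path; the generated file may name it differently.
  uses: ./.github/workflows/_binary-build-linux.yml
  with:
    DESIRED_CUDA: cu118
    GPU_ARCH_VERSION: 11.8
    GPU_ARCH_TYPE: cuda
    # Pinned release tag; was pytorch/manylinux-builder:cuda11.8-main
    DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
    DESIRED_PYTHON: "3.8"
    build_name: manywheel-py3_8-cuda11_8
    build_environment: linux-binary-manywheel
```

The same substitution is applied to the cpu, cpu-cxx11-abi, cu121, rocm5.7, and rocm6.0 images for every Python version from 3.8 through 3.12.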


@ -77,7 +77,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -89,7 +88,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -141,7 +140,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-main
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.8"
build_name: conda-py3_8-cpu
use_s3: False
@ -195,7 +194,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -207,7 +205,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -259,7 +257,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-main
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.9"
build_name: conda-py3_9-cpu
use_s3: False
@ -313,7 +311,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -325,7 +322,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -377,7 +374,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-main
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.10"
build_name: conda-py3_10-cpu
use_s3: False
@ -431,7 +428,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -443,7 +439,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -495,7 +491,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-main
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.11"
build_name: conda-py3_11-cpu
use_s3: False
@ -549,7 +545,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -561,7 +556,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -613,7 +608,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-main
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.12"
build_name: conda-py3_12-cpu
use_s3: False


@ -81,7 +81,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -93,7 +92,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -145,7 +144,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-main
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cpu-shared-with-deps-cxx11-abi


@ -78,7 +78,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -90,7 +89,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -142,7 +141,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.8"
build_name: wheel-py3_8-cpu
use_s3: False
@ -197,7 +196,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -209,7 +207,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -261,7 +259,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.9"
build_name: wheel-py3_9-cpu
use_s3: False
@ -316,7 +314,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -328,7 +325,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -380,7 +377,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.10"
build_name: wheel-py3_10-cpu
use_s3: False
@ -435,7 +432,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -447,7 +443,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -499,7 +495,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.11"
build_name: wheel-py3_11-cpu
use_s3: False
@ -554,7 +550,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -566,7 +561,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -618,7 +613,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.12"
build_name: wheel-py3_12-cpu
use_s3: False
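
The conda, libtorch, and wheel files above apply the identical pinning pattern to their respective builder images (`pytorch/conda-builder`, `pytorch/libtorch-cxx11-builder`, `pytorch/manylinux-builder`). As a hypothetical sanity check (not part of these generated workflows), a step like the following could confirm that each pinned `-2.3` tag was actually published before the release builds depend on it:

```yaml
- name: Check pinned builder images exist
  run: |
    # Fails fast if any pinned 2.3 tag is missing from the registry.
    for image in \
      pytorch/manylinux-builder:cpu-2.3 \
      pytorch/conda-builder:cpu-2.3 \
      pytorch/libtorch-cxx11-builder:cpu-2.3; do
      docker manifest inspect "$image" > /dev/null
    done
```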


@ -93,7 +93,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -105,7 +104,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -210,7 +209,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -222,7 +220,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -336,7 +334,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -348,7 +345,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -454,7 +451,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -466,7 +462,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -581,7 +577,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -593,7 +588,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -699,7 +694,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -711,7 +705,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -825,7 +819,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -837,7 +830,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -942,7 +935,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -954,7 +946,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -1068,7 +1060,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1080,7 +1071,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -1186,7 +1177,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1198,7 +1188,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -1313,7 +1303,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1325,7 +1314,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -1431,7 +1420,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1443,7 +1431,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -1557,7 +1545,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1569,7 +1556,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -1674,7 +1661,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1686,7 +1672,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -1800,7 +1786,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1812,7 +1797,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -1918,7 +1903,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1930,7 +1914,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -2045,7 +2029,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2057,7 +2040,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -2163,7 +2146,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2175,7 +2157,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -2289,7 +2271,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2301,7 +2282,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -2406,7 +2387,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2418,7 +2398,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -2532,7 +2512,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2544,7 +2523,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -2650,7 +2629,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2662,7 +2640,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -2777,7 +2755,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2789,7 +2766,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -2895,7 +2872,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2907,7 +2883,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -3021,7 +2997,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -3033,7 +3008,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -3138,7 +3113,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -3150,7 +3124,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -3264,7 +3238,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -3276,7 +3249,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -3382,7 +3355,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -3394,7 +3366,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -3509,7 +3481,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -3521,7 +3492,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -3627,7 +3598,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -3639,7 +3609,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
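
This file and the ones after it change only the two checkout steps, in the same way throughout: the explicit `ref:` on the PyTorch checkout is dropped, so the action presumably falls back to checking out the commit that triggered the workflow (as `actions/checkout` does), and the `pytorch/builder` checkout is pinned to the `release/2.3` branch instead of `main`. After the change, each checkout pair looks like this (layout copied from the hunks above):

```yaml
- name: Checkout PyTorch
  uses: malfet/checkout@silent-checkout
  with:
    # No explicit ref: the triggering commit on the release branch is used.
    submodules: recursive
    path: pytorch
    quiet-checkout: true
- name: Checkout pytorch/builder
  uses: malfet/checkout@silent-checkout
  with:
    ref: release/2.3  # pinned; was `main`
    submodules: recursive
    repository: pytorch/builder
    path: builder
```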


@ -90,7 +90,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -102,7 +101,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -211,7 +210,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -223,7 +221,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder


@ -97,7 +97,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -109,7 +108,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -218,7 +217,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -230,7 +228,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -352,7 +350,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -364,7 +361,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -474,7 +471,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -486,7 +482,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -609,7 +605,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -621,7 +616,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -731,7 +726,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -743,7 +737,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder


@ -90,7 +90,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -102,7 +101,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -211,7 +210,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -223,7 +221,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder


@ -97,7 +97,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -109,7 +108,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -218,7 +217,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -230,7 +228,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -352,7 +350,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -364,7 +361,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -474,7 +471,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -486,7 +482,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -609,7 +605,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -621,7 +616,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -731,7 +726,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -743,7 +737,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder

View File

@ -94,7 +94,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -106,7 +105,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -211,7 +210,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -223,7 +221,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -338,7 +336,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -350,7 +347,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -456,7 +453,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -468,7 +464,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -584,7 +580,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -596,7 +591,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -702,7 +697,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -714,7 +708,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -829,7 +823,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -841,7 +834,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -946,7 +939,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -958,7 +950,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -1073,7 +1065,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1085,7 +1076,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -1191,7 +1182,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1203,7 +1193,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -1319,7 +1309,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1331,7 +1320,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -1437,7 +1426,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1449,7 +1437,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -1564,7 +1552,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1576,7 +1563,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -1681,7 +1668,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1693,7 +1679,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -1808,7 +1794,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1820,7 +1805,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -1926,7 +1911,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1938,7 +1922,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -2054,7 +2038,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2066,7 +2049,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -2172,7 +2155,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2184,7 +2166,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -2299,7 +2281,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2311,7 +2292,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -2416,7 +2397,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2428,7 +2408,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -2543,7 +2523,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2555,7 +2534,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -2661,7 +2640,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2673,7 +2651,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -2789,7 +2767,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2801,7 +2778,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -2907,7 +2884,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2919,7 +2895,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -3034,7 +3010,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -3046,7 +3021,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -3151,7 +3126,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -3163,7 +3137,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -3278,7 +3252,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -3290,7 +3263,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -3396,7 +3369,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -3408,7 +3380,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -3524,7 +3496,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -3536,7 +3507,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@ -3642,7 +3613,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -3654,7 +3624,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder

View File

@ -111,7 +111,7 @@ jobs:
name: linux-jammy-cpu-py3.8-gcc11-inductor
uses: ./.github/workflows/_linux-build.yml
with:
build-environment: linux-jammy-py3_8-gcc11-build
build-environment: linux-jammy-py3.8-gcc11-build
docker-image-name: pytorch-linux-jammy-py3.8-gcc11-inductor-benchmarks
test-matrix: |
{ include: [
@ -135,7 +135,7 @@ jobs:
uses: ./.github/workflows/_linux-test.yml
needs: linux-jammy-cpu-py3_8-gcc11-inductor-build
with:
build-environment: linux-jammy-py3_8-gcc11-build
build-environment: linux-jammy-py3.8-gcc11-build
docker-image: ${{ needs.linux-jammy-cpu-py3_8-gcc11-inductor-build.outputs.docker-image }}
test-matrix: ${{ needs.linux-jammy-cpu-py3_8-gcc11-inductor-build.outputs.test-matrix }}
secrets:

View File

@ -15,7 +15,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Run BC Lint Action
uses: pytorch/test-infra/.github/actions/bc-lint@main
uses: pytorch/test-infra/.github/actions/bc-lint@release/2.3
with:
repo: ${{ github.event.pull_request.head.repo.full_name }}
base_sha: ${{ github.event.pull_request.base.sha }}

View File

@ -16,7 +16,7 @@ permissions: read-all
# When any other step fails, it's job will be retried once by retryBot.
jobs:
lintrunner-clang:
uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
uses: pytorch/test-infra/.github/workflows/linux_job.yml@release/2.3
with:
timeout: 120
runner: linux.2xlarge
@ -32,7 +32,7 @@ jobs:
.github/scripts/lintrunner.sh
lintrunner-noclang:
uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
uses: pytorch/test-infra/.github/workflows/linux_job.yml@release/2.3
with:
timeout: 120
runner: linux.2xlarge
@ -47,7 +47,7 @@ jobs:
.github/scripts/lintrunner.sh
quick-checks:
uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
uses: pytorch/test-infra/.github/workflows/linux_job.yml@release/2.3
with:
runner: linux.2xlarge
docker-image: pytorch-linux-focal-linter
@ -88,7 +88,7 @@ jobs:
if: github.event_name == 'pull_request' && !contains(github.event.pull_request.labels.*.name, 'skip-pr-sanity-checks')
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with:
submodules: false
fetch-depth: -1
@ -101,7 +101,7 @@ jobs:
bash .github/scripts/pr-sanity-check.sh
workflow-checks:
uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
uses: pytorch/test-infra/.github/workflows/linux_job.yml@release/2.3
with:
runner: linux.2xlarge
docker-image: pytorch-linux-focal-linter
@ -113,6 +113,7 @@ jobs:
CONDA_ENV=$(conda env list --json | jq -r ".envs | .[-1]")
conda activate "${CONDA_ENV}"
export RELEASE_VERSION_TAG="2.3"
# Regenerate workflows
.github/scripts/generate_ci_workflows.py
@ -137,7 +138,7 @@ jobs:
exit $RC
toc:
uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
uses: pytorch/test-infra/.github/workflows/linux_job.yml@release/2.3
with:
runner: linux.2xlarge
docker-image: pytorch-linux-focal-linter
@ -175,7 +176,7 @@ jobs:
test-tools:
name: Test tools
if: ${{ github.repository == 'pytorch/pytorch' }}
uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
uses: pytorch/test-infra/.github/workflows/linux_job.yml@release/2.3
with:
runner: linux.2xlarge
docker-image: pytorch-linux-focal-linter
@ -196,7 +197,7 @@ jobs:
runs-on: linux.20_04.4x
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with:
submodules: false
fetch-depth: 1
@ -226,15 +227,15 @@ jobs:
# [see note: pytorch repo ref]
# deep clone (fetch-depth 0) required, to allow us to use git log
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with:
submodules: false
fetch-depth: 1
- name: Setup Python 3.5
- name: Setup Python 3.6
if: matrix.test_type == 'older_python_version'
uses: actions/setup-python@v4
with:
python-version: '3.5'
python-version: '3.6'
architecture: x64
check-latest: false
cache: pip

View File

@ -21,7 +21,7 @@ jobs:
environment: upload-stats
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with:
fetch-depth: 1
submodules: false

View File

@ -41,7 +41,7 @@ jobs:
environment: update-commit-hash
steps:
- name: update-vision-commit-hash
uses: pytorch/test-infra/.github/actions/update-commit-hash@main
uses: pytorch/test-infra/.github/actions/update-commit-hash@release/2.3
if: ${{ github.event_name == 'schedule' }}
with:
repo-name: vision
@ -56,7 +56,7 @@ jobs:
environment: update-commit-hash
steps:
- name: update-audio-commit-hash
uses: pytorch/test-infra/.github/actions/update-commit-hash@main
uses: pytorch/test-infra/.github/actions/update-commit-hash@release/2.3
if: ${{ github.event_name == 'schedule' }}
with:
repo-name: audio
@ -71,7 +71,7 @@ jobs:
environment: update-commit-hash
steps:
- name: update-executorch-commit-hash
uses: pytorch/test-infra/.github/actions/update-commit-hash@main
uses: pytorch/test-infra/.github/actions/update-commit-hash@release/2.3
if: ${{ github.event_name == 'schedule' }}
with:
repo-name: executorch

View File

@ -311,7 +311,7 @@ jobs:
name: linux-focal-py3_8-clang9-xla
uses: ./.github/workflows/_linux-build.yml
with:
build-environment: linux-focal-py3_8-clang9-xla
build-environment: linux-focal-py3.8-clang9-xla
docker-image-name: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/xla_base:v1.1-lite
test-matrix: |
{ include: [
@ -323,7 +323,7 @@ jobs:
uses: ./.github/workflows/_linux-test.yml
needs: linux-focal-py3_8-clang9-xla-build
with:
build-environment: linux-focal-py3_8-clang9-xla
build-environment: linux-focal-py3.8-clang9-xla
docker-image: ${{ needs.linux-focal-py3_8-clang9-xla-build.outputs.docker-image }}
test-matrix: ${{ needs.linux-focal-py3_8-clang9-xla-build.outputs.test-matrix }}

View File

@ -18,7 +18,7 @@ jobs:
- name: Calculate docker image
id: calculate-docker-image
uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.3
with:
docker-image-name: pytorch-linux-focal-cuda12.1-cudnn8-py3-gcc9
@ -32,13 +32,13 @@ jobs:
echo "docker pull ghcr.io/pytorch/ci-image:${tag/:/-}"
- name: Pull docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.3
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
- name: Install nvidia driver, nvidia-docker runtime, set GPU_FLAG
id: install-nvidia-driver
uses: pytorch/test-infra/.github/actions/setup-nvidia@main
uses: pytorch/test-infra/.github/actions/setup-nvidia@release/2.3
- name: Clone PyTorch
uses: actions/checkout@v3

View File

@ -14,7 +14,7 @@ jobs:
# checkout because when we run this action we don't *have* a local
# checkout. In other cases you should prefer a local checkout.
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with:
submodules: false

View File

@ -16,7 +16,7 @@ jobs:
environment: ${{ (github.event_name == 'schedule') && 'mergebot' || '' }}
steps:
- name: Update viable/strict
uses: pytorch/test-infra/.github/actions/update-viablestrict@main
uses: pytorch/test-infra/.github/actions/update-viablestrict@release/2.3
with:
repository: pytorch/pytorch
stable-branch: viable/strict

View File

@ -17,7 +17,7 @@ jobs:
contents: read
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with:
fetch-depth: 1
submodules: false

View File

@ -44,7 +44,7 @@ jobs:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
uses: pytorch/test-infra/.github/actions/upload-alerts@main
uses: pytorch/test-infra/.github/actions/upload-alerts@release/2.3
with:
alerts: '${{ steps.alert_creation_step.outputs.script-output }}'
organization: "pytorch"

View File

@ -39,7 +39,7 @@ jobs:
run: echo "${TRIGGERING_WORKFLOW}"
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
- uses: actions/setup-python@v4
with:

View File

@ -29,7 +29,7 @@ jobs:
name: Upload dynamo performance stats for ${{ github.event.workflow_run.id }}, attempt ${{ github.event.workflow_run.run_attempt }}
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with:
submodules: false
fetch-depth: 1

View File

@ -21,7 +21,7 @@ jobs:
fetch-depth: 0
- name: update-xla-commit-hash
continue-on-error: true
uses: pytorch/test-infra/.github/actions/update-commit-hash@main
uses: pytorch/test-infra/.github/actions/update-commit-hash@release/2.3
with:
repo-name: xla
branch: master
@ -30,7 +30,7 @@ jobs:
updatebot-token: ${{ secrets.UPDATEBOT_TOKEN }}
pytorchbot-token: ${{ secrets.GH_PYTORCHBOT_TOKEN }}
- name: update-triton-commit-hash
uses: pytorch/test-infra/.github/actions/update-commit-hash@main
uses: pytorch/test-infra/.github/actions/update-commit-hash@release/2.3
with:
repo-owner: openai
repo-name: triton

View File

@ -218,7 +218,7 @@ Validate the release jobs for pytorch and domain libraries should be green. Vali
* [TorchVision](https://hud.pytorch.org/hud/pytorch/vision/release%2F1.12)
* [TorchAudio](https://hud.pytorch.org/hud/pytorch/audio/release%2F1.12)
Validate that the documentation build has completed and generated an entry corresponding to the release in the [docs folder](https://github.com/pytorch/pytorch.github.io/tree/site/docs/) of the pytorch.github.io repository
Validate that the documentation build has completed and generated an entry corresponding to the release in the [docs repository](https://github.com/pytorch/docs/tree/main/).
### Cherry Picking Fixes

View File

@ -13,6 +13,10 @@ namespace at {
TORCH_API ScalarType toScalarType(const DLDataType& dtype);
TORCH_API DLManagedTensor* toDLPack(const Tensor& src);
TORCH_API Tensor fromDLPack(DLManagedTensor* src);
C10_DEPRECATED_MESSAGE("Please migrate to a non-const variant")
inline Tensor fromDLPack(const DLManagedTensor* src) {
return fromDLPack(const_cast<DLManagedTensor*>(src));
}
TORCH_API Tensor
fromDLPack(DLManagedTensor* src, std::function<void(void*)> deleter);
TORCH_API DLDataType getDLDataType(const Tensor& t);
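For reference, the Python-level round-trip that ultimately exercises these entry points is the public DLPack API; a minimal sketch:

import torch
from torch.utils.dlpack import to_dlpack, from_dlpack

t = torch.arange(4.0)
capsule = to_dlpack(t)        # wraps the tensor in a DLManagedTensor capsule
u = from_dlpack(capsule)      # imports it back without copying
assert u.data_ptr() == t.data_ptr()  # zero-copy: same storage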

View File

@ -303,6 +303,29 @@ Tensor FunctionalInverses::_nested_view_from_buffer_inverse(const Tensor& base,
return Tensor();
}
Tensor FunctionalInverses::_nested_view_from_jagged_inverse(const Tensor& base, const Tensor& mutated_view, InverseReturnMode inverse_return_mode, const Tensor& offsets, const Tensor& dummy, const std::optional<Tensor>& lengths, int64_t ragged_idx) {
auto values = at::_nested_get_values(mutated_view);
if (inverse_return_mode != InverseReturnMode::NeverView) {
return values;
} else {
return values.clone(/*memory_format=*/at::MemoryFormat::Contiguous);
}
}
Tensor FunctionalInverses::_nested_get_values_inverse(const Tensor& base, const Tensor& mutated_view, InverseReturnMode inverse_return_mode) {
auto offsets = at::_nested_get_offsets(base);
auto lengths = at::_nested_get_lengths(base);
auto ragged_idx = at::_nested_get_ragged_idx(base);
auto dummy = at::_nested_get_jagged_dummy(base);
auto nt = at::_nested_view_from_jagged(mutated_view, offsets, dummy, lengths, ragged_idx);
if (inverse_return_mode != InverseReturnMode::NeverView) {
return nt;
} else {
return nt.clone(/*memory_format=*/at::MemoryFormat::Contiguous);
}
}
Tensor FunctionalInverses::unsqueeze_inverse(const Tensor& base, const Tensor& mutated_view, InverseReturnMode inverse_return_mode, int64_t dim) {
if (inverse_return_mode != InverseReturnMode::NeverView) {
return at::squeeze(mutated_view, dim);
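These inverses back the jagged-layout nested tensor views. A minimal sketch of the corresponding Python surface (assuming the torch.jagged layout shipped in this release):

import torch

nt = torch.nested.nested_tensor(
    [torch.randn(2, 3), torch.randn(4, 3)], layout=torch.jagged
)
vals = nt.values()    # flat (2 + 4, 3) buffer that _nested_get_values views
offs = nt.offsets()   # component boundaries, here tensor([0, 2, 6])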

View File

@ -173,11 +173,22 @@ void MPSStream::copy(id<MTLBuffer> srcBuffer,
endKernelCoalescing();
id<MTLBlitCommandEncoder> blitEncoder = [commandBuffer() blitCommandEncoder];
[blitEncoder copyFromBuffer:srcBuffer
sourceOffset:(NSUInteger)srcOffset
toBuffer:dstBuffer
destinationOffset:(NSUInteger)dstOffset
size:(NSUInteger)length];
// For some reason copyFromBuffer for 4GB fails without returning an error
// See https://github.com/pytorch/pytorch/issues/124335
// Workaround by batching copy commands into 2GB chunks
constexpr size_t max_copy_size = 0x80000000; // 2GB
size_t bytes_copied = 0;
size_t bytes_remains = length;
while (bytes_remains > 0) {
NSUInteger bytes_to_copy = std::min(max_copy_size, bytes_remains);
[blitEncoder copyFromBuffer:srcBuffer
sourceOffset:(NSUInteger)srcOffset + bytes_copied
toBuffer:dstBuffer
destinationOffset:(NSUInteger)dstOffset + bytes_copied
size:bytes_to_copy];
bytes_copied += bytes_to_copy;
bytes_remains -= bytes_to_copy;
}
[blitEncoder endEncoding];
// profilerId has a value only if copy profiling is enabled
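The batching logic reads naturally as a loop over at-most-2GB slices; a hedged Python rendering of the same loop (names illustrative):

MAX_COPY_SIZE = 0x80000000  # 2GB per blit command

def chunked_copy(blit, length, src_off=0, dst_off=0):
    copied = 0
    while copied < length:
        n = min(MAX_COPY_SIZE, length - copied)
        blit(src_off + copied, dst_off + copied, n)  # one copyFromBuffer call
        copied += n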

View File

@ -850,19 +850,13 @@ void try_plans(
benchmark_cache.update(key, plan);
return;
} catch (cudnn_frontend::cudnnException& e) {
TORCH_WARN("Plan failed with a cudnnException: ", e.what());
} catch (CuDNNError& e) {
TORCH_WARN("Plan failed with a CuDNNError: ", e.what());
} catch (c10::OutOfMemoryError& e) {
(void)cudaGetLastError(); // clear CUDA error
TORCH_WARN("Plan failed with an OutOfMemoryError: ", e.what());
}
}
TORCH_CHECK(
false,
"FIND was unable to find an engine to execute this computation after trying ",
plans.size(),
" plans.");
false, "FIND was unable to find an engine to execute this computation");
}
void try_plans_fused(
@ -880,19 +874,13 @@ void try_plans_fused(
benchmark_cache_fused.update(key, plan);
return;
} catch (cudnn_frontend::cudnnException& e) {
TORCH_WARN("Plan failed with a cudnnException: ", e.what());
} catch (CuDNNError& e) {
TORCH_WARN("Plan failed with a CuDNNError: ", e.what());
} catch (c10::OutOfMemoryError& e) {
(void)cudaGetLastError(); // clear CUDA error
TORCH_WARN("Plan failed with an OutOfMemoryError: ", e.what());
}
}
TORCH_CHECK(
false,
"FIND was unable to find an engine to execute this computation after trying ",
plans.size(),
" plans.");
false, "FIND was unable to find an engine to execute this computation");
}
bool try_configs(
@ -916,12 +904,9 @@ bool try_configs(
benchmark_cache.update(key, plan);
return true;
} catch (cudnn_frontend::cudnnException& e) {
TORCH_WARN("Plan failed with a cudnnException: ", e.what());
} catch (CuDNNError& e) {
TORCH_WARN("Plan failed with a CuDNNError: ", e.what());
} catch (c10::OutOfMemoryError& e) {
(void)cudaGetLastError(); // clear CUDA error
TORCH_WARN("Plan failed with an OutOfMemoryError: ", e.what());
}
}
return false;
@ -950,12 +935,9 @@ bool try_configs_fused(
benchmark_cache_fused.update(key, plan);
return true;
} catch (cudnn_frontend::cudnnException& e) {
TORCH_WARN("Plan failed with a cudnnException: ", e.what());
} catch (CuDNNError& e) {
TORCH_WARN("Plan failed with a CuDNNError: ", e.what());
} catch (c10::OutOfMemoryError& e) {
(void)cudaGetLastError(); // clear CUDA error
TORCH_WARN("Plan failed with an OutOfMemoryError: ", e.what());
}
}
return false;
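The control flow being trimmed here is unchanged in shape: try each candidate plan, cache and return the first one that executes, and fail only if every candidate throws. A schematic Python equivalent (the exception type is a stand-in for cudnnException/CuDNNError/OutOfMemoryError):

def try_plans(plans, key, run, cache):
    for plan in plans:
        try:
            run(plan)
            cache[key] = plan  # remember the first working plan
            return
        except RuntimeError:
            continue
    raise RuntimeError(
        "FIND was unable to find an engine to execute this computation")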

View File

@ -25,6 +25,10 @@ typedef NS_ENUM(NSUInteger, MPSGraphFFTScalingMode)
-(MPSGraphTensor * _Nonnull) conjugateWithTensor:(MPSGraphTensor * _Nonnull) tensor
name:(NSString * _Nullable) name;
-(MPSGraphTensor * _Nonnull) realPartOfTensor:(MPSGraphTensor * _Nonnull) tensor
name:(NSString * _Nullable) name;
-(MPSGraphTensor * _Nonnull) fastFourierTransformWithTensor:(MPSGraphTensor * _Nonnull) tensor
axes:(NSArray<NSNumber *> * _Nonnull) axes
descriptor:(MPSGraphFFTDescriptor * _Nonnull) descriptor

View File

@ -210,6 +210,9 @@ std::string scalarToMetalTypeString(const c10::ScalarType& scalar_type) {
return "float";
case ScalarType::Half:
return "half";
case ScalarType::BFloat16:
checkSupportsBFloat16();
return "bfloat";
case ScalarType::Int:
return "int";
case ScalarType::Long:

View File

@ -28,6 +28,7 @@ static Tensor& fill_scalar_mps_impl(Tensor& self, const Scalar& value) {
struct CachedGraph : public MPSCachedGraph {
CachedGraph(MPSGraph* graph) : MPSCachedGraph(graph) {}
MPSGraphTensor* inputTensor_ = nil;
MPSGraphTensor* outputTensor_ = nil;
};
@ -35,36 +36,23 @@ static Tensor& fill_scalar_mps_impl(Tensor& self, const Scalar& value) {
string key = "fill_scalar_mps_impl" + getTensorsStringKey(self) + ":" + to_string(value.toDouble());
auto cachedGraph = LookUpOrCreateCachedGraph<CachedGraph>(key, [&](auto mpsGraph, auto newCachedGraph) {
auto isBool = self.scalar_type() == c10::ScalarType::Bool;
auto isUInt8 = self.scalar_type() == c10::ScalarType::Byte;
auto dataType = !isUInt8 ? !isBool ? getMPSScalarType(self.scalar_type()) : MPSDataTypeInt8 : MPSDataTypeUInt32;
// constantWithScalar does not work for boolTypes on MacOS-12.[34]
// workaround by filling it as an int8 tensor and then casting to bool
// See https://github.com/pytorch/pytorch/issues/82427
// constantWithScalar does not work for UInt8 Types on MacOS-12.[34]/Ventura preview
// workaround by filling it as a uint32 tensor and then casting to uint8
// See https://github.com/pytorch/pytorch/issues/83692
MPSGraphTensor* inputTensor = [mpsGraph constantWithScalar:value.toDouble()
shape:getMPSShape(self)
dataType:dataType];
MPSGraphTensor* inputTensor = mpsGraphScalarPlaceHolder(mpsGraph, getMPSDataType(self.scalar_type()));
MPSGraphTensor* outputTensor = [mpsGraph identityWithTensor:inputTensor name:nil];
if (isBool) {
outputTensor = [mpsGraph castTensor:outputTensor toType:MPSDataTypeBool name:@"constWithBool-workaround"];
}
if (isUInt8) {
outputTensor = [mpsGraph castTensor:outputTensor toType:MPSDataTypeUInt8 name:@"constWithUInt8-workaround"];
}
newCachedGraph->inputTensor_ = inputTensor;
newCachedGraph->outputTensor_ = outputTensor;
});
auto mpsScalar = getMPSScalar(value, self.scalar_type());
auto mpsScalarData = getMPSGraphTensorFromScalar(getCurrentMPSStream(), mpsScalar);
NSDictionary<MPSGraphTensor*, MPSGraphTensorData*>* feeds = @{cachedGraph->inputTensor_ : mpsScalarData};
Placeholder outputPlaceholder =
Placeholder(cachedGraph->outputTensor_, needsCopyToOutput ? output : self, nullptr, !needsCopyToOutput);
NSDictionary<MPSGraphTensor*, MPSGraphTensorData*>* results =
@{outputPlaceholder.getMPSGraphTensor() : outputPlaceholder.getMPSGraphTensorData()};
runMPSGraph(getCurrentMPSStream(), cachedGraph->graph(), /*feeds*/ nil, results);
runMPSGraph(getCurrentMPSStream(), cachedGraph->graph(), feeds, results);
if (needsCopyToOutput) {
self.copy_(output);
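With the fill value fed through a scalar placeholder rather than baked into a constant node, one cached graph now serves every value and dtype, including the bool and uint8 cases that previously needed cast workarounds. From the user side this is the ordinary fill path, e.g.:

import torch

if torch.backends.mps.is_available():
    x = torch.empty(3, device="mps")
    x.fill_(1.5)                                   # scalar goes in as a feed
    y = torch.full((2, 2), True, device="mps", dtype=torch.bool)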

View File

@ -240,14 +240,20 @@ static void index_put_kernel_mps(TensorIterator& iter,
} // namespace mps
static Tensor nonzero_fallback(const Tensor& self) {
TORCH_WARN_ONCE("MPS: nonzero op is supported natively starting from macOS 13.0. ",
"Falling back on CPU. This may have performance implications.");
return at::nonzero(self.to("cpu")).clone().to("mps");
}
Tensor& nonzero_out_mps(const Tensor& self, Tensor& out_) {
if (!is_macos_13_or_newer()) {
if (!is_macos_13_or_newer(MacOSVersion::MACOS_VER_14_0_PLUS)) {
TORCH_WARN_ONCE("MPS: nonzero op is supported natively starting from macOS 13.0. ",
"Falling back on CPU. This may have performance implications.");
Tensor out_fallback = nonzero_fallback(self);
at::native::resize_output(out_, out_fallback.sizes());
out_.copy_(out_fallback.to("mps"));
return out_;
} else if (self.is_complex()) {
TORCH_WARN_ONCE("MPS: nonzero op is not supported for complex datatypes. ",
"Falling back on CPU. This may have performance implications.");
Tensor out_fallback = nonzero_fallback(self);
at::native::resize_output(out_, out_fallback.sizes());
out_.copy_(out_fallback.to("mps"));
@ -281,7 +287,6 @@ Tensor& nonzero_out_mps(const Tensor& self, Tensor& out_) {
CachedGraph(MPSGraph* graph) : MPSCachedGraph(graph) {}
MPSGraphTensor* inputTensor_ = nil;
MPSGraphTensor* outputTensor_ = nil;
MPSGraphTensor* scatterDataTensor_ = nil;
};
dispatch_sync(stream->queue(), ^() {
@ -299,93 +304,20 @@ Tensor& nonzero_out_mps(const Tensor& self, Tensor& out_) {
out = at::empty(out_.sizes(), out_.scalar_type(), c10::nullopt, kMPS, c10::nullopt, c10::nullopt);
}
int64_t _apparentInputShape = 1;
for (auto dim : self.sizes()) {
_apparentInputShape *= dim;
}
MPSShape* apparentOutputShape = @[ @(total_nonzero * nDim) ];
MPSShape* apparentInputShape = @[ @(_apparentInputShape) ];
// Pseudocode:
//
// inputTensor = [1, 0, 0, 3]
// inputNonZero = [1, 0, 0, 1]
// indices = [1, 1, 1, 2]
// maskedIndices = [0, -1, -1, 1]
// coordinates = [0, 1, 2, 3]
// scatterResult = [0, 3]
@autoreleasepool {
string key = "nonzero_out_mps" + getTensorsStringKey(self);
auto cachedGraph = LookUpOrCreateCachedGraph<CachedGraph>(key, [&](auto mpsGraph, auto newCachedGraph) {
MPSDataType inputDataType = getMPSDataType(self);
MPSShape* inputShape = getMPSShape(self);
MPSGraphTensor* inputTensor = mpsGraphRankedPlaceHolder(mpsGraph, getMPSDataType(self), getMPSShape(self));
MPSGraphTensor* inputTensor =
mpsGraphRankedPlaceHolder(mpsGraph, getMPSScalarType(self.scalar_type()), apparentInputShape);
MPSGraphTensor* scatterDataTensor = mpsGraphUnrankedPlaceHolder(mpsGraph, getMPSScalarType(out.scalar_type()));
MPSGraphTensor* zeroTensor = [mpsGraph constantWithScalar:0.0 dataType:inputDataType];
MPSGraphTensor* oneTensor = [mpsGraph constantWithScalar:1.0 dataType:MPSDataTypeInt32];
MPSGraphTensor* minusMaxDimTensor = [mpsGraph constantWithScalar:-maxDimensions dataType:MPSDataTypeInt32];
MPSGraphTensor* inputNotEqualToZeroTensor = [mpsGraph notEqualWithPrimaryTensor:inputTensor
secondaryTensor:zeroTensor
name:nil];
MPSGraphTensor* maskTensor = [mpsGraph castTensor:inputNotEqualToZeroTensor
toType:MPSDataTypeInt32
name:@"castToInt32"];
MPSGraphTensor* indicesTensor = [mpsGraph cumulativeSumWithTensor:maskTensor axis:0 name:nil];
MPSGraphTensor* indicesMinusOneTensor = [mpsGraph subtractionWithPrimaryTensor:indicesTensor
secondaryTensor:oneTensor
name:nil];
MPSGraphTensor* maskedIndicesTensor = [mpsGraph selectWithPredicateTensor:inputNotEqualToZeroTensor
truePredicateTensor:indicesMinusOneTensor
falsePredicateTensor:minusMaxDimTensor
name:nil];
MPSGraphTensor* coordinatesTensor = [mpsGraph reshapeTensor:[mpsGraph coordinateAlongAxis:0
withShape:inputShape
name:nil]
withShape:@[ @-1 ]
name:nil];
if (nDim > 1) {
NSMutableArray<MPSGraphTensor*>* maskedIndicesTensorArray = [NSMutableArray arrayWithCapacity:nDim];
NSMutableArray<MPSGraphTensor*>* coordinatesTensorArray = [NSMutableArray arrayWithCapacity:nDim];
MPSGraphTensor* constantRankTensor = [mpsGraph constantWithScalar:nDim dataType:MPSDataTypeInt32];
maskedIndicesTensorArray[0] = [mpsGraph multiplicationWithPrimaryTensor:maskedIndicesTensor
secondaryTensor:constantRankTensor
name:nil];
coordinatesTensorArray[0] = coordinatesTensor;
for (int i = 1; i < nDim; i++) {
maskedIndicesTensorArray[i] = [mpsGraph additionWithPrimaryTensor:maskedIndicesTensorArray[i - 1]
secondaryTensor:oneTensor
name:nil];
coordinatesTensorArray[i] = [mpsGraph reshapeTensor:[mpsGraph coordinateAlongAxis:i
withShape:inputShape
name:nil]
withShape:@[ @-1 ]
name:nil];
}
maskedIndicesTensor = [mpsGraph concatTensors:maskedIndicesTensorArray dimension:0 interleave:YES name:nil];
coordinatesTensor = [mpsGraph concatTensors:coordinatesTensorArray dimension:0 interleave:YES name:nil];
}
MPSGraphTensor* outputTensor = [mpsGraph scatterWithDataTensor:scatterDataTensor
updatesTensor:coordinatesTensor
indicesTensor:maskedIndicesTensor
axis:0
mode:MPSGraphScatterModeSet
name:nil];
MPSGraphTensor* outputTensor = [mpsGraph nonZeroIndicesOfTensor:inputTensor name:nil];
newCachedGraph->inputTensor_ = inputTensor;
newCachedGraph->scatterDataTensor_ = scatterDataTensor;
newCachedGraph->outputTensor_ = outputTensor;
});
Placeholder selfPlaceholder = Placeholder(cachedGraph->inputTensor_, self, apparentInputShape);
Placeholder outputPlaceholder = Placeholder(cachedGraph->outputTensor_, out, apparentOutputShape);
Placeholder scatterPlaceholder = Placeholder(cachedGraph->scatterDataTensor_, out, apparentOutputShape);
auto feeds = dictionaryFromPlaceholders(selfPlaceholder, scatterPlaceholder);
Placeholder selfPlaceholder = Placeholder(cachedGraph->inputTensor_, self);
Placeholder outputPlaceholder = Placeholder(cachedGraph->outputTensor_, out);
auto feeds = dictionaryFromPlaceholders(selfPlaceholder);
runMPSGraph(stream, cachedGraph->graph(), feeds, outputPlaceholder);
}
@ -397,7 +329,13 @@ Tensor& nonzero_out_mps(const Tensor& self, Tensor& out_) {
}
Tensor nonzero_mps(const Tensor& self) {
if (!is_macos_13_or_newer()) {
if (!is_macos_13_or_newer(MacOSVersion::MACOS_VER_14_0_PLUS)) {
TORCH_WARN_ONCE("MPS: nonzero op is supported natively starting from macOS 13.0. ",
"Falling back on CPU. This may have performance implications.");
return nonzero_fallback(self);
} else if (self.is_complex()) {
TORCH_WARN_ONCE("MPS: nonzero op is not supported for complex datatypes ",
"Falling back on CPU. This may have performance implications.");
return nonzero_fallback(self);
}
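Net effect of the dispatch above: native nonzero on new enough macOS, a CPU round-trip otherwise and always for complex inputs. A quick sanity check, hedged on an MPS device being present:

import torch

if torch.backends.mps.is_available():
    t = torch.tensor([1.0, 0.0, 0.0, 3.0], device="mps")
    idx = torch.nonzero(t)   # tensor([[0], [3]]), natively or via the fallback
    print(idx.device)        # mps:0 either way; the fallback copies back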

View File

@ -4,6 +4,8 @@
#include <ATen/mps/MPSProfiler.h>
#include <ATen/native/LinearAlgebraUtils.h>
#include <ATen/native/Resize.h>
// For MTLLanguageVersion_3_1
#include <ATen/native/mps/MPSGraphSonomaOps.h>
#include <ATen/native/mps/OperationUtils.h>
#ifndef AT_PER_OPERATOR_HEADERS
@ -29,7 +31,7 @@ static const char* METAL_LINALG = R"MATMUL_METAL(
using namespace metal;
template<typename T>
T dot_product(constant T *v1, constant T* v2, ulong2 strides, uint32_t size) {
T rc = 0.0;
T rc = T(0.0);
for (uint32_t i = 0; i < size; ++i) {
rc += v1[i * strides.x] * v2[i * strides.y];
}
@ -69,6 +71,9 @@ kernel void naive_matmul<DTYPE>( \
INSTANTIATE_NAIVE_MM(float);
INSTANTIATE_NAIVE_MM(half);
#if __METAL_VERSION__ >= 310
INSTANTIATE_NAIVE_MM(bfloat);
#endif
)MATMUL_METAL";
id<MTLLibrary> compileLinalgOpLibrary(id<MTLDevice> device) {
@ -79,7 +84,8 @@ id<MTLLibrary> compileLinalgOpLibrary(id<MTLDevice> device) {
NSError* error = nil;
MTLCompileOptions* options = [[MTLCompileOptions new] autorelease];
[options setLanguageVersion:MTLLanguageVersion2_3];
[options setLanguageVersion:is_macos_13_or_newer(MacOSVersion::MACOS_VER_14_0_PLUS) ? MTLLanguageVersion3_1
: MTLLanguageVersion2_3];
linalgLibrary = [device newLibraryWithSource:[NSString stringWithCString:METAL_LINALG encoding:NSASCIIStringEncoding]
options:options
error:&error];
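Together with the bfloat instantiation above, the bumped language version lets the naive Metal matmul serve bfloat16 on macOS 14+. A hedged usage sketch:

import torch

if torch.backends.mps.is_available():
    a = torch.randn(8, 8, device="mps", dtype=torch.bfloat16)
    b = torch.randn(8, 8, device="mps", dtype=torch.bfloat16)
    c = a @ b  # may route to the naive kernel; needs Metal 3.1 (macOS 14)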

View File

@ -30,41 +30,53 @@ static void clamp_mps_graph(CachedGraph* cachedGraph,
const Tensor& min_tensor,
const Tensor& max_tensor) {
auto input_dtype = input_tensor.scalar_type();
auto min_dtype = input_dtype;
auto max_dtype = input_dtype;
if (cachedGraph->minTensor) {
min_dtype = min_tensor.scalar_type();
}
if (cachedGraph->maxTensor) {
max_dtype = max_tensor.scalar_type();
}
auto min_dtype = cachedGraph->minTensor ? min_tensor.scalar_type() : input_dtype;
auto max_dtype = cachedGraph->maxTensor ? max_tensor.scalar_type() : input_dtype;
MPSGraph* mpsGraph = cachedGraph->graph();
cachedGraph->inputTensor = mpsGraphRankedPlaceHolder(mpsGraph, input_tensor);
MPSGraphTensor* minTensor = cachedGraph->minTensor;
MPSGraphTensor* maxTensor = cachedGraph->maxTensor;
auto minTensor = cachedGraph->minTensor;
auto maxTensor = cachedGraph->maxTensor;
if (input_dtype != min_dtype) {
minTensor = castMPSTensor(mpsGraph, cachedGraph->minTensor, input_dtype);
}
if (input_dtype != max_dtype) {
maxTensor = castMPSTensor(mpsGraph, cachedGraph->maxTensor, input_dtype);
}
if (cachedGraph->minTensor && cachedGraph->maxTensor) {
cachedGraph->outputTensor = [mpsGraph clampWithTensor:cachedGraph->inputTensor
minValueTensor:minTensor
maxValueTensor:maxTensor
name:nil];
} else if (cachedGraph->maxTensor) {
cachedGraph->outputTensor = [mpsGraph minimumWithPrimaryTensor:cachedGraph->inputTensor
secondaryTensor:maxTensor
name:nil];
} else if (cachedGraph->minTensor) {
cachedGraph->outputTensor = [mpsGraph maximumWithPrimaryTensor:cachedGraph->inputTensor
secondaryTensor:minTensor
name:nil];
if (c10::isIntegralType(input_dtype, /*includeBool=*/true)) {
if (minTensor && maxTensor) {
cachedGraph->outputTensor = [mpsGraph clampWithTensor:cachedGraph->inputTensor
minValueTensor:minTensor
maxValueTensor:maxTensor
name:nil];
} else if (maxTensor) {
cachedGraph->outputTensor = [mpsGraph minimumWithPrimaryTensor:cachedGraph->inputTensor
secondaryTensor:maxTensor
name:nil];
} else if (minTensor) {
cachedGraph->outputTensor = [mpsGraph maximumWithPrimaryTensor:cachedGraph->inputTensor
secondaryTensor:minTensor
name:nil];
}
return;
}
// clampWithTensor doesn't propagate NaN through so simulate it as composition of
// maximumWithNaNPropagationWithPrimaryTensor and minimumWithNaNPropagationWithPrimaryTensor
auto outputTensor = cachedGraph->inputTensor;
if (minTensor) {
outputTensor = [mpsGraph maximumWithNaNPropagationWithPrimaryTensor:outputTensor
secondaryTensor:minTensor
name:nil];
}
if (maxTensor) {
outputTensor = [mpsGraph minimumWithNaNPropagationWithPrimaryTensor:outputTensor
secondaryTensor:maxTensor
name:nil];
}
cachedGraph->outputTensor = outputTensor;
}
static void check_min_max_dims(const OptionalTensorRef clamp_opt, const Tensor& input_t, string op_name) {
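The NaN-propagating composition matches torch.clamp semantics for floating inputs, where a NaN entry must stay NaN rather than be clamped:

import torch

x = torch.tensor([float("nan"), -1.0, 5.0])
print(torch.clamp(x, min=0.0, max=1.0))  # tensor([nan, 0., 1.])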

View File

@ -75,23 +75,10 @@ static bool is_empty_tensor(const Tensor& self) {
return self.numel() == 0;
}
static void unary_op(const Tensor& self,
const Tensor& output_,
std::string op_name,
UnaryOpBlock unaryBlock,
is_noop_p is_noop = is_empty_tensor) {
static void unary_op_noresize(const Tensor& self, const Tensor& output_, std::string op_name, UnaryOpBlock unaryBlock) {
TORCH_CHECK(!(!is_macos_13_or_newer() && self.scalar_type() == ScalarType::Byte),
"MPS support unary op with uint8 natively starting from macOS 13.0");
if (!output_.is_same_size(self)) {
output_.resize_(self.sizes());
}
if (is_noop(self)) {
output_.copy_(self);
return;
}
auto output = output_;
bool needsCopyToOutput = false;
if (output.storage_offset() || !output.is_contiguous()) {
@ -139,6 +126,23 @@ static void unary_op(const Tensor& self,
}
}
static void unary_op(const Tensor& self,
const Tensor& output_,
std::string op_name,
UnaryOpBlock unaryBlock,
is_noop_p is_noop = is_empty_tensor) {
if (!output_.is_same_size(self)) {
output_.resize_(self.sizes());
}
if (is_noop(self)) {
output_.copy_(self);
return;
}
unary_op_noresize(self, output_, op_name, unaryBlock);
}
MPSGraphTensor* trunc_tensor(MPSGraph* mpsGraph, MPSGraphTensor* inputTensor) {
// Rounding is a no-op for integral types, and also a reasonable workaround
// For MPSGraph bug on Apple Silicon, that throws `Function floorOp_i64 was not found in the library`
@ -168,6 +172,12 @@ MPSGraphTensor* log1p(MPSGraph* mpsGraph, MPSGraphTensor* inputTensor) {
return [mpsGraph logarithmWithTensor:addedTensor name:nil];
}
static MPSGraphTensor* lengthOfComplexAsReal(MPSGraph* mpsGraph, MPSGraphTensor* inputTensor) {
auto squares = [mpsGraph squareWithTensor:inputTensor name:nil];
auto sumSquares = [mpsGraph reductionSumWithTensor:squares axis:-1 name:nil];
return [mpsGraph squareRootWithTensor:sumSquares name:nil];
}
} // namespace mps
TORCH_IMPL_FUNC(trunc_out_mps)(const Tensor& self, const Tensor& output) {
@ -226,14 +236,6 @@ CREATE_MPS_STRUCTURED_UNARY_ROUNDING_TORCH_IMPL_FUNC(round_out_mps, round)
}); \
}
#define CREATE_MPS_UNARY_TORCH_IMPL_FUNC(func_out, func_stub) \
Tensor& func_out(const Tensor& self, Tensor& output) { \
mps::unary_op(self, output, #func_out, ^MPSGraphTensor*(MPSGraph * mpsGraph, MPSGraphTensor * inputTensor) { \
return [mpsGraph func_stub##WithTensor:inputTensor name:nil]; \
}); \
return output; \
}
CREATE_MPS_STRUCTURED_UNARY_TORCH_IMPL_FUNC(exp_out_mps, exponent)
CREATE_MPS_STRUCTURED_UNARY_TORCH_IMPL_FUNC(exp2_out_mps, exponentBase2)
CREATE_MPS_STRUCTURED_UNARY_TORCH_IMPL_FUNC(reciprocal_out_mps, reciprocal)
@ -257,7 +259,35 @@ CREATE_MPS_STRUCTURED_UNARY_TORCH_IMPL_FUNC(asinh_out_mps, asinh)
CREATE_MPS_STRUCTURED_UNARY_TORCH_IMPL_FUNC(acosh_out_mps, acosh)
CREATE_MPS_STRUCTURED_UNARY_TORCH_IMPL_FUNC(atanh_out_mps, atanh)
CREATE_MPS_UNARY_TORCH_IMPL_FUNC(abs_out_mps, absolute)
Tensor& abs_out_mps(const Tensor& self, Tensor& output) {
using namespace mps;
if (!output.is_same_size(self)) {
output.resize_(self.sizes());
}
if (self.numel() == 0) {
return output;
}
if (supportsComplex() || !self.is_complex()) {
unary_op_noresize(self, output, "abs_out_mps", ^MPSGraphTensor*(MPSGraph* mpsGraph, MPSGraphTensor* inputTensor) {
auto rc = [mpsGraph absoluteWithTensor:inputTensor name:nil];
if (self.is_complex()) {
rc = [mpsGraph realPartOfTensor:rc name:nil];
}
return rc;
});
} else {
Tensor realInput = at::view_as_real(self);
unary_op_noresize(
realInput, output, "abs_out_mps", ^MPSGraphTensor*(MPSGraph* mpsGraph, MPSGraphTensor* inputTensor) {
auto rc = lengthOfComplexAsReal(mpsGraph, inputTensor);
return [mpsGraph reshapeTensor:rc withShape:getMPSShape(output) name:nil];
});
}
return output;
}
Tensor& logical_not_out_mps(const Tensor& self, Tensor& output) {
auto bool_self = self.to(ScalarType::Bool);
@ -484,9 +514,7 @@ TORCH_IMPL_FUNC(sgn_out_mps)(const Tensor& self, const Tensor& output) {
Tensor realOutput = at::view_as_real(output);
auto complex_sgn_op = [&](MPSGraph* mpsGraph, MPSGraphTensor* inputTensor) -> MPSGraphTensor* {
MPSGraphTensor* squares = [mpsGraph squareWithTensor:inputTensor name:nil];
MPSGraphTensor* sumSquares = [mpsGraph reductionSumWithTensor:squares axis:-1 name:nil];
MPSGraphTensor* norm = [mpsGraph squareRootWithTensor:sumSquares name:nil];
MPSGraphTensor* norm = mps::lengthOfComplexAsReal(mpsGraph, inputTensor);
MPSGraphTensor* zero = [mpsGraph constantWithScalar:0.0 dataType:norm.dataType];
MPSGraphTensor* isZero = [mpsGraph equalWithPrimaryTensor:norm secondaryTensor:zero name:nil];
MPSGraphTensor* sgnTensor = [mpsGraph divisionWithPrimaryTensor:inputTensor secondaryTensor:norm name:nil];
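The rewritten abs path computes |z| for complex inputs as the 2-norm over the (real, imag) pair when native complex support is unavailable; semantically:

import torch

z = torch.tensor([3.0 + 4.0j])
print(torch.abs(z))               # tensor([5.])
# the decomposition the fallback uses:
r = torch.view_as_real(z)         # (..., 2) real view
print(r.square().sum(-1).sqrt())  # tensor([5.])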

View File

@ -6154,6 +6154,52 @@
CompositeExplicitAutogradNonFunctional: _nested_view_from_buffer_copy
autogen: _nested_view_from_buffer_copy.out
- func: _nested_view_from_jagged(Tensor(a) self, Tensor offsets, Tensor dummy, Tensor? lengths=None, int ragged_idx=1) -> Tensor(a)
variants: function
device_check: NoCheck
dispatch: {}
- func: _nested_view_from_jagged_copy(Tensor self, Tensor offsets, Tensor dummy, Tensor? lengths=None, int ragged_idx=1) -> Tensor
variants: function
device_check: NoCheck
tags: view_copy
dispatch:
CompositeExplicitAutogradNonFunctional: _nested_view_from_jagged_copy
autogen: _nested_view_from_jagged_copy.out
- func: _nested_get_values(Tensor(a) self) -> Tensor(a)
variants: function
device_check: NoCheck
dispatch: {}
- func: _nested_get_values_copy(Tensor self) -> Tensor
variants: function
device_check: NoCheck
tags: view_copy
dispatch:
CompositeExplicitAutogradNonFunctional: _nested_get_values_copy
autogen: _nested_get_values_copy.out
- func: _nested_get_offsets(Tensor self) -> Tensor
variants: function
device_check: NoCheck
dispatch: {}
# returns undefined Tensor if no lengths present
- func: _nested_get_lengths(Tensor self) -> Tensor
variants: function
device_check: NoCheck
dispatch: {}
- func: _nested_get_ragged_idx(Tensor self) -> int
variants: function
device_check: NoCheck
dispatch: {}
- func: _nested_get_jagged_dummy(Tensor any) -> Tensor
category_override: dummy
dispatch: {}
- func: _trilinear(Tensor i1, Tensor i2, Tensor i3, int[] expand1, int[] expand2, int[] expand3, int[] sumdim, int unroll_dim=1) -> Tensor
dispatch:
# calls unsqueeze

View File

@ -157,43 +157,6 @@ aotriton::TensorView<Rank> mk_aotensor(const at::Tensor& q, c10::string_view ten
cast_dtype(q.dtype()));
}
template<bool COPY_FROM_INPUT, // For Input Tensor
bool COPY_BACK> // For Output Tensor
class TensorStorageSanitizer {
public:
TensorStorageSanitizer(const at::Tensor& ref,
at::Tensor& to_sanitize)
: ref_(ref), to_sanitize_(to_sanitize)
{
need_sanitize = ref_.strides() != to_sanitize_.strides();
if (!need_sanitize)
return;
temp_ = at::empty_like(ref_);
if (COPY_FROM_INPUT) {
temp_.copy_(to_sanitize_);
}
}
~TensorStorageSanitizer()
{
if (need_sanitize && COPY_BACK)
to_sanitize_.copy_(temp_);
}
at::Tensor& sanitized_tensor()
{
if (need_sanitize)
return temp_;
return to_sanitize_;
}
private:
const at::Tensor& ref_;
at::Tensor& to_sanitize_;
at::Tensor temp_;
bool need_sanitize = false;
};
}
#define CHECK_DEVICE(x) TORCH_CHECK(x.is_cuda(), #x " must be on CUDA")
@ -531,9 +494,6 @@ mha_bwd(const at::Tensor &dout, // batch_size x seqlen_q x num_heads, x head_si
int d_head = head_size_og;
hipError_t err; // TODO: Error handling
{
TensorStorageSanitizer<true, false> dq_s(q_t, dq_t);
TensorStorageSanitizer<true, false> dk_s(k_t, dk_t);
TensorStorageSanitizer<true, false> dv_s(v_t, dv_t);
using aotriton::v2::flash::attn_bwd;
err = attn_bwd(mk_aotensor(q_t, "q"),
mk_aotensor(k_t, "k"),
@ -541,9 +501,9 @@ mha_bwd(const at::Tensor &dout, // batch_size x seqlen_q x num_heads, x head_si
softmax_scale,
mk_aotensor(out_t, "out"),
mk_aotensor(dout_t, "dout"),
mk_aotensor(dq_s.sanitized_tensor(), "dq"),
mk_aotensor(dk_s.sanitized_tensor(), "dk"),
mk_aotensor(dv_s.sanitized_tensor(), "dv"),
mk_aotensor(dq_t, "dq"),
mk_aotensor(dk_t, "dk"),
mk_aotensor(dv_t, "dv"),
mk_aotensor<2>(softmax_lse_cont, "L"),
mk_aotensor<2>(delta, "delta"),
p_dropout,
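For context, the deleted helper staged a mismatched-stride tensor through a temporary laid out like its reference and (optionally) copied the result back. A rough Python sketch of that behavior (names invented):

import torch

def sanitize(ref, to_sanitize, copy_from_input=True):
    if ref.stride() == to_sanitize.stride():
        return to_sanitize          # layouts already agree, use in place
    temp = torch.empty_like(ref)    # dense temp matching ref's layout
    if copy_from_input:
        temp.copy_(to_sanitize)
    return temp                     # caller may copy back afterwards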

View File

@ -6,7 +6,7 @@ if(NOT __AOTRITON_INCLUDED)
set(__AOTRITON_INSTALL_DIR "${PROJECT_SOURCE_DIR}/torch")
ExternalProject_Add(aotriton_external
GIT_REPOSITORY https://github.com/ROCm/aotriton.git
GIT_TAG 9044fe5eb16130e49a0a1f781ea15037353ad542
GIT_TAG 24a3fe9cb57e5cda3c923df29743f9767194cc27
SOURCE_DIR ${__AOTRITON_SOURCE_DIR}
BINARY_DIR ${__AOTRITON_BUILD_DIR}
PREFIX ${__AOTRITON_INSTALL_DIR}

View File

@ -83,6 +83,22 @@ devel-push: DOCKER_TAG := $(PYTORCH_VERSION)-cuda$(CUDA_VERSION_SHORT)-cudnn$(CU
devel-push:
$(DOCKER_PUSH)
ifeq ("$(CUDA_VERSION_SHORT)","cpu")
.PHONY: runtime-image
runtime-image: BASE_IMAGE := $(BASE_RUNTIME)
runtime-image: DOCKER_TAG := $(PYTORCH_VERSION)-runtime
runtime-image:
$(DOCKER_BUILD)
.PHONY: runtime-push
runtime-push: BASE_IMAGE := $(BASE_RUNTIME)
runtime-push: DOCKER_TAG := $(PYTORCH_VERSION)-runtime
runtime-push:
$(DOCKER_PUSH)
else
.PHONY: runtime-image
runtime-image: BASE_IMAGE := $(BASE_RUNTIME)
runtime-image: DOCKER_TAG := $(PYTORCH_VERSION)-cuda$(CUDA_VERSION_SHORT)-cudnn$(CUDNN_VERSION)-runtime
@ -95,6 +111,8 @@ runtime-push: DOCKER_TAG := $(PYTORCH_VERSION)-cuda$(CUDA_VERSION_SHORT)-cudnn$(
runtime-push:
$(DOCKER_PUSH)
endif
.PHONY: clean
clean:
-docker rmi -f $(shell docker images -q $(DOCKER_FULL_NAME))

View File

@ -548,7 +548,9 @@ class build_ext(setuptools.command.build_ext.build_ext):
omp_lib_name = (
"libomp.dylib" if os.uname().machine == "arm64" else "libiomp5.dylib"
)
if os.path.join("@rpath", omp_lib_name) not in libs:
omp_rpath_lib_path = os.path.join("@rpath", omp_lib_name)
omp_loader_lib_path = os.path.join("@loader_path", omp_lib_name)
if omp_rpath_lib_path not in libs:
return
# Copy libomp/libiomp5 from rpath locations
@ -558,6 +560,20 @@ class build_ext(setuptools.command.build_ext.build_ext):
continue
target_lib = os.path.join(self.build_lib, "torch", "lib", omp_lib_name)
self.copy_file(source_lib, target_lib)
# Change OMP library load path to loader_path and delete old rpath
# This should prevent delocate from attempting to package another instance
# of OpenMP library in torch wheel
subprocess.check_call(
[
"install_name_tool",
"-change",
omp_rpath_lib_path,
omp_loader_lib_path,
"-delete_rpath",
rpath,
libtorch_cpu_path,
]
)
break
# Copy omp.h from OpenMP_C_FLAGS and copy it into include folder
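After this step the wheel's libtorch_cpu should reference the bundled OpenMP via @loader_path and carry no stale rpath, which is what keeps delocate from vendoring a second copy. One way to verify locally (assuming macOS, otool on PATH, and an illustrative library path):

import subprocess

out = subprocess.check_output(
    ["otool", "-L", "torch/lib/libtorch_cpu.dylib"], text=True
)
assert "@loader_path/libomp.dylib" in out or "@loader_path/libiomp5.dylib" in out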

View File

@ -250,6 +250,44 @@ class DTensorAPITest(DTensorTestBase):
self.assertIsInstance(local_out, torch.Tensor)
self.assertNotIsInstance(local_out, DTensor)
@with_comms
def test_distribute_module_casting(self):
device_mesh = DeviceMesh(self.device_type, list(range(self.world_size)))
# check DTensor casting
dt = DTensor.from_local(torch.rand(10), device_mesh, [Replicate()])
dt = dt.to(torch.bfloat16)
self.assertEqual(dt.dtype, torch.bfloat16)
self.assertEqual(dt._local_tensor.dtype, torch.bfloat16)
# check distribute_tensor casting
dt = distribute_tensor(torch.rand(10), device_mesh, [Replicate()])
dt = dt.to(torch.bfloat16)
self.assertEqual(dt.dtype, torch.bfloat16)
self.assertEqual(dt._local_tensor.dtype, torch.bfloat16)
# check distribute_module casting
model = MyModel(10, 10, device=self.device_type)
replica_model = distribute_module(
model,
device_mesh,
)
replica_model = replica_model.to(torch.bfloat16)
self.assertEqual(replica_model.seq[0].weight.dtype, torch.bfloat16)
self.assertEqual(
replica_model.seq[0].weight._local_tensor.dtype, torch.bfloat16
)
# check autocast
dt = distribute_tensor(torch.rand(10), device_mesh, [Replicate()])
replica_model = distribute_module(
model,
device_mesh,
)
with torch.autocast(device_type=self.device_type, dtype=torch.bfloat16):
output = replica_model(dt)
self.assertEqual(output.dtype, torch.bfloat16)
@with_comms
def test_distribute_module_meta(self):
# If the model is too big, the user may first the create entire model on the meta device and then initialize

View File

@ -14,6 +14,7 @@ from torch.distributed._tensor.placement_types import _Partial, Shard
from torch.distributed.device_mesh import _mesh_resources, DeviceMesh, init_device_mesh
from torch.distributed.distributed_c10d import (
_world,
get_global_rank,
get_world_size,
init_process_group,
@ -28,7 +29,6 @@ from torch.testing._internal.common_utils import (
)
from torch.testing._internal.distributed._tensor.common_dtensor import (
DTensorTestBase,
skip_if_lt_x_gpu,
skip_unless_torch_gpu,
with_comms,
)
@ -79,10 +79,7 @@ class DeviceMeshTest(DTensorTestBase):
@with_comms
@run_with_both_funcol_impls
@skip_unless_torch_gpu
def test_get_group(self):
# TODO: `test_get_group` still periodically timeout on cpu
# remove `@skip_unless_torch_gpu` after the problem is fixed.
mesh_shape = (2, self.world_size // 2)
mesh_2d = init_device_mesh(
self.device_type, mesh_shape, mesh_dim_names=("dp", "tp")
@ -103,10 +100,7 @@ class DeviceMeshTest(DTensorTestBase):
@with_comms
@run_with_both_funcol_impls
@skip_unless_torch_gpu
def test_get_local_rank_raises_exception(self):
# TODO: `test_get_local_rank_raises_exception` still periodically timeout on cpu
# remove `@skip_unless_torch_gpu` after the problem is fixed.
mesh_shape = (2, self.world_size // 2)
mesh_2d = init_device_mesh(
self.device_type, mesh_shape, mesh_dim_names=("dp", "tp")
@ -120,10 +114,7 @@ class DeviceMeshTest(DTensorTestBase):
@with_comms
@run_with_both_funcol_impls
@skip_unless_torch_gpu
def test_get_local_rank(self):
# TODO: `test_get_local_rank` still periodically times out on CPU;
# remove `@skip_unless_torch_gpu` after the problem is fixed.
mesh_shape = (2, self.world_size // 2)
mesh_2d = init_device_mesh(
self.device_type, mesh_shape, mesh_dim_names=("dp", "tp")
@@ -162,7 +153,17 @@ class DeviceMeshTest(DTensorTestBase):
)
self.assertEqual(global_ranks, current_rank_expected_group_ranks)
@run_with_both_funcol_impls
@with_comms
def test_device_mesh_init_backend(self):
mesh = DeviceMesh(self.device_type, [1], _init_backend=False)
with self.assertRaisesRegex(RuntimeError, "process groups not initialized!"):
mesh.get_group()
# coordinates should always be populated when init_backend is False, as whenever
# we call init_backend we should make sure the default pg is already created
mesh.get_coordinate()
def test_fake_pg_device_mesh(self):
fake_store = FakeStore()
init_process_group("fake", store=fake_store, rank=0, world_size=self.world_size)
@@ -276,71 +277,47 @@ class TestDeviceMeshGetItem(DTensorTestBase):
@with_comms
@run_with_both_funcol_impls
def test_raises_invalid_mesh_dim_names(self):
error_msg = "Invalid mesh_dim_name"
# Case 1: the DeviceMesh does not have a mesh_dim_names attribute
with self.assertRaisesRegex(
RuntimeError, "Cannot slice a DeviceMesh without mesh_dim_names."
):
def test_raises_no_mesh_dim_found(self):
with self.assertRaisesRegex(KeyError, "No `mesh_dim_names` found."):
mesh = init_device_mesh(self.device_type, (2, 4))
child_mesh = mesh["DP"]
child_mesh_dim_names = "PP"
with self.assertRaisesRegex(ValueError, error_msg):
@with_comms
@run_with_both_funcol_impls
def test_raises_invalid_mesh_dim_name(self):
child_mesh_dim_name = "PP"
with self.assertRaisesRegex(
KeyError, f"Mesh dimension '{child_mesh_dim_name}' does not exist."
):
mesh_dim_names = ("DP", "TP")
mesh = init_device_mesh(
self.device_type, (2, 4), mesh_dim_names=mesh_dim_names
)
child_mesh = mesh[child_mesh_dim_names]
# Case 2
child_mesh_dim_names = ["PP", "CP"]
with self.assertRaisesRegex(ValueError, error_msg):
mesh_dim_names = ("DP", "TP")
mesh = init_device_mesh(
self.device_type, (2, 4), mesh_dim_names=mesh_dim_names
)
child_mesh = mesh[child_mesh_dim_names]
# Case 3: a given child_mesh_dim_name is not a contiguous subset of the parent mesh's mesh_dim_names.
child_mesh_dim_names = ("TP", "DP")
with self.assertRaisesRegex(ValueError, error_msg):
mesh_dim_names = ("DP", "TP")
mesh = init_device_mesh(
self.device_type, (2, 4), mesh_dim_names=mesh_dim_names
)
child_mesh = mesh[child_mesh_dim_names]
# Case 4
child_mesh_dim_names = ("PP", "TP")
with self.assertRaisesRegex(ValueError, error_msg):
mesh_dim_names = ("PP", "DP", "TP")
mesh = init_device_mesh(
self.device_type, (2, 2, 2), mesh_dim_names=mesh_dim_names
)
child_mesh = mesh[child_mesh_dim_names]
child_mesh = mesh[child_mesh_dim_name]
@with_comms
@run_with_both_funcol_impls
@skip_if_lt_x_gpu(8)
def test_get_item_2d(self):
# TODO: `test_get_item_2d` still periodically times out on CPU;
# remove `@skip_if_lt_x_gpu` after the problem is fixed.
def test_get_item(self):
mesh_shape = (2, 4)
mesh_dim_names = ("DP", "TP")
mesh_2d = init_device_mesh(
self.device_type, mesh_shape, mesh_dim_names=mesh_dim_names
)
pg_ranks_by_dim_name = {}
for mesh_dim_name in mesh_dim_names:
mesh_dim = mesh_dim_names.index(mesh_dim_name)
pg_ranks_by_dim_name[mesh_dim_name] = mesh_2d.mesh.swapdims(
-1, mesh_dim
).reshape(-1, mesh_2d.mesh.size(mesh_dim))
tp_mesh = mesh_2d["TP"]
tp_group = [[0, 1, 2, 3], [4, 5, 6, 7]]
tp_group_idx = self.rank // 4
self.assertEqual(tp_mesh.mesh.tolist(), tp_group[tp_group_idx])
self.assertEqual(tp_mesh.mesh, pg_ranks_by_dim_name["TP"][tp_group_idx])
dp_mesh = mesh_2d["DP"]
dp_group = [[0, 4], [1, 5], [2, 6], [3, 7]]
dp_group_idx = self.rank % 4
self.assertEqual(dp_mesh.mesh.tolist(), dp_group[dp_group_idx])
self.assertEqual(mesh_2d["DP"].mesh, pg_ranks_by_dim_name["DP"][dp_group_idx])
@with_comms
@run_with_both_funcol_impls
@@ -351,50 +328,32 @@ class TestDeviceMeshGetItem(DTensorTestBase):
dp_mesh = mesh["dp"]
self.assertEqual(dp_mesh, mesh)
with self.assertRaisesRegex(ValueError, "Invalid mesh_dim_name"):
with self.assertRaisesRegex(RuntimeError, "Invalid mesh_dim_name"):
dp_mesh = mesh["dim0"]
@with_comms
@skip_if_lt_x_gpu(8)
def test_get_item_3d(self):
# TODO: `test_get_item_3d` still periodically times out on CPU;
# remove `@skip_if_lt_x_gpu` after the problem is fixed.
mesh_shape = (2, 2, 2)
mesh_dim_names = ("Replicate", "Shard", "TP")
mesh_3d = init_device_mesh(
self.device_type, mesh_shape, mesh_dim_names=mesh_dim_names
)
@run_with_both_funcol_impls
def test_cache_and_reuse_submesh_slice_result(self):
mesh = init_device_mesh(self.device_type, (2, 4), mesh_dim_names=("dp", "tp"))
tp_group = [[0, 1], [2, 3], [4, 5], [6, 7]]
tp_group_idx = int(self.rank / 2)
self.assertEqual(mesh_3d["TP"].mesh.tolist(), tp_group[tp_group_idx])
dp_mesh = mesh["dp"]
ref_pg_count = _world.group_count
shard_group = [[0, 2], [1, 3], [4, 6], [5, 7]]
shard_group_idx = self.rank % 2 + self.rank // 4 * 2
self.assertEqual(mesh_3d["Shard"].mesh.tolist(), shard_group[shard_group_idx])
# When we call the "dp" slice second time, it should not create any new pg.
# As we are just using the cached result so the pg count should be the same.
dp_mesh_2 = mesh["dp"]
self.assertEqual(ref_pg_count, _world.group_count)
replicate_group = [[0, 4], [1, 5], [2, 6], [3, 7]]
replicate_group_idx = self.rank % 4
self.assertEqual(
mesh_3d["Replicate"].mesh.tolist(), replicate_group[replicate_group_idx]
)
# We support both UX for nD slicing.
# mesh_3d[["Replicate", "Shard"]] or mesh_3d["Replicate", "Shard"]
hsdp_mesh_1 = mesh_3d[["Replicate", "Shard"]]
hsdp_mesh_2 = mesh_3d["Replicate", "Shard"]
hsdp_group = [[[0, 2], [4, 6]], [[1, 3], [5, 7]]]
hsdp_group_idx = self.rank % 2
self.assertEqual(hsdp_mesh_1.mesh.tolist(), hsdp_group[hsdp_group_idx])
self.assertEqual(hsdp_mesh_2.mesh.tolist(), hsdp_group[hsdp_group_idx])
self.assertEqual(hsdp_mesh_1, hsdp_mesh_2)
# When we call the "tp" slice, it should not create a new pg, as the "tp" slice would
# just reuse the parent mesh pg.
tp_mesh = mesh["tp"]
self.assertEqual(_world.group_count, ref_pg_count)
@instantiate_parametrized_tests
class TestMeshEnv(DTensorTestBase):
@with_comms
@run_with_both_funcol_impls
@skip_unless_torch_gpu
def test_get_parent_mesh(self):
mesh_shape = (2, self.world_size // 2)
mesh_dim_names = ("DP", "TP")
@@ -415,7 +374,6 @@ class TestMeshEnv(DTensorTestBase):
@with_comms
@run_with_both_funcol_impls
@skip_unless_torch_gpu
def test_get_parent_mesh_dim_exist(self):
mesh_shape = (2, self.world_size // 2)
mesh_dim_names = ("DP", "TP")
@@ -428,7 +386,6 @@ class TestMeshEnv(DTensorTestBase):
@with_comms
@run_with_both_funcol_impls
@skip_unless_torch_gpu
def test_get_parent_mesh_dim_not_exist(self):
mesh_shape = (self.world_size,)
mesh = init_device_mesh(self.device_type, mesh_shape)
@@ -437,7 +394,6 @@ class TestMeshEnv(DTensorTestBase):
@with_comms
@run_with_both_funcol_impls
@skip_unless_torch_gpu
def test_get_mesh_dim_by_name(self):
mesh_shape = (2, self.world_size // 2)
mesh_dim_names = ("DP", "TP")
@@ -555,9 +511,11 @@ class DeviceMeshCollectiveTest(DTensorTestBase):
torch.chunk(big_tensor, device_mesh.size(), dim=shard_dim)
)
unpadded_list = [
shard_placement._unpad_tensor(big_tensor_chunks[i], pad_sizes[i])
if pad_sizes[i] > 0
else big_tensor_chunks[i]
(
shard_placement._unpad_tensor(big_tensor_chunks[i], pad_sizes[i])
if pad_sizes[i] > 0
else big_tensor_chunks[i]
)
for i, big_tensor in enumerate(big_tensor_chunks)
]
all_gathered_tensor = torch.cat(unpadded_list, dim=shard_dim)
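A condensed sketch of the caching behavior asserted in `test_cache_and_reuse_submesh_slice_result` earlier in this file; it assumes all 8 ranks of a torchrun job execute it with the default process group already initialized:

```python
# Hedged sketch: the second "dp" slice must hit the submesh cache, and the
# "tp" slice reuses the parent mesh's groups, so the group count is stable.
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.distributed_c10d import _world

mesh = init_device_mesh("cpu", (2, 4), mesh_dim_names=("dp", "tp"))
dp_mesh = mesh["dp"]      # the first slice may create sub-groups
ref_pg_count = _world.group_count
dp_mesh_2 = mesh["dp"]    # cached result, no new process group
tp_mesh = mesh["tp"]      # reuses the parent mesh's process group
assert _world.group_count == ref_pg_count
```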


@@ -48,7 +48,6 @@ from torch._dynamo.testing import (
same,
skipIfNotPy311,
unsupported,
xfailIfPy311,
)
from torch._dynamo.utils import CompileProfiler, counters, ifdynstaticdefault
from torch._inductor.utils import run_and_get_code
@@ -9740,7 +9739,6 @@ fn
lambda mod: mod.fc,
)
@xfailIfPy311
def test_sequential_module_free(self):
self._test_compile_model_free(
lambda: (


@@ -19,10 +19,10 @@ from torch.fx.experimental.symbolic_shapes import (
StatelessSymbolicContext,
)
from torch.nested._internal.nested_tensor import (
buffer_from_jagged,
jagged_from_list,
jagged_from_tensor_and_lengths,
ViewBufferFromNested,
nested_view_from_values_offsets,
NestedTensor,
)
from torch.testing._internal.common_utils import (
instantiate_parametrized_tests,
@@ -1166,19 +1166,20 @@ class TestNestedTensor(torch._dynamo.test_case.TestCase):
a = torch.randn(2, 3, requires_grad=True, dtype=torch.float64)
b = torch.randn(3, 3, requires_grad=True, dtype=torch.float64)
c = torch.randn(4, 3, requires_grad=True, dtype=torch.float64)
nt, offsets = jagged_from_list([a, b, c], None)
nt2, _ = jagged_from_list([a, b, c], offsets)
nt = torch.nested.as_nested_tensor([a, b, c], layout=torch.jagged)
# TODO: Switch to public API when it exists
nt2, _ = jagged_from_list([a, b, c], nt.offsets())
def fn1(nt1, nt2):
return (nt1 + nt2).sin().cos()
compiled_f = torch.compile(fn1, fullgraph=True, backend=backend, dynamic=True)
out = compiled_f(nt, nt2)
out_buffer = ViewBufferFromNested.apply(out)
out_buffer = out.values()
ga, gb, gc = torch.autograd.grad(out_buffer.sum(), (a, b, c))
out_ref = fn1(nt, nt2)
out_buffer_ref = ViewBufferFromNested.apply(out_ref)
out_buffer_ref = out_ref.values()
ga_ref, gb_ref, gc_ref = torch.autograd.grad(out_buffer_ref.sum(), (a, b, c))
self.assertTrue(torch.allclose(ga, ga_ref))
@@ -1218,10 +1219,10 @@ class TestNestedTensor(torch._dynamo.test_case.TestCase):
ret = fn_c(nt, y)[0]
ref = fn(nt_copy, y_copy)[0]
self.assertEqual(buffer_from_jagged(ret), buffer_from_jagged(ref))
self.assertEqual(ret.values(), ref.values())
buffer_from_jagged(ret).sum().backward()
buffer_from_jagged(ref).sum().backward()
ret.values().sum().backward()
ref.values().sum().backward()
for ref_v, res_v in zip(values_copy, values):
self.assertEqual(ref_v.grad, res_v.grad)
@@ -1254,83 +1255,112 @@ class TestNestedTensor(torch._dynamo.test_case.TestCase):
self._check_recompiles(fn, (nt,), (nt3,), True)
def _get_views(self):
# There are three cases to consider here based on the logic in
# meta_utils.py
#
# (1) basic case:
# view is not a leaf and has the same requires_grad as its base
x, _ = self._get_jagged_tensor(((2, 3, 4), 3), None, requires_grad=True)
self.assertEqual(x.is_leaf, False)
yield x.unsqueeze(-1)
# Test all cases with both an NT base and a dense base
# Subclass -> Subclass
# Dense -> Subclass
for base_is_nt in [False, True]:
# There are three cases to consider here based on the logic in
# meta_utils.py
#
# (1) basic case:
# view is not a leaf and has the same requires_grad as its base
x, _ = self._get_jagged_tensor(((2, 3, 4), 3), None, requires_grad=True)
x = x.clone() if base_is_nt else x
self.assertEqual(x.is_leaf, False)
yield x.unsqueeze(-1)
# (2) leaf view case:
# the view has to be a leaf (w/ requires_grad True or requires_grad False)
# base w/ requires_grad True or requires_grad False
for requires_grad_1, requires_grad_2 in itertools.product(
[True, False], repeat=2
):
x, _ = self._get_jagged_tensor(
((2, 3, 4), 3), None, requires_grad=requires_grad_1
)
# (2) leaf view case:
# the view has to be a leaf (w/ requires_grad True or requires_grad False)
# base w/ requires_grad True or requires_grad False
for requires_grad_1, requires_grad_2 in itertools.product(
[True, False], repeat=2
):
x, _ = self._get_jagged_tensor(
((2, 3, 4), 3), None, requires_grad=requires_grad_1
)
x = x.clone() if base_is_nt else x
with torch.no_grad():
x_view = x.unsqueeze(-1)
# The issue is this doesn't quite work
x_view.requires_grad_(requires_grad_2)
yield x_view
# (3) obscure case:
# view is not a leaf (implies requires_grad True)
# base w/ requires_grad False)
x, _ = self._get_jagged_tensor(((2, 3, 4), 3), None, requires_grad=False)
x = x.clone() if base_is_nt else x
# intermediate leaf view
with torch.no_grad():
x_view = x.unsqueeze(-1)
# The issue is this doesn't quite work
x_view.requires_grad_(requires_grad_2)
yield x_view
x_view.requires_grad_(True)
x_view_view = x_view.unsqueeze(-1)
yield x_view_view
# (3) obscure case:
# view is not a leaf (implies requires_grad True)
# base w/ requires_grad False)
x, _ = self._get_jagged_tensor(((2, 3, 4), 3), None, requires_grad=False)
# intermediate leaf view
with torch.no_grad():
x_view = x.unsqueeze(-1)
x_view.requires_grad_(True)
x_view_view = x_view.unsqueeze(-1)
yield x_view_view
# Subclass -> Dense
x = self._get_jagged_tensor(((2, 3, 4), 3), None, requires_grad=True)[0].clone()
yield x.values()
# Dense -> Subclass -> Dense -> Subclass
values = torch.randn(10, 5)
offsets = torch.tensor([0, 3, 6, 10])
offsets2 = offsets.clone().detach()
yield nested_view_from_values_offsets(
nested_view_from_values_offsets(values, offsets).values(), offsets
)
def _input_view_test(self, nt_view):
def fn(x):
return x.sin()
out_ref = fn(nt_view)
torch._dynamo.reset()
compile_fn = torch.compile(
fn, fullgraph=True, backend="aot_eager", dynamic=True
)
out = compile_fn(nt_view)
# Check metadata and values are correct
self.assertTrue(out.size() == out_ref.size())
self.assertTrue(out.stride() == out_ref.stride())
if out.is_nested:
self.assertTrue(torch.allclose(out.values(), out_ref.values()))
else:
self.assertTrue(torch.allclose(out, out_ref))
# Check that no upper/lower bound guards are incurred
def backend(gm, args):
context = torch._guards.TracingContext.get()
guards = [str(g.expr) for g in context.fake_mode.shape_env.guards]
# varies based on the type of view
guard_str = "\n".join(guards)
if isinstance(nt_view._base, NestedTensor):
self.assertExpectedInline(guard_str, """Eq(s3 - 1, s0)""")
else:
self.assertExpectedInline(guard_str, """""")
return gm
torch._dynamo.reset()
compile_fn = torch.compile(fn, fullgraph=True, backend=backend, dynamic=True)
out = compile_fn(nt_view)
def test_inputs_to_compiled_fn_are_views(self):
for nt_view in self._get_views():
self._input_view_test(nt_view)
def fn(x):
return x.sin()
out_ref = fn(nt_view)
torch._dynamo.reset()
compile_fn = torch.compile(
fn, fullgraph=True, backend="aot_eager", dynamic=True
)
out = compile_fn(nt_view)
# Check metadata and values are correct
self.assertTrue(out.size() == out_ref.size())
self.assertTrue(out.stride() == out_ref.stride())
self.assertTrue(torch.allclose(out.values(), out_ref.values()))
# Check that no upper/lower bound guards are incurred
def backend(gm, args):
context = torch._guards.TracingContext.get()
guards = [str(g.expr) for g in context.fake_mode.shape_env.guards]
ranges = [
f"{s}: [{vr.lower}, {vr.upper}]"
for s, vr in context.fake_mode.shape_env.var_to_range.items()
]
self.assertExpectedInline("\n".join(guards), """Eq(s3 - 1, s0)""")
self.assertExpectedInline(
"\n".join(ranges),
"""\
s0: [2, 9223372036854775805]
s2: [2, 9223372036854775806]
s3: [3, 9223372036854775806]
s5: [2, 9223372036854775806]""",
)
return gm
torch._dynamo.reset()
compile_fn = torch.compile(
fn, fullgraph=True, backend=backend, dynamic=True
)
out = compile_fn(nt_view)
# NJT1 -> Dense -> NJT2 -> Dense view
# During view replay, the Dense -> NJT2 part will construct an intermediate,
# symbolically-sized NJT that is immediately deconstructed to return the final dense
# view. To construct this intermediate properly, we need the associated nested int
# to be symbolic. This view is expected to fail compilation until symbolic nested ints
# are cached onto fake offsets to solve this problem.
@unittest.expectedFailure
def test_subclass_dense_subclass_dense_view(self):
x = self._get_jagged_tensor(((2, 3, 4), 3), None, requires_grad=True)[0].clone()
offsets2 = x.offsets().clone().detach()
nt_view = nested_view_from_values_offsets(x.values(), offsets2).values()
self._input_view_test(nt_view)
if __name__ == "__main__":
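The changes above migrate from the private `jagged_from_list`/`ViewBufferFromNested` helpers to the public jagged-layout API; a minimal standalone sketch of that API:

```python
import torch

# Build a jagged NestedTensor via the public API; .values() exposes the
# packed dense buffer and .offsets() the per-component boundaries.
a, b, c = torch.randn(2, 3), torch.randn(3, 3), torch.randn(4, 3)
nt = torch.nested.as_nested_tensor([a, b, c], layout=torch.jagged)
assert nt.values().shape == (9, 3)            # 2 + 3 + 4 packed rows
assert nt.offsets().tolist() == [0, 2, 5, 9]  # component boundaries
```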


@@ -435,6 +435,13 @@ aten::_nested_from_padded
aten::_nested_from_padded.out
aten::_nested_from_padded_and_nested_example
aten::_nested_from_padded_and_nested_example.out
aten::_nested_get_jagged_dummy
aten::_nested_get_lengths
aten::_nested_get_offsets
aten::_nested_get_ragged_idx
aten::_nested_get_values
aten::_nested_get_values_copy
aten::_nested_get_values_copy.out
aten::_nested_select_backward
aten::_nested_sum_backward
aten::_nested_tensor_from_mask
@@ -452,6 +459,9 @@ aten::_nested_tensor_strides.out
aten::_nested_view_from_buffer
aten::_nested_view_from_buffer_copy
aten::_nested_view_from_buffer_copy.out
aten::_nested_view_from_jagged
aten::_nested_view_from_jagged_copy
aten::_nested_view_from_jagged_copy.out
aten::_new_zeros_with_same_feature_meta
aten::_new_zeros_with_same_feature_meta.out
aten::_nnpack_spatial_convolution


@@ -72,12 +72,28 @@ from torch.export import export
torch.library.define("testlib::returns_tensor_symint", "(Tensor x) -> (Tensor, SymInt)")
torch.library.define(
"testlib::foo",
"(Tensor(a!) x, Tensor(b!) z) -> (Tensor, Tensor, Tensor)",
tags=torch.Tag.pt2_compliant_tag,
)
@torch.library.impl("testlib::returns_tensor_symint", "cpu")
@torch.library.impl_abstract("testlib::returns_tensor_symint")
def returns_tensor_symint_impl(x):
return x, x.shape[0]
@torch.library.impl("testlib::foo", "cpu")
@torch._dynamo.disable
def foo_impl(x, z):
x.add_(5)
z.add_(5)
return x, z, x + z
@torch.library.impl_abstract("testlib::foo")
def foo_abstract(x, z):
return x, z, x + z
@unittest.skipIf(not torchdynamo.is_dynamo_supported(), "dynamo isn't supported")
class TestDynamismExpression(TestCase):
@@ -3551,6 +3567,30 @@ def forward(self, arg0_1, arg1_1, arg2_1):
self._test_export_same_as_eager(Module(), (torch.randn(4, 4),))
def test_custom_op_auto_functionalize(self):
class M(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, x, z):
return torch.ops.testlib.foo(x, z)
inps = (torch.ones(5), torch.ones(5))
inps_for_export = (torch.ones(5), torch.ones(5))
inps_for_export_with_decomp = (torch.ones(5), torch.ones(5))
ep = torch.export.export(M(), inps_for_export)
x_new_eager, z_new_eager, legit_eager = M()(*inps)
x_new_export, z_new_export, legit_export = ep.module()(*inps_for_export)
self.assertTrue(torch.allclose(x_new_eager, x_new_export))
self.assertTrue(torch.allclose(z_new_eager, z_new_export))
self.assertTrue(torch.allclose(legit_eager, legit_export))
ep = ep.run_decompositions()
x_new_export, z_new_export, legit_export = ep.module()(*inps_for_export_with_decomp)
self.assertTrue(torch.allclose(x_new_eager, x_new_export))
self.assertTrue(torch.allclose(z_new_eager, z_new_export))
self.assertTrue(torch.allclose(legit_eager, legit_export))
@unittest.skipIf(not torchdynamo.is_dynamo_supported(), "dynamo isn't supported")
class TestOneOffModelExportResult(TestCase):
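For reference, a condensed sketch of the mutable custom-op pattern registered above; the `sketchlib` namespace and `add_five` op are hypothetical stand-ins for `testlib::foo`:

```python
import torch
from torch.export import export

# Hypothetical sketchlib::add_five op, mirroring testlib::foo above: it
# mutates its input in place, so export must auto-functionalize the call.
torch.library.define(
    "sketchlib::add_five",
    "(Tensor(a!) x) -> Tensor",
    tags=torch.Tag.pt2_compliant_tag,
)

@torch.library.impl("sketchlib::add_five", "cpu")
def add_five_impl(x):
    x.add_(5)          # in-place mutation of the input
    return x.clone()   # return a fresh tensor

@torch.library.impl_abstract("sketchlib::add_five")
def add_five_abstract(x):
    return torch.empty_like(x)

class M(torch.nn.Module):
    def forward(self, x):
        return torch.ops.sketchlib.add_five(x)

# Eager and exported results should agree, as the test above asserts.
eager_out = M()(torch.ones(5))
exported_out = export(M(), (torch.ones(5),)).module()(torch.ones(5))
assert torch.allclose(eager_out, exported_out)
```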


@@ -7,8 +7,8 @@ with test_functionalization_with_native_python_assertion)
import math
import operator
import unittest
from typing import List, Set
from re import escape
from typing import List, Set
import torch
from functorch.experimental.control_flow import cond
@@ -17,22 +17,38 @@ from torch._export.pass_base import _ExportPassBaseDeprecatedDoNotUse
from torch._export.passes.functionalize_side_effectful_ops_pass import (
_FunctionalizeSideEffectfulOpsPass,
)
from torch._export.passes.replace_set_grad_with_hop_pass import (
_is_set_grad_enabled_node,
_is_set_grad_enabled_sub_mod,
)
from torch._export.passes.replace_view_ops_with_view_copy_ops_pass import (
get_view_copy_of_view_op,
is_view_op,
ReplaceViewOpsWithViewCopyOpsPass,
)
from torch._export.utils import (
node_inline_,
nodes_count,
nodes_filter,
nodes_map,
sequential_split,
)
from torch._higher_order_ops.auto_functionalize import auto_functionalized
from torch.export import export
from torch.export._remove_auto_functionalized_pass import (
unsafe_remove_auto_functionalized_pass,
)
from torch.fx.passes.infra.partitioner import Partition
from torch.fx.passes.operator_support import OperatorSupport
from torch.library import impl, _scoped_library
from torch.testing import FileCheck
from torch.testing._internal.common_utils import run_tests, TestCase, skipIfTorchDynamo, IS_WINDOWS
from torch.utils import _pytree as pytree
from torch._export.utils import sequential_split, nodes_filter, nodes_map, node_inline_, nodes_count
from torch._export.passes.replace_set_grad_with_hop_pass import (
_is_set_grad_enabled_node, _is_set_grad_enabled_sub_mod
from torch.testing._internal.common_utils import (
IS_WINDOWS,
run_tests,
skipIfTorchDynamo,
TestCase,
)
from torch.utils import _pytree as pytree
def count_call_function(graph: torch.fx.Graph, target: torch.ops.OpOverload) -> int:
count = 0
@@ -620,5 +636,96 @@ def forward(self, sin, cos):
self.assertEqual(before_str, after_inline_str)
self.assertEqual(gm(*args), new_gm(*args))
def test_remove_auto_functionalized_pass(self) -> None:
with _scoped_library("DO_NOT_USE_TEST_ONLY", "DEF") as lib:
lib.define("custom_mutator(Tensor x, Tensor(a!) y) -> Tensor")
@impl(lib, "custom_mutator", "Meta")
def custom_mutator_meta(
x: torch.Tensor,
y: torch.Tensor,
) -> torch.Tensor:
return torch.empty_like(x)
@impl(lib, "custom_mutator", "CompositeExplicitAutograd")
def custom_mutator(
x: torch.Tensor,
y: torch.Tensor,
) -> torch.Tensor:
return x + y.add_(1)
class M(torch.nn.Module):
def __init__(self):
super().__init__()
self.register_buffer("state", torch.zeros(1))
def forward(self, x):
return torch.ops.DO_NOT_USE_TEST_ONLY.custom_mutator(x, self.state)
mod = M()
x = torch.randn([3, 3])
ep = export(mod, (x,))
inplace_ep = unsafe_remove_auto_functionalized_pass(ep)
nodes = inplace_ep.graph.nodes
for node in nodes:
if node.op == "call_function":
self.assertFalse(node.target is auto_functionalized)
self.assertFalse(node.target is operator.getitem)
for spec in inplace_ep.graph_signature.output_specs:
self.assertFalse("getitem" in spec.arg.name)
def test_remove_auto_functionalized_pass_tuple(self) -> None:
with _scoped_library("DO_NOT_USE_TEST_ONLY", "DEF") as lib:
lib.define("custom_mutator_tuple(Tensor x, Tensor(a!) y) -> (Tensor, Tensor)")
@impl(lib, "custom_mutator_tuple", "Meta")
def custom_mutator_tuple_meta(
x: torch.Tensor,
y: torch.Tensor,
):
return (torch.empty_like(x), torch.empty_like(x))
@impl(lib, "custom_mutator_tuple", "CompositeExplicitAutograd")
def custom_mutator_tuple(
x: torch.Tensor,
y: torch.Tensor,
):
return (x, x + y.add_(1))
class M(torch.nn.Module):
def __init__(self):
super().__init__()
self.register_buffer("state", torch.zeros(1))
def forward(self, x):
return torch.ops.DO_NOT_USE_TEST_ONLY.custom_mutator_tuple(
x, self.state
)
mod = M()
x = torch.randn([3, 3])
ep = export(mod, (x,))
inplace_ep = unsafe_remove_auto_functionalized_pass(ep)
nodes = inplace_ep.graph.nodes
getitems = 0
for node in nodes:
if node.op == "call_function":
self.assertFalse(node.target is auto_functionalized)
if node.target is operator.getitem:
getitems += 1
self.assertEqual(getitems, 2) # tuple return of len 2
out_specs = inplace_ep.graph_signature.output_specs
self.assertEqual(out_specs[0].arg.name, "arg0_1") # state
self.assertEqual(out_specs[1].arg.name, "getitem") # tuple return 1
self.assertEqual(out_specs[2].arg.name, "getitem_1") # tuple return 2
if __name__ == '__main__':
run_tests()
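For context on what the pass removes, a self-contained sketch showing that exporting a mutating custom op yields an `auto_functionalized` higher-order-op node in the graph; the `SKETCH_ONLY` library is hypothetical and mirrors the `custom_mutator` pattern above:

```python
import torch
from torch._higher_order_ops.auto_functionalize import auto_functionalized
from torch.export import export
from torch.library import impl, _scoped_library

with _scoped_library("SKETCH_ONLY", "DEF") as lib:
    lib.define("custom_mutator(Tensor x, Tensor(a!) y) -> Tensor")

    @impl(lib, "custom_mutator", "Meta")
    def custom_mutator_meta(x, y):
        return torch.empty_like(x)

    @impl(lib, "custom_mutator", "CompositeExplicitAutograd")
    def custom_mutator(x, y):
        return x + y.add_(1)

    class M(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.register_buffer("state", torch.zeros(1))

        def forward(self, x):
            return torch.ops.SKETCH_ONLY.custom_mutator(x, self.state)

    ep = export(M(), (torch.randn(3, 3),))
    hop_nodes = [
        n
        for n in ep.graph.nodes
        if n.op == "call_function" and n.target is auto_functionalized
    ]
    # Exactly one auto_functionalized node wraps the mutating op; this is
    # the node the removal pass rewrites back into an in-place call.
    assert len(hop_nodes) == 1
```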


@@ -322,6 +322,7 @@ if RUN_CPU:
BaseTest("test_relu"), # multiple inputs
BaseTest("test_repeat_interleave", "", test_cpu_repro.CPUReproTests()),
BaseTest("test_scalar_input"),
BaseTest("test_scalar_output"),
BaseTest("test_scaled_dot_product_attention"),
BaseTest("test_scatter1"),
BaseTest("test_scatter2"),


@@ -415,6 +415,12 @@ def check_model(
if check_has_compiled:
assert called, "Ran graph without calling compile_fx"
assert type(actual) == type(correct)
if isinstance(actual, (tuple, list)):
assert len(actual) == len(correct)
assert all(
type(actual_item) == type(correct_item)
for actual_item, correct_item in zip(actual, correct)
)
correct_flat, correct_spec = tree_flatten(correct)
actual_flat = pytree.tree_leaves(actual)
@@ -2452,6 +2458,20 @@ class CommonTemplate:
self.common(fn, [torch.randint(5, (1, 8)), 5400])
@torch._dynamo.config.patch(dynamic_shapes=True)
@torch._dynamo.config.patch(assume_static_by_default=False)
def test_scalar_output(self):
def fn(arg0_1, arg2_1):
arg1_1 = arg2_1.size(1)
view = torch.ops.aten.view.default(arg2_1, [-1, arg1_1])
embedding = torch.ops.aten.embedding.default(arg0_1, view)
full = torch.ops.aten.full.default([1, arg1_1], 1, dtype=torch.float32)
return (full, arg1_1, embedding)
arg0_1 = rand_strided((32128, 768), (768, 1), device="cpu", dtype=torch.float32)
arg2_1 = rand_strided((1, 22), (22, 1), device="cpu", dtype=torch.int64)
self.common(fn, [arg0_1, arg2_1])
def test_shape_prop_torch_ones(self):
class Model(torch.nn.Module):
def forward(self, attention_scores):
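A minimal `torch.compile` repro sketch of the scalar-output pattern covered by `test_scalar_output` above:

```python
import torch

# The compiled graph returns a Python int (a dynamic size) alongside a
# tensor, mirroring the (full, arg1_1, embedding) outputs in the test.
def fn(x):
    n = x.size(1)
    return torch.ones(1, n), n

compiled = torch.compile(fn, dynamic=True)
out, n = compiled(torch.randn(1, 22))
assert n == 22 and out.shape == (1, 22)
```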


@@ -5,7 +5,7 @@ import unittest
import torch
from torch.testing._internal.common_utils import IS_LINUX
from torch.testing._internal.common_utils import IS_LINUX, skipIfRocm
from torch.testing._internal.inductor_utils import HAS_GPU
try:
@@ -71,6 +71,7 @@ class TestTritonHeuristics(TestCase):
]
self.assertEqual(forward(*args), foo_c(*args))
@skipIfRocm
def test_artificial_zgrid(self):
self._test_artificial_zgrid()


@@ -8822,7 +8822,7 @@ get_out().sum().backward()
self.assertEqual(a.device, b.device)
self.assertEqual(a.dtype, b.dtype)
def _test_fn(fn, inp, *args):
def _test_fn(fn, inp, *args, use_unsafe_view_func=False):
outs = fn(inp, *args)
# handle functions that return multiple views (e.g. split)
if isinstance(outs, torch.Tensor):
@@ -8835,7 +8835,10 @@ get_out().sum().backward()
# forward view_func
new_inp = inp.clone()
_assert_match_metadata(new_inp, inp)
new_out = out._view_func(new_inp)
if use_unsafe_view_func:
new_out = out._view_func_unsafe(new_inp)
else:
new_out = out._view_func(new_inp)
_assert_match_metadata(new_out, out)
self.assertEqual(new_out, out)
@@ -8901,6 +8904,33 @@ get_out().sum().backward()
_test_fn(chain_with_only_current_view_func, torch.randn(2, 3, 4))
# TODO: Move this somewhere else
# test NT views
from torch.nested._internal.nested_tensor import nested_view_from_values_offsets
values = torch.randn(10, 5)
offsets = torch.tensor([0, 3, 6, 10])
_test_fn(nested_view_from_values_offsets, values, offsets)
nt = nested_view_from_values_offsets(values, offsets).clone().detach()
_test_fn(torch.ops.aten._nested_get_values.default, nt, use_unsafe_view_func=True)
def chain_nt_to_dense_back_and_forth(nt):
# NJT1 -> dense -> NJT2 -> dense
offsets2 = nt.offsets().clone().detach()
return nested_view_from_values_offsets(nt.values(), offsets2).values()
_test_fn(chain_nt_to_dense_back_and_forth, nt, use_unsafe_view_func=True)
def chain_dense_to_nt_back_and_forth(values, offsets):
offsets2 = offsets.clone().detach()
# dense -> NJT1 -> dense -> NJT2
return nested_view_from_values_offsets(
nested_view_from_values_offsets(values, offsets).values(),
offsets2)
_test_fn(chain_dense_to_nt_back_and_forth, values, offsets, use_unsafe_view_func=True)
def test_view_func_replay_with_modified_state(self):
with torch.autograd._force_original_view_tracking(True):
base = torch.randn(3, 4, 5)
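For orientation, a minimal dense-tensor sketch of the `_view_func` replay mechanism that the nested-tensor cases above extend:

```python
import torch

# Given a view `out` of `base`, out._view_func(new_base) replays the same
# view operation on a different base with matching metadata; the tracking
# context mirrors the autograd tests above.
with torch.autograd._force_original_view_tracking(True):
    base = torch.randn(2, 3, 4, requires_grad=True)
    out = base.transpose(0, 1)
new_base = base.detach().clone()
replayed = out._view_func(new_base)
assert replayed.shape == out.shape
assert torch.equal(replayed, out)
```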

Some files were not shown because too many files have changed in this diff.