Compare commits

...

72 Commits

Author SHA1 Message Date
63d5e9221b [EZ] Pin scipy to 1.12 for Py-3.12 (#127322)
[EZ] Pin scipy to 1.12 for Py-3.12 (#123795)

This caused false positive failures/reverts for https://github.com/pytorch/pytorch/pull/123689 and https://github.com/pytorch/pytorch/pull/123595

Fixes https://github.com/pytorch/pytorch/issues/123655

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123795
Approved by: https://github.com/huydhn

(cherry picked from commit 2a597cfd2c63459dd303cf7922eb4c3750a76e75)

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
2024-05-29 11:15:01 -04:00
91bdec37c8 Update hf_BirdBird periodic-dynamo-benchmarks results (#127312)
Update hf_BirdBird periodic-dynamo-benchmarks results (#126414)

Can't repro this regression, and nothing in the faulty PR range would cause it for only one model. The job is still causing noise, so we should mute it. I think just updating the graph break count is better than skipping the model here, since it's still passing.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/126414
Approved by: https://github.com/ezyang

(cherry picked from commit 5ea956a61f43f910f4192ee9cf00268db1ede5ef)

Co-authored-by: Simon Fan <xmfan@meta.com>
2024-05-28 20:41:42 -04:00
d44533f9d0 Put back "[Release only] Release 2.3 start using triton package from pypi" (#127290)
Revert "Revert "[Release only] Release 2.3 start using triton package from py…"

This reverts commit 194698a4ac82c485acbe6d2f40b7de063e28aa8f.
2024-05-28 09:51:00 -04:00
bd1040c3b0 [DSD] Fix to remove non_persistent buffer in distributed state dict (#125337) (#127219)
* [DSD] Fix to remove non_persistent buffer in distributed state dict (#125337)

Summary:
Fixes #122792

state_dict includes only persistent buffers, while named_buffers() would
include non_persistent buffers.
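
For context, the difference can be seen with a tiny standalone example (illustration only, not part of the fix):

```python
import torch
import torch.nn as nn

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer("persistent_buf", torch.zeros(1))
        self.register_buffer("transient_buf", torch.zeros(1), persistent=False)

m = M()
print("persistent_buf" in m.state_dict())   # True
print("transient_buf" in m.state_dict())    # False: non-persistent buffers are excluded
print(sorted(dict(m.named_buffers())))      # ['persistent_buf', 'transient_buf']
```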

Pull Request resolved: https://github.com/pytorch/pytorch/pull/125337
Approved by: https://github.com/awgu
ghstack dependencies: #125333, #125501, #125334, #125335, #125336

* lintrunner

* lint

---------

Co-authored-by: Chien-Chin Huang <chienchin@fb.com>
Co-authored-by: Andrey Talman <atalman@fb.com>
2024-05-27 11:56:58 -07:00
81b88543f0 [DSD] Add a test to verify FSDP lazy initialization case (#127069) (#127130)
* [DSD] Add a test to verify FSDP lazy initialization case (#127069)

Summary:
Distributed state_dict should not error out because the `model.state_dict()` will trigger FSDP to initialize.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127069
Approved by: https://github.com/wz337

* Add missing import get_optimizer_state_dict

---------

Co-authored-by: Andrey Talman <atalman@fb.com>
2024-05-27 12:41:56 -04:00
e63004b649 [DCP][state_dict] Remove the check of FSDP has root (#121544) (#126557)
Root may not exist due to FSDP lazy initialization.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/121544
Approved by: https://github.com/Skylion007
ghstack dependencies: #121273, #121276, #121290

Co-authored-by: Chien-Chin Huang <chienchin@fb.com>
2024-05-27 08:26:48 -04:00
00804a79e4 [DSD] Correctly handle _extra_state (#125336) (#126567)
* [DSD] Correctly handle _extra_state (#125336)

Summary:
distributed_state_dict should not try to use `getattr` to get `_extra_state` as this is not well-defined.
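
For context, `_extra_state` is a reserved state_dict key backed by the `get_extra_state`/`set_extra_state` hooks rather than a real attribute, so plain `getattr` is the wrong tool; a minimal standalone illustration:

```python
import torch.nn as nn

class M(nn.Module):
    def get_extra_state(self):
        # arbitrary picklable metadata; note there is no `_extra_state` attribute
        return {"version": 1}

    def set_extra_state(self, state):
        assert state == {"version": 1}

m = M()
sd = m.state_dict()
print("_extra_state" in sd)        # True: stored under the reserved key
print(hasattr(m, "_extra_state"))  # False: getattr would fail here
m.load_state_dict(sd)              # round-trips through set_extra_state
```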

Pull Request resolved: https://github.com/pytorch/pytorch/pull/125336
Approved by: https://github.com/LucasLLC
ghstack dependencies: #125333, #125501, #125334, #125335

* lint

* lint

---------

Co-authored-by: Chien-Chin Huang <chienchin@fb.com>
Co-authored-by: Andrey Talman <atalman@fb.com>
2024-05-27 08:24:23 -04:00
cd033a128c [Cherry-pick][DCP][AC] Add test for apply AC with FSDP1 (#126935) (#126992)
[DCP][AC] Add test for apply AC with FSDP1 (#126935)

Adding test for this cherry pick. https://github.com/pytorch/pytorch/pull/126559/

Pull Request resolved: https://github.com/pytorch/pytorch/pull/126935
Approved by: https://github.com/fegin
2024-05-24 10:35:55 -04:00
19058a60b0 Remove activation checkpointing tag to get correct FQNs (#124698) (#126559)
Fixes #124546

When setting `use_orig_params = False` and using activation checkpointing, the FQN mapping as retrieved by the `_get_fqns` function is incorrect because the prefix that is added to the name of each activation checkpointed module, `_checkpoint_wrapped_module`, can still be present. I think this is an edge case with the `_get_fqns` function that was not addressed by this previous commit #118119.

Without the change, the list of object names for an activation checkpointed module with FSDP (and `use_orig_params=False`) can be something like:
```
['model', '_fsdp_wrapped_module', 'transformer', 'blocks', '0', '_fsdp_wrapped_module', '_checkpoint_wrapped_module', '_flat_param']
```
Which will incorrectly return just one FQN, `{'model.transformer.blocks.0._flat_param'}`, when all the FQNs of the parameters of the transformer block should be returned.

With the change, the list of object names will now have `_checkpoint_wrapped_module` removed:
```
['model', '_fsdp_wrapped_module', 'transformer', 'blocks', '0', '_fsdp_wrapped_module', '_flat_param']
```
And the FQNs are correctly retrieved and returned in `_get_fqns` when [this condition](ea61c9cb29/torch/distributed/checkpoint/state_dict.py (L168)) is satisfied. The correct FQNs are:
```
{'model.transformer.blocks.0.attn.Wqkv.bias', 'model.transformer.blocks.0.ffn.up_proj.bias',
'model.transformer.blocks.0.attn.out_proj.weight', 'model.transformer.blocks.0.norm_2.weight',
'model.transformer.blocks.0.ffn.down_proj.weight', 'model.transformer.blocks.0.attn.Wqkv.weight',
'model.transformer.blocks.0.norm_2.bias', 'model.transformer.blocks.0.ffn.up_proj.weight',
'model.transformer.blocks.0.ffn.down_proj.bias', 'model.transformer.blocks.0.norm_1.bias',
'model.transformer.blocks.0.norm_1.weight', 'model.transformer.blocks.0.attn.out_proj.bias'}
```
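
A minimal sketch of the wrapper-segment stripping described above (hypothetical helper, not the actual `_get_fqns` code):

```python
AC_WRAPPER = "_checkpoint_wrapped_module"

def strip_ac_wrapper(obj_names):
    """Drop the activation-checkpoint wrapper segment from an object-name path."""
    return [name for name in obj_names if name != AC_WRAPPER]

obj_names = ['model', '_fsdp_wrapped_module', 'transformer', 'blocks', '0',
             '_fsdp_wrapped_module', '_checkpoint_wrapped_module', '_flat_param']
print(strip_ac_wrapper(obj_names))
# ['model', '_fsdp_wrapped_module', 'transformer', 'blocks', '0',
#  '_fsdp_wrapped_module', '_flat_param']
```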

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124698
Approved by: https://github.com/Skylion007

Co-authored-by: Saaketh <narayan.saaketh@gmail.com>
2024-05-24 10:35:22 -04:00
30650e0add [FSDP1] fix _same_storage check for DTensor (#123617) (#126957)
For FSDP (SHARD_GRAD_OP + use_orig_params) + TP, params in the backward are DTensors. However, ``DTensor.untyped_storage().data_ptr()`` does not work in ``_same_storage``, so we desugar to ``DTensor._local_tensor.untyped_storage().data_ptr()``. https://github.com/pytorch/pytorch/issues/123272

Credit to @bigning for the original fix. After landing, we would no longer need the patching in Mosaic Composer: https://github.com/mosaicml/composer/pull/3175/files
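
A rough sketch of the storage comparison after the fix (hypothetical helper names; the real check lives inside FSDP):

```python
import torch
from torch.distributed._tensor import DTensor  # DTensor import path as of 2.3

def _storage_data_ptr(t: torch.Tensor) -> int:
    # Unwrap DTensor to its local shard first, since calling
    # untyped_storage().data_ptr() directly on a DTensor is not meaningful here.
    if isinstance(t, DTensor):
        t = t._local_tensor
    return t.untyped_storage().data_ptr()

def same_storage(a: torch.Tensor, b: torch.Tensor) -> bool:
    return _storage_data_ptr(a) == _storage_data_ptr(b)
```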

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123617
Approved by: https://github.com/awgu
2024-05-23 19:53:46 -04:00
661c3de2a7 [release/2.3] Fix miopenStatusInternalError caused in new ROCm6.0 CI docker images (#126942)
* Update setting of journal_mode to delete

* Revert "[Release only] Pin rocm docker images (#126452)"

This reverts commit ee68b41571287aaecf4216f752fb592496fea49e.

* Replace tabs with spaces for lint
2024-05-23 16:49:49 -04:00
71dd2de836 [release/2.3] Added cublasGemmAlgo_t -> hipblasGemmAlgo_t (#126448)
Added cublasGemmAlgo_t -> hipblasGemmAlgo_t
2024-05-23 11:09:21 -04:00
6cd59f1f07 [dynamo] use proxies to nn.Module in dynamo generated GraphModules (#126332)
* [dynamo] use proxies to nn.Module in dynamo generated GraphModules (#120756)

Fixes remaining refleaks found when debugging https://github.com/pytorch/pytorch/issues/119607, tests added in https://github.com/pytorch/pytorch/pull/120657.

Also fixes some tests that xfail: https://github.com/pytorch/pytorch/issues/120631 (not entirely sure why), but introduced tests now fail.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/120756
Approved by: https://github.com/jansel

* [dynamo] use proxies to nn.Module in dynamo generated GraphModules (#120756)

Fixes remaining refleaks found when debugging https://github.com/pytorch/pytorch/issues/119607, tests added in https://github.com/pytorch/pytorch/pull/120657.

Also fixes some tests that xfail: https://github.com/pytorch/pytorch/issues/120631 (not entirely sure why), but introduced tests now fail.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/120756
Approved by: https://github.com/jansel
2024-05-22 12:56:06 -04:00
ee68b41571 [Release only] Pin rocm docker images (#126452)
* Pin rocm docker images

* test

* test

* test

* test
2024-05-17 13:43:42 -04:00
0365423035 Pin triton-rocm to latest 2.3.1 commit (#126309)
* Revert "pin rocm"

This reverts commit 45ebb10bdd22e00c9035fbacf00634938941a192.

* Revert "lint"

This reverts commit 05860b9a444c96424cb0203fca90d7dd52fbc83c.

* rocm_pin
2024-05-15 16:14:39 -04:00
03baf94aae [EZ] Get rid of utf-8 quotes (#126301)
Replace `“important”` with `"important"` and `Taylor’s` with `Taylor's`

Fixes the obvious symptoms of https://github.com/pytorch/pytorch/issues/124897

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
2024-05-15 15:03:37 -04:00
7782f2866c [dynamo] fix 3.11+ refleak for 2.3.1 release (#126235) 2024-05-15 13:21:19 -04:00
be9a4076f0 Do not import transformers when import torch._dynamo (#124634) (#125755)
Fixes https://github.com/pytorch/pytorch/issues/123954

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124634
Approved by: https://github.com/thiagocrepaldi, https://github.com/Chillee
ghstack dependencies: #124343
2024-05-15 13:15:44 -04:00
d114e0488c [Release only] Advance triton version to 2.3.1 (#126204)
* Advance triton version to 2.3.1

* update_commit_pin

* pin

* pin rocm

* lint
2024-05-15 11:13:52 -04:00
194698a4ac Revert "[Release only] Release 2.3 start using triton package from pypi" (#126202)
Revert "[Release only] Release 2.3 start using triton package from pypi (#123…"

This reverts commit 97ff6cfd9c86c5c09d7ce775ab64ec5c99230f5d.
2024-05-14 17:01:07 -04:00
1d6a938090 [cherry-pick][device_mesh] add back the private init backend option (#124780) (#126147)
[device_mesh] add a private init backend option (#124780)

This PR adds a private init backend option to tackle issues with sub-mesh creation:

In device mesh slicing we don't want to create process groups again, so being able to explicitly turn group creation off is useful.

Also, there might be more submesh creation functionality later, and having this flag would ensure that no new group is created.

Differential Revision: [D56497780](https://our.internmc.facebook.com/intern/diff/D56497780)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124780
Approved by: https://github.com/awgu
2024-05-14 13:38:55 -04:00
4f0b3ad855 Fix ref leak in dtype.to_complex()/to_real() (#126101)
* Fix ref leak in `dtype.to_complex()`/`to_real()` (#125154)

By using `Py_NewRef`

Also, wrap `THPDtype_to_real`/`THPDtype_to_complex` calls with `HANDLE_TH_ERRORS`

Add a regression test for the above issues, by calling to_complex for integral dtypes (which raises an exception) and by checking that the reference count is preserved across the same to_complex/to_real call to detect if a leak is happening.

Replace
```cpp
auto dtype = (PyObject*)torch::getTHPDtype(current_dtype);
Py_INCREF(dtype);
return dtype;
```
with a more compact/streamlined equivalent
```cpp
return Py_NewRef(torch::getTHPDtype(current_dtype));
```

Fixes https://github.com/pytorch/pytorch/issues/124868

Pull Request resolved: https://github.com/pytorch/pytorch/pull/125154
Approved by: https://github.com/Skylion007, https://github.com/albanD

(cherry picked from commit 744f341aa4eaa4f2e7068e5f83fa6fccb0a02ccc)

* Revert "Fix ref leak in `dtype.to_complex()`/`to_real()` (#125154)"

This reverts commit a1b04d8832c64d6112b432ce1e7725f02761cf21.

* Fix ref leak in `dtype.to_complex()`/`to_real()` (#125154)

By using `Py_NewRef`

Also, wrap `THPDtype_to_real`/`THPDtype_to_complex` calls with `HANDLE_TH_ERRORS`

Add a regression test for the above issues, by calling to_complex for integral dtypes (which raises an exception) and by checking that the reference count is preserved across the same to_complex/to_real call to detect if a leak is happening.

Replace
```cpp
auto dtype = (PyObject*)torch::getTHPDtype(current_dtype);
Py_INCREF(dtype);
return dtype;
```
with a more compact/streamlined equivalent
```cpp
return Py_NewRef(torch::getTHPDtype(current_dtype));
```

Fixes https://github.com/pytorch/pytorch/issues/124868

Pull Request resolved: https://github.com/pytorch/pytorch/pull/125154
Approved by: https://github.com/Skylion007, https://github.com/albanD

(cherry picked from commit 744f341aa4eaa4f2e7068e5f83fa6fccb0a02ccc)

* Revert "Fix ref leak in `dtype.to_complex()`/`to_real()` (#125154)"

This reverts commit 5a28bad418ad8187036c64b3fe487109dfc4ff5d.

* Refactor autocast C++ APIs to be device-agnostic (#124359)

# Motivation
This PR aims to refactor autocast **C++** APIs to be device-agnostic and deprecate the device-specific autocast  **C++** APIs.
In C++ side,
- `is_enabled()` -> `is_enabled(device_type)`.
- `set_enabled(new_enabled)` -> `set_enabled(device_type, new_enabled)`.
- `get_autocast_dtype()` -> `get_autocast_dtype(device_type)`
- `set_autocast_dtype(dtype)` -> `set_autocast_dtype(device_type, dtype)`

These following C++ APIs are deprecated and should be removed in PyTorch 2.5
- `is_cpu_enabled`
- `set_cpu_enabled`
- `get_autocast_cpu_dtype`
- `set_autocast_cpu_dtype`
- `is_xpu_enabled`
- `set_xpu_enabled`
- `get_autocast_xpu_dtype`
- `set_autocast_xpu_dtype`
- `is_ipu_enabled`
- `set_ipu_enabled`
- `get_autocast_ipu_dtype`
- `set_autocast_ipu_dtype`
- `is_hpu_enabled`
- `set_hpu_enabled`
- `get_autocast_hpu_dtype`
- `set_autocast_hpu_dtype`
- `is_xla_enabled`
- `set_xla_enabled`
- `get_autocast_xla_dtype`
- `set_autocast_xla_dtype`
- `is_privateuseone_enabled`
- `set_privateuseone_enabled`
- `get_autocast_privateuseone_dtype`
- `set_autocast_privateuseone_dtype`

In Python side,
provide 4 generic autocast APIs:
- `torch.is_autocast_enabled(device_type)`
- `torch.set_autocast_enabled(device_type, new_enabled)`
- `torch.get_autocast_dtype(device_type)`
- `torch.set_autocast_dtype(device_type, dtype)`

# Additional Context
We will submit another PR to refactor autocast **Python** APIs based on this PR.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124359
Approved by: https://github.com/jgong5, https://github.com/albanD

* refactor autocast python APIs (#124479)

Refactor autocast usage scenario in `torch/amp/autocast_mode.py` and `torch/utils/checkpoint.py` to fix the bug - convention conflict between `torch.xxx.get_autocast_xxx_dtype` defined in `autocast_mode.py` and `torch.xxx.get_autocast_dtype` defined in `checkpoint.py`.

Use device-agnostic APIs like `torch.get_autocast_dtype`, ..., instead.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124479
Approved by: https://github.com/jgong5, https://github.com/gujinghui, https://github.com/EikanWang, https://github.com/albanD
ghstack dependencies: #124359

* Fix ref leak in `dtype.to_complex()`/`to_real()` (#125154)

By using `Py_NewRef`

Also, wrap `THPDtype_to_real`/`THPDtype_to_complex` calls with `HANDLE_TH_ERRORS`

Add a regression test for the above issues, by calling to_complex for integral dtypes (which raises an exception) and by checking that the reference count is preserved across the same to_complex/to_real call to detect if a leak is happening.

Replace
```cpp
auto dtype = (PyObject*)torch::getTHPDtype(current_dtype);
Py_INCREF(dtype);
return dtype;
```
with a more compact/streamlined equivalent
```cpp
return Py_NewRef(torch::getTHPDtype(current_dtype));
```

Fixes https://github.com/pytorch/pytorch/issues/124868

Pull Request resolved: https://github.com/pytorch/pytorch/pull/125154
Approved by: https://github.com/Skylion007, https://github.com/albanD

* Revert "refactor autocast python APIs (#124479)"

This reverts commit 495b0c9aec07472d82b9fa5e5cdaab35ec16898d.

* Revert "Refactor autocast C++ APIs to be device-agnostic (#124359)"

This reverts commit 83106b7c4f7d6cc84ccc8beb0949eaa94c830d8d.

---------

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
Co-authored-by: Huy Do <huydhn@gmail.com>
Co-authored-by: Yu, Guangye <guangye.yu@intel.com>
2024-05-14 13:29:45 -04:00
bf1b3a056a Lint: Update older-python test to 3.6 (#126108)
Lint: Update older-python test to 3.6 (#125843)

As python-3.5 can no longer connect to pypi after today's cert update
Fixes https://github.com/pytorch/pytorch/issues/125841

(cherry picked from commit 7e86a7c0155295539996e0cf422883571126073e)

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
2024-05-13 14:45:40 -07:00
c365674171 Fixes format utils executable (#123482)
Fixes format utils executable (#123407)

Fixes an issue with the format utils executable, which was causing it to run as a no-op. :(

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123407
Approved by: https://github.com/wz337, https://github.com/fegin

(cherry picked from commit 18c9d460682293bae23386e9bc40932eee1d8add)

Co-authored-by: Lucas Pasqualin <lpasqualin@meta.com>
2024-05-13 14:44:35 -07:00
768e4b9420 [MPS] Fix large copy (#126104)
[MPS] Fix large copy (#124635)

By slicing `copyFromBuffer:sourceOffset:toBuffer:destinationOffset:size:` into 2GB chunks.

Add a regression test, but limit it to machines with 12GB of RAM or more and macOS 14+, as on macOS 13 an attempt to allocate a 4GB tensor fails with:
```
/AppleInternal/Library/BuildRoots/c651a45f-806e-11ed-a221-7ef33c48bc85/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Types/MPSNDArray.mm:724: failed assertion `[MPSNDArray initWithDevice:descriptor:] Error: total bytes of NDArray > 2**32'
```

Fixes https://github.com/pytorch/pytorch/issues/124335

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124635
Approved by: https://github.com/kulinseth

(cherry picked from commit abf3f90781263ee45e2e79cf7f80102ffa7f1b14)

Co-authored-by: Nikita Shulga <nshulga@meta.com>
2024-05-13 14:42:29 -07:00
e25474c05d [MPS] Native nonzero implementation (#126100)
[MPS] Native nonzero implementation (#125355)

Fixes https://github.com/pytorch/pytorch/issues/124850

Replace the previous MPSGraph nonzero construction with the native nonzero op. For older OSes, fall back to CPU (the previous implementation was not reliable and was comparable to CPU in speed).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/125355
Approved by: https://github.com/kulinseth

(cherry picked from commit a40d6df448de1acb263ed8f6ff9e7d26f5a1a161)

Co-authored-by: Denis Vieriu <dvieriu@apple.com>
2024-05-13 14:39:39 -07:00
d8b35dac22 Publish PyTorch docs to pytorch/cpp repo (#126102)
Publish PyTorch docs to pytorch/cpp repo (#122895)

Updating the documents push to go to https://github.com/pytorch/docs repo instead of https://github.com/pytorch/pytorch.github.io as part of updating the PyTorch docs set up.
Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
Co-authored-by: Svetlana Karslioglu <svekars@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122895
Approved by: https://github.com/malfet

(cherry picked from commit 535a84c12553a6a423bb3f7f7b175ee7bf8d5464)

Co-authored-by: sekyondaMeta <127536312+sekyondaMeta@users.noreply.github.com>
2024-05-13 14:35:46 -07:00
bbb838654c Separate arm64 and amd64 docker builds (#126099)
Separate arm64 and amd64 docker builds (#125617)

Fixes https://github.com/pytorch/pytorch/issues/125094

Please note: the Docker CUDA 12.4 failure is an existing issue, related to the docker image not being available on GitLab:
```
docker.io/nvidia/cuda:12.4.0-cudnn8-devel-ubuntu22.04: docker.io/nvidia/cuda:12.4.0-cudnn8-devel-ubuntu22.04: not found
```
 https://github.com/pytorch/pytorch/actions/runs/8974959068/job/24648540236?pr=125617

Here is the reference issue: https://gitlab.com/nvidia/container-images/cuda/-/issues/225

Tracked on our side: https://github.com/pytorch/builder/issues/1811
Pull Request resolved: https://github.com/pytorch/pytorch/pull/125617
Approved by: https://github.com/huydhn, https://github.com/malfet

(cherry picked from commit b29d77b54f5cee9e786f8a72329837043f36a349)

Co-authored-by: atalman <atalman@fb.com>
2024-05-13 14:33:28 -07:00
d983cb78e2 [MPS] Fix abs for complex types (#126096)
[MPS] Fix `abs` for complex types (#125662)

By calling `realPartOfTensor:` if the input type is complex on Sonoma, and falling back to the `at::view_as_real` trick on Ventura.

Split `unary_op` template into `unary_op` and `unary_op_noresize`, which skips resize and empty checks

Marked `abs`, `isclose` and `nn.functional.softsign` OpInfo tests as supported by complex types

Fixes https://github.com/pytorch/pytorch/issues/125135

Pull Request resolved: https://github.com/pytorch/pytorch/pull/125662
Approved by: https://github.com/kulinseth

(cherry picked from commit 0fd1fc17c3a53cb4cede1992d431d5384b2813f3)

Co-authored-by: Nikita Shulga <nikita.shulga@gmail.com>
2024-05-13 14:32:05 -07:00
75e01e7df0 Add userbase library dir to windows dll search path (#126095)
Add userbase library dir to windows dll search path (#125684)

Fixes https://github.com/pytorch/pytorch/issues/125109, which is a regression introduced by https://github.com/pytorch/builder/pull/1467 that adds a dynamic dependency on mkl; if mkl is installed in the user dir, it is placed into `sysconfig.get_config_var("userbase") / "Library" / "bin"`

Fix this by adding the `userbase` folder to the DLL search path.
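
A minimal sketch of the idea (not the exact `torch/__init__.py` change): make the per-user `Library\bin` directory visible to the Windows DLL loader before torch's native libraries are loaded.

```python
import os
import sys
import sysconfig

if sys.platform == "win32":
    userbase = sysconfig.get_config_var("userbase")
    if userbase:
        user_dll_dir = os.path.join(userbase, "Library", "bin")
        if os.path.exists(user_dll_dir):
            # Extend the DLL search path so dependencies like mkl resolve
            os.add_dll_directory(user_dll_dir)
```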

Testing before this fix:
```
Python 3.12.3 (tags/v3.12.3:f6650f9, Apr  9 2024, 14:05:25) [MSC v.1938 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\Administrator\AppData\Roaming\Python\Python312\site-packages\torch\__init__.py", line 141, in <module>
    raise err
OSError: [WinError 126] The specified module could not be found. Error loading "C:\Users\Administrator\AppData\Roaming\Python\Python312\site-packages\torch\lib\shm.dll" or one of its dependencies.
>>> exit()
```

After:
```
c:\Program Files\Python312>python
Python 3.12.3 (tags/v3.12.3:f6650f9, Apr  9 2024, 14:05:25) [MSC v.1938 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> exit()
```
Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/125684
Approved by: https://github.com/malfet

(cherry picked from commit fdfef759a676ee7a853872e347537bc1e4b51390)

Co-authored-by: atalman <atalman@fb.com>
2024-05-13 14:30:59 -07:00
a696b3b7f6 Restore DILL_AVAILABLE for backwards compat with torchdata (#126094)
Restore DILL_AVAILABLE for backwards compat with torchdata (#122616)

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/122616
Approved by: https://github.com/peterbell10

(cherry picked from commit f42818321b5694c5be76f16173c7ac9223ec8ae3)

Co-authored-by: Edward Z. Yang <ezyang@meta.com>
2024-05-13 14:30:14 -07:00
eqy 2e165ec9c2 Revert "Include torch warn in each error in cudnn/Conv_v8.cpp (#120719)" (#125790)
This reverts commit 5fd7f5c4e336c2c3041e10529990c620cc8cf9a5.

Reverted https://github.com/pytorch/pytorch/pull/120719 on behalf of https://github.com/janeyx99 due to sorry but am reverting as this prints unwanted warnings even when an exception is not thrown  ([comment](https://github.com/pytorch/pytorch/pull/120719#issuecomment-1994491826))

Co-authored-by: PyTorch MergeBot <pytorchmergebot@users.noreply.github.com>
2024-05-13 15:48:42 -04:00
1199df476e [Doc] Update docstrings for torch/random.py (#125443)
[Doc] Update docstrings for torch/random.py (#125265)

Updates the docstrings for torch/random.py to clarify what device / RNG each function operates on.

While trying to understand the difference between
```
state = torch.random.get_rng_state()
some_code
torch.random.set_rng_state(state)
```
and
```
with torch.random.fork_rng():
    some_code
```
I found out that there was a note about this in the docstring that wasn't being rendered on the website. I fixed that note and added additional clarifications on other functions in this file.

Test Plan:
Built the docs and verified that everything renders correctly.

<img width="911" alt="Screenshot 2024-04-30 at 2 22 08 PM" src="https://github.com/pytorch/pytorch/assets/9263852/f219bc35-89bd-4f5b-ba60-255b089499a4">

<img width="901" alt="Screenshot 2024-04-30 at 2 22 13 PM" src="https://github.com/pytorch/pytorch/assets/9263852/c141e7fa-afc9-4c66-b460-96668ce35606">

Pull Request resolved: https://github.com/pytorch/pytorch/pull/125265
Approved by: https://github.com/Balandat, https://github.com/lezcano
2024-05-13 15:46:47 -04:00
97ff6cfd9c [Release only] Release 2.3 start using triton package from pypi (#123580) 2024-04-08 16:27:33 -04:00
fb38ab7881 Fix for MPS regression in #122016 and #123178 (#123385)
Fixes #122016 and #123178. This regression is related to an OS-side change that requires a slight adjustment from us on the PyTorch side to restore the previous behavior. Additionally, we cleared out pre-macOS 13 workarounds.

Before the fix on MacOS 14.4:

```
python -c "import torch;x=torch.zeros(3, device='mps');x[1] = 1; x[2] = 3; print(x)"
tensor([0., 3., 3.], device='mps:0')
```

After the fix:
```
python -c "import torch;x=torch.zeros(3, device='mps');x[1] = 1; x[2] = 3; print(x)"
tensor([0., 1., 3.], device='mps:0')
```

This also fixes complex number initialization and as such makes `nn.functional.rms_norm` pass on MacOS-14+

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123234
Approved by: https://github.com/malfet, https://github.com/kulinseth

(cherry picked from commit 05289a278c3eaca271061649982f38c435b50674)

Co-authored-by: Joona Havukainen <jhavukainen@apple.com>
2024-04-05 18:46:31 -04:00
23961cef85 [Release/2.3] Set py3.x build-environment name consistently (#123446)
https://github.com/pytorch/pytorch/pull/122157 checks for the Python version using `"$BUILD_ENVIRONMENT" != *py3.8*`, but some build environments use a different style with `py3_8` instead, causing numpy 2.x to be wrongly installed there, e.g. 03b987fe3f
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122247
Approved by: https://github.com/malfet

(cherry picked from commit 6fefc52a2b4f814c5bc85f4087a92ad7f6ee3abe)

Co-authored-by: Huy Do <huydhn@gmail.com>
2024-04-05 09:01:19 -07:00
634cf5069a [Wheel] Change libtorch_cpu OpenMP search path (#123417) (#123442)
To prevent delocate from double-packing it, which makes Torch wheels
unusable with torch.compile out of the box

Fixes https://github.com/pytorch/pytorch/issues/122705

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123417
Approved by: https://github.com/atalman

Co-authored-by: Nikita Shulga <nikita.shulga@gmail.com>
2024-04-05 10:22:39 -04:00
12d0e693d0 update submodule onnx==1.16.0 (#123387)
Fixes #121258

CC @malfet @atalman
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123125
Approved by: https://github.com/malfet

(cherry picked from commit 19c2ed15c099c7ed9f96074584af6ab9da206f92)

Co-authored-by: pbialecki <piotr.bialecki@hotmail.de>
2024-04-04 20:47:38 -04:00
38acd812ab [MPS] Fwd-fix for clamp regression (#122148) (#123383)
Forward fix for regressions introduced by https://github.com/pytorch/pytorch/pull/121381 as we failed to run MPS CI twice on it

- Do not call `minimumWithNaNPropagationWithPrimaryTensor` for integral tensors as it will crash with
  ```
    /AppleInternal/Library/BuildRoots/ce725a5f-c761-11ee-a4ec-b6ef2fd8d87b/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Utility/MPSKernelDAG.mm:805: failed assertion `Error getting visible function: (null) Function isNaN_i16_i8 was not found in the library'
   ```
- Change the order of the max and min calls, as it's apparently important for
  consistency: `min(max(a, b), c)` might not equal `max(min(a, c), b)` if `c` is not always less than or equal to `b`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/122148
Approved by: https://github.com/huydhn

Co-authored-by: Nikita Shulga <nikita.shulga@gmail.com>
2024-04-04 16:29:42 -07:00
b197f540bc Use numpy 2.0.0rc1 in CI (#123356)
Bump numpy version to 2.0.0rc1 in CI

Related to: https://github.com/pytorch/pytorch/issues/107302
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123286
Approved by: https://github.com/huydhn, https://github.com/kit1980, https://github.com/ZainRizvi

(cherry picked from commit 26b4ccf9d171a4abb3b25d9f88fc594ea5aca1ce)

Co-authored-by: atalman <atalman@fb.com>
2024-04-04 19:02:49 -04:00
dc81d19aac [CI] Test that NumPy-2.X builds are backward compatible with 1.X (#123354)
By compiling PyTorch against 2.x RC, but running all the tests with Numpy-1.X

This has no effect on binary builds
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122157
Approved by: https://github.com/atalman

(cherry picked from commit 03b987fe3fa93f398c0af5b40e512950c39a7cb6)

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
2024-04-04 19:00:35 -04:00
108305e47b Upgrade submodule pybind to 2.12.0 (#123355)
To fix https://github.com/pytorch/pytorch/issues/122056

Building with NP 2.0 allows me to run locally with both NP 2.0 and 1.26.
Any other test we should run @rgommers  ?

FYI @Skylion007 @atalman
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122899
Approved by: https://github.com/Skylion007

(cherry picked from commit 6c2f36c9845f310db8ece23c0d2e4ad6f702bc57)

Co-authored-by: albanD <desmaison.alban@gmail.com>
2024-04-04 18:07:42 -04:00
a8b009185d Make PyTorch compilable against upcoming Numpy-2.0 (#121880) (#123380)
Test plan:
```
% python -c "import torch;import numpy;print(numpy.__version__, torch.tensor(numpy.arange(3, 10)))"
2.1.0.dev0+git20240312.9de8a80 tensor([3, 4, 5, 6, 7, 8, 9])
% python -c "import torch;print(torch.rand(3, 3).numpy())"
[[0.0931946  0.44874293 0.8480404 ]
 [0.93877375 0.10188377 0.67375803]
 [0.02520031 0.89019287 0.5691561 ]]

```
Fixes https://github.com/pytorch/pytorch/issues/121798

Pull Request resolved: https://github.com/pytorch/pytorch/pull/121880
Approved by: https://github.com/albanD

(cherry picked from commit 38d9bb5abcc31ba97927a5399b88afe2cf60bf64)

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
2024-04-04 14:22:26 -07:00
b67b277268 Fix torch.clamp in MPS to handle NaN correctly (#121381) (#122785)
Fixes #120899

So this is interesting. There are methods that specifically propagate NaN instead of clamping to real numbers.
https://developer.apple.com/documentation/metalperformanceshadersgraph/mpsgraph/3857573-maximumwithnanpropagationwithpri

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/121381
Approved by: https://github.com/malfet

(cherry picked from commit 40acc84aafa82f00a5b3966302638f344bef07bd)

Co-authored-by: Roger Lam <mrlamroger@gmail.com>
2024-04-04 13:26:29 -07:00
a8f93a5c71 [ONNX] beartype to emit warning instead of error by default (#123363)
Making the exporter more "robust" to advances in the beartype tool.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123205
Approved by: https://github.com/justinchuby, https://github.com/thiagocrepaldi
2024-04-04 16:13:58 -04:00
fa07dc5132 [MPS] Fix naive matmul for BFloat16 (#123289)
Will only work on MacOS14 or newer, so compile the shader with `MTLLanguageVersion_3_1` when appropriate

Fixes https://github.com/pytorch/pytorch/issues/121583
Pull Request resolved: https://github.com/pytorch/pytorch/pull/121731
Approved by: https://github.com/albanD

(cherry picked from commit 5498804ec2ac9aa62ba3bbf20149118142567d9b)

Co-authored-by: Nikita Shulga <nikita.shulga@gmail.com>
2024-04-04 16:04:46 -04:00
2a82d31f78 fix breaking changes for ONNX Runtime Training (#123271)
Fixes breaking changes for ONNX Runtime Training.

PR https://github.com/pytorch/pytorch/pull/121102 introduced an incompatibility with ORT training because of a change in parameter type. This PR adds back the previous parameter types; verified that it works with ORT training.

Error with current scenario:

```
site-packages/onnxruntime/training/ortmodule/torch_cpp_extensions/cpu/aten_op_executor/aten_op_executor.cc:60:40: error: invalid conversion from ‘const DLManagedTensor*’ to ‘DLManagedTensor*’ [-fpermissive]
at::Tensor tensor = at::fromDLPack(dlpack);

site-packages/torch/include/ATen/DLConvertor.h:15:46: note:   initializing argument 1 of ‘at::Tensor at::fromDLPack(DLManagedTensor*)’
TORCH_API Tensor fromDLPack(DLManagedTensor* src);
```
Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122000
Approved by: https://github.com/malfet

(cherry picked from commit 765c3fc138fda4b49978403ee1394040221957cc)

Co-authored-by: Abhishek Jindal <abjindal@microsoft.com>
2024-04-03 18:52:06 -04:00
4bb5cb51e6 Fix swap_tensors path in _apply for modules that inherit from RNNBase (RNN, GRU, LSTM) (#122800) (#123116)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122800
Approved by: https://github.com/albanD

(cherry picked from commit cc12668053ad847ff4a430e99eeebf99c136f3cd)
2024-04-02 16:16:37 -07:00
ef38d0572e nn.Module: use swap_tensors for Tensor subclasses (#122755) (#123106)
This fixes a bug when casting a module that has DTensor parameters. The old behavior swapped the `.data` field of the Tensor subclass, which is incorrect when dealing with tensor subclasses that may have multiple child tensors.

This uses the `swap_tensors` method to swap all of the tensors not just the .data field.
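
A small standalone illustration of the primitive (assuming `torch.utils.swap_tensors`, available in recent PyTorch):

```python
import torch

a = torch.ones(2)
b = torch.zeros(3)

# Swaps the full tensor payloads (storage, sizes, etc.), not just `.data`,
# which is what makes it safe for tensor subclasses such as DTensor.
torch.utils.swap_tensors(a, b)
print(a.shape, b.shape)  # torch.Size([3]) torch.Size([2])
```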

Test plan:

```
pytest test/distributed/_tensor/test_api.py -k 'test_distribute_module_casting'
python test/distributed/fsdp/test_wrap.py -k test_auto_wrap_smoke_test_cuda_init_mode1_cpu_offload0_use_device_id_True
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/122755
Approved by: https://github.com/wanchaol, https://github.com/mikaylagawarecki

(cherry picked from commit e6ee8322d767ab241ce1651e7c178f539e8e3199)

Co-authored-by: Tristan Rice <rice@fn.lc>
2024-04-02 16:16:16 -07:00
5a53185e65 Remove cuda dependencies when building AOTriton (#122982) (#123179)
Downloading CUDA sometimes fails and breaks the build process, but
AOTriton does not need these packages. This commit comments out the
related download scripts.
2024-04-02 19:08:22 -04:00
bc9e23abb5 Fix performance regression and memory storage handling of Flash Attention on ROCM (#122857) (#122967)
This PR fixes the two major issues that were discovered after the initial merge of PR #121561:
1. The Flash Attention support added by that PR has severe performance regressions on regular shapes (power-of-two head dimensions and sequence lengths) compared with PR #115981. Its performance is worse than the math backend and it only has numerical stability advantages. This PR fixes this problem.
2. There is a flaw in the memory storage handling in PR #121561: it does not copy the gradients back to the designated output tensor. This PR removes the deprecated `TensorStorageSanitizer` class, which is unnecessary thanks to the more flexible backward kernel shipped by PR #121561.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/122857
Approved by: https://github.com/jeffdaily, https://github.com/drisspg
2024-04-02 18:53:19 -04:00
8194fae625 Pin protobuf to 3.20.2 on macOS (#123197)
The newer protobuf 5.26.0, released on March 13th, is causing failures with `test_hparams_*` from `test_tensorboard`, in which the stringified metadata is wrong when escaping double quotes. For example, 3bc2bb6781.  This looks like an upstream issue from Tensorboard, which doesn't yet work with this brand-new protobuf version: https://github.com/tensorflow/tensorboard/blob/master/tensorboard/pip_package/requirements.txt#L29

The package has been pinned on Docker https://github.com/pytorch/pytorch/blob/main/.ci/docker/requirements-ci.txt#L155, so it should be pinned on macOS too.  We want to eventually just have one requirements.txt file.

Fixes https://github.com/pytorch/pytorch/issues/122008
Fixes https://github.com/pytorch/pytorch/issues/121927
Fixes https://github.com/pytorch/pytorch/issues/121946
Pull Request resolved: https://github.com/pytorch/pytorch/pull/121918
Approved by: https://github.com/kit1980
2024-04-02 15:08:09 -04:00
12acd4c9b3 [Cherrypick][DeviceMesh] Cache and reuse sliced result (#122975) (#123073)
Fixes #118849

Add a map for parent_to_child_mappings in _mesh_resources so we can cache and reuse submesh slicing results, so that we avoid recreating the submesh and the underlying sub-PG repeatedly, which could lead to funky behaviors.

We will follow up with reusing pg from the parent_mesh during submesh creation.
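
A rough sketch of the caching idea (hypothetical names, not the actual `_mesh_resources` code):

```python
# Cache sliced submeshes per parent mesh so repeated slicing with the same
# dim name returns the same child mesh (and underlying process group).
parent_to_child_mappings = {}  # {parent_mesh: {mesh_dim_name: child_mesh}}

def get_or_create_submesh(parent_mesh, mesh_dim_name, create_fn):
    children = parent_to_child_mappings.setdefault(parent_mesh, {})
    if mesh_dim_name not in children:
        children[mesh_dim_name] = create_fn(parent_mesh, mesh_dim_name)
    return children[mesh_dim_name]
```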

Pull Request resolved: https://github.com/pytorch/pytorch/pull/122975
Approved by: https://github.com/wanchaol
2024-04-02 15:05:07 -04:00
857797d148 [CherryPick] Inductor cpp wrapper: fix dtype of ShapeAsConstantBuffer (#122297) (#123064)
For `at::scalar_tensor` the default dtype will be `float` ([link to scalar_tensor](0d8e960f74/aten/src/ATen/native/TensorFactories.cpp (L856)), [link to default dtype](0d8e960f74/c10/core/TensorOptions.h (L551))) if we don't set the `dtype` value. However, the input scalar value is not necessarily a `float` value. With `torch::tensor(x)`, the dtype of the tensor will be decided according to the dtype of the scalar.
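
The analogous default-dtype behavior is visible from the Python API (illustration only; the actual fix is in the Inductor C++ wrapper codegen):

```python
import torch

print(torch.scalar_tensor(3).dtype)  # torch.float32: default dtype, regardless of the scalar
print(torch.tensor(3).dtype)         # torch.int64: inferred from the Python int
```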

Pull Request resolved: https://github.com/pytorch/pytorch/pull/122297
Approved by: https://github.com/jgong5, https://github.com/desertfire
2024-04-02 15:03:25 -04:00
233dfe4d6a Proper view support for jagged layout NestedTensor (#122854)
* Proper view support for jagged layout NestedTensor (#113279)

This PR:
* Introduces an ATen op for creating true jagged views from a dense values buffer
    * `_nested_view_from_jagged(values, offsets, lengths, ragged_idx, dummy)`
    * This op is implemented on the Python side using torch.library so we can return a subclass instance
    * `jagged_from_list()` now uses this instead of the old autograd.Function `NestedViewFromBuffer`
    * The latter op is used for non-contiguous JTs returned via `torch.nested.narrow()`
    * `dummy` is an awful hack to ensure that `NestedTensor.__torch_dispatch__()` is invoked for our view
* Introduces an ATen op for accessing the `values` component of an NT via a view
    * `_nested_get_values(nt)`
* **Removes** the autograd.Functions `ViewNestedFromBuffer` and `ViewBufferFromNested` in favor of `nested_from_values_offsets()` / `nested_from_values_offsets_lengths()` and `nt.values()`, respectively.
* Changes test code to prefer `as_nested_tensor()` over `jagged_from_list()` directly
    * Similarly, avoid `buffer_from_jagged()`, preferring `values()`
* Depends on general subclass view fake-ification on the PT2 side (handled solely in previous PRs in the stack)

With these changes, the semantics of jagged layout NTs are such that they are considered a true view of the underlying `values` buffer. This means views of jagged NTs are views of the underlying buffer as well, simplifying some handling.
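
A small usage sketch from the Python side (assuming the PyTorch 2.3 jagged-layout nested tensor API):

```python
import torch

t1, t2 = torch.randn(2, 4), torch.randn(3, 4)
nt = torch.nested.nested_tensor([t1, t2], layout=torch.jagged)

values = nt.values()    # packed (5, 4) buffer; with this change, a true view of nt
offsets = nt.offsets()  # component boundaries into the packed buffer, e.g. [0, 2, 5]
print(values.shape, offsets)
```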

Differential Revision: [D54269922](https://our.internmc.facebook.com/intern/diff/D54269922)
Co-authored-by: voznesenskym <voznesenskym@gmail.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113279
Approved by: https://github.com/ezyang

(cherry picked from commit cd6bfc7965fc5ae20720bae0994e332e56f819c0)

* Update executorch.txt

* Update executorch.txt

* Fix linter error

---------

Co-authored-by: Joel Schlosser <jbschlosser@meta.com>
Co-authored-by: Guang Yang <42389959+guangy10@users.noreply.github.com>
2024-04-02 11:46:53 -07:00
e22b534b10 Upgrade submodule oneDNN to v3.3.6 for release/2.3 (#122164) (#122930)
As the title says. Includes issue fixes for aarch64:
- https://github.com/oneapi-src/oneDNN/pull/1831
- https://github.com/oneapi-src/oneDNN/pull/1834

---

## Validation results
(on Intel CPU + Linux)
**Static quantization with Inductor on CV models**

Quant method | Geomean throughput ratio (v3.3.6/baseline)
-- | --
ptq | 0.982937
ptq (cpp wrapper) | 0.978384
qat | 0.978828

**Torchbench cpu userbenchmark with Inductor**

Items | Perf Geomean Ratio (v3.3.6/baseline)
-- | --
eager_throughtput_bf16_infer | 1.00x
eager_throughtput_fp32_infer | 1.00x
jit_llga_throughtput_amp_bf16 | 1.01x
jit_llga_throughtput_fp32 | 1.00x
eager_throughtput_fx_int8 | 1.00x
eager_throughtput_bf16_train | 1.46x
eager_throughtput_fp32_train | 1.41x

**Dynamo benchmarks tests**
Precision | Shape | Wrapper | Thread | Eager old/new GEOMEAN | Inductor old/new GEOMEAN
-- | -- | -- | -- | -- | --
Float32 | Static | Default | Multiple | 1.003836812 | 1.003425
Float32 | Static | Default | Single | 1.000181451 | 0.999611
Float32 | Dynamic | Default | Multiple | 1.003980183 | 1.006563
Float32 | Dynamic | Default | Single | 1.000076939 | 0.999969
AMP | Static | Default | Multiple | 0.996824772 | 0.998715
AMP | Static | Default | Single | 0.996402574 | 1.001483
AMP | Dynamic | Default | Multiple | 0.994919866 | 1.000467
AMP | Dynamic | Default | Single | 0.9962054 | 1.000767

(on Aarch64)
https://github.com/pytorch/pytorch/pull/122164#issuecomment-2007912919

---

Pull Request resolved: https://github.com/pytorch/pytorch/pull/122164
Approved by: https://github.com/snadampal, https://github.com/malfet, https://github.com/atalman
2024-04-02 12:57:11 -04:00
8602990e3f [CherryPick] Back out "[DeviceMesh] Add support for nD slicing (#119752)" (#121763) (#122495)
Summary:
Original commit changeset: e52b8809c8d8

Original Phabricator Diff: D54778906

We have to back out this diff.
D54778906 seems to be causing test failures for APF, blocking trunk health and hence the release. Just starting to look at the issue. T182209248

Test Plan: Sandcastle

Reviewed By: satgera

Differential Revision: D54825114

Pull Request resolved: https://github.com/pytorch/pytorch/pull/121763
Approved by: https://github.com/osalpekar

(cherry picked from commit e99fa0042cd3dcd2eded24585d59c53f2da9d9f5)
2024-03-28 14:25:08 -07:00
685cc955df [ROCm] Update triton rocm branch to release/2.3.x (#122493)
* Update triton rocm branch to release/2.3.x

* Remove ROCM_TRITION_VERSION and update to 2.3.0

* Remove unnecessary ROCm conditionalisation

* Skip failing UT
2024-03-28 14:18:37 -07:00
b1c2430fbd remove torchao dependency (#122635)
* remove torchao dependency (#122524)

Test Plan:
CI

```
buck2 run mode/dev-nosan mode/inplace executorch/examples/models/llama2:export_llama -- -c ~/llama/ultra_new_checkpoint.pt -p ~/llama/params.json -kv -E 8,8 -d fp32 --pt2e_quantize "xnnpack_dynamic" -2
```

```
buck run //executorch/backends/xnnpack/test:test_xnnpack_ops -- executorch.backends.xnnpack.test.ops.linear.TestLinear.test_qd8_fp32_per_token_weight_per_channel_group_int4
```

Differential Revision: D55263008

Pull Request resolved: https://github.com/pytorch/pytorch/pull/122524
Approved by: https://github.com/jerryzh168

(cherry picked from commit c677221798d8ce87c97aac1bd9ae34af0767c383)

* Update executorch.txt

* Update _decomposed.py

* Update executorch.txt

* Update executorch.txt

* Update executorch.txt

* Update executorch.txt

* Update executorch.txt

---------

Co-authored-by: Guang Yang <guangyang@meta.com>
Co-authored-by: Guang Yang <42389959+guangy10@users.noreply.github.com>
2024-03-28 12:25:12 -07:00
3002eb2556 [export] hack skip index_put_ in dce (#122683) (#122721)
Summary: Ideally we should do what's in the TODO. Just doing this for now to unblock llama capture.

Test Plan: capturing llama and using pt2e to quantize it

Differential Revision: D55354487

Pull Request resolved: https://github.com/pytorch/pytorch/pull/122683
Approved by: https://github.com/kimishpatel

(cherry picked from commit 41d24df08f72e059c4eebdde4315e63a9918406f)

Co-authored-by: Jacob Szwejbka <jakeszwe@meta.com>
2024-03-27 21:29:53 -07:00
e1a846d6b8 Fix auto_functionalize (#121990) (#122654)
Differential Revision: D54964130

When we re-export, the auto_functionalize HOP will be in the graph. Therefore, we need to implement a proper functionalization rule for it. Since the content inside auto_functionalize is guaranteed to be functional, it is OK to just fall through it.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/121990
Approved by: https://github.com/ydwu4, https://github.com/zou3519

(cherry picked from commit 0d845f7b0781f091452a5fd31de14e1c2117f3d4)

Co-authored-by: Tugsbayasgalan (Tugsuu) Manlaibaatar <tmanlaibaatar@meta.com>
2024-03-27 21:28:56 -07:00
4a9a8c606d [export] add pass to remove auto functionalized hop (#122246) (#122655)
Summary: Adds a pass that blindly removes the functionalize HOP without considering whether it is safe. Useful for ExecuTorch today and other use cases that have additional logic to reason about when this pass is safe to use.

Test Plan: added unit test

Differential Revision: D55103867

Pull Request resolved: https://github.com/pytorch/pytorch/pull/122246
Approved by: https://github.com/angelayi

(cherry picked from commit c84f81b395fff969bbd2f784efad8ab1a8aa52de)

Co-authored-by: Jacob Szwejbka <jakeszwe@meta.com>
2024-03-27 21:05:15 -07:00
d3201f48b1 Revert "Revert "CI: Specify libc and libstdcxx versions in conda environments"" (#122523)
This reverts commit 74832f12fae2e1bc51bf1f9971dcd12c90a971f5.
2024-03-22 17:41:42 -04:00
74832f12fa Revert "CI: Specify libc and libstdcxx versions in conda environments" (#122497)
This reverts commit b4f90aae1b375bfe06d3c4a099240e06f93c81c4.
2024-03-22 11:27:50 -04:00
02cdb400d7 Use temporary name for triton package, fix lint (#122438)
* Use temporary name for triton package

* Fix lint
2024-03-21 17:30:38 -04:00
37257774c6 Triton wheel build using 2.3.x branch (#122403)
* Triton build 2.3.x

* Revert "[Release Only] Build triton using pinned version rather branch (#121765)"

This reverts commit d69c4219127e2cf5d9637b0daacc0a24e65f8133.

* Triton wheel change

* release
2024-03-21 12:52:21 -04:00
c4e5434423 necessary change to make torch2.3 work with triton2.2 (#122139) 2024-03-21 08:24:53 -04:00
b4f90aae1b CI: Specify libc and libstdcxx versions in conda environments (#121929)
Without this we get mismatches between the GLIBC and GLIBCXX ABI used
by conda packages vs pytorch.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/121556
Approved by: https://github.com/isuruf, https://github.com/malfet

(cherry picked from commit 7a53dedb07ed72b85d1e083ce38c43c7810fc5f1)

Co-authored-by: Peter Bell <peterbell10@live.co.uk>
2024-03-14 17:56:46 -04:00
94d6463255 [RELEASE ONLY CHANGES] Increase timeout for linux binary jobs, fix workflow lint (#121851)
* [release only] Increase timeout job for linux binary builds by 30min

* fix lint
2024-03-13 19:50:57 -04:00
6a89a753b1 [RELEASE ONLY CHANGES] Apply release only changes Release 2.3 (#121813)
* [Release only changes] Release only changes #2

* common+lint
2024-03-13 11:03:48 -04:00
d69c421912 [Release Only] Build triton using pinned version rather branch (#121765) 2024-03-12 19:05:23 -04:00
6725db07ae [RELEASE ONLY CHANGES] Apply release only changes Release 2.3 (#121726)
* Apply release only changes

* temp changes

* tweak

* fix

* Revert "tweak"

This reverts commit 38edcac21448829ac114c73423c84614628e2598.
2024-03-12 18:14:35 -04:00
167 changed files with 2592 additions and 1374 deletions

View File

@@ -1 +1 @@
-0a22a91d04c2b4a029a69a198eac390089c3e891
+c8ad905211f45e162102823149f0d7f2cfaa4418

View File

@@ -1 +1 @@
-a9bc1a36470eefafe0e2ab2503b8698f1e89e7e3
+958fccea74da58e7e0595ab88ae6cd3f6795a173

View File

@@ -57,8 +57,21 @@ fi
 # Uncomment the below when resolved to track the latest conda update
 # as_jenkins conda update -y -n base conda
+if [[ $(uname -m) == "aarch64" ]]; then
+  export SYSROOT_DEP="sysroot_linux-aarch64=2.17"
+else
+  export SYSROOT_DEP="sysroot_linux-64=2.17"
+fi
 # Install correct Python version
-as_jenkins conda create -n py_$ANACONDA_PYTHON_VERSION -y python="$ANACONDA_PYTHON_VERSION"
+# Also ensure sysroot is using a modern GLIBC to match system compilers
+as_jenkins conda create -n py_$ANACONDA_PYTHON_VERSION -y\
+  python="$ANACONDA_PYTHON_VERSION" \
+  ${SYSROOT_DEP}
+# libstdcxx from conda default channels are too old, we need GLIBCXX_3.4.30
+# which is provided in libstdcxx 12 and up.
+conda_install libstdcxx-ng=12.3.0 -c conda-forge
 # Install PyTorch conda deps, as per https://github.com/pytorch/pytorch README
 if [[ $(uname -m) == "aarch64" ]]; then
@@ -110,14 +123,5 @@ fi
 pip_install -r /opt/conda/requirements-docs.txt
 fi
-# HACK HACK HACK
-# gcc-9 for ubuntu-18.04 from http://ppa.launchpad.net/ubuntu-toolchain-r/test/ubuntu
-# Pulls llibstdc++6 13.1.0-8ubuntu1~18.04 which is too new for conda
-# So remove libstdc++6.so.3.29 installed by https://anaconda.org/anaconda/libstdcxx-ng/files?version=11.2.0
-# Same is true for gcc-12 from Ubuntu-22.04
-if grep -e [12][82].04.[623] /etc/issue >/dev/null; then
-  rm /opt/conda/envs/py_$ANACONDA_PYTHON_VERSION/lib/libstdc++.so.6
-fi
 popd
 fi

View File

@@ -84,7 +84,11 @@ install_ubuntu() {
 if [[ $(ver $ROCM_VERSION) -ge $(ver 6.0) ]]; then
 for kdb in /opt/rocm/share/miopen/db/*.kdb
 do
-sqlite3 $kdb "PRAGMA journal_mode=off; PRAGMA VACUUM;"
+# journal_mode=delete seems to work on some kdbs that have "wal" as initial journal_mode
+sqlite3 $kdb "PRAGMA journal_mode=delete; PRAGMA VACUUM;"
+JOURNAL_MODE=$(sqlite3 $kdb "PRAGMA journal_mode;")
+# Both "delete" and "off" work in cases where user doesn't have write permissions to directory where kdbs are installed
+if [[ $JOURNAL_MODE != "delete" ]] && [[ $JOURNAL_MODE != "off" ]]; then echo "kdb journal_mode change failed" && exit 1; fi
 done
 fi
@@ -163,7 +167,11 @@ install_centos() {
 if [[ $(ver $ROCM_VERSION) -ge $(ver 6.0) ]]; then
 for kdb in /opt/rocm/share/miopen/db/*.kdb
 do
-sqlite3 $kdb "PRAGMA journal_mode=off; PRAGMA VACUUM;"
+# journal_mode=delete seems to work on some kdbs that have "wal" as initial journal_mode
+sqlite3 $kdb "PRAGMA journal_mode=delete; PRAGMA VACUUM;"
+JOURNAL_MODE=$(sqlite3 $kdb "PRAGMA journal_mode;")
+# Both "delete" and "off" work in cases where user doesn't have write permissions to directory where kdbs are installed
+if [[ $JOURNAL_MODE != "delete" ]] && [[ $JOURNAL_MODE != "off" ]]; then echo "kdb journal_mode change failed" && exit 1; fi
 done
 fi

View File

@@ -231,6 +231,7 @@ scikit-image==0.20.0 ; python_version >= "3.10"
 scipy==1.6.3 ; python_version < "3.10"
 scipy==1.8.1 ; python_version == "3.10"
 scipy==1.10.1 ; python_version == "3.11"
+scipy==1.12.0 ; python_version == "3.12"
 # Pin SciPy because of failing distribution tests (see #60347)
 #Description: scientific python
 #Pinned versions: 1.6.3

View File

@@ -1 +1 @@
-3.0.0
+2.3.1

View File

@@ -255,6 +255,11 @@ else
 # or building non-XLA tests.
 if [[ "$BUILD_ENVIRONMENT" != *rocm* &&
    "$BUILD_ENVIRONMENT" != *xla* ]]; then
+  if [[ "$BUILD_ENVIRONMENT" != *py3.8* ]]; then
+    # Install numpy-2.0 release candidate for builds
+    # Which should be backward compatible with Numpy-1.X
+    python -mpip install --pre numpy==2.0.0rc1
+  fi
   WERROR=1 python setup.py bdist_wheel
 else
   python setup.py bdist_wheel

View File

@@ -178,7 +178,7 @@ function install_torchrec_and_fbgemm() {
 function clone_pytorch_xla() {
   if [[ ! -d ./xla ]]; then
-    git clone --recursive --quiet https://github.com/pytorch/xla.git
+    git clone --recursive -b r2.3 https://github.com/pytorch/xla.git
     pushd xla
     # pin the xla hash so that we don't get broken by changes to xla
     git checkout "$(cat ../.github/ci_commit_pins/xla.txt)"

View File

@@ -26,8 +26,8 @@ echo "error: python_doc_push_script.sh: version (arg2) not specified"
 fi
 # Argument 1: Where to copy the built documentation to
-# (pytorch.github.io/$install_path)
-install_path="${1:-${DOCS_INSTALL_PATH:-docs/${DOCS_VERSION}}}"
+# (pytorch_docs/$install_path)
+install_path="${1:-${DOCS_INSTALL_PATH:-${DOCS_VERSION}}}"
 if [ -z "$install_path" ]; then
 echo "error: python_doc_push_script.sh: install_path (arg1) not specified"
 exit 1
@@ -68,8 +68,8 @@ build_docs () {
 }
-git clone https://github.com/pytorch/pytorch.github.io -b "$branch" --depth 1
-pushd pytorch.github.io
+git clone https://github.com/pytorch/docs pytorch_docs -b "$branch" --depth 1
+pushd pytorch_docs
 export LC_ALL=C
 export PATH=/opt/conda/bin:$PATH

View File

@@ -302,6 +302,7 @@ test_inductor_distributed() {
 pytest test/distributed/_composable/fsdp/test_fully_shard_frozen.py
 pytest test/distributed/_composable/fsdp/test_fully_shard_mixed_precision.py -k test_compute_dtype
 pytest test/distributed/_composable/fsdp/test_fully_shard_mixed_precision.py -k test_reduce_dtype
+pytest test/distributed/fsdp/test_fsdp_tp_integration.py -k test_fsdp_tp_integration
 # this runs on both single-gpu and multi-gpu instance. It should be smart about skipping tests that aren't supported
 # with if required # gpus aren't available

View File

@@ -1 +1 @@
-707a632930bfde19ffb361cdf5c31a7682af4e67
+r2.3

View File

@@ -27,3 +27,6 @@ rockset==1.0.3
 z3-solver==4.12.2.0
 tensorboard==2.13.0
 optree==0.9.1
+# NB: test_hparams_* from test_tensorboard is failing with protobuf 5.26.0 in
+# which the stringify metadata is wrong when escaping double quote
+protobuf==3.20.2

View File

@@ -10,9 +10,6 @@ from typing import Optional
 SCRIPT_DIR = Path(__file__).parent
 REPO_DIR = SCRIPT_DIR.parent.parent
-# TODO: Remove me once Triton version is again in sync for vanilla and ROCm
-ROCM_TRITION_VERSION = "2.1.0"
 def read_triton_pin(rocm_hash: bool = False) -> str:
 triton_file = "triton.txt" if not rocm_hash else "triton-rocm.txt"
@@ -99,7 +96,14 @@ def build_triton(
 triton_repo = "https://github.com/openai/triton"
 triton_pkg_name = "pytorch-triton"
 check_call(["git", "clone", triton_repo], cwd=tmpdir)
-check_call(["git", "checkout", commit_hash], cwd=triton_basedir)
+if release:
+    ver, rev, patch = version.split(".")
+    check_call(
+        ["git", "checkout", f"release/{ver}.{rev}.x"], cwd=triton_basedir
+    )
+else:
+    check_call(["git", "checkout", commit_hash], cwd=triton_basedir)
 if build_conda:
 with open(triton_basedir / "meta.yaml", "w") as meta:
 print(
@@ -109,7 +113,7 @@ def build_triton(
 print("source:\n path: .\n", file=meta)
 print(
 "build:\n string: py{{py}}\n number: 1\n script: cd python; "
-"python setup.py install --record=record.txt\n",
+"python setup.py install --single-version-externally-managed --record=record.txt\n",
 " script_env:\n - MAX_JOBS\n",
 file=meta,
 )
@@ -155,7 +159,7 @@ def build_triton(
 patch_init_py(
 triton_pythondir / "triton" / "__init__.py",
 version=f"{version}",
-expected_version=ROCM_TRITION_VERSION if build_rocm else None,
+expected_version=None,
 )
 if build_rocm:
@@ -164,7 +168,7 @@ def build_triton(
 triton_pythondir / "setup.py",
 name=triton_pkg_name,
 version=f"{version}",
-expected_version=ROCM_TRITION_VERSION,
+expected_version=None,
 )
 check_call("scripts/amd/setup_rocm_libs.sh", cwd=triton_basedir, shell=True)
 print("ROCm libraries setup for triton installation...")

@@ -62,9 +62,9 @@ SUPPORTED_PERIODICAL_MODES: Dict[str, Callable[[Optional[str]], bool]] = {
 }
 # The link to the published list of disabled jobs
-DISABLED_JOBS_URL = "https://ossci-metrics.s3.amazonaws.com/disabled-jobs.json"
+DISABLED_JOBS_URL = "https://ossci-metrics.s3.amazonaws.com/disabled-jobs.json?versionId=qO7aEr.Og33PtLXfNq0j0yj.bbLC7SzR"
 # and unstable jobs
-UNSTABLE_JOBS_URL = "https://ossci-metrics.s3.amazonaws.com/unstable-jobs.json"
+UNSTABLE_JOBS_URL = "https://ossci-metrics.s3.amazonaws.com/unstable-jobs.json?versionId=7NhgpqKTtGXVUnL1C79KboTW_5qQx8y5"
 # Some constants used to handle disabled and unstable jobs
 JOB_NAME_SEP = "/"
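
The only change in this hunk is that both S3 URLs gain a versionId query parameter, so this branch keeps reading a frozen snapshot of the disabled/unstable job lists even if newer versions of the same objects are uploaded later. A small illustration of the pattern (the helper name is illustrative):

    from urllib.parse import urlencode

    def pin_s3_object(url: str, version_id: str) -> str:
        # S3 object versioning: key + ?versionId=... always resolves to that exact revision.
        return f"{url}?{urlencode({'versionId': version_id})}"

    # e.g. pin_s3_object("https://ossci-metrics.s3.amazonaws.com/disabled-jobs.json",
    #                    "qO7aEr.Og33PtLXfNq0j0yj.bbLC7SzR")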

@@ -21,6 +21,8 @@ DOCKER_IMAGE_TYPES = ["runtime", "devel"]
 def generate_docker_matrix() -> Dict[str, List[Dict[str, str]]]:
 ret: List[Dict[str, str]] = []
+# CUDA amd64 Docker images are available as both runtime and devel while
+# CPU arm64 image is only available as runtime.
 for cuda, version in generate_binary_build_matrix.CUDA_ARCHES_FULL_VERSION.items():
 for image in DOCKER_IMAGE_TYPES:
 ret.append(
@@ -31,9 +33,19 @@ def generate_docker_matrix() -> Dict[str, List[Dict[str, str]]]:
 cuda
 ],
 "image_type": image,
-"platform": "linux/arm64,linux/amd64",
+"platform": "linux/amd64",
 }
 )
+ret.append(
+    {
+        "cuda": "cpu",
+        "cuda_full_version": "",
+        "cudnn_version": "",
+        "image_type": "runtime",
+        "platform": "linux/arm64",
+    }
+)
 return {"include": ret}

@@ -8,7 +8,7 @@
 # NOTE: If testing pytorch/builder changes you can change this variable to change what pytorch/builder reference
 # the binary builds will check out
 {%- set builder_repo = "pytorch/builder" -%}
-{%- set builder_branch = "main" -%}
+{%- set builder_branch = "release/2.3" -%}
 {%- macro concurrency(build_environment) -%}
 concurrency:

@@ -100,8 +100,8 @@ jobs:
 with:
 name: !{{ config["build_name"] }}
 path: "${{ runner.temp }}/artifacts/"
-!{{ common.checkout(deep_clone=False, directory="pytorch") }}
-!{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch) }}
+!{{ common.checkout(deep_clone=False, directory="pytorch", checkout_pr_head=False) }}
+!{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch, checkout_pr_head=False) }}
 - name: ROCm set GPU_FLAG
 run: |
 echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"

@@ -81,8 +81,8 @@ jobs:
 elif [ -d "/Applications/Xcode_13.3.1.app" ]; then
 echo "DEVELOPER_DIR=/Applications/Xcode_13.3.1.app/Contents/Developer" >> "${GITHUB_ENV}"
 fi
-!{{ common.checkout(deep_clone=False, directory="pytorch") }}
-!{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch) }}
+!{{ common.checkout(deep_clone=False, directory="pytorch", checkout_pr_head=False) }}
+!{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch, checkout_pr_head=False) }}
 - name: Install sccache (only for non-forked PRs, and pushes to trunk)
 uses: nick-fields/retry@v2.8.2
 if: ${{ github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository }}

@@ -65,8 +65,8 @@ jobs:
 steps:
 !{{ common.setup_ec2_windows() }}
 !{{ set_runner_specific_vars() }}
-!{{ common.checkout(deep_clone=False, directory="pytorch") }}
-!{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch) }}
+!{{ common.checkout(deep_clone=False, directory="pytorch", checkout_pr_head=False) }}
+!{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch, checkout_pr_head=False) }}
 - name: Populate binary env
 shell: bash
 run: |
@@ -105,8 +105,8 @@ jobs:
 with:
 name: !{{ config["build_name"] }}
 path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
-!{{ common.checkout(deep_clone=False, directory="pytorch") }}
-!{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch) }}
+!{{ common.checkout(deep_clone=False, directory="pytorch", checkout_pr_head=False) }}
+!{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch, checkout_pr_head=False) }}
 - name: Populate binary env
 shell: bash
 run: |

@ -37,7 +37,7 @@ jobs:
keep-going: ${{ steps.filter.outputs.keep-going }} keep-going: ${{ steps.filter.outputs.keep-going }}
steps: steps:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with: with:
fetch-depth: 1 fetch-depth: 1
submodules: false submodules: false
@ -59,25 +59,25 @@ jobs:
runs-on: ${{ matrix.runner }} runs-on: ${{ matrix.runner }}
steps: steps:
- name: Setup SSH (Click me for login details) - name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.3
with: with:
github-secret: ${{ secrets.GITHUB_TOKEN }} github-secret: ${{ secrets.GITHUB_TOKEN }}
# [see note: pytorch repo ref] # [see note: pytorch repo ref]
- name: Checkout PyTorch - name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
- name: Setup Linux - name: Setup Linux
uses: ./.github/actions/setup-linux uses: ./.github/actions/setup-linux
- name: Calculate docker image - name: Calculate docker image
id: calculate-docker-image id: calculate-docker-image
uses: pytorch/test-infra/.github/actions/calculate-docker-image@main uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.3
with: with:
docker-image-name: ${{ inputs.docker-image-name }} docker-image-name: ${{ inputs.docker-image-name }}
- name: Pull docker image - name: Pull docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.3
with: with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }} docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
@ -141,5 +141,5 @@ jobs:
if: always() if: always()
- name: Teardown Linux - name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@main uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.3
if: always() if: always()

@ -37,7 +37,7 @@ jobs:
keep-going: ${{ steps.filter.outputs.keep-going }} keep-going: ${{ steps.filter.outputs.keep-going }}
steps: steps:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with: with:
fetch-depth: 1 fetch-depth: 1
submodules: false submodules: false
@ -59,25 +59,25 @@ jobs:
runs-on: ${{ matrix.runner }} runs-on: ${{ matrix.runner }}
steps: steps:
- name: Setup SSH (Click me for login details) - name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.3
with: with:
github-secret: ${{ secrets.GITHUB_TOKEN }} github-secret: ${{ secrets.GITHUB_TOKEN }}
# [see note: pytorch repo ref] # [see note: pytorch repo ref]
- name: Checkout PyTorch - name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
- name: Setup Linux - name: Setup Linux
uses: ./.github/actions/setup-linux uses: ./.github/actions/setup-linux
- name: Calculate docker image - name: Calculate docker image
id: calculate-docker-image id: calculate-docker-image
uses: pytorch/test-infra/.github/actions/calculate-docker-image@main uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.3
with: with:
docker-image-name: ${{ inputs.docker-image-name }} docker-image-name: ${{ inputs.docker-image-name }}
- name: Pull docker image - name: Pull docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.3
with: with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }} docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
@ -186,5 +186,5 @@ jobs:
if: always() if: always()
- name: Teardown Linux - name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@main uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.3
if: always() if: always()

@ -42,7 +42,7 @@ jobs:
reenabled-issues: ${{ steps.filter.outputs.reenabled-issues }} reenabled-issues: ${{ steps.filter.outputs.reenabled-issues }}
steps: steps:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with: with:
fetch-depth: 1 fetch-depth: 1
submodules: false submodules: false
@ -64,30 +64,30 @@ jobs:
runs-on: ${{ matrix.runner }} runs-on: ${{ matrix.runner }}
steps: steps:
- name: Setup SSH (Click me for login details) - name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.3
with: with:
github-secret: ${{ secrets.GITHUB_TOKEN }} github-secret: ${{ secrets.GITHUB_TOKEN }}
# [see note: pytorch repo ref] # [see note: pytorch repo ref]
- name: Checkout PyTorch - name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
- name: Setup Linux - name: Setup Linux
uses: ./.github/actions/setup-linux uses: ./.github/actions/setup-linux
- name: Calculate docker image - name: Calculate docker image
id: calculate-docker-image id: calculate-docker-image
uses: pytorch/test-infra/.github/actions/calculate-docker-image@main uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.3
with: with:
docker-image-name: ${{ inputs.docker-image-name }} docker-image-name: ${{ inputs.docker-image-name }}
- name: Pull docker image - name: Pull docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.3
with: with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }} docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
- name: Install nvidia driver, nvidia-docker runtime, set GPU_FLAG - name: Install nvidia driver, nvidia-docker runtime, set GPU_FLAG
uses: pytorch/test-infra/.github/actions/setup-nvidia@main uses: pytorch/test-infra/.github/actions/setup-nvidia@release/2.3
if: ${{ inputs.cuda-version != 'cpu' }} if: ${{ inputs.cuda-version != 'cpu' }}
- name: Output disk space left - name: Output disk space left
@ -196,5 +196,5 @@ jobs:
file-suffix: bazel-${{ github.job }}_${{ steps.get-job-id.outputs.job-id }} file-suffix: bazel-${{ github.job }}_${{ steps.get-job-id.outputs.job-id }}
- name: Teardown Linux - name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@main uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.3
if: always() if: always()

@ -78,7 +78,7 @@ on:
jobs: jobs:
build: build:
runs-on: ${{ inputs.runs_on }} runs-on: ${{ inputs.runs_on }}
timeout-minutes: 180 timeout-minutes: 210
env: env:
PYTORCH_ROOT: ${{ inputs.PYTORCH_ROOT }} PYTORCH_ROOT: ${{ inputs.PYTORCH_ROOT }}
BUILDER_ROOT: ${{ inputs.BUILDER_ROOT }} BUILDER_ROOT: ${{ inputs.BUILDER_ROOT }}
@ -139,13 +139,13 @@ jobs:
run: env run: env
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)" - name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: pytorch/test-infra/.github/actions/setup-ssh@main uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.3
continue-on-error: true continue-on-error: true
with: with:
github-secret: ${{ secrets.github-token }} github-secret: ${{ secrets.github-token }}
- name: Checkout PyTorch - name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with: with:
no-sudo: ${{ inputs.build_environment == 'linux-aarch64-binary-manywheel' }} no-sudo: ${{ inputs.build_environment == 'linux-aarch64-binary-manywheel' }}
@ -173,7 +173,6 @@ jobs:
- name: Checkout PyTorch to pytorch dir - name: Checkout PyTorch to pytorch dir
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -187,7 +186,7 @@ jobs:
- name: Checkout pytorch/builder to builder dir - name: Checkout pytorch/builder to builder dir
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.3
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -213,7 +212,7 @@ jobs:
- name: Pull Docker image - name: Pull Docker image
if: ${{ steps.filter.outputs.is-test-matrix-empty == 'False' }} if: ${{ steps.filter.outputs.is-test-matrix-empty == 'False' }}
uses: pytorch/test-infra/.github/actions/pull-docker-image@main uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.3
with: with:
docker-image: ${{ inputs.DOCKER_IMAGE }} docker-image: ${{ inputs.DOCKER_IMAGE }}
@ -270,7 +269,7 @@ jobs:
- name: Teardown Linux - name: Teardown Linux
if: always() if: always()
uses: pytorch/test-infra/.github/actions/teardown-linux@main uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.3
- name: Chown workspace - name: Chown workspace
if: always() if: always()

@ -127,14 +127,14 @@ jobs:
} >> "${GITHUB_ENV} }}" } >> "${GITHUB_ENV} }}"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)" - name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: pytorch/test-infra/.github/actions/setup-ssh@main uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.3
continue-on-error: true continue-on-error: true
with: with:
github-secret: ${{ secrets.github-token }} github-secret: ${{ secrets.github-token }}
# Setup the environment # Setup the environment
- name: Checkout PyTorch - name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with: with:
no-sudo: ${{ inputs.build_environment == 'linux-aarch64-binary-manywheel' }} no-sudo: ${{ inputs.build_environment == 'linux-aarch64-binary-manywheel' }}
@ -155,7 +155,6 @@ jobs:
- name: Checkout PyTorch to pytorch dir - name: Checkout PyTorch to pytorch dir
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
@ -168,7 +167,7 @@ jobs:
- name: Checkout pytorch/builder to builder dir - name: Checkout pytorch/builder to builder dir
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.3
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -199,12 +198,12 @@ jobs:
path: "${{ runner.temp }}/artifacts/" path: "${{ runner.temp }}/artifacts/"
- name: Install nvidia driver, nvidia-docker runtime, set GPU_FLAG - name: Install nvidia driver, nvidia-docker runtime, set GPU_FLAG
uses: pytorch/test-infra/.github/actions/setup-nvidia@main uses: pytorch/test-infra/.github/actions/setup-nvidia@release/2.3
if: ${{ inputs.GPU_ARCH_TYPE == 'cuda' && steps.filter.outputs.is-test-matrix-empty == 'False' }} if: ${{ inputs.GPU_ARCH_TYPE == 'cuda' && steps.filter.outputs.is-test-matrix-empty == 'False' }}
- name: Pull Docker image - name: Pull Docker image
if: ${{ steps.filter.outputs.is-test-matrix-empty == 'False' }} if: ${{ steps.filter.outputs.is-test-matrix-empty == 'False' }}
uses: pytorch/test-infra/.github/actions/pull-docker-image@main uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.3
with: with:
docker-image: ${{ inputs.DOCKER_IMAGE }} docker-image: ${{ inputs.DOCKER_IMAGE }}
@ -214,7 +213,7 @@ jobs:
- name: Teardown Linux - name: Teardown Linux
if: always() if: always()
uses: pytorch/test-infra/.github/actions/teardown-linux@main uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.3
- name: Chown workspace - name: Chown workspace
if: always() if: always()

@ -95,7 +95,7 @@ jobs:
SHA1: ${{ github.event.pull_request.head.sha || github.sha }} SHA1: ${{ github.event.pull_request.head.sha || github.sha }}
steps: steps:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with: with:
no-sudo: true no-sudo: true

@ -23,7 +23,7 @@ jobs:
keep-going: ${{ steps.filter.outputs.keep-going }} keep-going: ${{ steps.filter.outputs.keep-going }}
steps: steps:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with: with:
fetch-depth: 1 fetch-depth: 1
submodules: false submodules: false
@ -44,7 +44,7 @@ jobs:
runs-on: ${{ matrix.runner }} runs-on: ${{ matrix.runner }}
steps: steps:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
- name: Set up JDK 8 - name: Set up JDK 8
uses: actions/setup-java@v3 uses: actions/setup-java@v3
@ -53,7 +53,7 @@ jobs:
distribution: 'temurin' distribution: 'temurin'
- name: Setup miniconda - name: Setup miniconda
uses: pytorch/test-infra/.github/actions/setup-miniconda@main uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.3
with: with:
python-version: 3.8 python-version: 3.8
environment-file: .github/requirements/conda-env-${{ runner.os }}-${{ runner.arch }} environment-file: .github/requirements/conda-env-${{ runner.os }}-${{ runner.arch }}

@ -66,7 +66,7 @@ jobs:
name: build-docs-${{ matrix.docs_type }}-${{ inputs.push }} name: build-docs-${{ matrix.docs_type }}-${{ inputs.push }}
steps: steps:
- name: Setup SSH (Click me for login details) - name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.3
with: with:
github-secret: ${{ secrets.GITHUB_TOKEN }} github-secret: ${{ secrets.GITHUB_TOKEN }}
instructions: | instructions: |
@ -77,19 +77,19 @@ jobs:
# [see note: pytorch repo ref] # [see note: pytorch repo ref]
- name: Checkout PyTorch - name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
- name: Setup Linux - name: Setup Linux
uses: ./.github/actions/setup-linux uses: ./.github/actions/setup-linux
- name: Calculate docker image - name: Calculate docker image
id: calculate-docker-image id: calculate-docker-image
uses: pytorch/test-infra/.github/actions/calculate-docker-image@main uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.3
with: with:
docker-image-name: ${{ inputs.docker-image }} docker-image-name: ${{ inputs.docker-image }}
- name: Pull docker image - name: Pull docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.3
with: with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }} docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
@ -163,7 +163,7 @@ jobs:
retention-days: 14 retention-days: 14
s3-bucket: doc-previews s3-bucket: doc-previews
if-no-files-found: error if-no-files-found: error
path: pytorch.github.io/docs/main/ path: pytorch_docs/main/
s3-prefix: pytorch/pytorch/${{ github.event.pull_request.number }} s3-prefix: pytorch/pytorch/${{ github.event.pull_request.number }}
- name: Upload C++ Docs Preview - name: Upload C++ Docs Preview
@ -187,5 +187,5 @@ jobs:
s3-prefix: pytorch/pytorch/${{ github.event.pull_request.number }}/functorchdocs s3-prefix: pytorch/pytorch/${{ github.event.pull_request.number }}/functorchdocs
- name: Teardown Linux - name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@main uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.3
if: always() if: always()

@ -46,7 +46,7 @@ jobs:
keep-going: ${{ steps.filter.outputs.keep-going }} keep-going: ${{ steps.filter.outputs.keep-going }}
steps: steps:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with: with:
fetch-depth: 1 fetch-depth: 1
submodules: false submodules: false
@ -80,7 +80,7 @@ jobs:
steps: steps:
# [see note: pytorch repo ref] # [see note: pytorch repo ref]
- name: Checkout PyTorch - name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
- name: Populate CI build options - name: Populate CI build options
shell: bash shell: bash
@ -102,7 +102,7 @@ jobs:
brew install libtool brew install libtool
- name: Setup miniconda for iOS - name: Setup miniconda for iOS
uses: pytorch/test-infra/.github/actions/setup-miniconda@main uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.3
with: with:
python-version: "3.9" python-version: "3.9"
environment-file: .github/requirements/conda-env-iOS.txt environment-file: .github/requirements/conda-env-iOS.txt

@ -73,7 +73,7 @@ jobs:
test-matrix: ${{ steps.filter.outputs.test-matrix }} test-matrix: ${{ steps.filter.outputs.test-matrix }}
steps: steps:
- name: Setup SSH (Click me for login details) - name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.3
with: with:
github-secret: ${{ secrets.GITHUB_TOKEN }} github-secret: ${{ secrets.GITHUB_TOKEN }}
@ -82,14 +82,14 @@ jobs:
# checkout because when we run this action we don't *have* a local # checkout because when we run this action we don't *have* a local
# checkout. In other cases you should prefer a local checkout. # checkout. In other cases you should prefer a local checkout.
- name: Checkout PyTorch - name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
- name: Setup Linux - name: Setup Linux
uses: ./.github/actions/setup-linux uses: ./.github/actions/setup-linux
- name: Calculate docker image - name: Calculate docker image
id: calculate-docker-image id: calculate-docker-image
uses: pytorch/test-infra/.github/actions/calculate-docker-image@main uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.3
with: with:
docker-image-name: ${{ inputs.docker-image-name }} docker-image-name: ${{ inputs.docker-image-name }}
@ -103,7 +103,7 @@ jobs:
echo "docker pull ghcr.io/pytorch/ci-image:${tag/:/-}" echo "docker pull ghcr.io/pytorch/ci-image:${tag/:/-}"
- name: Pull docker image - name: Pull docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.3
with: with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }} docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
@ -209,5 +209,5 @@ jobs:
path: sccache-stats-*.json path: sccache-stats-*.json
- name: Teardown Linux - name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@main uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.3
if: always() if: always()

@ -57,7 +57,7 @@ jobs:
timeout-minutes: ${{ matrix.mem_leak_check == 'mem_leak_check' && 600 || inputs.timeout-minutes }} timeout-minutes: ${{ matrix.mem_leak_check == 'mem_leak_check' && 600 || inputs.timeout-minutes }}
steps: steps:
- name: Setup SSH (Click me for login details) - name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.3
if: ${{ !contains(matrix.runner, 'gcp.a100') }} if: ${{ !contains(matrix.runner, 'gcp.a100') }}
with: with:
github-secret: ${{ secrets.GITHUB_TOKEN }} github-secret: ${{ secrets.GITHUB_TOKEN }}
@ -66,14 +66,14 @@ jobs:
docker exec -it $(docker container ps --format '{{.ID}}') bash docker exec -it $(docker container ps --format '{{.ID}}') bash
- name: Checkout PyTorch - name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
- name: Setup Linux - name: Setup Linux
uses: ./.github/actions/setup-linux uses: ./.github/actions/setup-linux
- name: Calculate docker image - name: Calculate docker image
id: calculate-docker-image id: calculate-docker-image
uses: pytorch/test-infra/.github/actions/calculate-docker-image@main uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.3
with: with:
docker-image-name: ${{ inputs.docker-image }} docker-image-name: ${{ inputs.docker-image }}
@ -87,13 +87,13 @@ jobs:
echo "docker pull ghcr.io/pytorch/ci-image:${tag/:/-}" echo "docker pull ghcr.io/pytorch/ci-image:${tag/:/-}"
- name: Pull docker image - name: Pull docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.3
with: with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }} docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
- name: Install nvidia driver, nvidia-docker runtime, set GPU_FLAG - name: Install nvidia driver, nvidia-docker runtime, set GPU_FLAG
id: install-nvidia-driver id: install-nvidia-driver
uses: pytorch/test-infra/.github/actions/setup-nvidia@main uses: pytorch/test-infra/.github/actions/setup-nvidia@release/2.3
if: contains(inputs.build-environment, 'cuda') && !contains(matrix.config, 'nogpu') if: contains(inputs.build-environment, 'cuda') && !contains(matrix.config, 'nogpu')
- name: Lock NVIDIA A100 40GB Frequency - name: Lock NVIDIA A100 40GB Frequency
@ -307,7 +307,7 @@ jobs:
path: ./**/core.[1-9]* path: ./**/core.[1-9]*
- name: Teardown Linux - name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@main uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.3
if: always() if: always()
# NB: We are currently having an intermittent GPU-related issue on G5 runners with # NB: We are currently having an intermittent GPU-related issue on G5 runners with

@ -71,11 +71,11 @@ jobs:
test-matrix: ${{ steps.filter.outputs.test-matrix }} test-matrix: ${{ steps.filter.outputs.test-matrix }}
steps: steps:
- name: Clean up disk space before running MacOS workflow - name: Clean up disk space before running MacOS workflow
uses: pytorch/test-infra/.github/actions/check-disk-space@main uses: pytorch/test-infra/.github/actions/check-disk-space@release/2.3
# [see note: pytorch repo ref] # [see note: pytorch repo ref]
- name: Checkout PyTorch - name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
- name: Set xcode version - name: Set xcode version
env: env:
@ -87,7 +87,7 @@ jobs:
- name: Setup miniconda - name: Setup miniconda
if: inputs.environment-file == '' if: inputs.environment-file == ''
uses: pytorch/test-infra/.github/actions/setup-miniconda@main uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.3
with: with:
python-version: ${{ inputs.python-version }} python-version: ${{ inputs.python-version }}
environment-file: .github/requirements/conda-env-${{ runner.os }}-${{ runner.arch }} environment-file: .github/requirements/conda-env-${{ runner.os }}-${{ runner.arch }}
@ -97,7 +97,7 @@ jobs:
# environment even though the arch is x86-64 # environment even though the arch is x86-64
- name: Setup miniconda using the provided environment file - name: Setup miniconda using the provided environment file
if: inputs.environment-file != '' if: inputs.environment-file != ''
uses: pytorch/test-infra/.github/actions/setup-miniconda@main uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.3
with: with:
python-version: ${{ inputs.python-version }} python-version: ${{ inputs.python-version }}
environment-file: ${{ inputs.environment-file }} environment-file: ${{ inputs.environment-file }}
@ -207,4 +207,4 @@ jobs:
- name: Clean up disk space - name: Clean up disk space
if: always() if: always()
continue-on-error: true continue-on-error: true
uses: pytorch/test-infra/.github/actions/check-disk-space@main uses: pytorch/test-infra/.github/actions/check-disk-space@release/2.3

@ -40,7 +40,7 @@ jobs:
reenabled-issues: ${{ steps.filter.outputs.reenabled-issues }} reenabled-issues: ${{ steps.filter.outputs.reenabled-issues }}
steps: steps:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with: with:
submodules: false submodules: false
@ -81,7 +81,7 @@ jobs:
use-gha: true use-gha: true
- name: Setup miniconda - name: Setup miniconda
uses: pytorch/test-infra/.github/actions/setup-miniconda@main uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.3
with: with:
python-version: ${{ inputs.python-version }} python-version: ${{ inputs.python-version }}
environment-file: .github/requirements/conda-env-${{ runner.os }}-${{ runner.arch }} environment-file: .github/requirements/conda-env-${{ runner.os }}-${{ runner.arch }}
@ -159,4 +159,4 @@ jobs:
- name: Clean up disk space - name: Clean up disk space
if: always() if: always()
continue-on-error: true continue-on-error: true
uses: pytorch/test-infra/.github/actions/check-disk-space@main uses: pytorch/test-infra/.github/actions/check-disk-space@release/2.3

@ -79,11 +79,11 @@ jobs:
done done
- name: Clean up disk space before running MacOS workflow - name: Clean up disk space before running MacOS workflow
uses: pytorch/test-infra/.github/actions/check-disk-space@main uses: pytorch/test-infra/.github/actions/check-disk-space@release/2.3
# [see note: pytorch repo ref] # [see note: pytorch repo ref]
- name: Checkout PyTorch - name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
- name: Download build artifacts - name: Download build artifacts
uses: ./.github/actions/download-build-artifacts uses: ./.github/actions/download-build-artifacts
@ -98,7 +98,7 @@ jobs:
use-gha: true use-gha: true
- name: Setup miniconda - name: Setup miniconda
uses: pytorch/test-infra/.github/actions/setup-miniconda@main uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.3
with: with:
python-version: ${{ inputs.python-version }} python-version: ${{ inputs.python-version }}
environment-file: .github/requirements/conda-env-${{ runner.os }}-${{ runner.arch }} environment-file: .github/requirements/conda-env-${{ runner.os }}-${{ runner.arch }}
@ -227,4 +227,4 @@ jobs:
- name: Clean up disk space - name: Clean up disk space
if: always() if: always()
continue-on-error: true continue-on-error: true
uses: pytorch/test-infra/.github/actions/check-disk-space@main uses: pytorch/test-infra/.github/actions/check-disk-space@release/2.3

@ -58,7 +58,7 @@ jobs:
steps: steps:
# [see note: pytorch repo ref] # [see note: pytorch repo ref]
- name: Checkout PyTorch - name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with: with:
no-sudo: true no-sudo: true
@ -80,12 +80,12 @@ jobs:
- name: Calculate docker image - name: Calculate docker image
id: calculate-docker-image id: calculate-docker-image
uses: pytorch/test-infra/.github/actions/calculate-docker-image@main uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.3
with: with:
docker-image-name: ${{ inputs.docker-image }} docker-image-name: ${{ inputs.docker-image }}
- name: Pull docker image - name: Pull docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.3
with: with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }} docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}

@ -23,7 +23,7 @@ jobs:
keep-going: ${{ steps.filter.outputs.keep-going }} keep-going: ${{ steps.filter.outputs.keep-going }}
steps: steps:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with: with:
fetch-depth: 1 fetch-depth: 1
submodules: false submodules: false
@ -54,10 +54,10 @@ jobs:
SUPPORT_ABI: '${{ matrix.support_abi }}' SUPPORT_ABI: '${{ matrix.support_abi }}'
steps: steps:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
- name: Setup miniconda - name: Setup miniconda
uses: pytorch/test-infra/.github/actions/setup-miniconda@main uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.3
with: with:
python-version: 3.8 python-version: 3.8
environment-file: .github/requirements/conda-env-${{ runner.os }}-${{ runner.arch }}.txt environment-file: .github/requirements/conda-env-${{ runner.os }}-${{ runner.arch }}.txt

@ -60,10 +60,10 @@ jobs:
git config --global core.fsmonitor false git config --global core.fsmonitor false
- name: Clean up leftover processes on non-ephemeral Windows runner - name: Clean up leftover processes on non-ephemeral Windows runner
uses: pytorch/test-infra/.github/actions/cleanup-runner@main uses: pytorch/test-infra/.github/actions/cleanup-runner@release/2.3
- name: Setup SSH (Click me for login details) - name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.3
with: with:
github-secret: ${{ secrets.GITHUB_TOKEN }} github-secret: ${{ secrets.GITHUB_TOKEN }}
instructions: | instructions: |
@ -78,7 +78,7 @@ jobs:
# [see note: pytorch repo ref] # [see note: pytorch repo ref]
- name: Checkout PyTorch - name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with: with:
no-sudo: true no-sudo: true

@ -54,10 +54,10 @@ jobs:
git config --global core.fsmonitor false git config --global core.fsmonitor false
- name: Clean up leftover processes on non-ephemeral Windows runner - name: Clean up leftover processes on non-ephemeral Windows runner
uses: pytorch/test-infra/.github/actions/cleanup-runner@main uses: pytorch/test-infra/.github/actions/cleanup-runner@release/2.3
- name: Setup SSH (Click me for login details) - name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.3
with: with:
github-secret: ${{ secrets.GITHUB_TOKEN }} github-secret: ${{ secrets.GITHUB_TOKEN }}
instructions: | instructions: |
@ -73,7 +73,7 @@ jobs:
# [see note: pytorch repo ref] # [see note: pytorch repo ref]
- name: Checkout PyTorch - name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with: with:
no-sudo: true no-sudo: true

@ -54,7 +54,7 @@ jobs:
steps: steps:
# [see note: pytorch repo ref] # [see note: pytorch repo ref]
- name: Checkout PyTorch - name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
- name: Setup XPU - name: Setup XPU
uses: ./.github/actions/setup-xpu uses: ./.github/actions/setup-xpu
@ -72,12 +72,12 @@ jobs:
- name: Calculate docker image - name: Calculate docker image
id: calculate-docker-image id: calculate-docker-image
uses: pytorch/test-infra/.github/actions/calculate-docker-image@main uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.3
with: with:
docker-image-name: ${{ inputs.docker-image }} docker-image-name: ${{ inputs.docker-image }}
- name: Pull docker image - name: Pull docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.3
with: with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }} docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}

@ -3,7 +3,7 @@ name: Build Triton wheels
on: on:
push: push:
branches: branches:
- main - release/2.3
tags: tags:
# NOTE: Binary build pipelines should only get triggered on release candidate builds # NOTE: Binary build pipelines should only get triggered on release candidate builds
# Release candidate tags look like: v1.11.0-rc1 # Release candidate tags look like: v1.11.0-rc1
@ -47,12 +47,12 @@ jobs:
BUILD_DEVICE: ${{ matrix.device }} BUILD_DEVICE: ${{ matrix.device }}
steps: steps:
- name: Setup SSH (Click me for login details) - name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.3
with: with:
github-secret: ${{ secrets.GITHUB_TOKEN }} github-secret: ${{ secrets.GITHUB_TOKEN }}
- name: Checkout PyTorch - name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with: with:
submodules: false submodules: false
@ -60,7 +60,7 @@ jobs:
uses: ./.github/actions/setup-linux uses: ./.github/actions/setup-linux
- name: Pull Docker image - name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.3
with: with:
docker-image: ${{ env.DOCKER_IMAGE }} docker-image: ${{ env.DOCKER_IMAGE }}
@ -125,7 +125,7 @@ jobs:
path: ${{ runner.temp }}/artifacts/* path: ${{ runner.temp }}/artifacts/*
- name: Teardown Linux - name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@main uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.3
if: always() if: always()
upload-wheel: upload-wheel:
@ -203,12 +203,12 @@ jobs:
PY_VERS: ${{ matrix.py_vers }} PY_VERS: ${{ matrix.py_vers }}
steps: steps:
- name: Setup SSH (Click me for login details) - name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.3
with: with:
github-secret: ${{ secrets.GITHUB_TOKEN }} github-secret: ${{ secrets.GITHUB_TOKEN }}
- name: Checkout PyTorch - name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with: with:
submodules: false submodules: false
@ -216,7 +216,7 @@ jobs:
uses: ./.github/actions/setup-linux uses: ./.github/actions/setup-linux
- name: Pull Docker image - name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.3
with: with:
docker-image: ${{ env.DOCKER_IMAGE }} docker-image: ${{ env.DOCKER_IMAGE }}
@ -252,7 +252,7 @@ jobs:
path: ${{ runner.temp }}/artifacts/* path: ${{ runner.temp }}/artifacts/*
- name: Teardown Linux - name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@main uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.3
if: always() if: always()
upload-conda: upload-conda:

@ -31,7 +31,7 @@ jobs:
runs-on: linux.20_04.4x runs-on: linux.20_04.4x
steps: steps:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
with: with:
submodules: false submodules: false
fetch-depth: 1 fetch-depth: 1

@ -11,7 +11,7 @@ jobs:
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
- name: Run close_nonexistent_disable_issues.py - name: Run close_nonexistent_disable_issues.py
env: env:

@ -74,21 +74,21 @@ jobs:
# [see note: pytorch repo ref] # [see note: pytorch repo ref]
# deep clone (fetch-depth 0) required for git merge-base # deep clone (fetch-depth 0) required for git merge-base
- name: Checkout PyTorch - name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
- name: Setup Linux - name: Setup Linux
uses: ./.github/actions/setup-linux uses: ./.github/actions/setup-linux
- name: Build docker image - name: Build docker image
id: build-docker-image id: build-docker-image
uses: pytorch/test-infra/.github/actions/calculate-docker-image@main uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.3
with: with:
docker-image-name: ${{ matrix.docker-image-name }} docker-image-name: ${{ matrix.docker-image-name }}
always-rebuild: true always-rebuild: true
push: true push: true
- name: Pull docker image - name: Pull docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.3
with: with:
docker-image: ${{ steps.build-docker-image.outputs.docker-image }} docker-image: ${{ steps.build-docker-image.outputs.docker-image }}
@ -120,5 +120,5 @@ jobs:
if: always() if: always()
- name: Teardown Linux - name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@main uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.3
if: always() if: always()

@@ -7,6 +7,7 @@ on:
 - Dockerfile
 - docker.Makefile
 - .github/workflows/docker-release.yml
+- .github/scripts/generate_docker_release_matrix.py
 push:
 branches:
 - nightly
@@ -40,7 +41,7 @@ jobs:
 matrix: ${{ steps.generate-matrix.outputs.matrix }}
 steps:
 - name: Checkout PyTorch
-uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
 with:
 fetch-depth: 1
 submodules: true
@@ -68,7 +69,7 @@ jobs:
 CUDNN_VERSION: ${{ matrix.cudnn_version }}
 steps:
 - name: Setup SSH (Click me for login details)
-uses: pytorch/test-infra/.github/actions/setup-ssh@main
+uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.3
 with:
 github-secret: ${{ secrets.GITHUB_TOKEN }}
 # [see note: pytorch repo ref]
@@ -129,17 +130,27 @@ jobs:
 if: ${{ github.event.ref == 'refs/heads/nightly' && matrix.image_type == 'runtime' }}
 run: |
 PYTORCH_DOCKER_TAG="${PYTORCH_VERSION}-cuda${CUDA_VERSION_SHORT}-cudnn${CUDNN_VERSION}-runtime"
+CUDA_SUFFIX="-cu${CUDA_VERSION}"
+if [[ ${CUDA_VERSION_SHORT} == "cpu" ]]; then
+PYTORCH_DOCKER_TAG="${PYTORCH_VERSION}-runtime"
+CUDA_SUFFIX=""
+fi
 PYTORCH_NIGHTLY_COMMIT=$(docker run ghcr.io/pytorch/pytorch-nightly:"${PYTORCH_DOCKER_TAG}" \
 python -c 'import torch; print(torch.version.git_version[:7],end="")')
 docker tag ghcr.io/pytorch/pytorch-nightly:"${PYTORCH_DOCKER_TAG}" \
-ghcr.io/pytorch/pytorch-nightly:"${PYTORCH_NIGHTLY_COMMIT}-cu${CUDA_VERSION}"
-docker push ghcr.io/pytorch/pytorch-nightly:"${PYTORCH_NIGHTLY_COMMIT}-cu${CUDA_VERSION}"
+ghcr.io/pytorch/pytorch-nightly:"${PYTORCH_NIGHTLY_COMMIT}${CUDA_SUFFIX}"
+docker push ghcr.io/pytorch/pytorch-nightly:"${PYTORCH_NIGHTLY_COMMIT}${CUDA_SUFFIX}"
+# Please note, here we ned to pin specific verison of CUDA as with latest label
+if [[ ${CUDA_VERSION_SHORT} == "12.1" ]]; then
+docker tag ghcr.io/pytorch/pytorch-nightly:"${PYTORCH_NIGHTLY_COMMIT}${CUDA_SUFFIX}" \
+ghcr.io/pytorch/pytorch-nightly:latest
+docker push ghcr.io/pytorch/pytorch-nightly:latest
+fi
-docker tag ghcr.io/pytorch/pytorch-nightly:"${PYTORCH_NIGHTLY_COMMIT}-cu${CUDA_VERSION}" \
-ghcr.io/pytorch/pytorch-nightly:latest
-docker push ghcr.io/pytorch/pytorch-nightly:latest
 - name: Teardown Linux
-uses: pytorch/test-infra/.github/actions/teardown-linux@main
+uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.3
 if: always()
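
The added shell logic boils down to two rules: CPU images get a tag with no -cuXX.Y suffix, and only the CUDA 12.1 runtime image is additionally re-tagged and pushed as latest. A small Python restatement of the tag derivation (function name and sample values are illustrative):

    def nightly_tags(version: str, cuda_short: str, cuda_full: str,
                     cudnn: str, commit: str) -> list:
        if cuda_short == "cpu":
            base, suffix = f"{version}-runtime", ""
        else:
            base = f"{version}-cuda{cuda_short}-cudnn{cudnn}-runtime"
            suffix = f"-cu{cuda_full}"
        tags = [base, f"{commit}{suffix}"]
        if cuda_short == "12.1":  # only this image also becomes :latest
            tags.append("latest")
        return tags

    # nightly_tags("2.3.0.dev20240101", "12.1", "12.1.1", "8", "abc1234")
    # -> ['2.3.0.dev20240101-cuda12.1-cudnn8-runtime', 'abc1234-cu12.1.1', 'latest']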

@ -48,7 +48,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64 GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.3
DESIRED_PYTHON: "3.8" DESIRED_PYTHON: "3.8"
runs_on: linux.arm64.2xlarge runs_on: linux.arm64.2xlarge
ALPINE_IMAGE: "arm64v8/alpine" ALPINE_IMAGE: "arm64v8/alpine"
@ -69,7 +69,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64 GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.3
DESIRED_PYTHON: "3.8" DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cpu-aarch64 build_name: manywheel-py3_8-cpu-aarch64
build_environment: linux-aarch64-binary-manywheel build_environment: linux-aarch64-binary-manywheel
@ -91,7 +91,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64 GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.3
DESIRED_PYTHON: "3.8" DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cpu-aarch64 build_name: manywheel-py3_8-cpu-aarch64
secrets: secrets:
@ -111,7 +111,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64 GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.3
DESIRED_PYTHON: "3.9" DESIRED_PYTHON: "3.9"
runs_on: linux.arm64.2xlarge runs_on: linux.arm64.2xlarge
ALPINE_IMAGE: "arm64v8/alpine" ALPINE_IMAGE: "arm64v8/alpine"
@ -132,7 +132,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64 GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.3
DESIRED_PYTHON: "3.9" DESIRED_PYTHON: "3.9"
build_name: manywheel-py3_9-cpu-aarch64 build_name: manywheel-py3_9-cpu-aarch64
build_environment: linux-aarch64-binary-manywheel build_environment: linux-aarch64-binary-manywheel
@ -154,7 +154,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64 GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.3
DESIRED_PYTHON: "3.9" DESIRED_PYTHON: "3.9"
build_name: manywheel-py3_9-cpu-aarch64 build_name: manywheel-py3_9-cpu-aarch64
secrets: secrets:
@ -174,7 +174,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64 GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.3
DESIRED_PYTHON: "3.10" DESIRED_PYTHON: "3.10"
runs_on: linux.arm64.2xlarge runs_on: linux.arm64.2xlarge
ALPINE_IMAGE: "arm64v8/alpine" ALPINE_IMAGE: "arm64v8/alpine"
@ -195,7 +195,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64 GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.3
DESIRED_PYTHON: "3.10" DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-cpu-aarch64 build_name: manywheel-py3_10-cpu-aarch64
build_environment: linux-aarch64-binary-manywheel build_environment: linux-aarch64-binary-manywheel
@ -217,7 +217,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64 GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.3
DESIRED_PYTHON: "3.10" DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-cpu-aarch64 build_name: manywheel-py3_10-cpu-aarch64
secrets: secrets:
@ -237,7 +237,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64 GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.3
DESIRED_PYTHON: "3.11" DESIRED_PYTHON: "3.11"
runs_on: linux.arm64.2xlarge runs_on: linux.arm64.2xlarge
ALPINE_IMAGE: "arm64v8/alpine" ALPINE_IMAGE: "arm64v8/alpine"
@ -258,7 +258,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64 GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.3
DESIRED_PYTHON: "3.11" DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-cpu-aarch64 build_name: manywheel-py3_11-cpu-aarch64
build_environment: linux-aarch64-binary-manywheel build_environment: linux-aarch64-binary-manywheel
@ -280,7 +280,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64 GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.3
DESIRED_PYTHON: "3.11" DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-cpu-aarch64 build_name: manywheel-py3_11-cpu-aarch64
secrets: secrets:
@ -300,7 +300,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64 GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.3
DESIRED_PYTHON: "3.12" DESIRED_PYTHON: "3.12"
runs_on: linux.arm64.2xlarge runs_on: linux.arm64.2xlarge
ALPINE_IMAGE: "arm64v8/alpine" ALPINE_IMAGE: "arm64v8/alpine"
@ -321,7 +321,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64 GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.3
DESIRED_PYTHON: "3.12" DESIRED_PYTHON: "3.12"
build_name: manywheel-py3_12-cpu-aarch64 build_name: manywheel-py3_12-cpu-aarch64
build_environment: linux-aarch64-binary-manywheel build_environment: linux-aarch64-binary-manywheel
@ -343,7 +343,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64 GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.3
DESIRED_PYTHON: "3.12" DESIRED_PYTHON: "3.12"
build_name: manywheel-py3_12-cpu-aarch64 build_name: manywheel-py3_12-cpu-aarch64
secrets: secrets:

@@ -48,7 +48,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
- DOCKER_IMAGE: pytorch/conda-builder:cpu-main
+ DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.8"
build_name: conda-py3_8-cpu
build_environment: linux-binary-conda
@@ -66,7 +66,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
- DOCKER_IMAGE: pytorch/conda-builder:cpu-main
+ DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.8"
build_name: conda-py3_8-cpu
build_environment: linux-binary-conda
@@ -87,7 +87,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
- DOCKER_IMAGE: pytorch/conda-builder:cpu-main
+ DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.8"
build_name: conda-py3_8-cpu
secrets:
@@ -108,7 +108,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
+ DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.8"
runs_on: linux.24xlarge
build_name: conda-py3_8-cuda11_8
@@ -128,7 +128,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
+ DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.8"
build_name: conda-py3_8-cuda11_8
build_environment: linux-binary-conda
@@ -150,7 +150,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
+ DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.8"
build_name: conda-py3_8-cuda11_8
secrets:
@@ -171,7 +171,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
+ DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.8"
runs_on: linux.24xlarge
build_name: conda-py3_8-cuda12_1
@@ -191,7 +191,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
+ DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.8"
build_name: conda-py3_8-cuda12_1
build_environment: linux-binary-conda
@@ -213,7 +213,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
+ DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.8"
build_name: conda-py3_8-cuda12_1
secrets:
@@ -233,7 +233,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
- DOCKER_IMAGE: pytorch/conda-builder:cpu-main
+ DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.9"
build_name: conda-py3_9-cpu
build_environment: linux-binary-conda
@@ -251,7 +251,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
- DOCKER_IMAGE: pytorch/conda-builder:cpu-main
+ DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.9"
build_name: conda-py3_9-cpu
build_environment: linux-binary-conda
@@ -272,7 +272,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
- DOCKER_IMAGE: pytorch/conda-builder:cpu-main
+ DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.9"
build_name: conda-py3_9-cpu
secrets:
@@ -293,7 +293,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
+ DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.9"
runs_on: linux.24xlarge
build_name: conda-py3_9-cuda11_8
@@ -313,7 +313,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
+ DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.9"
build_name: conda-py3_9-cuda11_8
build_environment: linux-binary-conda
@@ -335,7 +335,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
+ DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.9"
build_name: conda-py3_9-cuda11_8
secrets:
@@ -356,7 +356,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
+ DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.9"
runs_on: linux.24xlarge
build_name: conda-py3_9-cuda12_1
@@ -376,7 +376,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
+ DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.9"
build_name: conda-py3_9-cuda12_1
build_environment: linux-binary-conda
@@ -398,7 +398,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
+ DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.9"
build_name: conda-py3_9-cuda12_1
secrets:
@@ -418,7 +418,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
- DOCKER_IMAGE: pytorch/conda-builder:cpu-main
+ DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.10"
build_name: conda-py3_10-cpu
build_environment: linux-binary-conda
@@ -436,7 +436,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
- DOCKER_IMAGE: pytorch/conda-builder:cpu-main
+ DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.10"
build_name: conda-py3_10-cpu
build_environment: linux-binary-conda
@@ -457,7 +457,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
- DOCKER_IMAGE: pytorch/conda-builder:cpu-main
+ DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.10"
build_name: conda-py3_10-cpu
secrets:
@@ -478,7 +478,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
+ DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.10"
runs_on: linux.24xlarge
build_name: conda-py3_10-cuda11_8
@@ -498,7 +498,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
+ DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.10"
build_name: conda-py3_10-cuda11_8
build_environment: linux-binary-conda
@@ -520,7 +520,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
+ DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.10"
build_name: conda-py3_10-cuda11_8
secrets:
@@ -541,7 +541,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
+ DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.10"
runs_on: linux.24xlarge
build_name: conda-py3_10-cuda12_1
@@ -561,7 +561,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
+ DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.10"
build_name: conda-py3_10-cuda12_1
build_environment: linux-binary-conda
@@ -583,7 +583,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
+ DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.10"
build_name: conda-py3_10-cuda12_1
secrets:
@@ -603,7 +603,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
- DOCKER_IMAGE: pytorch/conda-builder:cpu-main
+ DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.11"
build_name: conda-py3_11-cpu
build_environment: linux-binary-conda
@@ -621,7 +621,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
- DOCKER_IMAGE: pytorch/conda-builder:cpu-main
+ DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.11"
build_name: conda-py3_11-cpu
build_environment: linux-binary-conda
@@ -642,7 +642,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
- DOCKER_IMAGE: pytorch/conda-builder:cpu-main
+ DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.11"
build_name: conda-py3_11-cpu
secrets:
@@ -663,7 +663,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
+ DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.11"
runs_on: linux.24xlarge
build_name: conda-py3_11-cuda11_8
@@ -683,7 +683,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
+ DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.11"
build_name: conda-py3_11-cuda11_8
build_environment: linux-binary-conda
@@ -705,7 +705,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
+ DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.11"
build_name: conda-py3_11-cuda11_8
secrets:
@@ -726,7 +726,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
+ DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.11"
runs_on: linux.24xlarge
build_name: conda-py3_11-cuda12_1
@@ -746,7 +746,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
+ DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.11"
build_name: conda-py3_11-cuda12_1
build_environment: linux-binary-conda
@@ -768,7 +768,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
+ DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.11"
build_name: conda-py3_11-cuda12_1
secrets:
@@ -788,7 +788,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
- DOCKER_IMAGE: pytorch/conda-builder:cpu-main
+ DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.12"
build_name: conda-py3_12-cpu
build_environment: linux-binary-conda
@@ -806,7 +806,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
- DOCKER_IMAGE: pytorch/conda-builder:cpu-main
+ DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.12"
build_name: conda-py3_12-cpu
build_environment: linux-binary-conda
@@ -827,7 +827,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
- DOCKER_IMAGE: pytorch/conda-builder:cpu-main
+ DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.12"
build_name: conda-py3_12-cpu
secrets:
@@ -848,7 +848,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
+ DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.12"
runs_on: linux.24xlarge
build_name: conda-py3_12-cuda11_8
@@ -868,7 +868,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
+ DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.12"
build_name: conda-py3_12-cuda11_8
build_environment: linux-binary-conda
@@ -890,7 +890,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
+ DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.12"
build_name: conda-py3_12-cuda11_8
secrets:
@@ -911,7 +911,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
+ DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.12"
runs_on: linux.24xlarge
build_name: conda-py3_12-cuda12_1
@@ -931,7 +931,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
+ DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.12"
build_name: conda-py3_12-cuda12_1
build_environment: linux-binary-conda
@@ -953,7 +953,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
+ DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.12"
build_name: conda-py3_12-cuda12_1
secrets:


@@ -43,7 +43,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
- DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-main
+ DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cpu-shared-with-deps-cxx11-abi
@@ -62,7 +62,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
- DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-main
+ DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cpu-shared-with-deps-cxx11-abi


@@ -48,7 +48,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
- DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-main
+ DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cpu-shared-with-deps-cxx11-abi
@@ -67,7 +67,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
- DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-main
+ DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cpu-shared-with-deps-cxx11-abi
@@ -89,7 +89,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
- DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-main
+ DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cpu-shared-with-deps-cxx11-abi
@@ -111,7 +111,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8-main
+ DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda11_8-shared-with-deps-cxx11-abi
@@ -131,7 +131,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8-main
+ DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda11_8-shared-with-deps-cxx11-abi
@@ -154,7 +154,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8-main
+ DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda11_8-shared-with-deps-cxx11-abi
@@ -176,7 +176,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1-main
+ DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda12_1-shared-with-deps-cxx11-abi
@@ -196,7 +196,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1-main
+ DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda12_1-shared-with-deps-cxx11-abi
@@ -219,7 +219,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1-main
+ DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda12_1-shared-with-deps-cxx11-abi
@@ -241,7 +241,7 @@ jobs:
DESIRED_CUDA: rocm5.7
GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm
- DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm5.7-main
+ DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm5.7-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-rocm5_7-shared-with-deps-cxx11-abi
@@ -263,7 +263,7 @@ jobs:
GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1
- DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm5.7-main
+ DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm5.7-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
steps:
@@ -277,7 +277,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
- ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -289,7 +288,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
- ref: main
+ ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@@ -305,7 +304,7 @@ jobs:
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
with:
- docker-image: pytorch/libtorch-cxx11-builder:rocm5.7-main
+ docker-image: pytorch/libtorch-cxx11-builder:rocm5.7-2.3
- name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm
@@ -325,7 +324,7 @@ jobs:
DESIRED_CUDA: rocm5.7
GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm
- DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm5.7-main
+ DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm5.7-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-rocm5_7-shared-with-deps-cxx11-abi
@@ -347,7 +346,7 @@ jobs:
DESIRED_CUDA: rocm6.0
GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm
- DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm6.0-main
+ DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm6.0-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-rocm6_0-shared-with-deps-cxx11-abi
@@ -369,7 +368,7 @@ jobs:
GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1
- DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm6.0-main
+ DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm6.0-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
steps:
@@ -383,7 +382,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
- ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -395,7 +393,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
- ref: main
+ ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@@ -411,7 +409,7 @@ jobs:
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
with:
- docker-image: pytorch/libtorch-cxx11-builder:rocm6.0-main
+ docker-image: pytorch/libtorch-cxx11-builder:rocm6.0-2.3
- name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm
@@ -431,7 +429,7 @@ jobs:
DESIRED_CUDA: rocm6.0
GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm
- DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm6.0-main
+ DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm6.0-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-rocm6_0-shared-with-deps-cxx11-abi


@@ -43,7 +43,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
- DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cpu-shared-with-deps-pre-cxx11
@@ -62,7 +62,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
- DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cpu-shared-with-deps-pre-cxx11


@@ -48,7 +48,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
- DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cpu-shared-with-deps-pre-cxx11
@@ -67,7 +67,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
- DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cpu-shared-with-deps-pre-cxx11
@@ -89,7 +89,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
- DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cpu-shared-with-deps-pre-cxx11
@@ -111,7 +111,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda11_8-shared-with-deps-pre-cxx11
@@ -131,7 +131,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda11_8-shared-with-deps-pre-cxx11
@@ -154,7 +154,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda11_8-shared-with-deps-pre-cxx11
@@ -176,7 +176,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda12_1-shared-with-deps-pre-cxx11
@@ -196,7 +196,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda12_1-shared-with-deps-pre-cxx11
@@ -219,7 +219,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda12_1-shared-with-deps-pre-cxx11
@@ -241,7 +241,7 @@ jobs:
DESIRED_CUDA: rocm5.7
GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm
- DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-rocm5_7-shared-with-deps-pre-cxx11
@@ -263,7 +263,7 @@ jobs:
GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1
- DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
steps:
@@ -277,7 +277,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
- ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -289,7 +288,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
- ref: main
+ ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@@ -305,7 +304,7 @@ jobs:
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
with:
- docker-image: pytorch/manylinux-builder:rocm5.7-main
+ docker-image: pytorch/manylinux-builder:rocm5.7-2.3
- name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm
@@ -325,7 +324,7 @@ jobs:
DESIRED_CUDA: rocm5.7
GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm
- DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-rocm5_7-shared-with-deps-pre-cxx11
@@ -347,7 +346,7 @@ jobs:
DESIRED_CUDA: rocm6.0
GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm
- DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-rocm6_0-shared-with-deps-pre-cxx11
@@ -369,7 +368,7 @@ jobs:
GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1
- DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
steps:
@@ -383,7 +382,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
- ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -395,7 +393,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
- ref: main
+ ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@@ -411,7 +409,7 @@ jobs:
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
with:
- docker-image: pytorch/manylinux-builder:rocm6.0-main
+ docker-image: pytorch/manylinux-builder:rocm6.0-2.3
- name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm
@@ -431,7 +429,7 @@ jobs:
DESIRED_CUDA: rocm6.0
GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm
- DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-2.3
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-rocm6_0-shared-with-deps-pre-cxx11


@@ -44,7 +44,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cuda11_8
build_environment: linux-binary-manywheel
@@ -64,7 +64,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cuda11_8
build_environment: linux-binary-manywheel
@@ -84,7 +84,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cuda12_1
build_environment: linux-binary-manywheel
@@ -104,7 +104,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cuda12_1
build_environment: linux-binary-manywheel


@@ -48,7 +48,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
- DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cpu
build_environment: linux-binary-manywheel
@@ -66,7 +66,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
- DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cpu
build_environment: linux-binary-manywheel
@@ -87,7 +87,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
- DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cpu
secrets:
@@ -107,7 +107,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu-cxx11-abi
GPU_ARCH_TYPE: cpu-cxx11-abi
- DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-main
+ DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-2.3
DESIRED_DEVTOOLSET: cxx11-abi
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cpu-cxx11-abi
@@ -126,7 +126,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu-cxx11-abi
GPU_ARCH_TYPE: cpu-cxx11-abi
- DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-main
+ DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-2.3
DESIRED_DEVTOOLSET: cxx11-abi
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cpu-cxx11-abi
@@ -148,7 +148,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu-cxx11-abi
GPU_ARCH_TYPE: cpu-cxx11-abi
- DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-main
+ DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-2.3
DESIRED_DEVTOOLSET: cxx11-abi
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cpu-cxx11-abi
@@ -170,7 +170,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cuda11_8
build_environment: linux-binary-manywheel
@@ -190,7 +190,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cuda11_8
build_environment: linux-binary-manywheel
@@ -212,7 +212,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cuda11_8
secrets:
@@ -233,7 +233,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cuda12_1
build_environment: linux-binary-manywheel
@@ -253,7 +253,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cuda12_1
build_environment: linux-binary-manywheel
@@ -275,7 +275,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cuda12_1
secrets:
@@ -296,7 +296,7 @@ jobs:
DESIRED_CUDA: rocm5.7
GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm
- DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-2.3
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-rocm5_7
build_environment: linux-binary-manywheel
@@ -317,7 +317,7 @@ jobs:
GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1
- DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-2.3
DESIRED_PYTHON: "3.8"
steps:
- name: Setup ROCm
@@ -330,7 +330,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
- ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -342,7 +341,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
- ref: main
+ ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@@ -358,7 +357,7 @@ jobs:
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
with:
- docker-image: pytorch/manylinux-builder:rocm5.7-main
+ docker-image: pytorch/manylinux-builder:rocm5.7-2.3
- name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm
@@ -378,7 +377,7 @@ jobs:
DESIRED_CUDA: rocm5.7
GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm
- DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-2.3
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-rocm5_7
secrets:
@@ -399,7 +398,7 @@ jobs:
DESIRED_CUDA: rocm6.0
GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm
- DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-2.3
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-rocm6_0
build_environment: linux-binary-manywheel
@@ -420,7 +419,7 @@ jobs:
GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1
- DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-2.3
DESIRED_PYTHON: "3.8"
steps:
- name: Setup ROCm
@@ -433,7 +432,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
- ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -445,7 +443,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
- ref: main
+ ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@@ -461,7 +459,7 @@ jobs:
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
with:
- docker-image: pytorch/manylinux-builder:rocm6.0-main
+ docker-image: pytorch/manylinux-builder:rocm6.0-2.3
- name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm
@@ -481,7 +479,7 @@ jobs:
DESIRED_CUDA: rocm6.0
GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm
- DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-2.3
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-rocm6_0
secrets:
@@ -501,7 +499,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
- DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.9"
build_name: manywheel-py3_9-cpu
build_environment: linux-binary-manywheel
@@ -519,7 +517,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
- DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.9"
build_name: manywheel-py3_9-cpu
build_environment: linux-binary-manywheel
@@ -540,7 +538,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
- DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.9"
build_name: manywheel-py3_9-cpu
secrets:
@@ -560,7 +558,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu-cxx11-abi
GPU_ARCH_TYPE: cpu-cxx11-abi
- DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-main
+ DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-2.3
DESIRED_DEVTOOLSET: cxx11-abi
DESIRED_PYTHON: "3.9"
build_name: manywheel-py3_9-cpu-cxx11-abi
@@ -579,7 +577,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu-cxx11-abi
GPU_ARCH_TYPE: cpu-cxx11-abi
- DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-main
+ DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-2.3
DESIRED_DEVTOOLSET: cxx11-abi
DESIRED_PYTHON: "3.9"
build_name: manywheel-py3_9-cpu-cxx11-abi
@@ -601,7 +599,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu-cxx11-abi
GPU_ARCH_TYPE: cpu-cxx11-abi
- DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-main
+ DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-2.3
DESIRED_DEVTOOLSET: cxx11-abi
DESIRED_PYTHON: "3.9"
build_name: manywheel-py3_9-cpu-cxx11-abi
@@ -623,7 +621,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.9"
build_name: manywheel-py3_9-cuda11_8
build_environment: linux-binary-manywheel
@@ -643,7 +641,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.9"
build_name: manywheel-py3_9-cuda11_8
build_environment: linux-binary-manywheel
@@ -665,7 +663,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.9"
build_name: manywheel-py3_9-cuda11_8
secrets:
@@ -686,7 +684,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.9"
build_name: manywheel-py3_9-cuda12_1
build_environment: linux-binary-manywheel
@@ -706,7 +704,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.9"
build_name: manywheel-py3_9-cuda12_1
build_environment: linux-binary-manywheel
@@ -728,7 +726,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
- DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.9"
build_name: manywheel-py3_9-cuda12_1
secrets:
@@ -749,7 +747,7 @@ jobs:
DESIRED_CUDA: rocm5.7
GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm
- DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-2.3
DESIRED_PYTHON: "3.9"
build_name: manywheel-py3_9-rocm5_7
build_environment: linux-binary-manywheel
@@ -770,7 +768,7 @@ jobs:
GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1
- DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-2.3
DESIRED_PYTHON: "3.9"
steps:
- name: Setup ROCm
@@ -783,7 +781,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
- ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -795,7 +792,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
- ref: main
+ ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@@ -811,7 +808,7 @@ jobs:
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
with:
- docker-image: pytorch/manylinux-builder:rocm5.7-main
+ docker-image: pytorch/manylinux-builder:rocm5.7-2.3
- name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm
@@ -831,7 +828,7 @@ jobs:
DESIRED_CUDA: rocm5.7
GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm
- DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-2.3
DESIRED_PYTHON: "3.9"
build_name: manywheel-py3_9-rocm5_7
secrets:
@@ -852,7 +849,7 @@ jobs:
DESIRED_CUDA: rocm6.0
GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm
- DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-2.3
DESIRED_PYTHON: "3.9"
build_name: manywheel-py3_9-rocm6_0
build_environment: linux-binary-manywheel
@@ -873,7 +870,7 @@ jobs:
GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1
- DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-main
+ DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-2.3
DESIRED_PYTHON: "3.9"
steps:
- name: Setup ROCm
@@ -886,7 +883,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
- ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -898,7 +894,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
- ref: main
+ ref: release/2.3
submodules: recursive
repository: pytorch/builder
path: builder
@@ -914,7 +910,7 @@ jobs:
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
with:
- docker-image: pytorch/manylinux-builder:rocm6.0-main
+ docker-image: pytorch/manylinux-builder:rocm6.0-2.3
- name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm - name: Teardown ROCm
@ -934,7 +930,7 @@ jobs:
DESIRED_CUDA: rocm6.0 DESIRED_CUDA: rocm6.0
GPU_ARCH_VERSION: 6.0 GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-main DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-2.3
DESIRED_PYTHON: "3.9" DESIRED_PYTHON: "3.9"
build_name: manywheel-py3_9-rocm6_0 build_name: manywheel-py3_9-rocm6_0
secrets: secrets:
@ -954,7 +950,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.10" DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-cpu build_name: manywheel-py3_10-cpu
build_environment: linux-binary-manywheel build_environment: linux-binary-manywheel
@ -972,7 +968,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.10" DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-cpu build_name: manywheel-py3_10-cpu
build_environment: linux-binary-manywheel build_environment: linux-binary-manywheel
@ -993,7 +989,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.10" DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-cpu build_name: manywheel-py3_10-cpu
secrets: secrets:
@ -1013,7 +1009,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu-cxx11-abi DESIRED_CUDA: cpu-cxx11-abi
GPU_ARCH_TYPE: cpu-cxx11-abi GPU_ARCH_TYPE: cpu-cxx11-abi
DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-main DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-2.3
DESIRED_DEVTOOLSET: cxx11-abi DESIRED_DEVTOOLSET: cxx11-abi
DESIRED_PYTHON: "3.10" DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-cpu-cxx11-abi build_name: manywheel-py3_10-cpu-cxx11-abi
@ -1032,7 +1028,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu-cxx11-abi DESIRED_CUDA: cpu-cxx11-abi
GPU_ARCH_TYPE: cpu-cxx11-abi GPU_ARCH_TYPE: cpu-cxx11-abi
DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-main DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-2.3
DESIRED_DEVTOOLSET: cxx11-abi DESIRED_DEVTOOLSET: cxx11-abi
DESIRED_PYTHON: "3.10" DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-cpu-cxx11-abi build_name: manywheel-py3_10-cpu-cxx11-abi
@ -1054,7 +1050,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu-cxx11-abi DESIRED_CUDA: cpu-cxx11-abi
GPU_ARCH_TYPE: cpu-cxx11-abi GPU_ARCH_TYPE: cpu-cxx11-abi
DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-main DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-2.3
DESIRED_DEVTOOLSET: cxx11-abi DESIRED_DEVTOOLSET: cxx11-abi
DESIRED_PYTHON: "3.10" DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-cpu-cxx11-abi build_name: manywheel-py3_10-cpu-cxx11-abi
@ -1076,7 +1072,7 @@ jobs:
DESIRED_CUDA: cu118 DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8 GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.10" DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-cuda11_8 build_name: manywheel-py3_10-cuda11_8
build_environment: linux-binary-manywheel build_environment: linux-binary-manywheel
@ -1096,7 +1092,7 @@ jobs:
DESIRED_CUDA: cu118 DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8 GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.10" DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-cuda11_8 build_name: manywheel-py3_10-cuda11_8
build_environment: linux-binary-manywheel build_environment: linux-binary-manywheel
@ -1118,7 +1114,7 @@ jobs:
DESIRED_CUDA: cu118 DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8 GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.10" DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-cuda11_8 build_name: manywheel-py3_10-cuda11_8
secrets: secrets:
@ -1139,7 +1135,7 @@ jobs:
DESIRED_CUDA: cu121 DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1 GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.10" DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-cuda12_1 build_name: manywheel-py3_10-cuda12_1
build_environment: linux-binary-manywheel build_environment: linux-binary-manywheel
@ -1159,7 +1155,7 @@ jobs:
DESIRED_CUDA: cu121 DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1 GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.10" DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-cuda12_1 build_name: manywheel-py3_10-cuda12_1
build_environment: linux-binary-manywheel build_environment: linux-binary-manywheel
@ -1181,7 +1177,7 @@ jobs:
DESIRED_CUDA: cu121 DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1 GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.10" DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-cuda12_1 build_name: manywheel-py3_10-cuda12_1
secrets: secrets:
@ -1202,7 +1198,7 @@ jobs:
DESIRED_CUDA: rocm5.7 DESIRED_CUDA: rocm5.7
GPU_ARCH_VERSION: 5.7 GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-main DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-2.3
DESIRED_PYTHON: "3.10" DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-rocm5_7 build_name: manywheel-py3_10-rocm5_7
build_environment: linux-binary-manywheel build_environment: linux-binary-manywheel
@ -1223,7 +1219,7 @@ jobs:
GPU_ARCH_VERSION: 5.7 GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1 SKIP_ALL_TESTS: 1
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-main DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-2.3
DESIRED_PYTHON: "3.10" DESIRED_PYTHON: "3.10"
steps: steps:
- name: Setup ROCm - name: Setup ROCm
@ -1236,7 +1232,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -1248,7 +1243,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.3
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -1264,7 +1259,7 @@ jobs:
- name: Pull Docker image - name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main uses: pytorch/test-infra/.github/actions/pull-docker-image@main
with: with:
docker-image: pytorch/manylinux-builder:rocm5.7-main docker-image: pytorch/manylinux-builder:rocm5.7-2.3
- name: Test Pytorch binary - name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm - name: Teardown ROCm
@ -1284,7 +1279,7 @@ jobs:
DESIRED_CUDA: rocm5.7 DESIRED_CUDA: rocm5.7
GPU_ARCH_VERSION: 5.7 GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-main DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-2.3
DESIRED_PYTHON: "3.10" DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-rocm5_7 build_name: manywheel-py3_10-rocm5_7
secrets: secrets:
@ -1305,7 +1300,7 @@ jobs:
DESIRED_CUDA: rocm6.0 DESIRED_CUDA: rocm6.0
GPU_ARCH_VERSION: 6.0 GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-main DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-2.3
DESIRED_PYTHON: "3.10" DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-rocm6_0 build_name: manywheel-py3_10-rocm6_0
build_environment: linux-binary-manywheel build_environment: linux-binary-manywheel
@ -1326,7 +1321,7 @@ jobs:
GPU_ARCH_VERSION: 6.0 GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1 SKIP_ALL_TESTS: 1
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-main DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-2.3
DESIRED_PYTHON: "3.10" DESIRED_PYTHON: "3.10"
steps: steps:
- name: Setup ROCm - name: Setup ROCm
@ -1339,7 +1334,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -1351,7 +1345,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.3
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -1367,7 +1361,7 @@ jobs:
- name: Pull Docker image - name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main uses: pytorch/test-infra/.github/actions/pull-docker-image@main
with: with:
docker-image: pytorch/manylinux-builder:rocm6.0-main docker-image: pytorch/manylinux-builder:rocm6.0-2.3
- name: Test Pytorch binary - name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm - name: Teardown ROCm
@ -1387,7 +1381,7 @@ jobs:
DESIRED_CUDA: rocm6.0 DESIRED_CUDA: rocm6.0
GPU_ARCH_VERSION: 6.0 GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-main DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-2.3
DESIRED_PYTHON: "3.10" DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-rocm6_0 build_name: manywheel-py3_10-rocm6_0
secrets: secrets:
@ -1407,7 +1401,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.11" DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-cpu build_name: manywheel-py3_11-cpu
build_environment: linux-binary-manywheel build_environment: linux-binary-manywheel
@ -1425,7 +1419,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.11" DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-cpu build_name: manywheel-py3_11-cpu
build_environment: linux-binary-manywheel build_environment: linux-binary-manywheel
@ -1446,7 +1440,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.11" DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-cpu build_name: manywheel-py3_11-cpu
secrets: secrets:
@ -1466,7 +1460,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu-cxx11-abi DESIRED_CUDA: cpu-cxx11-abi
GPU_ARCH_TYPE: cpu-cxx11-abi GPU_ARCH_TYPE: cpu-cxx11-abi
DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-main DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-2.3
DESIRED_DEVTOOLSET: cxx11-abi DESIRED_DEVTOOLSET: cxx11-abi
DESIRED_PYTHON: "3.11" DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-cpu-cxx11-abi build_name: manywheel-py3_11-cpu-cxx11-abi
@ -1485,7 +1479,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu-cxx11-abi DESIRED_CUDA: cpu-cxx11-abi
GPU_ARCH_TYPE: cpu-cxx11-abi GPU_ARCH_TYPE: cpu-cxx11-abi
DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-main DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-2.3
DESIRED_DEVTOOLSET: cxx11-abi DESIRED_DEVTOOLSET: cxx11-abi
DESIRED_PYTHON: "3.11" DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-cpu-cxx11-abi build_name: manywheel-py3_11-cpu-cxx11-abi
@ -1507,7 +1501,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu-cxx11-abi DESIRED_CUDA: cpu-cxx11-abi
GPU_ARCH_TYPE: cpu-cxx11-abi GPU_ARCH_TYPE: cpu-cxx11-abi
DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-main DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-2.3
DESIRED_DEVTOOLSET: cxx11-abi DESIRED_DEVTOOLSET: cxx11-abi
DESIRED_PYTHON: "3.11" DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-cpu-cxx11-abi build_name: manywheel-py3_11-cpu-cxx11-abi
@ -1529,7 +1523,7 @@ jobs:
DESIRED_CUDA: cu118 DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8 GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.11" DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-cuda11_8 build_name: manywheel-py3_11-cuda11_8
build_environment: linux-binary-manywheel build_environment: linux-binary-manywheel
@ -1549,7 +1543,7 @@ jobs:
DESIRED_CUDA: cu118 DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8 GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.11" DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-cuda11_8 build_name: manywheel-py3_11-cuda11_8
build_environment: linux-binary-manywheel build_environment: linux-binary-manywheel
@ -1571,7 +1565,7 @@ jobs:
DESIRED_CUDA: cu118 DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8 GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.11" DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-cuda11_8 build_name: manywheel-py3_11-cuda11_8
secrets: secrets:
@ -1592,7 +1586,7 @@ jobs:
DESIRED_CUDA: cu121 DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1 GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.11" DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-cuda12_1 build_name: manywheel-py3_11-cuda12_1
build_environment: linux-binary-manywheel build_environment: linux-binary-manywheel
@ -1612,7 +1606,7 @@ jobs:
DESIRED_CUDA: cu121 DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1 GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.11" DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-cuda12_1 build_name: manywheel-py3_11-cuda12_1
build_environment: linux-binary-manywheel build_environment: linux-binary-manywheel
@ -1634,7 +1628,7 @@ jobs:
DESIRED_CUDA: cu121 DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1 GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.11" DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-cuda12_1 build_name: manywheel-py3_11-cuda12_1
secrets: secrets:
@ -1655,7 +1649,7 @@ jobs:
DESIRED_CUDA: rocm5.7 DESIRED_CUDA: rocm5.7
GPU_ARCH_VERSION: 5.7 GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-main DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-2.3
DESIRED_PYTHON: "3.11" DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-rocm5_7 build_name: manywheel-py3_11-rocm5_7
build_environment: linux-binary-manywheel build_environment: linux-binary-manywheel
@ -1676,7 +1670,7 @@ jobs:
GPU_ARCH_VERSION: 5.7 GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1 SKIP_ALL_TESTS: 1
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-main DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-2.3
DESIRED_PYTHON: "3.11" DESIRED_PYTHON: "3.11"
steps: steps:
- name: Setup ROCm - name: Setup ROCm
@ -1689,7 +1683,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -1701,7 +1694,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.3
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -1717,7 +1710,7 @@ jobs:
- name: Pull Docker image - name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main uses: pytorch/test-infra/.github/actions/pull-docker-image@main
with: with:
docker-image: pytorch/manylinux-builder:rocm5.7-main docker-image: pytorch/manylinux-builder:rocm5.7-2.3
- name: Test Pytorch binary - name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm - name: Teardown ROCm
@ -1737,7 +1730,7 @@ jobs:
DESIRED_CUDA: rocm5.7 DESIRED_CUDA: rocm5.7
GPU_ARCH_VERSION: 5.7 GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-main DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-2.3
DESIRED_PYTHON: "3.11" DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-rocm5_7 build_name: manywheel-py3_11-rocm5_7
secrets: secrets:
@ -1758,7 +1751,7 @@ jobs:
DESIRED_CUDA: rocm6.0 DESIRED_CUDA: rocm6.0
GPU_ARCH_VERSION: 6.0 GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-main DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-2.3
DESIRED_PYTHON: "3.11" DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-rocm6_0 build_name: manywheel-py3_11-rocm6_0
build_environment: linux-binary-manywheel build_environment: linux-binary-manywheel
@ -1779,7 +1772,7 @@ jobs:
GPU_ARCH_VERSION: 6.0 GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1 SKIP_ALL_TESTS: 1
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-main DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-2.3
DESIRED_PYTHON: "3.11" DESIRED_PYTHON: "3.11"
steps: steps:
- name: Setup ROCm - name: Setup ROCm
@ -1792,7 +1785,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -1804,7 +1796,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.3
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -1820,7 +1812,7 @@ jobs:
- name: Pull Docker image - name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main uses: pytorch/test-infra/.github/actions/pull-docker-image@main
with: with:
docker-image: pytorch/manylinux-builder:rocm6.0-main docker-image: pytorch/manylinux-builder:rocm6.0-2.3
- name: Test Pytorch binary - name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm - name: Teardown ROCm
@ -1840,7 +1832,7 @@ jobs:
DESIRED_CUDA: rocm6.0 DESIRED_CUDA: rocm6.0
GPU_ARCH_VERSION: 6.0 GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-main DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-2.3
DESIRED_PYTHON: "3.11" DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-rocm6_0 build_name: manywheel-py3_11-rocm6_0
secrets: secrets:
@ -1860,7 +1852,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.12" DESIRED_PYTHON: "3.12"
build_name: manywheel-py3_12-cpu build_name: manywheel-py3_12-cpu
build_environment: linux-binary-manywheel build_environment: linux-binary-manywheel
@ -1878,7 +1870,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.12" DESIRED_PYTHON: "3.12"
build_name: manywheel-py3_12-cpu build_name: manywheel-py3_12-cpu
build_environment: linux-binary-manywheel build_environment: linux-binary-manywheel
@ -1899,7 +1891,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.12" DESIRED_PYTHON: "3.12"
build_name: manywheel-py3_12-cpu build_name: manywheel-py3_12-cpu
secrets: secrets:
@ -1919,7 +1911,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu-cxx11-abi DESIRED_CUDA: cpu-cxx11-abi
GPU_ARCH_TYPE: cpu-cxx11-abi GPU_ARCH_TYPE: cpu-cxx11-abi
DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-main DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-2.3
DESIRED_DEVTOOLSET: cxx11-abi DESIRED_DEVTOOLSET: cxx11-abi
DESIRED_PYTHON: "3.12" DESIRED_PYTHON: "3.12"
build_name: manywheel-py3_12-cpu-cxx11-abi build_name: manywheel-py3_12-cpu-cxx11-abi
@ -1938,7 +1930,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu-cxx11-abi DESIRED_CUDA: cpu-cxx11-abi
GPU_ARCH_TYPE: cpu-cxx11-abi GPU_ARCH_TYPE: cpu-cxx11-abi
DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-main DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-2.3
DESIRED_DEVTOOLSET: cxx11-abi DESIRED_DEVTOOLSET: cxx11-abi
DESIRED_PYTHON: "3.12" DESIRED_PYTHON: "3.12"
build_name: manywheel-py3_12-cpu-cxx11-abi build_name: manywheel-py3_12-cpu-cxx11-abi
@ -1960,7 +1952,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu-cxx11-abi DESIRED_CUDA: cpu-cxx11-abi
GPU_ARCH_TYPE: cpu-cxx11-abi GPU_ARCH_TYPE: cpu-cxx11-abi
DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-main DOCKER_IMAGE: pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-2.3
DESIRED_DEVTOOLSET: cxx11-abi DESIRED_DEVTOOLSET: cxx11-abi
DESIRED_PYTHON: "3.12" DESIRED_PYTHON: "3.12"
build_name: manywheel-py3_12-cpu-cxx11-abi build_name: manywheel-py3_12-cpu-cxx11-abi
@ -1982,7 +1974,7 @@ jobs:
DESIRED_CUDA: cu118 DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8 GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.12" DESIRED_PYTHON: "3.12"
build_name: manywheel-py3_12-cuda11_8 build_name: manywheel-py3_12-cuda11_8
build_environment: linux-binary-manywheel build_environment: linux-binary-manywheel
@ -2002,7 +1994,7 @@ jobs:
DESIRED_CUDA: cu118 DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8 GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.12" DESIRED_PYTHON: "3.12"
build_name: manywheel-py3_12-cuda11_8 build_name: manywheel-py3_12-cuda11_8
build_environment: linux-binary-manywheel build_environment: linux-binary-manywheel
@ -2024,7 +2016,7 @@ jobs:
DESIRED_CUDA: cu118 DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8 GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.3
DESIRED_PYTHON: "3.12" DESIRED_PYTHON: "3.12"
build_name: manywheel-py3_12-cuda11_8 build_name: manywheel-py3_12-cuda11_8
secrets: secrets:
@ -2045,7 +2037,7 @@ jobs:
DESIRED_CUDA: cu121 DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1 GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.12" DESIRED_PYTHON: "3.12"
build_name: manywheel-py3_12-cuda12_1 build_name: manywheel-py3_12-cuda12_1
build_environment: linux-binary-manywheel build_environment: linux-binary-manywheel
@ -2065,7 +2057,7 @@ jobs:
DESIRED_CUDA: cu121 DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1 GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.12" DESIRED_PYTHON: "3.12"
build_name: manywheel-py3_12-cuda12_1 build_name: manywheel-py3_12-cuda12_1
build_environment: linux-binary-manywheel build_environment: linux-binary-manywheel
@ -2087,7 +2079,7 @@ jobs:
DESIRED_CUDA: cu121 DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1 GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.3
DESIRED_PYTHON: "3.12" DESIRED_PYTHON: "3.12"
build_name: manywheel-py3_12-cuda12_1 build_name: manywheel-py3_12-cuda12_1
secrets: secrets:
@ -2108,7 +2100,7 @@ jobs:
DESIRED_CUDA: rocm5.7 DESIRED_CUDA: rocm5.7
GPU_ARCH_VERSION: 5.7 GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-main DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-2.3
DESIRED_PYTHON: "3.12" DESIRED_PYTHON: "3.12"
build_name: manywheel-py3_12-rocm5_7 build_name: manywheel-py3_12-rocm5_7
build_environment: linux-binary-manywheel build_environment: linux-binary-manywheel
@ -2129,7 +2121,7 @@ jobs:
GPU_ARCH_VERSION: 5.7 GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1 SKIP_ALL_TESTS: 1
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-main DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-2.3
DESIRED_PYTHON: "3.12" DESIRED_PYTHON: "3.12"
steps: steps:
- name: Setup ROCm - name: Setup ROCm
@ -2142,7 +2134,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -2154,7 +2145,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.3
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -2170,7 +2161,7 @@ jobs:
- name: Pull Docker image - name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main uses: pytorch/test-infra/.github/actions/pull-docker-image@main
with: with:
docker-image: pytorch/manylinux-builder:rocm5.7-main docker-image: pytorch/manylinux-builder:rocm5.7-2.3
- name: Test Pytorch binary - name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm - name: Teardown ROCm
@ -2190,7 +2181,7 @@ jobs:
DESIRED_CUDA: rocm5.7 DESIRED_CUDA: rocm5.7
GPU_ARCH_VERSION: 5.7 GPU_ARCH_VERSION: 5.7
GPU_ARCH_TYPE: rocm GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-main DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.7-2.3
DESIRED_PYTHON: "3.12" DESIRED_PYTHON: "3.12"
build_name: manywheel-py3_12-rocm5_7 build_name: manywheel-py3_12-rocm5_7
secrets: secrets:
@ -2211,7 +2202,7 @@ jobs:
DESIRED_CUDA: rocm6.0 DESIRED_CUDA: rocm6.0
GPU_ARCH_VERSION: 6.0 GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-main DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-2.3
DESIRED_PYTHON: "3.12" DESIRED_PYTHON: "3.12"
build_name: manywheel-py3_12-rocm6_0 build_name: manywheel-py3_12-rocm6_0
build_environment: linux-binary-manywheel build_environment: linux-binary-manywheel
@ -2232,7 +2223,7 @@ jobs:
GPU_ARCH_VERSION: 6.0 GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1 SKIP_ALL_TESTS: 1
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-main DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-2.3
DESIRED_PYTHON: "3.12" DESIRED_PYTHON: "3.12"
steps: steps:
- name: Setup ROCm - name: Setup ROCm
@ -2245,7 +2236,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -2257,7 +2247,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.3
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -2273,7 +2263,7 @@ jobs:
- name: Pull Docker image - name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main uses: pytorch/test-infra/.github/actions/pull-docker-image@main
with: with:
docker-image: pytorch/manylinux-builder:rocm6.0-main docker-image: pytorch/manylinux-builder:rocm6.0-2.3
- name: Test Pytorch binary - name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm - name: Teardown ROCm
@ -2293,7 +2283,7 @@ jobs:
DESIRED_CUDA: rocm6.0 DESIRED_CUDA: rocm6.0
GPU_ARCH_VERSION: 6.0 GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-main DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-2.3
DESIRED_PYTHON: "3.12" DESIRED_PYTHON: "3.12"
build_name: manywheel-py3_12-rocm6_0 build_name: manywheel-py3_12-rocm6_0
secrets: secrets:


@ -77,7 +77,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -89,7 +88,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.3
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -141,7 +140,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-main DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.8" DESIRED_PYTHON: "3.8"
build_name: conda-py3_8-cpu build_name: conda-py3_8-cpu
use_s3: False use_s3: False
@ -195,7 +194,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -207,7 +205,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.3
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -259,7 +257,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-main DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.9" DESIRED_PYTHON: "3.9"
build_name: conda-py3_9-cpu build_name: conda-py3_9-cpu
use_s3: False use_s3: False
@ -313,7 +311,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -325,7 +322,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.3
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -377,7 +374,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-main DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.10" DESIRED_PYTHON: "3.10"
build_name: conda-py3_10-cpu build_name: conda-py3_10-cpu
use_s3: False use_s3: False
@ -431,7 +428,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -443,7 +439,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.3
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -495,7 +491,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-main DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.11" DESIRED_PYTHON: "3.11"
build_name: conda-py3_11-cpu build_name: conda-py3_11-cpu
use_s3: False use_s3: False
@ -549,7 +545,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -561,7 +556,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.3
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -613,7 +608,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-main DOCKER_IMAGE: pytorch/conda-builder:cpu-2.3
DESIRED_PYTHON: "3.12" DESIRED_PYTHON: "3.12"
build_name: conda-py3_12-cpu build_name: conda-py3_12-cpu
use_s3: False use_s3: False


@ -81,7 +81,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -93,7 +92,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.3
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -145,7 +144,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-main DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-2.3
LIBTORCH_VARIANT: shared-with-deps LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cpu-shared-with-deps-cxx11-abi build_name: libtorch-cpu-shared-with-deps-cxx11-abi


@ -78,7 +78,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -90,7 +89,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.3
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -142,7 +141,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.8" DESIRED_PYTHON: "3.8"
build_name: wheel-py3_8-cpu build_name: wheel-py3_8-cpu
use_s3: False use_s3: False
@ -197,7 +196,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -209,7 +207,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.3
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -261,7 +259,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.9" DESIRED_PYTHON: "3.9"
build_name: wheel-py3_9-cpu build_name: wheel-py3_9-cpu
use_s3: False use_s3: False
@ -316,7 +314,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -328,7 +325,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.3
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -380,7 +377,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.10" DESIRED_PYTHON: "3.10"
build_name: wheel-py3_10-cpu build_name: wheel-py3_10-cpu
use_s3: False use_s3: False
@ -435,7 +432,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -447,7 +443,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.3
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -499,7 +495,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.11" DESIRED_PYTHON: "3.11"
build_name: wheel-py3_11-cpu build_name: wheel-py3_11-cpu
use_s3: False use_s3: False
@ -554,7 +550,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -566,7 +561,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.3
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -618,7 +613,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.3
DESIRED_PYTHON: "3.12" DESIRED_PYTHON: "3.12"
build_name: wheel-py3_12-cpu build_name: wheel-py3_12-cpu
use_s3: False use_s3: False


@ -93,7 +93,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -105,7 +104,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.3
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -210,7 +209,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -222,7 +220,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.3
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -336,7 +334,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -348,7 +345,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.3
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -454,7 +451,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -466,7 +462,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.3
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -581,7 +577,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -593,7 +588,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.3
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -699,7 +694,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -711,7 +705,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.3
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -825,7 +819,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -837,7 +830,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.3
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -942,7 +935,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -954,7 +946,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.3
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -1068,7 +1060,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -1080,7 +1071,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.3
           submodules: recursive
           repository: pytorch/builder
           path: builder

(The same pair of edits repeats for every remaining build job in this workflow file: each "Checkout PyTorch" step drops its explicit PR-head ref: line, and each "Checkout pytorch/builder" step is pinned from main to release/2.3, across 21 further hunk pairs from @@ -1186,7 +1177,6 @@ through @@ -3639,7 +3609,7 @@.)


@@ -90,7 +90,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -102,7 +101,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.3
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -211,7 +210,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -223,7 +221,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.3
           submodules: recursive
           repository: pytorch/builder
           path: builder


@@ -97,7 +97,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -109,7 +108,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.3
           submodules: recursive
           repository: pytorch/builder
           path: builder

(The same pair of edits, dropping the PR-head ref: from "Checkout PyTorch" and pinning "Checkout pytorch/builder" to release/2.3, repeats for the remaining jobs in this file, at hunks @@ -218,7 +217,6 @@ / @@ -230,7 +228,7 @@, @@ -352,7 +350,6 @@ / @@ -364,7 +361,7 @@, @@ -474,7 +471,6 @@ / @@ -486,7 +482,7 @@, @@ -609,7 +605,6 @@ / @@ -621,7 +616,7 @@, and @@ -731,7 +726,6 @@ / @@ -743,7 +737,7 @@.)


@@ -90,7 +90,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -102,7 +101,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.3
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -211,7 +210,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -223,7 +221,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.3
           submodules: recursive
           repository: pytorch/builder
           path: builder


@@ -97,7 +97,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -109,7 +108,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.3
           submodules: recursive
           repository: pytorch/builder
           path: builder

(The same pair of edits, dropping the PR-head ref: from "Checkout PyTorch" and pinning "Checkout pytorch/builder" to release/2.3, repeats for the remaining jobs in this file, at hunks @@ -218,7 +217,6 @@ / @@ -230,7 +228,7 @@, @@ -352,7 +350,6 @@ / @@ -364,7 +361,7 @@, @@ -474,7 +471,6 @@ / @@ -486,7 +482,7 @@, @@ -609,7 +605,6 @@ / @@ -621,7 +616,7 @@, and @@ -731,7 +726,6 @@ / @@ -743,7 +737,7 @@.)


@@ -94,7 +94,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -106,7 +105,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.3
           submodules: recursive
           repository: pytorch/builder
           path: builder

(The same pair of edits repeats for every remaining build job in this workflow file: each "Checkout PyTorch" step drops its explicit PR-head ref: line, and each "Checkout pytorch/builder" step is pinned from main to release/2.3, across 29 further hunk pairs from @@ -211,7 +210,6 @@ through @@ -3654,7 +3624,7 @@.)


@@ -111,7 +111,7 @@ jobs:
     name: linux-jammy-cpu-py3.8-gcc11-inductor
     uses: ./.github/workflows/_linux-build.yml
     with:
-      build-environment: linux-jammy-py3_8-gcc11-build
+      build-environment: linux-jammy-py3.8-gcc11-build
       docker-image-name: pytorch-linux-jammy-py3.8-gcc11-inductor-benchmarks
       test-matrix: |
         { include: [
@@ -135,7 +135,7 @@ jobs:
     uses: ./.github/workflows/_linux-test.yml
     needs: linux-jammy-cpu-py3_8-gcc11-inductor-build
     with:
-      build-environment: linux-jammy-py3_8-gcc11-build
+      build-environment: linux-jammy-py3.8-gcc11-build
       docker-image: ${{ needs.linux-jammy-cpu-py3_8-gcc11-inductor-build.outputs.docker-image }}
       test-matrix: ${{ needs.linux-jammy-cpu-py3_8-gcc11-inductor-build.outputs.test-matrix }}
     secrets:


@@ -15,7 +15,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Run BC Lint Action
-        uses: pytorch/test-infra/.github/actions/bc-lint@main
+        uses: pytorch/test-infra/.github/actions/bc-lint@release/2.3
         with:
           repo: ${{ github.event.pull_request.head.repo.full_name }}
           base_sha: ${{ github.event.pull_request.base.sha }}


@@ -16,7 +16,7 @@ permissions: read-all
 # When any other step fails, it's job will be retried once by retryBot.
 jobs:
   lintrunner-clang:
-    uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
+    uses: pytorch/test-infra/.github/workflows/linux_job.yml@release/2.3
     with:
       timeout: 120
       runner: linux.2xlarge
@@ -32,7 +32,7 @@ jobs:
         .github/scripts/lintrunner.sh
   lintrunner-noclang:
-    uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
+    uses: pytorch/test-infra/.github/workflows/linux_job.yml@release/2.3
     with:
       timeout: 120
       runner: linux.2xlarge
@@ -47,7 +47,7 @@ jobs:
         .github/scripts/lintrunner.sh
   quick-checks:
-    uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
+    uses: pytorch/test-infra/.github/workflows/linux_job.yml@release/2.3
     with:
       runner: linux.2xlarge
       docker-image: pytorch-linux-focal-linter
@@ -88,7 +88,7 @@ jobs:
     if: github.event_name == 'pull_request' && !contains(github.event.pull_request.labels.*.name, 'skip-pr-sanity-checks')
     steps:
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
         with:
           submodules: false
           fetch-depth: -1
@@ -101,7 +101,7 @@ jobs:
         bash .github/scripts/pr-sanity-check.sh
   workflow-checks:
-    uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
+    uses: pytorch/test-infra/.github/workflows/linux_job.yml@release/2.3
     with:
       runner: linux.2xlarge
       docker-image: pytorch-linux-focal-linter
@@ -113,6 +113,7 @@ jobs:
         CONDA_ENV=$(conda env list --json | jq -r ".envs | .[-1]")
         conda activate "${CONDA_ENV}"
+        export RELEASE_VERSION_TAG="2.3"
         # Regenerate workflows
         .github/scripts/generate_ci_workflows.py
@@ -137,7 +138,7 @@ jobs:
         exit $RC
   toc:
-    uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
+    uses: pytorch/test-infra/.github/workflows/linux_job.yml@release/2.3
     with:
       runner: linux.2xlarge
       docker-image: pytorch-linux-focal-linter
@@ -175,7 +176,7 @@ jobs:
   test-tools:
     name: Test tools
     if: ${{ github.repository == 'pytorch/pytorch' }}
-    uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
+    uses: pytorch/test-infra/.github/workflows/linux_job.yml@release/2.3
     with:
       runner: linux.2xlarge
       docker-image: pytorch-linux-focal-linter
@@ -196,7 +197,7 @@ jobs:
     runs-on: linux.20_04.4x
     steps:
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
         with:
           submodules: false
           fetch-depth: 1
@@ -226,15 +227,15 @@ jobs:
       # [see note: pytorch repo ref]
       # deep clone (fetch-depth 0) required, to allow us to use git log
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
         with:
           submodules: false
           fetch-depth: 1
-      - name: Setup Python 3.5
+      - name: Setup Python 3.6
         if: matrix.test_type == 'older_python_version'
         uses: actions/setup-python@v4
         with:
-          python-version: '3.5'
+          python-version: '3.6'
           architecture: x64
           check-latest: false
           cache: pip


@@ -21,7 +21,7 @@ jobs:
     environment: upload-stats
     steps:
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
         with:
           fetch-depth: 1
           submodules: false


@@ -41,7 +41,7 @@ jobs:
     environment: update-commit-hash
     steps:
       - name: update-vision-commit-hash
-        uses: pytorch/test-infra/.github/actions/update-commit-hash@main
+        uses: pytorch/test-infra/.github/actions/update-commit-hash@release/2.3
         if: ${{ github.event_name == 'schedule' }}
         with:
           repo-name: vision
@@ -56,7 +56,7 @@ jobs:
     environment: update-commit-hash
     steps:
       - name: update-audio-commit-hash
-        uses: pytorch/test-infra/.github/actions/update-commit-hash@main
+        uses: pytorch/test-infra/.github/actions/update-commit-hash@release/2.3
         if: ${{ github.event_name == 'schedule' }}
         with:
           repo-name: audio
@@ -71,7 +71,7 @@ jobs:
     environment: update-commit-hash
     steps:
      - name: update-executorch-commit-hash
-        uses: pytorch/test-infra/.github/actions/update-commit-hash@main
+        uses: pytorch/test-infra/.github/actions/update-commit-hash@release/2.3
         if: ${{ github.event_name == 'schedule' }}
         with:
           repo-name: executorch


@@ -311,7 +311,7 @@ jobs:
     name: linux-focal-py3_8-clang9-xla
     uses: ./.github/workflows/_linux-build.yml
     with:
-      build-environment: linux-focal-py3_8-clang9-xla
+      build-environment: linux-focal-py3.8-clang9-xla
       docker-image-name: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/xla_base:v1.1-lite
       test-matrix: |
         { include: [
@@ -323,7 +323,7 @@ jobs:
     uses: ./.github/workflows/_linux-test.yml
     needs: linux-focal-py3_8-clang9-xla-build
     with:
-      build-environment: linux-focal-py3_8-clang9-xla
+      build-environment: linux-focal-py3.8-clang9-xla
       docker-image: ${{ needs.linux-focal-py3_8-clang9-xla-build.outputs.docker-image }}
       test-matrix: ${{ needs.linux-focal-py3_8-clang9-xla-build.outputs.test-matrix }}


@@ -18,7 +18,7 @@ jobs:
       - name: Calculate docker image
         id: calculate-docker-image
-        uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+        uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.3
         with:
           docker-image-name: pytorch-linux-focal-cuda12.1-cudnn8-py3-gcc9
@@ -32,13 +32,13 @@ jobs:
           echo "docker pull ghcr.io/pytorch/ci-image:${tag/:/-}"
       - name: Pull docker image
-        uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+        uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.3
         with:
           docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
       - name: Install nvidia driver, nvidia-docker runtime, set GPU_FLAG
         id: install-nvidia-driver
-        uses: pytorch/test-infra/.github/actions/setup-nvidia@main
+        uses: pytorch/test-infra/.github/actions/setup-nvidia@release/2.3
       - name: Clone PyTorch
         uses: actions/checkout@v3


@@ -14,7 +14,7 @@ jobs:
       # checkout because when we run this action we don't *have* a local
       # checkout. In other cases you should prefer a local checkout.
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
         with:
           submodules: false


@@ -16,7 +16,7 @@ jobs:
     environment: ${{ (github.event_name == 'schedule') && 'mergebot' || '' }}
     steps:
       - name: Update viable/strict
-        uses: pytorch/test-infra/.github/actions/update-viablestrict@main
+        uses: pytorch/test-infra/.github/actions/update-viablestrict@release/2.3
         with:
           repository: pytorch/pytorch
           stable-branch: viable/strict


@@ -17,7 +17,7 @@ jobs:
     contents: read
     steps:
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
         with:
           fetch-depth: 1
           submodules: false


@@ -44,7 +44,7 @@ jobs:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
           AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
           AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
-        uses: pytorch/test-infra/.github/actions/upload-alerts@main
+        uses: pytorch/test-infra/.github/actions/upload-alerts@release/2.3
         with:
           alerts: '${{ steps.alert_creation_step.outputs.script-output }}'
           organization: "pytorch"


@@ -39,7 +39,7 @@ jobs:
 run: echo "${TRIGGERING_WORKFLOW}"
 - name: Checkout PyTorch
-uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
 - uses: actions/setup-python@v4
 with:


@@ -29,7 +29,7 @@ jobs:
 name: Upload dynamo performance stats for ${{ github.event.workflow_run.id }}, attempt ${{ github.event.workflow_run.run_attempt }}
 steps:
 - name: Checkout PyTorch
-uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.3
 with:
 submodules: false
 fetch-depth: 1


@@ -21,7 +21,7 @@ jobs:
 fetch-depth: 0
 - name: update-xla-commit-hash
 continue-on-error: true
-uses: pytorch/test-infra/.github/actions/update-commit-hash@main
+uses: pytorch/test-infra/.github/actions/update-commit-hash@release/2.3
 with:
 repo-name: xla
 branch: master
@@ -30,7 +30,7 @@ jobs:
 updatebot-token: ${{ secrets.UPDATEBOT_TOKEN }}
 pytorchbot-token: ${{ secrets.GH_PYTORCHBOT_TOKEN }}
 - name: update-triton-commit-hash
-uses: pytorch/test-infra/.github/actions/update-commit-hash@main
+uses: pytorch/test-infra/.github/actions/update-commit-hash@release/2.3
 with:
 repo-owner: openai
 repo-name: triton


@@ -218,7 +218,7 @@ Validate the release jobs for pytorch and domain libraries should be green. Vali
 * [TorchVision](https://hud.pytorch.org/hud/pytorch/vision/release%2F1.12)
 * [TorchAudio](https://hud.pytorch.org/hud/pytorch/audio/release%2F1.12)
-Validate that the documentation build has completed and generated entry corresponding to the release in [docs folder](https://github.com/pytorch/pytorch.github.io/tree/site/docs/) of pytorch.github.io repository
+Validate that the documentation build has completed and generated entry corresponding to the release in [docs repository](https://github.com/pytorch/docs/tree/main/).
 ### Cherry Picking Fixes


@@ -13,6 +13,10 @@ namespace at {
 TORCH_API ScalarType toScalarType(const DLDataType& dtype);
 TORCH_API DLManagedTensor* toDLPack(const Tensor& src);
 TORCH_API Tensor fromDLPack(DLManagedTensor* src);
+C10_DEPRECATED_MESSAGE("Please migrate to a non-const variant")
+inline Tensor fromDLPack(const DLManagedTensor* src) {
+return fromDLPack(const_cast<DLManagedTensor*>(src));
+}
 TORCH_API Tensor
 fromDLPack(DLManagedTensor* src, std::function<void(void*)> deleter);
 TORCH_API DLDataType getDLDataType(const Tensor& t);


@@ -303,6 +303,29 @@ Tensor FunctionalInverses::_nested_view_from_buffer_inverse(const Tensor& base,
 return Tensor();
 }
+Tensor FunctionalInverses::_nested_view_from_jagged_inverse(const Tensor& base, const Tensor& mutated_view, InverseReturnMode inverse_return_mode, const Tensor& offsets, const Tensor& dummy, const std::optional<Tensor>& lengths, int64_t ragged_idx) {
+auto values = at::_nested_get_values(mutated_view);
+if (inverse_return_mode != InverseReturnMode::NeverView) {
+return values;
+} else {
+return values.clone(/*memory_format=*/at::MemoryFormat::Contiguous);
+}
+}
+Tensor FunctionalInverses::_nested_get_values_inverse(const Tensor& base, const Tensor& mutated_view, InverseReturnMode inverse_return_mode) {
+auto offsets = at::_nested_get_offsets(base);
+auto lengths = at::_nested_get_lengths(base);
+auto ragged_idx = at::_nested_get_ragged_idx(base);
+auto dummy = at::_nested_get_jagged_dummy(base);
+auto nt = at::_nested_view_from_jagged(mutated_view, offsets, dummy, lengths, ragged_idx);
+if (inverse_return_mode != InverseReturnMode::NeverView) {
+return nt;
+} else {
+return nt.clone(/*memory_format=*/at::MemoryFormat::Contiguous);
+}
+}
 Tensor FunctionalInverses::unsqueeze_inverse(const Tensor& base, const Tensor& mutated_view, InverseReturnMode inverse_return_mode, int64_t dim) {
 if (inverse_return_mode != InverseReturnMode::NeverView) {
 return at::squeeze(mutated_view, dim);


@@ -173,11 +173,22 @@ void MPSStream::copy(id<MTLBuffer> srcBuffer,
 endKernelCoalescing();
 id<MTLBlitCommandEncoder> blitEncoder = [commandBuffer() blitCommandEncoder];
-[blitEncoder copyFromBuffer:srcBuffer
-sourceOffset:(NSUInteger)srcOffset
-toBuffer:dstBuffer
-destinationOffset:(NSUInteger)dstOffset
-size:(NSUInteger)length];
+// For some reason copyFromBuffer for 4Gb fails without returning an error
+// See https://github.com/pytorch/pytorch/issues/124335
+// Workaround by batching copy commands into 2Gb chunks
+constexpr size_t max_copy_size = 0x80000000; // 2GB
+size_t bytes_copied = 0;
+size_t bytes_remains = length;
+while (bytes_remains > 0) {
+NSUInteger bytes_to_copy = std::min(max_copy_size, bytes_remains);
+[blitEncoder copyFromBuffer:srcBuffer
+sourceOffset:(NSUInteger)srcOffset + bytes_copied
+toBuffer:dstBuffer
+destinationOffset:(NSUInteger)dstOffset + bytes_copied
+size:bytes_to_copy];
+bytes_copied += bytes_to_copy;
+bytes_remains -= bytes_to_copy;
+}
 [blitEncoder endEncoding];
 // profilerId has a value only if copy profiling is enabled
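The hunk above works around a Metal blit failure on copies of 4 GB or more by issuing the copy in 2 GB batches. A minimal standalone sketch of the same batching pattern in plain C++, where copy_chunk is a hypothetical stand-in for the copyFromBuffer: call:

    #include <algorithm>
    #include <cstddef>
    #include <functional>

    // Issue one logical copy as a series of chunks no larger than max_chunk bytes.
    // copy_chunk(src_offset, dst_offset, size) stands in for the real backend call.
    void copy_in_chunks(size_t src_offset, size_t dst_offset, size_t length,
                        const std::function<void(size_t, size_t, size_t)>& copy_chunk,
                        size_t max_chunk = 0x80000000 /* 2 GB */) {
      size_t copied = 0;
      while (copied < length) {
        const size_t todo = std::min(max_chunk, length - copied);
        copy_chunk(src_offset + copied, dst_offset + copied, todo);
        copied += todo;
      }
    }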


@@ -850,19 +850,13 @@ void try_plans(
 benchmark_cache.update(key, plan);
 return;
 } catch (cudnn_frontend::cudnnException& e) {
-TORCH_WARN("Plan failed with a cudnnException: ", e.what());
 } catch (CuDNNError& e) {
-TORCH_WARN("Plan failed with a CuDNNError: ", e.what());
 } catch (c10::OutOfMemoryError& e) {
 (void)cudaGetLastError(); // clear CUDA error
-TORCH_WARN("Plan failed with an OutOfMemoryError: ", e.what());
 }
 }
 TORCH_CHECK(
-false,
-"FIND was unable to find an engine to execute this computation after trying ",
-plans.size(),
-" plans.");
+false, "FIND was unable to find an engine to execute this computation");
 }
 void try_plans_fused(
@@ -880,19 +874,13 @@ void try_plans_fused(
 benchmark_cache_fused.update(key, plan);
 return;
 } catch (cudnn_frontend::cudnnException& e) {
-TORCH_WARN("Plan failed with a cudnnException: ", e.what());
 } catch (CuDNNError& e) {
-TORCH_WARN("Plan failed with a CuDNNError: ", e.what());
 } catch (c10::OutOfMemoryError& e) {
 (void)cudaGetLastError(); // clear CUDA error
-TORCH_WARN("Plan failed with an OutOfMemoryError: ", e.what());
 }
 }
 TORCH_CHECK(
-false,
-"FIND was unable to find an engine to execute this computation after trying ",
-plans.size(),
-" plans.");
+false, "FIND was unable to find an engine to execute this computation");
 }
 bool try_configs(
@@ -916,12 +904,9 @@ bool try_configs(
 benchmark_cache.update(key, plan);
 return true;
 } catch (cudnn_frontend::cudnnException& e) {
-TORCH_WARN("Plan failed with a cudnnException: ", e.what());
 } catch (CuDNNError& e) {
-TORCH_WARN("Plan failed with a CuDNNError: ", e.what());
 } catch (c10::OutOfMemoryError& e) {
 (void)cudaGetLastError(); // clear CUDA error
-TORCH_WARN("Plan failed with an OutOfMemoryError: ", e.what());
 }
 }
 return false;
@@ -950,12 +935,9 @@ bool try_configs_fused(
 benchmark_cache_fused.update(key, plan);
 return true;
 } catch (cudnn_frontend::cudnnException& e) {
-TORCH_WARN("Plan failed with a cudnnException: ", e.what());
 } catch (CuDNNError& e) {
-TORCH_WARN("Plan failed with a CuDNNError: ", e.what());
 } catch (c10::OutOfMemoryError& e) {
 (void)cudaGetLastError(); // clear CUDA error
-TORCH_WARN("Plan failed with an OutOfMemoryError: ", e.what());
 }
 }
 return false;
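For context, the TORCH_WARN calls removed above sit inside a common autotuning pattern: try each candidate plan, swallow per-plan failures, and only raise once every plan has failed. A hedged C++ sketch of that control flow, with hypothetical Plan/run_plan names standing in for the cudnn_frontend types:

    #include <stdexcept>
    #include <vector>

    struct Plan {};  // hypothetical placeholder for a cudnn_frontend execution plan

    // Return the first plan that executes successfully; throw only if none does.
    // run_plan is a hypothetical stand-in for building and executing one plan.
    Plan pick_first_working_plan(const std::vector<Plan>& plans,
                                 bool (*run_plan)(const Plan&)) {
      for (const Plan& plan : plans) {
        try {
          if (run_plan(plan)) {
            return plan;  // the real code also caches the winning plan
          }
        } catch (const std::exception&) {
          // A failing candidate is expected while autotuning; try the next plan.
        }
      }
      throw std::runtime_error("no engine could execute this computation");
    }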


@@ -25,6 +25,10 @@ typedef NS_ENUM(NSUInteger, MPSGraphFFTScalingMode)
 -(MPSGraphTensor * _Nonnull) conjugateWithTensor:(MPSGraphTensor * _Nonnull) tensor
 name:(NSString * _Nullable) name;
+-(MPSGraphTensor * _Nonnull) realPartOfTensor:(MPSGraphTensor * _Nonnull) tensor
+name:(NSString * _Nullable) name;
 -(MPSGraphTensor * _Nonnull) fastFourierTransformWithTensor:(MPSGraphTensor * _Nonnull) tensor
 axes:(NSArray<NSNumber *> * _Nonnull) axes
 descriptor:(MPSGraphFFTDescriptor * _Nonnull) descriptor


@@ -210,6 +210,9 @@ std::string scalarToMetalTypeString(const c10::ScalarType& scalar_type) {
 return "float";
 case ScalarType::Half:
 return "half";
+case ScalarType::BFloat16:
+checkSupportsBFloat16();
+return "bfloat";
 case ScalarType::Int:
 return "int";
 case ScalarType::Long:


@@ -28,6 +28,7 @@ static Tensor& fill_scalar_mps_impl(Tensor& self, const Scalar& value) {
 struct CachedGraph : public MPSCachedGraph {
 CachedGraph(MPSGraph* graph) : MPSCachedGraph(graph) {}
+MPSGraphTensor* inputTensor_ = nil;
 MPSGraphTensor* outputTensor_ = nil;
 };
@@ -35,36 +36,23 @@ static Tensor& fill_scalar_mps_impl(Tensor& self, const Scalar& value) {
 string key = "fill_scalar_mps_impl" + getTensorsStringKey(self) + ":" + to_string(value.toDouble());
 auto cachedGraph = LookUpOrCreateCachedGraph<CachedGraph>(key, [&](auto mpsGraph, auto newCachedGraph) {
-auto isBool = self.scalar_type() == c10::ScalarType::Bool;
-auto isUInt8 = self.scalar_type() == c10::ScalarType::Byte;
-auto dataType = !isUInt8 ? !isBool ? getMPSScalarType(self.scalar_type()) : MPSDataTypeInt8 : MPSDataTypeUInt32;
-// constantWithScalar does not work for boolTypes on MacOS-12.[34]
-// workaround by filing it as int8 tensor and than casting to bool
-// See https://github.com/pytorch/pytorch/issues/82427
-// constantWithScalar does not work for UInt8 Types on MacOS-12.[34]/Ventura preview
-// workaround by filing it as uint32 tensor and than casting to uint8
-// See https://github.com/pytorch/pytorch/issues/83692
-MPSGraphTensor* inputTensor = [mpsGraph constantWithScalar:value.toDouble()
-shape:getMPSShape(self)
-dataType:dataType];
+MPSGraphTensor* inputTensor = mpsGraphScalarPlaceHolder(mpsGraph, getMPSDataType(self.scalar_type()));
 MPSGraphTensor* outputTensor = [mpsGraph identityWithTensor:inputTensor name:nil];
-if (isBool) {
-outputTensor = [mpsGraph castTensor:outputTensor toType:MPSDataTypeBool name:@"constWithBool-workaround"];
-}
-if (isUInt8) {
-outputTensor = [mpsGraph castTensor:outputTensor toType:MPSDataTypeUInt8 name:@"constWithUInt8-workaround"];
-}
+newCachedGraph->inputTensor_ = inputTensor;
 newCachedGraph->outputTensor_ = outputTensor;
 });
+auto mpsScalar = getMPSScalar(value, self.scalar_type());
+auto mpsScalarData = getMPSGraphTensorFromScalar(getCurrentMPSStream(), mpsScalar);
+NSDictionary<MPSGraphTensor*, MPSGraphTensorData*>* feeds = @{cachedGraph->inputTensor_ : mpsScalarData};
 Placeholder outputPlaceholder =
 Placeholder(cachedGraph->outputTensor_, needsCopyToOutput ? output : self, nullptr, !needsCopyToOutput);
 NSDictionary<MPSGraphTensor*, MPSGraphTensorData*>* results =
 @{outputPlaceholder.getMPSGraphTensor() : outputPlaceholder.getMPSGraphTensorData()};
-runMPSGraph(getCurrentMPSStream(), cachedGraph->graph(), /*feeds*/ nil, results);
+runMPSGraph(getCurrentMPSStream(), cachedGraph->graph(), feeds, results);
 if (needsCopyToOutput) {
 self.copy_(output);


@@ -240,14 +240,20 @@ static void index_put_kernel_mps(TensorIterator& iter,
 } // namespace mps
 static Tensor nonzero_fallback(const Tensor& self) {
-TORCH_WARN_ONCE("MPS: nonzero op is supported natively starting from macOS 13.0. ",
-"Falling back on CPU. This may have performance implications.");
 return at::nonzero(self.to("cpu")).clone().to("mps");
 }
 Tensor& nonzero_out_mps(const Tensor& self, Tensor& out_) {
-if (!is_macos_13_or_newer()) {
+if (!is_macos_13_or_newer(MacOSVersion::MACOS_VER_14_0_PLUS)) {
+TORCH_WARN_ONCE("MPS: nonzero op is supported natively starting from macOS 13.0. ",
+"Falling back on CPU. This may have performance implications.");
+Tensor out_fallback = nonzero_fallback(self);
+at::native::resize_output(out_, out_fallback.sizes());
+out_.copy_(out_fallback.to("mps"));
+return out_;
+} else if (self.is_complex()) {
+TORCH_WARN_ONCE("MPS: nonzero op is not supported for complex datatypes. ",
+"Falling back on CPU. This may have performance implications.");
 Tensor out_fallback = nonzero_fallback(self);
 at::native::resize_output(out_, out_fallback.sizes());
 out_.copy_(out_fallback.to("mps"));
@@ -281,7 +287,6 @@ Tensor& nonzero_out_mps(const Tensor& self, Tensor& out_) {
 CachedGraph(MPSGraph* graph) : MPSCachedGraph(graph) {}
 MPSGraphTensor* inputTensor_ = nil;
 MPSGraphTensor* outputTensor_ = nil;
-MPSGraphTensor* scatterDataTensor_ = nil;
 };
 dispatch_sync(stream->queue(), ^() {
@@ -299,93 +304,20 @@ Tensor& nonzero_out_mps(const Tensor& self, Tensor& out_) {
 out = at::empty(out_.sizes(), out_.scalar_type(), c10::nullopt, kMPS, c10::nullopt, c10::nullopt);
 }
-int64_t _apparentInputShape = 1;
-for (auto dim : self.sizes()) {
-_apparentInputShape *= dim;
-}
-MPSShape* apparentOutputShape = @[ @(total_nonzero * nDim) ];
-MPSShape* apparentInputShape = @[ @(_apparentInputShape) ];
-// Pseudocode:
-//
-// inputTensor = [1, 0, 0, 3]
-// inputNonZero = [1, 0, 0, 1]
-// indices = [1, 1, 1, 2]
-// maskedIndices = [0, -1, -1, 1]
-// coordinates = [0, 1, 2, 3]
-// scatterResult = [0, 3]
 @autoreleasepool {
 string key = "nonzero_out_mps" + getTensorsStringKey(self);
 auto cachedGraph = LookUpOrCreateCachedGraph<CachedGraph>(key, [&](auto mpsGraph, auto newCachedGraph) {
-MPSDataType inputDataType = getMPSDataType(self);
-MPSShape* inputShape = getMPSShape(self);
-MPSGraphTensor* inputTensor =
-mpsGraphRankedPlaceHolder(mpsGraph, getMPSScalarType(self.scalar_type()), apparentInputShape);
-MPSGraphTensor* scatterDataTensor = mpsGraphUnrankedPlaceHolder(mpsGraph, getMPSScalarType(out.scalar_type()));
-MPSGraphTensor* zeroTensor = [mpsGraph constantWithScalar:0.0 dataType:inputDataType];
-MPSGraphTensor* oneTensor = [mpsGraph constantWithScalar:1.0 dataType:MPSDataTypeInt32];
-MPSGraphTensor* minusMaxDimTensor = [mpsGraph constantWithScalar:-maxDimensions dataType:MPSDataTypeInt32];
-MPSGraphTensor* inputNotEqualToZeroTensor = [mpsGraph notEqualWithPrimaryTensor:inputTensor
-secondaryTensor:zeroTensor
-name:nil];
-MPSGraphTensor* maskTensor = [mpsGraph castTensor:inputNotEqualToZeroTensor
-toType:MPSDataTypeInt32
-name:@"castToInt32"];
-MPSGraphTensor* indicesTensor = [mpsGraph cumulativeSumWithTensor:maskTensor axis:0 name:nil];
-MPSGraphTensor* indicesMinusOneTensor = [mpsGraph subtractionWithPrimaryTensor:indicesTensor
-secondaryTensor:oneTensor
-name:nil];
-MPSGraphTensor* maskedIndicesTensor = [mpsGraph selectWithPredicateTensor:inputNotEqualToZeroTensor
-truePredicateTensor:indicesMinusOneTensor
-falsePredicateTensor:minusMaxDimTensor
-name:nil];
-MPSGraphTensor* coordinatesTensor = [mpsGraph reshapeTensor:[mpsGraph coordinateAlongAxis:0
-withShape:inputShape
-name:nil]
-withShape:@[ @-1 ]
-name:nil];
-if (nDim > 1) {
-NSMutableArray<MPSGraphTensor*>* maskedIndicesTensorArray = [NSMutableArray arrayWithCapacity:nDim];
-NSMutableArray<MPSGraphTensor*>* coordinatesTensorArray = [NSMutableArray arrayWithCapacity:nDim];
-MPSGraphTensor* constantRankTensor = [mpsGraph constantWithScalar:nDim dataType:MPSDataTypeInt32];
-maskedIndicesTensorArray[0] = [mpsGraph multiplicationWithPrimaryTensor:maskedIndicesTensor
-secondaryTensor:constantRankTensor
-name:nil];
-coordinatesTensorArray[0] = coordinatesTensor;
-for (int i = 1; i < nDim; i++) {
-maskedIndicesTensorArray[i] = [mpsGraph additionWithPrimaryTensor:maskedIndicesTensorArray[i - 1]
-secondaryTensor:oneTensor
-name:nil];
-coordinatesTensorArray[i] = [mpsGraph reshapeTensor:[mpsGraph coordinateAlongAxis:i
-withShape:inputShape
-name:nil]
-withShape:@[ @-1 ]
-name:nil];
-}
-maskedIndicesTensor = [mpsGraph concatTensors:maskedIndicesTensorArray dimension:0 interleave:YES name:nil];
-coordinatesTensor = [mpsGraph concatTensors:coordinatesTensorArray dimension:0 interleave:YES name:nil];
-}
-MPSGraphTensor* outputTensor = [mpsGraph scatterWithDataTensor:scatterDataTensor
-updatesTensor:coordinatesTensor
-indicesTensor:maskedIndicesTensor
-axis:0
-mode:MPSGraphScatterModeSet
-name:nil];
+MPSGraphTensor* inputTensor = mpsGraphRankedPlaceHolder(mpsGraph, getMPSDataType(self), getMPSShape(self));
+MPSGraphTensor* outputTensor = [mpsGraph nonZeroIndicesOfTensor:inputTensor name:nil];
 newCachedGraph->inputTensor_ = inputTensor;
-newCachedGraph->scatterDataTensor_ = scatterDataTensor;
 newCachedGraph->outputTensor_ = outputTensor;
 });
-Placeholder selfPlaceholder = Placeholder(cachedGraph->inputTensor_, self, apparentInputShape);
-Placeholder outputPlaceholder = Placeholder(cachedGraph->outputTensor_, out, apparentOutputShape);
-Placeholder scatterPlaceholder = Placeholder(cachedGraph->scatterDataTensor_, out, apparentOutputShape);
-auto feeds = dictionaryFromPlaceholders(selfPlaceholder, scatterPlaceholder);
+Placeholder selfPlaceholder = Placeholder(cachedGraph->inputTensor_, self);
+Placeholder outputPlaceholder = Placeholder(cachedGraph->outputTensor_, out);
+auto feeds = dictionaryFromPlaceholders(selfPlaceholder);
 runMPSGraph(stream, cachedGraph->graph(), feeds, outputPlaceholder);
 }
@@ -397,7 +329,13 @@ Tensor& nonzero_out_mps(const Tensor& self, Tensor& out_) {
 }
 Tensor nonzero_mps(const Tensor& self) {
-if (!is_macos_13_or_newer()) {
+if (!is_macos_13_or_newer(MacOSVersion::MACOS_VER_14_0_PLUS)) {
+TORCH_WARN_ONCE("MPS: nonzero op is supported natively starting from macOS 13.0. ",
+"Falling back on CPU. This may have performance implications.");
+return nonzero_fallback(self);
+} else if (self.is_complex()) {
+TORCH_WARN_ONCE("MPS: nonzero op is not supported for complex datatypes ",
+"Falling back on CPU. This may have performance implications.");
 return nonzero_fallback(self);
 }
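The deleted pseudocode (inputTensor → inputNonZero mask → cumulative-sum indices → maskedIndices → scatter of coordinates) describes the graph the old MPS path built before nonZeroIndicesOfTensor was available. A hedged plain-C++ sketch of that algorithm for a flat input, using ordinary vectors instead of MPSGraph ops:

    #include <cstdint>
    #include <vector>

    // Mirrors the removed pseudocode: build a 0/1 mask, take its cumulative sum to
    // get an output slot per non-zero element, then scatter each coordinate into
    // its slot. Multi-dimensional inputs would interleave one coordinate per dim.
    std::vector<int64_t> nonzero_indices(const std::vector<float>& input) {
      const int64_t n = static_cast<int64_t>(input.size());
      std::vector<int64_t> mask(n), slot(n);
      int64_t count = 0;
      for (int64_t i = 0; i < n; ++i) {
        mask[i] = (input[i] != 0.0f) ? 1 : 0;  // inputNonZero
        count += mask[i];
        slot[i] = count - 1;                   // cumulative sum minus one
      }
      std::vector<int64_t> out(count);
      for (int64_t i = 0; i < n; ++i) {
        if (mask[i]) {
          out[slot[i]] = i;                    // scatter coordinate i into its slot
        }
      }
      return out;
    }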


@@ -4,6 +4,8 @@
 #include <ATen/mps/MPSProfiler.h>
 #include <ATen/native/LinearAlgebraUtils.h>
 #include <ATen/native/Resize.h>
+// For MTLLanguageVersion_3_1
+#include <ATen/native/mps/MPSGraphSonomaOps.h>
 #include <ATen/native/mps/OperationUtils.h>
 #ifndef AT_PER_OPERATOR_HEADERS
@@ -29,7 +31,7 @@ static const char* METAL_LINALG = R"MATMUL_METAL(
 using namespace metal;
 template<typename T>
 T dot_product(constant T *v1, constant T* v2, ulong2 strides, uint32_t size) {
-T rc = 0.0;
+T rc = T(0.0);
 for (uint32_t i = 0; i < size; ++i) {
 rc += v1[i * strides.x] * v2[i * strides.y];
 }
@@ -69,6 +71,9 @@ kernel void naive_matmul<DTYPE>( \
 INSTANTIATE_NAIVE_MM(float);
 INSTANTIATE_NAIVE_MM(half);
+#if __METAL_VERSION__ >= 310
+INSTANTIATE_NAIVE_MM(bfloat);
+#endif
 )MATMUL_METAL";
 id<MTLLibrary> compileLinalgOpLibrary(id<MTLDevice> device) {
@@ -79,7 +84,8 @@ id<MTLLibrary> compileLinalgOpLibrary(id<MTLDevice> device) {
 NSError* error = nil;
 MTLCompileOptions* options = [[MTLCompileOptions new] autorelease];
-[options setLanguageVersion:MTLLanguageVersion2_3];
+[options setLanguageVersion:is_macos_13_or_newer(MacOSVersion::MACOS_VER_14_0_PLUS) ? MTLLanguageVersion3_1
+: MTLLanguageVersion2_3];
 linalgLibrary = [device newLibraryWithSource:[NSString stringWithCString:METAL_LINALG encoding:NSASCIIStringEncoding]
 options:options
 error:&error];
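The T rc = T(0.0) change touches the shader's strided dot product; the same routine written as ordinary C++ (strides counted in elements) is simply:

    #include <cstdint>

    // Dot product over two strided 1-D views, as in the naive-matmul Metal shader:
    // element i of each operand lives at v[i * stride].
    template <typename T>
    T strided_dot(const T* v1, const T* v2, uint64_t stride1, uint64_t stride2, uint32_t size) {
      T rc = T(0.0);
      for (uint32_t i = 0; i < size; ++i) {
        rc += v1[i * stride1] * v2[i * stride2];
      }
      return rc;
    }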


@@ -30,41 +30,53 @@ static void clamp_mps_graph(CachedGraph* cachedGraph,
 const Tensor& min_tensor,
 const Tensor& max_tensor) {
 auto input_dtype = input_tensor.scalar_type();
-auto min_dtype = input_dtype;
-auto max_dtype = input_dtype;
-if (cachedGraph->minTensor) {
-min_dtype = min_tensor.scalar_type();
-}
-if (cachedGraph->maxTensor) {
-max_dtype = max_tensor.scalar_type();
-}
+auto min_dtype = cachedGraph->minTensor ? min_tensor.scalar_type() : input_dtype;
+auto max_dtype = cachedGraph->maxTensor ? max_tensor.scalar_type() : input_dtype;
 MPSGraph* mpsGraph = cachedGraph->graph();
 cachedGraph->inputTensor = mpsGraphRankedPlaceHolder(mpsGraph, input_tensor);
-MPSGraphTensor* minTensor = cachedGraph->minTensor;
-MPSGraphTensor* maxTensor = cachedGraph->maxTensor;
+auto minTensor = cachedGraph->minTensor;
+auto maxTensor = cachedGraph->maxTensor;
 if (input_dtype != min_dtype) {
 minTensor = castMPSTensor(mpsGraph, cachedGraph->minTensor, input_dtype);
 }
 if (input_dtype != max_dtype) {
 maxTensor = castMPSTensor(mpsGraph, cachedGraph->maxTensor, input_dtype);
 }
-if (cachedGraph->minTensor && cachedGraph->maxTensor) {
-cachedGraph->outputTensor = [mpsGraph clampWithTensor:cachedGraph->inputTensor
-minValueTensor:minTensor
-maxValueTensor:maxTensor
-name:nil];
-} else if (cachedGraph->maxTensor) {
-cachedGraph->outputTensor = [mpsGraph minimumWithPrimaryTensor:cachedGraph->inputTensor
-secondaryTensor:maxTensor
-name:nil];
-} else if (cachedGraph->minTensor) {
-cachedGraph->outputTensor = [mpsGraph maximumWithPrimaryTensor:cachedGraph->inputTensor
-secondaryTensor:minTensor
-name:nil];
+if (c10::isIntegralType(input_dtype, /*includeBool=*/true)) {
+if (minTensor && maxTensor) {
+cachedGraph->outputTensor = [mpsGraph clampWithTensor:cachedGraph->inputTensor
+minValueTensor:minTensor
+maxValueTensor:maxTensor
+name:nil];
+} else if (maxTensor) {
+cachedGraph->outputTensor = [mpsGraph minimumWithPrimaryTensor:cachedGraph->inputTensor
+secondaryTensor:maxTensor
+name:nil];
+} else if (minTensor) {
+cachedGraph->outputTensor = [mpsGraph maximumWithPrimaryTensor:cachedGraph->inputTensor
+secondaryTensor:minTensor
+name:nil];
+}
+return;
 }
+// clampWithTensor doesn't propagate NaN through so simulate it as composition of
+// maximumWithNaNPropagationWithPrimaryTensor and minimumWithNaNPropagationWithPrimaryTensor
+auto outputTensor = cachedGraph->inputTensor;
+if (minTensor) {
+outputTensor = [mpsGraph maximumWithNaNPropagationWithPrimaryTensor:outputTensor
+secondaryTensor:minTensor
+name:nil];
+}
+if (maxTensor) {
+outputTensor = [mpsGraph minimumWithNaNPropagationWithPrimaryTensor:outputTensor
+secondaryTensor:maxTensor
+name:nil];
+}
+cachedGraph->outputTensor = outputTensor;
 }
 static void check_min_max_dims(const OptionalTensorRef clamp_opt, const Tensor& input_t, string op_name) {
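The new comment notes that clampWithTensor does not propagate NaN, so the floating-point path rebuilds clamp from a NaN-propagating maximum followed by a NaN-propagating minimum. A scalar C++ sketch of the intended semantics (assuming float inputs and both bounds present):

    #include <cmath>

    // clamp(x, lo, hi) that propagates NaN: a NaN input stays NaN instead of being
    // squashed to a bound, matching max-then-min with NaN propagation.
    float clamp_propagate_nan(float x, float lo, float hi) {
      if (std::isnan(x)) {
        return x;                            // propagate, like maximumWithNaNPropagation
      }
      const float lower = (x < lo) ? lo : x; // max(x, lo)
      return (lower > hi) ? hi : lower;      // min(.., hi)
    }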


@@ -75,23 +75,10 @@ static bool is_empty_tensor(const Tensor& self) {
 return self.numel() == 0;
 }
-static void unary_op(const Tensor& self,
-const Tensor& output_,
-std::string op_name,
-UnaryOpBlock unaryBlock,
-is_noop_p is_noop = is_empty_tensor) {
+static void unary_op_noresize(const Tensor& self, const Tensor& output_, std::string op_name, UnaryOpBlock unaryBlock) {
 TORCH_CHECK(!(!is_macos_13_or_newer() && self.scalar_type() == ScalarType::Byte),
 "MPS support unary op with uint8 natively starting from macOS 13.0");
-if (!output_.is_same_size(self)) {
-output_.resize_(self.sizes());
-}
-if (is_noop(self)) {
-output_.copy_(self);
-return;
-}
 auto output = output_;
 bool needsCopyToOutput = false;
 if (output.storage_offset() || !output.is_contiguous()) {
@@ -139,6 +126,23 @@ static void unary_op(const Tensor& self,
 }
 }
+static void unary_op(const Tensor& self,
+const Tensor& output_,
+std::string op_name,
+UnaryOpBlock unaryBlock,
+is_noop_p is_noop = is_empty_tensor) {
+if (!output_.is_same_size(self)) {
+output_.resize_(self.sizes());
+}
+if (is_noop(self)) {
+output_.copy_(self);
+return;
+}
+unary_op_noresize(self, output_, op_name, unaryBlock);
+}
 MPSGraphTensor* trunc_tensor(MPSGraph* mpsGraph, MPSGraphTensor* inputTensor) {
 // Rounding is a no-op for integral types, and also a reasonable workaround
 // For MPSGraph bug on Apple Silicon, that throws `Function floorOp_i64 was not found in the library`
@@ -168,6 +172,12 @@ MPSGraphTensor* log1p(MPSGraph* mpsGraph, MPSGraphTensor* inputTensor) {
 return [mpsGraph logarithmWithTensor:addedTensor name:nil];
 }
+static MPSGraphTensor* lengthOfComplexAsReal(MPSGraph* mpsGraph, MPSGraphTensor* inputTensor) {
+auto squares = [mpsGraph squareWithTensor:inputTensor name:nil];
+auto sumSquares = [mpsGraph reductionSumWithTensor:squares axis:-1 name:nil];
+return [mpsGraph squareRootWithTensor:sumSquares name:nil];
+}
 } // namespace mps
 TORCH_IMPL_FUNC(trunc_out_mps)(const Tensor& self, const Tensor& output) {
@@ -226,14 +236,6 @@ CREATE_MPS_STRUCTURED_UNARY_ROUNDING_TORCH_IMPL_FUNC(round_out_mps, round)
 }); \
 }
-#define CREATE_MPS_UNARY_TORCH_IMPL_FUNC(func_out, func_stub) \
-Tensor& func_out(const Tensor& self, Tensor& output) { \
-mps::unary_op(self, output, #func_out, ^MPSGraphTensor*(MPSGraph * mpsGraph, MPSGraphTensor * inputTensor) { \
-return [mpsGraph func_stub##WithTensor:inputTensor name:nil]; \
-}); \
-return output; \
-}
 CREATE_MPS_STRUCTURED_UNARY_TORCH_IMPL_FUNC(exp_out_mps, exponent)
 CREATE_MPS_STRUCTURED_UNARY_TORCH_IMPL_FUNC(exp2_out_mps, exponentBase2)
 CREATE_MPS_STRUCTURED_UNARY_TORCH_IMPL_FUNC(reciprocal_out_mps, reciprocal)
@@ -257,7 +259,35 @@ CREATE_MPS_STRUCTURED_UNARY_TORCH_IMPL_FUNC(asinh_out_mps, asinh)
 CREATE_MPS_STRUCTURED_UNARY_TORCH_IMPL_FUNC(acosh_out_mps, acosh)
 CREATE_MPS_STRUCTURED_UNARY_TORCH_IMPL_FUNC(atanh_out_mps, atanh)
-CREATE_MPS_UNARY_TORCH_IMPL_FUNC(abs_out_mps, absolute)
+Tensor& abs_out_mps(const Tensor& self, Tensor& output) {
+using namespace mps;
+if (!output.is_same_size(self)) {
+output.resize_(self.sizes());
+}
+if (self.numel() == 0) {
+return output;
+}
+if (supportsComplex() || !self.is_complex()) {
+unary_op_noresize(self, output, "abs_out_mps", ^MPSGraphTensor*(MPSGraph* mpsGraph, MPSGraphTensor* inputTensor) {
+auto rc = [mpsGraph absoluteWithTensor:inputTensor name:nil];
+if (self.is_complex()) {
+rc = [mpsGraph realPartOfTensor:rc name:nil];
+}
+return rc;
+});
+} else {
+Tensor realInput = at::view_as_real(self);
+unary_op_noresize(
+realInput, output, "abs_out_mps", ^MPSGraphTensor*(MPSGraph* mpsGraph, MPSGraphTensor* inputTensor) {
+auto rc = lengthOfComplexAsReal(mpsGraph, inputTensor);
+return [mpsGraph reshapeTensor:rc withShape:getMPSShape(output) name:nil];
+});
+}
+return output;
+}
 Tensor& logical_not_out_mps(const Tensor& self, Tensor& output) {
 auto bool_self = self.to(ScalarType::Bool);
@@ -484,9 +514,7 @@ TORCH_IMPL_FUNC(sgn_out_mps)(const Tensor& self, const Tensor& output) {
 Tensor realOutput = at::view_as_real(output);
 auto complex_sgn_op = [&](MPSGraph* mpsGraph, MPSGraphTensor* inputTensor) -> MPSGraphTensor* {
-MPSGraphTensor* squares = [mpsGraph squareWithTensor:inputTensor name:nil];
-MPSGraphTensor* sumSquares = [mpsGraph reductionSumWithTensor:squares axis:-1 name:nil];
-MPSGraphTensor* norm = [mpsGraph squareRootWithTensor:sumSquares name:nil];
+MPSGraphTensor* norm = mps::lengthOfComplexAsReal(mpsGraph, inputTensor);
 MPSGraphTensor* zero = [mpsGraph constantWithScalar:0.0 dataType:norm.dataType];
 MPSGraphTensor* isZero = [mpsGraph equalWithPrimaryTensor:norm secondaryTensor:zero name:nil];
 MPSGraphTensor* sgnTensor = [mpsGraph divisionWithPrimaryTensor:inputTensor secondaryTensor:norm name:nil];
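Both the new abs_out_mps path and the sgn kernel now route through lengthOfComplexAsReal, which views each complex element as a (real, imag) pair and takes the square root of the sum of squares along that trailing axis. A small C++ sketch of that reduction over an interleaved buffer:

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Magnitudes for a complex tensor viewed as real: the input holds interleaved
    // (real, imag) pairs and out[i] = sqrt(re*re + im*im), i.e. a sum-of-squares
    // reduction over the last axis followed by a square root.
    std::vector<float> complex_lengths(const std::vector<float>& interleaved) {
      std::vector<float> out(interleaved.size() / 2);
      for (size_t i = 0; i < out.size(); ++i) {
        const float re = interleaved[2 * i];
        const float im = interleaved[2 * i + 1];
        out[i] = std::sqrt(re * re + im * im);
      }
      return out;
    }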


@@ -6154,6 +6154,52 @@
 CompositeExplicitAutogradNonFunctional: _nested_view_from_buffer_copy
 autogen: _nested_view_from_buffer_copy.out
+- func: _nested_view_from_jagged(Tensor(a) self, Tensor offsets, Tensor dummy, Tensor? lengths=None, int ragged_idx=1) -> Tensor(a)
+  variants: function
+  device_check: NoCheck
+  dispatch: {}
+- func: _nested_view_from_jagged_copy(Tensor self, Tensor offsets, Tensor dummy, Tensor? lengths=None, int ragged_idx=1) -> Tensor
+  variants: function
+  device_check: NoCheck
+  tags: view_copy
+  dispatch:
+    CompositeExplicitAutogradNonFunctional: _nested_view_from_jagged_copy
+  autogen: _nested_view_from_jagged_copy.out
+- func: _nested_get_values(Tensor(a) self) -> Tensor(a)
+  variants: function
+  device_check: NoCheck
+  dispatch: {}
+- func: _nested_get_values_copy(Tensor self) -> Tensor
+  variants: function
+  device_check: NoCheck
+  tags: view_copy
+  dispatch:
+    CompositeExplicitAutogradNonFunctional: _nested_get_values_copy
+  autogen: _nested_get_values_copy.out
+- func: _nested_get_offsets(Tensor self) -> Tensor
+  variants: function
+  device_check: NoCheck
+  dispatch: {}
+# returns undefined Tensor if no lengths present
+- func: _nested_get_lengths(Tensor self) -> Tensor
+  variants: function
+  device_check: NoCheck
+  dispatch: {}
+- func: _nested_get_ragged_idx(Tensor self) -> int
+  variants: function
+  device_check: NoCheck
+  dispatch: {}
+- func: _nested_get_jagged_dummy(Tensor any) -> Tensor
+  category_override: dummy
+  dispatch: {}
 - func: _trilinear(Tensor i1, Tensor i2, Tensor i3, int[] expand1, int[] expand2, int[] expand3, int[] sumdim, int unroll_dim=1) -> Tensor
 dispatch:
 # calls unsqueeze
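The new schema entries expose the pieces of the jagged layout: a dense values buffer plus an offsets tensor (and optional lengths), with _nested_view_from_jagged and _nested_get_values converting between the nested view and its components. A hedged C++ sketch of that values-plus-offsets model, illustrative only and not the actual ATen representation:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Illustrative jagged container: row i of the nested tensor is
    // values[offsets[i] .. offsets[i+1]), so offsets has num_rows + 1 entries.
    struct Jagged {
      std::vector<float> values;     // what _nested_get_values conceptually returns
      std::vector<int64_t> offsets;  // what _nested_get_offsets conceptually returns
    };

    // Materialize the per-row view, the direction _nested_view_from_jagged goes.
    std::vector<std::vector<float>> rows_of(const Jagged& j) {
      std::vector<std::vector<float>> rows;
      for (size_t i = 0; i + 1 < j.offsets.size(); ++i) {
        rows.emplace_back(j.values.begin() + j.offsets[i],
                          j.values.begin() + j.offsets[i + 1]);
      }
      return rows;
    }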


@@ -157,43 +157,6 @@ aotriton::TensorView<Rank> mk_aotensor(const at::Tensor& q, c10::string_view ten
 cast_dtype(q.dtype()));
 }
-template<bool COPY_FROM_INPUT, // For Input Tensor
-bool COPY_BACK> // For Output Tensor
-class TensorStorageSanitizer {
-public:
-TensorStorageSanitizer(const at::Tensor& ref,
-at::Tensor& to_sanitize)
-: ref_(ref), to_sanitize_(to_sanitize)
-{
-need_sanitize = ref_.strides() != to_sanitize_.strides();
-if (!need_sanitize)
-return;
-temp_ = at::empty_like(ref_);
-if (COPY_FROM_INPUT) {
-temp_.copy_(to_sanitize_);
-}
-}
-~TensorStorageSanitizer()
-{
-if (need_sanitize && COPY_BACK)
-to_sanitize_.copy_(temp_);
-}
-at::Tensor& sanitized_tensor()
-{
-if (need_sanitize)
-return temp_;
-return to_sanitize_;
-}
-private:
-const at::Tensor& ref_;
-at::Tensor& to_sanitize_;
-at::Tensor temp_;
-bool need_sanitize = false;
-};
 }
 #define CHECK_DEVICE(x) TORCH_CHECK(x.is_cuda(), #x " must be on CUDA")
@@ -531,9 +494,6 @@ mha_bwd(const at::Tensor &dout, // batch_size x seqlen_q x num_heads, x head_si
 int d_head = head_size_og;
 hipError_t err; // TODO: Error handling
 {
-TensorStorageSanitizer<true, false> dq_s(q_t, dq_t);
-TensorStorageSanitizer<true, false> dk_s(k_t, dk_t);
-TensorStorageSanitizer<true, false> dv_s(v_t, dv_t);
 using aotriton::v2::flash::attn_bwd;
 err = attn_bwd(mk_aotensor(q_t, "q"),
 mk_aotensor(k_t, "k"),
@@ -541,9 +501,9 @@ mha_bwd(const at::Tensor &dout, // batch_size x seqlen_q x num_heads, x head_si
 softmax_scale,
 mk_aotensor(out_t, "out"),
 mk_aotensor(dout_t, "dout"),
-mk_aotensor(dq_s.sanitized_tensor(), "dq"),
-mk_aotensor(dk_s.sanitized_tensor(), "dk"),
-mk_aotensor(dv_s.sanitized_tensor(), "dv"),
+mk_aotensor(dq_t, "dq"),
+mk_aotensor(dk_t, "dk"),
+mk_aotensor(dv_t, "dv"),
 mk_aotensor<2>(softmax_lse_cont, "L"),
 mk_aotensor<2>(delta, "delta"),
 p_dropout,


@@ -150,7 +150,7 @@ hf_Bert_large,pass,0
-hf_BigBird,pass,0
+hf_BigBird,pass,46


@@ -98,7 +98,7 @@ hf_Bert_large,pass,6
-hf_BigBird,pass,6
+hf_BigBird,pass, 52


@@ -138,7 +138,7 @@ hf_Bert_large,pass,0
-hf_BigBird,fail_accuracy,0
+hf_BigBird,fail_to_run,0


@@ -150,7 +150,7 @@ hf_Bert_large,pass,0
-hf_BigBird,fail_to_run,0
+hf_BigBird,pass,46


@@ -94,7 +94,7 @@ hf_Bert_large,pass,6
-hf_BigBird,fail_to_run,3
+hf_BigBird,pass,52


@@ -150,7 +150,7 @@ hf_Bert_large,pass,0
-hf_BigBird,fail_to_run,0
+hf_BigBird,fail_accuracy,46


@@ -94,7 +94,7 @@ hf_Bert_large,pass,6
-hf_BigBird,fail_to_run,3
+hf_BigBird,pass,52


@@ -150,7 +150,7 @@ hf_Bert_large,pass,0
-hf_BigBird,pass,0
+hf_BigBird,pass,46


@@ -98,7 +98,7 @@ hf_Bert_large,pass,6
-hf_BigBird,pass,6
+hf_BigBird,pass,52


@@ -150,7 +150,7 @@ hf_Bert_large,pass,0
-hf_BigBird,fail_accuracy,0
+hf_BigBird,fail_accuracy,46


@@ -98,7 +98,7 @@ hf_Bert_large,pass,6
-hf_BigBird,pass,6
+hf_BigBird,pass,52


@@ -6,7 +6,7 @@ if(NOT __AOTRITON_INCLUDED)
 set(__AOTRITON_INSTALL_DIR "${PROJECT_SOURCE_DIR}/torch")
 ExternalProject_Add(aotriton_external
 GIT_REPOSITORY https://github.com/ROCm/aotriton.git
-GIT_TAG 9044fe5eb16130e49a0a1f781ea15037353ad542
+GIT_TAG 24a3fe9cb57e5cda3c923df29743f9767194cc27
 SOURCE_DIR ${__AOTRITON_SOURCE_DIR}
 BINARY_DIR ${__AOTRITON_BUILD_DIR}
 PREFIX ${__AOTRITON_INSTALL_DIR}
