Compare commits

...

85 Commits

Author SHA1 Message Date
0fabc3ba44 CUDA aarch64 12.6 and 12.8 builds fix triton constraints (#165022)
CUDA aarch64 12.6 and 12.8 builds fix triton constraints (#165013)

Since we have introduced CUDA aarch64 builds for all CUDA versions, we need to remove this constraint.
This was missed by https://github.com/pytorch/pytorch/pull/162364

Proper constraint on triton should be:
```
Requires-Dist: triton==3.5.0; platform_system == "Linux"
```

not:
```
Requires-Dist: triton==3.5.0; platform_system == "Linux" and platform_machine == "x86_64"
```
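
For reference, a minimal sketch (not part of the PR; assumes the `packaging` package and an installed torch wheel) that prints which triton requirement marker the local wheel carries and whether it applies on the current platform:
```python
# Minimal sketch (not part of the PR): inspect the installed torch wheel's
# Requires-Dist entries and evaluate the triton environment marker locally.
# On an aarch64 Linux host, the old 'platform_machine == "x86_64"' marker
# evaluates to False, which is why triton was not pulled in there.
from importlib.metadata import requires
from packaging.requirements import Requirement  # assumes 'packaging' is installed

for dep in requires("torch") or []:
    req = Requirement(dep)
    if "triton" in req.name:
        applies = req.marker.evaluate() if req.marker else True
        print(f"{dep!r} -> applies on this platform: {applies}")
```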

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165013
Approved by: https://github.com/Camyll, https://github.com/nWEIdia, https://github.com/tinglvv

(cherry picked from commit 81dbeb06f4b3eb6c56625ec25d377eb7c7c6c573)

Co-authored-by: atalman <atalman@fb.com>
2025-10-08 21:09:57 -04:00
26e023a973 [MPS] Update OS version in error message (#164949)
[MPS] Update OS version in error message (#164946)

Followup after https://github.com/pytorch/pytorch/pull/159912
Fixes https://github.com/pytorch/pytorch/issues/164943

Pull Request resolved: https://github.com/pytorch/pytorch/pull/164946
Approved by: https://github.com/Camyll

(cherry picked from commit 01f3a43462da594b65a6c9e8b46c132cd360cea9)

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
2025-10-08 11:11:48 -07:00
6f12be2770 CUDA 13.0 builds fix on Amazon Linux 2023 (#164893)
CUDA 13.0 builds fix on Amazon Linux 2023 (#164870)

During 2.9 rc testing I am seeing an issue on Amazon Linux 2023 with CUDA 13.0 builds

This is related to:
 https://github.com/pytorch/pytorch/issues/152756

Workflow: https://github.com/pytorch/test-infra/actions/runs/18324074610/job/52184079262

Error:
```
WARNING: There was an error checking the latest version of pip.
+ python3.11 .ci/pytorch/smoke_test/smoke_test.py --package torchonly
Traceback (most recent call last):
  File "/usr/local/lib64/python3.11/site-packages/torch/__init__.py", line 333, in _load_global_deps
    ctypes.CDLL(global_deps_lib_path, mode=ctypes.RTLD_GLOBAL)
  File "/usr/lib64/python3.11/ctypes/__init__.py", line 376, in __init__
    self._handle = _dlopen(self._name, mode)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: libcudart.so.13: cannot open shared object file: No such file or directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/pytorch/pytorch/.ci/pytorch/smoke_test/smoke_test.py", line 12, in <module>
    import torch
  File "/usr/local/lib64/python3.11/site-packages/torch/__init__.py", line 425, in <module>
    _load_global_deps()
  File "/usr/local/lib64/python3.11/site-packages/torch/__init__.py", line 383, in _load_global_deps
    _preload_cuda_deps(lib_folder, lib_name)
  File "/usr/local/lib64/python3.11/site-packages/torch/__init__.py", line 317, in _preload_cuda_deps
    raise ValueError(f"{lib_name} not found in the system path {sys.path}")
Traceback (most recent call last):
ValueError: libnvToolsExt.so.*[0-9] not found in the system path ['/pytorch/pytorch/.ci/pytorch/smoke_test', '/usr/lib64/python311.zip', '/usr/lib64/python3.11', '/usr/lib64/python3.11/lib-dynload', '/usr/local/lib64/python3.11/site-packages', '/usr/local/lib/python3.11/site-packages', '/usr/lib64/python3.11/site-packages', '/usr/lib/python3.11/site-packages']
  File "/home/ec2-user/actions-runner/_work/test-infra/test-infra/test-infra/.github/scripts/run_with_env_secrets.py", line 102, in <module>
    main()
  File "/home/ec2-user/actions-runner/_work/test-infra/test-infra/test-infra/.github/scripts/run_with_env_secrets.py", line 98, in main
    run_cmd_or_die(f"docker exec -t {container_name} /exec")
  File "/home/ec2-user/actions-runner/_work/test-infra/test-infra/test-infra/.github/scripts/run_with_env_secrets.py", line 39, in run_cmd_or_die
    raise RuntimeError(f"Command {cmd} failed with exit code {exit_code}")
RuntimeError: Command docker exec -t 7d9c5bd403cac9a9ee824d63a1d6f6057ecce89a7daa94a81617dbf8eff0ff2e /exec failed with exit code 1
```
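
A hedged repro sketch (not the fix itself) of the failing preload step: try to dlopen the CUDA 13 runtime the same way the import path does, and list where pip actually placed it under site-packages:
```python
# Repro sketch (assumption, not the actual fix): the smoke test fails because
# libcudart.so.13 cannot be dlopen'ed at import time; check the loader path and
# the pip-installed nvidia/*/lib locations.
import ctypes
import glob
import site

try:
    ctypes.CDLL("libcudart.so.13", mode=ctypes.RTLD_GLOBAL)
    print("libcudart.so.13 is resolvable via the default loader path")
except OSError as err:
    candidates = []
    for sp in site.getsitepackages():
        candidates += glob.glob(f"{sp}/nvidia/*/lib/libcudart.so.*")
    print(f"dlopen failed ({err}); pip-installed candidates: {candidates or 'none'}")
```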
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164870
Approved by: https://github.com/Camyll


(cherry picked from commit 483f4e0db91166128ad8922d86dc7222338d4ecc)

Co-authored-by: atalman <atalman@fb.com>
Co-authored-by: Eli Uriegas <1700823+seemethere@users.noreply.github.com>
2025-10-07 19:33:08 -07:00
42f0c2c970 update the baseline data for the operator benchmark (#164789)
update the baseline data for the operator benchmark (#162693)

According to the results of the last four operator benchmark runs, we found that five models achieved more than a 30% improvement compared to the baseline. Therefore, we will update the operator benchmark baseline data.
We use the average results from the four runs as the new baseline for the five models.

This also adds a pull request trigger for the operator benchmark workflow.

| Benchmarking Framework | Benchmarking Module Name | Case Name | tag | run_backward | baseline old | r1 | r2 | r3 | r4 | avg | speedup |
| -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- |
| PyTorch | add | add_M1_N1_K1_cpu | short | FALSE | 3.9497 | 2.57 | 2.54 | 2.38 | 2.31 | 2.45 | 1.61 |
| PyTorch | functional.hardtanh | functional.hardtanh_dims(512,512)_contigFalse_inplaceFalse_dtypetorch.quint8 | short | FALSE | 67.118 | 50.02 | 49.80 | 46.78 | 48.94 | 48.88 | 1.37 |
| PyTorch | relu6 | relu6_dims(512,512)_contigFalse_inplaceFalse_dtypetorch.quint8 | short | FALSE | 68.739 | 51.17 | 51.19 | 48.07 | 50.42 | 50.21 | 1.37 |
| PyTorch | relu6 | relu6_dims(256,1024)_contigFalse_inplaceFalse_dtypetorch.quint8 | short | FALSE | 69.1875 | 51.97 | 52.77 | 50.00 | 51.24 | 51.50 | 1.34 |
| PyTorch | functional.hardtanh | functional.hardtanh_dims(256,1024)_contigFalse_inplaceFalse_dtypetorch.quint8 | short | FALSE | 67.436 | 50.98 | 51.69 | 49.06 | 49.87 | 50.40 | 1.34 |
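
As a quick check on the arithmetic in the table (a sketch, not part of the PR), the new baseline is the plain mean of r1-r4 and the speedup is the old baseline divided by that mean:
```python
# Sketch of how the avg/speedup columns above are derived for the first row
# (assumption: arithmetic mean of the four runs; speedup = old baseline / avg).
old_baseline = 3.9497                 # add_M1_N1_K1_cpu
runs = [2.57, 2.54, 2.38, 2.31]
avg = sum(runs) / len(runs)           # 2.45 -> new baseline
print(f"new baseline: {avg:.2f}, speedup: {old_baseline / avg:.2f}x")  # ~1.61x
```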

@chuanqi129 @huydhn @desertfire @jainapurva

Pull Request resolved: https://github.com/pytorch/pytorch/pull/162693
Approved by: https://github.com/huydhn

(cherry picked from commit f7ea4975abb0aeb0224894f0b54b1f8fd1fa70e3)

Co-authored-by: LifengWang <lifeng.a.wang@intel.com>
2025-10-07 07:10:51 -07:00
b015422da1 fix cpp extension distributed warning spew (#164785)
fix cpp extension distributed warning spew (#162764)

With the new change we only log the warning if we're running non-distributed code or if we're on rank 0. Unit testing that certain messages get printed only on certain ranks feels kind of janky, so the test plan is below instead.

Test plan

```python
# torchrun --nproc_per_node=2 demo_fix.py

import os
import logging

logging.getLogger('torch.utils.cpp_extension').setLevel(logging.DEBUG)

import torch
if 'RANK' in os.environ:
    torch.distributed.init_process_group('nccl')

from torch.utils.cpp_extension import _get_cuda_arch_flags
_get_cuda_arch_flags()

print(f"Rank {os.environ.get('RANK', '0')} done")
```

Logs showing how `TORCH_CUDA_ARCH_LIST` only shows up once if we explicitly set the logging level to `logging.DEBUG`. The change also improves the debug message to explain what the actual behavior will be.

```
(source) [marksaroufim@devgpu005]~% torchrun --nproc_per_node=2 demo_fix.py

W0911 18:30:16.594000 1315439 /home/marksaroufim/pytorch/torch/distributed/run.py:814]
W0911 18:30:16.594000 1315439 /home/marksaroufim/pytorch/torch/distributed/run.py:814] *****************************************
W0911 18:30:16.594000 1315439 /home/marksaroufim/pytorch/torch/distributed/run.py:814] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0911 18:30:16.594000 1315439 /home/marksaroufim/pytorch/torch/distributed/run.py:814] *****************************************
[rank0]:V0911 18:30:18.921000 1316753 pytorch/torch/utils/cpp_extension.py:2444] TORCH_CUDA_ARCH_LIST is not set, using TORCH_CUDA_ARCH_LIST='10.0+PTX' for visible GPU architectures. Set os.environ['TORCH_CUDA_ARCH_LIST'] to override.
Rank 0 done
Rank 1 done
```

But if we just use the default and comment out `logging.getLogger('torch.utils.cpp_extension').setLevel(logging.DEBUG)`

Then we get

```
(source) [marksaroufim@devgpu005]~% torchrun --nproc_per_node=2 demo_fix.py
W0911 18:14:33.926000 690759 /home/marksaroufim/pytorch/torch/distributed/run.py:814]
W0911 18:14:33.926000 690759 /home/marksaroufim/pytorch/torch/distributed/run.py:814] *****************************************
W0911 18:14:33.926000 690759 /home/marksaroufim/pytorch/torch/distributed/run.py:814] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0911 18:14:33.926000 690759 /home/marksaroufim/pytorch/torch/distributed/run.py:814] *****************************************
Rank 0 done
Rank 1 done
(source) [marksaroufim@devgpu005]~%
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/162764
Approved by: https://github.com/ezyang, https://github.com/zou3519

(cherry picked from commit f7e83219619a05934a344ca699c33ee69d5a3642)

Co-authored-by: Mark Saroufim <marksaroufim@meta.com>
2025-10-06 16:58:36 -07:00
d4c4307032 Fix docker build issue after 164575 (#164779)
Fix docker build issue after 164575 (#164774)

Looks like https://github.com/pytorch/pytorch/pull/164575 introduced an issue.
The command is wrong:
```
conda install -c "whl/nightly" -y python=3.11 conda=25.7.0
```
It should just use the default conda channel:
```
conda install  -y python=3.11 conda=25.7.0
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/164774
Approved by: https://github.com/Camyll

(cherry picked from commit c1f40d33c89b361a1edad17aa25cfff1ab4014fd)

Co-authored-by: atalman <atalman@fb.com>
2025-10-06 16:56:06 -04:00
3b57315b1b [ROCm] Increase binary build timeout to 5 hours (300 minutes) (#164770)
[ROCm] Increase binary build timeout to 5 hours (300 minutes) (#163776)

Despite narrowing down the [FBGEMM_GENAI build to gfx942](https://github.com/pytorch/pytorch/pull/162648), the nightly builds still timed out because they [didn't get enough time to finish the post-PyTorch-build steps](https://github.com/pytorch/pytorch/actions/runs/17969771026/job/51109432897).

This PR increases timeout for ROCm builds for both [libtorch ](https://github.com/pytorch/pytorch/actions/runs/17969771026)and [manywheel](https://github.com/pytorch/pytorch/actions/runs/17969771041), because both of those are close to the 4hr mark currently.

This PR is a more ROCm-targeted version of https://github.com/pytorch/pytorch/pull/162880 (which is for release/2.9 branch).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163776
Approved by: https://github.com/jeffdaily


(cherry picked from commit 0ec946a0522748332f42675a4d690ff32d773d42)

Co-authored-by: Jithun Nair <jithun.nair@amd.com>
Co-authored-by: Jeff Daily <jeff.daily@amd.com>
2025-10-06 16:08:40 -04:00
c74f05797d Pin conda version for Docker builds (#164579)
Pin conda version for Docker builds (#164575)

Mitigates https://github.com/pytorch/pytorch/issues/164574
Remove the unused CUDA_CHANNEL var; it was used before, when we installed pytorch via conda.

Please note: CUDA 13.0 failures are expected since the CI tries to build against prod and CUDA 13.0 is not available in prod yet.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164575
Approved by: https://github.com/malfet, https://github.com/Camyll

(cherry picked from commit e40fe634b1a7aa33e278b1404ee02dea12277080)

Co-authored-by: atalman <atalman@fb.com>
2025-10-03 11:44:46 -04:00
fd364580a9 [Cherry-Pick] Work Around exposing statically linked libstdc++ CXX11 ABI strong symbols (#163980) (#164508)
* Work Around exposing statically linked libstdc++ CXX11 ABI strong symbols (#163980)

Work Around for: https://github.com/pytorch/pytorch/issues/133437

Test plan:
1. Build whl in CI
2. Download
3. Run ``nm -D libtorch_cpu.so | grep "recursive_directory_iterator"``

Test with check_binary_symbols.py:

Success:
```
num_cxx11_symbols: 2326
num_pre_cxx11_symbols: 0
lib: /home/ec2-user/github/variant-repack/.venv/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so
num_statically_linked_symbols (T): 0
```

Failure when using "W" instead of "T" as the symbol type, i.e. calling ``cxx11_statically_linked_symbols = grep_symbols(lib, STATICALLY_LINKED_CXX11_ABI, symbol_type="W")``:
```
num_cxx11_symbols: 2326
num_pre_cxx11_symbols: 0
lib: /home/ec2-user/github/variant-repack/.venv/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so
num_statically_linked_symbols (T): 20
Traceback (most recent call last):
  File "/home/ec2-user/github/variant-repack/test/pytorch/.ci/pytorch/smoke_test/check_binary_symbolsc.py", line 130, in <module>
    main()
  File "/home/ec2-user/github/variant-repack/test/pytorch/.ci/pytorch/smoke_test/check_binary_symbolsc.py", line 126, in main
    check_lib_statically_linked_libstdc_cxx_abi_symbols(libtorch_cpu_path)
  File "/home/ec2-user/github/variant-repack/test/pytorch/.ci/pytorch/smoke_test/check_binary_symbolsc.py", line 95, in check_lib_statically_linked_libstdc_cxx_abi_symbols
    raise RuntimeError(
RuntimeError: Found statically linked libstdc++ symbols (recursive_directory_iterator), but there shouldn't be any, see: ['std::filesystem::__cxx11::recursive_directory_iterator::recursion_pending() const', 'std::filesystem::__cxx11::recursive_directory_iterator::depth() const', 'std::filesystem::__cxx11::recursive_directory_iterator::options() const', 'std::filesystem::__cxx11::recursive_directory_iterator::operator*() const', 'std::__shared_ptr<std::filesystem::__cxx11::recursive_directory_iterator::_Dir_stack, (__gnu_cxx::_Lock_policy)2>::operator bool() const', 'std::filesystem::__cxx11::recursive_directory_iterator::disable_recursion_pending()', 'std::filesystem::__cxx11::recursive_directory_iterator::pop(std::error_code&)', 'std::filesystem::__cxx11::recursive_directory_iterator::pop()', 'std::filesystem::__cxx11::recursive_directory_iterator::increment(std::error_code&)', 'std::filesystem::__cxx11::recursive_directory_iterator::recursive_directory_iterator(std::filesystem::__cxx11::path const&, std::filesystem::directory_options, std::error_code*)', 'std::filesystem::__cxx11::recursive_directory_iterator::recursive_directory_iterator(std::filesystem::__cxx11::path const&, std::filesystem::directory_options, std::error_code*)', 'std::filesystem::__cxx11::recursive_directory_iterator::~recursive_directory_iterator()', 'std::filesystem::__cxx11::recursive_directory_iterator::~recursive_directory_iterator()', 'std::filesystem::__cxx11::recursive_directory_iterator::operator=(std::filesystem::__cxx11::recursive_directory_iterator&&)', 'std::filesystem::__cxx11::recursive_directory_iterator::operator=(std::filesystem::__cxx11::recursive_directory_iterator const&)', 'std::filesystem::__cxx11::recursive_directory_iterator::operator++()', 'std::__shared_ptr<std::filesystem::__cxx11::recursive_directory_iterator::_Dir_stack, (__gnu_cxx::_Lock_policy)2>::__shared_ptr(std::__shared_ptr<std::filesystem::__cxx11::recursive_directory_iterator::_Dir_stack, (__gnu_cxx::_Lock_policy)2>&&)', 'std::__shared_ptr<std::filesystem::__cxx11::recursive_directory_iterator::_Dir_stack, (__gnu_cxx::_Lock_policy)2>::__shared_ptr()', 'std::__shared_ptr<std::filesystem::__cxx11::recursive_directory_iterator::_Dir_stack, (__gnu_cxx::_Lock_policy)2>::__shared_ptr(std::__shared_ptr<std::filesystem::__cxx11::recursive_directory_iterator::_Dir_stack, (__gnu_cxx::_Lock_policy)2>&&)', 'std::__shared_ptr<std::filesystem::__cxx11::recursive_directory_iterator::_Dir_stack, (__gnu_cxx::_Lock_policy)2>::__shared_ptr()']
```
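
For reference, a minimal sketch (not the actual check_binary_symbols.py) of the strong-symbol count the test plan describes:
```python
# Minimal sketch (not check_binary_symbols.py): count *defined* ("T") libstdc++
# recursive_directory_iterator symbols exported from libtorch_cpu.so; the
# workaround is expected to drive this count to zero. Weak ("W") symbols are
# deliberately not counted.
import subprocess
import sys

lib = sys.argv[1] if len(sys.argv) > 1 else "libtorch_cpu.so"
nm_out = subprocess.run(["nm", "-D", lib], capture_output=True, text=True, check=True).stdout
strong = [line for line in nm_out.splitlines()
          if " T " in line and "recursive_directory_iterator" in line]
print(f"num_statically_linked_symbols (T): {len(strong)}")
```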
Pull Request resolved: https://github.com/pytorch/pytorch/pull/163980
Approved by: https://github.com/isuruf, https://github.com/malfet

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>

* fix

---------

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
2025-10-02 17:49:44 -04:00
2f6387e9a1 [CherryPick][2.9] Cherry pick request for Reapply "Make functionalization ViewMeta serializable with pickle" #163769 (#163873)
Reapply "Make functionalization `ViewMeta` serializable with pickle. (#143712)"  (#163769)

NOTE: This is a re-export of https://github.com/pytorch/pytorch/pull/161994; the changes between these two PRs are exclusively to the buck/build files

(Summary from #161994 )
Attempted rebase of https://github.com/pytorch/pytorch/pull/143712.

This reverts commit 6c713ccb5e0df227dd5b630057cbccd373cbe7d6.

cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx chenyang78 kadeng chauhang amjames Lucaskabela

imported-using-ghimport

Test Plan: Imported from OSS

Differential Revision: D81524507

Pulled By: Lucaskabela

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163769
Approved by: https://github.com/dolpm


(cherry picked from commit 7d710403b003e44bf31d367673a05468e49df75d)

Co-authored-by: Brian Hirsh <hirsheybar@fb.com>
2025-10-02 16:07:51 -04:00
017d857f5f fix pickling for BitwiseFn (#163861)
* fix pickling for BitwiseFn (#163571)

Summary:
ran into AttributeError: Can't get local object 'make_opaque_bitwise_fn.<locals>.BitwiseFn'

looks like it was fixed for UnaryFn but not BitwiseFn in https://github.com/pytorch/pytorch/pull/138395

Fixes #147841
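
A minimal standalone illustration (not PyTorch code) of why a class created inside a factory function cannot be pickled:
```python
# Minimal illustration (not PyTorch code): pickle stores classes by module plus
# qualified name, and a '<locals>' qualname cannot be looked up again, so
# instances of factory-created classes fail to serialize.
import pickle

def make_local_class():
    class BitwiseFnLike:  # qualname: make_local_class.<locals>.BitwiseFnLike
        pass
    return BitwiseFnLike

try:
    pickle.dumps(make_local_class()())
except Exception as err:  # AttributeError/PicklingError depending on Python version
    print(type(err).__name__, err)
```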

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163571
Approved by: https://github.com/jamesjwu

(cherry picked from commit cde5c9aebd7a2eda0c935de1ab7a40b6453c5813)

* Fix lintrunner with -a

---------

Co-authored-by: dolpm <34420038+dolpm@users.noreply.github.com>
Co-authored-by: Lucas Kabela <lucaskabela@meta.com>
2025-10-02 15:35:40 -04:00
d6e8411889 Make sure Windows CUDA 12.8 build follow same arches as Linux builds (#164477)
Make sure Windows CUDA 12.8 build follow same arches as Linux builds (#164470)

I believe ``set TORCH_CUDA_ARCH_LIST=7.0;7.5;8.0;8.6;9.0;10.0;12.0`` is the one that's actually used. Hence, remove 6.1 to align the supported arches with the Linux builds.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164470
Approved by: https://github.com/tinglvv, https://github.com/nWEIdia, https://github.com/Camyll

(cherry picked from commit 235b995ce18de632ab816940319fcd66b46039b8)

Co-authored-by: Andrey Talman <atalman@fb.com>
2025-10-02 14:33:06 -04:00
10b501fde9 [Flex] Fix silent correctness w/ backpropping grads (#164366)
[Flex] Fix silent correctness w/ backpropping grads (#163677)

Fixes #https://github.com/pytorch/pytorch/issues/162228

# Summary

The majority of our tests only compile flex-attention in isolation. This means that for fake tensor propagation the input primals and all captured buffers don't do any intermediate computation below autograd. As a result, they happen by chance to match the `requires_grad`-ness of the eager implementation and this check will pass. However, if score_mod is the result of some other intermediate fake tensor prop, then it is not guaranteed to have accurate requires_grad-ness, which is what was happening here.

TL;DR: this was a belt-and-suspenders check that was actually harmful; we should just let the joint graph machinery create the correct joint graph.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163677
Approved by: https://github.com/ydwu4

(cherry picked from commit e2ce79e4cce5327b71fcf366fad1133030563285)

Co-authored-by: drisspg <drisspguessous@gmail.com>
2025-10-01 14:43:28 -07:00
31c72b8a96 [a2av] Separate in/out splits into two tensors (#164028)
[a2av] Separate in/out splits into two tensors (#163837)

Old signature:
`all_to_all_vdev(Tensor input, Tensor(a!) out, Tensor(a!) in_out_splits, str group_name)`
New signature:
`all_to_all_vdev(Tensor input, Tensor(a!) out, Tensor in_splits, Tensor(a!) out_splits_offsets, str group_name)`

i.e. split `in_out_splits` into an IN tensor and an OUT tensor so that we can define the TORCH_LIBRARY signature better.
This also brings it in line with the 2D version.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163837
Approved by: https://github.com/fduwjj
ghstack dependencies: #163886

(cherry picked from commit bbf8aa43efe755b9c310347b3780962fca85bf9c)

Co-authored-by: Ke Wen <kw2501@meta.com>
2025-10-01 14:43:19 -07:00
1cd83de315 [Flex attention] Fix flex attention head broadcast (#164368)
[Flex attention] Fix flex attention head broadcast (#163426)

Fixes part of #163314

In particular bug: **Bug 1: H=None Broadcasting Produces Incorrect Results**

This fixes a shape bug when slicing BlockMask on the Q-tile axis with an int (**mask[:, :, i]**). That form of indexing collapses the Q dimension, so kv_num_blocks/kv_indices lose their expected [B, H, Q_tiles, …] shape. Even though the mask_mod remains "interpretable", once the shape is lost the kernel's stride math reads wrong offsets, which produces silent numerical mismatches compared to regular SDPA, especially with single-position decoding and H broadcasting.

The B=None, H=None case only works by accident: with a singleton batch/head the kernel maps to index 0 via `sparse_idx_z = off_zq % 1` and `sparse_idx_hq = off_hq % 1`, and with a single Q tile `q_start // SPARSE_Q_MULTIPLE = 0`. The missing Q-tiles stride is multiplied by 0, so the bad offset from the collapsed Q axis doesn't move the pointer and it happens to read the first tile correctly. Once H > 1 or there are multiple Q tiles, those terms become nonzero and the kernel indexes with wrong strides, which causes silent errors.
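
A shape-only illustration (plain tensors, not the BlockMask API) of the collapse described above:
```python
# Shape-only illustration (plain tensors, not BlockMask): indexing the Q-tile
# axis with an int drops that dimension, while a length-1 slice preserves the
# rank that the kernel's stride math expects.
import torch

kv_num_blocks = torch.zeros(2, 4, 8)     # hypothetical [B, H, Q_tiles]
print(kv_num_blocks[:, :, 3].shape)      # torch.Size([2, 4])    -> Q axis collapsed
print(kv_num_blocks[:, :, 3:4].shape)    # torch.Size([2, 4, 1]) -> rank preserved
```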

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163426
Approved by: https://github.com/drisspg

(cherry picked from commit 1a42656d6c43a9bb7eb90c511884ce451d29422f)

Co-authored-by: Isalia20 <irakli.salia854@gmail.com>
2025-10-01 13:48:10 -07:00
881c2ccae9 Update Gloo submodule (#164371)
Update Gloo submodule (#163112)

This makes PyTorch buildable with gcc-15, tested by running the build inside a `fedora:44` docker container:
```
docker run --rm -it fedora:44 bash -c "yum install -y g++ python3-devel git; git clone https://github.com/pytorch/pytorch; cd pytorch; git checkout 8f710acce8332979c9a7bf97e72666dfd35c43e6; python3 -mpip install -r requirements.txt; python3 setup.py bdist_wheel"
```

Fixes https://github.com/pytorch/pytorch/issues/156595
Pull Request resolved: https://github.com/pytorch/pytorch/pull/163112
Approved by: https://github.com/huydhn

(cherry picked from commit 65845d72917fc27cd89a88b067e7c8f44bc0c987)

Co-authored-by: Nikita Shulga <nshulga@meta.com>
2025-10-01 12:00:18 -07:00
764f65584a [MPS] Chunk fillBuffer into 4Gb slices (#164370)
[MPS] Chunk fillBuffer into 4Gb slices (#164108)

To avoid a regression on MacOS 26, which one could observe by running the following script:
```swift
import Metal

let bufferSize = 1<<32 + 4

guard let device = MTLCreateSystemDefaultDevice() else { fatalError("No Metal device found") }
guard let buffer = device.makeBuffer(length: bufferSize, options: .storageModeShared) else { fatalError("Failed to create buffer") }

guard let cmdQueue = device.makeCommandQueue() else { fatalError("Failed to create command queue") }
guard let cmdBuffer = cmdQueue.makeCommandBuffer() else { fatalError("Failed to create command buffer") }
guard let blitEncoder = cmdBuffer.makeBlitCommandEncoder() else { fatalError("Failed to create blit encoder") }

blitEncoder.fill(buffer: buffer, range: 0..<bufferSize, value: 0x42)
blitEncoder.endEncoding()

cmdBuffer.commit()
cmdBuffer.waitUntilCompleted()

let tailOffs = 8
let hostPtr = buffer.contents().bindMemory(to: UInt8.self, capacity: bufferSize)
let tail = Array(UnsafeBufferPointer(start: hostPtr + (bufferSize - tailOffs), count: tailOffs))

for (idx, val) in tail.enumerated() {
    print("Offs 0x\(String(bufferSize - tailOffs + idx, radix: 16)): 0x\(String(val, radix: 16))")
}
```

Test plan: run `test_indexing.py` on MacOS-26

Fixes https://github.com/pytorch/pytorch/issues/161265
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164108
Approved by: https://github.com/Skylion007

(cherry picked from commit 6db1b9dd217501e0b3171d96335bed7b2bb53c36)

Co-authored-by: Nikita Shulga <nikita.shulga@gmail.com>
2025-10-01 11:59:56 -07:00
3e8a062385 Update Microsoft C++ Redistributable to the latest version (#164369)
Update Microsoft C++ Redistributable to the latest version (#161430)

Update Microsoft C++ Redistributable link to the latest version as one of the libraries used by AMD currently has a dependency on that.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/161430
Approved by: https://github.com/malfet

(cherry picked from commit 1330c638bef7fac64a42935b5a46ee32637ddd4d)

Co-authored-by: Saman Khatir <saman.khatir@amd.com>
2025-10-01 11:57:53 -07:00
3abee625e1 Fix warn message (#164367)
Fix warn message (#163578)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163578
Approved by: https://github.com/albanD, https://github.com/Skylion007, https://github.com/atalman, https://github.com/v0i0

(cherry picked from commit f3f67ff43a014b75b804d5ded0c7de3d8e0be65f)

Co-authored-by: drisspg <drisspguessous@gmail.com>
2025-10-01 11:57:16 -07:00
f227c883f9 [MPSHooks] Release pending command encoder (#164365)
[MPSHooks] Release pending command encoder (#164093)

Release the pending command encoder before returning a command buffer, as subsequent callers are very likely to allocate their own encoder, which otherwise results in the following runtime error:
```
 tryCoalescingPreviousComputeCommandEncoderWithConfig:nextEncoderClass:]:1090: failed assertion `A command encoder is already encoding to this command buffer'
```

Added regression test to `test_mps_extension`

Please note that `torch::mps::get_command_buffer()` should be called with the dispatch_queue held, both before and after this change, but many implementations skip that.

Fixes https://github.com/pytorch/pytorch/issues/163721
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164093
Approved by: https://github.com/atalman, https://github.com/Skylion007

(cherry picked from commit 8f32adc90a7fee83583c9ba89dbdfabb317e0452)

Co-authored-by: Nikita Shulga <nikita.shulga@gmail.com>
2025-10-01 11:56:42 -07:00
a5feacb14b [SDPA] [MPS] Fixes regression in 2.8.0 for scaled_dot_product_attention using mps (#164364)
[SDPA] [MPS] Fixes regression in 2.8.0 for scaled_dot_product_attention using mps (#163598)

Fixes #163597

- Updates the fast SDPA implementations to take in the query tensor's stride info, similar to key and value, instead of assuming a fixed stride.
- Updates tests with additional transpose/permutation layouts (see the sketch after this list for the kind of layout involved); the new tests catch the regression.
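
A hedged repro sketch of the kind of layout the new tests cover (shapes and tolerance are illustrative assumptions, not the actual test):
```python
# Sketch: a query whose strides come from a transpose rather than a fresh
# contiguous allocation; before the fix, the MPS fast path assumed contiguous
# query strides and could silently produce wrong results for such layouts.
import torch
import torch.nn.functional as F

device = "mps" if torch.backends.mps.is_available() else "cpu"
q = torch.randn(1, 1, 8, 64, device=device).transpose(1, 2)  # (1, 8, 1, 64), non-contiguous
k = torch.randn(1, 8, 256, 64, device=device)
v = torch.randn(1, 8, 256, 64, device=device)

out = F.scaled_dot_product_attention(q, k, v)
ref = F.scaled_dot_product_attention(q.contiguous(), k, v)
print(torch.allclose(out, ref, atol=1e-4))  # expected True once strides are honored
```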

### Benchmarking with script found in [implementation PR](https://github.com/pytorch/pytorch/pull/152781#:~:text=19.8%25%20speed%20improvement-,Script%20to%20get%20perf%3A,-import%20torch%0Aimport)

Times are averaged over 100000 iterations. This change should not have any significant performance difference. Tested on an M3 Pro

### Vector Fast Path (q_len=1, k_len=256)

- Before: 0.160 ms
- After: 0.157 ms

### Vector 2-pass (q_len=1, k_len=4096)

- Before: 0.342 ms
- After: 0.339 ms

### Vector Fast Path (q_len=8, k_len=256)

- Before: 0.228 ms
- After: 0.231 ms

### Vector 2-pass (q_len=8, k_len=4096)

- Before: 0.432 ms
- After:  0.436 ms

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163598
Approved by: https://github.com/malfet

(cherry picked from commit 1c12d7416bc4f1cf0bc8a229e64169fc361b688e)

Co-authored-by: Vismai Khanderao <59114226+Vismai-Khanderao@users.noreply.github.com>
2025-10-01 11:37:14 -07:00
71282c8364 Update Sphinx theme (#164147) (#164254)
Fix links in the top nav bar: 71e55749be

Pull Request resolved: https://github.com/pytorch/pytorch/pull/164147
Approved by: https://github.com/albanD

(cherry picked from commit e88cca069171ceb117dd1ceb73e8bf3e54aa83cf)
2025-10-01 09:59:45 -07:00
e70d9f5322 [vllm hash update] update the pinned vllm hash (#164190) (#164312)
* [vllm hash update] update the pinned vllm hash (#164190)

This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned vllm hash.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164190
Approved by: https://github.com/pytorchbot

* Cherry pick b7125b3c456d48445ab0b84fab28702577cd9557

Signed-off-by: Huy Do <huydhn@gmail.com>

---------

Signed-off-by: Huy Do <huydhn@gmail.com>
Co-authored-by: PyTorch UpdateBot <pytorchupdatebot@users.noreply.github.com>
2025-10-01 06:43:17 -07:00
005e3e8d78 Clean up obsoleted vLLM tests (#164282)
Clean up obsoleted vLLM tests (#163383)

They were removed in https://github.com/vllm-project/vllm/pull/25117 and https://github.com/vllm-project/vllm/pull/22772, and are thus failing in trunk at the moment after the latest pin commit update.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163383
Approved by: https://github.com/wdvr, https://github.com/seemethere, https://github.com/malfet

(cherry picked from commit a31acf32bd18e115df910002aef42baf7a9b4a33)

Co-authored-by: Huy Do <huydhn@gmail.com>
2025-09-30 14:40:57 -07:00
72cf48ea43 [AARCH64][CD][CUDA13][Triton][PTXAS] Turn on BUILD_BUNDLE_PTXAS=1 (#164236)
[AARCH64][CD][CUDA13][Triton][PTXAS] Turn on BUILD_BUNDLE_PTXAS=1   (#163988)

See also #163972, which was intended to be this PR.

Triton (release/3.5.x) ships a CUDA 12.8 ptxas by default.
This PR tries to bundle a ptxas version for cuda13, so that it can help https://github.com/pytorch/pytorch/issues/163801 when users run on new devices like THOR and Spark.

Fixes https://github.com/pytorch/pytorch/issues/163801

Test Plan:

Check binary size increase against nightly or v2.9RC
Install the binary into a working THOR and GB200/GH100 machine (reproduce the original issue first on THOR), then install the binary built from this PR; we expect the issue to be gone without any additional user setting. Testing on GB200 is to ensure no regression.
Reference: https://github.com/pytorch/pytorch/pull/119750 and 5c814e2527

Note: with this PR, torch.compile in the PyTorch world is supposed to find ptxas via "torch/_inductor/runtime/compile_tasks.py" and "_set_triton_ptxas_path". Use cases that do not go through "_set_triton_ptxas_path" may not be able to use the CUDA 13 ptxas binary.
However, as is, the Triton world does not know about the existence of this new CUDA 13 ptxas. So if a user assumes there is already a pytorch/bin/ptxas and deletes the ptxas from Triton, then c6ad34f7eb/python/triton/knobs.py (L216) would still complain that ptxas is not found (if removed, it won't know this new one is available).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163988
Approved by: https://github.com/atalman

(cherry picked from commit 3b4ad4a17d69e2db495ecaf3bae8916282a4eb0d)

Co-authored-by: Wei Wang <weiwan@nvidia.com>
2025-09-30 13:53:56 -04:00
a21a4bf11a [CI] Move libtorch-cpu-shared-with-deps-release-build to python 3.10 (#164182)
[CI] Move libtorch-cpu-shared-with-deps-release-build to python 3.10 (#162877)

Related to https://github.com/pytorch/pytorch/pull/162862

Pull Request resolved: https://github.com/pytorch/pytorch/pull/162877
Approved by: https://github.com/malfet

(cherry picked from commit c9e57d7e9f326e427fc4ae5c318fd017cd4b75a9)

Co-authored-by: atalman <atalman@fb.com>
2025-09-29 15:52:16 -07:00
21fec65781 Use linux.g4dn.4xlarge.nvidia.gpu for cuda 12.4 legacy driver tests (#164172)
Use linux.g4dn.4xlarge.nvidia.gpu for cuda 12.4 legacy driver tests (#163956)

Workaround for https://github.com/pytorch/pytorch/issues/163658

Looks like the workflow passes on 12.8 builds that use linux.g4dn.4xlarge.nvidia.gpu but it's failing on 12.6 builds that use linux.4xlarge.nvidia.gpu: https://github.com/pytorch/pytorch/actions/runs/17953843505/job/51080623612#step:13:470

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163956
Approved by: https://github.com/malfet


(cherry picked from commit 349c960970f4e29eff0d37a9b3c1ca5ed86a121a)

Co-authored-by: atalman <atalman@fb.com>
Co-authored-by: Mark Saroufim <marksaroufim@meta.com>
2025-09-29 16:14:37 -04:00
22d46b50ec [CUDA] revert PR 130472 (#163379)
[CUDA] revert PR 130472 (#162950)

This change may also resolve https://github.com/pytorch/pytorch/issues/161789, though verification is still needed.

PR #130472 introduced a problem of freeing the same address without clean metadata; per the discussion below, it was reverted.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/162950
Approved by: https://github.com/ngimel, https://github.com/eqy, https://github.com/syed-ahmed

(cherry picked from commit 4a160dae3cabaff358a6bb2490d0160dd1bf2cdf)

Co-authored-by: thenumberouscode <dream20151224@163.com>
2025-09-29 16:05:26 -04:00
d1b63e2b4a Skip test_conv3d_cudnn_broken on ROCM (#164163)
Skip test_conv3d_cudnn_broken on ROCM (#164138)

Followup after https://github.com/pytorch/pytorch/pull/163903. Fixes https://github.com/pytorch/pytorch/issues/164137

Pull Request resolved: https://github.com/pytorch/pytorch/pull/164138
Approved by: https://github.com/Camyll

(cherry picked from commit 95be302889b8683b7ec7793a69ffa8891b6b5af8)

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
2025-09-29 11:41:18 -07:00
20100b7210 [c10d] P2P tensors must be dense (#163981)
[c10d] P2P tensors must be dense (#163719)

Fixes #161324
by adding an `is_non_overlapping_and_dense` check (see the sketch below).
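
A small illustration (not the actual c10d check) of what a non-dense tensor looks like and what a caller should pass instead:
```python
# Illustration (not the actual c10d check): a strided slice leaves holes in the
# underlying memory, so it is not "non-overlapping and dense"; such a buffer is
# now rejected for P2P instead of silently transferring the wrong bytes.
import torch

full = torch.arange(16.0).reshape(4, 4)
view = full[:, ::2]                 # shape (4, 2), stride (4, 2): not dense
print(view.is_contiguous())         # False
dense = view.contiguous()           # what a caller should pass instead
print(dense.is_contiguous())        # True
```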

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163719
Approved by: https://github.com/ngimel

(cherry picked from commit 11a231ef52841a549913b7a6d423cc9004b6b58b)

Co-authored-by: Ke Wen <kw2501@meta.com>
2025-09-29 11:27:24 -07:00
a2c77043ee Add operator benchmarking run to CI nightly (#164151)
Add operator benchmarking run to CI nightly (#162530)

This PR introduces a new "operator microbenchmark" CI workflow and GitHub Actions for operator microbenchmarks, updating test scripts and job matrices to support new parameters, and broadening the operator benchmark tests to include more data types, larger shapes, and gradient tests. The benchmark configurations now focus more on different cuda hardware and multiple dtypes (bf16, fp16, fp32), for both compile and eager mode.

**Benchmark Configuration and Coverage:**

* Expanded operator benchmark configurations in `addmm_test.py`, `bmm_test.py`, `matmul_test.py`, and `mm_test.py` to benchmark multiple dtypes on CUDA devices, in eager and compile mode, for forward and backward run. The configs with tag "long" for the above mentioned files are being run in CI.
* The CI benchmarking is running on various hardwares: H100, A100.
* The CI job also uploads the microbenchmarking outputs to a [HUD](https://hud.pytorch.org/benchmark/llms?repoName=pytorch%2Fpytorch&benchmarkName=PyTorch+operator+microbenchmark) dashboard.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/162530
Approved by: https://github.com/huydhn


(cherry picked from commit 54b38f3b46c33a1cc4e8f7894619358afcbd7c89)

Co-authored-by: jainapurva <apurvajain.kota@gmail.com>
Co-authored-by: Huy Do <huydhn@gmail.com>
2025-09-29 11:21:19 -07:00
b64fc8e41e Fix operator benchmark issue#162708 (#164140)
Fix operator benchmark issue#162708 (#162744)

This PR skips memory metric calculation for ops which don't take tensor input, fixing the operator_benchmark bug

Fixes https://github.com/pytorch/pytorch/issues/162708

Pull Request resolved: https://github.com/pytorch/pytorch/pull/162744
Approved by: https://github.com/huydhn

(cherry picked from commit 5f66902ecfb9cb4f7b9c50cb86307217cec1dbe9)

Co-authored-by: jainapurva <apurvajain.kota@gmail.com>
2025-09-29 09:34:26 -07:00
709f4f62a0 [cuDNN][Convolution] Disable cuDNN for 3D convolutions with kernel size != 1 for cuDNN 9.8+ (#164027)
[cuDNN][Convolution] Disable cuDNN for 3D convolutions with kernel size != 1 for cuDNN 9.8+ (#163581)

To workaround #163539

Still confirming whether 9.10 is affected. The original test states that the convolution is "large," but note that the input size does not appear to require 64-bit indexing.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163581
Approved by: https://github.com/ngimel, https://github.com/malfet


(cherry picked from commit e2817ac20426356278502db3b1614ea87cb7cff7)

Co-authored-by: Eddie Yan <eddiey@nvidia.com>
Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
2025-09-29 09:07:14 -07:00
11f776c8ee [cuDNN][SDPA] Disable dropout for cuDNN SDPA on 9.11 - 9.13 (#164026)
[cuDNN][SDPA] Disable dropout for cuDNN SDPA on 9.11 - 9.13 (#163903)

cuDNN introduced some broken heuristics for these cases so we need to disable dropout to avoid unexpected crashes due to heuristics refusing to proceed

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163903
Approved by: https://github.com/ngimel, https://github.com/malfet, https://github.com/atalman

(cherry picked from commit ed3085814a870f7a07b7f9c696999a47d4f85376)

Co-authored-by: Eddie Yan <eddiey@nvidia.com>
2025-09-29 09:06:23 -07:00
45e257f046 [cuDNN][conv][64-bit] Disable cuDNN for 64-bit depthwise convs again (#164023)
[cuDNN][conv][64-bit] Disable cuDNN for 64-bit depthwise convs again (#163171)

The test is breaking; will check whether there's an older version we can enable it on to avoid completely dropping support.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163171
Approved by: https://github.com/ngimel, https://github.com/malfet

(cherry picked from commit 0ea10f9912a9ec7c6d606bc71e3ec91f20372212)

Co-authored-by: eqy <eddiey@nvidia.com>
2025-09-29 09:03:36 -07:00
37e2626639 Update the operator benchmarking, to benchmark using torch.compile (#164101)
Update the operator benchmarking, to benchmark using torch.compile (#161394)

This pull request enhances the PyTorch operator benchmarking suite by introducing support for benchmarking with `torch.compile` mode, in addition to the existing Eager and JIT modes. It also adds peak memory measurement (fwd/bwd pass), improves the JSON output format used by the dashboard for reporting, and introduces some more CLI options. The new CLI flags introduced are:

- Added `--use-compile` CLI argument and corresponding logic to run benchmarks using `torch.compile`, including mutual exclusivity with `--use-jit`
- Added `--benchmark-name` argument for customizing the benchmark name in output
- Updated default value for `--output-json-for-dashboard` to `benchmark-results.json` for more predictable output file name

Sample command to run a single operator:
`python -m pt.mm_test --use-compile`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/161394
Approved by: https://github.com/jbschlosser

(cherry picked from commit af60398c3a057506363e028bf328843a755b4f24)

Co-authored-by: jainapurva <apurvajain.kota@gmail.com>
2025-09-29 07:49:05 -07:00
d7a703ea92 [SymmMem] Barrier on team instead of world (#163376)
[SymmMem] Barrier on team instead of world (#163298)

As titled. Avoiding a potential hang when running dispatch and combine in subgroups.

The rest is just a rearrangement of the tests to create a sub-group test class (no substantial change).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163298
Approved by: https://github.com/fegin

(cherry picked from commit f8fb437197033c33ecc435cd5e1e6a5b2bc5bf69)

Co-authored-by: Ke Wen <kw2501@meta.com>
2025-09-26 16:41:18 -07:00
daa3d04325 [SymmMem] Fix memory allocation hold-up (#163375)
[SymmMem] Fix memory allocation hold-up (#162680)

Problem:
Without a MemPool, it looks like the nvshmem backend never deallocates memory.

Cause:
Handles in `symm_mems_` (a map) keep references to memory allocations.

Solution:
- Remove reference to allocation from handles -- the reference is never used anyway.
- Use `unique_ptr` instead of `shared_ptr` to wrap allocation to ensure single ownership.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/162680
Approved by: https://github.com/ezyang
ghstack dependencies: #163298

(cherry picked from commit 7130b174e07dbc1a708934b18dede3d88e8f779f)

Co-authored-by: Ke Wen <kw2501@meta.com>
2025-09-26 16:35:56 -07:00
999304396f [dist] handle discontiguous allgather/reducescatter inputs (#163987)
[dist] handle discontiguous allgather/reducescatter inputs (#163712)

Fixes #163483

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163712
Approved by: https://github.com/ezyang, https://github.com/kwen2501

(cherry picked from commit 71eec6a0bf69f712f4b9279fdc8d1459be0426e6)

Co-authored-by: Natalia Gimelshein <ngimel@meta.com>
2025-09-26 16:21:08 -07:00
5340e741df [Reland][163423] Promote @requires_nvshmem instead of enable_triton (#163916)
[Reland][163423] Promote `@requires_nvshmem` instead of `enable_triton` (#163549)

#163423 was approved but reverted due to a revert of base.
Relanding without base.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163549
Approved by: https://github.com/wdvr


(cherry picked from commit 6e6c899347db952f6a691feb4e8610fe9cca0279)

Co-authored-by: Ke Wen <kw2501@fb.com>
Co-authored-by: Wouter Devriendt <wouterdevriendt@meta.com>
2025-09-26 15:58:30 -07:00
7cadf8ac04 [Inductor][Intel GPU] Save threads_per_warp from triton compiled kernel for launching kernel correctly in cpp wrapper. (#163388)
[Inductor][Intel GPU] Save `threads_per_warp` from the Triton-compiled kernel for launching the kernel correctly in the cpp wrapper. (#163315)

On the Inductor XPU backend, `threads_per_warp` is not always 32. For Intel GEMM Triton kernels, it can be 16. This information must be preserved for XPU so that the Cpp wrapper can launch the kernel with the correct configuration.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163315
Approved by: https://github.com/EikanWang, https://github.com/desertfire

(cherry picked from commit 9f8a311af09586ac4026d6a56fc7c4ac7acc62ed)

Co-authored-by: xinan.lin <xinan.lin@intel.com>
2025-09-26 14:42:09 -04:00
f9e495fe8e Move inductor jobs 3.9->3.10 (#163954)
Move inductor jobs 3.9->3.10 (#162323)

Related to: https://github.com/pytorch/pytorch/issues/161167

Pull Request resolved: https://github.com/pytorch/pytorch/pull/162323
Approved by: https://github.com/huydhn, https://github.com/Skylion007


(cherry picked from commit e8eeb060348f250975124abb957b1d7d9c4af9a0)

Co-authored-by: atalman <atalman@fb.com>
Co-authored-by: Huy Do <huydhn@gmail.com>
2025-09-26 12:37:50 -04:00
57dc68844d [CI] Fix test_triton_wait_until hang (#163914)
[CI] Fix test_triton_wait_until hang (#163886)

I don't know why `nvshmem_barrier_all_kernel` causes the test to hang. Will investigate.
But since it is an unnecessary call here, I am removing it to unblock other PRs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163886
Approved by: https://github.com/fegin

(cherry picked from commit 96275dbf88372bb32a123c4ea918498128fbecb9)

Co-authored-by: Ke Wen <kw2501@meta.com>
2025-09-26 12:16:00 -04:00
63da9d2730 [Release 2.9] Update torch-xpu-ops commit pin (#163622)
Update commit pin to 789f59
2025-09-26 09:46:02 -04:00
824d59fbf6 [CI] Install libuv for Win testing (#163907)
[CI] Install libuv for Win testing (#163797)

The current working theory for why f0078941cf caused a regression is that Windows CI could no longer be built with distributed, as it could not find libuv.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/163797
Approved by: https://github.com/wdvr

(cherry picked from commit cc660d38ac533b92f3ad4cb1105f7a16f74b9f09)

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
2025-09-26 00:03:22 -07:00
fc8bf12b38 Fix cpp build (#163887)
Fix cpp build (#162774)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/162774
Approved by: https://github.com/malfet, https://github.com/atalman

(cherry picked from commit b61bdc7cc4c841bf7574bc993f3fd445682f0997)

Co-authored-by: Svetlana Karslioglu <svekars@meta.com>
2025-09-25 14:50:59 -07:00
49dab18ecf [CD] Add statically linked windows libraries to exclude list (#163862)
[CD] Add statically linked windows libraries to exclude list (#163768)

Fixes: https://github.com/pytorch/pytorch/issues/159514

Seeing the following in the wheel build logs:
```
Linking CXX static library lib\kineto.lib
Linking CXX static library lib\dnnl.lib
....
```

These files are around 800MB uncompressed and 109MB compressed; excluding them provides a ~50% size reduction for Windows CPU builds.

Test Plan: Build Pytorch Windows binary. Build vision, audio and torchcodec with this binary. Smoke test.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/163768
Approved by: https://github.com/albanD, https://github.com/malfet

(cherry picked from commit 98c4e35f14601909c113b4fd2857b6f0fb525316)

Co-authored-by: atalman <atalman@fb.com>
2025-09-25 14:46:56 -07:00
0154ca1d3d [BE] Update Python min version to 3.10 (#162310) (#163885)
* [BE] Update Python min version to 3.10 (#162310)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/162310
Approved by: https://github.com/atalman, https://github.com/Skylion007, https://github.com/ZainRizvi

* comment out executorch

---------

Co-authored-by: Nikita Shulga <nikita.shulga@gmail.com>
2025-09-25 14:44:48 -07:00
132d9fac3b Revert "[BE] Update Python min version to 3.10 (#162310)" (#163882)
Revert "[BE] Update Python min version to 3.10 (#162310) (#163802)"

This reverts commit 7d024a6e299eee2830e9fbdae1913e432160bb23.
2025-09-25 10:54:12 -07:00
87c5d4a858 [cherrypick] [CI] Move Windows build/tests to Python-3.10 #162862 (#163800)
[CI] Move Windows build/tests to Python-3.10 (#162862)

What was supposed to be a very simple change ended up being quite involved, as the current Windows CI framework is quite inflexible, i.e. it takes a lot of arguments but later ignores them, namely:
 - `PYTHON_VERSION` used to be a no-op that was simply ignored by the scripts
 - With this change, the `setup-win` action will create an environment called `py_tmp` with the specific python version + intel-openmp (which is a hard runtime requirement, but for some reason is neither packaged into the wheel nor marked as such)
 - Copied test-type dependencies from be01a40157/aws/ami/windows/scripts/Installers/Install-Pip-Dependencies.ps1 (L16) into `win-test.sh`, but made some adjustments to be compatible with the 3.10 runtime (scipy version update) and to make rerun-tests compatible with the rest of the deps

I think in the long run, one needs to update 4432e2cacd/aws/ami/windows/scripts/Installers/Install-Miniconda3.ps1, which currently pins the Miniconda python to 3.9, but also figure out how CI can still create a new environment without having to download all the dependencies every time.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/162862
Approved by: https://github.com/wdvr, https://github.com/huydhn
ghstack dependencies: #163339, #163341

Co-authored-by: Nikita Shulga <nikita.shulga@gmail.com>
2025-09-25 09:06:52 -07:00
b0dc90881c [CD] Simplify NVIDIA driver installation step (#163349) (#163790)
Undo changes introduced in https://github.com/pytorch/pytorch/pull/160956 as the driver has been updated to 580 for both fleets

Fixes https://github.com/pytorch/pytorch/issues/163342
Pull Request resolved: https://github.com/pytorch/pytorch/pull/163349
Approved by: https://github.com/seemethere

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
2025-09-25 10:40:57 -04:00
c0577aad39 Use cuda nvrtc so file based on cuda version used by torch (#163642) (#163788)
Fixes https://github.com/pytorch/pytorch/issues/162367

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163642
Approved by: https://github.com/msaroufim
2025-09-25 10:40:09 -04:00
9952b87600 [CD] CUDA 13.0 fix preload logic to include nvidia/cu13/lib/ (#163766)
[CD] CUDA 13.0 fix preload logic to include nvidia/cu13/lib/ (#163661)

Preload logic no longer works with CUDA 13.0
See the installation path:
```
ls /home/ubuntu/.venv/lib/python3.10/site-packages/nvidia/cu13/lib/
libcheckpoint.so   libcudadevrt.a      libcufft.so.12   libcufile_rdma.so.1  libcusolver.so.12    libnvJitLink.so.13  libnvperf_target.so            libnvrtc.alt.so.13    libpcsamplingutil.so
libcublas.so.13    libcudart.so.13     libcufftw.so.12  libcupti.so.13       libcusolverMg.so.12  libnvblas.so.13     libnvrtc-builtins.alt.so.13.0  libnvrtc.so.13
libcublasLt.so.13  libcudart_static.a  libcufile.so.0   libcurand.so.10      libcusparse.so.12    libnvperf_host.so   libnvrtc-builtins.so.13.0      libnvtx3interop.so.1

ls /home/ubuntu/.venv/lib/python3.10/site-packages/nvidia/
cu13  cudnn  cusparselt  nccl  nvshmem
```

Test using script from : https://github.com/pytorch/pytorch/issues/162367
```
Kernel test passed!
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/163661
Approved by: https://github.com/nWEIdia, https://github.com/tinglvv, https://github.com/Camyll

(cherry picked from commit 141fc7276ebc722b6076cc3afe4fbc6307a1b775)

Co-authored-by: atalman <atalman@fb.com>
2025-09-25 10:38:16 -04:00
300bade202 [Cherry-Pick] [CD] CUDA 13 specific followup changes. Remove sm50-70 From CUDA 12.6 and CUDA 12.8 builds (#162455) (#163764)
* [CD] CUDA 13 specific followup changes (#162455)

Follow-up for the CUDA 13 bring-up, https://github.com/pytorch/pytorch/issues/159779
sm50-70 should not be added to the sbsa build arch list, as those archs had no arm support.
Remove platform_machine from PYTORCH_EXTRA_INSTALL_REQUIREMENTS.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/162455
Approved by: https://github.com/atalman

* update

---------

Co-authored-by: Ting Lu <tingl@nvidia.com>
2025-09-25 10:37:52 -04:00
96f0c0fa07 Fix some edge cases (#163106)
Fix some edge cases (#162295)

Summary
```
🔝 Top 5 Performance Differences (by absolute %):
shape: (5, 7)
┌────────────────┬────────────────┬─────────────────────────────┬───────────────────┬──────────────────────┬───────────────────────────┬───────────┐
│ attn_type      ┆ dtype          ┆ shape(B,Hq,M,Hkv,N,D)       ┆ TFlops BWD (base) ┆ TFlops BWD (no_peel) ┆ no_peel_speedup_over_base ┆ pct_delta │
│ ---            ┆ ---            ┆ ---                         ┆ ---               ┆ ---                  ┆ ---                       ┆ ---       │
│ str            ┆ str            ┆ str                         ┆ f64               ┆ f64                  ┆ f64                       ┆ f64       │
╞════════════════╪════════════════╪═════════════════════════════╪═══════════════════╪══════════════════════╪═══════════════════════════╪═══════════╡
│ sliding_window ┆ torch.bfloat16 ┆ (2, 16, 1024, 4, 1024, 64)  ┆ 56.937931         ┆ 58.960459            ┆ 1.035522                  ┆ 3.552163  │
│ noop           ┆ torch.bfloat16 ┆ (2, 16, 1024, 4, 1024, 128) ┆ 89.221306         ┆ 86.295642            ┆ 0.967209                  ┆ -3.27911  │
│ causal         ┆ torch.bfloat16 ┆ (2, 16, 4096, 4, 4096, 128) ┆ 111.552594        ┆ 114.380841           ┆ 1.025353                  ┆ 2.535349  │
│ alibi          ┆ torch.bfloat16 ┆ (2, 16, 1024, 16, 1024, 64) ┆ 74.830149         ┆ 76.685445            ┆ 1.024793                  ┆ 2.479344  │
│ alibi          ┆ torch.bfloat16 ┆ (2, 16, 1024, 4, 1024, 64)  ┆ 55.279932         ┆ 56.369312            ┆ 1.019707                  ┆ 1.97066   │
└────────────────┴────────────────┴─────────────────────────────┴───────────────────┴──────────────────────┴───────────────────────────┴───────────┘

🔺 Top 5 Cases Where no_peel (change) is Faster than base (baseline):
shape: (5, 7)
┌────────────────┬────────────────┬─────────────────────────────┬───────────────────┬──────────────────────┬───────────────────────────┬───────────┐
│ attn_type      ┆ dtype          ┆ shape(B,Hq,M,Hkv,N,D)       ┆ TFlops BWD (base) ┆ TFlops BWD (no_peel) ┆ no_peel_speedup_over_base ┆ pct_delta │
│ ---            ┆ ---            ┆ ---                         ┆ ---               ┆ ---                  ┆ ---                       ┆ ---       │
│ str            ┆ str            ┆ str                         ┆ f64               ┆ f64                  ┆ f64                       ┆ f64       │
╞════════════════╪════════════════╪═════════════════════════════╪═══════════════════╪══════════════════════╪═══════════════════════════╪═══════════╡
│ sliding_window ┆ torch.bfloat16 ┆ (2, 16, 1024, 4, 1024, 64)  ┆ 56.937931         ┆ 58.960459            ┆ 1.035522                  ┆ 3.552163  │
│ causal         ┆ torch.bfloat16 ┆ (2, 16, 4096, 4, 4096, 128) ┆ 111.552594        ┆ 114.380841           ┆ 1.025353                  ┆ 2.535349  │
│ alibi          ┆ torch.bfloat16 ┆ (2, 16, 1024, 16, 1024, 64) ┆ 74.830149         ┆ 76.685445            ┆ 1.024793                  ┆ 2.479344  │
│ alibi          ┆ torch.bfloat16 ┆ (2, 16, 1024, 4, 1024, 64)  ┆ 55.279932         ┆ 56.369312            ┆ 1.019707                  ┆ 1.97066   │
│ causal         ┆ torch.bfloat16 ┆ (4, 16, 4096, 4, 4096, 64)  ┆ 111.08814         ┆ 112.447047           ┆ 1.012233                  ┆ 1.22327   │
└────────────────┴────────────────┴─────────────────────────────┴───────────────────┴──────────────────────┴───────────────────────────┴───────────┘

🔻 Top 5 Cases Where no_peel (change) is Slower than base (baseline):
shape: (5, 7)
┌────────────────┬────────────────┬─────────────────────────────┬───────────────────┬──────────────────────┬───────────────────────────┬───────────┐
│ attn_type      ┆ dtype          ┆ shape(B,Hq,M,Hkv,N,D)       ┆ TFlops BWD (base) ┆ TFlops BWD (no_peel) ┆ no_peel_speedup_over_base ┆ pct_delta │
│ ---            ┆ ---            ┆ ---                         ┆ ---               ┆ ---                  ┆ ---                       ┆ ---       │
│ str            ┆ str            ┆ str                         ┆ f64               ┆ f64                  ┆ f64                       ┆ f64       │
╞════════════════╪════════════════╪═════════════════════════════╪═══════════════════╪══════════════════════╪═══════════════════════════╪═══════════╡
│ noop           ┆ torch.bfloat16 ┆ (2, 16, 1024, 4, 1024, 128) ┆ 89.221306         ┆ 86.295642            ┆ 0.967209                  ┆ -3.27911  │
│ causal         ┆ torch.bfloat16 ┆ (4, 16, 1024, 4, 1024, 64)  ┆ 78.23082          ┆ 76.693169            ┆ 0.980345                  ┆ -1.965531 │
│ sliding_window ┆ torch.bfloat16 ┆ (2, 16, 2048, 4, 2048, 128) ┆ 96.95663          ┆ 95.573333            ┆ 0.985733                  ┆ -1.426717 │
│ alibi          ┆ torch.bfloat16 ┆ (4, 16, 2048, 4, 2048, 64)  ┆ 93.373473         ┆ 92.294147            ┆ 0.988441                  ┆ -1.155924 │
│ alibi          ┆ torch.bfloat16 ┆ (2, 16, 2048, 4, 2048, 128) ┆ 96.95147          ┆ 96.105389            ┆ 0.991273                  ┆ -0.872685 │
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/162295
Approved by: https://github.com/mlazos, https://github.com/v0i0

(cherry picked from commit 864ffe12d737403230e8257b9bce0a830bd590c1)

Co-authored-by: drisspg <drisspguessous@gmail.com>
2025-09-25 10:29:39 -04:00
7d024a6e29 [BE] Update Python min version to 3.10 (#162310) (#163802)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/162310
Approved by: https://github.com/atalman, https://github.com/Skylion007, https://github.com/ZainRizvi
ghstack dependencies: #162862

Co-authored-by: Nikita Shulga <nikita.shulga@gmail.com>
2025-09-24 15:48:19 -07:00
be29c5b207 Add analytics ID to cpp docs (#163695)
Add analytics ID to cpp docs (#163370)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163370
Approved by: https://github.com/albanD

(cherry picked from commit e6a9db58d71e474deac28276de1f611638c32eeb)

Co-authored-by: Svetlana Karslioglu <svekars@meta.com>
2025-09-24 15:45:17 -07:00
5322dab793 Update pytorch.org links in docs/conf.py (#163703)
Update pytorch.org links in docs/conf.py (#163682)

Update links in conf.py to docs.pytorch.org

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163682
Approved by: https://github.com/sekyondaMeta, https://github.com/albanD

(cherry picked from commit 8c8416b021e59a5ec58aceb38eeffc63885a28bc)

Co-authored-by: Svetlana Karslioglu <svekars@meta.com>
2025-09-24 15:44:43 -07:00
1dadb6196b [BE] Introduce CONDA_ROOT_DIR (#163805)
[BE] Introduce `CONDA_ROOT_DIR` (#163341)

Which equals `%CONDA_PARENT_DIR%/Miniconda3`; replace this pattern with `%CONDA_ROOT_DIR%` throughout the codebase
Pull Request resolved: https://github.com/pytorch/pytorch/pull/163341
Approved by: https://github.com/clee2000
ghstack dependencies: #163339

(cherry picked from commit a273475b01e912f402378a522bb9c4ed37e8413a)

Co-authored-by: Nikita Shulga <nshulga@meta.com>
2025-09-24 15:42:16 -07:00
6c058c1262 Move ROCM trunk wheel builds to 3.10 (#163804)
Move ROCM trunk wheel builds to 3.10 (#163339)

This code is delicious spaghetti: sometimes the Python version is defined in a Jinja template (see https://github.com/pytorch/pytorch/pull/162297 ), sometimes in a shell script (see https://github.com/pytorch/pytorch/pull/162877 ), but this time around it's in a Python file (and there is another one, `generate_binary_build_matrix.py`, that defines `FULL_PYTHON_VERSIONS`)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163339
Approved by: https://github.com/clee2000

(cherry picked from commit 52dd7a898c117305b4407c7f26bbcc7b39f20aaa)

Co-authored-by: Nikita Shulga <nshulga@meta.com>
2025-09-24 15:41:55 -07:00
715dca6725 [export] Remove .contiguous() when saving weights to raw bytes (#163662)
[export] Remove .contiguous() when saving weights to raw bytes (#163587)

Summary: `.contiguous()` will discard the original storage size of the tensor, and could lead to issues during loading.
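A minimal sketch of the storage-size behavior the summary refers to (illustration only, not the export code itself):

```python
import torch

# A strided view keeps the full original storage, while .contiguous()
# materializes a new, smaller storage for just the viewed elements.
base = torch.arange(16, dtype=torch.float32)  # storage: 16 floats = 64 bytes
view = base[::2]                              # 8 elements, still backed by the 64-byte buffer
print(view.untyped_storage().nbytes())                # 64
print(view.contiguous().untyped_storage().nbytes())   # 32
```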

Test Plan:
buck2 run mode/dev-nosan caffe2/test:test_export -- -r test_1D_tensor_slicing
buck2 run mode/dev-nosan caffe2/test:test_export -- -r test_2D_tensor_slicing

Differential Revision: D83016250

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163587
Approved by: https://github.com/angelayi

(cherry picked from commit 720a7b2887ca4efc8d63b32373182bc97918c76e)

Co-authored-by: Yiming Zhou <yimingzhou@meta.com>
2025-09-23 10:15:06 -07:00
47cb45e4f6 Update pytorch_sphinx_theme2 to latest hash (#163655)
Update pytorch_sphinx_theme2 to latest hash (#163269)

The updated theme:
- Fixes articleBody in the json+ld that caused previous Google Search issues
- Other minor fixes
- 404.html fixes
Pull Request resolved: https://github.com/pytorch/pytorch/pull/163269
Approved by: https://github.com/albanD

(cherry picked from commit 68e75be86ab618bb6b1dc32b603a780ff6046262)

Co-authored-by: Svetlana Karslioglu <svekars@meta.com>
2025-09-23 10:13:51 -07:00
4966d058f2 CUDA 13.0 Warning update for supported architectures (#163633)
CUDA 13.0 Warning update for supported architectures (#163585)

Please see build script: 8da008678f/.ci/manywheel/build_cuda.sh (L69-L71)

This should display the correct warning:
``
Please install PyTorch with a following CUDA
configurations: 12.6 12.8 13.0 following instructions at
https://pytorch.org/get-started/locally/
``
Pull Request resolved: https://github.com/pytorch/pytorch/pull/163585
Approved by: https://github.com/malfet

(cherry picked from commit 3c64b2abab5a23809140da5bd6272307b776e459)

Co-authored-by: atalman <atalman@fb.com>
2025-09-23 10:13:06 -07:00
579794ed7b [SymmMem] Fix put_signal + wait_until hang (#163458)
[SymmMem] Fix put_signal + wait_until hang (#163194)

The test used a wrong ptr to refer to remote address:
```
            dst_ptr = out_hdl.buffer_ptrs[peer]
            src_ptr = inp_hdl.buffer_ptrs[rank]
            sig_ptr = out_hdl.signal_pad_ptrs[peer]
```
All three indices should be `rank` instead of `peer`, because NVSHMEM APIs accept a local address as input and perform the translation to the peer internally. Without the correct signal address, the peer would keep waiting, hence the hang.
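For reference, the corrected indexing implied by the description above (sketch reusing the names from the snippet):

```python
# All three use the local rank; NVSHMEM translates local addresses to the peer internally.
dst_ptr = out_hdl.buffer_ptrs[rank]
src_ptr = inp_hdl.buffer_ptrs[rank]
sig_ptr = out_hdl.signal_pad_ptrs[rank]
```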

Also adjusted the signature of `nvshmem.putmem_signal_block` to accept a tensor instead of a pointer.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163194
Approved by: https://github.com/ngimel
ghstack dependencies: #163025, #163152

(cherry picked from commit 80f8be9840c20c3efe1274266b52ab098f4d1030)

Co-authored-by: Ke Wen <kw2501@meta.com>
2025-09-23 10:10:02 -07:00
7cf37ae3cb [2.9 cherry pick][triton] update 3.5 pin to bbb06c0334a6772b92d24bde54956e675c8c6604 (#163382) (#163583)
Includes:
* https://github.com/triton-lang/triton/pull/8211 to work around a PTXAS bug that was causing 03-matrix-multiplication tutorial matmuls to underperform due to excessive WGMMA waits
* https://github.com/triton-lang/triton/pull/8157 to fix a convert_layout bug

Verified that this passes Triton CI in https://github.com/pytorch/pytorch/pull/159158 and improves gemm perf (see https://github.com/pytorch/pytorch/issues/159704)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/163382
Approved by: https://github.com/Camyll, https://github.com/atalman
2025-09-22 18:20:20 -07:00
f83cf0714e [graph partition] Add way to register custom rule (#163310) (#163395)
This PR adds an experimental way to register a custom rule for whether inductor should partition the graph around an operator.

Test Plan:
- new test

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163310
Approved by: https://github.com/ProExpertProg, https://github.com/BoyuanFeng, https://github.com/eellison
ghstack dependencies: #162117, #162307, #162651
2025-09-22 18:18:07 -07:00
ddd5074afc [CI] Update NVIDIA driver to 580.82.07 (#163522)
[CI] Update NVIDIA driver to `580.82.07` (#163111)

To make CI machines capable of running CUDA-13 tests. Unfortunately, this upgrade regresses the NUMBA integration, so we live-patch it with 6e08c9d08e

This fix was suggested in https://github.com/pytorch/pytorch/issues/162878#issuecomment-3288635745

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163111
Approved by: https://github.com/huydhn

(cherry picked from commit 8dbac62edb48815dfca84dfdcca40d6a24d0652b)

Co-authored-by: Nikita Shulga <nikita.shulga@gmail.com>
2025-09-22 11:45:48 -04:00
35c55da805 [Graph Partition] improve custom op output alias (#163380)
[Graph Partition] improve custom op output alias (#163227)

For a custom op with multiple outputs, we will see the following generated code:
```
buf1 = op1(arg0)
buf3 = buf1[0]
buf4 = buf1[1]
del buf1 # <--- if buf1 is not accessed in the future
```

If `buf1` is not accessed in the future, it's good to deallocate it early, so we don't delay the `del` until both `buf3` and `buf4` are no longer used. Note that `buf3` and `buf4` hold references to the underlying data, so `del buf1` does not prevent their use.

However, when there are mutating args, we don't see `del buf1` immediately.

```python
@torch.library.custom_op(
    "mylib::op1",
    mutates_args=["x"],
    schema="(Tensor(a!)?  x) -> (Tensor, Tensor)",
    device_types="cuda",
)
def op1(x) -> tuple[torch.Tensor, torch.Tensor]:
    x = x + 1
    return (x + 1, x + 2)
```

<img width="661" height="821" alt="image" src="https://github.com/user-attachments/assets/3d1d1f5a-9749-4652-bb02-da593c78702d" />

Why? Because `buf3` is a MultiOutput with `buf1` as its input, and it believes `buf1` (an output of the FallbackKernel for op1) has inputs that alias its output.
72fedf0575/torch/_inductor/ir.py (L7976-L7982)

According to `[NOTE: FallbackKernel supported operators]`, a mutating op that is auto-functionalizable should NOT have outputs that alias any of its inputs. This PR improves `get_inputs_that_alias_output` of FallbackKernel accordingly.
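With that fix, the expected codegen pattern (a sketch mirroring the first snippet above) frees the temporary right after its outputs are extracted, even for the mutating op:

```python
buf1 = op1(arg0)
buf3 = buf1[0]
buf4 = buf1[1]
del buf1  # freed immediately; buf3 and buf4 keep the underlying storage alive
```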

Use case: [moe custom op in vllm](https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/layers/fused_moe/layer.py#L2057-L2064)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163227
Approved by: https://github.com/zou3519

(cherry picked from commit 4967ad8baa724b8b1acc123698bb1265723feb87)

Co-authored-by: Boyuan Feng <boyuan@meta.com>
2025-09-19 16:36:03 -07:00
a576d48637 Skip test_ind_worker_queue on Windows and macOS (flaky) (#163363)
Skip test_ind_worker_queue on Windows and macOS (flaky) (#162555)

Fixes https://github.com/pytorch/pytorch/issues/68643

It was closed by the bot yesterday and the issue was still there: https://github.com/pytorch/pytorch/actions/runs/17595694816/job/49989589647. It's better to just skip it directly in the code, as this test has been disabled on Windows and macOS since 2021 O_o
Pull Request resolved: https://github.com/pytorch/pytorch/pull/162555
Approved by: https://github.com/clee2000

(cherry picked from commit 98e22c8a693644c6d235d7a858dc411b1aefafa7)

Co-authored-by: Huy Do <huydhn@gmail.com>
2025-09-19 13:07:00 -07:00
25d8c0be68 Add decomp rule to assert_tensor_metadata for BatchedTensors (#163361)
Add decomp rule to assert_tensor_metadata for BatchedTensors  (#163008)

Whenever there is a device move, export introduces the assert_tensor_metadata aten operator to guard for device specialization. This aten op didn't work with vmap because we hadn't registered an explicit decomp rule that simply skips the BatchedTensor wrapper and calls the op on the underlying tensor.
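Conceptually, the decomp just peels off the batched wrapper and re-runs the assert on the underlying tensor. A rough sketch of that idea (the private functorch helpers below are used only for illustration; this is not the actual registration added in the PR):

```python
import torch
from torch._C._functorch import get_unwrapped, is_batchedtensor

def assert_tensor_metadata_skip_batched(t, *args, **kwargs):
    # Peel off BatchedTensor wrappers introduced by vmap, then run the real check.
    while is_batchedtensor(t):
        t = get_unwrapped(t)
    return torch.ops.aten._assert_tensor_metadata(t, *args, **kwargs)
```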

Differential Revision: [D82483979](https://our.internmc.facebook.com/intern/diff/D82483979)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/163008
Approved by: https://github.com/huydhn

(cherry picked from commit e28983be76aa4651e3cb69dc3a4234d75038d938)

Co-authored-by: Tugsbayasgalan Manlaibaatar <tmanlaibaatar@fb.com>
2025-09-19 13:00:57 -07:00
b1aae80953 [Cherry Pick][Graph Partition] allow sharing default device context (#163097)
cherry pick PR 162873
2025-09-19 11:10:29 -07:00
eqy
76bebf38de [Release 2.9] [cuDNN][SDPA][submodule] Roll-back cuDNN frontend upgrade, update Met… (#163265)
[cuDNN][SDPA][submodule] Roll-back cuDNN frontend upgrade, update Meta registration (#163104)

For https://github.com/pytorch/torchtitan/issues/1713

Also note that we will need to roll back the cuDNN frontend upgrade in 2.9, as it currently introduces a segmentation fault by assuming tensors have their strides and sizes populated at graph creation time: 1a7b4b78db/include/cudnn_frontend/node/sdpa_support_surface.h (L447)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163104
Approved by: https://github.com/drisspg
2025-09-19 10:53:04 -07:00
bc158ebdc7 [SymmMem] Fix NVSHMEM plugin + Triton 3.5 (#163262)
[SymmMem] Fix NVSHMEM plugin + Triton 3.5 (#163152)

1. The dispatch signatures defined in the `core.extern_elementwise` call must match the C signatures of the NVSHMEM functions, in particular the dtypes. Otherwise, there would be weird errors, such as IMAs or hangs. When they match, most of the time the NVSHMEM device function is inlined into the generated PTX. When they don't match, it is represented as a function call in the PTX (not sure whether it is the function call that goes wrong).

2. When calling the `core.extern` wrappers from `triton.jit` kernels, the inputs must be cast to match the signatures defined in 1, e.g. via `nbytes.to(tl.int64)`. Otherwise, Triton will report a key error when searching for such a kernel.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163152
Approved by: https://github.com/ngimel
ghstack dependencies: #163025

(cherry picked from commit 57a54a04b6eb78e0aa7d13b48e25fb8c0c49fd60)

Co-authored-by: Ke Wen <kw2501@meta.com>
2025-09-19 10:51:02 -07:00
ffa6f63fe2 Revert "Make distributed modules importable even when backend not bui… (#163024)
Revert "Make distributed modules importable even when backend not built (#159889)" (#162568)

This reverts commit a0d026688cd69583d5a4e0c6f3e5fda141a7f4a9.

Revert "Always build USE_DISTRIBUTED. (#160449)"

This reverts commit d80297a6846f1f2c36fd4f19e22919f2abe8fcea.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/162568
Approved by: https://github.com/huydhn

Co-authored-by: Edward Yang <ezyang@meta.com>
2025-09-19 10:34:55 -07:00
baab5c6c8b [ONNX] Update export docstring & Set fallback=False by default (#162637)
* [ONNX] Update export docstring (#162622)

Update export docstring to reflect the latest configuration.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/162622
Approved by: https://github.com/titaiwangms

(cherry picked from commit 7e2e83cdbe532b230dee40cfe0454116c9b64710)

* Change fallback option to False in ONNX export

* Change fallback parameter default to False
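For context, a minimal call reflecting the new default (a sketch; model and shapes are illustrative, and `fallback=False` is shown explicitly even though it is now the default):

```python
import torch

class M(torch.nn.Module):
    def forward(self, x):
        return x.relu()

# With fallback=False the dynamo-based exporter no longer silently falls back
# to the TorchScript exporter when export fails.
onnx_program = torch.onnx.export(M(), (torch.randn(2, 3),), dynamo=True, fallback=False)
```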

---------

Co-authored-by: Justin Chu <justinchuby@users.noreply.github.com>
2025-09-16 17:23:47 -07:00
9718af107e Support vmap + custom autograd function/improve DTensor constructor inefficiency (#162738)
Support vmap + custom autograd function/improve DTensor constructor inefficiency (#162240)

This makes gemma3 exportable on transformers=4.55.4

In HF, there is a torch function mode called TransformGetItemToIndex which internally calls a custom autograd function. When this custom autograd function is called under vmap, it triggers CustomFunctionHigherOrderOP, which errored because there was no pre-dispatch proxy mode implementation.
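As a generic eager-mode illustration of the construct involved (not the export decorator added in this PR), a custom autograd function can be made vmap-compatible via the standard `generate_vmap_rule` mechanism:

```python
import torch

class Scale(torch.autograd.Function):
    generate_vmap_rule = True  # let functorch derive the vmap rule automatically

    @staticmethod
    def forward(x):
        return 2 * x

    @staticmethod
    def setup_context(ctx, inputs, output):
        pass  # nothing to save for this toy backward

    @staticmethod
    def backward(ctx, grad):
        return 2 * grad

out = torch.vmap(Scale.apply)(torch.randn(4, 3))  # batched call through the custom function
```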

Since there have been a number of requests lately to add various operators to the pre-dispatch IR, I introduce a decorator in export that works similarly to `allow_in_graph`. Basically:
1) We intercept custom_autograd_function.apply at pre-dispatch mode when this decorator is applied
2) We apply the `flat_apply` HOP to hide the pytree spec for this autograd function. Note that this adds the restriction that the custom autograd function needs to take in fx-able types.
3) The subclass constructor decorator is implemented similarly, so we just refactor it to share the implementation of this new decorator. Eventually we should delete the subclass constructor decorator.
4) Move some code in the subclass constructor decorator to exit early in non-export environments, which should shave off some inefficiency (around 1% according to @swolchok's benchmark)

Fixes: https://github.com/pytorch/pytorch/issues/161563#issuecomment-3246309758

Differential Revision: [D82141316](https://our.internmc.facebook.com/intern/diff/D82141316)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/162240
Approved by: https://github.com/ydwu4

(cherry picked from commit 463fbc8ca0537e5635236190d2ca38ce6fcef831)

Co-authored-by: Tugsbayasgalan Manlaibaatar <tmanlaibaatar@fb.com>
2025-09-16 17:22:16 -07:00
7f8ba48c2a Fix the regression issue caused by non-aarch64 platforms not hitting the MKLDNN path. (#162778)
Fix the regression issue caused by non-aarch64 platforms not hitting the MKLDNN path. (#162168)

This issue was introduced by the commit referenced in issue #161065. An extra check was added to provide a proper path for other platforms.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/162168
Approved by: https://github.com/mingfeima, https://github.com/malfet


(cherry picked from commit 563921619b3e820b170475b9278ff94ee6e1a32c)

Co-authored-by: Yuxingwang-intel <yuxing.wang@intel.com>
Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
2025-09-16 17:21:10 -07:00
aebf427c53 [Release 2.9] Update torch-xpu-ops commit pin (#162935)
Update commit pin to f8408a
2025-09-16 17:19:31 -07:00
44baf2ff8d fix deterministic scatter_add path for multi-d tensors (#162977)
fix deterministic scatter_add path for multi-d tensors (#162866)

Previously, `select` didn't work correctly for tensors with more than 2 dimensions.
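A minimal repro sketch for the multi-dimensional case (assuming a CUDA device, where the deterministic scatter_add path applies; shapes are illustrative):

```python
import torch

torch.use_deterministic_algorithms(True)

# 3D tensors exercise the path that previously mis-used `select`.
x = torch.zeros(2, 3, 4, device="cuda")
idx = torch.randint(0, 3, (2, 3, 4), device="cuda")
src = torch.ones(2, 3, 4, device="cuda")
x.scatter_add_(1, idx, src)
```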
Pull Request resolved: https://github.com/pytorch/pytorch/pull/162866
Approved by: https://github.com/valentinandrei

(cherry picked from commit bf6b40da3e3be7718b8ddc94eed2da8cabaa5e86)

Co-authored-by: Natalia Gimelshein <ngimel@meta.com>
2025-09-16 17:17:36 -07:00
1076941ff7 [ONNX] Fix rotary_embedding_23 implementation (#163041)
[ONNX] Fix rotary_embedding_23 implementation (#162865)

The implementation of rotary_embedding_23 was incorrect when the input is 3D.

## Tested

Locally with

```py
import onnx_ir as ir
import onnx
import torch
import os
import numpy as np

base_path = "/home/justinchu/dev/onnx/onnx/backend/test/data/node"
test_names = [
    "test_rotary_embedding",
    "test_rotary_embedding_3d_input",
    "test_rotary_embedding_interleaved",
    "test_rotary_embedding_no_position_ids",
    "test_rotary_embedding_no_position_ids_interleaved",
    "test_rotary_embedding_no_position_ids_rotary_dim",
    "test_rotary_embedding_with_interleaved_rotary_dim",
    "test_rotary_embedding_with_rotary_dim",
]
model_paths = [os.path.join(base_path, name) for name in test_names]

for path in model_paths:
    print(f"Checking {path} for issues...")

    model = onnx.load(os.path.join(path, "model.onnx"))
    input0 = ir.from_proto(
        onnx.load_tensor(os.path.join(path, "test_data_set_0", "input_0.pb"))
    ).numpy()
    input1 = ir.from_proto(
        onnx.load_tensor(os.path.join(path, "test_data_set_0", "input_1.pb"))
    ).numpy()
    input2 = ir.from_proto(
        onnx.load_tensor(os.path.join(path, "test_data_set_0", "input_2.pb"))
    ).numpy()
    if os.path.exists(os.path.join(path, "test_data_set_0", "input_3.pb")):
        input3 = ir.from_proto(
            onnx.load_tensor(os.path.join(path, "test_data_set_0", "input_3.pb"))
        ).numpy()
    else:
        input3 = None
    output0 = ir.from_proto(
        onnx.load_tensor(os.path.join(path, "test_data_set_0", "output_0.pb"))
    ).numpy()

    m = ir.from_proto(model)

    node = m.graph[-1]
    print(node)
    assert node.op_type == "RotaryEmbedding"

    interleaved = node.attributes.get_int("interleaved", 0)
    num_heads = node.attributes.get_int("num_heads", 0)
    rotary_embedding_dim = node.attributes.get_int("rotary_embedding_dim", 0)

    torch_out = torch.onnx.ops.rotary_embedding(
        torch.tensor(input0),
        torch.tensor(input1),
        torch.tensor(input2),
        position_ids=torch.tensor(input3) if input3 is not None else None,
        interleaved=bool(interleaved),
        num_heads=num_heads,
        rotary_embedding_dim=rotary_embedding_dim,
    )
    torch_out = torch_out.detach().cpu().numpy()
    np.testing.assert_allclose(torch_out, output0)
```

Fix https://github.com/pytorch/pytorch/issues/162848

Pull Request resolved: https://github.com/pytorch/pytorch/pull/162865
Approved by: https://github.com/kunal-vaishnavi, https://github.com/titaiwangms

(cherry picked from commit fdf68fa5d70abebee1c5090a51ea30c7aa40b9b0)

Co-authored-by: Justin Chu <justinchuby@users.noreply.github.com>
2025-09-16 17:16:23 -07:00
0ac9fa4413 [ez][CI] Fix docs push in nightly workflow (#163085)
[ez][CI] Fix docs push in nightly workflow (#162657)

HUD metrics page says docs push hasn't happened in 21 days
<img width="293" height="142" alt="image" src="https://github.com/user-attachments/assets/f930aab8-0503-4bf2-b962-8c375dec6b78" />

I guess main branch docs just haven't been updated?  Did anyone notice?  Do we care?

Either way I think this should fix it

Likely started after https://github.com/pytorch/pytorch/pull/161182
Pull Request resolved: https://github.com/pytorch/pytorch/pull/162657
Approved by: https://github.com/huydhn

(cherry picked from commit 2f533959430c2a41fe16ef79fe4d680a5c4e0585)

Co-authored-by: Catherine Lee <csl@fb.com>
2025-09-16 12:04:17 -07:00
152383b745 fix typo: summit -> submit (#162597)
fix typo: summit -> submit (#162587)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/162587
Approved by: https://github.com/justinchuby

(cherry picked from commit fefc406a3d0d90db0f808419fb88045f90b213cd)

Co-authored-by: Masaki Kozuki <mkozuki@nvidia.com>
2025-09-12 11:41:11 -04:00
c31a8186c1 [CD] Aarch64 Fix packaging `libarm_compute.so` and other libraries to the aarch64 CUDA wheels (#162596)
[CD] Aarch64 Fix packaging ``libarm_compute.so`` and other libraries to the aarch64 CUDA wheels (#162566)

Fixes aarch64 Linux packaging, which fails with the following error:
https://github.com/pytorch/vision/actions/runs/17612462583/job/50037380487#step:15:62
```
Traceback (most recent call last):
  File "/__w/vision/vision/pytorch/vision/setup.py", line 13, in <module>
    import torch
  File "/__w/_temp/conda_environment_17612462583/lib/python3.11/site-packages/torch/__init__.py", line 415, in <module>
    from torch._C import *  # noqa: F403
    ^^^^^^^^^^^^^^^^^^^^^^
ImportError: libarm_compute.so: cannot open shared object file: No such file or directory
```
This is due to missing dependencies.

Current behavior (the error):
1. torch-2.10.0.dev20250910+cu130-cp310-cp310-linux_aarch64.whl is extracted
2. The file is repackaged as torch-2.10.0.dev20250910+cu130-cp310-cp310-manylinux_2_28_aarch64.whl
3. torch-2.10.0.dev20250910+cu130-cp310-cp310-linux_aarch64.whl is then renamed to torch-2.10.0.dev20250910+cu130-cp310-cp310-manylinux_2_28_aarch64.whl (overwriting the repackaged file)

Hence the repackaging does not take effect.

This PR does the following:
1. torch-2.10.0.dev20250910+cu130-cp310-cp310-linux_aarch64.whl is extracted
2. torch-2.10.0.dev20250910+cu130-cp310-cp310-linux_aarch64.whl is deleted
3. The file is repackaged as torch-2.10.0.dev20250910+cu130-cp310-cp310-manylinux_2_28_aarch64.whl

It looks like renaming the wheel is no longer necessary after migrating from zipping the wheel to `wheel pack`, hence this removes the renaming and deletes the old file.
```
2025-09-10T10:10:05.9652454Z Using nvidia libs from pypi - skipping CUDA library bundling
2025-09-10T10:10:05.9656595Z Copying to /pytorch/dist/tmp/torch/lib/libgomp.so.1
2025-09-10T10:10:05.9873843Z Copying to /pytorch/dist/tmp/torch/lib/libgfortran.so.5
2025-09-10T10:10:06.0410041Z Copying to /pytorch/dist/tmp/torch/lib/libarm_compute.so
2025-09-10T10:10:06.2869242Z Copying to /pytorch/dist/tmp/torch/lib/libarm_compute_graph.so
2025-09-10T10:10:06.4385740Z Copying to /pytorch/dist/tmp/torch/lib/libnvpl_lapack_lp64_gomp.so.0
2025-09-10T10:10:06.5461372Z Copying to /pytorch/dist/tmp/torch/lib/libnvpl_blas_lp64_gomp.so.0
2025-09-10T10:10:06.5728970Z Copying to /pytorch/dist/tmp/torch/lib/libnvpl_lapack_core.so.0
2025-09-10T10:10:06.6231872Z Copying to /pytorch/dist/tmp/torch/lib/libnvpl_blas_core.so.0
2025-09-10T10:10:14.1503110Z Updated tag from Tag: cp310-cp310-linux_aarch64
2025-09-10T10:10:14.1503482Z  to Tag: cp310-cp310-manylinux_2_28_aarch64
2025-09-10T10:10:14.1503682Z
2025-09-10T10:10:41.6498892Z Repacking wheel as /pytorch/dist/torch-2.10.0.dev20250910+cu130-cp310-cp310-manylinux_2_28_aarch64.whl...OK
2025-09-10T10:10:41.9394460Z Renaming torch-2.10.0.dev20250910+cu130-cp310-cp310-linux_aarch64.whl wheel to torch-2.10.0.dev20250910+cu130-cp310-cp310-manylinux_2_28_aarch64.whl
```

Test plan (executed on a local file):
```
  inflating: ubuntu/dist/tmp/torch-2.9.0.dev20250909+cu130.dist-info/WHEEL
  inflating: ubuntu/dist/tmp/torch-2.9.0.dev20250909+cu130.dist-info/entry_points.txt
  inflating: ubuntu/dist/tmp/torch-2.9.0.dev20250909+cu130.dist-info/top_level.txt
  inflating: ubuntu/dist/tmp/torch-2.9.0.dev20250909+cu130.dist-info/RECORD
Bundling CUDA libraries with wheel
Updated tag from Tag: cp310-cp310-manylinux_2_28_aarch64
 to Tag: cp310-cp310-manylinux_2_28_aarch64

Repacking wheel as ubuntu/dist/torch-2.9.0.dev20250909+cu130-cp310-cp310-manylinux_2_28_aarch64.whl...OK
Copying torch-2.9.0.dev20250909+cu130-cp310-cp310-manylinux_2_28_aarch64.whl to artifacts
Build Complete. Created torch-2.9.0.dev20250909+cu130-cp310-cp310-manylinux_2_28_aarch64.whl..
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/162566
Approved by: https://github.com/jeanschmidt, https://github.com/NicolasHug

(cherry picked from commit 3d32bb114bf0d5bd0193dc40f20253635dddf080)

Co-authored-by: atalman <atalman@fb.com>
2025-09-10 12:22:02 -04:00
ce928e17c1 CUDA 13.0 Windows Nvidia Driver Update to 580.88 (#162501)
CUDA 13.0 Windows Nvidia Driver Update to 580.88 (#162425)

Related to https://github.com/pytorch/pytorch/issues/162333
https://github.com/pytorch/pytorch/issues/159779

Pull Request resolved: https://github.com/pytorch/pytorch/pull/162425
Approved by: https://github.com/tinglvv, https://github.com/malfet

(cherry picked from commit e38e953432764e00f16999c8b7df6346ad357a16)

Co-authored-by: atalman <atalman@fb.com>
2025-09-09 14:27:57 -04:00
cd2c98a5b5 [Release 2.9] Release only changes (#162493) 2025-09-09 11:15:20 -07:00
309 changed files with 4648 additions and 2955 deletions

View File

@ -5,9 +5,9 @@ GPU_ARCH_VERSION=${GPU_ARCH_VERSION:-}
# Set CUDA architecture lists to match x86 build_cuda.sh # Set CUDA architecture lists to match x86 build_cuda.sh
if [[ "$GPU_ARCH_VERSION" == *"12.6"* ]]; then if [[ "$GPU_ARCH_VERSION" == *"12.6"* ]]; then
export TORCH_CUDA_ARCH_LIST="5.0;6.0;7.0;8.0;9.0" export TORCH_CUDA_ARCH_LIST="8.0;9.0"
elif [[ "$GPU_ARCH_VERSION" == *"12.8"* ]]; then elif [[ "$GPU_ARCH_VERSION" == *"12.8"* ]]; then
export TORCH_CUDA_ARCH_LIST="7.0;8.0;9.0;10.0;12.0" export TORCH_CUDA_ARCH_LIST="8.0;9.0;10.0;12.0"
elif [[ "$GPU_ARCH_VERSION" == *"13.0"* ]]; then elif [[ "$GPU_ARCH_VERSION" == *"13.0"* ]]; then
export TORCH_CUDA_ARCH_LIST="8.0;9.0;10.0;11.0;12.0+PTX" export TORCH_CUDA_ARCH_LIST="8.0;9.0;10.0;11.0;12.0+PTX"
fi fi
@ -15,6 +15,8 @@ fi
# Compress the fatbin with -compress-mode=size for CUDA 13 # Compress the fatbin with -compress-mode=size for CUDA 13
if [[ "$DESIRED_CUDA" == *"13"* ]]; then if [[ "$DESIRED_CUDA" == *"13"* ]]; then
export TORCH_NVCC_FLAGS="-compress-mode=size" export TORCH_NVCC_FLAGS="-compress-mode=size"
# Bundle ptxas into the cu13 wheel, see https://github.com/pytorch/pytorch/issues/163801
export BUILD_BUNDLE_PTXAS=1
fi fi
SCRIPTPATH="$( cd -- "$(dirname "$0")" >/dev/null 2>&1 ; pwd -P )" SCRIPTPATH="$( cd -- "$(dirname "$0")" >/dev/null 2>&1 ; pwd -P )"
@ -42,9 +44,6 @@ else
echo "Bundling CUDA libraries with wheel for aarch64." echo "Bundling CUDA libraries with wheel for aarch64."
else else
echo "Using nvidia libs from pypi for aarch64." echo "Using nvidia libs from pypi for aarch64."
# Fix platform constraints in PYTORCH_EXTRA_INSTALL_REQUIREMENTS for aarch64
# Replace 'platform_machine == "x86_64"' with 'platform_machine == "aarch64"'
export PYTORCH_EXTRA_INSTALL_REQUIREMENTS="${PYTORCH_EXTRA_INSTALL_REQUIREMENTS//platform_machine == \'x86_64\'/platform_machine == \'aarch64\'}"
echo "Updated PYTORCH_EXTRA_INSTALL_REQUIREMENTS for aarch64: $PYTORCH_EXTRA_INSTALL_REQUIREMENTS" echo "Updated PYTORCH_EXTRA_INSTALL_REQUIREMENTS for aarch64: $PYTORCH_EXTRA_INSTALL_REQUIREMENTS"
export USE_NVIDIA_PYPI_LIBS=1 export USE_NVIDIA_PYPI_LIBS=1
fi fi

View File

@ -138,6 +138,8 @@ def package_cuda_wheel(wheel_path, desired_cuda) -> None:
folder = os.path.dirname(wheel_path) folder = os.path.dirname(wheel_path)
os.mkdir(f"{folder}/tmp") os.mkdir(f"{folder}/tmp")
os.system(f"unzip {wheel_path} -d {folder}/tmp") os.system(f"unzip {wheel_path} -d {folder}/tmp")
# Delete original wheel since it will be repackaged
os.system(f"rm {wheel_path}")
# Check if we should use PyPI NVIDIA libraries or bundle system libraries # Check if we should use PyPI NVIDIA libraries or bundle system libraries
use_nvidia_pypi_libs = os.getenv("USE_NVIDIA_PYPI_LIBS", "0") == "1" use_nvidia_pypi_libs = os.getenv("USE_NVIDIA_PYPI_LIBS", "0") == "1"
@ -211,7 +213,8 @@ def package_cuda_wheel(wheel_path, desired_cuda) -> None:
] ]
# CUDA version-specific libraries # CUDA version-specific libraries
if "130" in desired_cuda: if "13" in desired_cuda:
minor_version = desired_cuda[-1]
version_specific_libs = [ version_specific_libs = [
"/usr/local/cuda/extras/CUPTI/lib64/libcupti.so.13", "/usr/local/cuda/extras/CUPTI/lib64/libcupti.so.13",
"/usr/local/cuda/lib64/libcublas.so.13", "/usr/local/cuda/lib64/libcublas.so.13",
@ -221,7 +224,7 @@ def package_cuda_wheel(wheel_path, desired_cuda) -> None:
"/usr/local/cuda/lib64/libcusolver.so.12", "/usr/local/cuda/lib64/libcusolver.so.12",
"/usr/local/cuda/lib64/libnvJitLink.so.13", "/usr/local/cuda/lib64/libnvJitLink.so.13",
"/usr/local/cuda/lib64/libnvrtc.so.13", "/usr/local/cuda/lib64/libnvrtc.so.13",
"/usr/local/cuda/lib64/libnvrtc-builtins.so.13.0", f"/usr/local/cuda/lib64/libnvrtc-builtins.so.13.{minor_version}",
] ]
elif "12" in desired_cuda: elif "12" in desired_cuda:
# Get the last character for libnvrtc-builtins version (e.g., "129" -> "9") # Get the last character for libnvrtc-builtins version (e.g., "129" -> "9")
@ -237,6 +240,8 @@ def package_cuda_wheel(wheel_path, desired_cuda) -> None:
"/usr/local/cuda/lib64/libnvrtc.so.12", "/usr/local/cuda/lib64/libnvrtc.so.12",
f"/usr/local/cuda/lib64/libnvrtc-builtins.so.12.{minor_version}", f"/usr/local/cuda/lib64/libnvrtc-builtins.so.12.{minor_version}",
] ]
else:
raise ValueError(f"Unsupported CUDA version: {desired_cuda}.")
# Combine all libraries # Combine all libraries
libs_to_copy = common_libs + version_specific_libs libs_to_copy = common_libs + version_specific_libs
@ -275,14 +280,7 @@ def complete_wheel(folder: str) -> str:
f"/{folder}/dist/{repaired_wheel_name}", f"/{folder}/dist/{repaired_wheel_name}",
) )
else: else:
repaired_wheel_name = wheel_name.replace( repaired_wheel_name = list_dir(f"/{folder}/dist")[0]
"linux_aarch64", "manylinux_2_28_aarch64"
)
print(f"Renaming {wheel_name} wheel to {repaired_wheel_name}")
os.rename(
f"/{folder}/dist/{wheel_name}",
f"/{folder}/dist/{repaired_wheel_name}",
)
print(f"Copying {repaired_wheel_name} to artifacts") print(f"Copying {repaired_wheel_name} to artifacts")
shutil.copy2( shutil.copy2(

View File

@ -214,8 +214,7 @@ case "$tag" in
TRITON=yes TRITON=yes
;; ;;
pytorch-linux-jammy-py3-gcc11-inductor-benchmarks) pytorch-linux-jammy-py3-gcc11-inductor-benchmarks)
# TODO (huydhn): Upgrade this to Python >= 3.10 ANACONDA_PYTHON_VERSION=3.10
ANACONDA_PYTHON_VERSION=3.9
GCC_VERSION=11 GCC_VERSION=11
VISION=yes VISION=yes
KATEX=yes KATEX=yes
@ -263,13 +262,10 @@ case "$tag" in
TRITON_CPU=yes TRITON_CPU=yes
;; ;;
pytorch-linux-jammy-linter) pytorch-linux-jammy-linter)
# TODO: Use 3.9 here because of this issue https://github.com/python/mypy/issues/13627. PYTHON_VERSION=3.10
# We will need to update mypy version eventually, but that's for another day. The task
# would be to upgrade mypy to 1.0.0 with Python 3.11
PYTHON_VERSION=3.9
;; ;;
pytorch-linux-jammy-cuda12.8-cudnn9-py3.9-linter) pytorch-linux-jammy-cuda12.8-cudnn9-py3.10-linter)
PYTHON_VERSION=3.9 PYTHON_VERSION=3.10
CUDA_VERSION=12.8.1 CUDA_VERSION=12.8.1
;; ;;
pytorch-linux-jammy-aarch64-py3.10-gcc11) pytorch-linux-jammy-aarch64-py3.10-gcc11)

View File

@ -1 +1 @@
fccfc522864cf8bc172abe0cd58ae5581e2d44b9 bbb06c0334a6772b92d24bde54956e675c8c6604

View File

@ -0,0 +1,9 @@
#!/bin/bash
set -xe
# Script used in Linux x86 and aarch64 CD pipeline
# Workaround for exposing statically linked libstdc++ CXX11 ABI symbols.
# see: https://github.com/pytorch/pytorch/issues/133437
LIBNONSHARED=$(gcc -print-file-name=libstdc++_nonshared.a)
nm -g $LIBNONSHARED | grep " T " | grep recursive_directory_iterator | cut -c 20- > weaken-symbols.txt
objcopy --weaken-symbols weaken-symbols.txt $LIBNONSHARED $LIBNONSHARED

View File

@ -130,7 +130,8 @@ ENV LD_LIBRARY_PATH=/opt/rh/gcc-toolset-${DEVTOOLSET_VERSION}/root/usr/lib64:/op
RUN for cpython_version in "cp312-cp312" "cp313-cp313" "cp313-cp313t"; do \ RUN for cpython_version in "cp312-cp312" "cp313-cp313" "cp313-cp313t"; do \
/opt/python/${cpython_version}/bin/python -m pip install setuptools wheel; \ /opt/python/${cpython_version}/bin/python -m pip install setuptools wheel; \
done; done;
ADD ./common/patch_libstdc.sh patch_libstdc.sh
RUN bash ./patch_libstdc.sh && rm patch_libstdc.sh
# cmake-3.18.4 from pip; force in case cmake3 already exists # cmake-3.18.4 from pip; force in case cmake3 already exists
RUN yum install -y python3-pip && \ RUN yum install -y python3-pip && \

View File

@ -71,3 +71,5 @@ RUN rm -rf /opt/python/cp33-cp33m /opt/_internal/cpython-3.3.6
RUN rm -rf /opt/python/cp34-cp34m /opt/_internal/cpython-3.4.6 RUN rm -rf /opt/python/cp34-cp34m /opt/_internal/cpython-3.4.6
COPY --from=openblas /opt/OpenBLAS/ /opt/OpenBLAS/ COPY --from=openblas /opt/OpenBLAS/ /opt/OpenBLAS/
ENV LD_LIBRARY_PATH=/opt/OpenBLAS/lib:$LD_LIBRARY_PATH ENV LD_LIBRARY_PATH=/opt/OpenBLAS/lib:$LD_LIBRARY_PATH
ADD ./common/patch_libstdc.sh patch_libstdc.sh
RUN bash ./patch_libstdc.sh && rm patch_libstdc.sh

View File

@ -95,3 +95,5 @@ COPY --from=nvpl /opt/nvpl/lib/ /usr/local/lib/
COPY --from=nvpl /opt/nvpl/include/ /usr/local/include/ COPY --from=nvpl /opt/nvpl/include/ /usr/local/include/
RUN ln -sf /usr/local/cuda-${BASE_CUDA_VERSION} /usr/local/cuda RUN ln -sf /usr/local/cuda-${BASE_CUDA_VERSION} /usr/local/cuda
ENV PATH=/usr/local/cuda/bin:$PATH ENV PATH=/usr/local/cuda/bin:$PATH
ADD ./common/patch_libstdc.sh patch_libstdc.sh
RUN bash ./patch_libstdc.sh && rm patch_libstdc.sh

View File

@ -93,8 +93,9 @@ librosa==0.10.2 ; python_version == "3.12" and platform_machine != "s390x"
#Pinned versions: #Pinned versions:
#test that import: #test that import:
mypy==1.16.0 mypy==1.16.0 ; platform_system != "Windows"
# Pin MyPy version because new errors are likely to appear with each release # Pin MyPy version because new errors are likely to appear with each release
# Skip on Windows as lots of type annotations are POSIX specific
#Description: linter #Description: linter
#Pinned versions: 1.16.0 #Pinned versions: 1.16.0
#test that import: test_typing.py, test_type_hints.py #test that import: test_typing.py, test_type_hints.py

View File

@ -1,7 +1,7 @@
sphinx==5.3.0 sphinx==5.3.0
#Description: This is used to generate PyTorch docs #Description: This is used to generate PyTorch docs
#Pinned versions: 5.3.0 #Pinned versions: 5.3.0
-e git+https://github.com/pytorch/pytorch_sphinx_theme.git@1657ad2fc1acdc98aa719eebecbb0128a7c13ce4#egg=pytorch_sphinx_theme2 -e git+https://github.com/pytorch/pytorch_sphinx_theme.git@71e55749be14ceb56e7f8211a9fb649866b87ad4#egg=pytorch_sphinx_theme2
# TODO: sphinxcontrib.katex 0.9.0 adds a local KaTeX server to speed up pre-rendering # TODO: sphinxcontrib.katex 0.9.0 adds a local KaTeX server to speed up pre-rendering
# but it doesn't seem to work and hangs around idly. The initial thought that it is probably # but it doesn't seem to work and hangs around idly. The initial thought that it is probably

View File

@ -7,4 +7,4 @@ set -ex
SCRIPTPATH="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )" SCRIPTPATH="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
USE_NVSHMEM=0 USE_CUSPARSELT=0 BUILD_PYTHONLESS=1 DESIRED_PYTHON="3.9" ${SCRIPTPATH}/../manywheel/build.sh USE_NVSHMEM=0 USE_CUSPARSELT=0 BUILD_PYTHONLESS=1 DESIRED_PYTHON="3.10" ${SCRIPTPATH}/../manywheel/build.sh

View File

@ -41,7 +41,6 @@ def sample_vllm_test_library():
"pytest -v -s basic_correctness/test_cumem.py", "pytest -v -s basic_correctness/test_cumem.py",
"pytest -v -s basic_correctness/test_basic_correctness.py", "pytest -v -s basic_correctness/test_basic_correctness.py",
"pytest -v -s basic_correctness/test_cpu_offload.py", "pytest -v -s basic_correctness/test_cpu_offload.py",
"VLLM_TEST_ENABLE_ARTIFICIAL_PREEMPT=1 pytest -v -s basic_correctness/test_preemption.py",
], ],
}, },
"vllm_basic_models_test": { "vllm_basic_models_test": {
@ -68,15 +67,12 @@ def sample_vllm_test_library():
"-v", "-v",
"-s", "-s",
"entrypoints/llm", "entrypoints/llm",
"--ignore=entrypoints/llm/test_lazy_outlines.py",
"--ignore=entrypoints/llm/test_generate.py", "--ignore=entrypoints/llm/test_generate.py",
"--ignore=entrypoints/llm/test_generate_multiple_loras.py",
"--ignore=entrypoints/llm/test_collective_rpc.py", "--ignore=entrypoints/llm/test_collective_rpc.py",
] ]
), ),
"pytest -v -s entrypoints/llm/test_lazy_outlines.py",
"pytest -v -s entrypoints/llm/test_generate.py", "pytest -v -s entrypoints/llm/test_generate.py",
"VLLM_USE_V1=0 pytest -v -s entrypoints/offline_mode", "pytest -v -s entrypoints/offline_mode",
], ],
}, },
"vllm_regression_test": { "vllm_regression_test": {

View File

@ -67,7 +67,7 @@ fi
# wheels with cxx11-abi # wheels with cxx11-abi
echo "Checking that the gcc ABI is what we expect" echo "Checking that the gcc ABI is what we expect"
if [[ "$(uname)" != 'Darwin' ]]; then if [[ "$(uname)" != 'Darwin' && "$(uname -m)" != "s390x" ]]; then
# We also check that there are cxx11 symbols in libtorch # We also check that there are cxx11 symbols in libtorch
# #
echo "Checking that symbols in libtorch.so have the right gcc abi" echo "Checking that symbols in libtorch.so have the right gcc abi"

View File

@ -284,7 +284,7 @@ function install_torchrec_and_fbgemm() {
function clone_pytorch_xla() { function clone_pytorch_xla() {
if [[ ! -d ./xla ]]; then if [[ ! -d ./xla ]]; then
git clone --recursive --quiet https://github.com/pytorch/xla.git git clone --recursive -b r2.9 https://github.com/pytorch/xla.git
pushd xla pushd xla
# pin the xla hash so that we don't get broken by changes to xla # pin the xla hash so that we don't get broken by changes to xla
git checkout "$(cat ../.github/ci_commit_pins/xla.txt)" git checkout "$(cat ../.github/ci_commit_pins/xla.txt)"

View File

@ -58,7 +58,7 @@ time python tools/setup_helpers/generate_code.py \
# Build the docs # Build the docs
pushd docs/cpp pushd docs/cpp
time make VERBOSE=1 html -j time make VERBOSE=1 html
popd popd
popd popd

View File

@ -35,10 +35,11 @@ fi
print_cmake_info print_cmake_info
if [[ ${BUILD_ENVIRONMENT} == *"distributed"* ]]; then if [[ ${BUILD_ENVIRONMENT} == *"distributed"* ]]; then
USE_OPENMP=1 WERROR=1 python setup.py bdist_wheel # Needed for inductor benchmarks, as lots of HF networks make `torch.distribtued` calls
USE_DISTRIBUTED=1 USE_OPENMP=1 WERROR=1 python setup.py bdist_wheel
else else
# NB: we always build with distributed; USE_DISTRIBUTED turns off all # Explicitly set USE_DISTRIBUTED=0 to align with the default build config on mac. This also serves as the sole CI config that tests
# backends (specifically the gloo backend), so test that this case works too # that building with USE_DISTRIBUTED=0 works at all. See https://github.com/pytorch/pytorch/issues/86448
USE_DISTRIBUTED=0 USE_OPENMP=1 MACOSX_DEPLOYMENT_TARGET=11.0 WERROR=1 BUILD_TEST=OFF USE_PYTORCH_METAL=1 python setup.py bdist_wheel --plat-name macosx_11_0_arm64 USE_DISTRIBUTED=0 USE_OPENMP=1 MACOSX_DEPLOYMENT_TARGET=11.0 WERROR=1 BUILD_TEST=OFF USE_PYTORCH_METAL=1 python setup.py bdist_wheel --plat-name macosx_11_0_arm64
fi fi
if which sccache > /dev/null; then if which sccache > /dev/null; then

View File

@ -13,13 +13,9 @@ if [[ ! $(python -c "import torch; print(int(torch.backends.openmp.is_available(
fi fi
popd popd
python -mpip install -r requirements.txt
# enable debug asserts in serialization # enable debug asserts in serialization
export TORCH_SERIALIZATION_DEBUG=1 export TORCH_SERIALIZATION_DEBUG=1
python -mpip install --no-input -r requirements.txt
setup_test_python() { setup_test_python() {
# The CircleCI worker hostname doesn't resolve to an address. # The CircleCI worker hostname doesn't resolve to an address.
# This environment variable makes ProcessGroupGloo default to # This environment variable makes ProcessGroupGloo default to

View File

@ -0,0 +1,25 @@
From 6e08c9d08e9de59c7af28b720289debbbd384764 Mon Sep 17 00:00:00 2001
From: Michael Wang <13521008+isVoid@users.noreply.github.com>
Date: Tue, 1 Apr 2025 17:28:05 -0700
Subject: [PATCH] Avoid bumping certain driver API to avoid future breakage
(#185)
Co-authored-by: isVoid <isVoid@users.noreply.github.com>
---
numba_cuda/numba/cuda/cudadrv/driver.py | 3 +++
1 file changed, 3 insertions(+)
diff --git a/numba_cuda/numba/cuda/cudadrv/driver.py b/numba_cuda/numba/cuda/cudadrv/driver.py
index 1641bf77..233e9ed7 100644
--- a/numba_cuda/numba/cuda/cudadrv/driver.py
+++ b/numba_cuda/numba/cuda/cudadrv/driver.py
@@ -365,6 +365,9 @@ def _find_api(self, fname):
else:
variants = ('_v2', '')
+ if fname in ("cuCtxGetDevice", "cuCtxSynchronize"):
+ return getattr(self.lib, fname)
+
for variant in variants:
try:
return getattr(self.lib, f'{fname}{variant}')

View File

@ -32,6 +32,9 @@ LIBTORCH_NAMESPACE_LIST = (
"torch::", "torch::",
) )
# Patterns for detecting statically linked libstdc++ symbols
STATICALLY_LINKED_CXX11_ABI = [re.compile(r".*recursive_directory_iterator.*")]
def _apply_libtorch_symbols(symbols): def _apply_libtorch_symbols(symbols):
return [ return [
@ -53,12 +56,17 @@ def get_symbols(lib: str) -> list[tuple[str, str, str]]:
return [x.split(" ", 2) for x in lines.decode("latin1").split("\n")[:-1]] return [x.split(" ", 2) for x in lines.decode("latin1").split("\n")[:-1]]
def grep_symbols(lib: str, patterns: list[Any]) -> list[str]: def grep_symbols(
lib: str, patterns: list[Any], symbol_type: str | None = None
) -> list[str]:
def _grep_symbols( def _grep_symbols(
symbols: list[tuple[str, str, str]], patterns: list[Any] symbols: list[tuple[str, str, str]], patterns: list[Any]
) -> list[str]: ) -> list[str]:
rc = [] rc = []
for _s_addr, _s_type, s_name in symbols: for _s_addr, _s_type, s_name in symbols:
# Filter by symbol type if specified
if symbol_type and _s_type != symbol_type:
continue
for pattern in patterns: for pattern in patterns:
if pattern.match(s_name): if pattern.match(s_name):
rc.append(s_name) rc.append(s_name)
@ -80,6 +88,18 @@ def grep_symbols(lib: str, patterns: list[Any]) -> list[str]:
return functools.reduce(list.__add__, (x.result() for x in tasks), []) return functools.reduce(list.__add__, (x.result() for x in tasks), [])
def check_lib_statically_linked_libstdc_cxx_abi_symbols(lib: str) -> None:
cxx11_statically_linked_symbols = grep_symbols(
lib, STATICALLY_LINKED_CXX11_ABI, symbol_type="T"
)
num_statically_linked_symbols = len(cxx11_statically_linked_symbols)
print(f"num_statically_linked_symbols (T): {num_statically_linked_symbols}")
if num_statically_linked_symbols > 0:
raise RuntimeError(
f"Found statically linked libstdc++ symbols (recursive_directory_iterator): {cxx11_statically_linked_symbols[:100]}"
)
def check_lib_symbols_for_abi_correctness(lib: str) -> None: def check_lib_symbols_for_abi_correctness(lib: str) -> None:
print(f"lib: {lib}") print(f"lib: {lib}")
cxx11_symbols = grep_symbols(lib, LIBTORCH_CXX11_PATTERNS) cxx11_symbols = grep_symbols(lib, LIBTORCH_CXX11_PATTERNS)
@ -107,6 +127,7 @@ def main() -> None:
libtorch_cpu_path = str(install_root / "lib" / "libtorch_cpu.so") libtorch_cpu_path = str(install_root / "lib" / "libtorch_cpu.so")
check_lib_symbols_for_abi_correctness(libtorch_cpu_path) check_lib_symbols_for_abi_correctness(libtorch_cpu_path)
check_lib_statically_linked_libstdc_cxx_abi_symbols(libtorch_cpu_path)
if __name__ == "__main__": if __name__ == "__main__":

View File

@ -32,6 +32,16 @@ if [[ "$BUILD_ENVIRONMENT" != *rocm* && "$BUILD_ENVIRONMENT" != *s390x* && -d /v
git config --global --add safe.directory /var/lib/jenkins/workspace git config --global --add safe.directory /var/lib/jenkins/workspace
fi fi
# Patch numba to avoid CUDA-13 crash, see https://github.com/pytorch/pytorch/issues/162878
NUMBA_CUDA_DIR=$(python -c "import os;import numba.cuda; print(os.path.dirname(numba.cuda.__file__))" 2>/dev/null || true)
if [ -n "$NUMBA_CUDA_DIR" ]; then
NUMBA_PATCH="$(dirname "$(realpath "${BASH_SOURCE[0]}")")/numba-cuda-13.patch"
pushd "$NUMBA_CUDA_DIR"
patch -p4 <"$NUMBA_PATCH"
popd
fi
echo "Environment variables:" echo "Environment variables:"
env env
@ -1614,6 +1624,25 @@ test_operator_benchmark() {
--expected "expected_ci_operator_benchmark_eager_float32_cpu.csv" --expected "expected_ci_operator_benchmark_eager_float32_cpu.csv"
} }
test_operator_microbenchmark() {
TEST_REPORTS_DIR=$(pwd)/test/test-reports
mkdir -p "$TEST_REPORTS_DIR"
TEST_DIR=$(pwd)
cd benchmarks/operator_benchmark/pt_extension
python -m pip install .
cd "${TEST_DIR}"/benchmarks/operator_benchmark
for OP_BENCHMARK_TESTS in matmul mm addmm bmm; do
$TASKSET python -m pt.${OP_BENCHMARK_TESTS}_test --tag-filter long \
--output-json-for-dashboard "${TEST_REPORTS_DIR}/operator_microbenchmark_${OP_BENCHMARK_TESTS}_compile.json" \
--benchmark-name "PyTorch operator microbenchmark" --use-compile
$TASKSET python -m pt.${OP_BENCHMARK_TESTS}_test --tag-filter long \
--output-json-for-dashboard "${TEST_REPORTS_DIR}/operator_microbenchmark_${OP_BENCHMARK_TESTS}.json" \
--benchmark-name "PyTorch operator microbenchmark"
done
}
if ! [[ "${BUILD_ENVIRONMENT}" == *libtorch* || "${BUILD_ENVIRONMENT}" == *-bazel-* ]]; then if ! [[ "${BUILD_ENVIRONMENT}" == *libtorch* || "${BUILD_ENVIRONMENT}" == *-bazel-* ]]; then
(cd test && python -c "import torch; print(torch.__config__.show())") (cd test && python -c "import torch; print(torch.__config__.show())")
@ -1668,6 +1697,8 @@ elif [[ "${TEST_CONFIG}" == *operator_benchmark* ]]; then
test_operator_benchmark cpu ${TEST_MODE} test_operator_benchmark cpu ${TEST_MODE}
fi fi
elif [[ "${TEST_CONFIG}" == *operator_microbenchmark* ]]; then
test_operator_microbenchmark
elif [[ "${TEST_CONFIG}" == *inductor_distributed* ]]; then elif [[ "${TEST_CONFIG}" == *inductor_distributed* ]]; then
test_inductor_distributed test_inductor_distributed
elif [[ "${TEST_CONFIG}" == *inductor-halide* ]]; then elif [[ "${TEST_CONFIG}" == *inductor-halide* ]]; then
@ -1721,11 +1752,6 @@ elif [[ "${TEST_CONFIG}" == *inductor_cpp_wrapper* ]]; then
elif [[ "${TEST_CONFIG}" == *inductor* ]]; then elif [[ "${TEST_CONFIG}" == *inductor* ]]; then
install_torchvision install_torchvision
test_inductor_shard "${SHARD_NUMBER}" test_inductor_shard "${SHARD_NUMBER}"
if [[ "${SHARD_NUMBER}" == 1 ]]; then
if [[ "${BUILD_ENVIRONMENT}" != linux-jammy-py3.9-gcc11-build ]]; then
test_inductor_distributed
fi
fi
elif [[ "${TEST_CONFIG}" == *einops* ]]; then elif [[ "${TEST_CONFIG}" == *einops* ]]; then
test_einops test_einops
elif [[ "${TEST_CONFIG}" == *dynamo_wrapped* ]]; then elif [[ "${TEST_CONFIG}" == *dynamo_wrapped* ]]; then

View File

@ -137,7 +137,7 @@ sccache --show-stats
python -c "import os, glob; os.system('python -mpip install --no-index --no-deps ' + glob.glob('dist/*.whl')[0])" python -c "import os, glob; os.system('python -mpip install --no-index --no-deps ' + glob.glob('dist/*.whl')[0])"
( (
if "%BUILD_ENVIRONMENT%"=="" ( if "%BUILD_ENVIRONMENT%"=="" (
echo NOTE: To run `import torch`, please make sure to activate the conda environment by running `call %CONDA_PARENT_DIR%\Miniconda3\Scripts\activate.bat %CONDA_PARENT_DIR%\Miniconda3` in Command Prompt before running Git Bash. echo NOTE: To run `import torch`, please make sure to activate the conda environment by running `call %CONDA_ROOT_DIR%\Scripts\activate.bat %CONDA_ROOT_DIR%\envs\py_tmp` in Command Prompt before running Git Bash.
) else ( ) else (
copy /Y "dist\*.whl" "%PYTORCH_FINAL_PACKAGE_DIR%" copy /Y "dist\*.whl" "%PYTORCH_FINAL_PACKAGE_DIR%"

View File

@ -3,12 +3,12 @@ if "%BUILD_ENVIRONMENT%"=="" (
) else ( ) else (
set CONDA_PARENT_DIR=C:\Jenkins set CONDA_PARENT_DIR=C:\Jenkins
) )
set CONDA_ROOT_DIR=%CONDA_PARENT_DIR%\Miniconda3
:: Be conservative here when rolling out the new AMI with conda. This will try :: Be conservative here when rolling out the new AMI with conda. This will try
:: to install conda as before if it couldn't find the conda installation. This :: to install conda as before if it couldn't find the conda installation. This
:: can be removed eventually after we gain enough confidence in the AMI :: can be removed eventually after we gain enough confidence in the AMI
if not exist %CONDA_PARENT_DIR%\Miniconda3 ( if not exist %CONDA_ROOT_DIR% (
set INSTALL_FRESH_CONDA=1 set INSTALL_FRESH_CONDA=1
) )
@ -17,10 +17,14 @@ if "%INSTALL_FRESH_CONDA%"=="1" (
if errorlevel 1 exit /b if errorlevel 1 exit /b
if not errorlevel 0 exit /b if not errorlevel 0 exit /b
%TMP_DIR_WIN%\Miniconda3-latest-Windows-x86_64.exe /InstallationType=JustMe /RegisterPython=0 /S /AddToPath=0 /D=%CONDA_PARENT_DIR%\Miniconda3 %TMP_DIR_WIN%\Miniconda3-latest-Windows-x86_64.exe /InstallationType=JustMe /RegisterPython=0 /S /AddToPath=0 /D=%CONDA_ROOT_DIR%
if errorlevel 1 exit /b if errorlevel 1 exit /b
if not errorlevel 0 exit /b if not errorlevel 0 exit /b
) )
:: Activate conda so that we can use its commands, i.e. conda, python, pip :: Activate conda so that we can use its commands, i.e. conda, python, pip
call %CONDA_PARENT_DIR%\Miniconda3\Scripts\activate.bat %CONDA_PARENT_DIR%\Miniconda3 call %CONDA_ROOT_DIR%\Scripts\activate.bat %CONDA_ROOT_DIR%
:: Activate conda so that we can use its commands, i.e. conda, python, pip
call conda activate py_tmp
call pip install -r .ci/docker/requirements-ci.txt

View File

@ -14,7 +14,7 @@ if not errorlevel 0 exit /b
:: build\torch. Rather than changing all these references, making a copy of torch folder :: build\torch. Rather than changing all these references, making a copy of torch folder
:: from conda to the current workspace is easier. The workspace will be cleaned up after :: from conda to the current workspace is easier. The workspace will be cleaned up after
:: the job anyway :: the job anyway
xcopy /s %CONDA_PARENT_DIR%\Miniconda3\Lib\site-packages\torch %TMP_DIR_WIN%\build\torch\ xcopy /s %CONDA_ROOT_DIR%\envs\py_tmp\Lib\site-packages\torch %TMP_DIR_WIN%\build\torch\
pushd . pushd .
if "%VC_VERSION%" == "" ( if "%VC_VERSION%" == "" (

View File

@ -38,7 +38,14 @@ if [[ "$BUILD_ENVIRONMENT" == *cuda* ]]; then
fi fi
# TODO: Move both of them to Windows AMI # TODO: Move both of them to Windows AMI
python -m pip install pytest-rerunfailures==10.3 pytest-cpp==2.3.0 tensorboard==2.13.0 protobuf==5.29.4 pytest-subtests==0.13.1 python -m pip install tensorboard==2.13.0 protobuf==5.29.4 pytest-subtests==0.13.1
# Copied from https://github.com/pytorch/test-infra/blob/be01a40157c36cd5a48391fdf44a7bc3ebd4c7e3/aws/ami/windows/scripts/Installers/Install-Pip-Dependencies.ps1#L16 with some adjustments
# pytest-rerunfailures==10.3 as 10.2 fails with INTERNALERROR> pluggy._manager.PluginValidationError: unknown hook 'pytest_configure_node'
# scipy from 1.6.3 to 1.10
# expecttest from 0.1.3 to 0.3.0
# xdoctest from 1.0.2 to 1.3.0
python -m pip install "future==0.18.2" "hypothesis==5.35.1" "expecttest==0.3.0" "librosa>=0.6.2" "scipy==1.10.1" "psutil==5.9.1" "pynvml==11.4.1" "pillow==9.2.0" "unittest-xml-reporting<=3.2.0,>=2.0.0" "pytest==7.1.3" "pytest-xdist==2.5.0" "pytest-flakefinder==1.1.0" "pytest-rerunfailures==10.3" "pytest-shard==0.1.2" "sympy==1.11.1" "xdoctest==1.3.0" "pygments==2.12.0" "opt-einsum>=3.3" "networkx==2.8.8" "mpmath==1.2.1" "pytest-cpp==2.3.0" "boto3==1.35.42"
# Install Z3 optional dependency for Windows builds. # Install Z3 optional dependency for Windows builds.
python -m pip install z3-solver==4.15.1.0 python -m pip install z3-solver==4.15.1.0
@ -52,9 +59,6 @@ python -m pip install parameterized==0.8.1
# Install pulp for testing ilps under torch\distributed\_tools # Install pulp for testing ilps under torch\distributed\_tools
python -m pip install pulp==2.9.0 python -m pip install pulp==2.9.0
# Install expecttest to merge https://github.com/pytorch/pytorch/pull/155308
python -m pip install expecttest==0.3.0
run_tests() { run_tests() {
# Run nvidia-smi if available # Run nvidia-smi if available
for path in '/c/Program Files/NVIDIA Corporation/NVSMI/nvidia-smi.exe' /c/Windows/System32/nvidia-smi.exe; do for path in '/c/Program Files/NVIDIA Corporation/NVSMI/nvidia-smi.exe' /c/Windows/System32/nvidia-smi.exe; do

View File

@ -37,10 +37,10 @@ IF "%CUDA_PATH_V128%"=="" (
) )
IF "%BUILD_VISION%" == "" ( IF "%BUILD_VISION%" == "" (
set TORCH_CUDA_ARCH_LIST=6.1;7.0;7.5;8.0;8.6;9.0;10.0;12.0 set TORCH_CUDA_ARCH_LIST=7.0;7.5;8.0;8.6;9.0;10.0;12.0
set TORCH_NVCC_FLAGS=-Xfatbin -compress-all set TORCH_NVCC_FLAGS=-Xfatbin -compress-all
) ELSE ( ) ELSE (
set NVCC_FLAGS=-D__CUDA_NO_HALF_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_50,code=sm_50 -gencode=arch=compute_60,code=sm_60 -gencode=arch=compute_70,code=sm_70 -gencode=arch=compute_75,code=sm_75 -gencode=arch=compute_80,code=compute_80 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_90,code=compute_90 -gencode=arch=compute_100,code=compute_100 -gencode=arch=compute_120,code=compute_120 set NVCC_FLAGS=-D__CUDA_NO_HALF_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_70,code=sm_70 -gencode=arch=compute_75,code=sm_75 -gencode=arch=compute_80,code=compute_80 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_90,code=compute_90 -gencode=arch=compute_100,code=compute_100 -gencode=arch=compute_120,code=compute_120
) )
set "CUDA_PATH=%CUDA_PATH_V128%" set "CUDA_PATH=%CUDA_PATH_V128%"

View File

@ -1,9 +1,9 @@
set WIN_DRIVER_VN=528.89 set WIN_DRIVER_VN=580.88
set "DRIVER_DOWNLOAD_LINK=https://ossci-windows.s3.amazonaws.com/%WIN_DRIVER_VN%-data-center-tesla-desktop-winserver-2016-2019-2022-dch-international.exe" & REM @lint-ignore set "DRIVER_DOWNLOAD_LINK=https://ossci-windows.s3.amazonaws.com/%WIN_DRIVER_VN%-data-center-tesla-desktop-win10-win11-64bit-dch-international.exe" & REM @lint-ignore
curl --retry 3 -kL %DRIVER_DOWNLOAD_LINK% --output %WIN_DRIVER_VN%-data-center-tesla-desktop-winserver-2016-2019-2022-dch-international.exe curl --retry 3 -kL %DRIVER_DOWNLOAD_LINK% --output %WIN_DRIVER_VN%-data-center-tesla-desktop-win10-win11-64bit-dch-international.exe
if errorlevel 1 exit /b 1 if errorlevel 1 exit /b 1
start /wait %WIN_DRIVER_VN%-data-center-tesla-desktop-winserver-2016-2019-2022-dch-international.exe -s -noreboot start /wait %WIN_DRIVER_VN%-data-center-tesla-desktop-win10-win11-64bit-dch-international.exe -s -noreboot
if errorlevel 1 exit /b 1 if errorlevel 1 exit /b 1
del %WIN_DRIVER_VN%-data-center-tesla-desktop-winserver-2016-2019-2022-dch-international.exe || ver > NUL del %WIN_DRIVER_VN%-data-center-tesla-desktop-win10-win11-64bit-dch-international.exe || ver > NUL

View File

@ -189,8 +189,7 @@ pip install requests ninja typing-extensions
retry pip install -r "${pytorch_rootdir}/requirements.txt" || true retry pip install -r "${pytorch_rootdir}/requirements.txt" || true
retry brew install libomp retry brew install libomp
# For USE_DISTRIBUTED=1 on macOS, this enables gloo, which needs libuv, which # For USE_DISTRIBUTED=1 on macOS, need libuv, which is build as part of tensorpipe submodule
# is build as part of tensorpipe submodule
export USE_DISTRIBUTED=1 export USE_DISTRIBUTED=1
export USE_MKLDNN=OFF export USE_MKLDNN=OFF

View File

@ -71,14 +71,7 @@ export PYTORCH_BUILD_NUMBER=1
# Set triton version as part of PYTORCH_EXTRA_INSTALL_REQUIREMENTS # Set triton version as part of PYTORCH_EXTRA_INSTALL_REQUIREMENTS
TRITON_VERSION=$(cat $PYTORCH_ROOT/.ci/docker/triton_version.txt) TRITON_VERSION=$(cat $PYTORCH_ROOT/.ci/docker/triton_version.txt)
# Here PYTORCH_EXTRA_INSTALL_REQUIREMENTS is already set for the all the wheel builds hence append TRITON_CONSTRAINT
TRITON_CONSTRAINT="platform_system == 'Linux' and platform_machine == 'x86_64'"
# CUDA 12.9/13.0 builds have triton for Linux and Linux aarch64 binaries.
if [[ "$DESIRED_CUDA" == "cu129" ]] || [[ "$DESIRED_CUDA" == "cu130" ]]; then
TRITON_CONSTRAINT="platform_system == 'Linux'" TRITON_CONSTRAINT="platform_system == 'Linux'"
fi
if [[ "$PACKAGE_TYPE" =~ .*wheel.* && -n "${PYTORCH_EXTRA_INSTALL_REQUIREMENTS:-}" && ! "$PYTORCH_BUILD_VERSION" =~ .*xpu.* ]]; then if [[ "$PACKAGE_TYPE" =~ .*wheel.* && -n "${PYTORCH_EXTRA_INSTALL_REQUIREMENTS:-}" && ! "$PYTORCH_BUILD_VERSION" =~ .*xpu.* ]]; then
TRITON_REQUIREMENT="triton==${TRITON_VERSION}; ${TRITON_CONSTRAINT}" TRITON_REQUIREMENT="triton==${TRITON_VERSION}; ${TRITON_CONSTRAINT}"

View File

@ -6,6 +6,12 @@ inputs:
cuda-version: cuda-version:
description: which cuda version to install, 'cpu' for none description: which cuda version to install, 'cpu' for none
required: true required: true
python-version:
required: false
type: string
default: "3.10"
description: |
The python version to be used. Will be 3.10 by default
runs: runs:
using: composite using: composite
@ -38,18 +44,24 @@ runs:
CONDA="C:\Jenkins\Miniconda3\condabin\conda.bat" CONDA="C:\Jenkins\Miniconda3\condabin\conda.bat"
{ {
echo "CONDA=${CONDA}";
echo "CONDA_RUN=${CONDA} run --no-capture-output"; echo "CONDA_RUN=${CONDA} run --no-capture-output";
echo "CONDA_BUILD=${CONDA} run conda-build"; echo "CONDA_BUILD=${CONDA} run conda-build";
echo "CONDA_INSTALL=${CONDA} install"; echo "CONDA_INSTALL=${CONDA} install";
} >> "${GITHUB_ENV}" } >> "${GITHUB_ENV}"
- name: Setup Python3 - name: Setup Python3
env:
PYTHON_VERSION: ${{ inputs.python-version }}
shell: bash shell: bash
run: | run: |
set +e set +e
set -x set -x
PYTHON3=$(${CONDA_RUN} which python3) # Create new py_tmp env with python-version
${CONDA} create -y -n py_tmp python=${PYTHON_VERSION} intel-openmp libuv
PYTHON3=$(${CONDA_RUN} -n py_tmp which python3)
EXIT_CODE=$? EXIT_CODE=$?
if [[ "${EXIT_CODE}" == "0" ]]; then if [[ "${EXIT_CODE}" == "0" ]]; then
@ -62,7 +74,7 @@ runs:
# installation, which is Python 3 based. Its Python is default to Python 3. Further, there # installation, which is Python 3 based. Its Python is default to Python 3. Further, there
# is also the Miniconda installation that is Python 2 based, and both can be installed if # is also the Miniconda installation that is Python 2 based, and both can be installed if
# needed. In both cases, Python binary is just called python # needed. In both cases, Python binary is just called python
PYTHON=$(${CONDA_RUN} which python) PYTHON=$(${CONDA_RUN} -n py_tmp which python)
EXIT_CODE=$? EXIT_CODE=$?
if [[ "${EXIT_CODE}" == "0" ]]; then if [[ "${EXIT_CODE}" == "0" ]]; then

View File

@ -1 +1 @@
e10fef08838612b4560e9c72e5cb1414a5edfa13 78a47f87ce259a48f0391fa9ae15add05ea7432b

View File

@ -1 +1 @@
6c5478ff7c3d50dd1e3047d72ec5909bea474073 r2.9

View File

@@ -41,9 +41,9 @@ SUPPORTED_PERIODICAL_MODES: dict[str, Callable[[Optional[str]], bool]] = {
}
# The link to the published list of disabled jobs
-DISABLED_JOBS_URL = "https://ossci-metrics.s3.amazonaws.com/disabled-jobs.json"
+DISABLED_JOBS_URL = "https://ossci-metrics.s3.amazonaws.com/disabled-jobs.json?versionId=hjktHz2WOejHpxKpkqpDknTt5rMTM9KK"
# and unstable jobs
-UNSTABLE_JOBS_URL = "https://ossci-metrics.s3.amazonaws.com/unstable-jobs.json"
+UNSTABLE_JOBS_URL = "https://ossci-metrics.s3.amazonaws.com/unstable-jobs.json?versionId=wrjdvvQTJxgvMO.rGw5MEuMsj6XbjuV7"
# Some constants used to handle disabled and unstable jobs
JOB_NAME_SEP = "/"
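For context on the hunk above: adding `?versionId=...` to the S3 URLs pins a specific object version, so this branch keeps reading a fixed snapshot of the disabled/unstable job lists even if the published files are later updated. A minimal sketch of fetching the pinned snapshot (standard library only; everything except the URL itself is illustrative):

```python
# Sketch: fetch the version-pinned disabled-jobs list referenced above.
# The URL is taken from the diff; the length check is just for illustration.
import json
import urllib.request

DISABLED_JOBS_URL = (
    "https://ossci-metrics.s3.amazonaws.com/disabled-jobs.json"
    "?versionId=hjktHz2WOejHpxKpkqpDknTt5rMTM9KK"
)

with urllib.request.urlopen(DISABLED_JOBS_URL) as resp:
    disabled_jobs = json.load(resp)

print(f"pinned snapshot contains {len(disabled_jobs)} entries")
```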

View File

@@ -43,55 +43,55 @@ CUDA_AARCH64_ARCHES = ["12.6-aarch64", "12.8-aarch64", "13.0-aarch64"]
PYTORCH_EXTRA_INSTALL_REQUIREMENTS = {
"12.6": (
-"nvidia-cuda-nvrtc-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-cuda-nvrtc-cu12==12.6.77; platform_system == 'Linux' | "
-"nvidia-cuda-runtime-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-cuda-runtime-cu12==12.6.77; platform_system == 'Linux' | "
-"nvidia-cuda-cupti-cu12==12.6.80; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-cuda-cupti-cu12==12.6.80; platform_system == 'Linux' | "
-"nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' | "
-"nvidia-cublas-cu12==12.6.4.1; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-cublas-cu12==12.6.4.1; platform_system == 'Linux' | "
-"nvidia-cufft-cu12==11.3.0.4; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-cufft-cu12==11.3.0.4; platform_system == 'Linux' | "
-"nvidia-curand-cu12==10.3.7.77; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-curand-cu12==10.3.7.77; platform_system == 'Linux' | "
-"nvidia-cusolver-cu12==11.7.1.2; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-cusolver-cu12==11.7.1.2; platform_system == 'Linux' | "
-"nvidia-cusparse-cu12==12.5.4.2; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-cusparse-cu12==12.5.4.2; platform_system == 'Linux' | "
-"nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' | "
-"nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' | "
-"nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' | "
-"nvidia-nvtx-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-nvtx-cu12==12.6.77; platform_system == 'Linux' | "
-"nvidia-nvjitlink-cu12==12.6.85; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-nvjitlink-cu12==12.6.85; platform_system == 'Linux' | "
-"nvidia-cufile-cu12==1.11.1.6; platform_system == 'Linux' and platform_machine == 'x86_64'"
+"nvidia-cufile-cu12==1.11.1.6; platform_system == 'Linux'"
),
"12.8": (
-"nvidia-cuda-nvrtc-cu12==12.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-cuda-nvrtc-cu12==12.8.93; platform_system == 'Linux' | "
-"nvidia-cuda-runtime-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-cuda-runtime-cu12==12.8.90; platform_system == 'Linux' | "
-"nvidia-cuda-cupti-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-cuda-cupti-cu12==12.8.90; platform_system == 'Linux' | "
-"nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' | "
-"nvidia-cublas-cu12==12.8.4.1; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-cublas-cu12==12.8.4.1; platform_system == 'Linux' | "
-"nvidia-cufft-cu12==11.3.3.83; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-cufft-cu12==11.3.3.83; platform_system == 'Linux' | "
-"nvidia-curand-cu12==10.3.9.90; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-curand-cu12==10.3.9.90; platform_system == 'Linux' | "
-"nvidia-cusolver-cu12==11.7.3.90; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-cusolver-cu12==11.7.3.90; platform_system == 'Linux' | "
-"nvidia-cusparse-cu12==12.5.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-cusparse-cu12==12.5.8.93; platform_system == 'Linux' | "
-"nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' | "
-"nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' | "
-"nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' | "
-"nvidia-nvtx-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-nvtx-cu12==12.8.90; platform_system == 'Linux' | "
-"nvidia-nvjitlink-cu12==12.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-nvjitlink-cu12==12.8.93; platform_system == 'Linux' | "
-"nvidia-cufile-cu12==1.13.1.3; platform_system == 'Linux' and platform_machine == 'x86_64'"
+"nvidia-cufile-cu12==1.13.1.3; platform_system == 'Linux'"
),
"13.0": (
-"nvidia-cuda-nvrtc==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-cuda-nvrtc==13.0.48; platform_system == 'Linux' | "
-"nvidia-cuda-runtime==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-cuda-runtime==13.0.48; platform_system == 'Linux' | "
-"nvidia-cuda-cupti==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-cuda-cupti==13.0.48; platform_system == 'Linux' | "
-"nvidia-cudnn-cu13==9.13.0.50; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-cudnn-cu13==9.13.0.50; platform_system == 'Linux' | "
-"nvidia-cublas==13.0.0.19; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-cublas==13.0.0.19; platform_system == 'Linux' | "
-"nvidia-cufft==12.0.0.15; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-cufft==12.0.0.15; platform_system == 'Linux' | "
-"nvidia-curand==10.4.0.35; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-curand==10.4.0.35; platform_system == 'Linux' | "
-"nvidia-cusolver==12.0.3.29; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-cusolver==12.0.3.29; platform_system == 'Linux' | "
-"nvidia-cusparse==12.6.2.49; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-cusparse==12.6.2.49; platform_system == 'Linux' | "
-"nvidia-cusparselt-cu13==0.8.0; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-cusparselt-cu13==0.8.0; platform_system == 'Linux' | "
-"nvidia-nccl-cu13==2.27.7; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-nccl-cu13==2.27.7; platform_system == 'Linux' | "
-"nvidia-nvshmem-cu13==3.3.24; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-nvshmem-cu13==3.3.24; platform_system == 'Linux' | "
-"nvidia-nvtx==13.0.39; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-nvtx==13.0.39; platform_system == 'Linux' | "
-"nvidia-nvjitlink==13.0.39; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+"nvidia-nvjitlink==13.0.39; platform_system == 'Linux' | "
-"nvidia-cufile==1.15.0.42; platform_system == 'Linux' and platform_machine == 'x86_64'"
+"nvidia-cufile==1.15.0.42; platform_system == 'Linux'"
),
"xpu": (
"intel-cmplr-lib-rt==2025.2.1 | "

View File

@@ -135,7 +135,7 @@ ROCM_SMOKE_WORKFLOWS = [
build_configs=generate_binary_build_matrix.generate_wheels_matrix(
OperatingSystem.LINUX,
arches=["6.4"],
-python_versions=["3.9"],
+python_versions=["3.10"],
),
ciflow_config=CIFlowConfig(
labels={

View File

@@ -32,7 +32,7 @@ concurrency:
{%- macro setup_ec2_windows() -%}
!{{ display_ec2_information() }}
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
-uses: pytorch/test-infra/.github/actions/setup-ssh@main
+uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.9
continue-on-error: true
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -56,7 +56,7 @@ jobs:
get-label-type:
if: github.repository_owner == 'pytorch'
name: get-label-type
-uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
with:
triggering_actor: ${{ github.triggering_actor }}
issue_owner: ${{ github.event.pull_request.user.login || github.event.issue.user.login }}
@@ -77,6 +77,9 @@ jobs:
runs_on: linux.s390x
ALPINE_IMAGE: "docker.io/s390x/alpine"
timeout-minutes: 420
+{%- elif config["gpu_arch_type"] == "rocm" %}
+runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
+timeout-minutes: 300
{%- elif "conda" in build_environment and config["gpu_arch_type"] == "cuda" %}
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
runs_on: linux.24xlarge.ephemeral
@@ -135,7 +138,7 @@ jobs:
contents: read
steps:
- name: Setup XPU
-uses: pytorch/pytorch/.github/actions/setup-xpu@main
+uses: pytorch/pytorch/.github/actions/setup-xpu@release/2.9
- name: configure aws credentials
id: aws_creds
uses: aws-actions/configure-aws-credentials@v4
@@ -150,10 +153,10 @@ jobs:
with:
name: !{{ config["build_name"] }}
path: "${{ runner.temp }}/artifacts/"
-!{{ common.checkout(deep_clone=False, directory="pytorch") }}
+!{{ common.checkout(deep_clone=False, directory="pytorch", checkout_pr_head=False) }}
- name: Calculate docker image
id: calculate-docker-image
-uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.9
with:
docker-registry: ${{ startsWith(github.event.ref, 'refs/tags/ciflow/') && '308535385114.dkr.ecr.us-east-1.amazonaws.com' || 'docker.io' }}
docker-image-name: !{{ config["container_image"] }}
@@ -161,7 +164,7 @@ jobs:
docker-build-dir: .ci/docker
working-directory: pytorch
- name: Pull Docker image
-uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.9
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
- name: Test Pytorch binary
@@ -182,7 +185,7 @@ jobs:
with:
name: !{{ config["build_name"] }}
path: "${{ runner.temp }}/artifacts/"
-!{{ common.checkout(deep_clone=False, directory="pytorch") }}
+!{{ common.checkout(deep_clone=False, directory="pytorch", checkout_pr_head=False) }}
- name: ROCm set GPU_FLAG
run: |
echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
@@ -196,7 +199,7 @@ jobs:
role-duration-seconds: 18000
- name: Calculate docker image
id: calculate-docker-image
-uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.9
with:
docker-registry: ${{ startsWith(github.event.ref, 'refs/tags/ciflow/') && '308535385114.dkr.ecr.us-east-1.amazonaws.com' || 'docker.io' }}
docker-image-name: !{{ config["container_image"] }}
@@ -204,7 +207,7 @@ jobs:
docker-build-dir: .ci/docker
working-directory: pytorch
- name: Pull Docker image
-uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.9
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
- name: Test Pytorch binary

View File

@@ -68,7 +68,7 @@ jobs:
chmod +x "${RUNNER_TEMP}/conda.sh"
/bin/bash "${RUNNER_TEMP}/conda.sh" -b -p "${RUNNER_TEMP}/anaconda"
echo "${RUNNER_TEMP}/anaconda/bin" >> "${GITHUB_PATH}"
-!{{ common.checkout(deep_clone=False, directory="pytorch") }}
+!{{ common.checkout(deep_clone=False, directory="pytorch", checkout_pr_head=False) }}
- name: Populate binary env
run: |
# shellcheck disable=SC1091

View File

@@ -64,7 +64,7 @@ jobs:
get-label-type:
if: github.repository_owner == 'pytorch'
name: get-label-type
-uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
with:
triggering_actor: ${{ github.triggering_actor }}
issue_owner: ${{ github.event.pull_request.user.login || github.event.issue.user.login }}
@@ -135,7 +135,7 @@ jobs:
{%- else %}
!{{ set_runner_specific_vars() }}
!{{ common.setup_ec2_windows() }}
-!{{ common.checkout(deep_clone=False, directory="pytorch") }}
+!{{ common.checkout(deep_clone=False, directory="pytorch", checkout_pr_head=False) }}
{%- endif %}
- name: Populate binary env
shell: bash
@@ -211,7 +211,7 @@ jobs:
"pytorch/.ci/pytorch/windows/arm64/bootstrap_rust.bat"
{%- else %}
!{{ common.setup_ec2_windows() }}
-!{{ common.checkout(deep_clone=False, directory="pytorch") }}
+!{{ common.checkout(deep_clone=False, directory="pytorch", checkout_pr_head=False) }}
!{{ set_runner_specific_vars() }}
{%- endif %}
- uses: !{{ common.download_artifact_action }}

View File

@@ -47,7 +47,7 @@ jobs:
reenabled-issues: ${{ steps.filter.outputs.reenabled-issues }}
steps:
- name: Checkout PyTorch
-uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.9
with:
fetch-depth: 1
submodules: false
@@ -69,25 +69,25 @@ jobs:
runs-on: ${{ matrix.runner }}
steps:
- name: Setup SSH (Click me for login details)
-uses: pytorch/test-infra/.github/actions/setup-ssh@main
+uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.9
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
# [see note: pytorch repo ref]
- name: Checkout PyTorch
-uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.9
- name: Setup Linux
uses: ./.github/actions/setup-linux
- name: Calculate docker image
id: calculate-docker-image
-uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.9
with:
docker-image-name: ${{ inputs.docker-image-name }}
- name: Pull docker image
-uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.9
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
@@ -97,7 +97,7 @@ jobs:
run: echo "IN_CONTAINER_RUNNER=$(if [ -f /.inarc ] || [ -f /.incontainer ]; then echo true ; else echo false; fi)" >> "$GITHUB_OUTPUT"
- name: Install nvidia driver, nvidia-docker runtime, set GPU_FLAG
-uses: pytorch/test-infra/.github/actions/setup-nvidia@main
+uses: pytorch/test-infra/.github/actions/setup-nvidia@release/2.9
if: ${{ inputs.cuda-version != 'cpu' && steps.check_container_runner.outputs.IN_CONTAINER_RUNNER == 'false' }}
- name: Output disk space left
@@ -209,5 +209,5 @@ jobs:
file-suffix: bazel-${{ github.job }}_${{ steps.get-job-id.outputs.job-id }}
- name: Teardown Linux
-uses: pytorch/test-infra/.github/actions/teardown-linux@main
+uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.9
if: always()

View File

@@ -142,13 +142,13 @@ jobs:
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
if: inputs.build_environment != 'linux-s390x-binary-manywheel'
-uses: pytorch/test-infra/.github/actions/setup-ssh@main
+uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.9
continue-on-error: true
with:
github-secret: ${{ secrets.github-token }}
- name: Checkout PyTorch
-uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.9
with:
no-sudo: ${{ inputs.build_environment == 'linux-aarch64-binary-manywheel' || inputs.build_environment == 'linux-s390x-binary-manywheel' }}
@@ -178,7 +178,6 @@ jobs:
- name: Checkout PyTorch to pytorch dir
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false
@@ -213,9 +212,9 @@ jobs:
- name: Calculate docker image
id: calculate-docker-image
if: ${{ steps.filter.outputs.is-test-matrix-empty == 'False' && inputs.build_environment != 'linux-s390x-binary-manywheel' }}
-uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.9
with:
-# If doing this in main or release branch, use docker.io. Otherwise
+# If doing this in release/2.9 or release branch, use docker.io. Otherwise
# use ECR
docker-registry: ${{ startsWith(github.event.ref, 'refs/tags/ciflow/') && '308535385114.dkr.ecr.us-east-1.amazonaws.com' || 'docker.io' }}
docker-image-name: ${{ inputs.DOCKER_IMAGE }}
@@ -227,7 +226,7 @@ jobs:
- name: Pull Docker image
if: ${{ steps.filter.outputs.is-test-matrix-empty == 'False' && inputs.build_environment != 'linux-s390x-binary-manywheel' }}
-uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.9
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
@@ -283,7 +282,7 @@ jobs:
- name: Teardown Linux
if: always() && inputs.build_environment != 'linux-s390x-binary-manywheel'
-uses: pytorch/test-infra/.github/actions/teardown-linux@main
+uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.9
- name: Chown workspace
if: always() && inputs.build_environment != 'linux-s390x-binary-manywheel'

View File

@@ -125,14 +125,14 @@ jobs:
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
if: inputs.build_environment != 'linux-s390x-binary-manywheel'
-uses: pytorch/test-infra/.github/actions/setup-ssh@main
+uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.9
continue-on-error: true
with:
github-secret: ${{ secrets.github-token }}
# Setup the environment
- name: Checkout PyTorch
-uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.9
with:
no-sudo: ${{ inputs.build_environment == 'linux-aarch64-binary-manywheel' || inputs.build_environment == 'linux-s390x-binary-manywheel' }}
@@ -155,7 +155,6 @@ jobs:
- name: Checkout PyTorch to pytorch dir
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
show-progress: false
path: pytorch
@@ -186,9 +185,7 @@ jobs:
path: "${{ runner.temp }}/artifacts/"
- name: Install nvidia driver, nvidia-docker runtime, set GPU_FLAG
-uses: pytorch/test-infra/.github/actions/setup-nvidia@main
+uses: pytorch/test-infra/.github/actions/setup-nvidia@release/2.9
-with:
-driver-version: ${{ startsWith(inputs.GPU_ARCH_VERSION, '13') && '580.65.06' || '570.133.07' }}
if: ${{ inputs.GPU_ARCH_TYPE == 'cuda' && steps.filter.outputs.is-test-matrix-empty == 'False' }}
- name: configure aws credentials
@@ -203,7 +200,7 @@ jobs:
- name: Calculate docker image
id: calculate-docker-image
if: ${{ steps.filter.outputs.is-test-matrix-empty == 'False' && inputs.build_environment != 'linux-s390x-binary-manywheel' }}
-uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.9
with:
docker-registry: ${{ startsWith(github.event.ref, 'refs/tags/ciflow/') && '308535385114.dkr.ecr.us-east-1.amazonaws.com' || 'docker.io' }}
docker-image-name: ${{ inputs.DOCKER_IMAGE }}
@@ -213,7 +210,7 @@ jobs:
- name: Pull Docker image
if: ${{ steps.filter.outputs.is-test-matrix-empty == 'False' && inputs.build_environment != 'linux-s390x-binary-manywheel' }}
-uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.9
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
@@ -225,7 +222,7 @@ jobs:
- name: Teardown Linux
if: always() && inputs.build_environment != 'linux-s390x-binary-manywheel'
-uses: pytorch/test-infra/.github/actions/teardown-linux@main
+uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.9
- name: Chown workspace
if: always() && inputs.build_environment != 'linux-s390x-binary-manywheel'

View File

@@ -81,7 +81,7 @@ jobs:
SHA1: ${{ github.event.pull_request.head.sha || github.sha }}
steps:
- name: Checkout PyTorch
-uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.9
with:
no-sudo: true

View File

@@ -67,7 +67,7 @@ jobs:
# an OOM issue when running the job, so this upgrades the runner from 4xlarge
# to the next available tier of 12xlarge. So much memory just to generate cpp
# doc
-runner: ${{ inputs.runner_prefix }}linux.12xlarge
+runner: ${{ inputs.runner_prefix }}linux.12xlarge.memory
# TODO: Nightly cpp docs take longer and longer to finish (more than 3h now)
# Let's try to figure out how this can be improved
timeout-minutes: 360
@@ -84,7 +84,7 @@ jobs:
name: build-docs-${{ matrix.docs_type }}-${{ inputs.push }}
steps:
- name: Setup SSH (Click me for login details)
-uses: pytorch/test-infra/.github/actions/setup-ssh@main
+uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.9
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
instructions: |
@@ -95,7 +95,7 @@ jobs:
# [see note: pytorch repo ref]
- name: Checkout PyTorch
-uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.9
- name: Setup Linux
uses: ./.github/actions/setup-linux
@@ -110,12 +110,12 @@ jobs:
- name: Calculate docker image
id: calculate-docker-image
-uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.9
with:
docker-image-name: ${{ inputs.docker-image }}
- name: Pull docker image
-uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.9
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
@@ -222,5 +222,5 @@ jobs:
s3-prefix: pytorch/pytorch/${{ github.event.pull_request.number }}/functorchdocs
- name: Teardown Linux
-uses: pytorch/test-infra/.github/actions/teardown-linux@main
+uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.9
if: always()

View File

@@ -11,7 +11,7 @@ on:
jobs:
lint-urls:
if: ${{ github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'skip-url-lint') }}
-uses: pytorch/test-infra/.github/workflows/linux_job_v2.yml@main
+uses: pytorch/test-infra/.github/workflows/linux_job_v2.yml@release/2.9
with:
job-name: lint-urls
timeout: 120
@@ -37,7 +37,7 @@ jobs:
lint-xrefs:
if: ${{ github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'skip-xref-lint') }}
-uses: pytorch/test-infra/.github/workflows/linux_job_v2.yml@main
+uses: pytorch/test-infra/.github/workflows/linux_job_v2.yml@release/2.9
with:
job-name: lint-xrefs
timeout: 60

View File

@@ -134,7 +134,7 @@ jobs:
test-matrix: ${{ steps.filter.outputs.test-matrix }}
steps:
- name: Setup SSH (Click me for login details)
-uses: pytorch/test-infra/.github/actions/setup-ssh@main
+uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.9
if: inputs.build-environment != 'linux-s390x-binary-manywheel'
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
@@ -147,7 +147,7 @@ jobs:
# checkout because when we run this action we don't *have* a local
# checkout. In other cases you should prefer a local checkout.
- name: Checkout PyTorch
-uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.9
with:
no-sudo: true
@@ -183,7 +183,7 @@ jobs:
- name: Calculate docker image
id: calculate-docker-image
-uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.9
if: inputs.build-environment != 'linux-s390x-binary-manywheel'
with:
docker-image-name: ${{ inputs.docker-image-name }}
@@ -199,7 +199,7 @@ jobs:
echo "docker pull ghcr.io/pytorch/ci-image:${tag/:/-}"
- name: Pull docker image
-uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.9
if: inputs.build-environment != 'linux-s390x-binary-manywheel' && steps.use-old-whl.outputs.reuse != 'true'
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
@@ -457,7 +457,7 @@ jobs:
artifact_prefix: usage_log_build_${{ steps.get-job-id.outputs.job-id }}
- name: Teardown Linux
-uses: pytorch/test-infra/.github/actions/teardown-linux@main
+uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.9
if: always() && inputs.build-environment != 'linux-s390x-binary-manywheel'
- name: Cleanup docker

View File

@@ -99,7 +99,7 @@ jobs:
contents: read
steps:
- name: Setup SSH (Click me for login details)
-uses: pytorch/test-infra/.github/actions/setup-ssh@main
+uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.9
if: ${{ !contains(matrix.runner, 'b200') && inputs.build-environment != 'linux-s390x-binary-manywheel' }}
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
@@ -108,7 +108,7 @@ jobs:
docker exec -it $(docker container ps --format '{{.ID}}') bash
- name: Checkout PyTorch
-uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.9
with:
no-sudo: true
@@ -139,7 +139,7 @@ jobs:
- name: Calculate docker image
id: calculate-docker-image
-uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.9
if: inputs.build-environment != 'linux-s390x-binary-manywheel'
with:
docker-image-name: ${{ inputs.docker-image }}
@@ -155,7 +155,7 @@ jobs:
echo "docker pull ghcr.io/pytorch/ci-image:${tag/:/-}"
- name: Pull docker image
-uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.9
if: inputs.build-environment != 'linux-s390x-binary-manywheel'
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
@@ -167,9 +167,9 @@ jobs:
- name: Install nvidia driver, nvidia-docker runtime, set GPU_FLAG
id: install-nvidia-driver
-uses: pytorch/test-infra/.github/actions/setup-nvidia@main
+uses: pytorch/test-infra/.github/actions/setup-nvidia@release/2.9
with:
-driver-version: ${{ matrix.config == 'legacy_nvidia_driver' && '525.105.17' || '570.133.07' }}
+driver-version: ${{ matrix.config == 'legacy_nvidia_driver' && '525.105.17' || '580.82.07' }}
if: ${{ contains(inputs.build-environment, 'cuda') && !contains(matrix.config, 'nogpu') && steps.check_container_runner.outputs.IN_CONTAINER_RUNNER == 'false' && !contains(matrix.runner, 'b200') }}
- name: Setup GPU_FLAG for docker run
@@ -273,6 +273,8 @@ jobs:
TEST_CONFIG: ${{ matrix.config }}
SHARD_NUMBER: ${{ matrix.shard }}
NUM_TEST_SHARDS: ${{ matrix.num_shards }}
+EXTRA_FLAGS: ${{ matrix.extra_flags || '' }}
+OP_BENCHMARK_TESTS: ${{ matrix.op_benchmark_tests }}
REENABLED_ISSUES: ${{ steps.keep-going.outputs.reenabled-issues }}
CONTINUE_THROUGH_ERROR: ${{ steps.keep-going.outputs.keep-going }}
VERBOSE_TEST_LOGS: ${{ steps.keep-going.outputs.ci-verbose-test-logs }}
@@ -418,7 +420,7 @@ jobs:
aws-region: us-east-1
- name: Upload the benchmark results
-uses: pytorch/test-infra/.github/actions/upload-benchmark-results@main
+uses: pytorch/test-infra/.github/actions/upload-benchmark-results@release/2.9
if: inputs.build-environment != 'linux-s390x-binary-manywheel'
with:
benchmark-results-dir: test/test-reports
@@ -476,7 +478,7 @@ jobs:
workflow_attempt: ${{github.run_attempt}}
- name: Teardown Linux
-uses: pytorch/test-infra/.github/actions/teardown-linux@main
+uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.9
if: always() && steps.check_container_runner.outputs.IN_CONTAINER_RUNNER == 'false'
# NB: We are currently having an intermittent GPU-related issue on G5 runners with

View File

@@ -67,11 +67,11 @@ jobs:
test-matrix: ${{ steps.filter.outputs.test-matrix }}
steps:
- name: Clean up disk space before running MacOS workflow
-uses: pytorch/test-infra/.github/actions/check-disk-space@main
+uses: pytorch/test-infra/.github/actions/check-disk-space@release/2.9
# [see note: pytorch repo ref]
- name: Checkout PyTorch
-uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.9
- name: Set xcode version
env:
@@ -82,7 +82,7 @@ jobs:
fi
- name: Setup Python
-uses: pytorch/test-infra/.github/actions/setup-python@main
+uses: pytorch/test-infra/.github/actions/setup-python@release/2.9
with:
python-version: ${{ inputs.python-version }}
pip-requirements-file: .github/requirements/pip-requirements-macOS.txt
@@ -188,4 +188,4 @@ jobs:
- name: Clean up disk space
if: always()
continue-on-error: true
-uses: pytorch/test-infra/.github/actions/check-disk-space@main
+uses: pytorch/test-infra/.github/actions/check-disk-space@release/2.9

View File

@@ -105,11 +105,11 @@ jobs:
done
- name: Clean up disk space before running MacOS workflow
-uses: pytorch/test-infra/.github/actions/check-disk-space@main
+uses: pytorch/test-infra/.github/actions/check-disk-space@release/2.9
# [see note: pytorch repo ref]
- name: Checkout PyTorch
-uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.9
- name: Get workflow job id
id: get-job-id
@@ -119,7 +119,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Setup Python
-uses: pytorch/test-infra/.github/actions/setup-python@main
+uses: pytorch/test-infra/.github/actions/setup-python@release/2.9
with:
python-version: ${{ inputs.python-version }}
pip-requirements-file: .github/requirements/pip-requirements-macOS.txt
@@ -257,7 +257,7 @@ jobs:
file-suffix: ${{ github.job }}-${{ matrix.config }}-${{ matrix.shard }}-${{ matrix.num_shards }}-${{ matrix.runner }}_${{ steps.get-job-id.outputs.job-id }}
- name: Upload the benchmark results
-uses: pytorch/test-infra/.github/actions/upload-benchmark-results@main
+uses: pytorch/test-infra/.github/actions/upload-benchmark-results@release/2.9
with:
benchmark-results-dir: test/test-reports
dry-run: false
@@ -287,4 +287,4 @@ jobs:
- name: Clean up disk space
if: always()
continue-on-error: true
-uses: pytorch/test-infra/.github/actions/check-disk-space@main
+uses: pytorch/test-infra/.github/actions/check-disk-space@release/2.9

View File

@@ -81,7 +81,7 @@ jobs:
steps:
# [see note: pytorch repo ref]
- name: Checkout PyTorch
-uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.9
with:
no-sudo: true
@@ -113,12 +113,12 @@ jobs:
- name: Calculate docker image
id: calculate-docker-image
-uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.9
with:
docker-image-name: ${{ inputs.docker-image }}
- name: Pull docker image
-uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.9
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
@@ -330,7 +330,7 @@ jobs:
aws-region: us-east-1
- name: Upload the benchmark results
-uses: pytorch/test-infra/.github/actions/upload-benchmark-results@main
+uses: pytorch/test-infra/.github/actions/upload-benchmark-results@release/2.9
with:
benchmark-results-dir: test/test-reports
dry-run: false

View File

@@ -59,7 +59,7 @@ jobs:
PR_NUMBER: ${{ github.event.pull_request.number }}
steps:
# - name: Checkout PyTorch
-# uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+# uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.9
# with:
# fetch-depth: 1
# submodules: true

View File

@@ -85,10 +85,10 @@ jobs:
git config --global core.fsmonitor false
- name: Clean up leftover processes on non-ephemeral Windows runner
-uses: pytorch/test-infra/.github/actions/cleanup-runner@main
+uses: pytorch/test-infra/.github/actions/cleanup-runner@release/2.9
- name: Setup SSH (Click me for login details)
-uses: pytorch/test-infra/.github/actions/setup-ssh@main
+uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.9
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
instructions: |
@@ -103,7 +103,7 @@ jobs:
# [see note: pytorch repo ref]
- name: Checkout PyTorch
-uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.9
with:
no-sudo: true
@@ -151,7 +151,7 @@ jobs:
BUILD_WHEEL: 1
MAX_JOBS: 8
CUDA_VERSION: ${{ inputs.cuda-version }}
-PYTHON_VERSION: "3.9"
+PYTHON_VERSION: "3.10"
SCCACHE_BUCKET: "ossci-compiler-cache"
SCCACHE_S3_KEY_PREFIX: ${{ github.workflow }}
SCCACHE_REGION: us-east-1

View File

@@ -78,10 +78,10 @@ jobs:
git config --global core.fsmonitor false
- name: Clean up leftover processes on non-ephemeral Windows runner
-uses: pytorch/test-infra/.github/actions/cleanup-runner@main
+uses: pytorch/test-infra/.github/actions/cleanup-runner@release/2.9
- name: Setup SSH (Click me for login details)
-uses: pytorch/test-infra/.github/actions/setup-ssh@main
+uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.9
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
instructions: |
@@ -97,7 +97,7 @@ jobs:
# [see note: pytorch repo ref]
- name: Checkout PyTorch
-uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.9
with:
no-sudo: true
@@ -184,7 +184,7 @@ jobs:
env:
USE_CUDA: ${{ inputs.cuda-version != 'cpu' && '1' || '0' }}
INSTALL_WINDOWS_SDK: 1
-PYTHON_VERSION: 3.9
+PYTHON_VERSION: "3.10"
CONTINUE_THROUGH_ERROR: ${{ steps.keep-going.outputs.keep-going }}
VERBOSE_TEST_LOGS: ${{ steps.keep-going.outputs.ci-verbose-test-logs }}
TEST_SHOWLOCALS: ${{ steps.keep-going.outputs.ci-test-showlocals }}

View File

@@ -77,7 +77,7 @@ jobs:
steps:
# [see note: pytorch repo ref]
- name: Checkout PyTorch
-uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.9
- name: Setup XPU
uses: ./.github/actions/setup-xpu
@@ -95,7 +95,7 @@ jobs:
- name: Calculate docker image
id: calculate-docker-image
-uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.9
with:
docker-image-name: ${{ inputs.docker-image }}
@@ -109,7 +109,7 @@ jobs:
echo "docker pull ghcr.io/pytorch/ci-image:${tag/:/-}"
- name: Pull docker image
-uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.9
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}

View File

@@ -39,7 +39,7 @@ jobs:
tag: ["cuda12.6", "cuda12.8", "cuda12.9", "cuda13.0", "rocm6.3", "rocm6.4", "cpu"]
steps:
- name: Build docker image
-uses: pytorch/pytorch/.github/actions/binary-docker-build@main
+uses: pytorch/pytorch/.github/actions/binary-docker-build@release/2.9
with:
docker-image-name: almalinux-builder
custom-tag-prefix: ${{matrix.tag}}

View File

@@ -32,7 +32,7 @@ jobs:
get-label-type:
if: github.repository_owner == 'pytorch'
name: get-label-type
-uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
with:
triggering_actor: ${{ github.triggering_actor }}
issue_owner: ${{ github.event.pull_request.user.login || github.event.issue.user.login }}
@@ -58,7 +58,7 @@ jobs:
]
steps:
- name: Build docker image
-uses: pytorch/pytorch/.github/actions/binary-docker-build@main
+uses: pytorch/pytorch/.github/actions/binary-docker-build@release/2.9
with:
docker-image-name: libtorch-cxx11-builder
custom-tag-prefix: ${{ matrix.tag }}

View File

@@ -25,7 +25,7 @@ jobs:
runs-on: linux.s390x
steps:
- name: Checkout PyTorch
-uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.9
with:
submodules: false
no-sudo: true

View File

@@ -32,7 +32,7 @@ jobs:
get-label-type:
if: github.repository_owner == 'pytorch'
name: get-label-type
-uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
with:
triggering_actor: ${{ github.triggering_actor }}
issue_owner: ${{ github.event.pull_request.user.login || github.event.issue.user.login }}
@@ -63,7 +63,7 @@ jobs:
name: ${{ matrix.name }}:${{ matrix.tag }}
steps:
- name: Build docker image
-uses: pytorch/pytorch/.github/actions/binary-docker-build@main
+uses: pytorch/pytorch/.github/actions/binary-docker-build@release/2.9
with:
docker-image-name: ${{ matrix.name }}
custom-tag-prefix: ${{ matrix.tag }}

View File

@@ -3,7 +3,7 @@ name: Build Triton wheels
on:
push:
branches:
-- main
+- release/2.9
tags:
# NOTE: Binary build pipelines should only get triggered on release candidate builds
# Release candidate tags look like: v1.11.0-rc1
@@ -36,7 +36,7 @@ jobs:
get-label-type:
if: github.repository_owner == 'pytorch'
name: get-label-type
-uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
with:
triggering_actor: ${{ github.triggering_actor }}
issue_owner: ${{ github.event.pull_request.user.login || github.event.issue.user.login }}
@@ -74,12 +74,12 @@ jobs:
PLATFORM: 'manylinux_2_28_x86_64'
steps:
- name: Setup SSH (Click me for login details)
-uses: pytorch/test-infra/.github/actions/setup-ssh@main
+uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.9
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
- name: Checkout PyTorch
-uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.9
with:
submodules: false
@@ -87,7 +87,7 @@ jobs:
uses: ./.github/actions/setup-linux
- name: Pull Docker image
-uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.9
with:
docker-image: ${{ env.DOCKER_IMAGE }}
@@ -184,7 +184,7 @@ jobs:
path: ${{ runner.temp }}/artifacts/wheelhouse/*
- name: Teardown Linux
-uses: pytorch/test-infra/.github/actions/teardown-linux@main
+uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.9
if: always()
build-wheel-win:
@@ -217,7 +217,7 @@ jobs:
echo "instance-type: $(get_ec2_metadata instance-type)"
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
-uses: pytorch/test-infra/.github/actions/setup-ssh@main
+uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.9
continue-on-error: true
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -42,12 +42,12 @@ jobs:
BUILD_DEVICE: ${{ matrix.device }}
steps:
- name: Setup SSH (Click me for login details)
-uses: pytorch/test-infra/.github/actions/setup-ssh@main
+uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.9
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
- name: Checkout PyTorch
-uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.9
with:
submodules: false
@@ -170,7 +170,7 @@ jobs:
path: ${{ runner.temp }}/artifacts/externals/vllm/wheels/*.whl
- name: Teardown Linux
-uses: pytorch/test-infra/.github/actions/teardown-linux@main
+uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.9
if: always()
# Copied from build-triton-wheel workflow (mostly)

View File

@@ -38,7 +38,7 @@ jobs:
runs-on: linux.24_04.4x
steps:
- name: Checkout PyTorch
-uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.9
with:
submodules: false
fetch-depth: 1

View File

@@ -13,7 +13,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout PyTorch
-uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.9
with:
submodules: false
fetch-depth: 1

View File

@@ -19,7 +19,7 @@ jobs:
  get-label-type:
  if: github.repository_owner == 'pytorch'
  name: get-label-type
- uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+ uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
  with:
  triggering_actor: ${{ github.triggering_actor }}
  issue_owner: ${{ github.event.pull_request.user.login || github.event.issue.user.login }}

View File

@@ -33,7 +33,7 @@ jobs:
  get-label-type:
  if: github.repository_owner == 'pytorch'
  name: get-label-type
- uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+ uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
  with:
  triggering_actor: ${{ github.triggering_actor }}
  issue_owner: ${{ github.event.pull_request.user.login || github.event.issue.user.login }}
@@ -70,7 +70,7 @@ jobs:
  pytorch-linux-jammy-py3-clang18-asan,
  pytorch-linux-jammy-py3-clang12-onnx,
  pytorch-linux-jammy-linter,
- pytorch-linux-jammy-cuda12.8-cudnn9-py3.9-linter,
+ pytorch-linux-jammy-cuda12.8-cudnn9-py3.10-linter,
  # Executorch pin needs update
  # pytorch-linux-jammy-py3-clang12-executorch,
  pytorch-linux-jammy-py3.12-triton-cpu,
@@ -96,21 +96,21 @@ jobs:
  # [see note: pytorch repo ref]
  # deep clone (fetch-depth 0) required for git merge-base
  - name: Checkout PyTorch
- uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+ uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.9
  - name: Setup Linux
  uses: ./.github/actions/setup-linux
  - name: Build docker image
  id: build-docker-image
- uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+ uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.9
  with:
  docker-image-name: ci-image:${{ matrix.docker-image-name }}
  always-rebuild: true
  push: true
  - name: Pull docker image
- uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+ uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.9
  with:
  docker-image: ${{ steps.build-docker-image.outputs.docker-image }}
@@ -141,5 +141,5 @@ jobs:
  if: always()
  - name: Teardown Linux
- uses: pytorch/test-infra/.github/actions/teardown-linux@main
+ uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.9
  if: always()

View File

@@ -20,7 +20,7 @@ jobs:
  runs-on: rocm-docker
  steps:
  - name: Checkout PyTorch
- uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+ uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.9
  with:
  no-sudo: true
@@ -39,13 +39,13 @@ jobs:
  - name: Calculate docker image
  id: calculate-docker-image
- uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+ uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.9
  with:
  docker-image-name: ci-image:pytorch-linux-jammy-rocm-n-py3
  push: false
  - name: Pull docker image
- uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+ uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.9
  with:
  docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}

View File

@@ -37,7 +37,7 @@ jobs:
  get-label-type:
  if: github.repository_owner == 'pytorch'
  name: get-label-type
- uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+ uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
  with:
  triggering_actor: ${{ github.triggering_actor }}
  issue_owner: ${{ github.event.pull_request.user.login || github.event.issue.user.login }}
@@ -52,7 +52,7 @@ jobs:
  matrix: ${{ steps.generate-matrix.outputs.matrix }}
  steps:
  - name: Checkout PyTorch
- uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+ uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.9
  with:
  fetch-depth: 1
  submodules: true
@@ -82,7 +82,7 @@ jobs:
  CUDNN_VERSION: ${{ matrix.cudnn_version }}
  steps:
  - name: Setup SSH (Click me for login details)
- uses: pytorch/test-infra/.github/actions/setup-ssh@main
+ uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.9
  with:
  github-secret: ${{ secrets.GITHUB_TOKEN }}
  # [see note: pytorch repo ref]
@@ -164,12 +164,12 @@ jobs:
  fi
  - name: Teardown Linux
- uses: pytorch/test-infra/.github/actions/teardown-linux@main
+ uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.9
  if: always()
  validate:
  needs: build
- uses: pytorch/test-infra/.github/workflows/validate-docker-images.yml@main
+ uses: pytorch/test-infra/.github/workflows/validate-docker-images.yml@release/2.9
  with:
- channel: nightly
+ channel: test
  ref: main

View File

@@ -41,7 +41,7 @@ jobs:
  get-label-type:
  if: github.repository_owner == 'pytorch'
  name: get-label-type
- uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+ uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
  with:
  triggering_actor: ${{ github.triggering_actor }}
  issue_owner: ${{ github.event.pull_request.user.login || github.event.issue.user.login }}
@@ -132,7 +132,7 @@ jobs:
  ALPINE_IMAGE: "arm64v8/alpine"
  build_name: manywheel-py3_10-cuda-aarch64-12_6
  build_environment: linux-aarch64-binary-manywheel
- PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.6.80; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.6.4.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.3.0.4; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.7.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.7.1.2; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.5.4.2; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink-cu12==12.6.85; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile-cu12==1.11.1.6; platform_system == 'Linux' and platform_machine == 'x86_64'
+ PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.6.77; platform_system == 'Linux' | nvidia-cuda-runtime-cu12==12.6.77; platform_system == 'Linux' | nvidia-cuda-cupti-cu12==12.6.80; platform_system == 'Linux' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' | nvidia-cublas-cu12==12.6.4.1; platform_system == 'Linux' | nvidia-cufft-cu12==11.3.0.4; platform_system == 'Linux' | nvidia-curand-cu12==10.3.7.77; platform_system == 'Linux' | nvidia-cusolver-cu12==11.7.1.2; platform_system == 'Linux' | nvidia-cusparse-cu12==12.5.4.2; platform_system == 'Linux' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' | nvidia-nvtx-cu12==12.6.77; platform_system == 'Linux' | nvidia-nvjitlink-cu12==12.6.85; platform_system == 'Linux' | nvidia-cufile-cu12==1.11.1.6; platform_system == 'Linux'
  timeout-minutes: 420
  secrets:
  github-token: ${{ secrets.GITHUB_TOKEN }}
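The hunks below repeat the same change for every Python version (3.10 through 3.14t) and CUDA variant (12.6, 12.8, 13.0): the `and platform_machine == 'x86_64'` clause is dropped from each PEP 508 environment marker so the NVIDIA dependency wheels are also installed on aarch64 Linux. A minimal sketch of how such markers evaluate, assuming the `packaging` library (the same marker logic pip uses); the script is illustrative and not part of the workflow:

```
# Evaluate the old vs. new environment markers on x86_64 and aarch64 Linux.
from packaging.markers import Marker

old_marker = Marker("platform_system == 'Linux' and platform_machine == 'x86_64'")
new_marker = Marker("platform_system == 'Linux'")

for machine in ("x86_64", "aarch64"):
    env = {"platform_system": "Linux", "platform_machine": machine}
    print(
        machine,
        "old:", old_marker.evaluate(environment=env),  # aarch64 -> False (deps skipped)
        "new:", new_marker.evaluate(environment=env),  # aarch64 -> True  (deps installed)
    )
```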
@@ -178,7 +178,7 @@ jobs:
  ALPINE_IMAGE: "arm64v8/alpine"
  build_name: manywheel-py3_10-cuda-aarch64-12_8
  build_environment: linux-aarch64-binary-manywheel
- PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.8.4.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.3.3.83; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.9.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.7.3.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.5.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink-cu12==12.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile-cu12==1.13.1.3; platform_system == 'Linux' and platform_machine == 'x86_64'
+ PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.8.93; platform_system == 'Linux' | nvidia-cuda-runtime-cu12==12.8.90; platform_system == 'Linux' | nvidia-cuda-cupti-cu12==12.8.90; platform_system == 'Linux' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' | nvidia-cublas-cu12==12.8.4.1; platform_system == 'Linux' | nvidia-cufft-cu12==11.3.3.83; platform_system == 'Linux' | nvidia-curand-cu12==10.3.9.90; platform_system == 'Linux' | nvidia-cusolver-cu12==11.7.3.90; platform_system == 'Linux' | nvidia-cusparse-cu12==12.5.8.93; platform_system == 'Linux' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' | nvidia-nvtx-cu12==12.8.90; platform_system == 'Linux' | nvidia-nvjitlink-cu12==12.8.93; platform_system == 'Linux' | nvidia-cufile-cu12==1.13.1.3; platform_system == 'Linux'
  timeout-minutes: 420
  secrets:
  github-token: ${{ secrets.GITHUB_TOKEN }}
@@ -224,7 +224,7 @@ jobs:
  ALPINE_IMAGE: "arm64v8/alpine"
  build_name: manywheel-py3_10-cuda-aarch64-13_0
  build_environment: linux-aarch64-binary-manywheel
- PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu13==9.13.0.50; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas==13.0.0.19; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft==12.0.0.15; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand==10.4.0.35; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver==12.0.3.29; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse==12.6.2.49; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu13==0.8.0; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu13==2.27.7; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu13==3.3.24; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx==13.0.39; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink==13.0.39; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile==1.15.0.42; platform_system == 'Linux' and platform_machine == 'x86_64'
+ PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc==13.0.48; platform_system == 'Linux' | nvidia-cuda-runtime==13.0.48; platform_system == 'Linux' | nvidia-cuda-cupti==13.0.48; platform_system == 'Linux' | nvidia-cudnn-cu13==9.13.0.50; platform_system == 'Linux' | nvidia-cublas==13.0.0.19; platform_system == 'Linux' | nvidia-cufft==12.0.0.15; platform_system == 'Linux' | nvidia-curand==10.4.0.35; platform_system == 'Linux' | nvidia-cusolver==12.0.3.29; platform_system == 'Linux' | nvidia-cusparse==12.6.2.49; platform_system == 'Linux' | nvidia-cusparselt-cu13==0.8.0; platform_system == 'Linux' | nvidia-nccl-cu13==2.27.7; platform_system == 'Linux' | nvidia-nvshmem-cu13==3.3.24; platform_system == 'Linux' | nvidia-nvtx==13.0.39; platform_system == 'Linux' | nvidia-nvjitlink==13.0.39; platform_system == 'Linux' | nvidia-cufile==1.15.0.42; platform_system == 'Linux'
  timeout-minutes: 420
  secrets:
  github-token: ${{ secrets.GITHUB_TOKEN }}
@@ -335,7 +335,7 @@ jobs:
  ALPINE_IMAGE: "arm64v8/alpine"
  build_name: manywheel-py3_11-cuda-aarch64-12_6
  build_environment: linux-aarch64-binary-manywheel
- PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.6.80; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.6.4.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.3.0.4; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.7.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.7.1.2; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.5.4.2; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink-cu12==12.6.85; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile-cu12==1.11.1.6; platform_system == 'Linux' and platform_machine == 'x86_64'
+ PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.6.77; platform_system == 'Linux' | nvidia-cuda-runtime-cu12==12.6.77; platform_system == 'Linux' | nvidia-cuda-cupti-cu12==12.6.80; platform_system == 'Linux' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' | nvidia-cublas-cu12==12.6.4.1; platform_system == 'Linux' | nvidia-cufft-cu12==11.3.0.4; platform_system == 'Linux' | nvidia-curand-cu12==10.3.7.77; platform_system == 'Linux' | nvidia-cusolver-cu12==11.7.1.2; platform_system == 'Linux' | nvidia-cusparse-cu12==12.5.4.2; platform_system == 'Linux' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' | nvidia-nvtx-cu12==12.6.77; platform_system == 'Linux' | nvidia-nvjitlink-cu12==12.6.85; platform_system == 'Linux' | nvidia-cufile-cu12==1.11.1.6; platform_system == 'Linux'
  timeout-minutes: 420
  secrets:
  github-token: ${{ secrets.GITHUB_TOKEN }}
@@ -381,7 +381,7 @@ jobs:
  ALPINE_IMAGE: "arm64v8/alpine"
  build_name: manywheel-py3_11-cuda-aarch64-12_8
  build_environment: linux-aarch64-binary-manywheel
- PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.8.4.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.3.3.83; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.9.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.7.3.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.5.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink-cu12==12.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile-cu12==1.13.1.3; platform_system == 'Linux' and platform_machine == 'x86_64'
+ PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.8.93; platform_system == 'Linux' | nvidia-cuda-runtime-cu12==12.8.90; platform_system == 'Linux' | nvidia-cuda-cupti-cu12==12.8.90; platform_system == 'Linux' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' | nvidia-cublas-cu12==12.8.4.1; platform_system == 'Linux' | nvidia-cufft-cu12==11.3.3.83; platform_system == 'Linux' | nvidia-curand-cu12==10.3.9.90; platform_system == 'Linux' | nvidia-cusolver-cu12==11.7.3.90; platform_system == 'Linux' | nvidia-cusparse-cu12==12.5.8.93; platform_system == 'Linux' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' | nvidia-nvtx-cu12==12.8.90; platform_system == 'Linux' | nvidia-nvjitlink-cu12==12.8.93; platform_system == 'Linux' | nvidia-cufile-cu12==1.13.1.3; platform_system == 'Linux'
  timeout-minutes: 420
  secrets:
  github-token: ${{ secrets.GITHUB_TOKEN }}
@@ -427,7 +427,7 @@ jobs:
  ALPINE_IMAGE: "arm64v8/alpine"
  build_name: manywheel-py3_11-cuda-aarch64-13_0
  build_environment: linux-aarch64-binary-manywheel
- PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu13==9.13.0.50; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas==13.0.0.19; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft==12.0.0.15; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand==10.4.0.35; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver==12.0.3.29; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse==12.6.2.49; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu13==0.8.0; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu13==2.27.7; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu13==3.3.24; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx==13.0.39; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink==13.0.39; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile==1.15.0.42; platform_system == 'Linux' and platform_machine == 'x86_64'
+ PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc==13.0.48; platform_system == 'Linux' | nvidia-cuda-runtime==13.0.48; platform_system == 'Linux' | nvidia-cuda-cupti==13.0.48; platform_system == 'Linux' | nvidia-cudnn-cu13==9.13.0.50; platform_system == 'Linux' | nvidia-cublas==13.0.0.19; platform_system == 'Linux' | nvidia-cufft==12.0.0.15; platform_system == 'Linux' | nvidia-curand==10.4.0.35; platform_system == 'Linux' | nvidia-cusolver==12.0.3.29; platform_system == 'Linux' | nvidia-cusparse==12.6.2.49; platform_system == 'Linux' | nvidia-cusparselt-cu13==0.8.0; platform_system == 'Linux' | nvidia-nccl-cu13==2.27.7; platform_system == 'Linux' | nvidia-nvshmem-cu13==3.3.24; platform_system == 'Linux' | nvidia-nvtx==13.0.39; platform_system == 'Linux' | nvidia-nvjitlink==13.0.39; platform_system == 'Linux' | nvidia-cufile==1.15.0.42; platform_system == 'Linux'
  timeout-minutes: 420
  secrets:
  github-token: ${{ secrets.GITHUB_TOKEN }}
@@ -538,7 +538,7 @@ jobs:
  ALPINE_IMAGE: "arm64v8/alpine"
  build_name: manywheel-py3_12-cuda-aarch64-12_6
  build_environment: linux-aarch64-binary-manywheel
- PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.6.80; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.6.4.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.3.0.4; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.7.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.7.1.2; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.5.4.2; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink-cu12==12.6.85; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile-cu12==1.11.1.6; platform_system == 'Linux' and platform_machine == 'x86_64'
+ PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.6.77; platform_system == 'Linux' | nvidia-cuda-runtime-cu12==12.6.77; platform_system == 'Linux' | nvidia-cuda-cupti-cu12==12.6.80; platform_system == 'Linux' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' | nvidia-cublas-cu12==12.6.4.1; platform_system == 'Linux' | nvidia-cufft-cu12==11.3.0.4; platform_system == 'Linux' | nvidia-curand-cu12==10.3.7.77; platform_system == 'Linux' | nvidia-cusolver-cu12==11.7.1.2; platform_system == 'Linux' | nvidia-cusparse-cu12==12.5.4.2; platform_system == 'Linux' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' | nvidia-nvtx-cu12==12.6.77; platform_system == 'Linux' | nvidia-nvjitlink-cu12==12.6.85; platform_system == 'Linux' | nvidia-cufile-cu12==1.11.1.6; platform_system == 'Linux'
  timeout-minutes: 420
  secrets:
  github-token: ${{ secrets.GITHUB_TOKEN }}
@@ -584,7 +584,7 @@ jobs:
  ALPINE_IMAGE: "arm64v8/alpine"
  build_name: manywheel-py3_12-cuda-aarch64-12_8
  build_environment: linux-aarch64-binary-manywheel
- PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.8.4.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.3.3.83; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.9.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.7.3.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.5.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink-cu12==12.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile-cu12==1.13.1.3; platform_system == 'Linux' and platform_machine == 'x86_64'
+ PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.8.93; platform_system == 'Linux' | nvidia-cuda-runtime-cu12==12.8.90; platform_system == 'Linux' | nvidia-cuda-cupti-cu12==12.8.90; platform_system == 'Linux' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' | nvidia-cublas-cu12==12.8.4.1; platform_system == 'Linux' | nvidia-cufft-cu12==11.3.3.83; platform_system == 'Linux' | nvidia-curand-cu12==10.3.9.90; platform_system == 'Linux' | nvidia-cusolver-cu12==11.7.3.90; platform_system == 'Linux' | nvidia-cusparse-cu12==12.5.8.93; platform_system == 'Linux' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' | nvidia-nvtx-cu12==12.8.90; platform_system == 'Linux' | nvidia-nvjitlink-cu12==12.8.93; platform_system == 'Linux' | nvidia-cufile-cu12==1.13.1.3; platform_system == 'Linux'
  timeout-minutes: 420
  secrets:
  github-token: ${{ secrets.GITHUB_TOKEN }}
@@ -630,7 +630,7 @@ jobs:
  ALPINE_IMAGE: "arm64v8/alpine"
  build_name: manywheel-py3_12-cuda-aarch64-13_0
  build_environment: linux-aarch64-binary-manywheel
- PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu13==9.13.0.50; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas==13.0.0.19; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft==12.0.0.15; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand==10.4.0.35; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver==12.0.3.29; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse==12.6.2.49; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu13==0.8.0; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu13==2.27.7; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu13==3.3.24; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx==13.0.39; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink==13.0.39; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile==1.15.0.42; platform_system == 'Linux' and platform_machine == 'x86_64'
+ PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc==13.0.48; platform_system == 'Linux' | nvidia-cuda-runtime==13.0.48; platform_system == 'Linux' | nvidia-cuda-cupti==13.0.48; platform_system == 'Linux' | nvidia-cudnn-cu13==9.13.0.50; platform_system == 'Linux' | nvidia-cublas==13.0.0.19; platform_system == 'Linux' | nvidia-cufft==12.0.0.15; platform_system == 'Linux' | nvidia-curand==10.4.0.35; platform_system == 'Linux' | nvidia-cusolver==12.0.3.29; platform_system == 'Linux' | nvidia-cusparse==12.6.2.49; platform_system == 'Linux' | nvidia-cusparselt-cu13==0.8.0; platform_system == 'Linux' | nvidia-nccl-cu13==2.27.7; platform_system == 'Linux' | nvidia-nvshmem-cu13==3.3.24; platform_system == 'Linux' | nvidia-nvtx==13.0.39; platform_system == 'Linux' | nvidia-nvjitlink==13.0.39; platform_system == 'Linux' | nvidia-cufile==1.15.0.42; platform_system == 'Linux'
  timeout-minutes: 420
  secrets:
  github-token: ${{ secrets.GITHUB_TOKEN }}
@@ -741,7 +741,7 @@ jobs:
  ALPINE_IMAGE: "arm64v8/alpine"
  build_name: manywheel-py3_13-cuda-aarch64-12_6
  build_environment: linux-aarch64-binary-manywheel
- PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.6.80; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.6.4.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.3.0.4; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.7.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.7.1.2; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.5.4.2; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink-cu12==12.6.85; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile-cu12==1.11.1.6; platform_system == 'Linux' and platform_machine == 'x86_64'
+ PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.6.77; platform_system == 'Linux' | nvidia-cuda-runtime-cu12==12.6.77; platform_system == 'Linux' | nvidia-cuda-cupti-cu12==12.6.80; platform_system == 'Linux' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' | nvidia-cublas-cu12==12.6.4.1; platform_system == 'Linux' | nvidia-cufft-cu12==11.3.0.4; platform_system == 'Linux' | nvidia-curand-cu12==10.3.7.77; platform_system == 'Linux' | nvidia-cusolver-cu12==11.7.1.2; platform_system == 'Linux' | nvidia-cusparse-cu12==12.5.4.2; platform_system == 'Linux' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' | nvidia-nvtx-cu12==12.6.77; platform_system == 'Linux' | nvidia-nvjitlink-cu12==12.6.85; platform_system == 'Linux' | nvidia-cufile-cu12==1.11.1.6; platform_system == 'Linux'
  timeout-minutes: 420
  secrets:
  github-token: ${{ secrets.GITHUB_TOKEN }}
@@ -787,7 +787,7 @@ jobs:
  ALPINE_IMAGE: "arm64v8/alpine"
  build_name: manywheel-py3_13-cuda-aarch64-12_8
  build_environment: linux-aarch64-binary-manywheel
- PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.8.4.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.3.3.83; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.9.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.7.3.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.5.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink-cu12==12.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile-cu12==1.13.1.3; platform_system == 'Linux' and platform_machine == 'x86_64'
+ PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.8.93; platform_system == 'Linux' | nvidia-cuda-runtime-cu12==12.8.90; platform_system == 'Linux' | nvidia-cuda-cupti-cu12==12.8.90; platform_system == 'Linux' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' | nvidia-cublas-cu12==12.8.4.1; platform_system == 'Linux' | nvidia-cufft-cu12==11.3.3.83; platform_system == 'Linux' | nvidia-curand-cu12==10.3.9.90; platform_system == 'Linux' | nvidia-cusolver-cu12==11.7.3.90; platform_system == 'Linux' | nvidia-cusparse-cu12==12.5.8.93; platform_system == 'Linux' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' | nvidia-nvtx-cu12==12.8.90; platform_system == 'Linux' | nvidia-nvjitlink-cu12==12.8.93; platform_system == 'Linux' | nvidia-cufile-cu12==1.13.1.3; platform_system == 'Linux'
  timeout-minutes: 420
  secrets:
  github-token: ${{ secrets.GITHUB_TOKEN }}
@@ -833,7 +833,7 @@ jobs:
  ALPINE_IMAGE: "arm64v8/alpine"
  build_name: manywheel-py3_13-cuda-aarch64-13_0
  build_environment: linux-aarch64-binary-manywheel
- PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu13==9.13.0.50; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas==13.0.0.19; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft==12.0.0.15; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand==10.4.0.35; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver==12.0.3.29; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse==12.6.2.49; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu13==0.8.0; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu13==2.27.7; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu13==3.3.24; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx==13.0.39; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink==13.0.39; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile==1.15.0.42; platform_system == 'Linux' and platform_machine == 'x86_64'
+ PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc==13.0.48; platform_system == 'Linux' | nvidia-cuda-runtime==13.0.48; platform_system == 'Linux' | nvidia-cuda-cupti==13.0.48; platform_system == 'Linux' | nvidia-cudnn-cu13==9.13.0.50; platform_system == 'Linux' | nvidia-cublas==13.0.0.19; platform_system == 'Linux' | nvidia-cufft==12.0.0.15; platform_system == 'Linux' | nvidia-curand==10.4.0.35; platform_system == 'Linux' | nvidia-cusolver==12.0.3.29; platform_system == 'Linux' | nvidia-cusparse==12.6.2.49; platform_system == 'Linux' | nvidia-cusparselt-cu13==0.8.0; platform_system == 'Linux' | nvidia-nccl-cu13==2.27.7; platform_system == 'Linux' | nvidia-nvshmem-cu13==3.3.24; platform_system == 'Linux' | nvidia-nvtx==13.0.39; platform_system == 'Linux' | nvidia-nvjitlink==13.0.39; platform_system == 'Linux' | nvidia-cufile==1.15.0.42; platform_system == 'Linux'
  timeout-minutes: 420
  secrets:
  github-token: ${{ secrets.GITHUB_TOKEN }}
@@ -944,7 +944,7 @@ jobs:
  ALPINE_IMAGE: "arm64v8/alpine"
  build_name: manywheel-py3_13t-cuda-aarch64-12_6
  build_environment: linux-aarch64-binary-manywheel
- PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.6.80; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.6.4.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.3.0.4; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.7.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.7.1.2; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.5.4.2; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink-cu12==12.6.85; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile-cu12==1.11.1.6; platform_system == 'Linux' and platform_machine == 'x86_64'
+ PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.6.77; platform_system == 'Linux' | nvidia-cuda-runtime-cu12==12.6.77; platform_system == 'Linux' | nvidia-cuda-cupti-cu12==12.6.80; platform_system == 'Linux' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' | nvidia-cublas-cu12==12.6.4.1; platform_system == 'Linux' | nvidia-cufft-cu12==11.3.0.4; platform_system == 'Linux' | nvidia-curand-cu12==10.3.7.77; platform_system == 'Linux' | nvidia-cusolver-cu12==11.7.1.2; platform_system == 'Linux' | nvidia-cusparse-cu12==12.5.4.2; platform_system == 'Linux' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' | nvidia-nvtx-cu12==12.6.77; platform_system == 'Linux' | nvidia-nvjitlink-cu12==12.6.85; platform_system == 'Linux' | nvidia-cufile-cu12==1.11.1.6; platform_system == 'Linux'
  timeout-minutes: 420
  secrets:
  github-token: ${{ secrets.GITHUB_TOKEN }}
@@ -990,7 +990,7 @@ jobs:
  ALPINE_IMAGE: "arm64v8/alpine"
  build_name: manywheel-py3_13t-cuda-aarch64-12_8
  build_environment: linux-aarch64-binary-manywheel
- PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.8.4.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.3.3.83; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.9.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.7.3.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.5.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink-cu12==12.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile-cu12==1.13.1.3; platform_system == 'Linux' and platform_machine == 'x86_64'
+ PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.8.93; platform_system == 'Linux' | nvidia-cuda-runtime-cu12==12.8.90; platform_system == 'Linux' | nvidia-cuda-cupti-cu12==12.8.90; platform_system == 'Linux' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' | nvidia-cublas-cu12==12.8.4.1; platform_system == 'Linux' | nvidia-cufft-cu12==11.3.3.83; platform_system == 'Linux' | nvidia-curand-cu12==10.3.9.90; platform_system == 'Linux' | nvidia-cusolver-cu12==11.7.3.90; platform_system == 'Linux' | nvidia-cusparse-cu12==12.5.8.93; platform_system == 'Linux' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' | nvidia-nvtx-cu12==12.8.90; platform_system == 'Linux' | nvidia-nvjitlink-cu12==12.8.93; platform_system == 'Linux' | nvidia-cufile-cu12==1.13.1.3; platform_system == 'Linux'
  timeout-minutes: 420
  secrets:
  github-token: ${{ secrets.GITHUB_TOKEN }}
@@ -1036,7 +1036,7 @@ jobs:
  ALPINE_IMAGE: "arm64v8/alpine"
  build_name: manywheel-py3_13t-cuda-aarch64-13_0
  build_environment: linux-aarch64-binary-manywheel
- PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu13==9.13.0.50; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas==13.0.0.19; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft==12.0.0.15; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand==10.4.0.35; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver==12.0.3.29; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse==12.6.2.49; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu13==0.8.0; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu13==2.27.7; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu13==3.3.24; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx==13.0.39; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink==13.0.39; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile==1.15.0.42; platform_system == 'Linux' and platform_machine == 'x86_64'
+ PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc==13.0.48; platform_system == 'Linux' | nvidia-cuda-runtime==13.0.48; platform_system == 'Linux' | nvidia-cuda-cupti==13.0.48; platform_system == 'Linux' | nvidia-cudnn-cu13==9.13.0.50; platform_system == 'Linux' | nvidia-cublas==13.0.0.19; platform_system == 'Linux' | nvidia-cufft==12.0.0.15; platform_system == 'Linux' | nvidia-curand==10.4.0.35; platform_system == 'Linux' | nvidia-cusolver==12.0.3.29; platform_system == 'Linux' | nvidia-cusparse==12.6.2.49; platform_system == 'Linux' | nvidia-cusparselt-cu13==0.8.0; platform_system == 'Linux' | nvidia-nccl-cu13==2.27.7; platform_system == 'Linux' | nvidia-nvshmem-cu13==3.3.24; platform_system == 'Linux' | nvidia-nvtx==13.0.39; platform_system == 'Linux' | nvidia-nvjitlink==13.0.39; platform_system == 'Linux' | nvidia-cufile==1.15.0.42; platform_system == 'Linux'
  timeout-minutes: 420
  secrets:
  github-token: ${{ secrets.GITHUB_TOKEN }}
@@ -1147,7 +1147,7 @@ jobs:
  ALPINE_IMAGE: "arm64v8/alpine"
  build_name: manywheel-py3_14-cuda-aarch64-12_6
  build_environment: linux-aarch64-binary-manywheel
- PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.6.80; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.6.4.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.3.0.4; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.7.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.7.1.2; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.5.4.2; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink-cu12==12.6.85; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile-cu12==1.11.1.6; platform_system == 'Linux' and platform_machine == 'x86_64'
+ PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.6.77; platform_system == 'Linux' | nvidia-cuda-runtime-cu12==12.6.77; platform_system == 'Linux' | nvidia-cuda-cupti-cu12==12.6.80; platform_system == 'Linux' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' | nvidia-cublas-cu12==12.6.4.1; platform_system == 'Linux' | nvidia-cufft-cu12==11.3.0.4; platform_system == 'Linux' | nvidia-curand-cu12==10.3.7.77; platform_system == 'Linux' | nvidia-cusolver-cu12==11.7.1.2; platform_system == 'Linux' | nvidia-cusparse-cu12==12.5.4.2; platform_system == 'Linux' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' | nvidia-nvtx-cu12==12.6.77; platform_system == 'Linux' | nvidia-nvjitlink-cu12==12.6.85; platform_system == 'Linux' | nvidia-cufile-cu12==1.11.1.6; platform_system == 'Linux'
  timeout-minutes: 420
  secrets:
  github-token: ${{ secrets.GITHUB_TOKEN }}
@@ -1193,7 +1193,7 @@ jobs:
  ALPINE_IMAGE: "arm64v8/alpine"
  build_name: manywheel-py3_14-cuda-aarch64-12_8
  build_environment: linux-aarch64-binary-manywheel
- PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.8.4.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.3.3.83; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.9.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.7.3.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.5.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink-cu12==12.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile-cu12==1.13.1.3; platform_system == 'Linux' and platform_machine == 'x86_64'
+ PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.8.93; platform_system == 'Linux' | nvidia-cuda-runtime-cu12==12.8.90; platform_system == 'Linux' | nvidia-cuda-cupti-cu12==12.8.90; platform_system == 'Linux' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' | nvidia-cublas-cu12==12.8.4.1; platform_system == 'Linux' | nvidia-cufft-cu12==11.3.3.83; platform_system == 'Linux' | nvidia-curand-cu12==10.3.9.90; platform_system == 'Linux' | nvidia-cusolver-cu12==11.7.3.90; platform_system == 'Linux' | nvidia-cusparse-cu12==12.5.8.93; platform_system == 'Linux' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' | nvidia-nvtx-cu12==12.8.90; platform_system == 'Linux' | nvidia-nvjitlink-cu12==12.8.93; platform_system == 'Linux' | nvidia-cufile-cu12==1.13.1.3; platform_system == 'Linux'
  timeout-minutes: 420
  secrets:
  github-token: ${{ secrets.GITHUB_TOKEN }}
@@ -1239,7 +1239,7 @@ jobs:
  ALPINE_IMAGE: "arm64v8/alpine"
  build_name: manywheel-py3_14-cuda-aarch64-13_0
  build_environment: linux-aarch64-binary-manywheel
- PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu13==9.13.0.50; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas==13.0.0.19; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft==12.0.0.15; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand==10.4.0.35; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver==12.0.3.29; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse==12.6.2.49; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu13==0.8.0; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu13==2.27.7; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu13==3.3.24; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx==13.0.39; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink==13.0.39; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile==1.15.0.42; platform_system == 'Linux' and platform_machine == 'x86_64'
+ PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc==13.0.48; platform_system == 'Linux' | nvidia-cuda-runtime==13.0.48; platform_system == 'Linux' | nvidia-cuda-cupti==13.0.48; platform_system == 'Linux' | nvidia-cudnn-cu13==9.13.0.50; platform_system == 'Linux' | nvidia-cublas==13.0.0.19; platform_system == 'Linux' | nvidia-cufft==12.0.0.15; platform_system == 'Linux' | nvidia-curand==10.4.0.35; platform_system == 'Linux' | nvidia-cusolver==12.0.3.29; platform_system == 'Linux' | nvidia-cusparse==12.6.2.49; platform_system == 'Linux' | nvidia-cusparselt-cu13==0.8.0; platform_system == 'Linux' | nvidia-nccl-cu13==2.27.7; platform_system == 'Linux' | nvidia-nvshmem-cu13==3.3.24; platform_system == 'Linux' | nvidia-nvtx==13.0.39; platform_system == 'Linux' | nvidia-nvjitlink==13.0.39; platform_system == 'Linux' | nvidia-cufile==1.15.0.42; platform_system == 'Linux'
  timeout-minutes: 420
  secrets:
  github-token: ${{ secrets.GITHUB_TOKEN }}
@@ -1350,7 +1350,7 @@ jobs:
  ALPINE_IMAGE: "arm64v8/alpine"
  build_name: manywheel-py3_14t-cuda-aarch64-12_6
  build_environment: linux-aarch64-binary-manywheel
- PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.6.80; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.6.4.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.3.0.4; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.7.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.7.1.2; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.5.4.2; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink-cu12==12.6.85; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile-cu12==1.11.1.6; platform_system == 'Linux' and platform_machine == 'x86_64'
+ PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.6.77; platform_system == 'Linux' | nvidia-cuda-runtime-cu12==12.6.77; platform_system == 'Linux' | nvidia-cuda-cupti-cu12==12.6.80; platform_system == 'Linux' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' | nvidia-cublas-cu12==12.6.4.1; platform_system == 'Linux' | nvidia-cufft-cu12==11.3.0.4; platform_system == 'Linux' | nvidia-curand-cu12==10.3.7.77; platform_system == 'Linux' | nvidia-cusolver-cu12==11.7.1.2; platform_system == 'Linux' | nvidia-cusparse-cu12==12.5.4.2; platform_system == 'Linux' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' | nvidia-nvtx-cu12==12.6.77; platform_system == 'Linux' | nvidia-nvjitlink-cu12==12.6.85; platform_system == 'Linux' | nvidia-cufile-cu12==1.11.1.6; platform_system == 'Linux'
  timeout-minutes: 420
  secrets:
  github-token: ${{ secrets.GITHUB_TOKEN }}
@@ -1396,7 +1396,7 @@ jobs:
  ALPINE_IMAGE: "arm64v8/alpine"
  build_name: manywheel-py3_14t-cuda-aarch64-12_8
  build_environment: linux-aarch64-binary-manywheel
- PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.8.4.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.3.3.83; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.9.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.7.3.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.5.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink-cu12==12.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile-cu12==1.13.1.3; platform_system == 'Linux' and platform_machine == 'x86_64'
+ PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.8.93; platform_system == 'Linux' | nvidia-cuda-runtime-cu12==12.8.90; platform_system == 'Linux' | nvidia-cuda-cupti-cu12==12.8.90; platform_system == 'Linux' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' | nvidia-cublas-cu12==12.8.4.1; platform_system == 'Linux' | nvidia-cufft-cu12==11.3.3.83; platform_system == 'Linux' | nvidia-curand-cu12==10.3.9.90; platform_system == 'Linux' | nvidia-cusolver-cu12==11.7.3.90; platform_system == 'Linux' | nvidia-cusparse-cu12==12.5.8.93; platform_system == 'Linux' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' | nvidia-nvtx-cu12==12.8.90; platform_system == 'Linux' | nvidia-nvjitlink-cu12==12.8.93; platform_system == 'Linux' | nvidia-cufile-cu12==1.13.1.3; platform_system == 'Linux'
  timeout-minutes: 420
  secrets:
  github-token: ${{ secrets.GITHUB_TOKEN }}
@@ -1442,7 +1442,7 @@ jobs:
  ALPINE_IMAGE: "arm64v8/alpine"
  build_name: manywheel-py3_14t-cuda-aarch64-13_0
  build_environment: linux-aarch64-binary-manywheel
- PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu13==9.13.0.50; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas==13.0.0.19; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft==12.0.0.15; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand==10.4.0.35; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver==12.0.3.29; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse==12.6.2.49; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu13==0.8.0; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu13==2.27.7; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu13==3.3.24; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx==13.0.39; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink==13.0.39; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile==1.15.0.42; platform_system == 'Linux' and platform_machine == 'x86_64'
+ PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc==13.0.48; platform_system == 'Linux' | nvidia-cuda-runtime==13.0.48; platform_system == 'Linux' | nvidia-cuda-cupti==13.0.48; platform_system == 'Linux' | nvidia-cudnn-cu13==9.13.0.50; platform_system == 'Linux' | nvidia-cublas==13.0.0.19; platform_system == 'Linux' | nvidia-cufft==12.0.0.15; platform_system == 'Linux' | nvidia-curand==10.4.0.35; platform_system == 'Linux' | nvidia-cusolver==12.0.3.29; platform_system == 'Linux' | nvidia-cusparse==12.6.2.49; platform_system == 'Linux' | nvidia-cusparselt-cu13==0.8.0; platform_system == 'Linux' | nvidia-nccl-cu13==2.27.7; platform_system == 'Linux' | nvidia-nvshmem-cu13==3.3.24; platform_system == 'Linux' | nvidia-nvtx==13.0.39; platform_system == 'Linux' | nvidia-nvjitlink==13.0.39; platform_system == 'Linux' | nvidia-cufile==1.15.0.42; platform_system == 'Linux'
timeout-minutes: 420 timeout-minutes: 420
secrets: secrets:
github-token: ${{ secrets.GITHUB_TOKEN }} github-token: ${{ secrets.GITHUB_TOKEN }}
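The only substantive change in each of the requirement lists above is the environment marker: the `and platform_machine == 'x86_64'` clause is dropped, so the pinned NVIDIA wheels also resolve on aarch64 Linux hosts instead of being skipped. A minimal sketch of how such PEP 508 markers evaluate, using the `packaging` library (the marker strings are copied from the diff; everything else here is illustrative):

```
from packaging.markers import Marker

old = Marker("platform_system == 'Linux' and platform_machine == 'x86_64'")
new = Marker("platform_system == 'Linux'")

# On an aarch64 Linux host the old marker filters the dependency out,
# while the relaxed marker keeps it.
aarch64 = {"platform_system": "Linux", "platform_machine": "aarch64"}
print(old.evaluate(environment=aarch64))  # False -> wheel not installed
print(new.evaluate(environment=aarch64))  # True  -> wheel installed
```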

View File

@@ -41,7 +41,7 @@ jobs:
get-label-type:
if: github.repository_owner == 'pytorch'
name: get-label-type
-uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
with:
triggering_actor: ${{ github.triggering_actor }}
issue_owner: ${{ github.event.pull_request.user.login || github.event.issue.user.login }}
@@ -333,6 +333,7 @@ jobs:
LIBTORCH_CONFIG: release
LIBTORCH_VARIANT: shared-with-deps
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
+timeout-minutes: 300
build_name: libtorch-rocm6_3-shared-with-deps-release
build_environment: linux-binary-libtorch
secrets:
@@ -368,7 +369,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false
@@ -390,7 +390,7 @@ jobs:
role-duration-seconds: 18000
- name: Calculate docker image
id: calculate-docker-image
-uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.9
with:
docker-registry: ${{ startsWith(github.event.ref, 'refs/tags/ciflow/') && '308535385114.dkr.ecr.us-east-1.amazonaws.com' || 'docker.io' }}
docker-image-name: libtorch-cxx11-builder
@@ -398,7 +398,7 @@ jobs:
docker-build-dir: .ci/docker
working-directory: pytorch
- name: Pull Docker image
-uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.9
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
- name: Test Pytorch binary
@@ -447,6 +447,7 @@ jobs:
LIBTORCH_CONFIG: release
LIBTORCH_VARIANT: shared-with-deps
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
+timeout-minutes: 300
build_name: libtorch-rocm6_4-shared-with-deps-release
build_environment: linux-binary-libtorch
secrets:
@@ -482,7 +483,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false
@@ -504,7 +504,7 @@ jobs:
role-duration-seconds: 18000
- name: Calculate docker image
id: calculate-docker-image
-uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.9
with:
docker-registry: ${{ startsWith(github.event.ref, 'refs/tags/ciflow/') && '308535385114.dkr.ecr.us-east-1.amazonaws.com' || 'docker.io' }}
docker-image-name: libtorch-cxx11-builder
@@ -512,7 +512,7 @@ jobs:
docker-build-dir: .ci/docker
working-directory: pytorch
- name: Pull Docker image
-uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.9
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
- name: Test Pytorch binary

View File

@@ -36,7 +36,7 @@ jobs:
get-label-type:
if: github.repository_owner == 'pytorch'
name: get-label-type
-uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
with:
triggering_actor: ${{ github.triggering_actor }}
issue_owner: ${{ github.event.pull_request.user.login || github.event.issue.user.login }}

View File

@@ -36,7 +36,7 @@ jobs:
get-label-type:
if: github.repository_owner == 'pytorch'
name: get-label-type
-uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
with:
triggering_actor: ${{ github.triggering_actor }}
issue_owner: ${{ github.event.pull_request.user.login || github.event.issue.user.login }}
@@ -60,7 +60,7 @@ jobs:
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
build_name: manywheel-py3_12-cuda12_8
build_environment: linux-binary-manywheel
-PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.8.4.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.3.3.83; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.9.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.7.3.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.5.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink-cu12==12.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile-cu12==1.13.1.3; platform_system == 'Linux' and platform_machine == 'x86_64'
+PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.8.93; platform_system == 'Linux' | nvidia-cuda-runtime-cu12==12.8.90; platform_system == 'Linux' | nvidia-cuda-cupti-cu12==12.8.90; platform_system == 'Linux' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' | nvidia-cublas-cu12==12.8.4.1; platform_system == 'Linux' | nvidia-cufft-cu12==11.3.3.83; platform_system == 'Linux' | nvidia-curand-cu12==10.3.9.90; platform_system == 'Linux' | nvidia-cusolver-cu12==11.7.3.90; platform_system == 'Linux' | nvidia-cusparse-cu12==12.5.8.93; platform_system == 'Linux' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' | nvidia-nvtx-cu12==12.8.90; platform_system == 'Linux' | nvidia-nvjitlink-cu12==12.8.93; platform_system == 'Linux' | nvidia-cufile-cu12==1.13.1.3; platform_system == 'Linux'
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_12-cuda12_8-test: # Testing
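PYTORCH_EXTRA_INSTALL_REQUIREMENTS packs all pinned dependencies into a single value separated by ` | `, each entry being a PEP 508 requirement string carrying its own marker. A rough sketch of how a value in this format could be split and inspected (the two sample entries are copied from the diff above; the splitting code itself is illustrative, not taken from the build scripts):

```
from packaging.requirements import Requirement

extra = (
    "nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' | "
    "nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux'"
)

for entry in extra.split(" | "):
    req = Requirement(entry)
    # e.g. nvidia-nccl-cu12 ==2.27.5 platform_system == "Linux"
    print(req.name, req.specifier, req.marker)
```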

View File

@@ -41,7 +41,7 @@ jobs:
get-label-type:
if: github.repository_owner == 'pytorch'
name: get-label-type
-uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
with:
triggering_actor: ${{ github.triggering_actor }}
issue_owner: ${{ github.event.pull_request.user.login || github.event.issue.user.login }}
@@ -127,7 +127,7 @@ jobs:
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
build_name: manywheel-py3_10-cuda12_6
build_environment: linux-binary-manywheel
-PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.6.80; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.6.4.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.3.0.4; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.7.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.7.1.2; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.5.4.2; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink-cu12==12.6.85; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile-cu12==1.11.1.6; platform_system == 'Linux' and platform_machine == 'x86_64'
+PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.6.77; platform_system == 'Linux' | nvidia-cuda-runtime-cu12==12.6.77; platform_system == 'Linux' | nvidia-cuda-cupti-cu12==12.6.80; platform_system == 'Linux' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' | nvidia-cublas-cu12==12.6.4.1; platform_system == 'Linux' | nvidia-cufft-cu12==11.3.0.4; platform_system == 'Linux' | nvidia-curand-cu12==10.3.7.77; platform_system == 'Linux' | nvidia-cusolver-cu12==11.7.1.2; platform_system == 'Linux' | nvidia-cusparse-cu12==12.5.4.2; platform_system == 'Linux' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' | nvidia-nvtx-cu12==12.6.77; platform_system == 'Linux' | nvidia-nvjitlink-cu12==12.6.85; platform_system == 'Linux' | nvidia-cufile-cu12==1.11.1.6; platform_system == 'Linux'
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_10-cuda12_6-test: # Testing
@@ -193,7 +193,7 @@ jobs:
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
build_name: manywheel-py3_10-cuda12_8
build_environment: linux-binary-manywheel
-PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.8.4.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.3.3.83; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.9.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.7.3.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.5.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink-cu12==12.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile-cu12==1.13.1.3; platform_system == 'Linux' and platform_machine == 'x86_64'
+PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.8.93; platform_system == 'Linux' | nvidia-cuda-runtime-cu12==12.8.90; platform_system == 'Linux' | nvidia-cuda-cupti-cu12==12.8.90; platform_system == 'Linux' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' | nvidia-cublas-cu12==12.8.4.1; platform_system == 'Linux' | nvidia-cufft-cu12==11.3.3.83; platform_system == 'Linux' | nvidia-curand-cu12==10.3.9.90; platform_system == 'Linux' | nvidia-cusolver-cu12==11.7.3.90; platform_system == 'Linux' | nvidia-cusparse-cu12==12.5.8.93; platform_system == 'Linux' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' | nvidia-nvtx-cu12==12.8.90; platform_system == 'Linux' | nvidia-nvjitlink-cu12==12.8.93; platform_system == 'Linux' | nvidia-cufile-cu12==1.13.1.3; platform_system == 'Linux'
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_10-cuda12_8-test: # Testing
@@ -259,7 +259,7 @@ jobs:
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
build_name: manywheel-py3_10-cuda13_0
build_environment: linux-binary-manywheel
-PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu13==9.13.0.50; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas==13.0.0.19; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft==12.0.0.15; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand==10.4.0.35; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver==12.0.3.29; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse==12.6.2.49; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu13==0.8.0; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu13==2.27.7; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu13==3.3.24; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx==13.0.39; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink==13.0.39; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile==1.15.0.42; platform_system == 'Linux' and platform_machine == 'x86_64'
+PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc==13.0.48; platform_system == 'Linux' | nvidia-cuda-runtime==13.0.48; platform_system == 'Linux' | nvidia-cuda-cupti==13.0.48; platform_system == 'Linux' | nvidia-cudnn-cu13==9.13.0.50; platform_system == 'Linux' | nvidia-cublas==13.0.0.19; platform_system == 'Linux' | nvidia-cufft==12.0.0.15; platform_system == 'Linux' | nvidia-curand==10.4.0.35; platform_system == 'Linux' | nvidia-cusolver==12.0.3.29; platform_system == 'Linux' | nvidia-cusparse==12.6.2.49; platform_system == 'Linux' | nvidia-cusparselt-cu13==0.8.0; platform_system == 'Linux' | nvidia-nccl-cu13==2.27.7; platform_system == 'Linux' | nvidia-nvshmem-cu13==3.3.24; platform_system == 'Linux' | nvidia-nvtx==13.0.39; platform_system == 'Linux' | nvidia-nvjitlink==13.0.39; platform_system == 'Linux' | nvidia-cufile==1.15.0.42; platform_system == 'Linux'
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_10-cuda13_0-test: # Testing
@@ -323,6 +323,7 @@ jobs:
DOCKER_IMAGE_TAG_PREFIX: rocm6.3
DESIRED_PYTHON: "3.10"
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
+timeout-minutes: 300
build_name: manywheel-py3_10-rocm6_3
build_environment: linux-binary-manywheel
secrets:
@@ -357,7 +358,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false
@@ -379,7 +379,7 @@ jobs:
role-duration-seconds: 18000
- name: Calculate docker image
id: calculate-docker-image
-uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.9
with:
docker-registry: ${{ startsWith(github.event.ref, 'refs/tags/ciflow/') && '308535385114.dkr.ecr.us-east-1.amazonaws.com' || 'docker.io' }}
docker-image-name: manylinux2_28-builder
@@ -387,7 +387,7 @@ jobs:
docker-build-dir: .ci/docker
working-directory: pytorch
- name: Pull Docker image
-uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.9
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
- name: Test Pytorch binary
@@ -434,6 +434,7 @@ jobs:
DOCKER_IMAGE_TAG_PREFIX: rocm6.4
DESIRED_PYTHON: "3.10"
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
+timeout-minutes: 300
build_name: manywheel-py3_10-rocm6_4
build_environment: linux-binary-manywheel
secrets:
@@ -468,7 +469,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false
@@ -490,7 +490,7 @@ jobs:
role-duration-seconds: 18000
- name: Calculate docker image
id: calculate-docker-image
-uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.9
with:
docker-registry: ${{ startsWith(github.event.ref, 'refs/tags/ciflow/') && '308535385114.dkr.ecr.us-east-1.amazonaws.com' || 'docker.io' }}
docker-image-name: manylinux2_28-builder
@@ -498,7 +498,7 @@ jobs:
docker-build-dir: .ci/docker
working-directory: pytorch
- name: Pull Docker image
-uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.9
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
- name: Test Pytorch binary
@@ -572,7 +572,7 @@ jobs:
contents: read
steps:
- name: Setup XPU
-uses: pytorch/pytorch/.github/actions/setup-xpu@main
+uses: pytorch/pytorch/.github/actions/setup-xpu@release/2.9
- name: configure aws credentials
id: aws_creds
uses: aws-actions/configure-aws-credentials@v4
@@ -590,7 +590,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false
@@ -601,7 +600,7 @@ jobs:
working-directory: pytorch
- name: Calculate docker image
id: calculate-docker-image
-uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.9
with:
docker-registry: ${{ startsWith(github.event.ref, 'refs/tags/ciflow/') && '308535385114.dkr.ecr.us-east-1.amazonaws.com' || 'docker.io' }}
docker-image-name: manylinux2_28-builder
@@ -609,7 +608,7 @@ jobs:
docker-build-dir: .ci/docker
working-directory: pytorch
- name: Pull Docker image
-uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.9
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
- name: Test Pytorch binary
@@ -719,7 +718,7 @@ jobs:
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
build_name: manywheel-py3_11-cuda12_6
build_environment: linux-binary-manywheel
-PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.6.80; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.6.4.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.3.0.4; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.7.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.7.1.2; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.5.4.2; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink-cu12==12.6.85; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile-cu12==1.11.1.6; platform_system == 'Linux' and platform_machine == 'x86_64'
+PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.6.77; platform_system == 'Linux' | nvidia-cuda-runtime-cu12==12.6.77; platform_system == 'Linux' | nvidia-cuda-cupti-cu12==12.6.80; platform_system == 'Linux' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' | nvidia-cublas-cu12==12.6.4.1; platform_system == 'Linux' | nvidia-cufft-cu12==11.3.0.4; platform_system == 'Linux' | nvidia-curand-cu12==10.3.7.77; platform_system == 'Linux' | nvidia-cusolver-cu12==11.7.1.2; platform_system == 'Linux' | nvidia-cusparse-cu12==12.5.4.2; platform_system == 'Linux' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' | nvidia-nvtx-cu12==12.6.77; platform_system == 'Linux' | nvidia-nvjitlink-cu12==12.6.85; platform_system == 'Linux' | nvidia-cufile-cu12==1.11.1.6; platform_system == 'Linux'
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_11-cuda12_6-test: # Testing
@@ -785,7 +784,7 @@ jobs:
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
build_name: manywheel-py3_11-cuda12_8
build_environment: linux-binary-manywheel
-PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.8.4.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.3.3.83; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.9.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.7.3.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.5.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink-cu12==12.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile-cu12==1.13.1.3; platform_system == 'Linux' and platform_machine == 'x86_64'
+PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.8.93; platform_system == 'Linux' | nvidia-cuda-runtime-cu12==12.8.90; platform_system == 'Linux' | nvidia-cuda-cupti-cu12==12.8.90; platform_system == 'Linux' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' | nvidia-cublas-cu12==12.8.4.1; platform_system == 'Linux' | nvidia-cufft-cu12==11.3.3.83; platform_system == 'Linux' | nvidia-curand-cu12==10.3.9.90; platform_system == 'Linux' | nvidia-cusolver-cu12==11.7.3.90; platform_system == 'Linux' | nvidia-cusparse-cu12==12.5.8.93; platform_system == 'Linux' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' | nvidia-nvtx-cu12==12.8.90; platform_system == 'Linux' | nvidia-nvjitlink-cu12==12.8.93; platform_system == 'Linux' | nvidia-cufile-cu12==1.13.1.3; platform_system == 'Linux'
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_11-cuda12_8-test: # Testing
@@ -851,7 +850,7 @@ jobs:
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
build_name: manywheel-py3_11-cuda13_0
build_environment: linux-binary-manywheel
-PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu13==9.13.0.50; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas==13.0.0.19; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft==12.0.0.15; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand==10.4.0.35; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver==12.0.3.29; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse==12.6.2.49; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu13==0.8.0; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu13==2.27.7; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu13==3.3.24; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx==13.0.39; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink==13.0.39; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile==1.15.0.42; platform_system == 'Linux' and platform_machine == 'x86_64'
+PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc==13.0.48; platform_system == 'Linux' | nvidia-cuda-runtime==13.0.48; platform_system == 'Linux' | nvidia-cuda-cupti==13.0.48; platform_system == 'Linux' | nvidia-cudnn-cu13==9.13.0.50; platform_system == 'Linux' | nvidia-cublas==13.0.0.19; platform_system == 'Linux' | nvidia-cufft==12.0.0.15; platform_system == 'Linux' | nvidia-curand==10.4.0.35; platform_system == 'Linux' | nvidia-cusolver==12.0.3.29; platform_system == 'Linux' | nvidia-cusparse==12.6.2.49; platform_system == 'Linux' | nvidia-cusparselt-cu13==0.8.0; platform_system == 'Linux' | nvidia-nccl-cu13==2.27.7; platform_system == 'Linux' | nvidia-nvshmem-cu13==3.3.24; platform_system == 'Linux' | nvidia-nvtx==13.0.39; platform_system == 'Linux' | nvidia-nvjitlink==13.0.39; platform_system == 'Linux' | nvidia-cufile==1.15.0.42; platform_system == 'Linux'
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_11-cuda13_0-test: # Testing
@@ -915,6 +914,7 @@ jobs:
DOCKER_IMAGE_TAG_PREFIX: rocm6.3
DESIRED_PYTHON: "3.11"
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
+timeout-minutes: 300
build_name: manywheel-py3_11-rocm6_3
build_environment: linux-binary-manywheel
secrets:
@@ -949,7 +949,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false
@@ -971,7 +970,7 @@ jobs:
role-duration-seconds: 18000
- name: Calculate docker image
id: calculate-docker-image
-uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.9
with:
docker-registry: ${{ startsWith(github.event.ref, 'refs/tags/ciflow/') && '308535385114.dkr.ecr.us-east-1.amazonaws.com' || 'docker.io' }}
docker-image-name: manylinux2_28-builder
@@ -979,7 +978,7 @@ jobs:
docker-build-dir: .ci/docker
working-directory: pytorch
- name: Pull Docker image
-uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.9
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
- name: Test Pytorch binary
@@ -1026,6 +1025,7 @@ jobs:
DOCKER_IMAGE_TAG_PREFIX: rocm6.4
DESIRED_PYTHON: "3.11"
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
+timeout-minutes: 300
build_name: manywheel-py3_11-rocm6_4
build_environment: linux-binary-manywheel
secrets:
@@ -1060,7 +1060,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false
@@ -1082,7 +1081,7 @@ jobs:
role-duration-seconds: 18000
- name: Calculate docker image
id: calculate-docker-image
-uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.9
with:
docker-registry: ${{ startsWith(github.event.ref, 'refs/tags/ciflow/') && '308535385114.dkr.ecr.us-east-1.amazonaws.com' || 'docker.io' }}
docker-image-name: manylinux2_28-builder
@@ -1090,7 +1089,7 @@ jobs:
docker-build-dir: .ci/docker
working-directory: pytorch
- name: Pull Docker image
-uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.9
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
- name: Test Pytorch binary
@@ -1164,7 +1163,7 @@ jobs:
contents: read
steps:
- name: Setup XPU
-uses: pytorch/pytorch/.github/actions/setup-xpu@main
+uses: pytorch/pytorch/.github/actions/setup-xpu@release/2.9
- name: configure aws credentials
id: aws_creds
uses: aws-actions/configure-aws-credentials@v4
@@ -1182,7 +1181,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false
@@ -1193,7 +1191,7 @@ jobs:
working-directory: pytorch
- name: Calculate docker image
id: calculate-docker-image
-uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.9
with:
docker-registry: ${{ startsWith(github.event.ref, 'refs/tags/ciflow/') && '308535385114.dkr.ecr.us-east-1.amazonaws.com' || 'docker.io' }}
docker-image-name: manylinux2_28-builder
@@ -1201,7 +1199,7 @@ jobs:
docker-build-dir: .ci/docker
working-directory: pytorch
- name: Pull Docker image
-uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.9
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
- name: Test Pytorch binary
@@ -1311,7 +1309,7 @@ jobs:
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
build_name: manywheel-py3_12-cuda12_6
build_environment: linux-binary-manywheel
-PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.6.80; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.6.4.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.3.0.4; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.7.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.7.1.2; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.5.4.2; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink-cu12==12.6.85; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile-cu12==1.11.1.6; platform_system == 'Linux' and platform_machine == 'x86_64'
+PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.6.77; platform_system == 'Linux' | nvidia-cuda-runtime-cu12==12.6.77; platform_system == 'Linux' | nvidia-cuda-cupti-cu12==12.6.80; platform_system == 'Linux' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' | nvidia-cublas-cu12==12.6.4.1; platform_system == 'Linux' | nvidia-cufft-cu12==11.3.0.4; platform_system == 'Linux' | nvidia-curand-cu12==10.3.7.77; platform_system == 'Linux' | nvidia-cusolver-cu12==11.7.1.2; platform_system == 'Linux' | nvidia-cusparse-cu12==12.5.4.2; platform_system == 'Linux' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' | nvidia-nvtx-cu12==12.6.77; platform_system == 'Linux' | nvidia-nvjitlink-cu12==12.6.85; platform_system == 'Linux' | nvidia-cufile-cu12==1.11.1.6; platform_system == 'Linux'
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_12-cuda12_6-test: # Testing
@@ -1377,7 +1375,7 @@ jobs:
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
build_name: manywheel-py3_12-cuda12_8
build_environment: linux-binary-manywheel
-PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.8.4.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.3.3.83; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.9.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.7.3.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.5.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink-cu12==12.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile-cu12==1.13.1.3; platform_system == 'Linux' and platform_machine == 'x86_64'
+PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.8.93; platform_system == 'Linux' | nvidia-cuda-runtime-cu12==12.8.90; platform_system == 'Linux' | nvidia-cuda-cupti-cu12==12.8.90; platform_system == 'Linux' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' | nvidia-cublas-cu12==12.8.4.1; platform_system == 'Linux' | nvidia-cufft-cu12==11.3.3.83; platform_system == 'Linux' | nvidia-curand-cu12==10.3.9.90; platform_system == 'Linux' | nvidia-cusolver-cu12==11.7.3.90; platform_system == 'Linux' | nvidia-cusparse-cu12==12.5.8.93; platform_system == 'Linux' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' | nvidia-nvtx-cu12==12.8.90; platform_system == 'Linux' | nvidia-nvjitlink-cu12==12.8.93; platform_system == 'Linux' | nvidia-cufile-cu12==1.13.1.3; platform_system == 'Linux'
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_12-cuda12_8-test: # Testing
@@ -1443,7 +1441,7 @@ jobs:
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
build_name: manywheel-py3_12-cuda13_0
build_environment: linux-binary-manywheel
-PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu13==9.13.0.50; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas==13.0.0.19; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft==12.0.0.15; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand==10.4.0.35; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver==12.0.3.29; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse==12.6.2.49; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu13==0.8.0; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu13==2.27.7; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu13==3.3.24; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx==13.0.39; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink==13.0.39; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile==1.15.0.42; platform_system == 'Linux' and platform_machine == 'x86_64'
+PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc==13.0.48; platform_system == 'Linux' | nvidia-cuda-runtime==13.0.48; platform_system == 'Linux' | nvidia-cuda-cupti==13.0.48; platform_system == 'Linux' | nvidia-cudnn-cu13==9.13.0.50; platform_system == 'Linux' | nvidia-cublas==13.0.0.19; platform_system == 'Linux' | nvidia-cufft==12.0.0.15; platform_system == 'Linux' | nvidia-curand==10.4.0.35; platform_system == 'Linux' | nvidia-cusolver==12.0.3.29; platform_system == 'Linux' | nvidia-cusparse==12.6.2.49; platform_system == 'Linux' | nvidia-cusparselt-cu13==0.8.0; platform_system == 'Linux' | nvidia-nccl-cu13==2.27.7; platform_system == 'Linux' | nvidia-nvshmem-cu13==3.3.24; platform_system == 'Linux' | nvidia-nvtx==13.0.39; platform_system == 'Linux' | nvidia-nvjitlink==13.0.39; platform_system == 'Linux' | nvidia-cufile==1.15.0.42; platform_system == 'Linux'
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_12-cuda13_0-test: # Testing
@@ -1507,6 +1505,7 @@ jobs:
DOCKER_IMAGE_TAG_PREFIX: rocm6.3
DESIRED_PYTHON: "3.12"
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
+timeout-minutes: 300
build_name: manywheel-py3_12-rocm6_3
build_environment: linux-binary-manywheel
secrets:
@@ -1541,7 +1540,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false
@@ -1563,7 +1561,7 @@ jobs:
role-duration-seconds: 18000
- name: Calculate docker image
id: calculate-docker-image
-uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.9
with:
docker-registry: ${{ startsWith(github.event.ref, 'refs/tags/ciflow/') && '308535385114.dkr.ecr.us-east-1.amazonaws.com' || 'docker.io' }}
docker-image-name: manylinux2_28-builder
@@ -1571,7 +1569,7 @@ jobs:
docker-build-dir: .ci/docker
working-directory: pytorch
- name: Pull Docker image
-uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.9
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
- name: Test Pytorch binary
@@ -1618,6 +1616,7 @@ jobs:
DOCKER_IMAGE_TAG_PREFIX: rocm6.4
DESIRED_PYTHON: "3.12"
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
+timeout-minutes: 300
build_name: manywheel-py3_12-rocm6_4
build_environment: linux-binary-manywheel
secrets:
@@ -1652,7 +1651,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false
@@ -1674,7 +1672,7 @@ jobs:
role-duration-seconds: 18000
- name: Calculate docker image
id: calculate-docker-image
-uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.9
with:
docker-registry: ${{ startsWith(github.event.ref, 'refs/tags/ciflow/') && '308535385114.dkr.ecr.us-east-1.amazonaws.com' || 'docker.io' }}
docker-image-name: manylinux2_28-builder
@@ -1682,7 +1680,7 @@ jobs:
docker-build-dir: .ci/docker
working-directory: pytorch
- name: Pull Docker image
-uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.9
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
- name: Test Pytorch binary
@ -1756,7 +1754,7 @@ jobs:
contents: read contents: read
steps: steps:
- name: Setup XPU - name: Setup XPU
uses: pytorch/pytorch/.github/actions/setup-xpu@main uses: pytorch/pytorch/.github/actions/setup-xpu@release/2.9
- name: configure aws credentials - name: configure aws credentials
id: aws_creds id: aws_creds
uses: aws-actions/configure-aws-credentials@v4 uses: aws-actions/configure-aws-credentials@v4
@ -1774,7 +1772,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: actions/checkout@v4 uses: actions/checkout@v4
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
show-progress: false show-progress: false
@ -1785,7 +1782,7 @@ jobs:
working-directory: pytorch working-directory: pytorch
- name: Calculate docker image - name: Calculate docker image
id: calculate-docker-image id: calculate-docker-image
uses: pytorch/test-infra/.github/actions/calculate-docker-image@main uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.9
with: with:
docker-registry: ${{ startsWith(github.event.ref, 'refs/tags/ciflow/') && '308535385114.dkr.ecr.us-east-1.amazonaws.com' || 'docker.io' }} docker-registry: ${{ startsWith(github.event.ref, 'refs/tags/ciflow/') && '308535385114.dkr.ecr.us-east-1.amazonaws.com' || 'docker.io' }}
docker-image-name: manylinux2_28-builder docker-image-name: manylinux2_28-builder
@ -1793,7 +1790,7 @@ jobs:
docker-build-dir: .ci/docker docker-build-dir: .ci/docker
working-directory: pytorch working-directory: pytorch
- name: Pull Docker image - name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.9
with: with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }} docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
- name: Test Pytorch binary - name: Test Pytorch binary
@ -1903,7 +1900,7 @@ jobs:
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}" runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
build_name: manywheel-py3_13-cuda12_6 build_name: manywheel-py3_13-cuda12_6
build_environment: linux-binary-manywheel build_environment: linux-binary-manywheel
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.6.80; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.6.4.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.3.0.4; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.7.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.7.1.2; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.5.4.2; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink-cu12==12.6.85; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile-cu12==1.11.1.6; platform_system == 'Linux' and platform_machine == 'x86_64' PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.6.77; platform_system == 'Linux' | nvidia-cuda-runtime-cu12==12.6.77; platform_system == 'Linux' | nvidia-cuda-cupti-cu12==12.6.80; platform_system == 'Linux' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' | nvidia-cublas-cu12==12.6.4.1; platform_system == 'Linux' | nvidia-cufft-cu12==11.3.0.4; platform_system == 'Linux' | nvidia-curand-cu12==10.3.7.77; platform_system == 'Linux' | nvidia-cusolver-cu12==11.7.1.2; platform_system == 'Linux' | nvidia-cusparse-cu12==12.5.4.2; platform_system == 'Linux' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' | nvidia-nvtx-cu12==12.6.77; platform_system == 'Linux' | nvidia-nvjitlink-cu12==12.6.85; platform_system == 'Linux' | nvidia-cufile-cu12==1.11.1.6; platform_system == 'Linux'
secrets: secrets:
github-token: ${{ secrets.GITHUB_TOKEN }} github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_13-cuda12_6-test: # Testing manywheel-py3_13-cuda12_6-test: # Testing
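Note on the `PYTORCH_EXTRA_INSTALL_REQUIREMENTS` change in the hunk above (and the analogous hunks below): each pinned NVIDIA dependency drops the `and platform_machine == 'x86_64'` clause from its environment marker, so the wheel is selected on any Linux machine rather than only on x86_64. A minimal sketch of how such PEP 508 markers evaluate, assuming the third-party `packaging` library is available (the environment values below are illustrative):

```
from packaging.markers import Marker

old_marker = Marker("platform_system == 'Linux' and platform_machine == 'x86_64'")
new_marker = Marker("platform_system == 'Linux'")

# Marker.evaluate() merges the given keys over the running interpreter's
# default environment; here we pretend to be a Linux aarch64 host.
aarch64_linux = {"platform_system": "Linux", "platform_machine": "aarch64"}

print(old_marker.evaluate(aarch64_linux))  # False -> dependency skipped on aarch64
print(new_marker.evaluate(aarch64_linux))  # True  -> dependency installed on any Linux
```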
@@ -1969,7 +1966,7 @@ jobs:
 runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
 build_name: manywheel-py3_13-cuda12_8
 build_environment: linux-binary-manywheel
-PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.8.4.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.3.3.83; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.9.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.7.3.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.5.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink-cu12==12.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile-cu12==1.13.1.3; platform_system == 'Linux' and platform_machine == 'x86_64'
+PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.8.93; platform_system == 'Linux' | nvidia-cuda-runtime-cu12==12.8.90; platform_system == 'Linux' | nvidia-cuda-cupti-cu12==12.8.90; platform_system == 'Linux' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' | nvidia-cublas-cu12==12.8.4.1; platform_system == 'Linux' | nvidia-cufft-cu12==11.3.3.83; platform_system == 'Linux' | nvidia-curand-cu12==10.3.9.90; platform_system == 'Linux' | nvidia-cusolver-cu12==11.7.3.90; platform_system == 'Linux' | nvidia-cusparse-cu12==12.5.8.93; platform_system == 'Linux' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' | nvidia-nvtx-cu12==12.8.90; platform_system == 'Linux' | nvidia-nvjitlink-cu12==12.8.93; platform_system == 'Linux' | nvidia-cufile-cu12==1.13.1.3; platform_system == 'Linux'
 secrets:
 github-token: ${{ secrets.GITHUB_TOKEN }}
 manywheel-py3_13-cuda12_8-test: # Testing
@@ -2035,7 +2032,7 @@ jobs:
 runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
 build_name: manywheel-py3_13-cuda13_0
 build_environment: linux-binary-manywheel
-PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu13==9.13.0.50; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas==13.0.0.19; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft==12.0.0.15; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand==10.4.0.35; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver==12.0.3.29; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse==12.6.2.49; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu13==0.8.0; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu13==2.27.7; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu13==3.3.24; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx==13.0.39; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink==13.0.39; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile==1.15.0.42; platform_system == 'Linux' and platform_machine == 'x86_64'
+PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc==13.0.48; platform_system == 'Linux' | nvidia-cuda-runtime==13.0.48; platform_system == 'Linux' | nvidia-cuda-cupti==13.0.48; platform_system == 'Linux' | nvidia-cudnn-cu13==9.13.0.50; platform_system == 'Linux' | nvidia-cublas==13.0.0.19; platform_system == 'Linux' | nvidia-cufft==12.0.0.15; platform_system == 'Linux' | nvidia-curand==10.4.0.35; platform_system == 'Linux' | nvidia-cusolver==12.0.3.29; platform_system == 'Linux' | nvidia-cusparse==12.6.2.49; platform_system == 'Linux' | nvidia-cusparselt-cu13==0.8.0; platform_system == 'Linux' | nvidia-nccl-cu13==2.27.7; platform_system == 'Linux' | nvidia-nvshmem-cu13==3.3.24; platform_system == 'Linux' | nvidia-nvtx==13.0.39; platform_system == 'Linux' | nvidia-nvjitlink==13.0.39; platform_system == 'Linux' | nvidia-cufile==1.15.0.42; platform_system == 'Linux'
 secrets:
 github-token: ${{ secrets.GITHUB_TOKEN }}
 manywheel-py3_13-cuda13_0-test: # Testing
@@ -2099,6 +2096,7 @@ jobs:
 DOCKER_IMAGE_TAG_PREFIX: rocm6.3
 DESIRED_PYTHON: "3.13"
 runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
+timeout-minutes: 300
 build_name: manywheel-py3_13-rocm6_3
 build_environment: linux-binary-manywheel
 secrets:
@@ -2133,7 +2131,6 @@ jobs:
 - name: Checkout PyTorch
 uses: actions/checkout@v4
 with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
 submodules: recursive
 path: pytorch
 show-progress: false
@@ -2155,7 +2152,7 @@ jobs:
 role-duration-seconds: 18000
 - name: Calculate docker image
 id: calculate-docker-image
-uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.9
 with:
 docker-registry: ${{ startsWith(github.event.ref, 'refs/tags/ciflow/') && '308535385114.dkr.ecr.us-east-1.amazonaws.com' || 'docker.io' }}
 docker-image-name: manylinux2_28-builder
@@ -2163,7 +2160,7 @@ jobs:
 docker-build-dir: .ci/docker
 working-directory: pytorch
 - name: Pull Docker image
-uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.9
 with:
 docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
 - name: Test Pytorch binary
@@ -2210,6 +2207,7 @@ jobs:
 DOCKER_IMAGE_TAG_PREFIX: rocm6.4
 DESIRED_PYTHON: "3.13"
 runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
+timeout-minutes: 300
 build_name: manywheel-py3_13-rocm6_4
 build_environment: linux-binary-manywheel
 secrets:
@@ -2244,7 +2242,6 @@ jobs:
 - name: Checkout PyTorch
 uses: actions/checkout@v4
 with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
 submodules: recursive
 path: pytorch
 show-progress: false
@@ -2266,7 +2263,7 @@ jobs:
 role-duration-seconds: 18000
 - name: Calculate docker image
 id: calculate-docker-image
-uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.9
 with:
 docker-registry: ${{ startsWith(github.event.ref, 'refs/tags/ciflow/') && '308535385114.dkr.ecr.us-east-1.amazonaws.com' || 'docker.io' }}
 docker-image-name: manylinux2_28-builder
@@ -2274,7 +2271,7 @@ jobs:
 docker-build-dir: .ci/docker
 working-directory: pytorch
 - name: Pull Docker image
-uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.9
 with:
 docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
 - name: Test Pytorch binary
@@ -2348,7 +2345,7 @@ jobs:
 contents: read
 steps:
 - name: Setup XPU
-uses: pytorch/pytorch/.github/actions/setup-xpu@main
+uses: pytorch/pytorch/.github/actions/setup-xpu@release/2.9
 - name: configure aws credentials
 id: aws_creds
 uses: aws-actions/configure-aws-credentials@v4
@@ -2366,7 +2363,6 @@ jobs:
 - name: Checkout PyTorch
 uses: actions/checkout@v4
 with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
 submodules: recursive
 path: pytorch
 show-progress: false
@@ -2377,7 +2373,7 @@ jobs:
 working-directory: pytorch
 - name: Calculate docker image
 id: calculate-docker-image
-uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.9
 with:
 docker-registry: ${{ startsWith(github.event.ref, 'refs/tags/ciflow/') && '308535385114.dkr.ecr.us-east-1.amazonaws.com' || 'docker.io' }}
 docker-image-name: manylinux2_28-builder
@@ -2385,7 +2381,7 @@ jobs:
 docker-build-dir: .ci/docker
 working-directory: pytorch
 - name: Pull Docker image
-uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.9
 with:
 docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
 - name: Test Pytorch binary
@@ -2495,7 +2491,7 @@ jobs:
 runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
 build_name: manywheel-py3_13t-cuda12_6
 build_environment: linux-binary-manywheel
-PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.6.80; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.6.4.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.3.0.4; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.7.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.7.1.2; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.5.4.2; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink-cu12==12.6.85; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile-cu12==1.11.1.6; platform_system == 'Linux' and platform_machine == 'x86_64'
+PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.6.77; platform_system == 'Linux' | nvidia-cuda-runtime-cu12==12.6.77; platform_system == 'Linux' | nvidia-cuda-cupti-cu12==12.6.80; platform_system == 'Linux' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' | nvidia-cublas-cu12==12.6.4.1; platform_system == 'Linux' | nvidia-cufft-cu12==11.3.0.4; platform_system == 'Linux' | nvidia-curand-cu12==10.3.7.77; platform_system == 'Linux' | nvidia-cusolver-cu12==11.7.1.2; platform_system == 'Linux' | nvidia-cusparse-cu12==12.5.4.2; platform_system == 'Linux' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' | nvidia-nvtx-cu12==12.6.77; platform_system == 'Linux' | nvidia-nvjitlink-cu12==12.6.85; platform_system == 'Linux' | nvidia-cufile-cu12==1.11.1.6; platform_system == 'Linux'
 secrets:
 github-token: ${{ secrets.GITHUB_TOKEN }}
 manywheel-py3_13t-cuda12_6-test: # Testing
@@ -2561,7 +2557,7 @@ jobs:
 runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
 build_name: manywheel-py3_13t-cuda12_8
 build_environment: linux-binary-manywheel
-PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.8.4.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.3.3.83; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.9.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.7.3.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.5.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink-cu12==12.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile-cu12==1.13.1.3; platform_system == 'Linux' and platform_machine == 'x86_64'
+PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.8.93; platform_system == 'Linux' | nvidia-cuda-runtime-cu12==12.8.90; platform_system == 'Linux' | nvidia-cuda-cupti-cu12==12.8.90; platform_system == 'Linux' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' | nvidia-cublas-cu12==12.8.4.1; platform_system == 'Linux' | nvidia-cufft-cu12==11.3.3.83; platform_system == 'Linux' | nvidia-curand-cu12==10.3.9.90; platform_system == 'Linux' | nvidia-cusolver-cu12==11.7.3.90; platform_system == 'Linux' | nvidia-cusparse-cu12==12.5.8.93; platform_system == 'Linux' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' | nvidia-nvtx-cu12==12.8.90; platform_system == 'Linux' | nvidia-nvjitlink-cu12==12.8.93; platform_system == 'Linux' | nvidia-cufile-cu12==1.13.1.3; platform_system == 'Linux'
 secrets:
 github-token: ${{ secrets.GITHUB_TOKEN }}
 manywheel-py3_13t-cuda12_8-test: # Testing
@@ -2627,7 +2623,7 @@ jobs:
 runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
 build_name: manywheel-py3_13t-cuda13_0
 build_environment: linux-binary-manywheel
-PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu13==9.13.0.50; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas==13.0.0.19; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft==12.0.0.15; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand==10.4.0.35; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver==12.0.3.29; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse==12.6.2.49; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu13==0.8.0; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu13==2.27.7; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu13==3.3.24; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx==13.0.39; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink==13.0.39; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile==1.15.0.42; platform_system == 'Linux' and platform_machine == 'x86_64'
+PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc==13.0.48; platform_system == 'Linux' | nvidia-cuda-runtime==13.0.48; platform_system == 'Linux' | nvidia-cuda-cupti==13.0.48; platform_system == 'Linux' | nvidia-cudnn-cu13==9.13.0.50; platform_system == 'Linux' | nvidia-cublas==13.0.0.19; platform_system == 'Linux' | nvidia-cufft==12.0.0.15; platform_system == 'Linux' | nvidia-curand==10.4.0.35; platform_system == 'Linux' | nvidia-cusolver==12.0.3.29; platform_system == 'Linux' | nvidia-cusparse==12.6.2.49; platform_system == 'Linux' | nvidia-cusparselt-cu13==0.8.0; platform_system == 'Linux' | nvidia-nccl-cu13==2.27.7; platform_system == 'Linux' | nvidia-nvshmem-cu13==3.3.24; platform_system == 'Linux' | nvidia-nvtx==13.0.39; platform_system == 'Linux' | nvidia-nvjitlink==13.0.39; platform_system == 'Linux' | nvidia-cufile==1.15.0.42; platform_system == 'Linux'
 secrets:
 github-token: ${{ secrets.GITHUB_TOKEN }}
 manywheel-py3_13t-cuda13_0-test: # Testing
@@ -2691,6 +2687,7 @@ jobs:
 DOCKER_IMAGE_TAG_PREFIX: rocm6.3
 DESIRED_PYTHON: "3.13t"
 runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
+timeout-minutes: 300
 build_name: manywheel-py3_13t-rocm6_3
 build_environment: linux-binary-manywheel
 secrets:
@@ -2725,7 +2722,6 @@ jobs:
 - name: Checkout PyTorch
 uses: actions/checkout@v4
 with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
 submodules: recursive
 path: pytorch
 show-progress: false
@@ -2747,7 +2743,7 @@ jobs:
 role-duration-seconds: 18000
 - name: Calculate docker image
 id: calculate-docker-image
-uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.9
 with:
 docker-registry: ${{ startsWith(github.event.ref, 'refs/tags/ciflow/') && '308535385114.dkr.ecr.us-east-1.amazonaws.com' || 'docker.io' }}
 docker-image-name: manylinux2_28-builder
@@ -2755,7 +2751,7 @@ jobs:
 docker-build-dir: .ci/docker
 working-directory: pytorch
 - name: Pull Docker image
-uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.9
 with:
 docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
 - name: Test Pytorch binary
@@ -2802,6 +2798,7 @@ jobs:
 DOCKER_IMAGE_TAG_PREFIX: rocm6.4
 DESIRED_PYTHON: "3.13t"
 runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
+timeout-minutes: 300
 build_name: manywheel-py3_13t-rocm6_4
 build_environment: linux-binary-manywheel
 secrets:
@@ -2836,7 +2833,6 @@ jobs:
 - name: Checkout PyTorch
 uses: actions/checkout@v4
 with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
 submodules: recursive
 path: pytorch
 show-progress: false
@@ -2858,7 +2854,7 @@ jobs:
 role-duration-seconds: 18000
 - name: Calculate docker image
 id: calculate-docker-image
-uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.9
 with:
 docker-registry: ${{ startsWith(github.event.ref, 'refs/tags/ciflow/') && '308535385114.dkr.ecr.us-east-1.amazonaws.com' || 'docker.io' }}
 docker-image-name: manylinux2_28-builder
@@ -2866,7 +2862,7 @@ jobs:
 docker-build-dir: .ci/docker
 working-directory: pytorch
 - name: Pull Docker image
-uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.9
 with:
 docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
 - name: Test Pytorch binary
@@ -2940,7 +2936,7 @@ jobs:
 contents: read
 steps:
 - name: Setup XPU
-uses: pytorch/pytorch/.github/actions/setup-xpu@main
+uses: pytorch/pytorch/.github/actions/setup-xpu@release/2.9
 - name: configure aws credentials
 id: aws_creds
 uses: aws-actions/configure-aws-credentials@v4
@@ -2958,7 +2954,6 @@ jobs:
 - name: Checkout PyTorch
 uses: actions/checkout@v4
 with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
 submodules: recursive
 path: pytorch
 show-progress: false
@@ -2969,7 +2964,7 @@ jobs:
 working-directory: pytorch
 - name: Calculate docker image
 id: calculate-docker-image
-uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.9
 with:
 docker-registry: ${{ startsWith(github.event.ref, 'refs/tags/ciflow/') && '308535385114.dkr.ecr.us-east-1.amazonaws.com' || 'docker.io' }}
 docker-image-name: manylinux2_28-builder
@@ -2977,7 +2972,7 @@ jobs:
 docker-build-dir: .ci/docker
 working-directory: pytorch
 - name: Pull Docker image
-uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.9
 with:
 docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
 - name: Test Pytorch binary
@@ -3087,7 +3082,7 @@ jobs:
 runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
 build_name: manywheel-py3_14-cuda12_6
 build_environment: linux-binary-manywheel
-PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.6.80; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.6.4.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.3.0.4; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.7.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.7.1.2; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.5.4.2; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink-cu12==12.6.85; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile-cu12==1.11.1.6; platform_system == 'Linux' and platform_machine == 'x86_64'
+PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.6.77; platform_system == 'Linux' | nvidia-cuda-runtime-cu12==12.6.77; platform_system == 'Linux' | nvidia-cuda-cupti-cu12==12.6.80; platform_system == 'Linux' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' | nvidia-cublas-cu12==12.6.4.1; platform_system == 'Linux' | nvidia-cufft-cu12==11.3.0.4; platform_system == 'Linux' | nvidia-curand-cu12==10.3.7.77; platform_system == 'Linux' | nvidia-cusolver-cu12==11.7.1.2; platform_system == 'Linux' | nvidia-cusparse-cu12==12.5.4.2; platform_system == 'Linux' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' | nvidia-nvtx-cu12==12.6.77; platform_system == 'Linux' | nvidia-nvjitlink-cu12==12.6.85; platform_system == 'Linux' | nvidia-cufile-cu12==1.11.1.6; platform_system == 'Linux'
 secrets:
 github-token: ${{ secrets.GITHUB_TOKEN }}
 manywheel-py3_14-cuda12_6-test: # Testing
@@ -3153,7 +3148,7 @@ jobs:
 runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
 build_name: manywheel-py3_14-cuda12_8
 build_environment: linux-binary-manywheel
-PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.8.4.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.3.3.83; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.9.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.7.3.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.5.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink-cu12==12.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile-cu12==1.13.1.3; platform_system == 'Linux' and platform_machine == 'x86_64'
+PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.8.93; platform_system == 'Linux' | nvidia-cuda-runtime-cu12==12.8.90; platform_system == 'Linux' | nvidia-cuda-cupti-cu12==12.8.90; platform_system == 'Linux' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' | nvidia-cublas-cu12==12.8.4.1; platform_system == 'Linux' | nvidia-cufft-cu12==11.3.3.83; platform_system == 'Linux' | nvidia-curand-cu12==10.3.9.90; platform_system == 'Linux' | nvidia-cusolver-cu12==11.7.3.90; platform_system == 'Linux' | nvidia-cusparse-cu12==12.5.8.93; platform_system == 'Linux' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' | nvidia-nvtx-cu12==12.8.90; platform_system == 'Linux' | nvidia-nvjitlink-cu12==12.8.93; platform_system == 'Linux' | nvidia-cufile-cu12==1.13.1.3; platform_system == 'Linux'
 secrets:
 github-token: ${{ secrets.GITHUB_TOKEN }}
 manywheel-py3_14-cuda12_8-test: # Testing
@@ -3219,7 +3214,7 @@ jobs:
 runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
 build_name: manywheel-py3_14-cuda13_0
 build_environment: linux-binary-manywheel
-PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu13==9.13.0.50; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas==13.0.0.19; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft==12.0.0.15; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand==10.4.0.35; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver==12.0.3.29; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse==12.6.2.49; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu13==0.8.0; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu13==2.27.7; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu13==3.3.24; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx==13.0.39; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink==13.0.39; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile==1.15.0.42; platform_system == 'Linux' and platform_machine == 'x86_64'
+PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc==13.0.48; platform_system == 'Linux' | nvidia-cuda-runtime==13.0.48; platform_system == 'Linux' | nvidia-cuda-cupti==13.0.48; platform_system == 'Linux' | nvidia-cudnn-cu13==9.13.0.50; platform_system == 'Linux' | nvidia-cublas==13.0.0.19; platform_system == 'Linux' | nvidia-cufft==12.0.0.15; platform_system == 'Linux' | nvidia-curand==10.4.0.35; platform_system == 'Linux' | nvidia-cusolver==12.0.3.29; platform_system == 'Linux' | nvidia-cusparse==12.6.2.49; platform_system == 'Linux' | nvidia-cusparselt-cu13==0.8.0; platform_system == 'Linux' | nvidia-nccl-cu13==2.27.7; platform_system == 'Linux' | nvidia-nvshmem-cu13==3.3.24; platform_system == 'Linux' | nvidia-nvtx==13.0.39; platform_system == 'Linux' | nvidia-nvjitlink==13.0.39; platform_system == 'Linux' | nvidia-cufile==1.15.0.42; platform_system == 'Linux'
 secrets:
 github-token: ${{ secrets.GITHUB_TOKEN }}
 manywheel-py3_14-cuda13_0-test: # Testing
@@ -3283,6 +3278,7 @@ jobs:
 DOCKER_IMAGE_TAG_PREFIX: rocm6.3
 DESIRED_PYTHON: "3.14"
 runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
+timeout-minutes: 300
 build_name: manywheel-py3_14-rocm6_3
 build_environment: linux-binary-manywheel
 secrets:
@@ -3317,7 +3313,6 @@ jobs:
 - name: Checkout PyTorch
 uses: actions/checkout@v4
 with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
 submodules: recursive
 path: pytorch
 show-progress: false
@@ -3339,7 +3334,7 @@ jobs:
 role-duration-seconds: 18000
 - name: Calculate docker image
 id: calculate-docker-image
-uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.9
 with:
 docker-registry: ${{ startsWith(github.event.ref, 'refs/tags/ciflow/') && '308535385114.dkr.ecr.us-east-1.amazonaws.com' || 'docker.io' }}
 docker-image-name: manylinux2_28-builder
@@ -3347,7 +3342,7 @@ jobs:
 docker-build-dir: .ci/docker
 working-directory: pytorch
 - name: Pull Docker image
-uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.9
 with:
 docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
 - name: Test Pytorch binary
@@ -3394,6 +3389,7 @@ jobs:
 DOCKER_IMAGE_TAG_PREFIX: rocm6.4
 DESIRED_PYTHON: "3.14"
 runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
+timeout-minutes: 300
 build_name: manywheel-py3_14-rocm6_4
 build_environment: linux-binary-manywheel
 secrets:
@@ -3428,7 +3424,6 @@ jobs:
 - name: Checkout PyTorch
 uses: actions/checkout@v4
 with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
 submodules: recursive
 path: pytorch
 show-progress: false
@@ -3450,7 +3445,7 @@ jobs:
 role-duration-seconds: 18000
 - name: Calculate docker image
 id: calculate-docker-image
-uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.9
 with:
 docker-registry: ${{ startsWith(github.event.ref, 'refs/tags/ciflow/') && '308535385114.dkr.ecr.us-east-1.amazonaws.com' || 'docker.io' }}
 docker-image-name: manylinux2_28-builder
@@ -3458,7 +3453,7 @@ jobs:
 docker-build-dir: .ci/docker
 working-directory: pytorch
 - name: Pull Docker image
-uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.9
 with:
 docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
 - name: Test Pytorch binary
@@ -3532,7 +3527,7 @@ jobs:
 contents: read
 steps:
 - name: Setup XPU
-uses: pytorch/pytorch/.github/actions/setup-xpu@main
+uses: pytorch/pytorch/.github/actions/setup-xpu@release/2.9
 - name: configure aws credentials
 id: aws_creds
 uses: aws-actions/configure-aws-credentials@v4
@@ -3550,7 +3545,6 @@ jobs:
 - name: Checkout PyTorch
 uses: actions/checkout@v4
 with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
 submodules: recursive
 path: pytorch
 show-progress: false
@@ -3561,7 +3555,7 @@ jobs:
 working-directory: pytorch
 - name: Calculate docker image
 id: calculate-docker-image
-uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.9
 with:
 docker-registry: ${{ startsWith(github.event.ref, 'refs/tags/ciflow/') && '308535385114.dkr.ecr.us-east-1.amazonaws.com' || 'docker.io' }}
 docker-image-name: manylinux2_28-builder
@@ -3569,7 +3563,7 @@ jobs:
 docker-build-dir: .ci/docker
 working-directory: pytorch
 - name: Pull Docker image
-uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.9
 with:
 docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
 - name: Test Pytorch binary
@@ -3679,7 +3673,7 @@ jobs:
 runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
 build_name: manywheel-py3_14t-cuda12_6
 build_environment: linux-binary-manywheel
-PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.6.80; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.6.4.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.3.0.4; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.7.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.7.1.2; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.5.4.2; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.6.77; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink-cu12==12.6.85; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile-cu12==1.11.1.6; platform_system == 'Linux' and platform_machine == 'x86_64'
+PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.6.77; platform_system == 'Linux' | nvidia-cuda-runtime-cu12==12.6.77; platform_system == 'Linux' | nvidia-cuda-cupti-cu12==12.6.80; platform_system == 'Linux' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' | nvidia-cublas-cu12==12.6.4.1; platform_system == 'Linux' | nvidia-cufft-cu12==11.3.0.4; platform_system == 'Linux' | nvidia-curand-cu12==10.3.7.77; platform_system == 'Linux' | nvidia-cusolver-cu12==11.7.1.2; platform_system == 'Linux' | nvidia-cusparse-cu12==12.5.4.2; platform_system == 'Linux' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' | nvidia-nvtx-cu12==12.6.77; platform_system == 'Linux' | nvidia-nvjitlink-cu12==12.6.85; platform_system == 'Linux' | nvidia-cufile-cu12==1.11.1.6; platform_system == 'Linux'
 secrets:
 github-token: ${{ secrets.GITHUB_TOKEN }}
 manywheel-py3_14t-cuda12_6-test: # Testing
@@ -3745,7 +3739,7 @@ jobs:
 runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
 build_name: manywheel-py3_14t-cuda12_8
 build_environment: linux-binary-manywheel
-PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.8.4.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.3.3.83; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.9.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.7.3.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.5.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.8.90; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink-cu12==12.8.93; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile-cu12==1.13.1.3; platform_system == 'Linux' and platform_machine == 'x86_64'
+PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.8.93; platform_system == 'Linux' | nvidia-cuda-runtime-cu12==12.8.90; platform_system == 'Linux' | nvidia-cuda-cupti-cu12==12.8.90; platform_system == 'Linux' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' | nvidia-cublas-cu12==12.8.4.1; platform_system == 'Linux' | nvidia-cufft-cu12==11.3.3.83; platform_system == 'Linux' | nvidia-curand-cu12==10.3.9.90; platform_system == 'Linux' | nvidia-cusolver-cu12==11.7.3.90; platform_system == 'Linux' | nvidia-cusparse-cu12==12.5.8.93; platform_system == 'Linux' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' | nvidia-nvshmem-cu12==3.3.20; platform_system == 'Linux' | nvidia-nvtx-cu12==12.8.90; platform_system == 'Linux' | nvidia-nvjitlink-cu12==12.8.93; platform_system == 'Linux' | nvidia-cufile-cu12==1.13.1.3; platform_system == 'Linux'
 secrets:
 github-token: ${{ secrets.GITHUB_TOKEN }}
 manywheel-py3_14t-cuda12_8-test: # Testing
@@ -3811,7 +3805,7 @@ jobs:
 runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
 build_name: manywheel-py3_14t-cuda13_0
 build_environment: linux-binary-manywheel
-PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti==13.0.48; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu13==9.13.0.50; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas==13.0.0.19; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft==12.0.0.15; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand==10.4.0.35; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver==12.0.3.29; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse==12.6.2.49; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu13==0.8.0; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu13==2.27.7; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu13==3.3.24; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx==13.0.39; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink==13.0.39; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile==1.15.0.42; platform_system == 'Linux' and platform_machine == 'x86_64'
+PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc==13.0.48; platform_system == 'Linux' | nvidia-cuda-runtime==13.0.48; platform_system == 'Linux' | nvidia-cuda-cupti==13.0.48; platform_system == 'Linux' | nvidia-cudnn-cu13==9.13.0.50; platform_system == 'Linux' | nvidia-cublas==13.0.0.19; platform_system == 'Linux' | nvidia-cufft==12.0.0.15; platform_system == 'Linux' | nvidia-curand==10.4.0.35; platform_system == 'Linux' | nvidia-cusolver==12.0.3.29; platform_system == 'Linux' | nvidia-cusparse==12.6.2.49; platform_system == 'Linux' | nvidia-cusparselt-cu13==0.8.0; platform_system == 'Linux' | nvidia-nccl-cu13==2.27.7; platform_system == 'Linux' | nvidia-nvshmem-cu13==3.3.24; platform_system == 'Linux' | nvidia-nvtx==13.0.39; platform_system == 'Linux' | nvidia-nvjitlink==13.0.39; platform_system == 'Linux' | nvidia-cufile==1.15.0.42; platform_system == 'Linux'
 secrets:
 github-token: ${{ secrets.GITHUB_TOKEN }}
 manywheel-py3_14t-cuda13_0-test: # Testing
@@ -3875,6 +3869,7 @@ jobs:
DOCKER_IMAGE_TAG_PREFIX: rocm6.3
DESIRED_PYTHON: "3.14t"
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
+timeout-minutes: 300
build_name: manywheel-py3_14t-rocm6_3
build_environment: linux-binary-manywheel
secrets:
@@ -3909,7 +3904,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false
@@ -3931,7 +3925,7 @@ jobs:
role-duration-seconds: 18000
- name: Calculate docker image
id: calculate-docker-image
-uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.9
with:
docker-registry: ${{ startsWith(github.event.ref, 'refs/tags/ciflow/') && '308535385114.dkr.ecr.us-east-1.amazonaws.com' || 'docker.io' }}
docker-image-name: manylinux2_28-builder
@@ -3939,7 +3933,7 @@ jobs:
docker-build-dir: .ci/docker
working-directory: pytorch
- name: Pull Docker image
-uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.9
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
- name: Test Pytorch binary
@@ -3986,6 +3980,7 @@ jobs:
DOCKER_IMAGE_TAG_PREFIX: rocm6.4
DESIRED_PYTHON: "3.14t"
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
+timeout-minutes: 300
build_name: manywheel-py3_14t-rocm6_4
build_environment: linux-binary-manywheel
secrets:
@@ -4020,7 +4015,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false
@@ -4042,7 +4036,7 @@ jobs:
role-duration-seconds: 18000
- name: Calculate docker image
id: calculate-docker-image
-uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.9
with:
docker-registry: ${{ startsWith(github.event.ref, 'refs/tags/ciflow/') && '308535385114.dkr.ecr.us-east-1.amazonaws.com' || 'docker.io' }}
docker-image-name: manylinux2_28-builder
@@ -4050,7 +4044,7 @@ jobs:
docker-build-dir: .ci/docker
working-directory: pytorch
- name: Pull Docker image
-uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.9
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
- name: Test Pytorch binary
@@ -4124,7 +4118,7 @@ jobs:
contents: read
steps:
- name: Setup XPU
-uses: pytorch/pytorch/.github/actions/setup-xpu@main
+uses: pytorch/pytorch/.github/actions/setup-xpu@release/2.9
- name: configure aws credentials
id: aws_creds
uses: aws-actions/configure-aws-credentials@v4
@@ -4142,7 +4136,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false
@@ -4153,7 +4146,7 @@ jobs:
working-directory: pytorch
- name: Calculate docker image
id: calculate-docker-image
-uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.9
with:
docker-registry: ${{ startsWith(github.event.ref, 'refs/tags/ciflow/') && '308535385114.dkr.ecr.us-east-1.amazonaws.com' || 'docker.io' }}
docker-image-name: manylinux2_28-builder
@@ -4161,7 +4154,7 @@ jobs:
docker-build-dir: .ci/docker
working-directory: pytorch
- name: Pull Docker image
-uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.9
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
- name: Test Pytorch binary

View File

@@ -38,13 +38,13 @@ jobs:
get-label-type:
if: github.repository_owner == 'pytorch'
name: get-label-type
-uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
with:
triggering_actor: ${{ github.triggering_actor }}
issue_owner: ${{ github.event.pull_request.user.login || github.event.issue.user.login }}
curr_branch: ${{ github.head_ref || github.ref_name }}
curr_ref_type: ${{ github.ref_type }}
-manywheel-py3_9-rocm6_4-build:
+manywheel-py3_10-rocm6_4-build:
if: ${{ github.repository_owner == 'pytorch' }}
uses: ./.github/workflows/_binary-build-linux.yml
needs: get-label-type
@@ -58,16 +58,17 @@ jobs:
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: manylinux2_28-builder
DOCKER_IMAGE_TAG_PREFIX: rocm6.4
-DESIRED_PYTHON: "3.9"
+DESIRED_PYTHON: "3.10"
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
-build_name: manywheel-py3_9-rocm6_4
+timeout-minutes: 300
+build_name: manywheel-py3_10-rocm6_4
build_environment: linux-binary-manywheel-rocm
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
-manywheel-py3_9-rocm6_4-test: # Testing
+manywheel-py3_10-rocm6_4-test: # Testing
if: ${{ github.repository_owner == 'pytorch' }}
needs:
-- manywheel-py3_9-rocm6_4-build
+- manywheel-py3_10-rocm6_4-build
- get-label-type
runs-on: linux.rocm.gpu.mi250
timeout-minutes: 240
@@ -82,19 +83,18 @@ jobs:
SKIP_ALL_TESTS: 1
DOCKER_IMAGE: manylinux2_28-builder
DOCKER_IMAGE_TAG_PREFIX: rocm6.4
-DESIRED_PYTHON: "3.9"
+DESIRED_PYTHON: "3.10"
steps:
- name: Setup ROCm
uses: ./.github/actions/setup-rocm
- uses: actions/download-artifact@v4.1.7
name: Download Build Artifacts
with:
-name: manywheel-py3_9-rocm6_4
+name: manywheel-py3_10-rocm6_4
path: "${{ runner.temp }}/artifacts/"
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false
@@ -116,7 +116,7 @@ jobs:
role-duration-seconds: 18000
- name: Calculate docker image
id: calculate-docker-image
-uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.9
with:
docker-registry: ${{ startsWith(github.event.ref, 'refs/tags/ciflow/') && '308535385114.dkr.ecr.us-east-1.amazonaws.com' || 'docker.io' }}
docker-image-name: manylinux2_28-builder
@@ -124,7 +124,7 @@ jobs:
docker-build-dir: .ci/docker
working-directory: pytorch
- name: Pull Docker image
-uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.9
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
- name: Test Pytorch binary

View File

@@ -41,7 +41,7 @@ jobs:
get-label-type:
if: github.repository_owner == 'pytorch'
name: get-label-type
-uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
with:
triggering_actor: ${{ github.triggering_actor }}
issue_owner: ${{ github.event.pull_request.user.login || github.event.issue.user.login }}

View File

@@ -70,7 +70,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false

View File

@@ -66,7 +66,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false
@@ -206,7 +205,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false
@@ -346,7 +344,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false
@@ -486,7 +483,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false
@@ -626,7 +622,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false
@@ -766,7 +761,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false
@@ -906,7 +900,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false

View File

@@ -41,7 +41,7 @@ jobs:
get-label-type:
if: github.repository_owner == 'pytorch'
name: get-label-type
-uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
with:
triggering_actor: ${{ github.triggering_actor }}
issue_owner: ${{ github.event.pull_request.user.login || github.event.issue.user.login }}

View File

@@ -41,7 +41,7 @@ jobs:
get-label-type:
if: github.repository_owner == 'pytorch'
name: get-label-type
-uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
with:
triggering_actor: ${{ github.triggering_actor }}
issue_owner: ${{ github.event.pull_request.user.login || github.event.issue.user.login }}

View File

@@ -41,7 +41,7 @@ jobs:
get-label-type:
if: github.repository_owner == 'pytorch'
name: get-label-type
-uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
with:
triggering_actor: ${{ github.triggering_actor }}
issue_owner: ${{ github.event.pull_request.user.login || github.event.issue.user.login }}

View File

@@ -28,7 +28,7 @@ jobs:
get-label-type:
if: github.repository_owner == 'pytorch'
name: get-label-type
-uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
with:
triggering_actor: ${{ github.triggering_actor }}
issue_owner: ${{ github.event.pull_request.user.login || github.event.issue.user.login }}
@@ -77,7 +77,7 @@ jobs:
echo "instance-type: $(get_ec2_metadata instance-type)"
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
-uses: pytorch/test-infra/.github/actions/setup-ssh@main
+uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.9
continue-on-error: true
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
@@ -109,7 +109,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false
@@ -183,7 +182,7 @@ jobs:
echo "instance-type: $(get_ec2_metadata instance-type)"
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
-uses: pytorch/test-infra/.github/actions/setup-ssh@main
+uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.9
continue-on-error: true
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
@@ -215,7 +214,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false

View File

@@ -35,7 +35,7 @@ jobs:
get-label-type:
if: github.repository_owner == 'pytorch'
name: get-label-type
-uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
with:
triggering_actor: ${{ github.triggering_actor }}
issue_owner: ${{ github.event.pull_request.user.login || github.event.issue.user.login }}
@@ -84,7 +84,7 @@ jobs:
echo "instance-type: $(get_ec2_metadata instance-type)"
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
-uses: pytorch/test-infra/.github/actions/setup-ssh@main
+uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.9
continue-on-error: true
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
@@ -116,7 +116,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false
@@ -190,7 +189,7 @@ jobs:
echo "instance-type: $(get_ec2_metadata instance-type)"
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
-uses: pytorch/test-infra/.github/actions/setup-ssh@main
+uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.9
continue-on-error: true
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
@@ -222,7 +221,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false
@@ -332,7 +330,7 @@ jobs:
echo "instance-type: $(get_ec2_metadata instance-type)"
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
-uses: pytorch/test-infra/.github/actions/setup-ssh@main
+uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.9
continue-on-error: true
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
@@ -364,7 +362,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false
@@ -439,7 +436,7 @@ jobs:
echo "instance-type: $(get_ec2_metadata instance-type)"
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
-uses: pytorch/test-infra/.github/actions/setup-ssh@main
+uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.9
continue-on-error: true
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
@@ -471,7 +468,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false
@@ -582,7 +578,7 @@ jobs:
echo "instance-type: $(get_ec2_metadata instance-type)"
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
-uses: pytorch/test-infra/.github/actions/setup-ssh@main
+uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.9
continue-on-error: true
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
@@ -614,7 +610,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false
@@ -689,7 +684,7 @@ jobs:
echo "instance-type: $(get_ec2_metadata instance-type)"
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
-uses: pytorch/test-infra/.github/actions/setup-ssh@main
+uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.9
continue-on-error: true
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
@@ -721,7 +716,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false
@@ -832,7 +826,7 @@ jobs:
echo "instance-type: $(get_ec2_metadata instance-type)"
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
-uses: pytorch/test-infra/.github/actions/setup-ssh@main
+uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.9
continue-on-error: true
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
@@ -864,7 +858,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false
@@ -939,7 +932,7 @@ jobs:
echo "instance-type: $(get_ec2_metadata instance-type)"
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
-uses: pytorch/test-infra/.github/actions/setup-ssh@main
+uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.9
continue-on-error: true
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
@@ -971,7 +964,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false

View File

@@ -28,7 +28,7 @@ jobs:
get-label-type:
if: github.repository_owner == 'pytorch'
name: get-label-type
-uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
with:
triggering_actor: ${{ github.triggering_actor }}
issue_owner: ${{ github.event.pull_request.user.login || github.event.issue.user.login }}
@@ -77,7 +77,7 @@ jobs:
echo "instance-type: $(get_ec2_metadata instance-type)"
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
-uses: pytorch/test-infra/.github/actions/setup-ssh@main
+uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.9
continue-on-error: true
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
@@ -109,7 +109,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false
@@ -183,7 +182,7 @@ jobs:
echo "instance-type: $(get_ec2_metadata instance-type)"
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
-uses: pytorch/test-infra/.github/actions/setup-ssh@main
+uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.9
continue-on-error: true
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
@@ -215,7 +214,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false

View File

@@ -35,7 +35,7 @@ jobs:
get-label-type:
if: github.repository_owner == 'pytorch'
name: get-label-type
-uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
with:
triggering_actor: ${{ github.triggering_actor }}
issue_owner: ${{ github.event.pull_request.user.login || github.event.issue.user.login }}
@@ -84,7 +84,7 @@ jobs:
echo "instance-type: $(get_ec2_metadata instance-type)"
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
-uses: pytorch/test-infra/.github/actions/setup-ssh@main
+uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.9
continue-on-error: true
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
@@ -116,7 +116,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false
@@ -190,7 +189,7 @@ jobs:
echo "instance-type: $(get_ec2_metadata instance-type)"
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
-uses: pytorch/test-infra/.github/actions/setup-ssh@main
+uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.9
continue-on-error: true
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
@@ -222,7 +221,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false
@@ -332,7 +330,7 @@ jobs:
echo "instance-type: $(get_ec2_metadata instance-type)"
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
-uses: pytorch/test-infra/.github/actions/setup-ssh@main
+uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.9
continue-on-error: true
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
@@ -364,7 +362,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false
@@ -439,7 +436,7 @@ jobs:
echo "instance-type: $(get_ec2_metadata instance-type)"
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
-uses: pytorch/test-infra/.github/actions/setup-ssh@main
+uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.9
continue-on-error: true
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
@@ -471,7 +468,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false
@@ -582,7 +578,7 @@ jobs:
echo "instance-type: $(get_ec2_metadata instance-type)"
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
-uses: pytorch/test-infra/.github/actions/setup-ssh@main
+uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.9
continue-on-error: true
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
@@ -614,7 +610,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false
@@ -689,7 +684,7 @@ jobs:
echo "instance-type: $(get_ec2_metadata instance-type)"
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
-uses: pytorch/test-infra/.github/actions/setup-ssh@main
+uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.9
continue-on-error: true
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
@@ -721,7 +716,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false
@@ -832,7 +826,7 @@ jobs:
echo "instance-type: $(get_ec2_metadata instance-type)"
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
-uses: pytorch/test-infra/.github/actions/setup-ssh@main
+uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.9
continue-on-error: true
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
@@ -864,7 +858,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false
@@ -939,7 +932,7 @@ jobs:
echo "instance-type: $(get_ec2_metadata instance-type)"
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
-uses: pytorch/test-infra/.github/actions/setup-ssh@main
+uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.9
continue-on-error: true
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
@@ -971,7 +964,6 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false

File diff suppressed because it is too large.

View File

@@ -27,7 +27,7 @@ jobs:
get-label-type:
if: github.repository_owner == 'pytorch'
name: get-label-type
-uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
with:
triggering_actor: ${{ github.triggering_actor }}
issue_owner: ${{ github.event.pull_request.user.login || github.event.issue.user.login }}

View File

@@ -24,7 +24,7 @@ jobs:
get-label-type:
if: github.repository_owner == 'pytorch'
name: get-label-type
-uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
with:
triggering_actor: ${{ github.triggering_actor }}
issue_owner: ${{ github.event.pull_request.user.login || github.event.issue.user.login }}

View File

@@ -24,7 +24,7 @@ jobs:
get-label-type:
if: github.repository_owner == 'pytorch'
name: get-label-type
-uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
with:
triggering_actor: ${{ github.triggering_actor }}
issue_owner: ${{ github.event.pull_request.user.login || github.event.issue.user.login }}

View File

@@ -20,7 +20,7 @@ permissions:
jobs:
get-default-label-prefix:
name: get-default-label-prefix
-uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
if: ${{ (github.event_name != 'schedule' || github.repository == 'pytorch/pytorch') && github.repository_owner == 'pytorch' }}
with:
triggering_actor: ${{ github.triggering_actor }}

View File

@@ -23,7 +23,7 @@ permissions:
jobs:
get-default-label-prefix:
name: get-default-label-prefix
-uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
if: ${{ (github.event_name != 'schedule' || github.repository == 'pytorch/pytorch') && github.repository_owner == 'pytorch' }}
with:
triggering_actor: ${{ github.triggering_actor }}
@@ -37,7 +37,7 @@ jobs:
uses: ./.github/workflows/_linux-build.yml
needs: get-default-label-prefix
with:
-build-environment: linux-jammy-py3.9-gcc11-build
+build-environment: linux-jammy-py3.10-gcc11-build
docker-image-name: ci-image:pytorch-linux-jammy-py3-gcc11-inductor-benchmarks
runner_prefix: "${{ needs.get-default-label-prefix.outputs.label-type }}"
test-matrix: |
@@ -56,7 +56,7 @@ jobs:
uses: ./.github/workflows/_linux-test.yml
needs: nightly-dynamo-benchmarks-build
with:
-build-environment: linux-jammy-py3.9-gcc11-build
+build-environment: linux-jammy-py3.10-gcc11-build
docker-image: ${{ needs.nightly-dynamo-benchmarks-build.outputs.docker-image }}
test-matrix: ${{ needs.nightly-dynamo-benchmarks-build.outputs.test-matrix }}
timeout-minutes: 720

View File

@@ -18,7 +18,7 @@ jobs:
get-default-label-prefix:
if: github.repository_owner == 'pytorch'
name: get-default-label-prefix
-uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
with:
triggering_actor: ${{ github.triggering_actor }}
issue_owner: ${{ github.event.pull_request.user.login || github.event.issue.user.login }}

View File

@@ -70,7 +70,7 @@ permissions:
jobs:
get-label-type:
name: get-label-type
-uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
if: ${{ (github.event_name != 'schedule' || github.repository == 'pytorch/pytorch') && github.repository_owner == 'pytorch' }}
with:
triggering_actor: ${{ github.triggering_actor }}

View File

@@ -55,7 +55,7 @@ permissions:
jobs:
get-label-type:
name: get-label-type
-uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
if: ${{ (github.event_name != 'schedule' || github.repository == 'pytorch/pytorch') && github.repository_owner == 'pytorch' }}
with:
triggering_actor: ${{ github.triggering_actor }}

View File

@@ -75,7 +75,7 @@ permissions:
jobs:
get-label-type:
name: get-label-type
-uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
if: ${{ (github.event_name != 'schedule' || github.repository == 'pytorch/pytorch') && github.repository_owner == 'pytorch' }}
with:
triggering_actor: ${{ github.triggering_actor }}

View File

@@ -70,7 +70,7 @@ permissions: read-all
jobs:
get-label-type:
name: get-label-type
-uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
if: ${{ (github.event_name != 'schedule' || github.repository == 'pytorch/pytorch') && github.repository_owner == 'pytorch' }}
with:
triggering_actor: ${{ github.triggering_actor }}

View File

@@ -60,7 +60,7 @@ permissions:
jobs:
get-label-type:
name: get-label-type
-uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
if: ${{ (github.event_name != 'schedule' || github.repository == 'pytorch/pytorch') && github.repository_owner == 'pytorch' }}
with:
triggering_actor: ${{ github.triggering_actor }}
@@ -75,7 +75,7 @@ jobs:
needs: get-label-type
with:
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
-build-environment: linux-jammy-py3.9-gcc11-build
+build-environment: linux-jammy-py3.10-gcc11-build
docker-image-name: ci-image:pytorch-linux-jammy-py3-gcc11-inductor-benchmarks
test-matrix: |
{ include: [
@@ -101,7 +101,7 @@ jobs:
needs: inductor-build
if: github.event.schedule == '0 7 * * *'
with:
-build-environment: linux-jammy-py3.9-gcc11-build
+build-environment: linux-jammy-py3.10-gcc11-build
dashboard-tag: training-false-inference-true-default-true-dynamic-true-cppwrapper-true-aotinductor-true
docker-image: ${{ needs.inductor-build.outputs.docker-image }}
test-matrix: ${{ needs.inductor-build.outputs.test-matrix }}
@@ -118,7 +118,7 @@ jobs:
needs: inductor-build
if: github.event_name == 'workflow_dispatch'
with:
-build-environment: linux-jammy-py3.9-gcc11-build
+build-environment: linux-jammy-py3.10-gcc11-build
dashboard-tag: training-${{ inputs.training }}-inference-${{ inputs.inference }}-default-${{ inputs.default }}-dynamic-${{ inputs.dynamic }}-cppwrapper-${{ inputs.cppwrapper }}-aotinductor-${{ inputs.aotinductor }}
docker-image: ${{ needs.inductor-build.outputs.docker-image }}
test-matrix: ${{ needs.inductor-build.outputs.test-matrix }}
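The `dashboard-tag` values in the hunks above are plain hyphen-joined flag/value pairs (e.g. `training-false-inference-true-...`). A small illustrative sketch of composing and decoding a tag of that shape (the helper names are made up for the example and are not taken from the benchmark scripts):

```python
# Illustrative sketch: build and parse a hyphen-joined dashboard tag of the
# form shown above, e.g. "training-false-inference-true-default-true-...".
# Assumes flag names themselves contain no hyphens.
def make_dashboard_tag(flags: dict[str, bool]) -> str:
    return "-".join(f"{name}-{str(value).lower()}" for name, value in flags.items())

def parse_dashboard_tag(tag: str) -> dict[str, bool]:
    parts = tag.split("-")
    return {parts[i]: parts[i + 1] == "true" for i in range(0, len(parts), 2)}

tag = make_dashboard_tag({
    "training": False, "inference": True, "default": True,
    "dynamic": True, "cppwrapper": True, "aotinductor": True,
})
print(tag)                       # training-false-inference-true-default-true-...
print(parse_dashboard_tag(tag))  # {'training': False, 'inference': True, ...}
```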

View File

@@ -65,7 +65,7 @@ permissions:
jobs:
get-label-type:
name: get-label-type
-uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
if: ${{ (github.event_name != 'schedule' || github.repository == 'pytorch/pytorch') && github.repository_owner == 'pytorch' }}
with:
triggering_actor: ${{ github.triggering_actor }}
@@ -80,7 +80,7 @@ jobs:
needs: get-label-type
with:
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
-build-environment: linux-jammy-py3.9-gcc11-build
+build-environment: linux-jammy-py3.10-gcc11-build
docker-image-name: ci-image:pytorch-linux-jammy-py3-gcc11-inductor-benchmarks
test-matrix: |
{ include: [
@@ -107,7 +107,7 @@ jobs:
needs: inductor-build
if: github.event.schedule == '0 7 * * *'
with:
-build-environment: linux-jammy-py3.9-gcc11-build
+build-environment: linux-jammy-py3.10-gcc11-build
dashboard-tag: training-false-inference-true-default-true-dynamic-true-cppwrapper-true-aotinductor-true-freezing-true
docker-image: ${{ needs.inductor-build.outputs.docker-image }}
test-matrix: ${{ needs.inductor-build.outputs.test-matrix }}
@@ -124,7 +124,7 @@ jobs:
needs: inductor-build
if: github.event_name == 'workflow_dispatch'
with:
-build-environment: linux-jammy-py3.9-gcc11-build
+build-environment: linux-jammy-py3.10-gcc11-build
dashboard-tag: training-${{ inputs.training }}-inference-${{ inputs.inference }}-default-${{ inputs.default }}-dynamic-${{ inputs.dynamic }}-cppwrapper-${{ inputs.cppwrapper }}-aotinductor-${{ inputs.aotinductor }}-freezing-${{ inputs.freezing }}
docker-image: ${{ needs.inductor-build.outputs.docker-image }}
test-matrix: ${{ needs.inductor-build.outputs.test-matrix }}

View File

@@ -70,7 +70,7 @@ permissions:
jobs:
get-label-type:
name: get-label-type
-uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
if: ${{ (github.event_name != 'schedule' || github.repository == 'pytorch/pytorch') && github.repository_owner == 'pytorch' }}
with:
triggering_actor: ${{ github.triggering_actor }}

View File

@@ -22,7 +22,7 @@ permissions:
jobs:
get-default-label-prefix:
name: get-default-label-prefix
-uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
if: ${{ (github.event_name != 'schedule' || github.repository == 'pytorch/pytorch') && github.repository_owner == 'pytorch' }}
with:
triggering_actor: ${{ github.triggering_actor }}
@@ -154,7 +154,7 @@ jobs:
uses: ./.github/workflows/_linux-build.yml
needs: get-default-label-prefix
with:
-build-environment: linux-jammy-py3.9-gcc11-build
+build-environment: linux-jammy-py3.10-gcc11-build
docker-image-name: ci-image:pytorch-linux-jammy-py3-gcc11-inductor-benchmarks
runner_prefix: "${{ needs.get-default-label-prefix.outputs.label-type }}"
test-matrix: |
@@ -200,7 +200,7 @@ jobs:
uses: ./.github/workflows/_linux-test.yml
needs: periodic-dynamo-benchmarks-cpu-build
with:
-build-environment: linux-jammy-py3.9-gcc11-build
+build-environment: linux-jammy-py3.10-gcc11-build
docker-image: ${{ needs.periodic-dynamo-benchmarks-cpu-build.outputs.docker-image }}
test-matrix: ${{ needs.periodic-dynamo-benchmarks-cpu-build.outputs.test-matrix }}
secrets: inherit

View File

@@ -28,7 +28,7 @@ jobs:
get-label-type:
name: get-label-type
-uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
if: ${{ (github.event_name != 'schedule' || github.repository == 'pytorch/pytorch') && github.repository_owner == 'pytorch' }}
with:
triggering_actor: ${{ github.triggering_actor }}

View File

@@ -20,7 +20,7 @@ permissions:
jobs:
get-label-type:
name: get-label-type
-uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
if: ${{ (github.event_name != 'schedule' || github.repository == 'pytorch/pytorch') && github.repository_owner == 'pytorch' }}
with:
triggering_actor: ${{ github.triggering_actor }}

View File

@@ -19,7 +19,7 @@ permissions:
jobs:
get-label-type:
name: get-label-type
-uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
if: ${{ (github.event_name != 'schedule' || github.repository == 'pytorch/pytorch') && github.repository_owner == 'pytorch' }}
with:
triggering_actor: ${{ github.triggering_actor }}
@@ -110,7 +110,7 @@ jobs:
uses: ./.github/workflows/_linux-build.yml
needs: get-label-type
with:
-build-environment: linux-jammy-py3.9-gcc11-build
+build-environment: linux-jammy-py3.10-gcc11-build
docker-image-name: ci-image:pytorch-linux-jammy-py3-gcc11-inductor-benchmarks
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
test-matrix: |
@@ -127,7 +127,7 @@ jobs:
uses: ./.github/workflows/_linux-test.yml
needs: inductor-cpu-build
with:
-build-environment: linux-jammy-py3.9-gcc11-build
+build-environment: linux-jammy-py3.10-gcc11-build
docker-image: ${{ needs.inductor-cpu-build.outputs.docker-image }}
test-matrix: ${{ needs.inductor-cpu-build.outputs.test-matrix }}
secrets: inherit

View File

@@ -35,7 +35,7 @@ jobs:
get-label-type:
name: get-label-type
-uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
+uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@release/2.9
if: ${{ (github.event_name != 'schedule' || github.repository == 'pytorch/pytorch') && github.repository_owner == 'pytorch' }}
with:
triggering_actor: ${{ github.triggering_actor }}
@@ -79,7 +79,7 @@ jobs:
uses: ./.github/workflows/_linux-build.yml
needs: get-label-type
with:
-build-environment: linux-jammy-py3.9-gcc11-build
+build-environment: linux-jammy-py3.10-gcc11-build
docker-image-name: ci-image:pytorch-linux-jammy-py3-gcc11-inductor-benchmarks
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
test-matrix: |
@@ -101,7 +101,7 @@ jobs:
uses: ./.github/workflows/_linux-test.yml
needs: inductor-cpu-build
with:
-build-environment: linux-jammy-py3.9-gcc11-build
+build-environment: linux-jammy-py3.10-gcc11-build
docker-image: ${{ needs.inductor-cpu-build.outputs.docker-image }}
test-matrix: ${{ needs.inductor-cpu-build.outputs.test-matrix }}
secrets: inherit

Some files were not shown because too many files have changed in this diff.