Compare commits

...

26 Commits

Author SHA1 Message Date
699c056479 [ROCm] Include hsa headers for rocm-triton whl (#129235)
* Include hsa headers for rocm-triton whl

* Update triton pin to release/3.0.x tip

* Update .ci/docker/ci_commit_pins/triton-rocm.txt

---------

Co-authored-by: Andrey Talman <atalman@fb.com>
2024-06-21 13:19:23 -04:00
49d2eec960 [custom ops] Switch out references from old landing page to new landi… (#129237)
[custom ops] Switch out references from old landing page to new landing page (#129178)

Test Plan:
- existing tests

Pull Request resolved: https://github.com/pytorch/pytorch/pull/129178
Approved by: https://github.com/albanD
ghstack dependencies: #129177
2024-06-21 09:18:50 -07:00
165e09874b [docs] Redirect custom ops landing page to the correct place (#129177) (#129236)
I'm moving it to pytorch/tutorials
Pull Request resolved: https://github.com/pytorch/pytorch/pull/129177
Approved by: https://github.com/albanD
2024-06-21 09:18:13 -07:00
93c51dc84b Re-enable py3.12 nightly wheel builds and add triton dependency for ROCm (#129161)
* Re-enable py3.12 nightly wheel builds and add triton dependency for ROCm  (#128525)

The llnl-hatchet developers have published the py3.12 binaries on [PyPI](https://pypi.org/project/llnl-hatchet/#files). In fact, looking [here](https://download.pytorch.org/whl/nightly/llnl-hatchet), it seems we already have the py3.12 wheels mirrored. This should allow us to re-enable py3.12 binaries for ROCm.

This PR reverts commit 9d849d4312cd1e62d97b9e9d58979ec78d36c95f.

It also adds the pytorch-triton-rocm dependency for torch wheels on ROCm, since pytorch-triton-rocm py3.12 wheels are now available.

Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/128525
Approved by: https://github.com/malfet

(cherry picked from commit a6ac6447b55bcf910dee5f925c2c17673f162a36)

* Regenerate workflows

* regenerate-2

---------

Co-authored-by: Jithun Nair <jithun.nair@amd.com>
Co-authored-by: atalman <atalman@fb.com>
2024-06-21 10:28:06 -04:00
67a815abd2 [Release only] Temporary change to depend on pytorch-triton (#129232)
[Release only] Temporary change to depend on pytorch-triton
2024-06-21 09:58:07 -04:00
d2e4cc71f1 [inductor][ci] Fix torchbench dependency issue with numpy (#129074)
[inductor][ci] Fix torchbench dependency issue with numpy (#128968)

For some reason, pip always upgrades numpy even when an older version is already installed.
We have to pin numpy to the old version to make this constraint explicit.

Torchbench commit: 23512dbebd

Second attempt to fix #128845

Pull Request resolved: https://github.com/pytorch/pytorch/pull/128968
Approved by: https://github.com/eellison

(cherry picked from commit 118f9ceb7c9ec608a845b40c2142f1a1720b73c9)

Co-authored-by: Xu Zhao <xzhao9@meta.com>
2024-06-21 09:19:22 -04:00
0233f8df5b [ROCm] [Triton] - Include roctracer headers in triton whl (#129227)
Include roctracer header
2024-06-21 09:18:47 -04:00
434bf9559f [Release 2.4] Release only changes for triton 3.0.x build (#129143)
* [Release only changes] Release changes for triton 3.0

* fix
2024-06-20 11:22:10 -04:00
50e57d4f3f Revert "[Release 2.4] Release only changes - use pinned triton." (#129139)
Revert "[Release 2.4] Release only changes - use pinned triton. (#128388)"

This reverts commit 1cd41997e99ae1722be3fe88e1867af5f6779433.
2024-06-20 10:15:27 -04:00
edcc77dadb Remove leftover warning causing log spew (#128837)
Original PR: #128688

This warning was left in by mistake; it is uninformative (the user is doing nothing wrong) and causes log spew in trainings. See #120750 (comment)
2024-06-19 12:06:47 -04:00
0e0a9c5a5c [Inductor] Fix the High Order Op layout issue (#128275) (#128834)
Fix the issue: https://github.com/pytorch/pytorch/issues/127995

- In the current implementation of creating a `FallbackKernel`, the `device` of the `NoneLayout` is set to `None` when the `example_output` returned from `cls.process_kernel` is `None`. 921aa194c7/torch/_inductor/ir.py (L5632-L5649)
- If an `ExternalKernel` scheduler node has a `None` device, the previous buffer is not flushed before codegen of this `ExternalKernel` scheduler node, which causes wrong generated code.
ef2b5ed500/torch/_inductor/scheduler.py (L2701-L2709)

**Test Plan**
```
python -u -m pytest -s -v test/higher_order_ops/test_with_effects.py -k test_compile_inductor_external_op_return_none
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/128275
Approved by: https://github.com/eellison

Co-authored-by: leslie-fang-intel <leslie.fang@intel.com>
2024-06-19 12:05:13 -04:00
4af5000bff [Port][Quant][Inductor] Bug fix: mutation nodes not handled correctly for QLinearPointwiseBinaryPT2E (#128591)
[Quant][Inductor] Bug fix: mutation nodes not handled correctly for QLinearPointwiseBinaryPT2E (#127592)

Fixes #127402

- Revert some changes to `ir.MutationOutput` and inductor/test_flex_attention.py
- Add checks of mutation for QLinearPointwiseBinaryPT2E

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127592
Approved by: https://github.com/leslie-fang-intel, https://github.com/Chillee
2024-06-19 11:46:53 -04:00
562cdc2084 [tp] refactor and fix PrepareModuleInput for DTensor inputs (#128431) (#128719)
As titled, this PR refactors the `PrepareModuleInput` style to have a common
method `prepare_input_arg`, allowing both args and kwargs to reuse this logic.

This also fixes https://github.com/pytorch/pytorch/issues/128365

Pull Request resolved: https://github.com/pytorch/pytorch/pull/128431
Approved by: https://github.com/awgu

(cherry picked from commit 7775fee10f31ee683bd7beee9a5a9829c6574637)
2024-06-19 11:35:06 -04:00
b1d53f07b2 [inductor] fix compile time regression by caching get_gpu_type (#128363) (#128717)
We observed a significant compile time regression in torchtitan when turning
on 2D parallel + torch.compile recently, so I decided to get a deeper
understanding of why.

It turns out this affects **all the trainings** that have functional collectives
captured in the graph, not only 2D parallel (2D parallel was just the
job that happened to have collectives captured in the TP region).

The root cause is that during inductor lowering, we call the comm
analysis pass to get an estimated collective time for each collective
node in the graph, and each of those checks calls `get_gpu_type()`,
which under the hood calls `torch.utils.collect_env.run` to get the GPU
info. However, this call is super expensive! It effectively spawns a new
process and calls `nvidia-smi` to get the GPU info, so the cost is **linear**
in the number of collective nodes in the graph.

see https://github.com/pytorch/pytorch/blob/main/torch/utils/collect_env.py#L75

The fix is to add an LRU cache to the function, so that we only call it
once and reuse the cached result afterwards.

torchtitan benchmark shows:
* before this fix: 2D parallel + fp8 compile time: 6min +
* after this fix: 2D parallel + fp8 compile time: 2min 48s (more than 100% improvement)

There's more room to improve the compile time, but this PR fixes the biggest regression I found so far.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/128363
Approved by: https://github.com/yf225

(cherry picked from commit 8a09940a543d4c2fd23a5c78edbf1ac24d481b45)
2024-06-19 11:31:16 -04:00
86271445d6 [Inductor] Update Intel GPU Triton commit pin. (#124842) (#128615)
Update Intel Triton for the PyTorch 2.4 release.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124842
Approved by: https://github.com/EikanWang

(cherry picked from commit cf7adc2fa1c5c3b8e8cc5464a03823b6752958ad)
2024-06-19 10:31:08 -04:00
d71de3c95c Revert "Make torch_geometric models compatible with export (#123403)"… (#128511)
Revert "Make torch_geometric models compatible with export (#123403)" (#128377)

This reverts commit d78991a7381adb3df5e9b63c365db4506643edce.

This PR reverts https://github.com/pytorch/pytorch/pull/123403 to fix the performance regression as discussed in https://github.com/pytorch/pytorch/issues/127513#issuecomment-2158835653.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/128377
Approved by: https://github.com/jgong5, https://github.com/angelayi, https://github.com/desertfire

(cherry picked from commit 5ef70faaa76364a73cd7f9da2d3f8e23da218b02)
2024-06-19 10:28:01 -04:00
e7dde73d43 [custom_op] stop using nonlocals to store information (#128547) (#128616)
Fixes https://github.com/pytorch/pytorch/issues/128544
Fixes https://github.com/pytorch/pytorch/issues/128535

We had a problem with multithreading where the nonlocals were being
clobbered. In the first place, we stored these nonlocals because we
wanted to ferry information from an autograd.Function.apply to
autograd.Function.forward.

Our new approach is:
- pass the information directly as an input to the
  autograd.Function.apply. This means that the autograd.Function.forward
  will receive the information too.
- this messes up ctx.needs_input_grad, which has an element per input to
  forward. The user should not see the additional information we passed.
  We fix this by temporarily overriding ctx.needs_input_grad to the
  right thing.
- this exposed a bug in that ctx.needs_input_grad wasn't correct for
  TensorList inputs. This PR fixes that too.
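The steps above can be illustrated with a framework-free sketch (all names here are hypothetical; the real change lives in PyTorch's custom-op autograd bridging code):

```python
class Ctx:
    """Stand-in for an autograd.Function ctx object."""
    pass

def forward(ctx, x, y, extra_info):
    # forward receives the ferried info directly as an argument, so no
    # nonlocal is needed and concurrent threads cannot clobber each other.
    ctx.needs_input_grad = (True, False, False)  # one slot per forward arg
    return x + y

def apply(*args, extra_info):
    ctx = Ctx()
    out = forward(ctx, *args, extra_info)
    # The user should not see the extra slot we passed: override
    # needs_input_grad so it covers only the real inputs.
    ctx.needs_input_grad = ctx.needs_input_grad[: len(args)]
    return ctx, out

ctx, out = apply(1, 2, extra_info="metadata")
```

The trimming step is what keeps the extra plumbing argument invisible to user code that inspects `ctx.needs_input_grad`.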

Test Plan:
- existing and new tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/128547
Approved by: https://github.com/williamwen42, https://github.com/soulitzer
2024-06-19 10:23:12 -04:00
9ad8a5b657 Clean up xpu ut to make CI happy (#128383) (#128614)
# Motivation
Before #127611 was merged, the xpu-specific UT `test/test_xpu.py` was skipped temporarily. This PR fixes the UT bug introduced by #127741.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/128383
Approved by: https://github.com/EikanWang

(cherry picked from commit 88974fedd06889bde8d1da297aa2bd10106f7c24)

Co-authored-by: Yu, Guangye <guangye.yu@intel.com>
2024-06-19 09:03:30 -04:00
ed624a0483 Change Dynamo's custom ops warning message to be less spammy (#128456) (#128581)
This is a short-term fix (for 2.4). In the longer term we should
fix https://github.com/pytorch/pytorch/issues/128430

The problem is that warnings.warn calls inside Dynamo print
every time. Python warnings are supposed to print once, unless their
cache is reset: Dynamo ends up resetting that cache every time it runs.

As a workaround we provide our own warn_once cache that is keyed on the
warning msg. I am not worried about this increasing memory usage because
that's effectively what python's warnings.warn cache does.
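A minimal sketch of such a warn-once cache keyed on the message (hypothetical names; the real helper lives inside Dynamo):

```python
import warnings

_warn_once_cache = set()  # survives across runs, unlike __warningregistry__

def warn_once(msg: str) -> None:
    # Keyed on the message text, like Python's own warning registry,
    # but never reset when Dynamo restarts its analysis.
    if msg in _warn_once_cache:
        return
    _warn_once_cache.add(msg)
    warnings.warn(msg)
```

Memory use is bounded by the number of distinct warning messages, which is effectively what Python's own cache stores anyway.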

Test Plan:
- fix tests.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/128456
Approved by: https://github.com/anijain2305
2024-06-19 08:56:23 -04:00
082c4f7e64 [inductor] fix linear add bias pattern (#128473) (#128577)
Fix https://github.com/pytorch/pytorch/issues/128287.
Previously, the assertions in `linear_add_bias` were fragile:
```
assert packed_weight_node.name == "_reorder_linear_weight"
assert transpose_weight_node.name == "permute_default"
```
because the `name` can be changed to `_reorder_linear_weight_id, permute_default_id` if there is more than one reorder/permute.

Checking `target` instead of `name` solves this issue.

The UT is also updated to match more than one `linear_add_bias` pattern to cover this case.
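The failure mode can be shown with a minimal stand-in for an FX graph node (hypothetical class; real pattern-matcher nodes live in torch.fx):

```python
class Node:
    """Minimal stand-in for an FX graph node."""
    def __init__(self, name, target):
        self.name = name      # unique per graph: deduplication appends a suffix
        self.target = target  # the op being called: stable across duplicates

# With more than one permute in the graph, the second node is renamed,
# so a name-based assertion breaks while a target check still holds:
first = Node("permute_default", "permute")
second = Node("permute_default_1", "permute")
```

An `assert node.name == "permute_default"` fails on `second`, while `assert node.target == "permute"` passes for both.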

Co-authored-by: Jiong Gong <jiong.gong@intel.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/128473
Approved by: https://github.com/jgong5

(cherry picked from commit c53d65b3d3d5897c50d622acdd604ddfa8f57687)
2024-06-19 08:55:02 -04:00
459e2aa454 Revert "[cuDNN][SDPA] Remove TORCH_CUDNN_SDPA_ENABLED=1, enable cuDNN SDPA by default on H100 and 2nd on other archs >= sm80 (#125343)" (#128539)
This reverts commit 4c971932e839fc5da2b91906ad028d4654932bca.
2024-06-18 18:07:58 -07:00
6be0234f07 Revert "Deprecate torch._utils.is_compiling() and torch._dynamo.external_utils.is_compiling() (#127690)" (#128542)
This reverts commit 348b181a97abc2e636a6c18e5880a78e5d1dab94.
2024-06-18 18:07:35 -07:00
24a3885ef6 Revert "Set simdlen based on ATEN_CPU_CAPABILITY (#123514)" (#128541)
This reverts commit b66e3f0957b96b058c9b632ca60833d9717a9d8a because it was reverted on main.
2024-06-18 18:07:07 -07:00
62417c6ca9 [dynamo] Fix for #127696 (#128530)
[dynamo] Fix for #127696 (#128358)

Test Plan:
`buck2 test @//mode/dev-nosan //executorch/exir/backend/...`
https://www.internalfb.com/intern/testinfra/testrun/12666373989243932

Differential Revision: D58384518

Pull Request resolved: https://github.com/pytorch/pytorch/pull/128358
Approved by: https://github.com/ydwu4

(cherry picked from commit 4345d98663d31f23492cafc0062f515a47d96a78)

Co-authored-by: Angela Yi <angelayi@meta.com>
2024-06-18 18:54:20 -04:00
1cd41997e9 [Release 2.4] Release only changes - use pinned triton. (#128388)
[Release 2.4] Release only changes - use pinned triton version
2024-06-10 23:19:21 -04:00
c85e2cacd3 [Release 2.4] Release only changes (#128347)
* Release 2.4 - release only changes

* more required changes

* fix

* temp changes for triton release

* fix_lint
2024-06-10 18:37:41 -04:00
144 changed files with 1228 additions and 1416 deletions

View File

@@ -1 +1 @@
-01cbe5045a6898c9a925f01435c8277b2fe6afcc
+21eae954efa5bf584da70324b640288c3ee7aede

View File

@@ -1 +1 @@
-b8c64f64c18d8cac598b3adb355c21e7439c21de
+aac14a3b93f11d781d1d5ebc5400b15ae8df5185

View File

@@ -178,7 +178,7 @@ function install_torchrec_and_fbgemm() {
 function clone_pytorch_xla() {
   if [[ ! -d ./xla ]]; then
-    git clone --recursive --quiet https://github.com/pytorch/xla.git
+    git clone --recursive -b r2.4 https://github.com/pytorch/xla.git
     pushd xla
     # pin the xla hash so that we don't get broken by changes to xla
     git checkout "$(cat ../.github/ci_commit_pins/xla.txt)"

View File

@@ -75,10 +75,9 @@ export PYTORCH_BUILD_NUMBER=1
 TRITON_VERSION=$(cat $PYTORCH_ROOT/.ci/docker/triton_version.txt)
 # Here PYTORCH_EXTRA_INSTALL_REQUIREMENTS is already set for the all the wheel builds hence append TRITON_CONSTRAINT
+TRITON_CONSTRAINT="platform_system == 'Linux' and platform_machine == 'x86_64' and python_version < '3.13'"
 if [[ "$PACKAGE_TYPE" =~ .*wheel.* && -n "${PYTORCH_EXTRA_INSTALL_REQUIREMENTS:-}" ]]; then
-    # Only linux Python < 3.13 are supported wheels for triton
-    TRITON_CONSTRAINT="platform_system == 'Linux' and platform_machine == 'x86_64' and python_version < '3.13'"
-    TRITON_REQUIREMENT="triton==${TRITON_VERSION}; ${TRITON_CONSTRAINT}"
+    TRITON_REQUIREMENT="pytorch-triton==${TRITON_VERSION}; ${TRITON_CONSTRAINT}"
     if [[ -n "$PYTORCH_BUILD_VERSION" && "$PYTORCH_BUILD_VERSION" =~ .*dev.* ]]; then
         TRITON_SHORTHASH=$(cut -c1-10 $PYTORCH_ROOT/.ci/docker/ci_commit_pins/triton.txt)
         TRITON_REQUIREMENT="pytorch-triton==${TRITON_VERSION}+${TRITON_SHORTHASH}; ${TRITON_CONSTRAINT}"
@@ -87,11 +86,11 @@ if [[ "$PACKAGE_TYPE" =~ .*wheel.* && -n "${PYTORCH_EXTRA_INSTALL_REQUIREMENTS:
 fi
 # Set triton via PYTORCH_EXTRA_INSTALL_REQUIREMENTS for triton rocm package
-if [[ "$PACKAGE_TYPE" =~ .*wheel.* && -n "$PYTORCH_BUILD_VERSION" && "$PYTORCH_BUILD_VERSION" =~ .*rocm.* && $(uname) == "Linux" && "$DESIRED_PYTHON" != "3.12" ]]; then
-    TRITON_REQUIREMENT="pytorch-triton-rocm==${TRITON_VERSION}"
+if [[ "$PACKAGE_TYPE" =~ .*wheel.* && -n "$PYTORCH_BUILD_VERSION" && "$PYTORCH_BUILD_VERSION" =~ .*rocm.* && $(uname) == "Linux" ]]; then
+    TRITON_REQUIREMENT="pytorch-triton-rocm==${TRITON_VERSION}; ${TRITON_CONSTRAINT}"
     if [[ -n "$PYTORCH_BUILD_VERSION" && "$PYTORCH_BUILD_VERSION" =~ .*dev.* ]]; then
         TRITON_SHORTHASH=$(cut -c1-10 $PYTORCH_ROOT/.ci/docker/ci_commit_pins/triton-rocm.txt)
-        TRITON_REQUIREMENT="pytorch-triton-rocm==${TRITON_VERSION}+${TRITON_SHORTHASH}"
+        TRITON_REQUIREMENT="pytorch-triton-rocm==${TRITON_VERSION}+${TRITON_SHORTHASH}; ${TRITON_CONSTRAINT}"
     fi
     if [[ -z "${PYTORCH_EXTRA_INSTALL_REQUIREMENTS:-}" ]]; then
         export PYTORCH_EXTRA_INSTALL_REQUIREMENTS="${TRITON_REQUIREMENT}"
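The diff above attaches the PEP 508 environment-marker constraint to the ROCm requirement; a minimal sketch of the resulting requirement string (the version value here is hypothetical — the real one is read from `.ci/docker/triton_version.txt` at build time):

```shell
# Hypothetical version; the real value comes from triton_version.txt.
TRITON_VERSION="3.0.0"
TRITON_CONSTRAINT="platform_system == 'Linux' and platform_machine == 'x86_64' and python_version < '3.13'"
# Version pin plus environment marker, separated by ';' as pip expects:
TRITON_REQUIREMENT="pytorch-triton-rocm==${TRITON_VERSION}; ${TRITON_CONSTRAINT}"
echo "$TRITON_REQUIREMENT"
```

pip evaluates the marker at install time, so the dependency is simply skipped on platforms that do not match (e.g. Python 3.13+).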

View File

@@ -1 +1 @@
-d6015d42d9a1834bc7595c4bd6852562fb80b30b
+23512dbebd44a11eb84afbf53c3c071dd105297e

View File

@@ -1 +1 @@
-6f0b61e5d782913a0fc7743812f2a8e522189111
+r2.4

View File

@@ -93,6 +93,8 @@ done
 # Copy Include Files
 cp -r $ROCM_HOME/include/hip $TRITON_ROCM_DIR/include
+cp -r $ROCM_HOME/include/roctracer $TRITON_ROCM_DIR/include
+cp -r $ROCM_HOME/include/hsa $TRITON_ROCM_DIR/include
 # Copy linker
 mkdir -p $TRITON_ROCM_DIR/llvm/bin

View File

@@ -38,9 +38,9 @@ SUPPORTED_PERIODICAL_MODES: Dict[str, Callable[[Optional[str]], bool]] = {
 }
 # The link to the published list of disabled jobs
-DISABLED_JOBS_URL = "https://ossci-metrics.s3.amazonaws.com/disabled-jobs.json"
+DISABLED_JOBS_URL = "https://ossci-metrics.s3.amazonaws.com/disabled-jobs.json?versionId=tIl0Qo224T_NDVw0dtG4hU1cZJM97inV"
 # and unstable jobs
-UNSTABLE_JOBS_URL = "https://ossci-metrics.s3.amazonaws.com/unstable-jobs.json"
+UNSTABLE_JOBS_URL = "https://ossci-metrics.s3.amazonaws.com/unstable-jobs.json?versionId=GPyRZRsOo26Gfk_WjAoNNxEMGXkIxIes"
 # Some constants used to handle disabled and unstable jobs
 JOB_NAME_SEP = "/"

View File

@@ -347,10 +347,6 @@ def generate_wheels_matrix(
     for python_version in python_versions:
         for arch_version in arches:
             gpu_arch_type = arch_type(arch_version)
-            # Disable py3.12 builds for ROCm because of triton dependency
-            # on llnl-hatchet, which doesn't have py3.12 wheels available
-            if gpu_arch_type == "rocm" and python_version == "3.12":
-                continue
             gpu_arch_version = (
                 ""
                 if arch_version == "cpu"

View File

@@ -8,7 +8,7 @@
 # NOTE: If testing pytorch/builder changes you can change this variable to change what pytorch/builder reference
 # the binary builds will check out
 {%- set builder_repo = "pytorch/builder" -%}
-{%- set builder_branch = "main" -%}
+{%- set builder_branch = "release/2.4" -%}
 {%- macro concurrency(build_environment) -%}
 concurrency:

View File

@@ -113,8 +113,8 @@ jobs:
       with:
         name: !{{ config["build_name"] }}
         path: "${{ runner.temp }}/artifacts/"
-    !{{ common.checkout(deep_clone=False, directory="pytorch") }}
-    !{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch) }}
+    !{{ common.checkout(deep_clone=False, directory="pytorch", checkout_pr_head=False) }}
+    !{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch, checkout_pr_head=False) }}
     - name: ROCm set GPU_FLAG
       run: |
         echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"

View File

@@ -81,8 +81,8 @@ jobs:
         elif [ -d "/Applications/Xcode_13.3.1.app" ]; then
           echo "DEVELOPER_DIR=/Applications/Xcode_13.3.1.app/Contents/Developer" >> "${GITHUB_ENV}"
         fi
-    !{{ common.checkout(deep_clone=False, directory="pytorch") }}
-    !{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch) }}
+    !{{ common.checkout(deep_clone=False, directory="pytorch", checkout_pr_head=False) }}
+    !{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch, checkout_pr_head=False) }}
     - name: Install sccache (only for non-forked PRs, and pushes to trunk)
       uses: nick-fields/retry@v2.8.2
       if: ${{ github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository }}

View File

@@ -65,8 +65,8 @@ jobs:
     steps:
     !{{ common.setup_ec2_windows() }}
     !{{ set_runner_specific_vars() }}
-    !{{ common.checkout(deep_clone=False, directory="pytorch") }}
-    !{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch) }}
+    !{{ common.checkout(deep_clone=False, directory="pytorch", checkout_pr_head=False) }}
+    !{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch, checkout_pr_head=False) }}
     - name: Populate binary env
       shell: bash
       run: |
@@ -105,8 +105,8 @@ jobs:
       with:
         name: !{{ config["build_name"] }}
        path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
-    !{{ common.checkout(deep_clone=False, directory="pytorch") }}
-    !{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch) }}
+    !{{ common.checkout(deep_clone=False, directory="pytorch", checkout_pr_head=False) }}
+    !{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch, checkout_pr_head=False) }}
     - name: Populate binary env
       shell: bash
       run: |

View File

@@ -37,7 +37,7 @@ jobs:
       keep-going: ${{ steps.filter.outputs.keep-going }}
     steps:
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
         with:
           fetch-depth: 1
           submodules: false
@@ -59,25 +59,25 @@ jobs:
     runs-on: ${{ matrix.runner }}
     steps:
       - name: Setup SSH (Click me for login details)
-        uses: pytorch/test-infra/.github/actions/setup-ssh@main
+        uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.4
         with:
           github-secret: ${{ secrets.GITHUB_TOKEN }}
       # [see note: pytorch repo ref]
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
       - name: Setup Linux
        uses: ./.github/actions/setup-linux
       - name: Calculate docker image
         id: calculate-docker-image
-        uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+        uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.4
         with:
           docker-image-name: ${{ inputs.docker-image-name }}
       - name: Pull docker image
-        uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+        uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.4
         with:
           docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
@@ -141,5 +141,5 @@ jobs:
         if: always()
       - name: Teardown Linux
-        uses: pytorch/test-infra/.github/actions/teardown-linux@main
+        uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.4
         if: always()

View File

@@ -37,7 +37,7 @@ jobs:
       keep-going: ${{ steps.filter.outputs.keep-going }}
     steps:
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
         with:
           fetch-depth: 1
           submodules: false
@@ -59,25 +59,25 @@ jobs:
     runs-on: ${{ matrix.runner }}
     steps:
       - name: Setup SSH (Click me for login details)
-        uses: pytorch/test-infra/.github/actions/setup-ssh@main
+        uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.4
         with:
           github-secret: ${{ secrets.GITHUB_TOKEN }}
       # [see note: pytorch repo ref]
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
       - name: Setup Linux
        uses: ./.github/actions/setup-linux
       - name: Calculate docker image
         id: calculate-docker-image
-        uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+        uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.4
         with:
           docker-image-name: ${{ inputs.docker-image-name }}
       - name: Pull docker image
-        uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+        uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.4
         with:
           docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
@@ -186,5 +186,5 @@ jobs:
         if: always()
       - name: Teardown Linux
-        uses: pytorch/test-infra/.github/actions/teardown-linux@main
+        uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.4
         if: always()

View File

@@ -42,7 +42,7 @@ jobs:
       reenabled-issues: ${{ steps.filter.outputs.reenabled-issues }}
     steps:
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
         with:
           fetch-depth: 1
           submodules: false
@@ -64,25 +64,25 @@ jobs:
     runs-on: ${{ matrix.runner }}
     steps:
       - name: Setup SSH (Click me for login details)
-        uses: pytorch/test-infra/.github/actions/setup-ssh@main
+        uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.4
         with:
           github-secret: ${{ secrets.GITHUB_TOKEN }}
       # [see note: pytorch repo ref]
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
       - name: Setup Linux
        uses: ./.github/actions/setup-linux
       - name: Calculate docker image
         id: calculate-docker-image
-        uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+        uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.4
         with:
           docker-image-name: ${{ inputs.docker-image-name }}
       - name: Pull docker image
-        uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+        uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.4
         with:
           docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
@@ -92,7 +92,7 @@ jobs:
         run: echo "IN_ARC_RUNNER=$([ -f /.inarc ] && echo true || echo false)" >> "$GITHUB_OUTPUT"
       - name: Install nvidia driver, nvidia-docker runtime, set GPU_FLAG
-        uses: pytorch/test-infra/.github/actions/setup-nvidia@main
+        uses: pytorch/test-infra/.github/actions/setup-nvidia@release/2.4
         if: ${{ inputs.cuda-version != 'cpu' && steps.check_arc_runner.outputs.IN_ARC_RUNNER == 'false' }}
       - name: Output disk space left
@@ -201,5 +201,5 @@ jobs:
         file-suffix: bazel-${{ github.job }}_${{ steps.get-job-id.outputs.job-id }}
       - name: Teardown Linux
-        uses: pytorch/test-infra/.github/actions/teardown-linux@main
+        uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.4
         if: always()

View File

@@ -145,13 +145,13 @@ jobs:
       - name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
         if: inputs.build_environment != 'linux-s390x-binary-manywheel'
-        uses: pytorch/test-infra/.github/actions/setup-ssh@main
+        uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.4
         continue-on-error: true
         with:
           github-secret: ${{ secrets.github-token }}
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
         with:
           no-sudo: ${{ inputs.build_environment == 'linux-aarch64-binary-manywheel' || inputs.build_environment == 'linux-s390x-binary-manywheel' }}
@@ -181,7 +181,6 @@ jobs:
       - name: Checkout PyTorch to pytorch dir
         uses: malfet/checkout@silent-checkout
         with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -195,7 +194,7 @@ jobs:
       - name: Checkout pytorch/builder to builder dir
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.4
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -221,7 +220,7 @@ jobs:
       - name: Pull Docker image
         if: ${{ steps.filter.outputs.is-test-matrix-empty == 'False' && inputs.build_environment != 'linux-s390x-binary-manywheel' }}
-        uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+        uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.4
         with:
           docker-image: ${{ inputs.DOCKER_IMAGE }}
@@ -278,7 +277,7 @@ jobs:
       - name: Teardown Linux
         if: always() && inputs.build_environment != 'linux-s390x-binary-manywheel'
-        uses: pytorch/test-infra/.github/actions/teardown-linux@main
+        uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.4
       - name: Chown workspace
         if: always() && inputs.build_environment != 'linux-s390x-binary-manywheel'

View File

@@ -128,14 +128,14 @@ jobs:
       - name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
         if: inputs.build_environment != 'linux-s390x-binary-manywheel'
-        uses: pytorch/test-infra/.github/actions/setup-ssh@main
+        uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.4
         continue-on-error: true
         with:
           github-secret: ${{ secrets.github-token }}
       # Setup the environment
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
         with:
           no-sudo: ${{ inputs.build_environment == 'linux-aarch64-binary-manywheel' || inputs.build_environment == 'linux-s390x-binary-manywheel' }}
@@ -158,7 +158,6 @@ jobs:
       - name: Checkout PyTorch to pytorch dir
         uses: malfet/checkout@silent-checkout
         with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
@@ -171,7 +170,7 @@ jobs:
       - name: Checkout pytorch/builder to builder dir
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.4
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -202,12 +201,12 @@ jobs:
           path: "${{ runner.temp }}/artifacts/"
       - name: Install nvidia driver, nvidia-docker runtime, set GPU_FLAG
-        uses: pytorch/test-infra/.github/actions/setup-nvidia@main
+        uses: pytorch/test-infra/.github/actions/setup-nvidia@release/2.4
         if: ${{ inputs.GPU_ARCH_TYPE == 'cuda' && steps.filter.outputs.is-test-matrix-empty == 'False' }}
       - name: Pull Docker image
         if: ${{ steps.filter.outputs.is-test-matrix-empty == 'False' && inputs.build_environment != 'linux-s390x-binary-manywheel' }}
-        uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+        uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.4
         with:
           docker-image: ${{ inputs.DOCKER_IMAGE }}
@@ -217,7 +216,7 @@ jobs:
       - name: Teardown Linux
         if: always() && inputs.build_environment != 'linux-s390x-binary-manywheel'
-        uses: pytorch/test-infra/.github/actions/teardown-linux@main
+        uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.4
       - name: Chown workspace
         if: always() && inputs.build_environment != 'linux-s390x-binary-manywheel'

@@ -95,7 +95,7 @@ jobs:
       SHA1: ${{ github.event.pull_request.head.sha || github.sha }}
     steps:
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
         with:
           no-sudo: true

@@ -23,7 +23,7 @@ jobs:
       keep-going: ${{ steps.filter.outputs.keep-going }}
     steps:
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
         with:
           fetch-depth: 1
           submodules: false
@@ -44,7 +44,7 @@ jobs:
     runs-on: ${{ matrix.runner }}
     steps:
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
       - name: Set up JDK 8
         uses: actions/setup-java@v3
@@ -53,7 +53,7 @@ jobs:
           distribution: 'temurin'
       - name: Setup miniconda
-        uses: pytorch/test-infra/.github/actions/setup-miniconda@main
+        uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.4
         with:
           python-version: 3.8
           environment-file: .github/requirements/conda-env-${{ runner.os }}-${{ runner.arch }}

@@ -80,7 +80,7 @@ jobs:
     name: build-docs-${{ matrix.docs_type }}-${{ inputs.push }}
     steps:
       - name: Setup SSH (Click me for login details)
-        uses: pytorch/test-infra/.github/actions/setup-ssh@main
+        uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.4
         with:
           github-secret: ${{ secrets.GITHUB_TOKEN }}
           instructions: |
@@ -91,7 +91,7 @@ jobs:
       # [see note: pytorch repo ref]
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
       - name: Setup Linux
         uses: ./.github/actions/setup-linux
@@ -106,12 +106,12 @@ jobs:
       - name: Calculate docker image
         id: calculate-docker-image
-        uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+        uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.4
         with:
           docker-image-name: ${{ inputs.docker-image }}
       - name: Pull docker image
-        uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+        uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.4
         with:
           docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
@@ -218,5 +218,5 @@ jobs:
           s3-prefix: pytorch/pytorch/${{ github.event.pull_request.number }}/functorchdocs
       - name: Teardown Linux
-        uses: pytorch/test-infra/.github/actions/teardown-linux@main
+        uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.4
         if: always()

@@ -46,7 +46,7 @@ jobs:
       keep-going: ${{ steps.filter.outputs.keep-going }}
     steps:
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
         with:
           fetch-depth: 1
           submodules: false
@@ -80,7 +80,7 @@ jobs:
     steps:
       # [see note: pytorch repo ref]
      - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
       - name: Populate CI build options
         shell: bash
@@ -102,7 +102,7 @@ jobs:
           brew install libtool
       - name: Setup miniconda for iOS
-        uses: pytorch/test-infra/.github/actions/setup-miniconda@main
+        uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.4
         with:
           python-version: "3.9"
           environment-file: .github/requirements/conda-env-iOS.txt

@@ -81,7 +81,7 @@ jobs:
       test-matrix: ${{ steps.linux-build.outputs.test-matrix }}
     steps:
       - name: Setup SSH (Click me for login details)
-        uses: pytorch/test-infra/.github/actions/setup-ssh@main
+        uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.4
         with:
           github-secret: ${{ secrets.GITHUB_TOKEN }}
@@ -90,7 +90,7 @@ jobs:
       # checkout because when we run this action we don't *have* a local
       # checkout. In other cases you should prefer a local checkout.
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
       - name: Linux Build
         id: linux-build

@@ -86,7 +86,7 @@ jobs:
       # checkout because when we run this action we don't *have* a local
       # checkout. In other cases you should prefer a local checkout.
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
       - name: Linux Build
         id: linux-build

@@ -90,7 +90,7 @@ jobs:
       test-matrix: ${{ steps.filter.outputs.test-matrix }}
     steps:
       - name: Setup SSH (Click me for login details)
-        uses: pytorch/test-infra/.github/actions/setup-ssh@main
+        uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.4
         with:
           github-secret: ${{ secrets.GITHUB_TOKEN }}
@@ -99,7 +99,7 @@ jobs:
       # checkout because when we run this action we don't *have* a local
       # checkout. In other cases you should prefer a local checkout.
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
       - name: Setup Linux
         uses: ./.github/actions/setup-linux
@@ -114,7 +114,7 @@ jobs:
       - name: Calculate docker image
         id: calculate-docker-image
-        uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+        uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.4
         with:
           docker-image-name: ${{ inputs.docker-image-name }}
@@ -128,7 +128,7 @@ jobs:
           echo "docker pull ghcr.io/pytorch/ci-image:${tag/:/-}"
       - name: Pull docker image
-        uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+        uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.4
         with:
           docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
@@ -238,5 +238,5 @@ jobs:
           s3-bucket: ${{ inputs.s3-bucket }}
       - name: Teardown Linux
-        uses: pytorch/test-infra/.github/actions/teardown-linux@main
+        uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.4
         if: always()

@@ -67,7 +67,7 @@ jobs:
     timeout-minutes: ${{ matrix.mem_leak_check == 'mem_leak_check' && 600 || inputs.timeout-minutes }}
     steps:
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
       - name: Linux Test
         id: linux-test

@@ -68,7 +68,7 @@ jobs:
     timeout-minutes: ${{ matrix.mem_leak_check == 'mem_leak_check' && 600 || inputs.timeout-minutes }}
     steps:
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
       - name: Linux Test
         id: linux-test

@@ -67,7 +67,7 @@ jobs:
     timeout-minutes: ${{ matrix.mem_leak_check == 'mem_leak_check' && 600 || inputs.timeout-minutes }}
     steps:
       - name: Setup SSH (Click me for login details)
-        uses: pytorch/test-infra/.github/actions/setup-ssh@main
+        uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.4
         if: ${{ !contains(matrix.runner, 'gcp.a100') }}
         with:
           github-secret: ${{ secrets.GITHUB_TOKEN }}
@@ -76,7 +76,7 @@ jobs:
             docker exec -it $(docker container ps --format '{{.ID}}') bash
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
       - name: Setup Linux
         uses: ./.github/actions/setup-linux
@@ -91,7 +91,7 @@ jobs:
       - name: Calculate docker image
         id: calculate-docker-image
-        uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+        uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.4
         with:
           docker-image-name: ${{ inputs.docker-image }}
@@ -105,7 +105,7 @@ jobs:
           echo "docker pull ghcr.io/pytorch/ci-image:${tag/:/-}"
       - name: Pull docker image
-        uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+        uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.4
         with:
           docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
@@ -116,7 +116,7 @@ jobs:
       - name: Install nvidia driver, nvidia-docker runtime, set GPU_FLAG
         id: install-nvidia-driver
-        uses: pytorch/test-infra/.github/actions/setup-nvidia@main
+        uses: pytorch/test-infra/.github/actions/setup-nvidia@release/2.4
         if: ${{ contains(inputs.build-environment, 'cuda') && !contains(matrix.config, 'nogpu') && steps.check_arc_runner.outputs.IN_ARC_RUNNER == 'false' }}
       - name: Lock NVIDIA A100 40GB Frequency
@@ -333,7 +333,7 @@ jobs:
           path: ./**/core.[1-9]*
       - name: Teardown Linux
-        uses: pytorch/test-infra/.github/actions/teardown-linux@main
+        uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.4
         if: always()
       # NB: We are currently having an intermittent GPU-related issue on G5 runners with

@@ -71,11 +71,11 @@ jobs:
       test-matrix: ${{ steps.filter.outputs.test-matrix }}
     steps:
       - name: Clean up disk space before running MacOS workflow
-        uses: pytorch/test-infra/.github/actions/check-disk-space@main
+        uses: pytorch/test-infra/.github/actions/check-disk-space@release/2.4
       # [see note: pytorch repo ref]
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
       - name: Set xcode version
         env:
@@ -87,7 +87,7 @@ jobs:
       - name: Setup miniconda
         if: inputs.environment-file == ''
-        uses: pytorch/test-infra/.github/actions/setup-miniconda@main
+        uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.4
         with:
           python-version: ${{ inputs.python-version }}
           environment-file: .github/requirements/conda-env-${{ runner.os }}-${{ runner.arch }}
@@ -97,7 +97,7 @@ jobs:
       # environment even though the arch is x86-64
       - name: Setup miniconda using the provided environment file
         if: inputs.environment-file != ''
-        uses: pytorch/test-infra/.github/actions/setup-miniconda@main
+        uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.4
         with:
           python-version: ${{ inputs.python-version }}
           environment-file: ${{ inputs.environment-file }}
@@ -207,4 +207,4 @@ jobs:
       - name: Clean up disk space
         if: always()
         continue-on-error: true
-        uses: pytorch/test-infra/.github/actions/check-disk-space@main
+        uses: pytorch/test-infra/.github/actions/check-disk-space@release/2.4

@@ -40,7 +40,7 @@ jobs:
       reenabled-issues: ${{ steps.filter.outputs.reenabled-issues }}
     steps:
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
         with:
           submodules: false
@@ -81,7 +81,7 @@ jobs:
           use-gha: true
       - name: Setup miniconda
-        uses: pytorch/test-infra/.github/actions/setup-miniconda@main
+        uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.4
         with:
           python-version: ${{ inputs.python-version }}
           environment-file: .github/requirements/conda-env-${{ runner.os }}-${{ runner.arch }}
@@ -159,4 +159,4 @@ jobs:
       - name: Clean up disk space
         if: always()
         continue-on-error: true
-        uses: pytorch/test-infra/.github/actions/check-disk-space@main
+        uses: pytorch/test-infra/.github/actions/check-disk-space@release/2.4

@@ -74,11 +74,11 @@ jobs:
           done
       - name: Clean up disk space before running MacOS workflow
-        uses: pytorch/test-infra/.github/actions/check-disk-space@main
+        uses: pytorch/test-infra/.github/actions/check-disk-space@release/2.4
       # [see note: pytorch repo ref]
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
       - name: Download build artifacts
         uses: ./.github/actions/download-build-artifacts
@@ -93,7 +93,7 @@ jobs:
           use-gha: true
       - name: Setup miniconda
-        uses: pytorch/test-infra/.github/actions/setup-miniconda@main
+        uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.4
         with:
           python-version: ${{ inputs.python-version }}
           environment-file: .github/requirements/conda-env-${{ runner.os }}-${{ runner.arch }}
@@ -216,4 +216,4 @@ jobs:
       - name: Clean up disk space
         if: always()
         continue-on-error: true
-        uses: pytorch/test-infra/.github/actions/check-disk-space@main
+        uses: pytorch/test-infra/.github/actions/check-disk-space@release/2.4

@@ -58,7 +58,7 @@ jobs:
     steps:
       # [see note: pytorch repo ref]
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
         with:
           no-sudo: true
@@ -80,12 +80,12 @@ jobs:
       - name: Calculate docker image
         id: calculate-docker-image
-        uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+        uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.4
         with:
           docker-image-name: ${{ inputs.docker-image }}
       - name: Pull docker image
-        uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+        uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.4
         with:
           docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}

@@ -23,7 +23,7 @@ jobs:
       keep-going: ${{ steps.filter.outputs.keep-going }}
     steps:
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
         with:
           fetch-depth: 1
           submodules: false
@@ -54,10 +54,10 @@ jobs:
       SUPPORT_ABI: '${{ matrix.support_abi }}'
     steps:
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
       - name: Setup miniconda
-        uses: pytorch/test-infra/.github/actions/setup-miniconda@main
+        uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.4
         with:
           python-version: 3.8
           environment-file: .github/requirements/conda-env-${{ runner.os }}-${{ runner.arch }}.txt

@@ -32,7 +32,7 @@ jobs:
       USERNAME: ${{ inputs.user_name }}
     steps:
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
         with:
           fetch-depth: 1
           submodules: true

@@ -60,10 +60,10 @@ jobs:
           git config --global core.fsmonitor false
       - name: Clean up leftover processes on non-ephemeral Windows runner
-        uses: pytorch/test-infra/.github/actions/cleanup-runner@main
+        uses: pytorch/test-infra/.github/actions/cleanup-runner@release/2.4
       - name: Setup SSH (Click me for login details)
-        uses: pytorch/test-infra/.github/actions/setup-ssh@main
+        uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.4
         with:
           github-secret: ${{ secrets.GITHUB_TOKEN }}
           instructions: |
@@ -78,7 +78,7 @@ jobs:
       # [see note: pytorch repo ref]
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
         with:
           no-sudo: true

@@ -54,10 +54,10 @@ jobs:
           git config --global core.fsmonitor false
       - name: Clean up leftover processes on non-ephemeral Windows runner
-        uses: pytorch/test-infra/.github/actions/cleanup-runner@main
+        uses: pytorch/test-infra/.github/actions/cleanup-runner@release/2.4
       - name: Setup SSH (Click me for login details)
-        uses: pytorch/test-infra/.github/actions/setup-ssh@main
+        uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.4
         with:
           github-secret: ${{ secrets.GITHUB_TOKEN }}
           instructions: |
@@ -73,7 +73,7 @@ jobs:
       # [see note: pytorch repo ref]
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
         with:
           no-sudo: true

@@ -54,7 +54,7 @@ jobs:
     steps:
       # [see note: pytorch repo ref]
      - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
       - name: Setup XPU
         uses: ./.github/actions/setup-xpu
@@ -72,12 +72,12 @@ jobs:
       - name: Calculate docker image
         id: calculate-docker-image
-        uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+        uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.4
         with:
           docker-image-name: ${{ inputs.docker-image }}
       - name: Pull docker image
-        uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+        uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.4
         with:
           docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}

@@ -3,7 +3,7 @@ name: Build Triton wheels
 on:
   push:
     branches:
-      - main
+      - release/2.4
     tags:
       # NOTE: Binary build pipelines should only get triggered on release candidate builds
       # Release candidate tags look like: v1.11.0-rc1
@@ -47,12 +47,12 @@ jobs:
       BUILD_DEVICE: ${{ matrix.device }}
     steps:
       - name: Setup SSH (Click me for login details)
-        uses: pytorch/test-infra/.github/actions/setup-ssh@main
+        uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.4
         with:
           github-secret: ${{ secrets.GITHUB_TOKEN }}
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
         with:
           submodules: false
@@ -60,7 +60,7 @@ jobs:
         uses: ./.github/actions/setup-linux
       - name: Pull Docker image
-        uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+        uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.4
         with:
           docker-image: ${{ env.DOCKER_IMAGE }}
@@ -124,7 +124,7 @@ jobs:
           path: ${{ runner.temp }}/artifacts/*
       - name: Teardown Linux
-        uses: pytorch/test-infra/.github/actions/teardown-linux@main
+        uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.4
         if: always()
   upload-wheel:
@@ -209,12 +209,12 @@ jobs:
       PY_VERS: ${{ matrix.py_vers }}
     steps:
       - name: Setup SSH (Click me for login details)
-        uses: pytorch/test-infra/.github/actions/setup-ssh@main
+        uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.4
         with:
           github-secret: ${{ secrets.GITHUB_TOKEN }}
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
         with:
           submodules: false
@@ -222,7 +222,7 @@ jobs:
         uses: ./.github/actions/setup-linux
       - name: Pull Docker image
-        uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+        uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.4
         with:
           docker-image: ${{ env.DOCKER_IMAGE }}
@@ -257,7 +257,7 @@ jobs:
           path: ${{ runner.temp }}/artifacts/*
       - name: Teardown Linux
-        uses: pytorch/test-infra/.github/actions/teardown-linux@main
+        uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.4
         if: always()
   upload-conda:

@@ -31,7 +31,7 @@ jobs:
     runs-on: linux.20_04.4x
     steps:
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
         with:
           submodules: false
           fetch-depth: 1

@@ -11,7 +11,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
       - name: Run close_nonexistent_disable_issues.py
         env:

@@ -78,21 +78,21 @@ jobs:
 # [see note: pytorch repo ref]
 # deep clone (fetch-depth 0) required for git merge-base
 - name: Checkout PyTorch
-uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
 - name: Setup Linux
 uses: ./.github/actions/setup-linux
 - name: Build docker image
 id: build-docker-image
-uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.4
 with:
 docker-image-name: ${{ matrix.docker-image-name }}
 always-rebuild: true
 push: true
 - name: Pull docker image
-uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.4
 with:
 docker-image: ${{ steps.build-docker-image.outputs.docker-image }}
@@ -124,5 +124,5 @@ jobs:
 if: always()
 - name: Teardown Linux
-uses: pytorch/test-infra/.github/actions/teardown-linux@main
+uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.4
 if: always()


@@ -41,7 +41,7 @@ jobs:
 matrix: ${{ steps.generate-matrix.outputs.matrix }}
 steps:
 - name: Checkout PyTorch
-uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
 with:
 fetch-depth: 1
 submodules: true
@@ -69,7 +69,7 @@ jobs:
 CUDNN_VERSION: ${{ matrix.cudnn_version }}
 steps:
 - name: Setup SSH (Click me for login details)
-uses: pytorch/test-infra/.github/actions/setup-ssh@main
+uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.4
 with:
 github-secret: ${{ secrets.GITHUB_TOKEN }}
 # [see note: pytorch repo ref]
@@ -147,12 +147,12 @@ jobs:
 fi
 - name: Teardown Linux
-uses: pytorch/test-infra/.github/actions/teardown-linux@main
+uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.4
 if: always()
 validate:
 needs: build
-uses: pytorch/builder/.github/workflows/validate-docker-images.yml@main
+uses: pytorch/builder/.github/workflows/validate-docker-images.yml@release/2.4
 with:
 channel: nightly
 ref: main


@@ -48,7 +48,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu-aarch64
-DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main
+DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.4
 DESIRED_PYTHON: "3.8"
 runs_on: linux.arm64.m7g.4xlarge
 ALPINE_IMAGE: "arm64v8/alpine"
@@ -69,7 +69,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu-aarch64
-DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main
+DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.4
 DESIRED_PYTHON: "3.8"
 build_name: manywheel-py3_8-cpu-aarch64
 build_environment: linux-aarch64-binary-manywheel
@@ -91,7 +91,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu-aarch64
-DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main
+DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.4
 DESIRED_PYTHON: "3.8"
 build_name: manywheel-py3_8-cpu-aarch64
 secrets:
@@ -111,7 +111,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cu124
 GPU_ARCH_TYPE: cuda-aarch64
-DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cuda12.4-main
+DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cuda12.4-2.4
 DESIRED_DEVTOOLSET: cxx11-abi
 DESIRED_PYTHON: "3.8"
 runs_on: linux.arm64.m7g.4xlarge
@@ -135,7 +135,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cu124
 GPU_ARCH_TYPE: cuda-aarch64
-DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cuda12.4-main
+DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cuda12.4-2.4
 DESIRED_DEVTOOLSET: cxx11-abi
 DESIRED_PYTHON: "3.8"
 build_name: manywheel-py3_8-cuda-aarch64
@@ -156,7 +156,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu-aarch64
-DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main
+DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.4
 DESIRED_PYTHON: "3.9"
 runs_on: linux.arm64.m7g.4xlarge
 ALPINE_IMAGE: "arm64v8/alpine"
@@ -177,7 +177,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu-aarch64
-DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main
+DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.4
 DESIRED_PYTHON: "3.9"
 build_name: manywheel-py3_9-cpu-aarch64
 build_environment: linux-aarch64-binary-manywheel
@@ -199,7 +199,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu-aarch64
-DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main
+DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.4
 DESIRED_PYTHON: "3.9"
 build_name: manywheel-py3_9-cpu-aarch64
 secrets:
@@ -219,7 +219,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cu124
 GPU_ARCH_TYPE: cuda-aarch64
-DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cuda12.4-main
+DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cuda12.4-2.4
 DESIRED_DEVTOOLSET: cxx11-abi
 DESIRED_PYTHON: "3.9"
 runs_on: linux.arm64.m7g.4xlarge
@@ -243,7 +243,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cu124
 GPU_ARCH_TYPE: cuda-aarch64
-DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cuda12.4-main
+DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cuda12.4-2.4
 DESIRED_DEVTOOLSET: cxx11-abi
 DESIRED_PYTHON: "3.9"
 build_name: manywheel-py3_9-cuda-aarch64
@@ -264,7 +264,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu-aarch64
-DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main
+DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.4
 DESIRED_PYTHON: "3.10"
 runs_on: linux.arm64.m7g.4xlarge
 ALPINE_IMAGE: "arm64v8/alpine"
@@ -285,7 +285,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu-aarch64
-DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main
+DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.4
 DESIRED_PYTHON: "3.10"
 build_name: manywheel-py3_10-cpu-aarch64
 build_environment: linux-aarch64-binary-manywheel
@@ -307,7 +307,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu-aarch64
-DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main
+DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.4
 DESIRED_PYTHON: "3.10"
 build_name: manywheel-py3_10-cpu-aarch64
 secrets:
@@ -327,7 +327,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cu124
 GPU_ARCH_TYPE: cuda-aarch64
-DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cuda12.4-main
+DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cuda12.4-2.4
 DESIRED_DEVTOOLSET: cxx11-abi
 DESIRED_PYTHON: "3.10"
 runs_on: linux.arm64.m7g.4xlarge
@@ -351,7 +351,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cu124
 GPU_ARCH_TYPE: cuda-aarch64
-DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cuda12.4-main
+DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cuda12.4-2.4
 DESIRED_DEVTOOLSET: cxx11-abi
 DESIRED_PYTHON: "3.10"
 build_name: manywheel-py3_10-cuda-aarch64
@@ -372,7 +372,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu-aarch64
-DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main
+DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.4
 DESIRED_PYTHON: "3.11"
 runs_on: linux.arm64.m7g.4xlarge
 ALPINE_IMAGE: "arm64v8/alpine"
@@ -393,7 +393,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu-aarch64
-DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main
+DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.4
 DESIRED_PYTHON: "3.11"
 build_name: manywheel-py3_11-cpu-aarch64
 build_environment: linux-aarch64-binary-manywheel
@@ -415,7 +415,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu-aarch64
-DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main
+DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.4
 DESIRED_PYTHON: "3.11"
 build_name: manywheel-py3_11-cpu-aarch64
 secrets:
@@ -435,7 +435,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cu124
 GPU_ARCH_TYPE: cuda-aarch64
-DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cuda12.4-main
+DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cuda12.4-2.4
 DESIRED_DEVTOOLSET: cxx11-abi
 DESIRED_PYTHON: "3.11"
 runs_on: linux.arm64.m7g.4xlarge
@@ -459,7 +459,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cu124
 GPU_ARCH_TYPE: cuda-aarch64
-DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cuda12.4-main
+DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cuda12.4-2.4
 DESIRED_DEVTOOLSET: cxx11-abi
 DESIRED_PYTHON: "3.11"
 build_name: manywheel-py3_11-cuda-aarch64
@@ -480,7 +480,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu-aarch64
-DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main
+DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.4
 DESIRED_PYTHON: "3.12"
 runs_on: linux.arm64.m7g.4xlarge
 ALPINE_IMAGE: "arm64v8/alpine"
@@ -501,7 +501,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu-aarch64
-DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main
+DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.4
 DESIRED_PYTHON: "3.12"
 build_name: manywheel-py3_12-cpu-aarch64
 build_environment: linux-aarch64-binary-manywheel
@@ -523,7 +523,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu-aarch64
-DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-main
+DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.4
 DESIRED_PYTHON: "3.12"
 build_name: manywheel-py3_12-cpu-aarch64
 secrets:
@@ -543,7 +543,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cu124
 GPU_ARCH_TYPE: cuda-aarch64
-DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cuda12.4-main
+DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cuda12.4-2.4
 DESIRED_DEVTOOLSET: cxx11-abi
 DESIRED_PYTHON: "3.12"
 runs_on: linux.arm64.m7g.4xlarge
@@ -567,7 +567,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cu124
 GPU_ARCH_TYPE: cuda-aarch64
-DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cuda12.4-main
+DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cuda12.4-2.4
 DESIRED_DEVTOOLSET: cxx11-abi
 DESIRED_PYTHON: "3.12"
 build_name: manywheel-py3_12-cuda-aarch64


@@ -48,7 +48,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu
-DOCKER_IMAGE: pytorch/conda-builder:cpu-main
+DOCKER_IMAGE: pytorch/conda-builder:cpu-2.4
 DESIRED_PYTHON: "3.8"
 build_name: conda-py3_8-cpu
 build_environment: linux-binary-conda
@@ -66,7 +66,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu
-DOCKER_IMAGE: pytorch/conda-builder:cpu-main
+DOCKER_IMAGE: pytorch/conda-builder:cpu-2.4
 DESIRED_PYTHON: "3.8"
 build_name: conda-py3_8-cpu
 build_environment: linux-binary-conda
@@ -87,7 +87,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu
-DOCKER_IMAGE: pytorch/conda-builder:cpu-main
+DOCKER_IMAGE: pytorch/conda-builder:cpu-2.4
 DESIRED_PYTHON: "3.8"
 build_name: conda-py3_8-cpu
 secrets:
@@ -108,7 +108,7 @@ jobs:
 DESIRED_CUDA: cu118
 GPU_ARCH_VERSION: 11.8
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
+DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.4
 DESIRED_PYTHON: "3.8"
 runs_on: linux.24xlarge
 build_name: conda-py3_8-cuda11_8
@@ -128,7 +128,7 @@ jobs:
 DESIRED_CUDA: cu118
 GPU_ARCH_VERSION: 11.8
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
+DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.4
 DESIRED_PYTHON: "3.8"
 build_name: conda-py3_8-cuda11_8
 build_environment: linux-binary-conda
@@ -150,7 +150,7 @@ jobs:
 DESIRED_CUDA: cu118
 GPU_ARCH_VERSION: 11.8
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
+DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.4
 DESIRED_PYTHON: "3.8"
 build_name: conda-py3_8-cuda11_8
 secrets:
@@ -171,7 +171,7 @@ jobs:
 DESIRED_CUDA: cu121
 GPU_ARCH_VERSION: 12.1
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
+DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.4
 DESIRED_PYTHON: "3.8"
 runs_on: linux.24xlarge
 build_name: conda-py3_8-cuda12_1
@@ -191,7 +191,7 @@ jobs:
 DESIRED_CUDA: cu121
 GPU_ARCH_VERSION: 12.1
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
+DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.4
 DESIRED_PYTHON: "3.8"
 build_name: conda-py3_8-cuda12_1
 build_environment: linux-binary-conda
@@ -213,7 +213,7 @@ jobs:
 DESIRED_CUDA: cu121
 GPU_ARCH_VERSION: 12.1
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
+DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.4
 DESIRED_PYTHON: "3.8"
 build_name: conda-py3_8-cuda12_1
 secrets:
@@ -234,7 +234,7 @@ jobs:
 DESIRED_CUDA: cu124
 GPU_ARCH_VERSION: 12.4
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/conda-builder:cuda12.4-main
+DOCKER_IMAGE: pytorch/conda-builder:cuda12.4-2.4
 DESIRED_PYTHON: "3.8"
 runs_on: linux.24xlarge
 build_name: conda-py3_8-cuda12_4
@@ -254,7 +254,7 @@ jobs:
 DESIRED_CUDA: cu124
 GPU_ARCH_VERSION: 12.4
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/conda-builder:cuda12.4-main
+DOCKER_IMAGE: pytorch/conda-builder:cuda12.4-2.4
 DESIRED_PYTHON: "3.8"
 build_name: conda-py3_8-cuda12_4
 build_environment: linux-binary-conda
@@ -276,7 +276,7 @@ jobs:
 DESIRED_CUDA: cu124
 GPU_ARCH_VERSION: 12.4
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/conda-builder:cuda12.4-main
+DOCKER_IMAGE: pytorch/conda-builder:cuda12.4-2.4
 DESIRED_PYTHON: "3.8"
 build_name: conda-py3_8-cuda12_4
 secrets:
@@ -296,7 +296,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu
-DOCKER_IMAGE: pytorch/conda-builder:cpu-main
+DOCKER_IMAGE: pytorch/conda-builder:cpu-2.4
 DESIRED_PYTHON: "3.9"
 build_name: conda-py3_9-cpu
 build_environment: linux-binary-conda
@@ -314,7 +314,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu
-DOCKER_IMAGE: pytorch/conda-builder:cpu-main
+DOCKER_IMAGE: pytorch/conda-builder:cpu-2.4
 DESIRED_PYTHON: "3.9"
 build_name: conda-py3_9-cpu
 build_environment: linux-binary-conda
@@ -335,7 +335,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu
-DOCKER_IMAGE: pytorch/conda-builder:cpu-main
+DOCKER_IMAGE: pytorch/conda-builder:cpu-2.4
 DESIRED_PYTHON: "3.9"
 build_name: conda-py3_9-cpu
 secrets:
@@ -356,7 +356,7 @@ jobs:
 DESIRED_CUDA: cu118
 GPU_ARCH_VERSION: 11.8
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
+DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.4
 DESIRED_PYTHON: "3.9"
 runs_on: linux.24xlarge
 build_name: conda-py3_9-cuda11_8
@@ -376,7 +376,7 @@ jobs:
 DESIRED_CUDA: cu118
 GPU_ARCH_VERSION: 11.8
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
+DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.4
 DESIRED_PYTHON: "3.9"
 build_name: conda-py3_9-cuda11_8
 build_environment: linux-binary-conda
@@ -398,7 +398,7 @@ jobs:
 DESIRED_CUDA: cu118
 GPU_ARCH_VERSION: 11.8
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
+DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.4
 DESIRED_PYTHON: "3.9"
 build_name: conda-py3_9-cuda11_8
 secrets:
@@ -419,7 +419,7 @@ jobs:
 DESIRED_CUDA: cu121
 GPU_ARCH_VERSION: 12.1
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
+DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.4
 DESIRED_PYTHON: "3.9"
 runs_on: linux.24xlarge
 build_name: conda-py3_9-cuda12_1
@@ -439,7 +439,7 @@ jobs:
 DESIRED_CUDA: cu121
 GPU_ARCH_VERSION: 12.1
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
+DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.4
 DESIRED_PYTHON: "3.9"
 build_name: conda-py3_9-cuda12_1
 build_environment: linux-binary-conda
@@ -461,7 +461,7 @@ jobs:
 DESIRED_CUDA: cu121
 GPU_ARCH_VERSION: 12.1
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
+DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.4
 DESIRED_PYTHON: "3.9"
 build_name: conda-py3_9-cuda12_1
 secrets:
@@ -482,7 +482,7 @@ jobs:
 DESIRED_CUDA: cu124
 GPU_ARCH_VERSION: 12.4
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/conda-builder:cuda12.4-main
+DOCKER_IMAGE: pytorch/conda-builder:cuda12.4-2.4
 DESIRED_PYTHON: "3.9"
 runs_on: linux.24xlarge
 build_name: conda-py3_9-cuda12_4
@@ -502,7 +502,7 @@ jobs:
 DESIRED_CUDA: cu124
 GPU_ARCH_VERSION: 12.4
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/conda-builder:cuda12.4-main
+DOCKER_IMAGE: pytorch/conda-builder:cuda12.4-2.4
 DESIRED_PYTHON: "3.9"
 build_name: conda-py3_9-cuda12_4
 build_environment: linux-binary-conda
@@ -524,7 +524,7 @@ jobs:
 DESIRED_CUDA: cu124
 GPU_ARCH_VERSION: 12.4
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/conda-builder:cuda12.4-main
+DOCKER_IMAGE: pytorch/conda-builder:cuda12.4-2.4
 DESIRED_PYTHON: "3.9"
 build_name: conda-py3_9-cuda12_4
 secrets:
@@ -544,7 +544,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu
-DOCKER_IMAGE: pytorch/conda-builder:cpu-main
+DOCKER_IMAGE: pytorch/conda-builder:cpu-2.4
 DESIRED_PYTHON: "3.10"
 build_name: conda-py3_10-cpu
 build_environment: linux-binary-conda
@@ -562,7 +562,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu
-DOCKER_IMAGE: pytorch/conda-builder:cpu-main
+DOCKER_IMAGE: pytorch/conda-builder:cpu-2.4
 DESIRED_PYTHON: "3.10"
 build_name: conda-py3_10-cpu
 build_environment: linux-binary-conda
@@ -583,7 +583,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu
-DOCKER_IMAGE: pytorch/conda-builder:cpu-main
+DOCKER_IMAGE: pytorch/conda-builder:cpu-2.4
 DESIRED_PYTHON: "3.10"
 build_name: conda-py3_10-cpu
 secrets:
@@ -604,7 +604,7 @@ jobs:
 DESIRED_CUDA: cu118
 GPU_ARCH_VERSION: 11.8
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
+DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.4
 DESIRED_PYTHON: "3.10"
 runs_on: linux.24xlarge
 build_name: conda-py3_10-cuda11_8
@@ -624,7 +624,7 @@ jobs:
 DESIRED_CUDA: cu118
 GPU_ARCH_VERSION: 11.8
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
+DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.4
 DESIRED_PYTHON: "3.10"
 build_name: conda-py3_10-cuda11_8
 build_environment: linux-binary-conda
@@ -646,7 +646,7 @@ jobs:
 DESIRED_CUDA: cu118
 GPU_ARCH_VERSION: 11.8
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
+DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.4
 DESIRED_PYTHON: "3.10"
 build_name: conda-py3_10-cuda11_8
 secrets:
@@ -667,7 +667,7 @@ jobs:
 DESIRED_CUDA: cu121
 GPU_ARCH_VERSION: 12.1
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
+DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.4
 DESIRED_PYTHON: "3.10"
 runs_on: linux.24xlarge
 build_name: conda-py3_10-cuda12_1
@@ -687,7 +687,7 @@ jobs:
 DESIRED_CUDA: cu121
 GPU_ARCH_VERSION: 12.1
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
+DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.4
 DESIRED_PYTHON: "3.10"
 build_name: conda-py3_10-cuda12_1
 build_environment: linux-binary-conda
@@ -709,7 +709,7 @@ jobs:
 DESIRED_CUDA: cu121
 GPU_ARCH_VERSION: 12.1
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
+DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.4
 DESIRED_PYTHON: "3.10"
 build_name: conda-py3_10-cuda12_1
 secrets:
@@ -730,7 +730,7 @@ jobs:
 DESIRED_CUDA: cu124
 GPU_ARCH_VERSION: 12.4
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/conda-builder:cuda12.4-main
+DOCKER_IMAGE: pytorch/conda-builder:cuda12.4-2.4
 DESIRED_PYTHON: "3.10"
 runs_on: linux.24xlarge
 build_name: conda-py3_10-cuda12_4
@@ -750,7 +750,7 @@ jobs:
 DESIRED_CUDA: cu124
 GPU_ARCH_VERSION: 12.4
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/conda-builder:cuda12.4-main
+DOCKER_IMAGE: pytorch/conda-builder:cuda12.4-2.4
 DESIRED_PYTHON: "3.10"
 build_name: conda-py3_10-cuda12_4
 build_environment: linux-binary-conda
@@ -772,7 +772,7 @@ jobs:
 DESIRED_CUDA: cu124
 GPU_ARCH_VERSION: 12.4
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/conda-builder:cuda12.4-main
+DOCKER_IMAGE: pytorch/conda-builder:cuda12.4-2.4
 DESIRED_PYTHON: "3.10"
 build_name: conda-py3_10-cuda12_4
 secrets:
@@ -792,7 +792,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu
-DOCKER_IMAGE: pytorch/conda-builder:cpu-main
+DOCKER_IMAGE: pytorch/conda-builder:cpu-2.4
 DESIRED_PYTHON: "3.11"
 build_name: conda-py3_11-cpu
 build_environment: linux-binary-conda
@@ -810,7 +810,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu
-DOCKER_IMAGE: pytorch/conda-builder:cpu-main
+DOCKER_IMAGE: pytorch/conda-builder:cpu-2.4
 DESIRED_PYTHON: "3.11"
 build_name: conda-py3_11-cpu
 build_environment: linux-binary-conda
@@ -831,7 +831,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu
-DOCKER_IMAGE: pytorch/conda-builder:cpu-main
+DOCKER_IMAGE: pytorch/conda-builder:cpu-2.4
 DESIRED_PYTHON: "3.11"
 build_name: conda-py3_11-cpu
 secrets:
@@ -852,7 +852,7 @@ jobs:
 DESIRED_CUDA: cu118
 GPU_ARCH_VERSION: 11.8
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
+DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.4
 DESIRED_PYTHON: "3.11"
 runs_on: linux.24xlarge
 build_name: conda-py3_11-cuda11_8
@@ -872,7 +872,7 @@ jobs:
 DESIRED_CUDA: cu118
 GPU_ARCH_VERSION: 11.8
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
+DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.4
 DESIRED_PYTHON: "3.11"
 build_name: conda-py3_11-cuda11_8
 build_environment: linux-binary-conda
@@ -894,7 +894,7 @@ jobs:
 DESIRED_CUDA: cu118
 GPU_ARCH_VERSION: 11.8
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main
+DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.4
 DESIRED_PYTHON: "3.11"
 build_name: conda-py3_11-cuda11_8
 secrets:
@@ -915,7 +915,7 @@ jobs:
 DESIRED_CUDA: cu121
 GPU_ARCH_VERSION: 12.1
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
+DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.4
 DESIRED_PYTHON: "3.11"
 runs_on: linux.24xlarge
 build_name: conda-py3_11-cuda12_1
@@ -935,7 +935,7 @@ jobs:
 DESIRED_CUDA: cu121
 GPU_ARCH_VERSION: 12.1
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
+DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.4
 DESIRED_PYTHON: "3.11"
 build_name: conda-py3_11-cuda12_1
 build_environment: linux-binary-conda
@@ -957,7 +957,7 @@ jobs:
 DESIRED_CUDA: cu121
 GPU_ARCH_VERSION: 12.1
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main
+DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.4
 DESIRED_PYTHON: "3.11"
 build_name: conda-py3_11-cuda12_1
 secrets:
@@ -978,7 +978,7 @@ jobs:
 DESIRED_CUDA: cu124
 GPU_ARCH_VERSION: 12.4
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/conda-builder:cuda12.4-main
+DOCKER_IMAGE: pytorch/conda-builder:cuda12.4-2.4
 DESIRED_PYTHON: "3.11"
 runs_on: linux.24xlarge
 build_name: conda-py3_11-cuda12_4
@@ -998,7 +998,7 @@ jobs:
 DESIRED_CUDA: cu124
 GPU_ARCH_VERSION: 12.4
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/conda-builder:cuda12.4-main
+DOCKER_IMAGE: pytorch/conda-builder:cuda12.4-2.4
 DESIRED_PYTHON: "3.11"
 build_name: conda-py3_11-cuda12_4
 build_environment: linux-binary-conda
@@ -1020,7 +1020,7 @@ jobs:
 DESIRED_CUDA: cu124
 GPU_ARCH_VERSION: 12.4
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/conda-builder:cuda12.4-main
+DOCKER_IMAGE: pytorch/conda-builder:cuda12.4-2.4
DESIRED_PYTHON: "3.11" DESIRED_PYTHON: "3.11"
build_name: conda-py3_11-cuda12_4 build_name: conda-py3_11-cuda12_4
secrets: secrets:
@ -1040,7 +1040,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-main DOCKER_IMAGE: pytorch/conda-builder:cpu-2.4
DESIRED_PYTHON: "3.12" DESIRED_PYTHON: "3.12"
build_name: conda-py3_12-cpu build_name: conda-py3_12-cpu
build_environment: linux-binary-conda build_environment: linux-binary-conda
@ -1058,7 +1058,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-main DOCKER_IMAGE: pytorch/conda-builder:cpu-2.4
DESIRED_PYTHON: "3.12" DESIRED_PYTHON: "3.12"
build_name: conda-py3_12-cpu build_name: conda-py3_12-cpu
build_environment: linux-binary-conda build_environment: linux-binary-conda
@ -1079,7 +1079,7 @@ jobs:
# favor of GPU_ARCH_VERSION # favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-main DOCKER_IMAGE: pytorch/conda-builder:cpu-2.4
DESIRED_PYTHON: "3.12" DESIRED_PYTHON: "3.12"
build_name: conda-py3_12-cpu build_name: conda-py3_12-cpu
secrets: secrets:
@ -1100,7 +1100,7 @@ jobs:
DESIRED_CUDA: cu118 DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8 GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.4
DESIRED_PYTHON: "3.12" DESIRED_PYTHON: "3.12"
runs_on: linux.24xlarge runs_on: linux.24xlarge
build_name: conda-py3_12-cuda11_8 build_name: conda-py3_12-cuda11_8
@ -1120,7 +1120,7 @@ jobs:
DESIRED_CUDA: cu118 DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8 GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.4
DESIRED_PYTHON: "3.12" DESIRED_PYTHON: "3.12"
build_name: conda-py3_12-cuda11_8 build_name: conda-py3_12-cuda11_8
build_environment: linux-binary-conda build_environment: linux-binary-conda
@ -1142,7 +1142,7 @@ jobs:
DESIRED_CUDA: cu118 DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8 GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-main DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.4
DESIRED_PYTHON: "3.12" DESIRED_PYTHON: "3.12"
build_name: conda-py3_12-cuda11_8 build_name: conda-py3_12-cuda11_8
secrets: secrets:
@ -1163,7 +1163,7 @@ jobs:
DESIRED_CUDA: cu121 DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1 GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.4
DESIRED_PYTHON: "3.12" DESIRED_PYTHON: "3.12"
runs_on: linux.24xlarge runs_on: linux.24xlarge
build_name: conda-py3_12-cuda12_1 build_name: conda-py3_12-cuda12_1
@ -1183,7 +1183,7 @@ jobs:
DESIRED_CUDA: cu121 DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1 GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.4
DESIRED_PYTHON: "3.12" DESIRED_PYTHON: "3.12"
build_name: conda-py3_12-cuda12_1 build_name: conda-py3_12-cuda12_1
build_environment: linux-binary-conda build_environment: linux-binary-conda
@ -1205,7 +1205,7 @@ jobs:
DESIRED_CUDA: cu121 DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1 GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-main DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.4
DESIRED_PYTHON: "3.12" DESIRED_PYTHON: "3.12"
build_name: conda-py3_12-cuda12_1 build_name: conda-py3_12-cuda12_1
secrets: secrets:
@ -1226,7 +1226,7 @@ jobs:
DESIRED_CUDA: cu124 DESIRED_CUDA: cu124
GPU_ARCH_VERSION: 12.4 GPU_ARCH_VERSION: 12.4
GPU_ARCH_TYPE: cuda GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda12.4-main DOCKER_IMAGE: pytorch/conda-builder:cuda12.4-2.4
DESIRED_PYTHON: "3.12" DESIRED_PYTHON: "3.12"
runs_on: linux.24xlarge runs_on: linux.24xlarge
build_name: conda-py3_12-cuda12_4 build_name: conda-py3_12-cuda12_4
@ -1246,7 +1246,7 @@ jobs:
DESIRED_CUDA: cu124 DESIRED_CUDA: cu124
GPU_ARCH_VERSION: 12.4 GPU_ARCH_VERSION: 12.4
GPU_ARCH_TYPE: cuda GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda12.4-main DOCKER_IMAGE: pytorch/conda-builder:cuda12.4-2.4
DESIRED_PYTHON: "3.12" DESIRED_PYTHON: "3.12"
build_name: conda-py3_12-cuda12_4 build_name: conda-py3_12-cuda12_4
build_environment: linux-binary-conda build_environment: linux-binary-conda
@ -1268,7 +1268,7 @@ jobs:
DESIRED_CUDA: cu124 DESIRED_CUDA: cu124
GPU_ARCH_VERSION: 12.4 GPU_ARCH_VERSION: 12.4
GPU_ARCH_TYPE: cuda GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda12.4-main DOCKER_IMAGE: pytorch/conda-builder:cuda12.4-2.4
DESIRED_PYTHON: "3.12" DESIRED_PYTHON: "3.12"
build_name: conda-py3_12-cuda12_4 build_name: conda-py3_12-cuda12_4
secrets: secrets:


@@ -43,7 +43,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
-DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-main
+DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cpu-shared-with-deps-cxx11-abi
@@ -62,7 +62,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
-DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-main
+DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cpu-shared-with-deps-cxx11-abi


@@ -48,7 +48,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
-DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-main
+DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cpu-shared-with-deps-cxx11-abi
@@ -67,7 +67,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
-DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-main
+DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cpu-shared-with-deps-cxx11-abi
@@ -89,7 +89,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
-DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-main
+DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cpu-shared-with-deps-cxx11-abi
@@ -111,7 +111,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8-main
+DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda11_8-shared-with-deps-cxx11-abi
@@ -131,7 +131,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8-main
+DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda11_8-shared-with-deps-cxx11-abi
@@ -154,7 +154,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8-main
+DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda11_8-shared-with-deps-cxx11-abi
@@ -176,7 +176,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1-main
+DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda12_1-shared-with-deps-cxx11-abi
@@ -196,7 +196,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1-main
+DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda12_1-shared-with-deps-cxx11-abi
@@ -219,7 +219,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1-main
+DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda12_1-shared-with-deps-cxx11-abi
@@ -241,7 +241,7 @@ jobs:
DESIRED_CUDA: cu124
GPU_ARCH_VERSION: 12.4
GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.4-main
+DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.4-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda12_4-shared-with-deps-cxx11-abi
@@ -261,7 +261,7 @@ jobs:
DESIRED_CUDA: cu124
GPU_ARCH_VERSION: 12.4
GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.4-main
+DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.4-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda12_4-shared-with-deps-cxx11-abi
@@ -284,7 +284,7 @@ jobs:
DESIRED_CUDA: cu124
GPU_ARCH_VERSION: 12.4
GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.4-main
+DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.4-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda12_4-shared-with-deps-cxx11-abi
@@ -306,7 +306,7 @@ jobs:
DESIRED_CUDA: rocm6.0
GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm
-DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm6.0-main
+DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm6.0-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-rocm6_0-shared-with-deps-cxx11-abi
@@ -328,7 +328,7 @@ jobs:
GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1
-DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm6.0-main
+DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm6.0-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
steps:
@@ -342,7 +342,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -354,7 +353,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
-ref: main
+ref: release/2.4
submodules: recursive
repository: pytorch/builder
path: builder
@@ -370,7 +369,7 @@ jobs:
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
with:
-docker-image: pytorch/libtorch-cxx11-builder:rocm6.0-main
+docker-image: pytorch/libtorch-cxx11-builder:rocm6.0-2.4
- name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm
@@ -390,7 +389,7 @@ jobs:
DESIRED_CUDA: rocm6.0
GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm
-DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm6.0-main
+DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm6.0-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-rocm6_0-shared-with-deps-cxx11-abi
@@ -412,7 +411,7 @@ jobs:
DESIRED_CUDA: rocm6.1
GPU_ARCH_VERSION: 6.1
GPU_ARCH_TYPE: rocm
-DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm6.1-main
+DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm6.1-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-rocm6_1-shared-with-deps-cxx11-abi
@@ -434,7 +433,7 @@ jobs:
GPU_ARCH_VERSION: 6.1
GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1
-DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm6.1-main
+DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm6.1-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
steps:
@@ -448,7 +447,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -460,7 +458,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
-ref: main
+ref: release/2.4
submodules: recursive
repository: pytorch/builder
path: builder
@@ -476,7 +474,7 @@ jobs:
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
with:
-docker-image: pytorch/libtorch-cxx11-builder:rocm6.1-main
+docker-image: pytorch/libtorch-cxx11-builder:rocm6.1-2.4
- name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm
@@ -496,7 +494,7 @@ jobs:
DESIRED_CUDA: rocm6.1
GPU_ARCH_VERSION: 6.1
GPU_ARCH_TYPE: rocm
-DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm6.1-main
+DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm6.1-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-rocm6_1-shared-with-deps-cxx11-abi


@@ -43,7 +43,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
-DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
+DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cpu-shared-with-deps-pre-cxx11
@@ -62,7 +62,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
-DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
+DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cpu-shared-with-deps-pre-cxx11


@@ -48,7 +48,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
-DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
+DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cpu-shared-with-deps-pre-cxx11
@@ -67,7 +67,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
-DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
+DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cpu-shared-with-deps-pre-cxx11
@@ -89,7 +89,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
-DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
+DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cpu-shared-with-deps-pre-cxx11
@@ -111,7 +111,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main
+DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda11_8-shared-with-deps-pre-cxx11
@@ -131,7 +131,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main
+DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda11_8-shared-with-deps-pre-cxx11
@@ -154,7 +154,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main
+DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda11_8-shared-with-deps-pre-cxx11
@@ -176,7 +176,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main
+DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda12_1-shared-with-deps-pre-cxx11
@@ -196,7 +196,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main
+DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda12_1-shared-with-deps-pre-cxx11
@@ -219,7 +219,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main
+DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda12_1-shared-with-deps-pre-cxx11
@@ -241,7 +241,7 @@ jobs:
DESIRED_CUDA: cu124
GPU_ARCH_VERSION: 12.4
GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.4-main
+DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.4-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda12_4-shared-with-deps-pre-cxx11
@@ -261,7 +261,7 @@ jobs:
DESIRED_CUDA: cu124
GPU_ARCH_VERSION: 12.4
GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.4-main
+DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.4-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda12_4-shared-with-deps-pre-cxx11
@@ -284,7 +284,7 @@ jobs:
DESIRED_CUDA: cu124
GPU_ARCH_VERSION: 12.4
GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.4-main
+DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.4-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda12_4-shared-with-deps-pre-cxx11
@@ -306,7 +306,7 @@ jobs:
DESIRED_CUDA: rocm6.0
GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm
-DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-main
+DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-rocm6_0-shared-with-deps-pre-cxx11
@@ -328,7 +328,7 @@ jobs:
GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1
-DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-main
+DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
steps:
@@ -342,7 +342,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -354,7 +353,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
-ref: main
+ref: release/2.4
submodules: recursive
repository: pytorch/builder
path: builder
@@ -370,7 +369,7 @@ jobs:
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
with:
-docker-image: pytorch/manylinux-builder:rocm6.0-main
+docker-image: pytorch/manylinux-builder:rocm6.0-2.4
- name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm
@@ -390,7 +389,7 @@ jobs:
DESIRED_CUDA: rocm6.0
GPU_ARCH_VERSION: 6.0
GPU_ARCH_TYPE: rocm
-DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-main
+DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.0-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-rocm6_0-shared-with-deps-pre-cxx11
@@ -412,7 +411,7 @@ jobs:
DESIRED_CUDA: rocm6.1
GPU_ARCH_VERSION: 6.1
GPU_ARCH_TYPE: rocm
-DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.1-main
+DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.1-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-rocm6_1-shared-with-deps-pre-cxx11
@@ -434,7 +433,7 @@ jobs:
GPU_ARCH_VERSION: 6.1
GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1
-DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.1-main
+DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.1-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
steps:
@@ -448,7 +447,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -460,7 +458,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
-ref: main
+ref: release/2.4
submodules: recursive
repository: pytorch/builder
path: builder
@@ -476,7 +474,7 @@ jobs:
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
with:
-docker-image: pytorch/manylinux-builder:rocm6.1-main
+docker-image: pytorch/manylinux-builder:rocm6.1-2.4
- name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm
@@ -496,7 +494,7 @@ jobs:
DESIRED_CUDA: rocm6.1
GPU_ARCH_VERSION: 6.1
GPU_ARCH_TYPE: rocm
-DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.1-main
+DOCKER_IMAGE: pytorch/manylinux-builder:rocm6.1-2.4
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-rocm6_1-shared-with-deps-pre-cxx11


@@ -44,7 +44,7 @@ jobs:
 DESIRED_CUDA: cu118
 GPU_ARCH_VERSION: 11.8
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main
+DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.4
 DESIRED_PYTHON: "3.8"
 build_name: manywheel-py3_8-cuda11_8
 build_environment: linux-binary-manywheel
@@ -64,7 +64,7 @@ jobs:
 DESIRED_CUDA: cu118
 GPU_ARCH_VERSION: 11.8
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-main
+DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.4
 DESIRED_PYTHON: "3.8"
 build_name: manywheel-py3_8-cuda11_8
 build_environment: linux-binary-manywheel
@@ -84,7 +84,7 @@ jobs:
 DESIRED_CUDA: cu121
 GPU_ARCH_VERSION: 12.1
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main
+DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.4
 DESIRED_PYTHON: "3.8"
 build_name: manywheel-py3_8-cuda12_1
 build_environment: linux-binary-manywheel
@@ -104,7 +104,7 @@ jobs:
 DESIRED_CUDA: cu121
 GPU_ARCH_VERSION: 12.1
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-main
+DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.4
 DESIRED_PYTHON: "3.8"
 build_name: manywheel-py3_8-cuda12_1
 build_environment: linux-binary-manywheel
@@ -124,7 +124,7 @@ jobs:
 DESIRED_CUDA: cu124
 GPU_ARCH_VERSION: 12.4
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.4-main
+DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.4-2.4
 DESIRED_PYTHON: "3.8"
 build_name: manywheel-py3_8-cuda12_4
 build_environment: linux-binary-manywheel
@@ -144,7 +144,7 @@ jobs:
 DESIRED_CUDA: cu124
 GPU_ARCH_VERSION: 12.4
 GPU_ARCH_TYPE: cuda
-DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.4-main
+DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.4-2.4
 DESIRED_PYTHON: "3.8"
 build_name: manywheel-py3_8-cuda12_4
 build_environment: linux-binary-manywheel

(File diff suppressed because it is too large)

View File

@@ -48,7 +48,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu-s390x
-DOCKER_IMAGE: pytorch/manylinuxs390x-builder:cpu-s390x-main
+DOCKER_IMAGE: pytorch/manylinuxs390x-builder:cpu-s390x-2.4
 DESIRED_PYTHON: "3.8"
 runs_on: linux.s390x
 ALPINE_IMAGE: "docker.io/s390x/alpine"
@@ -69,7 +69,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu-s390x
-DOCKER_IMAGE: pytorch/manylinuxs390x-builder:cpu-s390x-main
+DOCKER_IMAGE: pytorch/manylinuxs390x-builder:cpu-s390x-2.4
 DESIRED_PYTHON: "3.8"
 build_name: manywheel-py3_8-cpu-s390x
 build_environment: linux-s390x-binary-manywheel
@@ -91,7 +91,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu-s390x
-DOCKER_IMAGE: pytorch/manylinuxs390x-builder:cpu-s390x-main
+DOCKER_IMAGE: pytorch/manylinuxs390x-builder:cpu-s390x-2.4
 DESIRED_PYTHON: "3.8"
 build_name: manywheel-py3_8-cpu-s390x
 secrets:
@@ -111,7 +111,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu-s390x
-DOCKER_IMAGE: pytorch/manylinuxs390x-builder:cpu-s390x-main
+DOCKER_IMAGE: pytorch/manylinuxs390x-builder:cpu-s390x-2.4
 DESIRED_PYTHON: "3.9"
 runs_on: linux.s390x
 ALPINE_IMAGE: "docker.io/s390x/alpine"
@@ -132,7 +132,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu-s390x
-DOCKER_IMAGE: pytorch/manylinuxs390x-builder:cpu-s390x-main
+DOCKER_IMAGE: pytorch/manylinuxs390x-builder:cpu-s390x-2.4
 DESIRED_PYTHON: "3.9"
 build_name: manywheel-py3_9-cpu-s390x
 build_environment: linux-s390x-binary-manywheel
@@ -154,7 +154,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu-s390x
-DOCKER_IMAGE: pytorch/manylinuxs390x-builder:cpu-s390x-main
+DOCKER_IMAGE: pytorch/manylinuxs390x-builder:cpu-s390x-2.4
 DESIRED_PYTHON: "3.9"
 build_name: manywheel-py3_9-cpu-s390x
 secrets:
@@ -174,7 +174,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu-s390x
-DOCKER_IMAGE: pytorch/manylinuxs390x-builder:cpu-s390x-main
+DOCKER_IMAGE: pytorch/manylinuxs390x-builder:cpu-s390x-2.4
 DESIRED_PYTHON: "3.10"
 runs_on: linux.s390x
 ALPINE_IMAGE: "docker.io/s390x/alpine"
@@ -195,7 +195,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu-s390x
-DOCKER_IMAGE: pytorch/manylinuxs390x-builder:cpu-s390x-main
+DOCKER_IMAGE: pytorch/manylinuxs390x-builder:cpu-s390x-2.4
 DESIRED_PYTHON: "3.10"
 build_name: manywheel-py3_10-cpu-s390x
 build_environment: linux-s390x-binary-manywheel
@@ -217,7 +217,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu-s390x
-DOCKER_IMAGE: pytorch/manylinuxs390x-builder:cpu-s390x-main
+DOCKER_IMAGE: pytorch/manylinuxs390x-builder:cpu-s390x-2.4
 DESIRED_PYTHON: "3.10"
 build_name: manywheel-py3_10-cpu-s390x
 secrets:
@@ -237,7 +237,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu-s390x
-DOCKER_IMAGE: pytorch/manylinuxs390x-builder:cpu-s390x-main
+DOCKER_IMAGE: pytorch/manylinuxs390x-builder:cpu-s390x-2.4
 DESIRED_PYTHON: "3.11"
 runs_on: linux.s390x
 ALPINE_IMAGE: "docker.io/s390x/alpine"
@@ -258,7 +258,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu-s390x
-DOCKER_IMAGE: pytorch/manylinuxs390x-builder:cpu-s390x-main
+DOCKER_IMAGE: pytorch/manylinuxs390x-builder:cpu-s390x-2.4
 DESIRED_PYTHON: "3.11"
 build_name: manywheel-py3_11-cpu-s390x
 build_environment: linux-s390x-binary-manywheel
@@ -280,7 +280,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu-s390x
-DOCKER_IMAGE: pytorch/manylinuxs390x-builder:cpu-s390x-main
+DOCKER_IMAGE: pytorch/manylinuxs390x-builder:cpu-s390x-2.4
 DESIRED_PYTHON: "3.11"
 build_name: manywheel-py3_11-cpu-s390x
 secrets:
@@ -300,7 +300,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu-s390x
-DOCKER_IMAGE: pytorch/manylinuxs390x-builder:cpu-s390x-main
+DOCKER_IMAGE: pytorch/manylinuxs390x-builder:cpu-s390x-2.4
 DESIRED_PYTHON: "3.12"
 runs_on: linux.s390x
 ALPINE_IMAGE: "docker.io/s390x/alpine"
@@ -321,7 +321,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu-s390x
-DOCKER_IMAGE: pytorch/manylinuxs390x-builder:cpu-s390x-main
+DOCKER_IMAGE: pytorch/manylinuxs390x-builder:cpu-s390x-2.4
 DESIRED_PYTHON: "3.12"
 build_name: manywheel-py3_12-cpu-s390x
 build_environment: linux-s390x-binary-manywheel
@@ -343,7 +343,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu-s390x
-DOCKER_IMAGE: pytorch/manylinuxs390x-builder:cpu-s390x-main
+DOCKER_IMAGE: pytorch/manylinuxs390x-builder:cpu-s390x-2.4
 DESIRED_PYTHON: "3.12"
 build_name: manywheel-py3_12-cpu-s390x
 secrets:

View File

@@ -77,7 +77,6 @@ jobs:
 - name: Checkout PyTorch
 uses: malfet/checkout@silent-checkout
 with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
 submodules: recursive
 path: pytorch
 quiet-checkout: true
@@ -89,7 +88,7 @@ jobs:
 - name: Checkout pytorch/builder
 uses: malfet/checkout@silent-checkout
 with:
-ref: main
+ref: release/2.4
 submodules: recursive
 repository: pytorch/builder
 path: builder
@@ -141,7 +140,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu
-DOCKER_IMAGE: pytorch/conda-builder:cpu-main
+DOCKER_IMAGE: pytorch/conda-builder:cpu-2.4
 DESIRED_PYTHON: "3.8"
 build_name: conda-py3_8-cpu
 use_s3: False
@@ -195,7 +194,6 @@ jobs:
 - name: Checkout PyTorch
 uses: malfet/checkout@silent-checkout
 with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
 submodules: recursive
 path: pytorch
 quiet-checkout: true
@@ -207,7 +205,7 @@ jobs:
 - name: Checkout pytorch/builder
 uses: malfet/checkout@silent-checkout
 with:
-ref: main
+ref: release/2.4
 submodules: recursive
 repository: pytorch/builder
 path: builder
@@ -259,7 +257,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu
-DOCKER_IMAGE: pytorch/conda-builder:cpu-main
+DOCKER_IMAGE: pytorch/conda-builder:cpu-2.4
 DESIRED_PYTHON: "3.9"
 build_name: conda-py3_9-cpu
 use_s3: False
@@ -313,7 +311,6 @@ jobs:
 - name: Checkout PyTorch
 uses: malfet/checkout@silent-checkout
 with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
 submodules: recursive
 path: pytorch
 quiet-checkout: true
@@ -325,7 +322,7 @@ jobs:
 - name: Checkout pytorch/builder
 uses: malfet/checkout@silent-checkout
 with:
-ref: main
+ref: release/2.4
 submodules: recursive
 repository: pytorch/builder
 path: builder
@@ -377,7 +374,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu
-DOCKER_IMAGE: pytorch/conda-builder:cpu-main
+DOCKER_IMAGE: pytorch/conda-builder:cpu-2.4
 DESIRED_PYTHON: "3.10"
 build_name: conda-py3_10-cpu
 use_s3: False
@@ -431,7 +428,6 @@ jobs:
 - name: Checkout PyTorch
 uses: malfet/checkout@silent-checkout
 with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
 submodules: recursive
 path: pytorch
 quiet-checkout: true
@@ -443,7 +439,7 @@ jobs:
 - name: Checkout pytorch/builder
 uses: malfet/checkout@silent-checkout
 with:
-ref: main
+ref: release/2.4
 submodules: recursive
 repository: pytorch/builder
 path: builder
@@ -495,7 +491,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu
-DOCKER_IMAGE: pytorch/conda-builder:cpu-main
+DOCKER_IMAGE: pytorch/conda-builder:cpu-2.4
 DESIRED_PYTHON: "3.11"
 build_name: conda-py3_11-cpu
 use_s3: False
@@ -549,7 +545,6 @@ jobs:
 - name: Checkout PyTorch
 uses: malfet/checkout@silent-checkout
 with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
 submodules: recursive
 path: pytorch
 quiet-checkout: true
@@ -561,7 +556,7 @@ jobs:
 - name: Checkout pytorch/builder
 uses: malfet/checkout@silent-checkout
 with:
-ref: main
+ref: release/2.4
 submodules: recursive
 repository: pytorch/builder
 path: builder
@@ -613,7 +608,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu
-DOCKER_IMAGE: pytorch/conda-builder:cpu-main
+DOCKER_IMAGE: pytorch/conda-builder:cpu-2.4
 DESIRED_PYTHON: "3.12"
 build_name: conda-py3_12-cpu
 use_s3: False

View File

@@ -81,7 +81,6 @@ jobs:
 - name: Checkout PyTorch
 uses: malfet/checkout@silent-checkout
 with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
 submodules: recursive
 path: pytorch
 quiet-checkout: true
@@ -93,7 +92,7 @@ jobs:
 - name: Checkout pytorch/builder
 uses: malfet/checkout@silent-checkout
 with:
-ref: main
+ref: release/2.4
 submodules: recursive
 repository: pytorch/builder
 path: builder
@@ -145,7 +144,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu
-DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-main
+DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-2.4
 LIBTORCH_VARIANT: shared-with-deps
 DESIRED_DEVTOOLSET: cxx11-abi
 build_name: libtorch-cpu-shared-with-deps-cxx11-abi

View File

@@ -78,7 +78,6 @@ jobs:
 - name: Checkout PyTorch
 uses: malfet/checkout@silent-checkout
 with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
 submodules: recursive
 path: pytorch
 quiet-checkout: true
@@ -90,7 +89,7 @@ jobs:
 - name: Checkout pytorch/builder
 uses: malfet/checkout@silent-checkout
 with:
-ref: main
+ref: release/2.4
 submodules: recursive
 repository: pytorch/builder
 path: builder
@@ -142,7 +141,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu
-DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
+DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.4
 DESIRED_PYTHON: "3.8"
 build_name: wheel-py3_8-cpu
 use_s3: False
@@ -197,7 +196,6 @@ jobs:
 - name: Checkout PyTorch
 uses: malfet/checkout@silent-checkout
 with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
 submodules: recursive
 path: pytorch
 quiet-checkout: true
@@ -209,7 +207,7 @@ jobs:
 - name: Checkout pytorch/builder
 uses: malfet/checkout@silent-checkout
 with:
-ref: main
+ref: release/2.4
 submodules: recursive
 repository: pytorch/builder
 path: builder
@@ -261,7 +259,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu
-DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
+DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.4
 DESIRED_PYTHON: "3.9"
 build_name: wheel-py3_9-cpu
 use_s3: False
@@ -316,7 +314,6 @@ jobs:
 - name: Checkout PyTorch
 uses: malfet/checkout@silent-checkout
 with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
 submodules: recursive
 path: pytorch
 quiet-checkout: true
@@ -328,7 +325,7 @@ jobs:
 - name: Checkout pytorch/builder
 uses: malfet/checkout@silent-checkout
 with:
-ref: main
+ref: release/2.4
 submodules: recursive
 repository: pytorch/builder
 path: builder
@@ -380,7 +377,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu
-DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
+DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.4
 DESIRED_PYTHON: "3.10"
 build_name: wheel-py3_10-cpu
 use_s3: False
@@ -435,7 +432,6 @@ jobs:
 - name: Checkout PyTorch
 uses: malfet/checkout@silent-checkout
 with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
 submodules: recursive
 path: pytorch
 quiet-checkout: true
@@ -447,7 +443,7 @@ jobs:
 - name: Checkout pytorch/builder
 uses: malfet/checkout@silent-checkout
 with:
-ref: main
+ref: release/2.4
 submodules: recursive
 repository: pytorch/builder
 path: builder
@@ -499,7 +495,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu
-DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
+DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.4
 DESIRED_PYTHON: "3.11"
 build_name: wheel-py3_11-cpu
 use_s3: False
@@ -554,7 +550,6 @@ jobs:
 - name: Checkout PyTorch
 uses: malfet/checkout@silent-checkout
 with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
 submodules: recursive
 path: pytorch
 quiet-checkout: true
@@ -566,7 +561,7 @@ jobs:
 - name: Checkout pytorch/builder
 uses: malfet/checkout@silent-checkout
 with:
-ref: main
+ref: release/2.4
 submodules: recursive
 repository: pytorch/builder
 path: builder
@@ -618,7 +613,7 @@ jobs:
 # favor of GPU_ARCH_VERSION
 DESIRED_CUDA: cpu
 GPU_ARCH_TYPE: cpu
-DOCKER_IMAGE: pytorch/manylinux-builder:cpu-main
+DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.4
 DESIRED_PYTHON: "3.12"
 build_name: wheel-py3_12-cpu
 use_s3: False

View File

@@ -93,7 +93,6 @@ jobs:
 - name: Checkout PyTorch
 uses: malfet/checkout@silent-checkout
 with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
 submodules: recursive
 path: pytorch
 quiet-checkout: true
@@ -105,7 +104,7 @@ jobs:
 - name: Checkout pytorch/builder
 uses: malfet/checkout@silent-checkout
 with:
-ref: main
+ref: release/2.4
 submodules: recursive
 repository: pytorch/builder
 path: builder
@@ -210,7 +209,6 @@ jobs:
 - name: Checkout PyTorch
 uses: malfet/checkout@silent-checkout
 with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
 submodules: recursive
 path: pytorch
 quiet-checkout: true
@@ -222,7 +220,7 @@ jobs:
 - name: Checkout pytorch/builder
 uses: malfet/checkout@silent-checkout
 with:
-ref: main
+ref: release/2.4
 submodules: recursive
 repository: pytorch/builder
 path: builder
@@ -336,7 +334,6 @@ jobs:
 - name: Checkout PyTorch
 uses: malfet/checkout@silent-checkout
 with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
 submodules: recursive
 path: pytorch
 quiet-checkout: true
@@ -348,7 +345,7 @@ jobs:
 - name: Checkout pytorch/builder
 uses: malfet/checkout@silent-checkout
 with:
-ref: main
+ref: release/2.4
 submodules: recursive
 repository: pytorch/builder
 path: builder
@@ -454,7 +451,6 @@ jobs:
 - name: Checkout PyTorch
 uses: malfet/checkout@silent-checkout
 with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
 submodules: recursive
 path: pytorch
 quiet-checkout: true
@@ -466,7 +462,7 @@ jobs:
 - name: Checkout pytorch/builder
 uses: malfet/checkout@silent-checkout
 with:
-ref: main
+ref: release/2.4
 submodules: recursive
 repository: pytorch/builder
 path: builder
@@ -581,7 +577,6 @@ jobs:
 - name: Checkout PyTorch
 uses: malfet/checkout@silent-checkout
 with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
 submodules: recursive
 path: pytorch
 quiet-checkout: true
@@ -593,7 +588,7 @@ jobs:
 - name: Checkout pytorch/builder
 uses: malfet/checkout@silent-checkout
 with:
-ref: main
+ref: release/2.4
 submodules: recursive
 repository: pytorch/builder
 path: builder
@@ -699,7 +694,6 @@ jobs:
 - name: Checkout PyTorch
 uses: malfet/checkout@silent-checkout
 with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
 submodules: recursive
 path: pytorch
 quiet-checkout: true
@@ -711,7 +705,7 @@ jobs:
 - name: Checkout pytorch/builder
 uses: malfet/checkout@silent-checkout
 with:
-ref: main
+ref: release/2.4
 submodules: recursive
 repository: pytorch/builder
 path: builder
@@ -826,7 +820,6 @@ jobs:
 - name: Checkout PyTorch
 uses: malfet/checkout@silent-checkout
 with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
 submodules: recursive
 path: pytorch
 quiet-checkout: true
@@ -838,7 +831,7 @@ jobs:
 - name: Checkout pytorch/builder
 uses: malfet/checkout@silent-checkout
 with:
-ref: main
+ref: release/2.4
 submodules: recursive
 repository: pytorch/builder
 path: builder
@@ -944,7 +937,6 @@ jobs:
 - name: Checkout PyTorch
 uses: malfet/checkout@silent-checkout
 with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
 submodules: recursive
 path: pytorch
 quiet-checkout: true
@@ -956,7 +948,7 @@ jobs:
 - name: Checkout pytorch/builder
 uses: malfet/checkout@silent-checkout
 with:
-ref: main
+ref: release/2.4
 submodules: recursive
 repository: pytorch/builder
 path: builder
@@ -1070,7 +1062,6 @@ jobs:
 - name: Checkout PyTorch
 uses: malfet/checkout@silent-checkout
 with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
 submodules: recursive
 path: pytorch
 quiet-checkout: true
@@ -1082,7 +1073,7 @@ jobs:
 - name: Checkout pytorch/builder
 uses: malfet/checkout@silent-checkout
 with:
-ref: main
+ref: release/2.4
 submodules: recursive
 repository: pytorch/builder
 path: builder
@@ -1187,7 +1178,6 @@ jobs:
 - name: Checkout PyTorch
 uses: malfet/checkout@silent-checkout
 with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
 submodules: recursive
 path: pytorch
 quiet-checkout: true
@@ -1199,7 +1189,7 @@ jobs:
 - name: Checkout pytorch/builder
 uses: malfet/checkout@silent-checkout
 with:
-ref: main
+ref: release/2.4
 submodules: recursive
 repository: pytorch/builder
 path: builder
@@ -1313,7 +1303,6 @@ jobs:
 - name: Checkout PyTorch
 uses: malfet/checkout@silent-checkout
 with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
 submodules: recursive
 path: pytorch
 quiet-checkout: true
@@ -1325,7 +1314,7 @@ jobs:
 - name: Checkout pytorch/builder
 uses: malfet/checkout@silent-checkout
 with:
-ref: main
+ref: release/2.4
 submodules: recursive
 repository: pytorch/builder
 path: builder
@@ -1431,7 +1420,6 @@ jobs:
 - name: Checkout PyTorch
 uses: malfet/checkout@silent-checkout
 with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
 submodules: recursive
 path: pytorch
 quiet-checkout: true
@@ -1443,7 +1431,7 @@ jobs:
 - name: Checkout pytorch/builder
 uses: malfet/checkout@silent-checkout
 with:
-ref: main
+ref: release/2.4
 submodules: recursive
 repository: pytorch/builder
 path: builder
@@ -1558,7 +1546,6 @@ jobs:
 - name: Checkout PyTorch
 uses: malfet/checkout@silent-checkout
 with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
 submodules: recursive
 path: pytorch
 quiet-checkout: true
@@ -1570,7 +1557,7 @@ jobs:
 - name: Checkout pytorch/builder
 uses: malfet/checkout@silent-checkout
 with:
-ref: main
+ref: release/2.4
 submodules: recursive
 repository: pytorch/builder
 path: builder
@@ -1676,7 +1663,6 @@ jobs:
 - name: Checkout PyTorch
 uses: malfet/checkout@silent-checkout
 with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
 submodules: recursive
 path: pytorch
 quiet-checkout: true
@@ -1688,7 +1674,7 @@ jobs:
 - name: Checkout pytorch/builder
 uses: malfet/checkout@silent-checkout
 with:
-ref: main
+ref: release/2.4
 submodules: recursive
 repository: pytorch/builder
 path: builder
@@ -1803,7 +1789,6 @@ jobs:
 - name: Checkout PyTorch
 uses: malfet/checkout@silent-checkout
 with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
 submodules: recursive
 path: pytorch
 quiet-checkout: true
@@ -1815,7 +1800,7 @@ jobs:
 - name: Checkout pytorch/builder
 uses: malfet/checkout@silent-checkout
 with:
-ref: main
+ref: release/2.4
 submodules: recursive
 repository: pytorch/builder
 path: builder
@@ -1921,7 +1906,6 @@ jobs:
 - name: Checkout PyTorch
 uses: malfet/checkout@silent-checkout
 with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
 submodules: recursive
 path: pytorch
 quiet-checkout: true
@@ -1933,7 +1917,7 @@ jobs:
 - name: Checkout pytorch/builder
 uses: malfet/checkout@silent-checkout
 with:
-ref: main
+ref: release/2.4
 submodules: recursive
 repository: pytorch/builder
 path: builder
@@ -2047,7 +2031,6 @@ jobs:
 - name: Checkout PyTorch
 uses: malfet/checkout@silent-checkout
 with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
 submodules: recursive
 path: pytorch
 quiet-checkout: true
@@ -2059,7 +2042,7 @@ jobs:
 - name: Checkout pytorch/builder
 uses: malfet/checkout@silent-checkout
 with:
-ref: main
+ref: release/2.4
 submodules: recursive
 repository: pytorch/builder
 path: builder
@@ -2164,7 +2147,6 @@ jobs:
 - name: Checkout PyTorch
 uses: malfet/checkout@silent-checkout
 with:
-ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
 submodules: recursive
 path: pytorch
quiet-checkout: true quiet-checkout: true
@ -2176,7 +2158,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.4
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -2290,7 +2272,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -2302,7 +2283,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.4
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -2408,7 +2389,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -2420,7 +2400,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.4
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -2535,7 +2515,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -2547,7 +2526,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.4
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -2653,7 +2632,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -2665,7 +2643,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.4
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -2780,7 +2758,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -2792,7 +2769,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.4
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -2898,7 +2875,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -2910,7 +2886,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.4
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -3024,7 +3000,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -3036,7 +3011,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.4
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -3141,7 +3116,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -3153,7 +3127,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.4
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -3267,7 +3241,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -3279,7 +3252,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.4
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -3385,7 +3358,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -3397,7 +3369,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.4
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -3512,7 +3484,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -3524,7 +3495,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.4
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -3630,7 +3601,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -3642,7 +3612,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.4
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -3757,7 +3727,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -3769,7 +3738,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.4
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -3875,7 +3844,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -3887,7 +3855,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.4
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -4001,7 +3969,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -4013,7 +3980,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.4
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -4118,7 +4085,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -4130,7 +4096,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.4
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -4244,7 +4210,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -4256,7 +4221,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.4
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -4362,7 +4327,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -4374,7 +4338,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.4
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -4489,7 +4453,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -4501,7 +4464,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.4
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -4607,7 +4570,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -4619,7 +4581,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.4
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -4734,7 +4696,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -4746,7 +4707,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.4
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
@ -4852,7 +4813,6 @@ jobs:
- name: Checkout PyTorch - name: Checkout PyTorch
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive submodules: recursive
path: pytorch path: pytorch
quiet-checkout: true quiet-checkout: true
@ -4864,7 +4824,7 @@ jobs:
- name: Checkout pytorch/builder - name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout uses: malfet/checkout@silent-checkout
with: with:
ref: main ref: release/2.4
submodules: recursive submodules: recursive
repository: pytorch/builder repository: pytorch/builder
path: builder path: builder
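
The same two edits repeat across every workflow file in this compare view: the PyTorch checkout drops its explicit PR-head `ref:` (falling back to the action's default ref for the triggering commit), and the pytorch/builder checkout is pinned from `main` to the `release/2.4` branch. A minimal sketch of the resulting checkout steps after the change (job context and surrounding fields abbreviated; step names and inputs taken from the diff above):

```yaml
# Sketch of the post-change checkout steps in each affected job.
- name: Checkout PyTorch
  uses: malfet/checkout@silent-checkout
  with:                          # no explicit ref: any more
    submodules: recursive
    path: pytorch
    quiet-checkout: true
- name: Checkout pytorch/builder
  uses: malfet/checkout@silent-checkout
  with:
    ref: release/2.4             # was: main
    submodules: recursive
    repository: pytorch/builder
    path: builder
```

Pinning the builder checkout to a release branch keeps release-binary builds reproducible even as `main` on pytorch/builder moves forward.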

View File

@@ -90,7 +90,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -102,7 +101,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.4
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -211,7 +210,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -223,7 +221,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.4
           submodules: recursive
           repository: pytorch/builder
           path: builder

View File

@@ -97,7 +97,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -109,7 +108,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.4
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -218,7 +217,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -230,7 +228,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.4
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -352,7 +350,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -364,7 +361,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.4
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -474,7 +471,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -486,7 +482,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.4
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -609,7 +605,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -621,7 +616,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.4
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -731,7 +726,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -743,7 +737,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.4
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -866,7 +860,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -878,7 +871,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.4
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -988,7 +981,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -1000,7 +992,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.4
           submodules: recursive
           repository: pytorch/builder
           path: builder

View File

@@ -90,7 +90,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -102,7 +101,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.4
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -211,7 +210,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -223,7 +221,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.4
           submodules: recursive
           repository: pytorch/builder
           path: builder

View File

@@ -97,7 +97,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -109,7 +108,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.4
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -218,7 +217,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -230,7 +228,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.4
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -352,7 +350,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -364,7 +361,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.4
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -474,7 +471,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -486,7 +482,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.4
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -609,7 +605,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -621,7 +616,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.4
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -731,7 +726,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -743,7 +737,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.4
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -866,7 +860,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -878,7 +871,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.4
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -988,7 +981,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -1000,7 +992,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.4
           submodules: recursive
           repository: pytorch/builder
           path: builder


@@ -94,7 +94,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -106,7 +105,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder
@@ -211,7 +210,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -223,7 +221,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder
@@ -338,7 +336,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -350,7 +347,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder
@@ -456,7 +453,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -468,7 +464,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder
@@ -584,7 +580,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -596,7 +591,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder
@@ -702,7 +697,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -714,7 +708,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder
@@ -830,7 +824,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -842,7 +835,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder
@@ -948,7 +941,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -960,7 +952,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder
@@ -1075,7 +1067,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -1087,7 +1078,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder
@@ -1192,7 +1183,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -1204,7 +1194,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder
@@ -1319,7 +1309,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -1331,7 +1320,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder
@@ -1437,7 +1426,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -1449,7 +1437,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder
@@ -1565,7 +1553,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -1577,7 +1564,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder
@@ -1683,7 +1670,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -1695,7 +1681,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder
@@ -1811,7 +1797,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -1823,7 +1808,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder
@@ -1929,7 +1914,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -1941,7 +1925,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder
@@ -2056,7 +2040,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -2068,7 +2051,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder
@@ -2173,7 +2156,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -2185,7 +2167,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder
@@ -2300,7 +2282,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -2312,7 +2293,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder
@@ -2418,7 +2399,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -2430,7 +2410,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder
@@ -2546,7 +2526,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -2558,7 +2537,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder
@@ -2664,7 +2643,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -2676,7 +2654,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder
@@ -2792,7 +2770,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -2804,7 +2781,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder
@@ -2910,7 +2887,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -2922,7 +2898,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder
@@ -3037,7 +3013,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -3049,7 +3024,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder
@@ -3154,7 +3129,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -3166,7 +3140,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder
@@ -3281,7 +3255,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -3293,7 +3266,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder
@@ -3399,7 +3372,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -3411,7 +3383,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder
@@ -3527,7 +3499,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -3539,7 +3510,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder
@@ -3645,7 +3616,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -3657,7 +3627,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder
@@ -3773,7 +3743,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -3785,7 +3754,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder
@@ -3891,7 +3860,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -3903,7 +3871,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder
@@ -4018,7 +3986,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -4030,7 +3997,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder
@@ -4135,7 +4102,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -4147,7 +4113,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder
@@ -4262,7 +4228,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -4274,7 +4239,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder
@@ -4380,7 +4345,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -4392,7 +4356,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder
@@ -4508,7 +4472,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -4520,7 +4483,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder
@@ -4626,7 +4589,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -4638,7 +4600,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder
@@ -4754,7 +4716,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -4766,7 +4727,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder
@@ -4872,7 +4833,6 @@ jobs:
 - name: Checkout PyTorch
   uses: malfet/checkout@silent-checkout
   with:
-    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
     submodules: recursive
     path: pytorch
     quiet-checkout: true
@@ -4884,7 +4844,7 @@ jobs:
 - name: Checkout pytorch/builder
   uses: malfet/checkout@silent-checkout
   with:
-    ref: main
+    ref: release/2.4
     submodules: recursive
     repository: pytorch/builder
     path: builder


@@ -15,7 +15,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
     - name: Run BC Lint Action
-      uses: pytorch/test-infra/.github/actions/bc-lint@main
+      uses: pytorch/test-infra/.github/actions/bc-lint@release/2.4
       with:
         repo: ${{ github.event.pull_request.head.repo.full_name }}
        base_sha: ${{ github.event.pull_request.base.sha }}


@@ -16,7 +16,7 @@ permissions: read-all
 # When any other step fails, it's job will be retried once by retryBot.
 jobs:
   lintrunner-clang:
-    uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
+    uses: pytorch/test-infra/.github/workflows/linux_job.yml@release/2.4
     with:
       timeout: 120
       runner: linux.2xlarge
@@ -32,7 +32,7 @@ jobs:
       .github/scripts/lintrunner.sh

   lintrunner-noclang:
-    uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
+    uses: pytorch/test-infra/.github/workflows/linux_job.yml@release/2.4
     with:
       timeout: 120
       runner: linux.2xlarge
@@ -47,7 +47,7 @@ jobs:
       .github/scripts/lintrunner.sh

   quick-checks:
-    uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
+    uses: pytorch/test-infra/.github/workflows/linux_job.yml@release/2.4
     with:
       runner: linux.2xlarge
       docker-image: pytorch-linux-focal-linter
@@ -88,7 +88,7 @@ jobs:
     if: github.event_name == 'pull_request' && !contains(github.event.pull_request.labels.*.name, 'skip-pr-sanity-checks')
     steps:
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
         with:
           submodules: false
           fetch-depth: -1
@@ -101,7 +101,7 @@ jobs:
       bash .github/scripts/pr-sanity-check.sh

   workflow-checks:
-    uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
+    uses: pytorch/test-infra/.github/workflows/linux_job.yml@release/2.4
     with:
       runner: linux.2xlarge
       docker-image: pytorch-linux-focal-linter
@@ -112,6 +112,7 @@ jobs:
       # The generic Linux job chooses to use base env, not the one setup by the image
       CONDA_ENV=$(conda env list --json | jq -r ".envs | .[-1]")
       conda activate "${CONDA_ENV}"
+      export RELEASE_VERSION_TAG="2.4"

       # Regenerate workflows
       .github/scripts/generate_ci_workflows.py
@@ -137,7 +138,7 @@ jobs:
       exit $RC

   toc:
-    uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
+    uses: pytorch/test-infra/.github/workflows/linux_job.yml@release/2.4
     with:
       runner: linux.2xlarge
       docker-image: pytorch-linux-focal-linter
@@ -175,7 +176,7 @@ jobs:
   test-tools:
     name: Test tools
     if: ${{ github.repository == 'pytorch/pytorch' }}
-    uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
+    uses: pytorch/test-infra/.github/workflows/linux_job.yml@release/2.4
     with:
       runner: linux.2xlarge
       docker-image: pytorch-linux-focal-linter
@@ -196,7 +197,7 @@ jobs:
     runs-on: linux.20_04.4x
     steps:
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
         with:
           submodules: false
           fetch-depth: 1
@@ -226,7 +227,7 @@ jobs:
       # [see note: pytorch repo ref]
       # deep clone (fetch-depth 0) required, to allow us to use git log
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
         with:
           submodules: false
           fetch-depth: 1


@@ -116,5 +116,5 @@ jobs:
           AWS_REGION: ""
       - name: Teardown Linux
-        uses: pytorch/test-infra/.github/actions/teardown-linux@main
+        uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.4
         if: always()


@@ -21,7 +21,7 @@ jobs:
     environment: upload-stats
     steps:
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
         with:
           fetch-depth: 1
           submodules: false


@@ -41,7 +41,7 @@ jobs:
     environment: update-commit-hash
     steps:
       - name: update-vision-commit-hash
-        uses: pytorch/test-infra/.github/actions/update-commit-hash@main
+        uses: pytorch/test-infra/.github/actions/update-commit-hash@release/2.4
         if: ${{ github.event_name == 'schedule' }}
         with:
           repo-name: vision
@@ -56,7 +56,7 @@ jobs:
     environment: update-commit-hash
     steps:
       - name: update-audio-commit-hash
-        uses: pytorch/test-infra/.github/actions/update-commit-hash@main
+        uses: pytorch/test-infra/.github/actions/update-commit-hash@release/2.4
         if: ${{ github.event_name == 'schedule' }}
         with:
           repo-name: audio
@@ -71,7 +71,7 @@ jobs:
     environment: update-commit-hash
     steps:
       - name: update-executorch-commit-hash
-        uses: pytorch/test-infra/.github/actions/update-commit-hash@main
+        uses: pytorch/test-infra/.github/actions/update-commit-hash@release/2.4
         if: ${{ github.event_name == 'schedule' }}
         with:
           repo-name: executorch


@@ -24,7 +24,7 @@ jobs:
       - name: Calculate docker image
         id: calculate-docker-image
-        uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+        uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.4
         with:
           docker-image-name: pytorch-linux-focal-cuda12.1-cudnn9-py3-gcc9
           working-directory: pytorch
@@ -39,13 +39,13 @@ jobs:
           echo "docker pull ghcr.io/pytorch/ci-image:${tag/:/-}"
       - name: Pull docker image
-        uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+        uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.4
         with:
           docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
       - name: Install nvidia driver, nvidia-docker runtime, set GPU_FLAG
         id: install-nvidia-driver
-        uses: pytorch/test-infra/.github/actions/setup-nvidia@main
+        uses: pytorch/test-infra/.github/actions/setup-nvidia@release/2.4
       - name: Clone CodeLlama
         uses: actions/checkout@v3
@@ -136,7 +136,7 @@ jobs:
             "s3://target-determinator-assets/indexes/latest/${ZIP_NAME}"
       - name: Teardown Linux
-        uses: pytorch/test-infra/.github/actions/teardown-linux@main
+        uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.4
         if: always()
 concurrency:


@@ -14,7 +14,7 @@ jobs:
       # checkout because when we run this action we don't *have* a local
       # checkout. In other cases you should prefer a local checkout.
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
         with:
           submodules: false


@@ -16,7 +16,7 @@ jobs:
     environment: ${{ (github.event_name == 'schedule') && 'mergebot' || '' }}
     steps:
       - name: Update viable/strict
-        uses: pytorch/test-infra/.github/actions/update-viablestrict@main
+        uses: pytorch/test-infra/.github/actions/update-viablestrict@release/2.4
         with:
           repository: pytorch/pytorch
           stable-branch: viable/strict


@@ -17,7 +17,7 @@ jobs:
       contents: read
     steps:
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
         with:
           fetch-depth: 1
           submodules: false


@@ -44,7 +44,7 @@ jobs:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
           AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
           AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
-        uses: pytorch/test-infra/.github/actions/upload-alerts@main
+        uses: pytorch/test-infra/.github/actions/upload-alerts@release/2.4
         with:
           alerts: '${{ steps.alert_creation_step.outputs.script-output }}'
           organization: "pytorch"


@@ -39,7 +39,7 @@ jobs:
         run: echo "${TRIGGERING_WORKFLOW}"
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
       - uses: actions/setup-python@v4
         with:


@@ -29,7 +29,7 @@ jobs:
     name: Upload dynamo performance stats for ${{ github.event.workflow_run.id }}, attempt ${{ github.event.workflow_run.run_attempt }}
     steps:
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
         with:
           submodules: false
           fetch-depth: 1


@@ -17,7 +17,7 @@ jobs:
     environment: upload-stats
     steps:
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.4
         with:
           fetch-depth: 1
           submodules: false


@@ -21,7 +21,7 @@ jobs:
           fetch-depth: 0
       - name: update-xla-commit-hash
         continue-on-error: true
-        uses: pytorch/test-infra/.github/actions/update-commit-hash@main
+        uses: pytorch/test-infra/.github/actions/update-commit-hash@release/2.4
         with:
           repo-name: xla
           branch: master
@@ -30,7 +30,7 @@ jobs:
           updatebot-token: ${{ secrets.UPDATEBOT_TOKEN }}
           pytorchbot-token: ${{ secrets.GH_PYTORCHBOT_TOKEN }}
       - name: update-triton-commit-hash
-        uses: pytorch/test-infra/.github/actions/update-commit-hash@main
+        uses: pytorch/test-infra/.github/actions/update-commit-hash@release/2.4
         with:
           repo-owner: openai
           repo-name: triton


@@ -364,7 +364,7 @@ class TORCH_API Context {
   bool enabled_flashSDP = true;
   bool enabled_mem_efficientSDP = true;
   bool enabled_mathSDP = true;
-  bool enabled_cudnnSDP = true;
+  bool enabled_cudnnSDP = false;
 #ifdef USE_ROCM
   bool benchmark_cudnn = true;
 #else

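The `Context.h` hunk above flips the default of the cuDNN SDPA flag from `true` to `false`: on release/2.4 the cuDNN scaled-dot-product-attention backend is off unless the user opts in. A minimal sketch of this per-backend enable-flag pattern, in Python; the class and field names here are illustrative stand-ins, not the actual torch API:

```python
from dataclasses import dataclass

@dataclass
class SDPContext:
    # Mirrors the C++ Context flags: each fused SDPA backend has an
    # independent enable switch that dispatch consults at runtime.
    enabled_flash_sdp: bool = True
    enabled_mem_efficient_sdp: bool = True
    enabled_math_sdp: bool = True
    enabled_cudnn_sdp: bool = False  # flipped to False in release/2.4

    def user_enabled_cudnn_sdp(self) -> bool:
        return self.enabled_cudnn_sdp

ctx = SDPContext()
assert not ctx.user_enabled_cudnn_sdp()  # disabled by default
ctx.enabled_cudnn_sdp = True             # explicit opt-in
assert ctx.user_enabled_cudnn_sdp()
```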

@@ -17,7 +17,7 @@ static void metaFallback(
       "while using an operator with PT2 compilation APIs (torch.compile/torch.export); "
       "in order to use this operator with those APIs you'll need to add a fake impl. "
       "Please see the following for next steps: "
-      "https://pytorch.org/docs/main/notes/custom_operators.html");
+      "https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html");
 }

 TORCH_LIBRARY_IMPL(_, Meta, m) {


@@ -614,13 +614,6 @@ void run_cudnn_SDP_bprop(
     Tensor& dV,
     const Tensor& dropoutseed,
     const Tensor& dropoutoffset) {
-  Tensor dO_ = dO;
-  if (!dO.strides()[dO.strides().size() - 1]) {
-    TORCH_WARN(
-        "cuDNN SDPA backward got an innermost stride of 0 in grad_out, which is unsupported. Materializing a contiguous\
-        tensor which will increase memory usage...");
-    dO_ = dO.contiguous();
-  }
   cudnnHandle_t handle = getCudnnHandle();
   auto key = MHACacheKeyWrapper(
       b, h, s_q, s_kv, d, q, k, v, dropout_probability, is_causal, true);
@@ -642,7 +635,7 @@ void run_cudnn_SDP_bprop(
       k,
       v,
       o,
-      dO_,
+      dO,
       softmaxstats,
       dQ,
       dK,


@@ -14728,12 +14728,12 @@
     CUDA: _scaled_dot_product_efficient_attention_backward_cuda
   tags: nondeterministic_seeded

-- func: _scaled_dot_product_cudnn_attention(Tensor query, Tensor key, Tensor value, bool compute_log_sumexp, float dropout_p=0.0, bool is_causal=False, *, float? scale=None) -> (Tensor output, Tensor logsumexp, Tensor philox_seed, Tensor philox_offset)
+- func: _scaled_dot_product_cudnn_attention(Tensor query, Tensor key, Tensor value, float dropout_p=0.0, bool is_causal=False, bool return_debug_mask=False, *, float? scale=None) -> (Tensor output, Tensor logsumexp, Tensor cum_seq_q, Tensor cum_seq_k, SymInt max_q, SymInt max_k, Tensor philox_seed, Tensor philox_offset, Tensor debug_attn_mask)
  dispatch:
    CUDA: _scaled_dot_product_cudnn_attention_cuda
  tags: nondeterministic_seeded

-- func: _scaled_dot_product_cudnn_attention_backward(Tensor grad_out, Tensor query, Tensor key, Tensor value, Tensor out, Tensor logsumexp, Tensor philox_seed, Tensor philox_offset, float dropout_p, bool is_causal, *, float? scale=None) -> (Tensor, Tensor, Tensor)
+- func: _scaled_dot_product_cudnn_attention_backward(Tensor grad_out, Tensor query, Tensor key, Tensor value, Tensor out, Tensor logsumexp, Tensor cum_seq_q, Tensor cum_seq_k, SymInt max_q, SymInt max_k, float dropout_p, bool is_causal, Tensor philox_seed, Tensor philox_offset, *, float? scale=None) -> (Tensor, Tensor, Tensor)
  dispatch:
    CUDA: _scaled_dot_product_cudnn_attention_backward_cuda
  tags: nondeterministic_seeded


@@ -666,7 +666,7 @@ Tensor scaled_dot_product_attention(
     case sdp::SDPBackend::cudnn_attention: {
       bool compute_logsumexp = should_compute_logsumexp(query_, key, value);
       auto out_lse_softmax = at::_scaled_dot_product_cudnn_attention(
-          query_, key, value, compute_logsumexp, dropout_p, is_causal, scale);
+          query_, key, value, dropout_p, is_causal, compute_logsumexp, scale);
       return std::get<0>(out_lse_softmax);
     }
     case sdp::SDPBackend::flash_attention: {


@@ -735,27 +735,14 @@ std::tuple<Tensor, Tensor, Tensor, Tensor, c10::SymInt, c10::SymInt, Tensor, Ten
   return std::make_tuple(attention, logsumexp, Tensor(), Tensor(), max_seqlen_batch_q, max_seqlen_batch_k, philox_seed, philox_offset, debug_attn_mask);
 }

-// Adapted from TE
-// extract seed and offset from PhiloxCudaState
-__global__ void unpack_cudnn(at::PhiloxCudaState arg, int64_t* seed_ptr, int64_t* offset_ptr) {
-  if (arg.captured_) {
-    *seed_ptr = static_cast<int64_t>(*arg.seed_.ptr);
-    *offset_ptr = static_cast<int64_t>(
-                    *(arg.offset_.ptr) + static_cast<int64_t>(arg.offset_intragraph_));
-  } else {
-    *seed_ptr = static_cast<int64_t>(arg.seed_.val);
-    *offset_ptr = static_cast<int64_t>(arg.offset_.val);
-  }
-}
-
-std::tuple<Tensor, Tensor, Tensor, Tensor> _scaled_dot_product_cudnn_attention_cuda(
+std::tuple<Tensor, Tensor, Tensor, Tensor, c10::SymInt, c10::SymInt, Tensor, Tensor, Tensor> _scaled_dot_product_cudnn_attention_cuda(
     const Tensor& query,
     const Tensor& key,
     const Tensor& value,
-    bool compute_logsumexp,
     double dropout_p,
     bool is_causal,
-    c10::optional<double> scale) {
+    bool training,
+    std::optional<double> scale) {
   // Used for tracking usage statistics
   C10_LOG_API_USAGE_ONCE("torch.sdpa.flash_attention_cudnn");
   // Query (Batch x Num_heads x Q_seq_len x Dim_per_head)
@@ -774,33 +761,9 @@ std::tuple<Tensor, Tensor, Tensor, Tensor> _scaled_dot_product_cudnn_attention_c
   Tensor attention, log_sumexp;

-  at::Tensor cudnn_seed, cudnn_offset;
-  cudnn_seed = at::empty({}, at::dtype(at::kLong).device(at::kCUDA));
-  cudnn_offset = at::empty({}, at::dtype(at::kLong).device(at::kCUDA));
-
-  const bool use_dropout = std::fpclassify(dropout_p) != FP_ZERO;
-
-  // See Note [Seed and Offset Device] in _efficient_attention_forward
-  at::PhiloxCudaState philox_state;
-  const bool in_capture_stream =
-      at::cuda::currentStreamCaptureStatus() != at::cuda::CaptureStatus::None;
-  if (use_dropout) {
-    // Device
-    auto gen = at::get_generator_or_default<at::CUDAGeneratorImpl>(
-        c10::nullopt, at::cuda::detail::getDefaultCUDAGenerator());
-
-    // See Note [Acquire lock when using random generators]
-    std::lock_guard<std::mutex> lock(gen->mutex_);
-    // if using dropout, we produce 1 random number for each element of the
-    // attention tensor
-    // TODO(eqy): should state be advanced per thread (local) amount or per call/launch (global) amount
-    philox_state = gen->philox_cuda_state(batch_size * num_heads * max_seqlen_batch_q * max_seqlen_batch_k);
-    unpack_cudnn<<<1, 1, 0, at::cuda::getCurrentCUDAStream()>>>(
-        philox_state, static_cast<int64_t*>(cudnn_seed.data_ptr()), static_cast<int64_t*>(cudnn_offset.data_ptr()));
-  }
+  auto cudnn_seed = at::zeros({1}, query.options().dtype(kLong));
+  auto cudnn_offset = at::zeros({1}, query.options().dtype(kLong));

   const auto softmax_scale = sdp::calculate_scale(query, scale).as_float_unchecked();
-  Tensor debugmask;

   run_cudnn_SDP_fprop(batch_size/*int64_t b*/,
                       num_heads/*int64_t h*/,
@@ -808,7 +771,7 @@ std::tuple<Tensor, Tensor, Tensor, Tensor> _scaled_dot_product_cudnn_attention_c
                       max_seqlen_batch_k/*int64_t s_kv*/,
                       head_dim/*int64_t d*/,
                       softmax_scale/*float scaling_factor*/,
-                      compute_logsumexp/* bool */,
+                      training/* bool */,
                       is_causal/* bool */,
                       dropout_p/*double dropout_probability*/,
                       query/* Tensor q*/,
@@ -819,7 +782,7 @@ std::tuple<Tensor, Tensor, Tensor, Tensor> _scaled_dot_product_cudnn_attention_c
                       cudnn_seed/*Tensor dropoutseed*/,
                       cudnn_offset/*Tensor dropoutoffset*/);

-  return std::make_tuple(attention, log_sumexp, cudnn_seed, cudnn_offset);
+  return std::make_tuple(attention, log_sumexp, Tensor(), Tensor(), max_seqlen_batch_q, max_seqlen_batch_k, cudnn_seed, cudnn_offset, Tensor());
 }

 std::tuple<Tensor, Tensor, Tensor, Tensor> _scaled_dot_product_efficient_attention_cuda(

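For context on the hunk above: the deleted `unpack_cudnn` kernel copies the Philox RNG seed/offset out of an `at::PhiloxCudaState`, whose fields are either immediate values or device pointers depending on whether a CUDA graph capture is in flight. A rough host-side Python sketch of that branch, with field names adapted and lists standing in for device pointers (purely illustrative, not the torch API):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class PhiloxState:
    # When captured is True, seed/offset live behind "pointers" resolved at
    # graph replay time; otherwise they are plain immediate values.
    captured: bool
    seed_val: int = 0
    offset_val: int = 0
    seed_ptr: Optional[list] = None
    offset_ptr: Optional[list] = None
    offset_intragraph: int = 0

def unpack(state: PhiloxState) -> Tuple[int, int]:
    """Mirror the removed unpack_cudnn logic: dereference pointers for
    graph-captured state, otherwise return the immediate values."""
    if state.captured:
        return state.seed_ptr[0], state.offset_ptr[0] + state.offset_intragraph
    return state.seed_val, state.offset_val

assert unpack(PhiloxState(captured=False, seed_val=7, offset_val=3)) == (7, 3)
assert unpack(PhiloxState(captured=True, seed_ptr=[7], offset_ptr=[3],
                          offset_intragraph=4)) == (7, 7)
```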

@@ -171,32 +171,18 @@ std::tuple<Tensor, Tensor, Tensor> _scaled_dot_product_cudnn_attention_backward_
     const Tensor& value,
     const Tensor& out,
     const Tensor& logsumexp,
-    const Tensor& philox_seed,
-    const Tensor& philox_offset,
-    // const Tensor& cumulative_sequence_length_q,
-    // const Tensor& cumulative_sequence_length_k,
-    // const int64_t max_seqlen_batch_q,
-    // const int64_t max_seqlen_batch_k,
+    const Tensor& cumulative_sequence_length_q,
+    const Tensor& cumulative_sequence_length_k,
+    const int64_t max_seqlen_batch_q,
+    const int64_t max_seqlen_batch_k,
     double dropout_p,
     bool is_causal,
-    c10::optional<double> scale) {
-  auto& ctx = at::globalContext();
-  if (ctx.deterministicAlgorithms()) {
-    if (ctx.deterministicAlgorithmsWarnOnly()) {
-      TORCH_WARN_ONCE(
-          "cuDNN Attention defaults to a non-deterministic algorithm. ",
-          "To explicitly enable determinism call torch.use_deterministic_algorithms(True, warn_only=False).");
-    }
-  }
+    const Tensor& philox_seed,
+    const Tensor& philox_offset,
+    std::optional<double> scale) {
   const int64_t batch_size = query.size(0);
   const int64_t num_heads = query.size(1);
   const int64_t head_dim = query.size(3);
-  const int64_t max_seqlen_batch_q = query.size(1);
-  const int64_t max_seqlen_batch_k = key.size(1);

   const auto softmax_scale = sdp::calculate_scale(query, scale).as_float_unchecked();


@@ -6,7 +6,6 @@
 #include <ATen/core/Tensor.h>
 #include <ATen/core/grad_mode.h>
 #include <ATen/cuda/CUDAContext.h>
-#include <ATen/cuda/CUDAConfig.h>
 #include <ATen/detail/CUDAHooksInterface.h>
 #include <ATen/native/DispatchStub.h>
 #include <ATen/native/transformers/cuda/sdp_utils.h>
@@ -45,28 +44,14 @@
 namespace sdp {
 namespace {

-// TODO(eqy): more benchmarking to determine whether this should include sm86/89
-// Needs to be kept in-sync with test_fused_chocie in test_transformers.py
-bool check_prefer_cudnn_attention() {
-  auto dprops = at::cuda::getCurrentDeviceProperties();
-  return dprops->major >= 9;
-}
-
 // flash_attention V2 is universally faster than efficient_attention and Math
 std::array<SDPBackend, num_backends> priority_order(sdp_params const& params) {
   constexpr std::array<SDPBackend, num_backends> default_order{
-      SDPBackend::flash_attention,
-      SDPBackend::cudnn_attention,
-      SDPBackend::efficient_attention,
-      SDPBackend::math};
-  constexpr std::array<SDPBackend, num_backends> cudnn_order{
       SDPBackend::cudnn_attention,
       SDPBackend::flash_attention,
       SDPBackend::efficient_attention,
       SDPBackend::math};
-  static const bool prefer_cudnn = check_prefer_cudnn_attention();
-  return prefer_cudnn ? cudnn_order : default_order;
+  return default_order;
 }

 bool use_tensor_cores(sdp_params const& params, cudaDeviceProp* dprops, bool is_half) {
@@ -466,6 +451,17 @@ bool check_cudnn_hardware_support(sdp_params const& params, bool debug) {
   return true;
 }

+bool check_is_causal(sdp_params const& params, bool debug) {
+  // Check that the input is causal
+  if (!params.is_causal) {
+    if (debug) {
+      TORCH_WARN("CuDNN requires is_causal=True.");
+    }
+    return false;
+  }
+  return true;
+}
+
 bool check_for_nested_inputs(sdp_params const& params, bool debug) {
   // Check that the input is nested
   if (has_for_nested_inputs(params)) {
@@ -489,6 +485,22 @@ bool check_dtypes_low_precision(sdp_params const& params, bool debug) {
   }
 }

+bool check_runtime_enabled_cudnn(sdp_params const& params, bool debug) {
+  static c10::once_flag supported_flag;
+  static bool supported = false;
+  c10::call_once(supported_flag, []() {
+    supported = (c10::utils::check_env("TORCH_CUDNN_SDPA_ENABLED") == true);
+  });
+  if (!supported) {
+    if (debug) {
+      TORCH_WARN(
+          "The CuDNN backend needs to be enabled by setting the enviornment variable`TORCH_CUDNN_SDPA_ENABLED=1`");
+    }
+    return false;
+  }
+  return true;
+}
+
 bool check_runtime_disabled_cudnn(sdp_params const& params, bool debug) {
   // We check the global context to see if user has explicitly turned of cudnn
   // sdp kernels
@@ -501,15 +513,13 @@ bool check_runtime_disabled_cudnn(sdp_params const& params, bool debug) {
   return true;
 }

-bool check_cudnn_deterministic(const sdp_params& params, bool debug) {
-  auto& ctx = at::globalContext();
-  if (ctx.deterministicAlgorithms()) {
-    if (!ctx.deterministicAlgorithmsWarnOnly()) {
-      if (debug) {
-        TORCH_WARN("cuDNN SDPA is not deterministic.");
-      }
-      return false;
+bool check_cudnn_requires_grad(sdp_params const& params, bool debug) {
+  // Check that the input is causal
+  if (input_requires_grad(params)) {
+    if (debug) {
+      TORCH_WARN("CuDNN does not currently support inputs with requires_grad=True.");
     }
+    return false;
   }
   return true;
 }
@@ -517,29 +527,21 @@ bool check_cudnn_deterministic(const sdp_params& params, bool debug) {
 } // namespace

 bool can_use_cudnn_attention(const sdp_params& params, bool debug) {
-#if defined(USE_ROCM) || !AT_CUDNN_ENABLED() || \
-    (defined(CUDNN_VERSION) && CUDNN_VERSION < 8900)
-  TORCH_WARN_ONCE(!debug, "Torch was not compiled with cuDNN attention.");
-  return false;
-#endif
   // Define gate functions that determine if a flash kernel can be ran
   // Replace with std::to_array when we migrate to c++20
   constexpr auto general_constraints =
       array_of<bool (*)(sdp_params const&, bool)>(
-          check_for_nested_inputs,
-          check_nonzero_sequence_lengths_dense,
-          check_last_dim_stride_equals_1_dense<true /*ignore_singleton_dim>*/>,
-          check_all_tensors_on_device,
-          check_tensor_shapes,
-          check_cudnn_tensor_shapes,
+          check_runtime_enabled_cudnn,
           check_runtime_disabled_cudnn,
-          check_cudnn_deterministic,
-          // check_cudnn_layout,
+          check_cudnn_hardware_support,
+          check_all_tensors_on_device,
+          check_cudnn_tensor_shapes,
+          check_cudnn_layout,
           // check_is_causal,
-          check_dtypes_low_precision,
-          check_for_attn_mask_cudnn,
-          check_cudnn_hardware_support
-      );
+          check_for_nested_inputs,
+          check_cudnn_requires_grad,
+          check_dtypes_low_precision);
   for (auto& constraint : general_constraints) {
     if (!constraint(params, debug)) {
       return false;
@@ -683,7 +685,6 @@ SDPBackend select_sdp_backend(sdp_params const& kernel_params) {
     switch (backend) {
       case SDPBackend::cudnn_attention:
         if (sdp::can_use_cudnn_attention(kernel_params, print_debug)) {
-          TORCH_WARN("USING CUDNN SDPA");
           return SDPBackend::cudnn_attention;
         }
         break;

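The `priority_order` change above removes the runtime choice between two candidate orderings: on main, cuDNN attention was tried first only on devices whose SM major version is at least 9 (per the deleted `check_prefer_cudnn_attention`), while release/2.4 reverts to a single fixed order. A small Python sketch of the main-branch selection logic; the enum and the `sm_major` parameter are illustrative stand-ins, not the real API:

```python
from enum import Enum, auto
from typing import List

class SDPBackend(Enum):
    CUDNN_ATTENTION = auto()
    FLASH_ATTENTION = auto()
    EFFICIENT_ATTENTION = auto()
    MATH = auto()

def priority_order(sm_major: int) -> List[SDPBackend]:
    # Mirrors check_prefer_cudnn_attention(): prefer the cuDNN kernel only
    # on sm90+ devices; everywhere else try flash attention first.
    default_order = [SDPBackend.FLASH_ATTENTION, SDPBackend.CUDNN_ATTENTION,
                     SDPBackend.EFFICIENT_ATTENTION, SDPBackend.MATH]
    cudnn_order = [SDPBackend.CUDNN_ATTENTION, SDPBackend.FLASH_ATTENTION,
                   SDPBackend.EFFICIENT_ATTENTION, SDPBackend.MATH]
    return cudnn_order if sm_major >= 9 else default_order

assert priority_order(9)[0] is SDPBackend.CUDNN_ATTENTION   # sm90: cuDNN first
assert priority_order(8)[0] is SDPBackend.FLASH_ATTENTION   # sm80: flash first
```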

@@ -266,18 +266,7 @@ inline bool check_requires_grad_and_nested(sdp_params const& params, bool debug)
 inline bool check_for_attn_mask(sdp_params const& params, bool debug) {
   if (params.attn_mask.has_value()) {
     if (debug) {
-      TORCH_WARN("Flash Attention do not support non-null attn_mask.");
-    }
-    return false;
-  }
-  return true;
-}
-
-// TODO(eqy): remove this once support is added
-inline bool check_for_attn_mask_cudnn(sdp_params const& params, bool debug) {
-  if (params.attn_mask.has_value()) {
-    if (debug) {
-      TORCH_WARN("cuDNN Attention does not support non-null attn_mask.");
+      TORCH_WARN("Flash Attention does not support non-null attn_mask.");
     }
     return false;
   }
   return true;
 }
@@ -324,7 +313,7 @@ inline bool check_tensor_shapes(sdp_params const& params, bool debug) {
       (query_dim == 4))) {
     if (debug) {
       TORCH_WARN(
-          "All fused kernels requires query, key and value to be 4 dimensional, but got Query dim: ",
+          "Both fused kernels requires query, key and value to be 4 dimensional, but got Query dim: ",
           query_dim,
           ", Key dim: ",
           params.key.dim(),
@@ -436,7 +425,7 @@ inline bool check_nonzero_sequence_lengths_dense(sdp_params const& params, bool
   if (zero_seq_len_q || zero_seq_len_k) {
     if (debug) {
       TORCH_WARN(
-          "All fused kernels do not support zero seq_len_q or seq_len_kv.");
+          "Both fused kernels do not support zero seq_len_q or seq_len_kv.");
     }
     return false;
   }
@@ -471,7 +460,7 @@ inline bool check_last_dim_stride_equals_1_dense(sdp_params const& params, bool
   }
   epilogue_message << " instead.";
   TORCH_WARN(
-      "All fused kernels require the last dimension of the input to have stride 1. ",
+      "Both fused kernels require the last dimension of the input to have stride 1. ",
       "Got Query.stride(-1): ",
       params.query.sym_stride(-1),
       ", Key.stride(-1): ",


@@ -1184,12 +1184,14 @@ class AOTInductorModelCache:
         else:
             _register_dataclass_output_as_pytree(example_outputs)

-        gm = torch.export._trace._export(
+        # TODO(angelayi): change this to predispatch
+        # https://github.com/pytorch/pytorch/issues/127513 needs to be fixed before changing
+        # to predispatch to avoid performance regressions
+        gm = torch.export._trace._export_to_torch_ir(
             model,
             example_args,
             example_kwargs,
-            pre_dispatch=True,
-        ).module()
+        )

         with torch.no_grad():
             so_path = torch._inductor.aot_compile(
                 gm, example_args, example_kwargs


@@ -18,7 +18,7 @@ void throwNullDataPtrError() {
       "If you're using torch.compile/export/fx, it is likely that we are erroneously "
       "tracing into a custom kernel. To fix this, please wrap the custom kernel into "
      "an opaque custom op. Please see the following for details: "
-      "https://pytorch.org/docs/main/notes/custom_operators.html");
+      "https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html");
 }

 // NOTE: [FakeTensor.data_ptr deprecation]


@@ -1580,7 +1580,7 @@ struct C10_API TensorImpl : public c10::intrusive_ptr_target {
         "If you're using torch.compile/export/fx, it is likely that we are erroneously "
         "tracing into a custom kernel. To fix this, please wrap the custom kernel into "
         "an opaque custom op. Please see the following for details: "
-        "https://pytorch.org/docs/main/notes/custom_operators.html\n"
+        "https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html\n"
         "If you're using Caffe2, Caffe2 uses a lazy allocation, so you will need to call "
         "mutable_data() or raw_mutable_data() to actually allocate memory.");
     // Caller does the type check.


@@ -92,8 +92,6 @@ torch.backends.cuda
 .. autofunction:: torch.backends.cuda.can_use_efficient_attention

-.. autofunction:: torch.backends.cuda.can_use_cudnn_attention
-
 .. autofunction:: torch.backends.cuda.sdp_kernel

 torch.backends.cudnn


@@ -3,54 +3,4 @@
 PyTorch Custom Operators Landing Page
 =====================================

-PyTorch offers a large library of operators that work on Tensors (e.g. :func:`torch.add`,
-:func:`torch.sum`, etc). However, you may wish to bring a new custom operation to PyTorch
-and get it to work with subsystems like :func:`torch.compile`, autograd, and :func:`torch.vmap`.
-In order to do so, you must register the custom operation with PyTorch via the Python
-:ref:`torch-library-docs` or C++ TORCH_LIBRARY APIs.
+`This page has moved. Click here for the new page. <https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html>`_
-
-TL;DR
------
-
-How do I author a custom op from Python?
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-..
-    [comment] TODO(rzou): The following will be a link to a tutorial on the PyTorch tutorials site in 2.4
-
-Please see the `Python Custom Operators tutorial <https://colab.research.google.com/drive/1xCh5BNHxGnutqGLMHaHwm47cbDL9CB1g#scrollTo=gg6WorNtKzeh>`_
-
-How do I integrate custom C++ and/or CUDA code with PyTorch?
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-..
-    [comment] TODO(rzou): The following will be a link to a tutorial on the PyTorch tutorials site in 2.4
-
-Please see the `Custom C++ and CUDA Operators tutorial <https://docs.google.com/document/d/1-LdJZBzlxiF0Tm-8NfbyFvRJaofdwRgLcycXGmlIpS0>`_
-
-For more details
-^^^^^^^^^^^^^^^^
-
-Please see `The Custom Operators Manual (gdoc) <https://docs.google.com/document/d/1_W62p8WJOQQUzPsJYa7s701JXt0qf2OfLub2sbkHOaU>`_
-(we're working on moving the information to our docs site). We recommend that you
-first read one of the tutorials above and then use the Custom Operators Manual as a reference;
-it is not meant to be read head to toe.
-
-When should I create a Custom Operator?
----------------------------------------
-
-If your operation is expressible as a composition of built-in PyTorch operators
-then please write it as a Python function and call it instead of creating a
-custom operator. Use the operator registration APIs to create a custom op if you
-are calling into some library that PyTorch doesn't understand (e.g. custom C/C++ code,
-a custom CUDA kernel, or Python bindings to C/C++/CUDA extensions).
-
-Why should I create a Custom Operator?
---------------------------------------
-
-It is possible to use a C/C++/CUDA kernel by grabbing a Tensor's data pointer
and passing it to a pybind'ed kernel. However, this approach doesn't compose with
PyTorch subsystems like autograd, torch.compile, vmap, and more. In order
for an operation to compose with PyTorch subsystems, it must be registered
via the operator registration APIs.
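The removed landing-page text above argues that composing with autograd, torch.compile, and vmap requires going through a registration API rather than calling a raw pybind'ed kernel. The registry idea can be sketched in plain Python; everything below (the registry dict, `register_op`, `call_op`, the `mylib::` names) is invented for illustration and is not the torch.library API:

```python
# Illustrative only: a toy operator registry, NOT the torch.library API.
# Subsystems can look up extra rules (here, a backward formula) by name;
# a kernel called directly through its own binding offers no such hook.
OP_REGISTRY = {}

def register_op(name, impl, backward=None):
    """Register a kernel plus optional per-subsystem rules under one name."""
    OP_REGISTRY[name] = {"impl": impl, "backward": backward}

def call_op(name, *args, needs_grad=False):
    entry = OP_REGISTRY[name]
    if needs_grad and entry["backward"] is None:
        # A subsystem that needs a rule the op never registered can fail
        # loudly instead of silently producing wrong results.
        raise NotImplementedError(f"{name}: no autograd rule registered")
    return entry["impl"](*args)

register_op("mylib::add_one", lambda x: x + 1, backward=lambda g: g)
register_op("mylib::opaque", lambda x: x * 2)  # no backward rule
```

A raw data-pointer call corresponds to invoking `entry["impl"]` directly: it works eagerly, but no subsystem can discover the `backward` entry, which is the composition failure the page describes.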


@@ -317,6 +317,18 @@ class TensorParallelStyleTest(DTensorTestBase):
self.assertEqual(comm_mode.get_total_counts(), 2)
self.assertEqual(output.shape, (1 * self.world_size, 8))
# test the case where x is a DTensor
x_dt = DTensor.from_local(
torch.randn(1, 8, device=self.device_type), mesh, [Shard(0)]
)
with comm_mode:
output = test_kwonly_mod(
x=x_dt, z=torch.ones(1, 8, device=self.device_type)
)
self.assertEqual(comm_mode.get_total_counts(), 2)
self.assertEqual(output.shape, (1 * self.world_size, 8))
@with_comms
def test_prepare_module_output(self):
mesh = init_device_mesh(self.device_type, (self.world_size,))


@@ -246,16 +246,49 @@ class MiscTests(torch._inductor.test_case.TestCase):
return module.foobar(x)
with self.assertWarnsOnceRegex(
-UserWarning, ".*https://pytorch.org/docs/main/notes/custom_operators.html.*"
+UserWarning,
+".*https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html.*",
):
f(x)
self.assertEqual(len(counters["graph_break"]), 1)
first_graph_break = list(counters["graph_break"].keys())[0]
self.assertExpectedInline(
first_graph_break,
-"""Graph break due to unsupported builtin mylib.PyCapsule.foobar. This function is either a Python builtin (e.g. _warnings.warn) or a third-party C/C++ Python extension (perhaps created with pybind). If it is a Python builtin, please file an issue on GitHub so the PyTorch team can add support for it and see the next case for a workaround. If it is a third-party C/C++ Python extension, please either wrap it into a PyTorch-understood custom operator (see https://pytorch.org/docs/main/notes/custom_operators.html for more details) or, if it is traceable, use torch.compiler.allow_in_graph.""",
+"""Graph break due to unsupported builtin mylib.PyCapsule.foobar. This function is either a Python builtin (e.g. _warnings.warn) or a third-party C/C++ Python extension (perhaps created with pybind). If it is a Python builtin, please file an issue on GitHub so the PyTorch team can add support for it and see the next case for a workaround. If it is a third-party C/C++ Python extension, please either wrap it into a PyTorch-understood custom operator (see https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html for more details) or, if it is traceable, use torch.compiler.allow_in_graph.""",
)
cpp_source = """
#include <torch/extension.h>
at::Tensor baz(const at::Tensor& x) {
return x.clone();
}
"""
module2 = torch.utils.cpp_extension.load_inline(
name="mylib2",
cpp_sources=cpp_source,
functions="baz",
verbose=True,
)
torch._dynamo.reset()
# Test that each warning only happens once
@torch.compile(backend="eager")
def f(x):
module2.baz(x)
module.foobar(x)
module.foobar(x)
module2.baz(x)
module.foobar(x)
module2.baz(x)
return x.clone()
with warnings.catch_warnings(record=True) as ws:
warnings.simplefilter("always")
f(x)
f(x)
self.assertEqual(len(ws), 2)
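The `warnings.catch_warnings` / `simplefilter` pattern this test relies on can be seen in isolation with the standard `warnings` module: under the `"once"` filter a repeated message is recorded a single time. This is a pure-Python sketch unrelated to torch; the function name and message are invented stand-ins for the graph-break warning path:

```python
import warnings

def call_unsupported_builtin(x):
    # Stand-in for the graph-break path: warns every time it is hit.
    warnings.warn("unsupported builtin; falling back", UserWarning)
    return x

with warnings.catch_warnings(record=True) as ws:
    # "once": a given (message, category) pair is recorded only once,
    # which is the deduplication behavior the test above checks for.
    warnings.simplefilter("once")
    call_unsupported_builtin(1)
    call_unsupported_builtin(2)  # duplicate message, suppressed

messages = [str(w.message) for w in ws]
```

The test above uses `simplefilter("always")` instead, so every emission would surface; the 2-warning count it asserts therefore comes from the compiler's own once-per-op deduplication, not from the filter.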
def test_callpacked(self):
def call_packed(args):
a, b, c = args


@@ -198,6 +198,33 @@ def forward(self, arg0_1, arg1_1, arg2_1):
res = torch.compile(f, backend="inductor")(*inputs)
self.assertTrue(torch.allclose(res, f(*inputs)))
@unittest.skipIf(IS_WINDOWS, "Skipped on Windows!")
@skipIfNoDynamoSupport
def test_compile_inductor_external_op_return_none(self):
with torch.library._scoped_library("mylib", "FRAGMENT") as lib:
torch.library.define(
"mylib::inplace_add",
"(Tensor input, Tensor(a!) output) -> ()",
lib=lib,
)
def inplace_add(input: torch.Tensor, output: torch.Tensor) -> None:
assert input.device == output.device
output.add_(input)
lib.impl("inplace_add", inplace_add, "CompositeExplicitAutograd")
def f(x):
out = torch.empty(3)
out = torch.zeros_like(out)
torch.ops.mylib.inplace_add(x, out)
return out
inputs = (torch.randn(3),)
res = torch.compile(f, backend="inductor")(*inputs)
self.assertTrue(torch.allclose(res, f(*inputs)))
def test_compile_aot_eager_requires_grad(self):
def f(x):
torch.ops.aten._print("moo")


@@ -4,7 +4,6 @@ import copy
import functools
import itertools
import math
-import os
import platform
import sys
import unittest
@@ -67,13 +66,12 @@ aten = torch.ops.aten
check_model = test_torchinductor.check_model
requires_vectorization = unittest.skipUnless(
-codecache.valid_vec_isa_list() and os.getenv("ATEN_CPU_CAPABILITY") != "default",
-"Does not support vectorization",
+codecache.valid_vec_isa_list(), "Does not support vectorization"
)
def check_metrics_vec_kernel_count(num_expected_vec_kernels):
-if codecache.valid_vec_isa_list() and os.getenv("ATEN_CPU_CAPABILITY") != "default":
+if codecache.valid_vec_isa_list():
assert metrics.generated_cpp_vec_kernel_count == num_expected_vec_kernels
@@ -1582,71 +1580,6 @@ class CPUReproTests(TestCase):
metrics.reset()
self.common(fn, (value,))
@unittest.skipIf(
not codecache.valid_vec_isa_list()
or "avx2" in [str(vec_isa) for vec_isa in codecache.valid_vec_isa_list()],
"Does not support vectorization or not s390x/neon machine",
)
@patch("torch.cuda.is_available", lambda: False)
def test_auto_zvec_neon_simd(self):
vec_zvec_neon = codecache.valid_vec_isa_list()[0]
self.assertTrue(vec_zvec_neon.bit_width() == 256)
with config.patch({"cpp.simdlen": 0}):
isa = codecache.pick_vec_isa()
self.assertFalse(isa)
with config.patch({"cpp.simdlen": 1}):
isa = codecache.pick_vec_isa()
self.assertFalse(isa)
with config.patch({"cpp.simdlen": 257}):
isa = codecache.pick_vec_isa()
self.assertFalse(isa)
with config.patch({"cpp.simdlen": 256}):
isa = codecache.pick_vec_isa()
self.assertTrue(isa == vec_zvec_neon)
pre_var = os.getenv("ATEN_CPU_CAPABILITY")
if pre_var:
os.environ.pop("ATEN_CPU_CAPABILITY")
try:
with config.patch({"cpp.simdlen": None}):
isa = codecache.pick_vec_isa()
self.assertTrue(isa == vec_zvec_neon)
with config.patch({"cpp.simdlen": None}):
os.environ["ATEN_CPU_CAPABILITY"] = "avx2"
isa = codecache.pick_vec_isa()
self.assertTrue(isa == vec_zvec_neon)
with config.patch({"cpp.simdlen": None}):
os.environ["ATEN_CPU_CAPABILITY"] = "avx512"
isa = codecache.pick_vec_isa()
self.assertTrue(isa == vec_zvec_neon)
with config.patch({"cpp.simdlen": None}):
os.environ["ATEN_CPU_CAPABILITY"] = "default"
isa = codecache.pick_vec_isa()
self.assertFalse(isa)
with config.patch({"cpp.simdlen": None}):
os.environ["ATEN_CPU_CAPABILITY"] = "neon"
isa = codecache.pick_vec_isa()
self.assertTrue(isa == vec_zvec_neon)
with config.patch({"cpp.simdlen": None}):
os.environ["ATEN_CPU_CAPABILITY"] = "zvector"
isa = codecache.pick_vec_isa()
self.assertTrue(isa == vec_zvec_neon)
finally:
if pre_var:
os.environ["ATEN_CPU_CAPABILITY"] = pre_var
elif os.getenv("ATEN_CPU_CAPABILITY"):
os.environ.pop("ATEN_CPU_CAPABILITY")
@unittest.skipIf(
platform.machine() != "x86_64" or not codecache.valid_vec_isa_list(),
"Does not support vectorization or not x86_64 machine",
@@ -1662,6 +1595,13 @@ class CPUReproTests(TestCase):
self.assertTrue(vec_avx512.nelements(torch.bfloat16) == 32)
self.assertTrue(vec_avx2.nelements(torch.bfloat16) == 16)
with config.patch({"cpp.simdlen": None}):
isa = codecache.pick_vec_isa()
if vec_avx512 in codecache.valid_vec_isa_list():
self.assertTrue(isa == vec_avx512)
else:
self.assertTrue(isa == vec_avx2)
with config.patch({"cpp.simdlen": 0}):
isa = codecache.pick_vec_isa()
self.assertFalse(isa)
@@ -1691,60 +1631,6 @@ class CPUReproTests(TestCase):
isa = codecache.pick_vec_isa()
self.assertTrue(isa == vec_avx2)
pre_var = os.getenv("ATEN_CPU_CAPABILITY")
if pre_var:
os.environ.pop("ATEN_CPU_CAPABILITY")
try:
with config.patch({"cpp.simdlen": None}):
isa = codecache.pick_vec_isa()
if vec_avx512 in codecache.valid_vec_isa_list():
self.assertTrue(isa == vec_avx512)
else:
self.assertTrue(isa == vec_avx2)
with config.patch({"cpp.simdlen": None}):
os.environ["ATEN_CPU_CAPABILITY"] = "avx2"
isa = codecache.pick_vec_isa()
if vec_avx512 in codecache.valid_vec_isa_list():
self.assertTrue(isa == vec_avx2)
elif vec_avx2 in codecache.valid_vec_isa_list():
self.assertTrue(isa == vec_avx2)
with config.patch({"cpp.simdlen": None}):
os.environ["ATEN_CPU_CAPABILITY"] = "avx512"
isa = codecache.pick_vec_isa()
if vec_avx512 in codecache.valid_vec_isa_list():
self.assertTrue(isa == vec_avx512)
else:
self.assertTrue(isa == vec_avx2)
with config.patch({"cpp.simdlen": None}):
os.environ["ATEN_CPU_CAPABILITY"] = "default"
isa = codecache.pick_vec_isa()
self.assertFalse(isa)
with config.patch({"cpp.simdlen": None}):
os.environ["ATEN_CPU_CAPABILITY"] = "neon"
isa = codecache.pick_vec_isa()
if vec_avx512 in codecache.valid_vec_isa_list():
self.assertTrue(isa == vec_avx512)
else:
self.assertTrue(isa == vec_avx2)
with config.patch({"cpp.simdlen": None}):
os.environ["ATEN_CPU_CAPABILITY"] = "zvector"
isa = codecache.pick_vec_isa()
if vec_avx512 in codecache.valid_vec_isa_list():
self.assertTrue(isa == vec_avx512)
else:
self.assertTrue(isa == vec_avx2)
finally:
if pre_var:
os.environ["ATEN_CPU_CAPABILITY"] = pre_var
elif os.getenv("ATEN_CPU_CAPABILITY"):
os.environ.pop("ATEN_CPU_CAPABILITY")
@requires_vectorization
@patch("torch.cuda.is_available", lambda: False)
def test_masked_fill_softmax(self):
@@ -3485,7 +3371,6 @@ class CPUReproTests(TestCase):
self.common(m, (idx, x))
check_metrics_vec_kernel_count(1)
-@requires_vectorization
def test_embedding_vec_bf16(self):
class M(torch.nn.Module):
def __init__(self):
@@ -3770,7 +3655,7 @@ class CPUReproTests(TestCase):
x = torch.randint(0, 100, (819,), dtype=torch.int64)
metrics.reset()
self.common(fn, (x,))
-check_metrics_vec_kernel_count(1)
+assert metrics.generated_cpp_vec_kernel_count == 1
def test_reduction_float_to_int64(self):
# https://github.com/pytorch/pytorch/issues/124821
@@ -3780,7 +3665,7 @@ class CPUReproTests(TestCase):
x = torch.randint(0, 100, (22, 51), dtype=torch.int64)
metrics.reset()
self.common(fn, (x,))
-check_metrics_vec_kernel_count(1)
+assert metrics.generated_cpp_vec_kernel_count == 1
@config.patch({"cpp.dynamic_threads": True})
def test_reduction_with_dynamic_threads(self):


@@ -8,7 +8,6 @@ import torch
import torch._dynamo
import torch.utils.cpp_extension
from torch._C import FileCheck
-from torch._dynamo.testing import expectedFailureScalar
try:
from extension_backends.cpp.extension_codegen_backend import (
@@ -104,9 +103,6 @@ class ExtensionBackendTests(TestCase):
# return the working directory (see setUp)
os.chdir(self.old_working_dir)
-# Fails when testing the scalar version
-# See https://github.com/pytorch/pytorch/issues/126372.
-@expectedFailureScalar
def test_open_device_registration(self):
torch.utils.rename_privateuse1_backend("extension_device")
torch._register_device_module("extension_device", self.module)


@@ -776,11 +776,13 @@ def forward(self, arg0_1, arg1_1, arg2_1, arg3_1, arg4_1):
metrics.reset()
f(q, k, v)
accessed_bytes = 1 * 8 * 1024 * 64 * torch.float32.itemsize
-logsumexp_bytes = 1 * 8 * 1024 * torch.float32.itemsize
num_accesses = 4  # q, k, v reads, one output.
-self.assertEqual(
-metrics.num_bytes_accessed, accessed_bytes * num_accesses + logsumexp_bytes
-)
+# TODO: Get rid of this fudge factor
+# We need this fudge factor for now, since
+# 1. For some reason we materialize the output of the attention unnecessarily (it's related to the mutation somehow)
+# 2. We also write the extraneous logsumexp
+num_accesses += 2
+self.assertLess(metrics.num_bytes_accessed, accessed_bytes * num_accesses)
@supported_platform
@skip("Triton bug ")  # https://github.com/pytorch/pytorch/issues/124571
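The byte accounting in the hunk above can be checked with plain arithmetic. Shapes are taken from the test code itself (batch 1, 8 heads, sequence 1024, head dim 64, float32 = 4 bytes); the variable names below are just for this sketch:

```python
# Reproduce the old exact accounting: q, k, v reads plus one output
# write over a (1, 8, 1024, 64) float32 tensor, plus one extra
# logsumexp buffer of shape (1, 8, 1024).
ITEMSIZE = 4  # torch.float32.itemsize
accessed_bytes = 1 * 8 * 1024 * 64 * ITEMSIZE
logsumexp_bytes = 1 * 8 * 1024 * ITEMSIZE
num_accesses = 4  # q, k, v reads, one output

exact_total = accessed_bytes * num_accesses + logsumexp_bytes

# The replacement assertion only bounds the traffic from above, with
# two extra whole-tensor accesses as slack for the unnecessarily
# materialized output and the extraneous logsumexp write.
upper_bound = accessed_bytes * (num_accesses + 2)
```

The slack is generous: one full (1, 8, 1024, 64) access is 64 times larger than the logsumexp buffer, which is why `assertLess` passes comfortably.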


@@ -233,6 +233,7 @@ class TestPatternMatcherBase(TestCase):
rtol=1.3e-6,
check_quantization=False,
check_dynamic=None,
+num_include_ops=None,
):
with torch.no_grad():
clone_inputs = self._clone_inputs(inputs)
@@ -245,6 +246,12 @@ class TestPatternMatcherBase(TestCase):
)
for op in include_ops:
self.assertIn(op, source_code)
+if num_include_ops is not None:
+assert len(include_ops) == len(num_include_ops)
+for i in range(len(include_ops)):
+self.assertEqual(
+source_code.count(include_ops[i]), num_include_ops[i]
+)
for op in exclude_ops:
self.assertNotIn(op, source_code)
if check_dynamic is not None:
@@ -400,13 +407,16 @@ class TestPatternMatcher(TestPatternMatcherBase):
class M(torch.nn.Module):
def __init__(self, dtype, unary_fn):
super().__init__()
-self.linear = torch.nn.Linear(10, 64, bias=False)
-self.bias = torch.randn(64).to(dtype=dtype)
+self.linear1 = torch.nn.Linear(10, 64, bias=False)
+self.bias1 = torch.randn(64).to(dtype=dtype)
+self.linear2 = torch.nn.Linear(10, 64, bias=False)
+self.bias2 = torch.randn(64).to(dtype=dtype)
self.unary_fn = unary_fn
def forward(self, x):
-x = self.linear(x) + self.bias
-return self.unary_fn(x)
+a = self.linear1(x) + self.bias1
+b = self.linear2(x) + self.bias2
+return self.unary_fn(a), self.unary_fn(b)
dtypes = []
if torch.ops.mkldnn._is_mkldnn_bf16_supported():
@@ -419,13 +429,14 @@ class TestPatternMatcher(TestPatternMatcherBase):
mod = M(dtype, unary_fn).eval()
v = torch.randn(2, 10)
matcher_count = 3
-# Add 1 for weight packing pass, add 2 for bias folding pass.
+# Add 1 for weight packing pass, add 2 for bias folding pass per linear.
matcher_nodes = unary_list[unary_fn] + 3
if self._check_unary_is_decomposed(unary_fn):
# Has extra dtype conversion nodes for autocast.
matcher_nodes += 2
+# we have 2 linears, so we double the matcher_count/nodes
self._test_common(
-mod, (v,), matcher_count, matcher_nodes, check_autocast=dtype
+mod, (v,), matcher_count * 2, matcher_nodes * 2, check_autocast=dtype
)
self.assertEqual(metrics.generated_kernel_count, 1)
@@ -1808,6 +1819,32 @@ class TestPatternMatcher(TestPatternMatcherBase):
matcher_check_fn=matcher_check_fn,
is_qat=is_qat,
)
if torch._inductor.config.cpp_wrapper:
# For CPP wrapper
self._test_code_common(
mod,
(v,),
[
"op_qlinear_pointwise.call",
"op_qlinear_pointwise_binary.call",
],
[],
check_quantization=True,
num_include_ops=[2, 2],
)
else:
# For python wrapper
self._test_code_common(
mod,
(v,),
[
"torch.ops.onednn.qlinear_pointwise.default",
"torch.ops.onednn.qlinear_pointwise.binary",
],
[],
check_quantization=True,
num_include_ops=[2, 2],
)
@skipIfNoDynamoSupport
@skipIfNoONEDNN


@@ -34,7 +34,6 @@ from torch._dynamo.debug_utils import aot_graph_input_parser
from torch._dynamo.testing import (
CompileCounterWithBackend,
expectedFailureCodegenDynamic,
-expectedFailureScalar,
rand_strided,
same,
skipIfPy312,
@@ -1316,9 +1315,6 @@ class CommonTemplate:
self.common(fn, (torch.randn(1024),))
-# Fails when testing the scalar version
-# See https://github.com/pytorch/pytorch/issues/128029.
-@expectedFailureScalar
@skipIfRocm
@config.patch(debug_index_asserts=False)
def test_neg_index(self):
@@ -1581,40 +1577,16 @@ class CommonTemplate:
def fn(a):
return torch.var(a)
-atol = None
-rtol = None
-if self.device == "cpu" and os.getenv("ATEN_CPU_CAPABILITY") == "default":
-atol = 1e-4
-rtol = 1e-4
-self.common(
-fn,
-((torch.rand((10, 3, 352, 352), dtype=torch.float32),)),
-rtol=rtol,
-atol=atol,
-)
-self.common(
-fn, ((torch.rand((14923), dtype=torch.float32),)), rtol=rtol, atol=atol
-)
+self.common(fn, ((torch.rand((10, 3, 352, 352), dtype=torch.float32),)))
+self.common(fn, ((torch.rand((14923), dtype=torch.float32),)))
@skipCPUIf(IS_MACOS, "fails on macos")
def test_multilayer_var_lowp(self):
def fn(a):
return torch.var(a)
-atol = None
-rtol = None
-if self.device == "cpu" and os.getenv("ATEN_CPU_CAPABILITY") == "default":
-atol = 1e-3
-rtol = 1e-3
-self.common(
-fn,
-(torch.rand((16, 16, 352, 352), dtype=torch.float16),),
-rtol=rtol,
-atol=atol,
-)
-self.common(
-fn, (torch.rand((14923), dtype=torch.float16),), rtol=rtol, atol=atol
-)
+self.common(fn, (torch.rand((16, 16, 352, 352), dtype=torch.float16),))
+self.common(fn, (torch.rand((14923), dtype=torch.float16),))
def test_split_cumsum(self):
def fn(a):
@@ -8227,7 +8199,7 @@ class CommonTemplate:
rand_strided(shape, stride, dtype).requires_grad_(True).add(1)
for shape, stride, dtype in args
]
-self.common(forward, args, atol=1e-05, rtol=1e-05)
+self.common(forward, args)
@requires_gpu()
def test_tmp_not_defined_issue3(self):
@@ -9309,7 +9281,6 @@ class CommonTemplate:
# To support this behavior, we need to allow const-propping tensors that store symint data.
# For now, dynamo will explicitly graph break when it encounters user code with this behavior.
@expectedFailureCodegenDynamic
-@expectedFailureScalar
def test_AllenaiLongformerBase_repro(self):
def fn(query, scores, window_overlap):
batch_size, seq_len, num_heads, _ = query.size()
@@ -9345,9 +9316,6 @@ class CommonTemplate:
opt_fn = torch._dynamo.optimize("inductor")(fn)
_, code = run_and_get_cpp_code(opt_fn, *args)
print(code)
-# When testing the scalar version, i.e., ATEN_CPU_CAPABILITY=default,
-# static_cast<int>(256) is not found, but static_cast<int64_t>(256).
-# See https://github.com/pytorch/pytorch/issues/126262.
FileCheck().check_count(
"static_cast<int32_t>(256)",
1,


@@ -2291,10 +2291,13 @@ class TestCustomOpAPI(TestCase):
class Stack(torch.autograd.Function):
@staticmethod
def forward(ctx, xs):
+ctx.num_xs = len(xs)
return torch.stack(xs)
@staticmethod
def backward(ctx, grad):
+expected = ([True] * ctx.num_xs,)
+self.assertEqual(ctx.needs_input_grad, expected)
return list(grad.unbind(0))
# call two applys, do a backward on the first
@@ -2327,19 +2330,21 @@ class TestCustomOpAPI(TestCase):
class Foo(torch.autograd.Function):
@staticmethod
def forward(ctx, xs):
-if len(xs) > 0:
+if len(xs) > 1:
return Foo.apply(xs[1:])
ctx.len_xs = len(xs)
-return x.sin()
+return xs[0].sin()
@staticmethod
def backward(ctx, grad):
-result = [None] * len_xs
+result = [None] * ctx.len_xs
result[-1] = grad.cos()
return result
-with self.assertRaisesRegex(NotImplementedError, "Recursive call"):
-Foo.apply(xs)
+# should work
+result = Foo.apply(xs)
+expected = xs[-1].sin()
+self.assertEqual(result, expected)
# recursive on backward
@torch._library.autograd.supports_tensorlist


@@ -9,7 +9,6 @@ import torch.utils.flop_counter
from torch.testing._internal.common_cuda import (
PLATFORM_SUPPORTS_FLASH_ATTENTION,
PLATFORM_SUPPORTS_MEM_EFF_ATTENTION,
-PLATFORM_SUPPORTS_CUDNN_ATTENTION
)
from torch.testing._internal.common_utils import (
run_tests,
@@ -301,8 +300,7 @@ class TestFlopCounter(TestCase):
@unittest.skipIf(not HAS_CUDA, "CUDA not available")
@unittest.skipIf(
not PLATFORM_SUPPORTS_FLASH_ATTENTION
-or not PLATFORM_SUPPORTS_MEM_EFF_ATTENTION
-or not PLATFORM_SUPPORTS_CUDNN_ATTENTION,
+or not PLATFORM_SUPPORTS_MEM_EFF_ATTENTION,
"Does not support all SDPA backends (pre-SM80 hardware on CUDA)",
)
def test_sdpa(self):
@@ -357,31 +355,15 @@ class TestFlopCounter(TestCase):
if backend == "math":
backend = torch.backends.cuda.sdp_kernel(
-enable_flash=False,
-enable_math=True,
-enable_mem_efficient=False,
-enable_cudnn=False,
+enable_flash=False, enable_math=True, enable_mem_efficient=False
)
elif backend == "flash":
backend = torch.backends.cuda.sdp_kernel(
-enable_flash=True,
-enable_math=False,
-enable_mem_efficient=False,
-enable_cudnn=False,
+enable_flash=True, enable_math=False, enable_mem_efficient=False
)
elif backend == "mem_efficient":
backend = torch.backends.cuda.sdp_kernel(
-enable_flash=False,
-enable_math=False,
-enable_mem_efficient=True,
-enable_cudnn=False,
-)
-elif backend == "cudnn":
-backend = torch.backends.cuda.sdp_kernel(
-enable_flash=False,
-enable_math=False,
-enable_mem_efficient=False,
-enable_cudnn=True,
+enable_flash=False, enable_math=False, enable_mem_efficient=True
)
mode = FlopCounterMode()
@@ -407,24 +389,22 @@ class TestFlopCounter(TestCase):
flops = [
run_uniform_flops(backend, with_backward=False)
-for backend in ["math", "flash", "mem_efficient", "cudnn"]
+for backend in ["math", "flash", "mem_efficient"]
]
-flops_fw_math, flops_fw_flash, flops_fw_efficient, flops_fw_cudnn = flops
+flops_fw_math, flops_fw_flash, flops_fw_efficient = flops
self.assertEqual(flops_fw_math, flops_fw_flash)
self.assertEqual(flops_fw_math, flops_fw_efficient)
-self.assertEqual(flops_fw_math, flops_fw_cudnn)
self.assertExpectedInline(str(flops_fw_math), """134217728""")
flops = [
run_uniform_flops(backend, with_backward=True)
-for backend in ["math", "flash", "mem_efficient", "cudnn"]
+for backend in ["math", "flash", "mem_efficient"]
]
-flops_fw_bw_math, flops_fw_bw_flash, flops_fw_bw_efficient, flops_fw_bw_cudnn = flops
+flops_fw_bw_math, flops_fw_bw_flash, flops_fw_bw_efficient = flops
self.assertEqual(flops_fw_math * 3, flops_fw_bw_math)
self.assertEqual(flops_fw_math * 7 // 2, flops_fw_bw_flash)
self.assertEqual(flops_fw_bw_flash, flops_fw_bw_efficient)
-self.assertEqual(flops_fw_bw_flash, flops_fw_bw_cudnn)
run_nonuniform_flops = functools.partial(
get_flops,
@@ -468,24 +448,15 @@ class TestFlopCounter(TestCase):
if backend == "math":
backend = torch.backends.cuda.sdp_kernel(
-enable_flash=False,
-enable_math=True,
-enable_mem_efficient=False,
-enable_cudnn=False,
+enable_flash=False, enable_math=True, enable_mem_efficient=False
)
elif backend == "flash":
backend = torch.backends.cuda.sdp_kernel(
-enable_flash=True,
-enable_math=False,
-enable_mem_efficient=False,
-enable_cudnn=False,
+enable_flash=True, enable_math=False, enable_mem_efficient=False
)
elif backend == "mem_efficient":
backend = torch.backends.cuda.sdp_kernel(
-enable_flash=False,
-enable_math=False,
-enable_mem_efficient=True,
-enable_cudnn=False,
+enable_flash=False, enable_math=False, enable_mem_efficient=True
)
with backend, mode:


@@ -287,7 +287,7 @@ class TestOptimRenewed(TestCase):
inpt = torch.randn(5, device=device, dtype=dtype)
# avoid endless recompiles by wrapping LR in a tensor if we're compiling
-lr = torch.tensor(0.01) if torch.compiler.is_compiling() else 0.01
+lr = torch.tensor(0.01) if torch._utils.is_compiling() else 0.01
optimizer = optim_cls([{"params": [weight]}, {"params": [bias], "lr": lr}])
schedulers = [scheduler_c(optimizer) for scheduler_c in schedulers_c]


@@ -43,8 +43,7 @@ from torch.testing._internal.common_cuda import (
IS_JETSON, SM80OrLater, PLATFORM_SUPPORTS_FLASH_ATTENTION,
PLATFORM_SUPPORTS_MEM_EFF_ATTENTION,
PLATFORM_SUPPORTS_FUSED_ATTENTION,
-PLATFORM_SUPPORTS_CUDNN_ATTENTION,
-tf32_on_and_off
+PLATFORM_SUPPORTS_CUDNN_ATTENTION
)
if TEST_FAIRSEQ:
@@ -316,7 +315,6 @@ class TestTransformers(NNTestCase):
with torch.no_grad():
model(src, src_mask=src_mask)
-@tf32_on_and_off(0.001)
@parametrize("use_torchscript", [False])
@parametrize("enable_nested_tensor", [True, False])
@parametrize("use_autocast", [True, False])
@@ -407,9 +405,8 @@ class TestTransformers(NNTestCase):
# no garauntees on output corresponding to masked tokens, so they may vary between slow/fast path. set all to 0.
fastpath_output_expanded = fastpath_output_expanded.masked_fill(src_key_padding_mask.unsqueeze(-1), 0)
slowpath_output = slowpath_output.masked_fill(src_key_padding_mask.unsqueeze(-1), 0)
-self.assertEqual(fastpath_output_expanded, slowpath_output)
-@tf32_on_and_off(0.001)
+torch.testing.assert_close(fastpath_output_expanded, slowpath_output, rtol=1e-7, atol=1e-5)
@parametrize("with_no_grad", [True, False])
@parametrize("training", [True, False])
@parametrize("enable_nested_tensor", [False])
@@ -453,7 +450,7 @@ class TestTransformers(NNTestCase):
             [2.419836044311523, 0.017548924311996, -0.608187675476074, -0.085347734391689]]]
         ).to(device)
         self.assertEqual(tuple(result.shape), tuple(ref_output.shape))
-        self.assertEqual(result, ref_output)
+        torch.testing.assert_close(result, ref_output, rtol=1e-7, atol=1e-5)
     @parametrize("batch_first", [True, False])
     @parametrize("training", [True, False])
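The hunks above swap `self.assertEqual` for `torch.testing.assert_close` with explicit tolerances. `assert_close` accepts two elements as equal when `|actual - expected| <= atol + rtol * |expected|`; a minimal scalar version of that rule, using the tolerances from the diff:

```python
def close(actual: float, expected: float, rtol: float, atol: float) -> bool:
    # Same acceptance rule torch.testing.assert_close applies elementwise.
    return abs(actual - expected) <= atol + rtol * abs(expected)

# rtol=1e-7, atol=1e-5 as in the diff above:
assert close(1.000001, 1.0, rtol=1e-7, atol=1e-5)   # error 1e-6, inside atol
assert not close(1.001, 1.0, rtol=1e-7, atol=1e-5)  # error 1e-3, outside both
```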
@@ -1400,7 +1397,7 @@ class TestSDPAFailureModes(NNTestCase):
         q = torch.randn(size, device=device, dtype=dtype)
         k = torch.randn(size, device=device, dtype=dtype)
         v = torch.randn(size, device=device, dtype=dtype)
-        with self.assertWarnsRegex(UserWarning, "All fused kernels requires query, key and value to be 4 dimensional"):
+        with self.assertWarnsRegex(UserWarning, "Both fused kernels requires query, key and value to be 4 dimensional"):
             self.assertRaises(RuntimeError, lambda: torch.nn.functional.scaled_dot_product_attention(
                 q, k, v, None, 0.0, False))
@@ -1432,7 +1429,7 @@ class TestSDPAFailureModes(NNTestCase):
         make_tensor = partial(torch.rand, device=device, dtype=dtype)
         size = SdpaShape(2, 2, 0, 8)
         q, k, v = make_tensor(size), make_tensor(size), make_tensor(size)
-        with self.assertWarnsRegex(UserWarning, "All fused kernels do not support zero seq_len_q or seq_len_kv."):
+        with self.assertWarnsRegex(UserWarning, "Both fused kernels do not support zero seq_len_q or seq_len_kv."):
             self.assertRaises(RuntimeError, lambda: torch.nn.functional.scaled_dot_product_attention(
                 q, k, v, None, 0.0, False))
@@ -1447,7 +1444,7 @@ class TestSDPAFailureModes(NNTestCase):
         size = SdpaShape(2, 2, 8, 8)
         q, k, v = make_tensor(size), make_tensor(size), make_tensor(size)
         q.as_strided_(size, [2, 2, 2, 2])
-        with self.assertWarnsRegex(UserWarning, "All fused kernels require the last dimension of the input to have stride 1."):
+        with self.assertWarnsRegex(UserWarning, "Both fused kernels require the last dimension of the input to have stride 1."):
             self.assertRaises(RuntimeError, lambda: torch.nn.functional.scaled_dot_product_attention(
                 q, k, v, None, 0.0, False))
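The failure-mode tests above assert two things at once: the SDPA dispatcher warns about why each fused kernel was rejected, then raises because no kernel remains. The same warn-then-raise pattern with only the stdlib, where the toy `validate()` stands in for the real kernel checks (hypothetical helper, not PyTorch code):

```python
import warnings

def validate(ndim: int) -> None:
    # Mirror the diff's pattern: emit a UserWarning explaining the
    # rejection, then raise because no fused kernel can run.
    if ndim != 4:
        warnings.warn(
            "Both fused kernels requires query, key and value to be 4 dimensional",
            UserWarning,
        )
        raise RuntimeError("No available kernel. Aborting execution.")

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    raised = False
    try:
        validate(ndim=3)
    except RuntimeError:
        raised = True

assert raised
assert any("4 dimensional" in str(w.message) for w in caught)
```

`assertWarnsRegex` plus `assertRaises` in the tests is doing exactly this, with a regex match on the warning text.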
@@ -2356,7 +2353,7 @@ class TestSDPACudaOnly(NNTestCase):
         math_ref_lp_test = math_ref_lp_test.to(dtype=torch.float32).contiguous()
         self.assertEqual(math_ref_test, math_ref_lp_test, atol=7e-3, rtol=7e-3)
-        self.assertEqual(actual_test, math_ref_test, atol=7e-3, rtol=7e-3)
+        self.assertEqual(actual_test, math_ref_test, atol=5e-3, rtol=5e-3)
     @unittest.skipIf(not PLATFORM_SUPPORTS_MEM_EFF_ATTENTION, "Efficient Attention was not built for this system")
     @parametrize("contiguous_inputs", [True, False])
@@ -2474,12 +2471,7 @@ class TestSDPACudaOnly(NNTestCase):
         value = value.view(batch_size, -1, num_heads, head_dim).transpose(1, 2)
         key = key.view(batch_size, -1, num_heads, head_dim).transpose(1, 2)
-        major, minor = torch.cuda.get_device_capability(device)
-        is_sm90_or_newer = major >= 9
-        if type != "nested" and PLATFORM_SUPPORTS_CUDNN_ATTENTION and is_sm90_or_newer:
-            assert torch._fused_sdp_choice(query, key, value) == SDPBackend.CUDNN_ATTENTION.value
-        elif PLATFORM_SUPPORTS_FLASH_ATTENTION:
+        if PLATFORM_SUPPORTS_FLASH_ATTENTION:
             assert torch._fused_sdp_choice(query, key, value) == SDPBackend.FLASH_ATTENTION.value
         else:
             assert torch._fused_sdp_choice(query, key, value) == SDPBackend.EFFICIENT_ATTENTION.value
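The hunk above pins down the priority order `torch._fused_sdp_choice` is expected to follow once the cuDNN branch is dropped: FlashAttention if the platform supports it, otherwise memory-efficient attention. A plain-Python sketch of that capability-based dispatch (names and ordering are illustrative of the test's expectation, not PyTorch's actual implementation):

```python
def pick_sdp_backend(supports_flash: bool, supports_mem_eff: bool) -> str:
    # Prefer FlashAttention when built for this platform, then fall back to
    # memory-efficient attention, then the math reference path.
    if supports_flash:
        return "FLASH_ATTENTION"
    if supports_mem_eff:
        return "EFFICIENT_ATTENTION"
    return "MATH"

assert pick_sdp_backend(True, True) == "FLASH_ATTENTION"
assert pick_sdp_backend(False, True) == "EFFICIENT_ATTENTION"
assert pick_sdp_backend(False, False) == "MATH"
```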
@@ -2519,8 +2511,7 @@ class TestSDPACudaOnly(NNTestCase):
         make_tensor = partial(rand_sdpa_tensor, type="dense", device=device, dtype=torch.float16, packed=False, requires_grad=True)
         query, key, value = make_tensor(shape), make_tensor(shape), make_tensor(shape)
-        kernel_name = "Memory Efficient attention" if fused_kernel == SDPBackend.EFFICIENT_ATTENTION else \
-            "Flash Attention" if fused_kernel == SDPBackend.FLASH_ATTENTION else "cuDNN Attention"
+        kernel_name = "Memory Efficient attention" if fused_kernel == SDPBackend.EFFICIENT_ATTENTION else "Flash Attention"
         warning_context = (
             self.assertWarnsRegex(
                 UserWarning,
@@ -2532,12 +2523,7 @@ class TestSDPACudaOnly(NNTestCase):
         with use_deterministic_algorithims(True, warn_only=warn_only):
             with sdpa_kernel(backends=[fused_kernel]):
                 with warning_context:
-                    if warn_only or fused_kernel != SDPBackend.CUDNN_ATTENTION:
-                        torch.nn.functional.scaled_dot_product_attention(query, key, value).sum().backward()
-                    else:
-                        # cuDNN attention has no deterministic fallback
-                        self.assertRaises(RuntimeError, lambda:
-                            torch.nn.functional.scaled_dot_product_attention(query, key, value).sum().backward())
+                    torch.nn.functional.scaled_dot_product_attention(query, key, value).sum().backward()
     @unittest.skip("This test is not behaving deterministaclly non-deterministaclly on CI/CD")
     @unittest.skipIf(not PLATFORM_SUPPORTS_FLASH_ATTENTION, "Platform does not support fused SDPA")
@@ -2677,7 +2663,7 @@ class TestSDPACudaOnly(NNTestCase):
         output_ref_atol, output_ref_rtol = get_tolerances(out_ref, out_lp_ref)
         # Fudge Factor when dropout is enabled
-        dropout_fudge_factor = 1.5 if dropout_p == 0.0 else 2.0
+        dropout_fudge_factor = 1.0 if dropout_p == 0.0 else 2.0
         query_fudge_factor = dropout_fudge_factor
         grad_q_ref_atol, grad_q_ref_rtol = get_tolerances(query_ref.grad, query_ref_lp.grad, query_fudge_factor)
@@ -2800,8 +2786,8 @@ class TestSDPACudaOnly(NNTestCase):
         # Fudge Factor when dropout is enabled
         dropout_fudge_factor = 1.0 if dropout_p == 0.0 else 1.75
         mask_fudge_factor = 1.0 if attn_mask is None else 1.5
-        query_fudge_factor = 2.0
+        query_fudge_factor = dropout_fudge_factor
         grad_q_ref_atol, grad_q_ref_rtol = get_tolerances(query_ref.grad, query_ref_lp.grad, query_fudge_factor)
         # TODO: Investigate why grad_k needs larger tolerances
@@ -3003,8 +2989,7 @@ class TestSDPACudaOnly(NNTestCase):
                              device=device, dtype=dtype, requires_grad=True)
         fused_op = (torch.ops.aten._scaled_dot_product_efficient_attention
-                    if fused_kernel == SDPBackend.EFFICIENT_ATTENTION else torch.ops.aten._scaled_dot_product_flash_attention
-                    if fused_kernel == SDPBackend.FLASH_ATTENTION else torch.ops.aten._scaled_dot_product_cudnn_attention)
+                    if fused_kernel == SDPBackend.EFFICIENT_ATTENTION else torch.ops.aten._scaled_dot_product_flash_attention)
         # Run the math kernel on low precision references
         query_ref_lp, key_ref_lp, value_ref_lp = query_key_value_clones(query, key, value, dtype=dtype)
@@ -3022,10 +3007,6 @@ class TestSDPACudaOnly(NNTestCase):
             kwargs["attn_bias"] = None
         if fused_kernel == SDPBackend.FLASH_ATTENTION:
             kwargs['return_debug_mask'] = dropout_p > 0.0
-        if fused_kernel == SDPBackend.CUDNN_ATTENTION:
-            kwargs["compute_log_sumexp"] = True
-            if "return_debug_mask" in kwargs:
-                kwargs.pop("return_debug_mask")
         with torch.cuda.stream(s):
             # Create real output
             output_tuple = fused_op(query, key, value, **kwargs)
@@ -3063,8 +3044,7 @@ class TestSDPACudaOnly(NNTestCase):
             # Low Precision Math Reference
             out_lp_ref = F.scaled_dot_product_attention(query_ref_lp, key_ref_lp, value_ref_lp,
                                                         dropout_p=dropout_p, is_causal=is_causal, scale=scale)
-        # cuDNN attention doesn't support returning dropout mask
-        elif fused_kernel != SDPBackend.CUDNN_ATTENTION:
+        else:
             # Create the dropout_mask
             dropout_mask = get_dropout_mask(output_tuple, fused_kernel, batch_size,
                                             n_heads, seq_len_q, seq_len_k, dropout_p, device)
@@ -3082,38 +3062,37 @@ class TestSDPACudaOnly(NNTestCase):
         with torch.cuda.graph(g1):
             out.backward(upstream_grad)
         g1.replay()
-        if fused_kernel != SDPBackend.CUDNN_ATTENTION or dropout_p == 0.0:
-            out_ref.backward(upstream_grad.to(out_ref.dtype))
-            out_lp_ref.backward(upstream_grad.to(out_lp_ref.dtype))
+        out_ref.backward(upstream_grad.to(out_ref.dtype))
+        out_lp_ref.backward(upstream_grad.to(out_lp_ref.dtype))
         # [Note] Fused Tolerances
         # Establish the numerical error between the "true" high precision math output
         # and the low precision math reference. We use this reference for the atol
         # And we use the default rtol for the low precision type.
         # We then provide a fudge factor for gradients respectively to account
         # for the use of the fused kernel rather than the eager implemntation.
         output_ref_atol, output_ref_rtol = get_tolerances(out_ref, out_lp_ref)
         # Fudge Factor when dropout is enabled
         dropout_fudge_factor = 1.0 if dropout_p == 0.0 else 1.5
         query_fudge_factor = dropout_fudge_factor
         grad_q_ref_atol, grad_q_ref_rtol = get_tolerances(query_ref.grad, query_ref_lp.grad, query_fudge_factor)
         # TODO: Investigate why grad_k needs larger tolerances
         key_fudge_factor = 8 * dropout_fudge_factor
         grad_k_ref_atol, grad_k_ref_rtol = get_tolerances(key_ref.grad, key_ref_lp.grad, key_fudge_factor)
         value_fudge_factor = 7 if not SM80OrLater and dtype == torch.float16 else 1.0
         grad_v_ref_atol, grad_v_ref_rtol = get_tolerances(value_ref.grad, value_ref_lp.grad, value_fudge_factor)
         self.assertEqual(out, out_ref.to(out.dtype), atol=output_ref_atol, rtol=output_ref_rtol)
         self.assertEqual(query.grad, query_ref.grad.to(query.grad.dtype),
                          atol=grad_q_ref_atol, rtol=grad_q_ref_rtol)
         self.assertEqual(key.grad, key_ref.grad.to(key.grad.dtype),
                          atol=grad_k_ref_atol, rtol=grad_k_ref_rtol)
         self.assertEqual(value.grad, value_ref.grad.to(value.grad.dtype),
                          atol=grad_v_ref_atol, rtol=grad_v_ref_rtol)
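The "[Note] Fused Tolerances" comments above derive per-tensor tolerances from the observed gap between the high- and low-precision math references, inflated by a fudge factor. A hypothetical scalar version of such a `get_tolerances` helper (the real one operates on tensors, and the default rtol here is only illustrative):

```python
def get_tolerances(ref: float, lp_ref: float, fudge_factor: float = 1.0,
                   default_rtol: float = 1.6e-2):
    # atol: observed high- vs low-precision error, inflated by the fudge
    # factor to allow for fused-kernel vs eager numerical differences.
    atol = fudge_factor * abs(ref - lp_ref)
    return atol, default_rtol

# A dropout-enabled case doubles the allowance, as in the diff:
atol, rtol = get_tolerances(1.0, 0.998, fudge_factor=2.0)
```

The key property is that tolerances scale with how lossy the low-precision reference already is, rather than being fixed constants.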
     @skipIfRocm  # Nested Tensor
     @unittest.skipIf(not PLATFORM_SUPPORTS_FUSED_ATTENTION, "Fused SDPA was not built for this system")
@@ -3237,7 +3216,7 @@ class TestSDPACudaOnly(NNTestCase):
             query_expanded.contiguous(), key_expanded.contiguous(), value_expanded.contiguous(),
             attn_mask=None, dropout_p=0.0, is_causal=False)
-        self.assertEqual(actual.contiguous(), math_ref.contiguous().to(dtype), atol=1.5e-3, rtol=1e-2)
+        self.assertEqual(actual.contiguous(), math_ref.contiguous().to(dtype), atol=1e-3, rtol=1e-2)
     @skipIfRocm  # Nested tensor
     @unittest.skipIf(not PLATFORM_SUPPORTS_MEM_EFF_ATTENTION, "Fused SDPA was not built for this system")
@@ -3400,7 +3379,6 @@ class TestAttnBias(NNTestCase):
         forw_tolerances: Optional[Tolerances] = None,
         grad_tolerances: Optional[Tolerances] = None,
         backend=None,
-        causal_variant=None,
     ):
         if backend is not None:
             torch._dynamo.reset()
@@ -3468,11 +3446,9 @@ class TestAttnBias(NNTestCase):
         if causal_variant == CausalVariant.UPPER_LEFT:
             attn_bias = causal_upper_left(seq_len_q, seq_len_kv)
         else:
-            print(seq_len_q, seq_len_kv)
             attn_bias = causal_lower_right(seq_len_q, seq_len_kv)
-        with sdpa_kernel(backends=[SDPBackend.EFFICIENT_ATTENTION, SDPBackend.FLASH_ATTENTION, SDPBackend.MATH]):
-            self.run_test(device, make_q_tensor, make_kv_tensor, attn_bias, forw_tol, grad_tol, backend=None)
+        self.run_test(device, make_q_tensor, make_kv_tensor, attn_bias, forw_tol, grad_tol, backend=None)
     @skipIfRocm  # CausalVariant
     @parametrize("causal_variant", [CausalVariant.UPPER_LEFT, CausalVariant.LOWER_RIGHT])
@@ -3503,8 +3479,7 @@ class TestAttnBias(NNTestCase):
         else:
             attn_bias = causal_lower_right(seq_len_q, seq_len_kv)
-        with sdpa_kernel(backends=[SDPBackend.EFFICIENT_ATTENTION, SDPBackend.FLASH_ATTENTION, SDPBackend.MATH]):
-            self.run_test(device, make_q_tensor, make_kv_tensor, attn_bias, forw_tol, grad_tol, backend=cnts)
+        self.run_test(device, make_q_tensor, make_kv_tensor, attn_bias, forw_tol, grad_tol, backend=cnts)
         self.assertEqual(cnts.frame_count, 1, "Compiled graph should have 1 frame!")
     @parametrize("shape", [(16, 16, 128, 128, 16), (16, 16, 128, 256, 32), (16, 16, 256, 128, 32), (1, 1, 23, 56, 15)])
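The `TestAttnBias` hunks above exercise `CausalVariant.UPPER_LEFT` versus `CausalVariant.LOWER_RIGHT`. When `seq_len_q != seq_len_kv` the two variants align the causal diagonal differently: upper-left pins it to position (0, 0), lower-right pins it to the bottom-right corner. A small sketch of the resulting boolean masks (`True` = may attend), independent of PyTorch:

```python
def causal_mask(seq_len_q: int, seq_len_kv: int, lower_right: bool):
    # LOWER_RIGHT shifts each row's cutoff by (kv_len - q_len) so the
    # diagonal ends at the bottom-right corner of the rectangle.
    offset = (seq_len_kv - seq_len_q) if lower_right else 0
    return [[j <= i + offset for j in range(seq_len_kv)]
            for i in range(seq_len_q)]

# 2 queries attending over 4 keys:
assert causal_mask(2, 4, lower_right=False) == [
    [True, False, False, False],
    [True, True, False, False],
]
assert causal_mask(2, 4, lower_right=True) == [
    [True, True, True, False],
    [True, True, True, True],
]
```

For square shapes the two variants coincide, which is why these tests only diverge on the rectangular `seq_len_q != seq_len_kv` cases.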

Some files were not shown because too many files have changed in this diff.